📚 Welcome to Software Testing
Software testing is the process of evaluating and verifying that a software application works as expected. It finds bugs before users do — protecting businesses from financial loss, reputational damage and legal liability. A single bug in banking software can cause crores in losses; a bug in medical software can cost lives.
Testing roles are among the most stable in IT — every software project needs testers. With automation skills (Selenium, TestNG, API testing), testers earn ₹6–25 LPA in India. The ISTQB certification is the global standard — recognized in 100+ countries.
▶ SDLC & STLC
Beginner
SDLC — Software Development Life Cycle
| # | Phase | Activities | Output |
|---|---|---|---|
| 1 | Requirement Analysis | Gather and understand business requirements from stakeholders | BRS (Business Requirement Specification), SRS |
| 2 | System Design | Architects design high-level and low-level system design | HLD (High Level Design), LLD (Low Level Design) |
| 3 | Implementation (Coding) | Developers write code based on design documents | Source code, unit tests |
| 4 | Testing | QA team tests the software against requirements | Bug reports, test reports |
| 5 | Deployment | Tested software deployed to production environment | Live application |
| 6 | Maintenance | Monitor, fix bugs, add features after go-live | Patches, updates, new versions |
STLC — Software Testing Life Cycle
STLC is the testing-specific life cycle that runs in parallel with the SDLC: Requirement Analysis → Test Planning → Test Case Design → Test Environment Setup → Test Execution → Test Closure. Each phase has its own entry and exit criteria and deliverables (test plan, test cases, defect reports, closure report).
V-Model — Testing at Every Stage
| Development Phase | Corresponding Test Phase | Question Answered |
|---|---|---|
| Requirement Analysis | Acceptance Testing (UAT) | Does the system meet business needs? |
| System Design | System Testing | Does the integrated system work correctly? |
| High Level Design | Integration Testing | Do modules work together correctly? |
| Low Level Design | Component / Unit Testing | Does each unit/module work correctly? |
| Coding | Unit Testing | Does the code logic work as written? |
💡 Entry vs Exit Criteria: Entry Criteria = conditions that MUST be true before testing begins (e.g., build ready, test environment up, test cases reviewed). Exit Criteria = conditions that signal testing is complete (e.g., 95% tests executed, 0 P1 bugs open, defect density below threshold). Always define both in your test plan.
📊 Types of Testing
Beginner → Intermediate
Functional vs Non-Functional Testing
| Category | Type | What It Tests | Example |
|---|---|---|---|
| Functional | Unit Testing | Individual functions/methods in isolation | Test that calculateSalary() returns correct value |
| Functional | Integration Testing | Interaction between 2+ modules | Test that Login + Dashboard modules work together |
| Functional | System Testing | Complete end-to-end system | Test entire enrollment flow from registration to certificate |
| Functional | UAT (User Acceptance) | Software meets business requirements | Business users validate the system before go-live |
| Functional | Regression Testing | New changes did not break existing features | Re-run test suite after every code change |
| Functional | Smoke Testing | Basic critical functions work | Login, homepage load, main navigation work after new build |
| Functional | Sanity Testing | Specific bug fix or new feature works correctly | Test only the fixed module after a hotfix |
| Non-Functional | Performance Testing | Speed, stability, scalability under load | Homepage loads in under 2 seconds for 1000 users |
| Non-Functional | Security Testing | Vulnerabilities, unauthorized access | SQL injection, XSS, authentication bypass attempts |
| Non-Functional | Usability Testing | Ease of use and user experience | Can a new user complete enrollment in under 5 minutes? |
| Non-Functional | Compatibility Testing | Works across browsers, devices, OS | Test on Chrome, Firefox, Safari, iOS, Android |
Black Box vs White Box vs Grey Box
- **Black Box** — Test without knowing the internal code structure; the tester only knows inputs and expected outputs. Techniques: equivalence partitioning, boundary value analysis, decision tables. Used by manual QA testers for functional testing.
- **White Box** — Test with full knowledge of the internal code; testers check code paths, logic and structure. Techniques: statement coverage, branch coverage, path coverage. Used by developers for unit testing.
- **Grey Box** — Partial knowledge of internals; the tester knows data structures and the DB schema but not the full source code. Examples: database testing, API testing where you know the data model, integration testing.
Severity vs Priority — Most Asked Interview Question
| | Severity | Priority |
|---|---|---|
| Definition | Technical measure of how much the defect impacts functionality | Business measure of how urgently the fix is needed |
| Set by | QA / Tester | Product Owner / Manager |
| Examples | High: app crashes. Low: minor typo in footer | High: typo in company name on homepage. Low: crash in feature used by 2 users/year |
| Key combo | 🚫 High Severity + Low Priority | App crash in a rarely-used feature |
| Key combo 2 | ⚠ Low Severity + High Priority | Spelling of CEO name wrong on homepage — no functional impact but very visible |
📋 Test Case Design Techniques
Intermediate
Test case design techniques help create the minimum number of test cases that give maximum test coverage. These are the backbone of effective testing — and core ISTQB syllabus topics.
1. Equivalence Partitioning (EP)
Divide input data into partitions (classes) where all values in a partition should behave the same way. Test ONE value from each partition — valid and invalid.
Example: Age field accepts 18–60 for a loan application.
| Partition | Range | Test Value | Expected |
|---|---|---|---|
| Invalid (too low) | < 18 | 15 | Error message |
| Valid | 18 – 60 | 35 | Accepted |
| Invalid (too high) | > 60 | 65 | Error message |
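The table above can be sketched as a small validator plus one representative value per partition — `AgeValidator` and `isValidAge` are hypothetical names, assuming the 18–60 loan rule from the example:

```java
public class AgeValidator {
    // Valid partition for the loan-application age field: 18–60 inclusive
    public static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // One representative value from each equivalence partition is enough
        System.out.println(isValidAge(15)); // invalid — too low  -> false
        System.out.println(isValidAge(35)); // valid              -> true
        System.out.println(isValidAge(65)); // invalid — too high -> false
    }
}
```

Testing more values inside the same partition (e.g. 30, 35, 40) adds no coverage, because every value in a partition exercises the same branch.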
2. Boundary Value Analysis (BVA)
Bugs cluster at boundaries. Test the exact boundary values and values just inside/outside. For range 18–60: test 17, 18, 19 and 59, 60, 61.
💡 BVA Rule: For each boundary, test: value just below, the boundary itself, and value just above. For range min=18, max=60 → test: 17 (invalid), 18 (valid), 19 (valid), 59 (valid), 60 (valid), 61 (invalid) = 6 test cases covering both boundaries.
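The six-value BVA rule above can be checked mechanically — a sketch assuming the same hypothetical 18–60 rule as the EP example:

```java
public class BoundaryCheck {
    // Hypothetical rule under test: valid age range is 18–60 inclusive
    public static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // The six BVA values for min=18, max=60: below/on/above each boundary
        int[] inputs       = {17, 18, 19, 59, 60, 61};
        boolean[] expected = {false, true, true, true, true, false};
        for (int i = 0; i < inputs.length; i++) {
            boolean actual = isValidAge(inputs[i]);
            System.out.println(inputs[i] + " -> " + actual
                    + (actual == expected[i] ? "" : "  <-- BUG"));
        }
    }
}
```

A classic off-by-one defect (e.g. `age > 18` instead of `age >= 18`) passes every EP test but fails immediately at the boundary value 18.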
3. Decision Table Testing
For features with multiple conditions and combinations. List all conditions and their combinations systematically — ensures no combination is missed.
| Test Case | Username Valid? | Password Valid? | Account Active? | Expected Result |
|---|---|---|---|---|
| TC1 | Yes | Yes | Yes | Login Success ✅ |
| TC2 | Yes | Yes | No | Account Locked Message |
| TC3 | Yes | No | Yes | Invalid Password Error |
| TC4 | No | Yes | Yes | Invalid Username Error |
| TC5 | Yes | No | No | Account Locked Message |
| TC6 | No | No | Yes | Invalid Username Error |
| TC7 | No | Yes | No | Account Locked Message |
| TC8 | No | No | No | Account Locked Message |
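The eight rows above can be encoded directly in code. A sketch, assuming the precedence the table implies (account status checked first, then username, then password — consistent with TC2, TC5, TC7 and TC8):

```java
public class LoginDecision {
    // Encodes the decision table: inactive account wins, then invalid
    // username, then invalid password, else success
    public static String login(boolean usernameValid, boolean passwordValid,
                               boolean accountActive) {
        if (!accountActive) return "Account Locked Message";
        if (!usernameValid) return "Invalid Username Error";
        if (!passwordValid) return "Invalid Password Error";
        return "Login Success";
    }

    public static void main(String[] args) {
        // Exercise all 8 combinations (TC1–TC8) systematically
        boolean[] vals = {true, false};
        for (boolean u : vals)
            for (boolean p : vals)
                for (boolean a : vals)
                    System.out.printf("user=%b pass=%b active=%b -> %s%n",
                            u, p, a, login(u, p, a));
    }
}
```

With 3 boolean conditions there are exactly 2³ = 8 combinations — the decision table guarantees none is missed.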
4. State Transition Testing
Test how a system moves between states based on events. Ideal for login systems, order processing, workflow applications.
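A minimal sketch of a login state machine — the states, the three-strikes lock rule, and the class name are all hypothetical, chosen only to illustrate the technique:

```java
public class LoginStateMachine {
    public enum State { LOGGED_OUT, LOGGED_IN, LOCKED }

    private State state = State.LOGGED_OUT;
    private int failedAttempts = 0;

    // Event: a login attempt; three consecutive failures lock the account
    public State attemptLogin(boolean credentialsOk) {
        if (state == State.LOCKED) return state; // no transition out of LOCKED
        if (credentialsOk) {
            state = State.LOGGED_IN;
            failedAttempts = 0;
        } else if (++failedAttempts >= 3) {
            state = State.LOCKED;
        }
        return state;
    }

    // Event: logout — only valid from LOGGED_IN
    public State logout() {
        if (state == State.LOGGED_IN) state = State.LOGGED_OUT;
        return state;
    }

    public static void main(String[] args) {
        LoginStateMachine m = new LoginStateMachine();
        System.out.println(m.attemptLogin(false)); // LOGGED_OUT (1 failure)
        System.out.println(m.attemptLogin(false)); // LOGGED_OUT (2 failures)
        System.out.println(m.attemptLogin(false)); // LOCKED (3rd failure)
        System.out.println(m.attemptLogin(true));  // LOCKED — stays locked
    }
}
```

State transition test cases cover both valid transitions (LOGGED_OUT → LOGGED_IN on success) and invalid ones (a locked account must not log in even with correct credentials).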
🔍 Manual Testing Deep Dive
Intermediate
Bug Life Cycle
Typical flow: New → Assigned → Open (developer working) → Fixed → Pending Retest → Retested → Verified → Closed. Alternate paths: Rejected (not a bug), Duplicate (already reported), Deferred (fix postponed to a later release), Reopened (fix failed retest — cycle repeats).
Writing Effective Test Cases
| Field | Description | Example |
|---|---|---|
| Test Case ID | Unique identifier | TC_LOGIN_001 |
| Test Case Title | Brief description of what is being tested | Verify successful login with valid credentials |
| Pre-conditions | What must be true before test execution | User must be registered. Browser must be open. Internet connected. |
| Test Steps | Step-by-step instructions (numbered) | 1. Navigate to cuesysinfotech.com/login 2. Enter valid email 3. Enter valid password 4. Click Login button |
| Test Data | Specific data to use during the test | Email: ravi@test.com, Password: Test@1234 |
| Expected Result | What SHOULD happen (before executing) | User is redirected to dashboard. Welcome message displayed. Login time recorded. |
| Actual Result | What ACTUALLY happened (filled after execution) | User successfully logged in. Redirected to dashboard. |
| Status | Pass/Fail/Blocked/Skipped | Pass |
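In an automation framework the same template can be captured as a data structure — a hypothetical sketch (`TestCaseTemplate`, `TestCase` and the field names mirror the table above, nothing more):

```java
public class TestCaseTemplate {
    // Status values from the template above
    public enum Status { PASS, FAIL, BLOCKED, SKIPPED }

    // One record per manual test case, mirroring the template fields
    public record TestCase(String id, String title, String preconditions,
                           String[] steps, String testData,
                           String expectedResult, String actualResult,
                           Status status) {}

    public static void main(String[] args) {
        TestCase tc = new TestCase(
                "TC_LOGIN_001",
                "Verify successful login with valid credentials",
                "User registered; browser open; internet connected",
                new String[] {"Navigate to login page", "Enter valid email",
                              "Enter valid password", "Click Login"},
                "Email: ravi@test.com, Password: Test@1234",
                "User is redirected to dashboard",
                "User successfully logged in",
                Status.PASS);
        System.out.println(tc.id() + " -> " + tc.status());
    }
}
```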
Effective Bug Report — Must Include
- Clear, specific title (module + problem, e.g. "Login — 500 error on valid credentials")
- Exact, numbered steps to reproduce
- Expected result vs actual result
- Environment: browser, OS, device, build/version number
- Severity and priority
- Evidence: screenshots, console/server logs, or a screen recording
💻 Selenium WebDriver
Intermediate → Advanced
Selenium is the industry standard for browser automation testing. WebDriver drives real browsers (Chrome, Firefox, Safari) programmatically — simulating exactly what a user does. It is used at Google, Microsoft and every major enterprise for regression automation.
Locators — Finding Web Elements
import org.openqa.selenium.*;
import org.openqa.selenium.chrome.ChromeDriver;
public class LoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://www.cuesysinfotech.com");

        // 1. By ID — fastest, most reliable
        driver.findElement(By.id("username"));

        // 2. By Name
        driver.findElement(By.name("email"));

        // 3. By CSS Selector — fast, flexible
        driver.findElement(By.cssSelector("#login-btn"));
        driver.findElement(By.cssSelector(".btn-primary"));
        driver.findElement(By.cssSelector("input[type='email']"));

        // 4. By XPath — powerful, use when ID/CSS not available
        driver.findElement(By.xpath("//button[@type='submit']"));
        driver.findElement(By.xpath("//h1[contains(text(),'Welcome')]"));

        // 5. By LinkText
        driver.findElement(By.linkText("Sign In"));
        driver.findElement(By.partialLinkText("Sign"));

        driver.quit();
    }
}
Waits — Handling Dynamic Elements
// Requires: java.time.Duration and org.openqa.selenium.support.ui.*
// (WebDriverWait, ExpectedConditions, FluentWait)

// Implicit Wait — applied globally, waits up to N seconds for any element
driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(10));

// Explicit Wait — wait for a SPECIFIC condition on ONE element (preferred)
WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(15));

// Wait until element is visible
WebElement btn = wait.until(
        ExpectedConditions.visibilityOfElementLocated(By.id("submit-btn"))
);

// Wait until element is clickable
wait.until(ExpectedConditions.elementToBeClickable(By.id("login"))).click();

// Wait for text to appear
wait.until(ExpectedConditions.textToBePresentInElement(
        driver.findElement(By.id("msg")), "Login successful"
));

// Fluent Wait — poll every 500ms, ignore specific exceptions
Wait<WebDriver> fluent = new FluentWait<>(driver)
        .withTimeout(Duration.ofSeconds(30))
        .pollingEvery(Duration.ofMillis(500))
        .ignoring(NoSuchElementException.class);
Page Object Model (POM) — Industry Best Practice
// LoginPage.java — Page Class
// Requires: org.openqa.selenium.support.FindBy and PageFactory
public class LoginPage {
    private WebDriver driver;

    // Element locators as fields
    @FindBy(id = "email")
    private WebElement emailField;

    @FindBy(id = "password")
    private WebElement passwordField;

    @FindBy(id = "login-btn")
    private WebElement loginButton;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
        PageFactory.initElements(driver, this);
    }

    // Page actions as methods
    public void enterEmail(String email) { emailField.sendKeys(email); }
    public void enterPassword(String pass) { passwordField.sendKeys(pass); }
    public void clickLogin() { loginButton.click(); }

    public DashboardPage login(String email, String pass) {
        enterEmail(email);
        enterPassword(pass);
        clickLogin();
        return new DashboardPage(driver);
    }
}
// LoginTest.java — Test Class (clean, readable)
public class LoginTest {
    WebDriver driver; // initialised in a @BeforeMethod setup (not shown)

    @Test
    public void testValidLogin() {
        LoginPage login = new LoginPage(driver);
        DashboardPage dashboard = login.login("user@test.com", "Pass@123");
        Assert.assertTrue(dashboard.isWelcomeDisplayed());
    }
}
⚡ TestNG Framework
Intermediate
TestNG Annotations — Execution Order
| Annotation | Runs | Use For | Example |
|---|---|---|---|
| @BeforeSuite | Once before entire suite | Suite-wide setup — DB connection, browser init | Start WebDriver server |
| @AfterSuite | Once after entire suite | Suite-wide teardown — close connections | Generate final HTML report |
| @BeforeClass | Once before first method in class | Class-level setup — open browser | driver = new ChromeDriver() |
| @AfterClass | Once after last method in class | Class-level teardown — close browser | driver.quit() |
| @BeforeMethod | Before EACH @Test method | Method-level setup — navigate to URL, login | driver.get("https://...") |
| @AfterMethod | After EACH @Test method | Method-level teardown — screenshot on fail, logout | takeScreenshotOnFail() |
| @Test | The actual test method | Your test logic — steps, assertions | @Test public void testLogin() |
| @DataProvider | Provides data arrays | Parameterized test data | Return Object[][] with test datasets |
Data-Driven Testing with @DataProvider
public class LoginDataDrivenTest {
    @DataProvider(name = "loginData")
    public Object[][] getLoginData() {
        return new Object[][] {
            {"valid@email.com", "Valid@123", "Welcome, User!", true},
            {"wrong@email.com", "Wrong@123", "Invalid credentials", false},
            {"", "Pass@123", "Email required", false},
            {"valid@email.com", "", "Password required", false},
        };
    }

    @Test(dataProvider = "loginData")
    public void testLogin(String email, String pass, String expected, boolean shouldPass) {
        // shouldPass flags whether this dataset represents a successful login —
        // useful for extra assertions (e.g. dashboard visibility)
        LoginPage page = new LoginPage(driver);
        page.login(email, pass);
        String actual = page.getMessage();
        Assert.assertEquals(actual, expected);
    }
}
🔗 API Testing with Postman
Intermediate
HTTP Methods & Status Codes — Must Know
| Code Range | Meaning | Common Codes |
|---|---|---|
| 2xx — Success | Request was received, understood and accepted | 200 OK, 201 Created, 204 No Content |
| 3xx — Redirection | Further action is needed to complete the request | 301 Moved Permanently, 302 Found (temporary redirect) |
| 4xx — Client Error | The request has an error on the client side | 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, 422 Unprocessable Entity |
| 5xx — Server Error | The server failed to fulfill a valid request | 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, 504 Gateway Timeout |
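A small helper that maps any status code to its class is handy in API test utilities — a hypothetical sketch of the table above (`StatusCodes` and `classify` are illustrative names):

```java
public class StatusCodes {
    // Maps an HTTP status code to its class, per the table above
    public static String classify(int code) {
        if (code >= 200 && code < 300) return "2xx Success";
        if (code >= 300 && code < 400) return "3xx Redirection";
        if (code >= 400 && code < 500) return "4xx Client Error";
        if (code >= 500 && code < 600) return "5xx Server Error";
        return "Unknown";
    }

    public static void main(String[] args) {
        int[] codes = {200, 201, 301, 404, 422, 503};
        for (int c : codes) {
            System.out.println(c + " -> " + classify(c));
        }
    }
}
```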
Postman — Writing API Tests
// In Postman's "Tests" tab for GET /api/students
// 1. Status code assertion
pm.test("Status is 200", () => {
    pm.response.to.have.status(200);
});

// 2. Response time
pm.test("Response time under 500ms", () => {
    pm.expect(pm.response.responseTime).to.be.below(500);
});

// 3. JSON body assertions
const body = pm.response.json();

pm.test("Response is an array", () => {
    pm.expect(body).to.be.an('array');
});

pm.test("First student has name field", () => {
    pm.expect(body[0]).to.have.property('name');
});

// 4. Content-Type header
pm.test("Content-Type is JSON", () => {
    pm.response.to.have.header("Content-Type");
    pm.expect(pm.response.headers.get("Content-Type")).to.include("application/json");
});

// 5. Save value to environment variable for next request
const token = body.token;
pm.environment.set("authToken", token);
What to Test in APIs — Checklist
- Status codes for both success and failure paths
- Response body: required fields present, correct data types and values
- Error messages for invalid or malformed input
- Authentication and authorisation (401 without token, 403 for the wrong role)
- Response time against the SLA
- Headers (Content-Type, caching)
- Pagination, filtering and sorting parameters
👥 Agile Testing
Intermediate
In Agile, testing is not a separate phase at the end — it is integrated into every sprint. Testers work alongside developers from day one, providing continuous feedback. This "shift-left" approach catches bugs earlier and reduces cost.
Agile Testing Principles
- Testing is continuous, not a phase — every sprint delivers tested, working software
- Whole-team responsibility — quality is not only the QA team's job
- Fast feedback — automated checks run on every commit
- Shift-left — testers are involved from story refinement onwards
- Lightweight documentation — just-enough test artefacts over heavy test plans
TDD vs BDD
| | TDD (Test-Driven Development) | BDD (Behaviour-Driven Development) |
|---|---|---|
| Written by | Developers | QA + Business Analysts + Developers (collaboration) |
| Language | Unit test code (JUnit, TestNG) | Plain English scenarios (Gherkin: Given/When/Then) |
| Focus | Implementation correctness | Business behaviour and requirements |
| Tool | JUnit, Mockito, TestNG | Cucumber, JBehave, SpecFlow |
| Example | @Test public void sum_of_two_numbers_is_correct() | Given I have 2 items in cart When I apply 10% discount Then total should be reduced by 10% |
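The TDD cycle in miniature, using the cart scenario from the table — plain Java here for self-containment (in practice this would be a JUnit/TestNG test, and `applyDiscount` is a hypothetical method):

```java
public class DiscountTdd {
    // Step 1 (red): in TDD, the failing check in main() is written FIRST.
    // Step 2 (green): this minimal implementation is then written to pass it.
    // Step 3 (refactor): clean up with the test as a safety net.
    public static double applyDiscount(double total, double percent) {
        return total - (total * percent / 100.0);
    }

    public static void main(String[] args) {
        // The BDD scenario, expressed as a TDD-style unit check:
        // Given a cart total of 200, When a 10% discount is applied,
        // Then the total should be reduced by 10% (i.e. 180).
        double result = applyDiscount(200.0, 10.0);
        if (result != 180.0) throw new AssertionError("expected 180.0, got " + result);
        System.out.println("discount test passed: " + result);
    }
}
```

The same behaviour in BDD would live in a Gherkin feature file and be wired to step definitions with Cucumber; the assertion logic is identical, only the authorship and readability audience differ.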
📈 Performance Testing with JMeter
Advanced
Types of Performance Testing
| Type | What It Measures | How | Goal |
|---|---|---|---|
| Load Testing | System performance under expected load | Ramp up to normal peak user count (e.g., 1000 concurrent) | Verify response times meet SLA under normal conditions |
| Stress Testing | Breaking point of the system | Keep increasing load beyond normal until system fails | Find the maximum capacity and how the system fails (gracefully?) |
| Spike Testing | System reaction to sudden traffic spikes | Instantly jump from 100 to 5000 users, then back down | Does the system recover after a spike? How long does it take? |
| Endurance Testing | Performance over extended period | Run normal load for 8–24 hours continuously | Detect memory leaks, resource exhaustion, performance degradation over time |
| Volume Testing | Large amounts of data | Test with millions of records in the database | Does performance degrade with large data volumes? |
| Scalability Testing | Ability to scale up/out | Add servers/instances and measure improvement | Verify horizontal/vertical scaling works as expected |
Key Performance Metrics
- Response time — average and percentiles (90th / 95th / 99th)
- Throughput — requests or transactions per second
- Error rate — percentage of failed requests
- Concurrent users / active threads
- Resource utilisation — server-side CPU, memory, disk I/O, network
💡 JMeter Quick Start: (1) Create Test Plan → Add Thread Group (users=100, ramp-up=60s, loops=1). (2) Add HTTP Request sampler (server, path, method). (3) Add Listeners: View Results Tree (debug), Aggregate Report (results), Response Time Graph. (4) Run and analyse results — look for error rate > 0% and response time spikes. (5) Add CSV Data Set Config for parameterised test data.
❓ Software Testing Interview Q&A
Most frequently asked questions at product and service companies, with model answers.
Q: What is the difference between Verification and Validation?
Model Answer:
Verification asks: "Are we building the product RIGHT?" It checks that the work products match specified requirements — done without executing code. Examples: reviewing SRS documents, walkthroughs, inspections, desk checks of test cases. Validation asks: "Are we building the RIGHT product?" It checks that the final product meets the user's actual needs — done by executing software. Examples: user acceptance testing, beta testing, system testing. Memory aid: Verification = review of documents/code (static). Validation = testing the running software (dynamic). Both are needed: you can build the wrong product perfectly (fails validation), or build the right product incorrectly (fails verification). ISTQB defines: Verification and Validation together constitute QA (quality assurance) and quality control activities.
Q: Explain Severity vs Priority, with examples.
Model Answer:
Severity is the technical measure of the impact a defect has on the software functionality. It is assessed by the tester. Severity levels: Critical (system crash, data loss), High (major feature broken), Medium (feature works partially), Low (cosmetic issue, typo). Priority is the business measure of how urgently a fix is needed. It is determined by the product owner or project manager. Priority levels: P1 (fix immediately), P2 (fix in current sprint), P3 (fix in next sprint), P4 (fix when time permits). Classic combinations that appear in interviews: Low Severity + High Priority: Company name misspelled on homepage. Minor technical bug but extremely visible to customers — fix immediately. High Severity + Low Priority: Critical crash in a module used by only 2 users per year. Technically serious but not urgent for the business. The key point: Severity is technical, Priority is business. They can conflict — that conflict requires discussion between tester, developer and product owner.
Q: What is the Page Object Model, and why is it used?
Model Answer:
The Page Object Model (POM) is a test automation design pattern that creates an object repository for web UI elements. Each web page (or significant component) corresponds to a class. That class contains: (1) Web element locators as fields (usually with @FindBy annotations). (2) Methods representing user actions on that page (login(), searchProduct(), clickCheckout()). Benefits: Reduced code duplication — element locators are defined in ONE place. When the UI changes, update only the page class — not every test. Readability — tests read like business actions: loginPage.login(username, password) rather than driver.findElement(By.id("username")).sendKeys(username). Reusability — one page class used across multiple test classes. Maintainability — clear separation between test logic and page interaction logic. Structure: LoginPage class (page objects) + LoginTest class (test logic). The test class creates a LoginPage object and calls its methods. The LoginPage class handles all UI interactions internally.
Q: What is the difference between Smoke Testing and Sanity Testing?
Model Answer:
Smoke Testing is a broad, shallow test of the most critical functionality to determine if a new build is stable enough for further testing. It is done AFTER receiving a new build. If smoke tests fail, the build is rejected and returned to developers without further testing. Tests cover: can the application launch? Can users log in? Do main navigation items work? Think of it as a "health check" on the build. It is also called Build Verification Testing (BVT). Sanity Testing is narrow, deep testing of a specific functionality or bug fix. It is done AFTER receiving a build with a specific fix. It verifies that the fix works correctly and has not broken related features. It does NOT test the entire application — only the relevant area. Think of it as a "targeted check" on a specific change. Memory aid: Smoke = wide, shallow (is the whole build OK?). Sanity = narrow, deep (is THIS specific fix OK?). Both are done quickly — neither is exhaustive. Both help decide whether to proceed with full testing.
Q: What is regression testing, and how do you build a regression suite?
Model Answer:
Regression testing ensures that new code changes have not broken existing functionality that was previously working. Every time code changes (new feature, bug fix, refactoring), regression tests run to verify nothing is broken. Why important: software is interconnected — a change in the login module can affect the checkout flow. Without regression testing, you cannot know what broke. Building a regression suite: (1) Start with test cases for all critical business workflows (highest business value, highest usage). (2) Add test cases for areas that have historically been most bug-prone. (3) Include test cases that cover integration points between modules. (4) Add test cases for bug fixes — every bug that was found and fixed gets a regression test to prevent recurrence. (5) Automate the regression suite — manual regression is too slow for frequent releases. Selenium + TestNG + CI/CD (Jenkins/GitHub Actions) triggers regression on every code commit. Challenge: regression suites grow over time. Manage with risk-based prioritization — not all regression tests run for every change.
Q: What is API testing, and how does it differ from UI testing?
Model Answer:
API testing verifies that application programming interfaces work correctly — sending requests to the API endpoints and validating responses. It operates at the service layer (below the UI, above the database). Tools: Postman for manual/exploratory, REST Assured (Java) for automation, Newman for CI/CD execution. Key differences from UI testing: Speed — API tests run in milliseconds vs seconds for UI tests. Stability — APIs are more stable than UI (less affected by design changes). Independence — can test business logic without UI being ready. Coverage — test the actual data validation logic, not just what the UI shows. What to test in APIs: status codes, response body fields, data types, required fields presence, error messages for invalid input, authentication (401 without token, 403 for unauthorised), response time, pagination, filtering. Where API testing fits: it catches backend bugs that might not be visible in the UI, and provides faster feedback than E2E UI tests. In the testing pyramid, API/integration tests form the middle layer.
Q: What is the ISTQB Foundation Level certification, and what does the exam cover?
Model Answer:
ISTQB (International Software Testing Qualifications Board) is a non-profit organisation that provides globally recognised software testing certifications. The Foundation Level (CTFL) is the entry-level certification — recognized in 100+ countries and required by many companies. Exam details: 40 questions, 60 minutes, 65% pass mark (26 correct). Multiple choice format. Chapters and approximate weights: (1) Fundamentals of Testing — 15%: what is testing, why testing, testing principles, test process, psychology. (2) Testing Throughout the SDLC — 15%: testing in different models (V-model, Agile), test levels (unit/integration/system/acceptance), test types. (3) Static Testing — 10%: reviews, static analysis, value of static testing. (4) Test Analysis and Design — 25%: black box techniques (EP, BVA, decision table, state transition), white box techniques (statement/branch coverage), experience-based. (5) Managing the Test Activities — 25%: test planning, estimation, test monitoring and control, configuration management, defect management. (6) Test Tools — 10%: tool classification, benefits and risks of automation. Preparation: ISTQB official syllabus + these notes + practice papers. 3–4 weeks of study typically sufficient for the Foundation Level.
🏅 ISTQB CTFL Exam Tips: (1) Read EVERY question carefully — ISTQB questions are precise. (2) Look for absolute words: "ALWAYS", "NEVER", "BEST" — these often point to the answer. (3) Eliminate obviously wrong answers first. (4) Testing principles (7 of them) appear in almost every exam. (5) Know all black-box techniques thoroughly — 25% of exam is test design. (6) Official ISTQB mock exams are the best practice — patterns repeat year to year. Target score: 75%+ in practice before attempting the real exam.