CuesysLearn
Software Testing by Cuesys Infotech Pvt Ltd
Free Training
📋 SOFTWARE TESTING 🏅 ISTQB Ready Beginner → Advanced

Software Testing
Complete Notes

Master software testing from fundamentals to automation — manual testing, SDLC/STLC, Selenium WebDriver, TestNG, API testing, Agile testing and performance testing. Everything to crack testing interviews and clear the ISTQB Foundation Level certification.

📋 11 Topics Covered
❓ 75+ Interview Q&As
🏅 ISTQB CTFL Ready
💻 Selenium Code Examples
✅ Updated 2026

📚 Welcome to Software Testing

Software testing is the process of evaluating and verifying that a software application works as expected. It finds bugs before users do — protecting businesses from financial loss, reputational damage and legal liability. A single bug in banking software can cause crores in losses; a bug in medical software can cost lives.

Testing roles are among the most stable in IT — every software project needs testers. With automation skills (Selenium, TestNG, API testing), testers earn ₹6–25 LPA in India. The ISTQB certification is the global standard — recognized in 100+ countries.

- 📋 Manual Testing — Test cases, defect lifecycle, test plans, bug reporting
- SDLC & STLC — V-model, Agile, test phases, entry/exit criteria
- 📊 Testing Types — Functional, non-functional, black box, white box, grey box
- 🔍 Test Design — Equivalence partitioning, boundary value, decision tables
- 💻 Selenium — WebDriver, locators, POM, waits, data-driven testing
- TestNG — Annotations, test suites, parameterization, parallel execution
- 🔗 API Testing — REST APIs, Postman collections, status codes, assertions
- 👥 Agile Testing — Sprint testing, exploratory testing, TDD, BDD, shift-left
- 📈 Performance — JMeter, load testing, stress testing, performance metrics
- Interview Q&A — 75+ real questions from top companies with model answers
- 🏅 ISTQB — Foundation Level exam guide, syllabus breakdown, pass tips


SDLC & STLC

Beginner

SDLC — Software Development Life Cycle

| # | Phase | Activities | Output |
|---|-------|------------|--------|
| 1 | Requirement Analysis | Gather and understand business requirements from stakeholders | BRS (Business Requirement Specification), SRS |
| 2 | System Design | Architects design high-level and low-level system design | HLD (High Level Design), LLD (Low Level Design) |
| 3 | Implementation (Coding) | Developers write code based on design documents | Source code, unit tests |
| 4 | Testing | QA team tests the software against requirements | Bug reports, test reports |
| 5 | Deployment | Tested software deployed to production environment | Live application |
| 6 | Maintenance | Monitor, fix bugs, add features after go-live | Patches, updates, new versions |

STLC — Software Testing Life Cycle

1. Requirement Analysis — understand what to test
2. Test Planning — estimate effort, scope and resources
3. Test Case Design — write and review test cases
4. Environment Setup — prepare the test environment
5. Test Execution — run tests, log defects
6. Test Closure — reports, metrics, lessons learned

V-Model — Testing at Every Stage

| Development Phase | Corresponding Test Phase | Question Answered |
|---|---|---|
| Requirement Analysis | Acceptance Testing (UAT) | Does the system meet business needs? |
| System Design | System Testing | Does the integrated system work correctly? |
| High Level Design | Integration Testing | Do modules work together correctly? |
| Low Level Design | Component / Unit Testing | Does each unit/module work correctly? |
| Coding | Unit Testing | Does the code logic work as written? |

💡 Entry vs Exit Criteria: Entry Criteria = conditions that MUST be true before testing begins (e.g., build ready, test environment up, test cases reviewed). Exit Criteria = conditions that signal testing is complete (e.g., 95% tests executed, 0 P1 bugs open, defect density below threshold). Always define both in your test plan.
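As a sketch, exit criteria can be evaluated mechanically at the end of a test cycle. The thresholds below mirror the examples in the tip (95% tests executed, zero open P1 bugs, defect density below a threshold); the class and method names are illustrative:

```java
public class ExitCriteria {
    // Returns true when all exit criteria from the tip above are satisfied.
    static boolean met(int executed, int total, int openP1Bugs,
                       double defectDensity, double densityThreshold) {
        double executionRate = (double) executed / total;
        return executionRate >= 0.95
            && openP1Bugs == 0
            && defectDensity < densityThreshold;
    }

    public static void main(String[] args) {
        System.out.println(met(98, 100, 0, 0.4, 0.5)); // true — all criteria satisfied
        System.out.println(met(90, 100, 0, 0.4, 0.5)); // false — only 90% executed
        System.out.println(met(98, 100, 2, 0.4, 0.5)); // false — P1 bugs still open
    }
}
```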

📊 Types of Testing

Beginner → Intermediate

Functional vs Non-Functional Testing

| Category | Type | What It Tests | Example |
|---|---|---|---|
| Functional | Unit Testing | Individual functions/methods in isolation | Test that calculateSalary() returns the correct value |
| Functional | Integration Testing | Interaction between 2+ modules | Test that the Login and Dashboard modules work together |
| Functional | System Testing | Complete end-to-end system | Test the entire enrollment flow from registration to certificate |
| Functional | UAT (User Acceptance) | Software meets business requirements | Business users validate the system before go-live |
| Functional | Regression Testing | New changes did not break existing features | Re-run the test suite after every code change |
| Functional | Smoke Testing | Basic critical functions work | Login, homepage load and main navigation work after a new build |
| Functional | Sanity Testing | Specific bug fix or new feature works correctly | Test only the fixed module after a hotfix |
| Non-Functional | Performance Testing | Speed, stability, scalability under load | Homepage loads in under 2 seconds for 1000 users |
| Non-Functional | Security Testing | Vulnerabilities, unauthorized access | SQL injection, XSS, authentication bypass attempts |
| Non-Functional | Usability Testing | Ease of use and user experience | Can a new user complete enrollment in under 5 minutes? |
| Non-Functional | Compatibility Testing | Works across browsers, devices, OS | Test on Chrome, Firefox, Safari, iOS, Android |

Black Box vs White Box vs Grey Box

Black Box Testing

Test without knowing the internal code structure. The tester only knows inputs and expected outputs.

Techniques: equivalence partitioning, boundary value analysis, decision tables. Used by manual QA testers for functional testing.

White Box Testing

Test with full knowledge of the internal code. Testers check code paths, logic and structure.

Techniques: statement coverage, branch coverage, path coverage. Used by developers for unit testing.

Grey Box Testing

Partial knowledge of internals. The tester knows data structures and the DB schema but not the full source code.

Techniques: database testing, API testing where you know the data model, integration testing.

Severity vs Priority — Most Asked Interview Question

| Aspect | Severity | Priority |
|---|---|---|
| Definition | Technical measure of how much the defect impacts functionality | Business measure of how urgently the fix is needed |
| Set by | QA / Tester | Product Owner / Manager |
| Examples | High: app crashes. Low: minor typo in footer | High: typo in company name on homepage. Low: crash in feature used by 2 users/year |

Key combinations to remember:
- 🚫 High Severity + Low Priority — a crash in a rarely-used feature: technically serious but not urgent
- ⚠ Low Severity + High Priority — the CEO's name spelled wrong on the homepage: no functional impact but very visible

📋 Test Case Design Techniques

Intermediate

Test case design techniques help create the minimum number of test cases that give maximum test coverage. These are the backbone of effective testing — and core ISTQB syllabus topics.

1. Equivalence Partitioning (EP)

Divide input data into partitions (classes) where all values in a partition should behave the same way. Test ONE value from each partition — valid and invalid.

Example: Age field accepts 18–60 for a loan application.

| Partition | Range | Test Value | Expected |
|---|---|---|---|
| Invalid (too low) | < 18 | 15 | Error message |
| Valid | 18 – 60 | 35 | Accepted |
| Invalid (too high) | > 60 | 65 | Error message |
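A minimal sketch of the partition behaviour — the validator class below is hypothetical, but one representative value per partition is enough to exercise all three classes:

```java
public class AgeValidator {
    // Valid partition for the loan-application example: 18–60 inclusive.
    static String validate(int age) {
        if (age < 18 || age > 60) return "Error message";
        return "Accepted";
    }

    public static void main(String[] args) {
        // One representative value per partition, as in the table above
        int[] representatives = {15, 35, 65};
        for (int age : representatives) {
            System.out.println(age + " -> " + validate(age));
            // 15 -> Error message, 35 -> Accepted, 65 -> Error message
        }
    }
}
```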

2. Boundary Value Analysis (BVA)

Bugs cluster at boundaries. Test the exact boundary values and values just inside/outside. For range 18–60: test 17, 18, 19 and 59, 60, 61.

💡 BVA Rule: For each boundary, test: value just below, the boundary itself, and value just above. For range min=18, max=60 → test: 17 (invalid), 18 (valid), 19 (valid), 59 (valid), 60 (valid), 61 (invalid) = 6 test cases covering both boundaries.
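The BVA rule above can be turned into a tiny helper that generates the six test values for any closed range (class and method names are illustrative):

```java
import java.util.Arrays;

public class BoundaryValues {
    // Six BVA test values for a closed range [min, max]:
    // just below, on, and just above each boundary.
    static int[] values(int min, int max) {
        return new int[]{min - 1, min, min + 1, max - 1, max, max + 1};
    }

    public static void main(String[] args) {
        // For the range 18–60 this matches the rule above
        System.out.println(Arrays.toString(values(18, 60)));
        // [17, 18, 19, 59, 60, 61]
    }
}
```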

3. Decision Table Testing

For features with multiple conditions and combinations. List all conditions and their combinations systematically — ensures no combination is missed.

| Test Case | Username Valid? | Password Valid? | Account Active? | Expected Result |
|---|---|---|---|---|
| TC1 | Yes | Yes | Yes | Login Success ✅ |
| TC2 | Yes | Yes | No | Account Locked Message |
| TC3 | Yes | No | Yes | Invalid Password Error |
| TC4 | No | Yes | Yes | Invalid Username Error |
| TC5 | Yes | No | No | Account Locked Message |
| TC6 | No | No | Yes | Invalid Username Error |
| TC7 | No | Yes | No | Invalid Username Error |
| TC8 | No | No | No | Invalid Username Error |

(When the username is invalid, the system cannot look up the account, so the username error takes precedence over the other conditions.)
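The login decision logic can be sketched as sequential checks. This sketch assumes the checks run in the order username → account status → password, so an invalid username is always reported first; the class and method names are illustrative:

```java
public class LoginDecision {
    // Decision-table logic, assuming check order:
    // username -> account active -> password.
    static String decide(boolean userOk, boolean passOk, boolean active) {
        if (!userOk) return "Invalid Username Error";
        if (!active) return "Account Locked Message";
        if (!passOk) return "Invalid Password Error";
        return "Login Success";
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true, true));   // TC1: Login Success
        System.out.println(decide(true, true, false));  // TC2: Account Locked Message
        System.out.println(decide(true, false, true));  // TC3: Invalid Password Error
        System.out.println(decide(false, true, true));  // TC4: Invalid Username Error
        System.out.println(decide(true, false, false)); // TC5: Account Locked Message
    }
}
```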

4. State Transition Testing

Test how a system moves between states based on events. Ideal for login systems, order processing, workflow applications.

1. Identify states — List all valid states the system can be in. Example for an ATM: Idle, Card Inserted, PIN Entered, Transaction In Progress, Card Ejected.
2. Identify events/transitions — Identify what causes state changes. Event: insert card → state changes from Idle to Card Inserted.
3. Test each transition — Create test cases for each valid state transition AND test invalid transitions (e.g., enter PIN without inserting a card).
4. Build a state table — Create a table: current state × event = next state. Every cell is a potential test case.
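The state table in step 4 can be sketched as a lookup map. The states come from the ATM example above; the event names are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class AtmStateTable {
    // State table: (current state, event) -> next state.
    // Missing entries are invalid transitions — each one is a negative test case.
    static final Map<String, String> TABLE = new HashMap<>();
    static {
        TABLE.put("Idle|insert card", "Card Inserted");
        TABLE.put("Card Inserted|enter PIN", "PIN Entered");
        TABLE.put("PIN Entered|start transaction", "Transaction In Progress");
        TABLE.put("Transaction In Progress|finish", "Card Ejected");
    }

    static String next(String state, String event) {
        return TABLE.getOrDefault(state + "|" + event, "INVALID TRANSITION");
    }

    public static void main(String[] args) {
        System.out.println(next("Idle", "insert card")); // valid: Card Inserted
        System.out.println(next("Idle", "enter PIN"));   // invalid: PIN before card
    }
}
```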

🔍 Manual Testing Deep Dive

Intermediate

Bug Life Cycle

New → Assigned → Open → Fixed → Retest → Closed
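The lifecycle can be encoded so that only legal status moves are accepted. This sketch covers just the happy path shown above; real defect trackers also allow branches such as Reopened, Rejected and Deferred:

```java
import java.util.Map;
import java.util.Set;

public class BugLifecycle {
    // Happy-path transitions only; the status names match the lifecycle above.
    static final Map<String, Set<String>> NEXT = Map.of(
        "New", Set.of("Assigned"),
        "Assigned", Set.of("Open"),
        "Open", Set.of("Fixed"),
        "Fixed", Set.of("Retest"),
        "Retest", Set.of("Closed"),
        "Closed", Set.of()
    );

    static boolean canMove(String from, String to) {
        return NEXT.getOrDefault(from, Set.of()).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove("New", "Assigned")); // true
        System.out.println(canMove("New", "Closed"));   // false — cannot skip steps
    }
}
```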

Writing Effective Test Cases

| Field | Description | Example |
|---|---|---|
| Test Case ID | Unique identifier | TC_LOGIN_001 |
| Test Case Title | Brief description of what is being tested | Verify successful login with valid credentials |
| Pre-conditions | What must be true before test execution | User must be registered. Browser must be open. Internet connected. |
| Test Steps | Step-by-step instructions (numbered) | 1. Navigate to cuesysinfotech.com/login 2. Enter valid email 3. Enter valid password 4. Click Login button |
| Test Data | Specific data to use during the test | Email: ravi@test.com, Password: Test@1234 |
| Expected Result | What SHOULD happen (written before executing) | User is redirected to dashboard. Welcome message displayed. Login time recorded. |
| Actual Result | What ACTUALLY happened (filled after execution) | User successfully logged in. Redirected to dashboard. |
| Status | Pass / Fail / Blocked / Skipped | Pass |

Effective Bug Report — Must Include

1. Title — Clear, specific one-line summary: "Login button unresponsive when caps lock is ON in Chrome v122"
2. Environment — Browser + version, OS, device, application version, URL. Reproducibility depends on the environment.
3. Steps to Reproduce — Numbered, precise, minimal. Another tester must be able to reproduce the bug by following your exact steps.
4. Expected vs Actual — What SHOULD happen vs what ACTUALLY happened. The gap is the bug.
5. Severity & Priority — Your assessment of impact and urgency.
6. Attachments — Annotated screenshot, screen recording, log files, HAR network trace for API issues. Screenshots and recordings are worth 1000 words.

💻 Selenium WebDriver

Intermediate → Advanced

Selenium is the industry standard for browser automation testing. WebDriver drives real browsers (Chrome, Firefox, Safari) programmatically — simulating exactly what a user does. It is used at Google, Microsoft and every major enterprise for regression automation.

Locators — Finding Web Elements

Java — Selenium locators:

```java
import org.openqa.selenium.*;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://www.cuesysinfotech.com");

        // 1. By ID — fastest, most reliable
        driver.findElement(By.id("username"));

        // 2. By Name
        driver.findElement(By.name("email"));

        // 3. By CSS Selector — fast, flexible
        driver.findElement(By.cssSelector("#login-btn"));
        driver.findElement(By.cssSelector(".btn-primary"));
        driver.findElement(By.cssSelector("input[type='email']"));

        // 4. By XPath — powerful, use when ID/CSS not available
        driver.findElement(By.xpath("//button[@type='submit']"));
        driver.findElement(By.xpath("//h1[contains(text(),'Welcome')]"));

        // 5. By LinkText
        driver.findElement(By.linkText("Sign In"));
        driver.findElement(By.partialLinkText("Sign"));

        driver.quit();
    }
}
```

Waits — Handling Dynamic Elements

Java — Selenium waits:

```java
import java.time.Duration;
import org.openqa.selenium.*;
import org.openqa.selenium.support.ui.*;

// Implicit Wait — applied globally, waits up to N seconds for any element
driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(10));

// Explicit Wait — wait for a SPECIFIC condition on ONE element (preferred)
WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(15));

// Wait until element is visible
WebElement btn = wait.until(
    ExpectedConditions.visibilityOfElementLocated(By.id("submit-btn"))
);

// Wait until element is clickable
wait.until(ExpectedConditions.elementToBeClickable(By.id("login"))).click();

// Wait for text to appear
wait.until(ExpectedConditions.textToBePresentInElement(
    driver.findElement(By.id("msg")), "Login successful"
));

// Fluent Wait — poll every 500ms, ignore specific exceptions
Wait<WebDriver> fluent = new FluentWait<>(driver)
    .withTimeout(Duration.ofSeconds(30))
    .pollingEvery(Duration.ofMillis(500))
    .ignoring(NoSuchElementException.class);
```

Page Object Model (POM) — Industry Best Practice

Java — POM pattern:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

// LoginPage.java — Page Class
public class LoginPage {
    private WebDriver driver;

    // Element locators as fields
    @FindBy(id = "email")
    private WebElement emailField;

    @FindBy(id = "password")
    private WebElement passwordField;

    @FindBy(id = "login-btn")
    private WebElement loginButton;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
        PageFactory.initElements(driver, this);
    }

    // Page actions as methods
    public void enterEmail(String email) { emailField.sendKeys(email); }
    public void enterPassword(String pass) { passwordField.sendKeys(pass); }
    public void clickLogin() { loginButton.click(); }

    public DashboardPage login(String email, String pass) {
        enterEmail(email);
        enterPassword(pass);
        clickLogin();
        return new DashboardPage(driver);
    }
}

// LoginTest.java — Test Class (clean, readable)
public class LoginTest {
    @Test
    public void testValidLogin() {
        LoginPage login = new LoginPage(driver);
        DashboardPage dashboard = login.login("user@test.com", "Pass@123");
        Assert.assertTrue(dashboard.isWelcomeDisplayed());
    }
}
```

⚡ TestNG Framework

Intermediate

TestNG Annotations — Execution Order

| Annotation | Runs | Use For | Example |
|---|---|---|---|
| @BeforeSuite | Once before the entire suite | Suite-wide setup — DB connection, browser init | Start WebDriver server |
| @AfterSuite | Once after the entire suite | Suite-wide teardown — close connections | Generate final HTML report |
| @BeforeClass | Once before the first method in a class | Class-level setup — open browser | driver = new ChromeDriver() |
| @AfterClass | Once after the last method in a class | Class-level teardown — close browser | driver.quit() |
| @BeforeMethod | Before EACH @Test method | Method-level setup — navigate to URL, login | driver.get("https://...") |
| @AfterMethod | After EACH @Test method | Method-level teardown — screenshot on fail, logout | takeScreenshotOnFail() |
| @Test | The actual test method | Your test logic — steps, assertions | @Test public void testLogin() |
| @DataProvider | Provides data arrays | Parameterized test data | Return Object[][] with test datasets |

Data-Driven Testing with @DataProvider

Java — TestNG data-driven:

```java
public class LoginDataDrivenTest {

    @DataProvider(name = "loginData")
    public Object[][] getLoginData() {
        return new Object[][] {
            {"valid@email.com", "Valid@123", "Welcome, User!", true},
            {"wrong@email.com", "Wrong@123", "Invalid credentials", false},
            {"", "Pass@123", "Email required", false},
            {"valid@email.com", "", "Password required", false},
        };
    }

    @Test(dataProvider = "loginData")
    public void testLogin(String email, String pass, String expected, boolean shouldPass) {
        LoginPage page = new LoginPage(driver);
        page.login(email, pass);
        String actual = page.getMessage();
        Assert.assertEquals(actual, expected);
    }
}
```

🔗 API Testing with Postman

Intermediate

HTTP Methods & Status Codes — Must Know

| Code Range | Meaning | Common Codes |
|---|---|---|
| 2xx — Success | Request was received, understood and accepted | 200 OK, 201 Created, 204 No Content |
| 3xx — Redirection | Further action is needed to complete the request | 301 Moved Permanently, 302 Found (temporary redirect) |
| 4xx — Client Error | The request has an error on the client side | 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, 422 Unprocessable Entity |
| 5xx — Server Error | The server failed to fulfill a valid request | 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, 504 Gateway Timeout |
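The code ranges map directly onto a small classification helper, handy inside API test assertions; a minimal sketch (the class name is illustrative):

```java
public class StatusCodes {
    // Map an HTTP status code to its range meaning from the table above.
    static String category(int code) {
        if (code >= 200 && code < 300) return "Success";
        if (code >= 300 && code < 400) return "Redirection";
        if (code >= 400 && code < 500) return "Client Error";
        if (code >= 500 && code < 600) return "Server Error";
        return "Unknown";
    }

    public static void main(String[] args) {
        System.out.println(category(201)); // Success
        System.out.println(category(404)); // Client Error
        System.out.println(category(503)); // Server Error
    }
}
```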

Postman — Writing API Tests

JavaScript — Postman tests:

```javascript
// In Postman's "Tests" tab for GET /api/students

// 1. Status code assertion
pm.test("Status is 200", () => {
    pm.response.to.have.status(200);
});

// 2. Response time
pm.test("Response time under 500ms", () => {
    pm.expect(pm.response.responseTime).to.be.below(500);
});

// 3. JSON body assertions
const body = pm.response.json();
pm.test("Response is an array", () => {
    pm.expect(body).to.be.an('array');
});
pm.test("First student has name field", () => {
    pm.expect(body[0]).to.have.property('name');
});

// 4. Content-Type header
pm.test("Content-Type is JSON", () => {
    pm.response.to.have.header("Content-Type");
    pm.expect(pm.response.headers.get("Content-Type")).to.include("application/json");
});

// 5. Save a value to an environment variable for the next request
const token = body.token;
pm.environment.set("authToken", token);
```

What to Test in APIs — Checklist

1. Happy path — Test the expected correct scenario: valid inputs, correct authentication, expected response.
2. Negative tests — Invalid data (wrong types, missing required fields, out-of-range values). Expect 400/422 errors.
3. Authentication — Test without a token (expect 401), with an expired token, with an invalid token, and with a valid token.
4. Authorization — Test accessing a resource you do not own (expect 403). Test admin-only endpoints with a regular user.
5. Boundary values — Empty strings, null values, maximum-length strings, zero and negative numbers for numeric fields.
6. Response schema — All expected fields present, correct data types, no extra sensitive fields leaked (passwords, internal IDs).
7. Performance — Response time under an acceptable threshold. No timeout under normal load.

👥 Agile Testing

Intermediate

In Agile, testing is not a separate phase at the end — it is integrated into every sprint. Testers work alongside developers from day one, providing continuous feedback. This "shift-left" approach catches bugs earlier and reduces cost.

Agile Testing Principles

1. Test early and continuously — Testing starts on Day 1 of the sprint, not after development is "done". Testers review user stories during sprint planning to clarify requirements.
2. Whole-team responsibility — Quality is everyone's responsibility, not just the QA team's. Developers write unit tests; QA writes integration and UI tests; everyone participates in the sprint review.
3. Automate ruthlessly — Manual regression testing every sprint is unsustainable. Automate regression tests so the team can release confidently every sprint.
4. Test at the right level — Use the testing pyramid: many unit tests (fast, cheap), some integration tests (medium), few UI/E2E tests (slow, expensive). Do not invert the pyramid.
5. Exploratory testing in every sprint — After automated tests pass, experienced testers explore the application freely, finding bugs that scripted tests miss. Typically 20–30% of sprint testing time.
6. Definition of Done includes testing — A user story is not "done" until unit tests pass, integration tests pass, test cases are reviewed, exploratory testing is complete and no P1/P2 bugs are open.

TDD vs BDD

| Aspect | TDD (Test-Driven Development) | BDD (Behaviour-Driven Development) |
|---|---|---|
| Written by | Developers | QA + Business Analysts + Developers (collaboration) |
| Language | Unit test code (JUnit, TestNG) | Plain English scenarios (Gherkin: Given/When/Then) |
| Focus | Implementation correctness | Business behaviour and requirements |
| Tools | JUnit, Mockito, TestNG | Cucumber, JBehave, SpecFlow |
| Example | @Test public void sum_of_two_numbers_is_correct() | Given I have 2 items in cart, When I apply 10% discount, Then total should be reduced by 10% |

📈 Performance Testing with JMeter

Advanced

Types of Performance Testing

| Type | What It Measures | How | Goal |
|---|---|---|---|
| Load Testing | System performance under expected load | Ramp up to normal peak user count (e.g., 1000 concurrent) | Verify response times meet SLA under normal conditions |
| Stress Testing | Breaking point of the system | Keep increasing load beyond normal until the system fails | Find the maximum capacity and how the system fails (gracefully?) |
| Spike Testing | System reaction to sudden traffic spikes | Instantly jump from 100 to 5000 users, then back down | Does the system recover after a spike? How long does it take? |
| Endurance Testing | Performance over an extended period | Run normal load for 8–24 hours continuously | Detect memory leaks, resource exhaustion and degradation over time |
| Volume Testing | Behaviour with large amounts of data | Test with millions of records in the database | Does performance degrade with large data volumes? |
| Scalability Testing | Ability to scale up/out | Add servers/instances and measure the improvement | Verify horizontal/vertical scaling works as expected |

Key Performance Metrics

- Response Time — Time from sending a request to receiving the complete response. Target: under 2s for web pages, under 200ms for APIs.
- Throughput — Requests processed per second (RPS/TPS). Higher = better server capacity.
- Error Rate — Percentage of failed requests. Should be 0% under normal load; investigate any errors.
- Concurrent Users — Number of users active simultaneously. Peak concurrent users is a key capacity metric.
- CPU & Memory — Server resource utilisation under load. Should not exceed 80% CPU or 85% memory.
- Apdex Score — Application Performance Index, measuring user satisfaction from 0 (worst) to 1.0 (best). Target: 0.94+.
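The standard Apdex formula is (satisfied + tolerating/2) / total samples, where a response counts as satisfied at or below a target time T and as tolerating at or below 4T; anything slower is frustrated and scores zero. A sketch with illustrative sample data:

```java
public class Apdex {
    // Apdex = (satisfied + tolerating/2) / total,
    // satisfied: response time <= T; tolerating: <= 4T; else frustrated.
    static double score(long[] responseTimesMs, long targetMs) {
        int satisfied = 0, tolerating = 0;
        for (long t : responseTimesMs) {
            if (t <= targetMs) satisfied++;
            else if (t <= 4 * targetMs) tolerating++;
            // slower than 4T -> frustrated, contributes 0
        }
        return (satisfied + tolerating / 2.0) / responseTimesMs.length;
    }

    public static void main(String[] args) {
        // 5 samples in ms against a target T = 500ms:
        // 3 satisfied, 1 tolerating (900 <= 2000), 1 frustrated (2500)
        long[] samples = {100, 150, 300, 900, 2500};
        System.out.println(score(samples, 500)); // (3 + 0.5) / 5 = 0.7
    }
}
```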

💡 JMeter Quick Start: (1) Create Test Plan → Add Thread Group (users=100, ramp-up=60s, loops=1). (2) Add HTTP Request sampler (server, path, method). (3) Add Listeners: View Results Tree (debug), Aggregate Report (results), Response Time Graph. (4) Run and analyse results — look for error rate > 0% and response time spikes. (5) Add CSV Data Set Config for parameterised test data.

❓ Software Testing Interview Q&A

Most frequently asked questions at top product and service companies, with model answers.

Q1. What is the difference between Verification and Validation?

Model Answer:

Verification asks: "Are we building the product RIGHT?" It checks that work products match specified requirements — done without executing code. Examples: reviewing SRS documents, walkthroughs, inspections, desk checks of test cases.

Validation asks: "Are we building the RIGHT product?" It checks that the final product meets the user's actual needs — done by executing the software. Examples: user acceptance testing, beta testing, system testing.

Memory aid: Verification = review of documents/code (static). Validation = testing the running software (dynamic). Both are needed: you can build the wrong product perfectly (fails validation), or build the right product incorrectly (fails verification). Per ISTQB, verification and validation together constitute quality assurance and quality control activities.

Q2. What is the difference between Severity and Priority in bug reporting?

Model Answer:

Severity is the technical measure of the impact a defect has on software functionality, assessed by the tester. Severity levels: Critical (system crash, data loss), High (major feature broken), Medium (feature works partially), Low (cosmetic issue, typo).

Priority is the business measure of how urgently a fix is needed, determined by the product owner or project manager. Priority levels: P1 (fix immediately), P2 (fix in current sprint), P3 (fix in next sprint), P4 (fix when time permits).

Classic combinations that appear in interviews:
- Low Severity + High Priority: company name misspelled on the homepage. A minor technical bug but extremely visible to customers — fix immediately.
- High Severity + Low Priority: critical crash in a module used by only 2 users per year. Technically serious but not urgent for the business.

The key point: Severity is technical, Priority is business. They can conflict — and that conflict requires discussion between the tester, developer and product owner.

Q3. Explain the Page Object Model design pattern in Selenium.

Model Answer:

The Page Object Model (POM) is a test automation design pattern that creates an object repository for web UI elements. Each web page (or significant component) corresponds to a class containing: (1) web element locators as fields (usually with @FindBy annotations), and (2) methods representing user actions on that page (login(), searchProduct(), clickCheckout()).

Benefits:
- Reduced code duplication — element locators are defined in ONE place. When the UI changes, update only the page class, not every test.
- Readability — tests read like business actions: loginPage.login(username, password) rather than driver.findElement(By.id("username")).sendKeys(username).
- Reusability — one page class is used across multiple test classes.
- Maintainability — clear separation between test logic and page interaction logic.

Structure: a LoginPage class (page objects) plus a LoginTest class (test logic). The test class creates a LoginPage object and calls its methods; the LoginPage class handles all UI interactions internally.

Q4. What is the difference between Smoke Testing and Sanity Testing?

Model Answer:

Smoke Testing is a broad, shallow test of the most critical functionality to determine whether a new build is stable enough for further testing. It is done AFTER receiving a new build. If smoke tests fail, the build is rejected and returned to developers without further testing. Tests cover: can the application launch? Can users log in? Do the main navigation items work? Think of it as a "health check" on the build. It is also called Build Verification Testing (BVT).

Sanity Testing is narrow, deep testing of a specific functionality or bug fix. It is done AFTER receiving a build with a specific fix, and verifies that the fix works correctly and has not broken related features. It does NOT test the entire application — only the relevant area. Think of it as a "targeted check" on a specific change.

Memory aid: Smoke = wide, shallow (is the whole build OK?). Sanity = narrow, deep (is THIS specific fix OK?). Both are done quickly — neither is exhaustive — and both help decide whether to proceed with full testing.

Q5. What is Regression Testing? How do you decide what to include in a regression suite?

Model Answer:

Regression testing ensures that new code changes have not broken existing functionality that was previously working. Every time code changes (new feature, bug fix, refactoring), regression tests run to verify nothing is broken. Why it matters: software is interconnected — a change in the login module can affect the checkout flow. Without regression testing, you cannot know what broke.

Building a regression suite:
1. Start with test cases for all critical business workflows (highest business value, highest usage).
2. Add test cases for areas that have historically been most bug-prone.
3. Include test cases that cover integration points between modules.
4. Add test cases for bug fixes — every bug that was found and fixed gets a regression test to prevent recurrence.
5. Automate the regression suite — manual regression is too slow for frequent releases. Selenium + TestNG + CI/CD (Jenkins/GitHub Actions) triggers regression on every code commit.

Challenge: regression suites grow over time. Manage this with risk-based prioritization — not all regression tests run for every change.

Q6. What is API testing? How is it different from UI testing?

Model Answer:

API testing verifies that application programming interfaces work correctly — sending requests to API endpoints and validating the responses. It operates at the service layer (below the UI, above the database). Tools: Postman for manual/exploratory testing, REST Assured (Java) for automation, Newman for CI/CD execution.

Key differences from UI testing:
- Speed — API tests run in milliseconds vs seconds for UI tests.
- Stability — APIs are more stable than the UI (less affected by design changes).
- Independence — you can test business logic without the UI being ready.
- Coverage — you test the actual data validation logic, not just what the UI shows.

What to test in APIs: status codes, response body fields, data types, presence of required fields, error messages for invalid input, authentication (401 without token, 403 for unauthorised), response time, pagination, filtering.

Where API testing fits: it catches backend bugs that might not be visible in the UI and provides faster feedback than E2E UI tests. In the testing pyramid, API/integration tests form the middle layer.

Q7. What is ISTQB and what does the Foundation Level exam cover?

Model Answer:

ISTQB (International Software Testing Qualifications Board) is a non-profit organisation that provides globally recognised software testing certifications. The Foundation Level (CTFL) is the entry-level certification — recognised in 100+ countries and required by many companies.

Exam details: 40 multiple-choice questions, 60 minutes, 65% pass mark (26 correct).

Chapters and approximate weights:
1. Fundamentals of Testing — 15%: what testing is, why it matters, testing principles, the test process, psychology.
2. Testing Throughout the SDLC — 15%: testing in different models (V-model, Agile), test levels (unit/integration/system/acceptance), test types.
3. Static Testing — 10%: reviews, static analysis, the value of static testing.
4. Test Analysis and Design — 25%: black box techniques (EP, BVA, decision table, state transition), white box techniques (statement/branch coverage), experience-based techniques.
5. Managing the Test Activities — 25%: test planning, estimation, test monitoring and control, configuration management, defect management.
6. Test Tools — 10%: tool classification, benefits and risks of automation.

Preparation: the official ISTQB syllabus + these notes + practice papers. 3–4 weeks of study is typically sufficient for the Foundation Level.

🏅 ISTQB CTFL Exam Tips: (1) Read EVERY question carefully — ISTQB questions are precise. (2) Look for absolute words: "ALWAYS", "NEVER", "BEST" — these often point to the answer. (3) Eliminate obviously wrong answers first. (4) Testing principles (7 of them) appear in almost every exam. (5) Know all black-box techniques thoroughly — 25% of exam is test design. (6) Official ISTQB mock exams are the best practice — patterns repeat year to year. Target score: 75%+ in practice before attempting the real exam.


Ready for Live Testing Training?

Hands-on Selenium projects, real test automation frameworks and personal mentoring from Hari Krishna — 16+ years of industry experience and Silicon India Award Winner.
