Software Testing
• What is Software Testing?
• Software testing is the systematic process of evaluating a software application to ensure it aligns with
specified requirements and operates without defects.
• It includes a series of planned activities to validate the software’s functionality, performance, security,
and usability.
• Testing involves controlled conditions to simulate real-world scenarios and uncover bugs, errors, or
inconsistencies.
• Key technical aspects include:
• Test Planning: Creating a comprehensive test plan that outlines objectives, strategies, resources, and
schedules.
• Test Design: Developing test cases and scenarios based on requirements and use cases.
• Test Execution: Running the software against test cases using manual or automated approaches.
• Defect Tracking: Documenting and managing identified defects for resolution.
• Test Automation: Utilizing tools like Selenium or JUnit to streamline repetitive test cases and increase
coverage.
• Metrics and Reporting: Gathering data on test coverage, defect density, and test execution to assess testing progress and product quality.
GOALS
• The software testing process has two distinct goals:
• 1. To demonstrate to the developer and the customer that the software meets
its requirements.
• For custom software, this means that there should be at least one test for every
requirement in the user and system requirements documents.
• For generic software products, it means that there should be tests for all of the system
features that will be incorporated in the product release.
• 2. To discover faults or defects in the software where the behaviour of the
software is incorrect, undesirable or does not conform to its specification.
• Defect testing is concerned with rooting out all kinds of undesirable system behaviour, such
as system crashes, unwanted interactions with other systems, incorrect computations and
data corruption.
• The first goal leads to validation testing, where you expect the system to perform
correctly using a given set of test cases that reflect the system’s expected use.
• The second goal leads to defect testing, where the test cases are designed to
expose defects.
• For validation testing, a successful test is one where the system performs correctly.
• For defect testing, a successful test is one that exposes a defect that causes the system to
perform incorrectly.
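• The distinction can be made concrete with two pytest tests of the same purely illustrative divide function (a sketch, not a prescribed example): the first is a validation test exercising expected use; the second is a defect test whose "success" is exposing undesirable behaviour.

import pytest

# Hypothetical function under test, used only to illustrate the two goals.
def divide(a, b):
    return a / b

def test_validation_expected_use():
    # Validation testing: the system performs correctly for expected input.
    assert divide(10, 2) == 5

def test_defect_unexpected_input():
    # Defect testing: this test "succeeds" by exposing undesirable behaviour,
    # here an unhandled ZeroDivisionError for a zero divisor.
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)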
• Testing cannot demonstrate that the software is free of defects or that it will
behave as specified in every circumstance.
• It is always possible that a test that you have overlooked could discover further
problems with the system.
• As Edsger Dijkstra, a leading early figure in the development of software engineering, eloquently stated (Dijkstra et al., 1972),
• ‘Testing can only show the presence of errors, not their absence.’
• Overall, therefore, the goal of software testing is to convince system developers
and customers that the software is good enough for operational use.
• Testing is a process intended to build confidence in the software.
• A general model of the testing process is shown in Figure 23.2. Test cases are
specifications of the inputs to the test and the expected output from the system
plus a statement of what is being tested.
• Test data are the inputs that have been devised to test the system. Test data can sometimes be generated automatically, but automatic test case generation is impossible: the expected output of the tests can only be predicted by people who understand what the system should do.
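• As an illustration of this model, a test case can be recorded as a small structure holding the test data (inputs), the expected output, and a statement of what is being tested. The minimal Python sketch below uses invented names throughout:

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCase:
    purpose: str      # a statement of what is being tested
    inputs: dict      # the devised test data
    expected: Any     # the expected output, predicted by a human

def run_case(case: TestCase, system: Callable) -> bool:
    # Apply the system under test to the inputs and compare with the expectation.
    return system(**case.inputs) == case.expected

# Illustrative system under test.
def add(a: int, b: int) -> int:
    return a + b

assert run_case(TestCase("addition of two integers", {"a": 2, "b": 3}, 5), add)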
Why Do We Test Software?
1.Quality Assurance: To ensure the software functions as intended.
2.Reliability: To build confidence in the software's performance and stability.
3.Cost Reduction: To identify defects early, reducing the cost of fixing them later.
4.Customer Satisfaction: To deliver a product that meets or exceeds user
expectations.
How Do We Test Software?
• Software testing can be performed using various methods and tools, categorized
broadly as:
1.Manual Testing: Test cases are executed manually without automation tools.
2.Automated Testing: Uses scripts and tools to execute test cases automatically.
3.Static Testing: Involves reviewing code, requirements, or documentation without
executing the program.
4.Dynamic Testing: Involves executing the software and observing its behavior
• Who Performs Software Testing?
• Developers: Conduct unit tests during development.
• Testers/Quality Assurance (QA) Engineers: Perform thorough testing using
various methods.
• End Users: Participate in beta testing to provide feedback.
• Outcomes of Software Testing
1.Defect Identification: Revealing errors or vulnerabilities.
2.Improved Software Quality: Ensuring the software meets user needs.
3.Documentation: Providing reports on test cases, execution, and results.
Technical Aspects of Software Testing
• Types of Testing
1.Functional Testing
1. Verifies the software functions according to requirements.
2. Example: Testing a login feature with valid and invalid credentials.
2.Non-functional Testing
1. Examines non-functional aspects like performance, scalability, and usability.
2. Example: Load testing to check how many users the system can handle simultaneously.
3.Black-box Testing
1. Focuses on testing software functionality without knowing its internal structure.
2. Example: Inputting values into a calculator app and verifying the results.
4.White-box Testing
1. Involves testing the internal logic and structure of the code.
2. Example: Testing individual functions within a module using unit tests.
Black-box Testing
• Black-box testing is a method where the tester evaluates the functionality of an application without
any knowledge of its internal structure, design, or implementation details. This approach treats the
software as a "black box," focusing solely on inputs, expected outputs, and observable behaviors to
validate the system’s conformance to requirements.
• Expanded Technical Aspects:
• Input Validation: Ensures the application correctly handles various types of input data, including
valid, invalid, boundary, and extreme values. This involves rigorous testing to confirm that input
fields, APIs, and data pipelines accept and reject data appropriately. For instance:
• Valid Data: Entering correctly formatted email addresses in a registration form should proceed
without errors.
• Invalid Data: Submitting malformed email addresses (e.g., "user@@domain.com") should trigger
error messages or validation alerts.
• Boundary Values: Testing edge conditions, such as entering a string with exactly the maximum
allowed length (e.g., 255 characters for a username field) and one exceeding it to verify proper
handling.
• Extreme Values: Inputting unusually large numbers or special characters to assess system robustness,
such as entering "9999999999" in a phone number field to check overflow or parsing errors.
• Such validations are critical to prevent issues like SQL injection, buffer overflows,
or application crashes due to unhandled input errors.
• Automated testing tools like Postman for API validation and Selenium for UI field
testing are commonly used to streamline this process.
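• As a sketch of such input validation in practice, the pytest example below exercises valid, invalid, boundary, and extreme values against a hypothetical is_valid_email checker; the validator and its 255-character limit are assumed for illustration, not taken from any standard.

import re
import pytest

# Hypothetical validator, used only for illustration; the 255-character
# limit is an assumed business rule, not an email standard.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    return len(address) <= 255 and bool(EMAIL_RE.match(address))

@pytest.mark.parametrize("address,expected", [
    ("user@domain.com", True),            # valid data
    ("user@@domain.com", False),          # invalid data (malformed)
    ("a@b." + "c" * 251, True),           # boundary: exactly 255 characters
    ("a@b." + "c" * 252, False),          # boundary: one character over
    ("9" * 9999 + "@x.com", False),       # extreme value: absurdly long input
])
def test_email_validation(address, expected):
    assert is_valid_email(address) == expected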
• Functional Verification: This process ensures that each feature of the application
performs precisely as defined by the requirements and functional specifications.
• It involves the systematic testing of various functionalities, such as input handling,
workflows, and expected output generation, without examining the underlying
code logic.
• Technical Explanation:
• Functional verification leverages predefined test cases, often derived from user stories or requirement documentation, to validate that the software responds correctly to inputs and produces the intended outcomes. It emphasizes:
• Scenario Testing: Evaluating the behavior of features in real-world scenarios.
• Test Data Usage: Providing specific inputs to verify output accuracy and error handling.
• Error State Analysis: Ensuring the system handles invalid operations gracefully without crashes.
• Examples:
1. Login Feature: Testing login functionality by inputting a correct username-password combination (valid case) and incorrect combinations (invalid cases) to confirm proper authentication and error messaging.
2. E-commerce Checkout: Verifying that adding items to a cart, applying a discount, and proceeding with payment all complete as specified.
3. Search Feature: Checking that searching for specific terms retrieves relevant results, even when queries include special characters or incomplete words.
• Integration Testing: Tests the interactions between different components or systems to ensure proper communication and data flow.
• User Acceptance Testing (UAT): Conducted with real-world scenarios to validate the system meets user needs and expectations.
• Additional Techniques:
• Error Guessing: Based on the tester’s intuition and experience, identifying likely areas where defects may exist.
• Random Testing: Inputs are selected randomly to uncover unexpected behavior or errors not covered by formal test cases.
• Orthogonal Array Testing: A systematic, statistical way of testing pairwise interactions among variables for reduced test cases.
• Tools and Automation:
• Modern tools like Selenium, Postman, and TestComplete allow testers to automate black-box test cases for web applications, APIs, and GUI-based systems, improving efficiency and coverage.
• Detailed Examples:
1. Web Application Testing: Submitting an invalid email format in a registration form and ensuring the system prompts an error.
2. E-commerce System: Adding multiple items to a cart and verifying the total price calculation aligns with individual item costs.
3. Banking Application: Entering incorrect credentials to ensure the system prevents unauthorized access while maintaining proper logs.
• Technical Aspects:
• Test Basis: Requirements documents, functional specifications, and use cases.
• Techniques:
• Equivalence Partitioning: Dividing input data into valid and invalid partitions to test representative values. For example, testing a shopping cart feature by partitioning quantity inputs into valid ranges (1-100), invalid low (0 or negative), and invalid high (above 1000).
• Boundary Value Analysis: Testing at the edges of input ranges to uncover edge-case defects. For example, if an age input field accepts values from 18 to 60, testing at 17, 18, 60, and 61 to ensure proper handling.
• Decision Table Testing: Creating a table of inputs and their corresponding outputs to test combinations. For example, testing a loan approval system with different combinations of credit score, income level, and loan amount to verify all possible decision outcomes.
• State Transition Testing: Evaluating system behavior when transitioning between different states. For instance, testing a vending machine’s behavior as it transitions from idle to accepting coins, dispensing products, and returning to idle.
• These techniques ensure comprehensive test coverage, identifying defects in
specific conditions or workflows that might otherwise go unnoticed.
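• A brief pytest sketch of equivalence partitioning and boundary value analysis, using the 1-100 quantity rule from the shopping-cart example above; the accept_quantity function is invented for illustration.

import pytest

# Hypothetical rule: a cart accepts quantities from 1 to 100 inclusive.
def accept_quantity(qty: int) -> bool:
    return 1 <= qty <= 100

# Equivalence partitioning: one representative value per partition.
@pytest.mark.parametrize("qty,expected", [
    (50, True),    # valid partition
    (-5, False),   # invalid low partition
    (500, False),  # invalid high partition
])
def test_quantity_partitions(qty, expected):
    assert accept_quantity(qty) == expected

# Boundary value analysis: values at and just beyond each edge.
@pytest.mark.parametrize("qty,expected", [
    (0, False), (1, True), (100, True), (101, False),
])
def test_quantity_boundaries(qty, expected):
    assert accept_quantity(qty) == expected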
• Tools: QTP and Selenium can be used to automate black-box testing.
• Examples:
• Inputting values into a calculator app and verifying the results.
• Testing a web form with valid and invalid email addresses to ensure proper
validation.
White-box Testing
• White-box testing (also known as clear-box or glass-box testing) is a detailed method where the internal logic, code structure, and design are fully accessible to the tester. This type of testing ensures correctness and efficiency in internal processes, code paths, algorithms, and data handling. By analyzing the actual code, testers can:
• Identify untested paths or unreachable code blocks (e.g., dead code).
• Optimize algorithms by detecting inefficiencies in loops or logic constructs.
• Validate error-handling mechanisms, ensuring proper responses to invalid or unexpected inputs.
• Confirm the accuracy of critical computations, such as sorting algorithms or financial calculations.
• Detailed Examples:
1. Control Flow Testing: In a login function, testing all username and password validations ensures each logic path is correct, including cases for invalid inputs and empty fields.
2. Unit Testing: For a function calculating loan interest, tests can verify edge cases like zero principal amount, extreme interest rates, or maximum allowable loan durations.
3. Code Coverage Analysis: Using tools like JaCoCo or Cobertura to measure statement, branch, and path coverage metrics, testers ensure all code segments are executed during testing.
4. Mutation Testing: Small changes are introduced to the source code to verify if existing test cases catch these faults, strengthening the test suite’s reliability.
• White-box testing leverages tools such as JUnit for Java or PyTest for Python to automate and enhance test accuracy, providing deeper insights into code quality and robustness.
• Technical Aspects:
• Test Basis: White-box testing begins with the foundational elements of the application, including the source code, design documents, or architectural specifications. These resources provide insight into the logical flow, data structures, and algorithms implemented in the software. A detailed understanding of the codebase allows testers to:
• Identify specific code paths for execution.
• Evaluate complex logic in conditional statements and loops.
• Pinpoint integration points and module interactions for thorough analysis.
• For example, using the architectural specification of a payment gateway, testers can trace how payment requests propagate through multiple layers of the system, ensuring accurate transaction processing and error handling.
• Techniques:
• Code Coverage Analysis: Measuring the extent of code coverage.
• Control Flow Testing: Analyzing the flow of control through the application’s code.
• Data Flow Testing: Verifying the proper use of data variables and checking variable initialization and usage.
• Unit Testing: Testing individual functions or methods in isolation.
• Mutation Testing: Introducing small changes (mutations) in code to verify the test’s ability to detect errors.
• Tools: Common tools include JUnit for Java, NUnit for .NET, and PyTest for Python.
• Examples:
• Testing loops and conditional statements within a sorting algorithm.
• Verifying database queries to ensure proper data retrieval and manipulation.
• By leveraging both black-box and white-box testing approaches, a comprehensive testing strategy can be achieved, addressing functional correctness and code integrity simultaneously.
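• A minimal white-box sketch in Python: the hypothetical loan_interest function below has three paths (error, zero principal, normal), and the tests are written so that every branch, including the error path, is executed. With a tool such as coverage.py, running coverage run -m pytest followed by coverage report would show the coverage these tests achieve.

import pytest

# Hypothetical function used only to illustrate branch-complete testing.
def loan_interest(principal: float, rate: float, years: int) -> float:
    if principal < 0 or rate < 0 or years < 0:
        raise ValueError("inputs must be non-negative")
    if principal == 0:
        return 0.0                       # edge case: zero principal earns nothing
    return principal * rate * years      # simple (non-compound) interest

def test_normal_path():
    assert loan_interest(1000.0, 0.05, 2) == pytest.approx(100.0)

def test_zero_principal_branch():
    assert loan_interest(0.0, 0.05, 2) == 0.0

def test_error_branch():
    with pytest.raises(ValueError):
        loan_interest(-1.0, 0.05, 2)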
• Black-box Testing: Validates the functionality of a system by focusing on inputs
and expected outputs without knowledge of the internal code or structure.
• White-box Testing: Ensures the correctness and efficiency of the internal code,
logic, and structure by testing paths, conditions, and algorithms.
Comparison: Black-box vs. White-box Testing
Aspect | Black-box Testing | White-box Testing
Knowledge Required | No knowledge of internal code structure. | Requires understanding of the code.
Focus Area | Functional behavior of the software. | Code logic, paths, and branches.
Testing Level | System, acceptance, or functional testing. | Unit or integration testing.
Strengths | Identifies missing functionality and UI issues. | Ensures code correctness and optimization.
Weaknesses | May miss code-level issues. | May overlook user experience problems.
Can Software Be Complete Without Thorough Testing?
• Thorough testing is the backbone of reliable software delivery, providing a
structured approach to validate both functional and non-functional requirements.
• This critical phase employs techniques such as unit testing, integration testing,
and system testing to ensure comprehensive coverage of application behavior.
• Through methods like code coverage analysis, performance benchmarking, and
security audits, testing identifies defects, vulnerabilities, and performance
bottlenecks before software reaches end users.
• Neglecting this essential phase can compromise the product’s integrity,
functionality, and market reputation, while introducing significant technical and
business risks:
• Undetected Defects: Functional bugs or logical errors that remain hidden due to
insufficient testing can severely impact the core functionality and reliability of an
application.
• These defects may arise from incomplete requirement coverage, improper
handling of edge cases, or failure to account for integration complexities.
• For example, a financial application with a calculation bug due to an unhandled
floating-point precision error could produce inaccurate results during tax
computations, potentially causing legal and financial complications for users. Such
defects often propagate to other system components, amplifying their negative
impact.
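• The floating-point hazard described above is easy to reproduce; the short sketch below contrasts binary floats with Python's decimal module, the usual remedy for monetary arithmetic:

from decimal import Decimal

# Binary floating point accumulates representation error.
subtotal = 0.1 + 0.2
print(subtotal)                         # 0.30000000000000004, not 0.3
print(subtotal == 0.3)                  # False

# Exact decimal arithmetic avoids the surprise for monetary values.
subtotal_dec = Decimal("0.1") + Decimal("0.2")
print(subtotal_dec)                     # 0.3
print(subtotal_dec == Decimal("0.3"))   # True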
• Security Vulnerabilities: Testing ensures vulnerabilities like SQL injection, cross-
site scripting (XSS), or insecure APIs are identified and mitigated.
• Without rigorous security testing, attackers could exploit these flaws,
compromising sensitive data or critical systems.
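• One concrete form such security testing can take is sketched below: a classic injection payload is fed into a hypothetical login routine backed by an in-memory SQLite table, and the test asserts it is rejected. The parameterized query is what neutralizes the attack; all names here are invented for illustration.

import sqlite3
import pytest

# Hypothetical login check using a parameterized query, which neutralizes injection.
def login(conn, username: str, password: str) -> bool:
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (username, password),
    ).fetchone()
    return row is not None

@pytest.fixture
def conn():
    c = sqlite3.connect(":memory:")
    c.execute("CREATE TABLE users (name TEXT, password TEXT)")
    c.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    yield c
    c.close()

def test_valid_credentials_accepted(conn):
    assert login(conn, "alice", "s3cret") is True

def test_injection_attempt_is_rejected(conn):
    # The classic "' OR '1'='1" payload must not grant access.
    assert login(conn, "alice", "' OR '1'='1") is False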
• System Instability: Load or stress testing evaluates the application’s performance
under heavy and variable usage scenarios by simulating user loads, network
conditions, and concurrent operations.
• This includes testing metrics such as throughput, latency, and response times to
identify bottlenecks and optimize system performance.
• Applications released without performance testing might encounter issues like
resource exhaustion, memory leaks, or thread contention under high traffic,
leading to crashes or unresponsiveness.
• For example, an e-commerce platform during a major sale event may fail to
handle thousands of simultaneous checkouts, resulting in lost revenue and
customer dissatisfaction.
• Proper load testing tools, such as Apache JMeter or LoadRunner, provide
actionable insights into system limits and behavior under stress, enabling
proactive improvements.
• Compliance Failures: Many industries require compliance with standards (e.g.,
ISO 27001, GDPR).
• Testing ensures the software adheres to such regulations. Failure to test
adequately might result in regulatory penalties.
• Reputational Damage: Defects that impact user experience can lead to negative
reviews and loss of user trust.
• For example, frequent app crashes or poor performance often result in low
ratings and poor market reception.
• Increased Maintenance Costs: Post-release defect resolution is significantly more
expensive compared to identifying and fixing issues during the development phase.
• This is due to several factors, including the need to investigate defects in a live
environment where identifying the root cause can be more complex.
• Debugging live systems often requires dedicated resources to replicate and isolate
issues, which can lead to service interruptions.
• Furthermore, patches or updates deployed to fix defects may introduce additional
risks, requiring extensive regression testing.
• For instance, a defect in a payment gateway system identified post-release may
necessitate urgent hotfixes, downtime for users, and even potential financial
compensations, making it a resource-intensive process.
• Automated testing tools and continuous integration pipelines during development
can significantly reduce these costs by catching defects earlier in the lifecycle.
• Example Analysis:
1.Thoroughly Tested Software: A banking app undergoes extensive functional testing
(validating transaction accuracy), security testing (penetration tests to identify
vulnerabilities), and load testing (simulating high user traffic). This ensures
reliability, data safety, and scalability, fostering customer trust and regulatory
compliance.
2.Insufficiently Tested Software: A gaming app released without thorough testing
might exhibit frequent crashes during gameplay due to unhandled edge cases. Poor
reviews and user dissatisfaction lead to a sharp decline in downloads and revenue.
• Conclusion:
• Thorough testing is not optional but a critical phase in the software development
lifecycle. It ensures that potential defects, vulnerabilities, and compliance issues are
proactively addressed, resulting in a stable, secure, and user-friendly product that
meets both technical and business expectations.
TYPES OF TESTING
Unit Testing
• Testing individual components or functions in isolation, such as specific classes,
methods, or functions, to verify that each performs as expected independently.
Unit tests ensure that the smallest testable parts of an application behave
correctly under a variety of conditions, including both typical and edge cases. This
helps catch bugs early in the development cycle and supports robust code
refactoring.
Key Points:
• Performed by developers.
• Automated and fast.
• Focuses on logic and output.
Real-World Example:
Testing calculateTotalPrice(cartItems) in an e-commerce app.
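A minimal pytest sketch of such a unit test, assuming a hypothetical Python analogue calculate_total_price of the function above (the cart-item shape is invented for illustration):

import pytest

# Hypothetical implementation, shown only so the test is self-contained.
def calculate_total_price(cart_items):
    return sum(item["price"] * item["qty"] for item in cart_items)

def test_typical_cart():
    cart = [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]
    assert calculate_total_price(cart) == pytest.approx(25.0)

def test_empty_cart_edge_case():
    assert calculate_total_price([]) == 0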
Integration Testing
Testing how different modules interact with each other to ensure that combined
parts of a system function together as expected. This includes checking data
communication, shared interfaces, and integrated workflows. Integration testing
is crucial for identifying issues that occur when individual units are combined,
such as incorrect data passing, broken API contracts, or improper sequencing of
actions between modules. It typically follows unit testing and can be done using
top-down, bottom-up, or a hybrid approach.
Key Points:
• Verifies data flow between components.
• Tests contracts/interfaces.
• Often automated.
Real-World Example:
• Testing the connection between the booking system and payment gateway in a travel booking application.
Analogy:
• Combining burger components to make sure they create a whole burger.
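A sketch of an integration-style test for this example, with a hypothetical FakePaymentGateway stub standing in for the real payment service so the contract between the two modules can be checked in isolation:

class FakePaymentGateway:
    """Stub standing in for the real gateway so the interface can be exercised."""
    def __init__(self):
        self.charges = []
    def charge(self, amount, card):
        self.charges.append((amount, card))
        return {"status": "approved"}

class BookingSystem:
    def __init__(self, gateway):
        self.gateway = gateway
    def book(self, price, card):
        # The contract under test: a booking must trigger exactly one charge.
        result = self.gateway.charge(price, card)
        return result["status"] == "approved"

def test_booking_charges_payment_gateway():
    gateway = FakePaymentGateway()
    booking = BookingSystem(gateway)
    assert booking.book(199.0, "4111-1111-1111-1111")
    assert gateway.charges == [(199.0, "4111-1111-1111-1111")]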
System Testing
Testing the entire system as a whole against the specifications to verify that all components
work together in the intended environment. System testing validates both the functional
and non-functional requirements of the software such as usability, performance, security,
and compatibility. It simulates real-world use by treating the software as a complete black
box and executing test cases based on user requirements and business logic. This phase
ensures that the system behaves correctly under expected and unexpected user scenarios
before deployment to production.
Key Points:
• End-to-end testing.
• Done by QA team.
• Includes functional and non-functional tests.
Real-World Example:
Testing a banking app: registration, login, fund transfer, etc.
Analogy:
• Test-driving a complete car on the road.
User Acceptance Testing (UAT)
Final testing conducted by the client or end-users to ensure the software meets their
business requirements and is ready for deployment. UAT typically occurs in a staging
environment that closely mimics the production setup. It focuses on validating the overall
user experience, correctness of functionality from a business perspective, and confirming
that the software delivers value to stakeholders. This type of testing is usually scenario-
based and driven by predefined acceptance criteria or user stories.
Key Points:
• Performed in staging environment.
• Confirms readiness for production.
• Based on user stories.
Real-World Example:
Store managers test a retail inventory system for stock tracking and report generation.
Analogy:
• The customer taking the finished car for a final test drive before agreeing to buy it.