
Unit-3

Fundamentals of Testing

Testing is the process of evaluating a system or its component(s) with the intent to find
whether it satisfies the specified requirements or not.

Testing means executing a system in order to identify any gaps, errors, or missing requirements relative to the actual requirements.

This tutorial gives a basic understanding of software testing, its types, methods, levels, and related terminology.

Testing is a critical role in software development that requires special skills and knowledge not commonly taught to software developers, business analysts, and project managers. As a result, insufficient time and resources are often allocated to this important function, and quality suffers, as do the users of the software. Equally important is the need to measure quality quickly and efficiently, because limited resources and schedules are realities that are not going away. We need to do the best we can with what we have and still deliver high-quality, proven software.

Different Levels of Testing

The levels of software testing correspond to the different stages at which testing is performed while developing the software.

In software testing, we have four different levels of testing, as discussed below:

1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing

Prepared By: Neetika Gupta


1. Unit Testing

Unit testing is the first level of software testing, and it is used to check whether individual software modules satisfy their specified requirements.

This first level of testing analyzes each unit, or individual component, of the software application.

The primary purpose of executing unit testing is to validate that each unit component performs as designed.

A unit component is an individual function or procedure of the application; in other words, it is the smallest testable part of the software. The reason for performing unit testing is to verify the correctness of each piece of code in isolation.

Unit testing helps test engineers and developers understand the code base, enabling them to fix defect-causing code quickly. The developers implement the unit tests.

Whenever the application is ready and handed to the test engineer, he or she checks every component of each module of the application independently, one by one; this process is known as unit testing or component testing.

Why Unit testing?

o Unit testing helps testers and developers understand the code base, enabling them to fix defect-causing code quickly.
o Unit testing helps with documentation.
o Unit testing fixes defects very early in the development phase, so fewer defects surface at later testing levels.
o It helps with code reusability, since code and its test cases can be migrated together.

Unit Testing Techniques:

Unit testing uses all white box testing techniques, since it works directly with the code of the software application:

o Data Flow Testing
o Control Flow Testing
o Branch Coverage Testing
o Statement Coverage Testing
o Decision Coverage Testing
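A small sketch of how statement coverage and branch coverage differ; the `absolute` function here is a hypothetical example:

```python
def absolute(n):
    """Hypothetical unit: return the absolute value of an integer."""
    if n < 0:
        n = -n
    return n

# Statement coverage: absolute(-3) alone executes every statement,
# yet it never takes the FALSE branch of the 'if'.
assert absolute(-3) == 3

# Branch coverage: adding a non-negative input exercises both outcomes
# of the decision, so every branch is now covered as well.
assert absolute(3) == 3
```

This is why branch and decision coverage are stronger criteria than statement coverage alone.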
Advantages
o Unit testing uses a modular approach, so any part can be tested without waiting for the testing of other parts to finish.
o The developing team focuses on the functionality each unit provides, and the unit test suites document how that functionality should behave, helping others understand the unit's API.
o Unit testing allows the developer to refactor code later and verify that the module still works without any defect.

Disadvantages
o It cannot identify integration or broader, system-level errors, as it works on units of code in isolation.
o In unit testing, evaluating every execution path is not possible, so unit testing cannot catch each and every error in a program.
o It is best used in conjunction with other testing activities.

2. Integration testing

Integration testing is the second level of the software testing process and comes after unit testing. In this testing, units or individual components of the software are tested as a group. The focus of the integration testing level is to expose defects that arise when integrated components or units interact.

Unit testing uses modules for testing purposes, and these modules are combined and tested together in integration testing. The software is developed from a number of modules that are coded by different programmers. The goal of integration testing is to check the correctness of communication among all the modules.

Prepared By: Neetika Gupta


Once all the components or modules are working independently, we check the data flow between the dependent modules; this is known as integration testing.
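The idea can be sketched as follows; the two modules (`parse_order` and `total_price`) are hypothetical stand-ins for units that have each passed unit testing on their own:

```python
# Two hypothetical units, each assumed to have passed unit testing separately.
def parse_order(raw):
    """Unit A: parse 'item:quantity' text into a (str, int) pair."""
    item, qty = raw.split(":")
    return item.strip(), int(qty)

def total_price(item, qty, price_list):
    """Unit B: look up the unit price and compute the line total."""
    return price_list[item] * qty

# Integration test: exercises the data flow FROM parse_order INTO total_price,
# checking that the two modules communicate correctly when combined.
def test_order_pipeline():
    prices = {"pen": 2, "book": 10}
    item, qty = parse_order(" book :3")
    assert total_price(item, qty, prices) == 30

test_order_pipeline()
```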

Reason Behind Integration Testing

Although all modules of the software application have already been tested in unit testing, errors can still exist, for the following reasons:

1. Each module is designed by an individual developer whose programming logic may differ from that of the developers of other modules, so integration testing is essential to confirm that the software modules work together.
2. To check that the interaction of the software modules with the database is error-free.
3. Requirements can change or be enhanced during module development. These new requirements may not have been tested at the unit testing level, so integration testing becomes mandatory.
4. Incompatibility between modules of the software could create errors.
5. To test the hardware's compatibility with the software.
6. If exception handling between modules is inadequate, it can create bugs.

Integration Testing Techniques

Any testing technique (black-box, white-box, or grey-box) can be used for integration testing; some are listed below:

Black Box Testing

o State Transition technique
o Decision Table Technique
o Boundary Value Analysis
o Equivalence Partitioning
o Error Guessing

White Box Testing

o Data Flow Testing
o Control Flow Testing
o Branch Coverage Testing
o Decision Coverage Testing
o Statement Coverage Testing

Types of Integration Testing

Top-Down Approach

In the top-down testing strategy, higher-level modules are tested first and then combined with lower-level modules, until the testing of all the modules is complete. Major design flaws can be detected and fixed early because critical modules are tested first. In this method, we add the modules incrementally, one by one, and check the data flow in the same order.

In the top-down approach, each module we add is a child of the previously integrated one; for example, Child C is a child of Child B, and so on.
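A minimal sketch of the top-down idea in Python, using a hypothetical stub in place of a lower-level payment module that is not yet ready:

```python
# Top-down integration sketch: the top-level module is tested first, with a
# stub standing in for a lower-level module that is not yet implemented.
def payment_gateway_stub(amount):
    """Stub: replaces the real lower-level payment module with a canned answer."""
    return {"status": "approved", "amount": amount}

def checkout(cart, pay=payment_gateway_stub):
    """Top-level module under test; 'pay' will later be the real module."""
    total = sum(cart.values())
    receipt = pay(total)
    return receipt["status"], total

# The top module's logic, and its call INTO the lower layer, are verified
# before the real payment module even exists.
status, total = checkout({"pen": 2, "book": 10})
assert status == "approved" and total == 12
```

When the real payment module is ready, it replaces the stub and the same test is re-run.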

Advantages:

o An early prototype is possible.
o Critical modules are tested first, so major design flaws can be found and fixed early.

Disadvantages:

o Due to the high number of stubs required, it gets quite complicated.
o Lower-level modules are tested inadequately.

Bottom-Up Method

In the bottom-up testing strategy, lower-level modules are tested first and then combined with higher-level modules, until the testing of all the modules is complete. The top-level critical modules are tested last, so defects in them are found late. In other words, we add the modules from the bottom to the top and check the data flow in the same order.

In the bottom-up method, each module we add is the parent of the previously integrated one.
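A minimal sketch of the bottom-up idea, using a hypothetical driver that plays the role of the missing higher-level module:

```python
# Bottom-up integration sketch: a low-level module is tested first, using a
# driver in place of the higher-level module that will eventually call it.
def tax_module(amount, rate=0.18):
    """Hypothetical low-level module under test."""
    return round(amount * rate, 2)

def driver():
    """Driver: temporary code that plays the role of the missing parent
    module, feeding inputs to tax_module and checking its outputs."""
    assert tax_module(100) == 18.0
    assert tax_module(0) == 0.0
    return "tax_module ok"

print(driver())
```

Once the real parent module exists, it replaces the driver and calls `tax_module` directly.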



Advantages

o Identification of defects is easy.
o There is no need to wait for the development of all the modules, which saves time.

Disadvantages

o Critical modules are tested last, so defects in them are discovered late.
o There is no possibility of an early prototype.

Hybrid/Mixed Testing Method

In this approach, the top-down and bottom-up approaches are combined for testing (it is also known as sandwich testing). In this process, top-level modules are tested with lower-level modules and lower-level modules are tested with high-level modules simultaneously. There is less possibility of defects slipping through, because each module interface is tested.

Advantages

o The hybrid method provides features of both the bottom-up and top-down methods.
o It is the most time-saving method.
o It provides complete testing of all modules.



Disadvantages

o This method needs a higher level of concentration, as the process is carried out in both directions simultaneously.
o It is a complicated method.

Big Bang Method

In this approach, testing is done by integrating all modules at once. It is convenient for small software systems; if used for large software systems, identification of defects is difficult. Since this testing can be done only after completion of all modules, the testing team has less time for execution, so internally linked interfaces and high-risk critical modules can easily be missed.

Advantages:

o It is convenient for small size software systems.

Disadvantages:

o Identification of defects is difficult, because when an error appears it is hard to trace it back to its source.
o Small modules are missed easily.
o The time provided for testing is very short.
o Some of the interfaces may go untested.



3. System Testing

System testing covers the testing of a fully integrated software system. Generally, a computer system is made by integration (any one piece of software is only a single element of a computer system). The software is developed in units and then interfaced with other software and hardware to create a complete computer system. In other words, a computer system consists of a group of software components performing various tasks, but software alone cannot perform those tasks; for that, the software must be interfaced with compatible hardware. System testing is a series of different types of tests whose purpose is to exercise and examine the full working of the integrated software and computer system against the requirements.

Checking the end-to-end flow of an application, as a user would, is known as system testing. In this, we navigate through all the necessary modules of the application, check that the end features and the end business flows work correctly, and test the product as a whole system.

It is end-to-end testing, where the testing environment is kept similar to the production environment.

Why is System Testing Important?


o System testing provides strong assurance of system performance, as it covers the end-to-end functioning of the system.
o It includes testing of the system's software architecture and business requirements.
o It helps mitigate live issues and bugs that would otherwise surface after production.
o System testing can feed the same data into both the existing system and the new system and compare the differences in functionality between the added and existing functions, so the user can understand the benefits of the newly added functions.

Types of System Testing


There are essentially three main kinds of system testing:
• Alpha Testing. Alpha testing refers to the system testing carried out by the test team within the developing organization.
• Beta Testing. Beta testing is the system testing performed by a select group of friendly customers.
• Acceptance Testing. Acceptance testing is the system testing performed by the customer to determine whether to accept delivery of the system.

Performance testing

Performance testing is carried out to check whether the system meets the non-functional requirements identified in the SRS document. There are several types of performance testing; nine of them are discussed below. The types of performance testing to be carried out on a system depend on the different non-functional requirements of the system documented in the SRS document. All performance tests can be considered black-box tests.
Regression Testing

Regression testing is performed under system testing to confirm whether modifications in one part of the system have introduced defects elsewhere. It makes sure that changes made during the development process have not introduced new defects, and it also gives assurance that old defects will not reappear as new software is added over time.

Load Testing

Load testing is performed under system testing to check whether the system can work under realistic loads.

Load testing checks the performance of an application by applying a load that is less than or equal to the desired load.

For example: if 1000 users is the desired load given by the customer, and 3 per second is the goal we want to achieve, the load test applies up to 1000 users and measures whether that goal is met.
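A minimal load-test sketch under assumed numbers; the handler, the user count, and the goal are all hypothetical, and a real load test would target the deployed application rather than a local function:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Hypothetical request handler standing in for the real application."""
    time.sleep(0.001)  # pretend each request takes about 1 ms of work
    return "ok"

def run_load_test(users=100):
    """Apply the desired load of 'users' concurrent requests and measure
    how many succeeded and the achieved throughput (requests per second)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(handle_request, range(users)))
    elapsed = time.perf_counter() - start
    return results.count("ok"), users / elapsed

ok, throughput = run_load_test(100)
# The test passes if every request succeeded at a load at or below the
# desired load; the measured throughput is compared against the agreed goal.
assert ok == 100
```

Dedicated tools (for example JMeter or Locust) do the same thing at much larger scale and with richer reporting.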

Functional Testing

Functional testing of a system is performed to find whether any function is missing from the system. The tester makes a list of vital functions that should be present in the system; functions found missing can then be added, which improves the quality of the system.

Recovery Testing

Recovery testing of a system is performed under system testing to confirm the reliability, trustworthiness, and accountability of the system, all of which rest on its recovery capabilities. The system should be able to recover successfully from all possible crashes.

In this testing, we test the application to check how well it recovers from crashes or disasters.

Migration Testing

Migration testing is performed to ensure that, if the system needs to be moved to new infrastructure, it can be migrated without any issue.

Usability Testing

The purpose of this testing is to make sure that the system is easy for users to work with and meets the objectives it is supposed to achieve.
Software and Hardware Testing

This testing of the system intends to check hardware and software compatibility. The hardware configuration must be compatible with the software for it to run without any issue. Compatibility provides flexibility through smooth interaction between hardware and software.

Stress Testing

Stress testing checks the behavior of an application by applying a load greater than the desired load.

For example: if we take the load-testing example above and increase the desired load from 1000 to 1100 users, with a goal of 4 per second, we are applying a load greater (100 users higher) than the actual desired load; observing whether the application still behaves acceptably under it is stress testing.

Scalability Testing

Checking the performance of an application while increasing or decreasing the load in particular scales (number of users) is known as scalability testing. Both upward and downward scalability testing fall under scalability testing.

Stability Testing

Checking the performance of an application by applying a load for a particular duration of time is known as stability testing.

4. Acceptance Testing

Acceptance testing is formal testing based on user requirements and functional processing. It determines whether the software conforms to the specified requirements and user requirements. It is conducted as a kind of black-box testing, in which the required number of users are involved in testing the acceptance level of the system. It is the fourth and last level of software testing.

User acceptance testing (UAT) is testing done by the customer before accepting the final product. Generally, UAT is done by the customer (a domain expert) for their own satisfaction, to check whether the application works according to the given business scenarios and real-time scenarios.

In UAT, we concentrate only on those features and scenarios that are regularly used by the customer, the most common user scenarios for the business, or the scenarios used daily by the end user or customer.



Reason behind Acceptance Testing

Once the software has undergone unit testing, integration testing, and system testing, acceptance testing may seem redundant, but it is required for the following reasons:

o During the development of a project, changes in requirements may not be communicated effectively to the development team.
o Developers build functions based on their own understanding of the requirements document and may not grasp the actual requirements of the client.
o There may be some minor errors that can be identified only when the system is used by the end user in a real scenario; acceptance testing is essential to find these minor errors.

Advantages of Acceptance Testing


o It increases client satisfaction, as clients test the application themselves.
o The quality criteria of the software are defined at an early phase, so the tester has already decided on the testing points. This gives a clear view of the testing strategy.
o The information gathered through acceptance testing is used by stakeholders to better understand the requirements of the target audience.
o It improves requirement definition, as the client tests the requirement definition against their own needs.

Disadvantages of Acceptance Testing

According to the testing plan, the customer has to write the requirements in their own words, by themselves, but:

 Customers are often unwilling to do so, which defeats the whole point of acceptance testing.
 If the test cases are written by someone else, the customer may not understand them, so the tester has to perform the inspections alone.

Alpha Testing

Alpha testing is conducted within the organization and performed by a representative group of end users at the developer's site, and sometimes by an independent team of testers.

Alpha testing is simulated or real operational testing at an in-house site. It comes after unit testing, integration testing, etc., and is performed after all other testing is executed.

It can be white-box or black-box testing, depending on the requirements; a particular lab environment and a simulation of the actual environment are required for this testing.

Reasons to perform alpha testing are:

o It refines the software product by finding and rectifying bugs that were not discovered through previous tests.
o Alpha testing allows the team to test the software in a real-world-like environment.
o One of the reasons to do alpha testing is to ensure the success of the software product.
o Alpha testing validates the quality, functionality, and effectiveness of the software before it is released into the real world.

Features of Alpha Testing


o Alpha testing is a type of acceptance testing.
o Alpha testing happens at the stage where the software product is nearing completion.
o Alpha testing takes place in labs, where a specific and controlled environment is provided.
o Alpha testing is in-house testing, performed by the internal developers and testers within the organization.
o There is no involvement of the public.
o Alpha testing helps to gain confidence in the user acceptance of the software product.
o Alpha testing can be carried out with the help of both black-box and white-box techniques.
o Alpha testing ensures the best possible quality of the software before releasing it to the market or to the client for beta testing.
o Developers perform alpha testing at the developer's site; this enables them to record errors easily and resolve found bugs quickly.
o Alpha testing is done after unit testing, integration testing, and system testing, but before beta testing.
o Alpha testing is used for testing software applications, products, and projects.

Advantages of alpha testing are:

o One of the benefits of alpha testing is that it reduces the delivery time of the project.
o It provides a complete test plan and test cases.
o It frees team members up for other projects.
o Every piece of feedback helps to improve software quality.
o It provides a better observation of the software's reliability and accountability.

Disadvantages of alpha testing are:

o Alpha testing does not involve in-depth testing of the software.
o The difference between the data testers use to test the software and the data customers would use from their perspective may result in discrepancies in how the software functions.
o The lab environment is used to simulate the real environment, but the lab cannot reproduce all the conditions, factors, and circumstances of the real environment.

Beta Testing

Beta testing is a type of user acceptance testing, and one of the most crucial tests performed before the release of the software. Beta testing is a type of field test, performed at the end of the software testing life cycle, and can be considered external user acceptance testing. Real users perform this testing, after the alpha testing has been executed. In beta testing, the new version is released to a limited audience to check its accessibility, usability, functionality, and more.

o Beta testing is the last phase of testing, and it is carried out at the client's or customer's site.

Features Of Beta Testing

o Beta testing is used in a real environment at the user's site; it helps reveal the actual quality level of the product.
o Testing is performed by the client, stakeholders, and end users.
o Beta testing is always done after alpha testing and before releasing the product into the market.
o Beta testing is black-box testing.
o Beta testing is performed in the absence of the testers and the presence of real users.
o Beta testing is performed after alpha testing and before the release of the final product.
o Beta testing is generally done for software products such as utilities, operating systems, applications, etc.

Advantages Of Beta Testing


1. Beta testing focuses on the customer's satisfaction.
2. It helps to reduce the risk of product failure through user validation.
3. Beta testing helps to get direct feedback from users.
4. It helps to detect defects and issues in the system that were overlooked or went undetected by the team of software testers.
5. Beta testing lets users install and test the software and send feedback about it.

Disadvantages of Beta Testing


1. In this type of testing, the software engineer has no control over the testing process, as users in the real-world environment perform it.
2. This testing can be time-consuming and can delay the final release of the product.
3. Beta testing does not test the functionality of the software in depth, as the software is still in development.
4. It can be a waste of time and money to act on feedback from users who do not use the software properly.

Difference between Alpha and Beta Testing:

The difference between Alpha and Beta Testing is as follows:

o Alpha testing involves both white-box and black-box testing, whereas beta testing commonly uses black-box testing.
o Alpha testing is performed by testers who are usually internal employees of the organization, whereas beta testing is performed by clients who are not part of the organization.
o Alpha testing is performed at the developer's site, whereas beta testing is performed at the end user's site.
o Reliability and security testing are not checked in alpha testing, whereas reliability, security, and robustness are checked during beta testing.
o Alpha testing ensures the quality of the product before it is forwarded to beta testing; beta testing also concentrates on quality, but it collects user input on the product and ensures that the product is ready for real-time users.
o Alpha testing requires a testing environment or a lab, whereas beta testing does not.
o Alpha testing may require a long execution cycle, whereas beta testing requires only a few weeks of execution.
o Developers can immediately address critical issues or fixes found in alpha testing, whereas most of the issues or feedback collected from beta testing will be implemented in future versions of the product.
o Multiple test cycles are organized in alpha testing, whereas only one or two test cycles occur in beta testing.

Difference between verification and validation

Verification is the process of checking that the software achieves its goal without any bugs. It ensures that the product being developed is built correctly, verifying that it fulfills the requirements we have. Verification is static testing.
Verification asks: are we building the product right?

Validation is the process of checking whether the software product is up to the mark, in other words, whether the product meets the high-level (user) requirements. It checks that what we are developing is the right product, comparing the actual product with the expected one. Validation is dynamic testing.
Validation asks: are we building the right product?

The differences between verification and validation are as follows:

o Verification includes checking documents, design, code, and programs; validation includes testing and validating the actual product.
o Verification is static testing; validation is dynamic testing.
o Verification does not include execution of the code; validation includes execution of the code.
o Methods used in verification are reviews, walkthroughs, inspections, and desk-checking; methods used in validation are black-box testing, white-box testing, and non-functional testing.
o Verification checks whether the software conforms to the specifications; validation checks whether the software meets the requirements and expectations of the customer.
o Verification can find bugs in the early stages of development; validation can only find the bugs that the verification process could not.
o The target of verification is the application and software architecture and specification; the target of validation is the actual product.
o Verification is done by the quality assurance team; validation is executed on the software code with the help of the testing team.
o Verification comes before validation; validation comes after verification.
o Verification consists of checking documents and files and is performed by humans; validation consists of executing the program and is performed by the computer.

Differences between Black Box Testing and White Box Testing:


1. Black box testing is a way of testing software in which the internal structure, code, or program is hidden and nothing is known about it; white box testing is a way of testing software in which the tester has knowledge of the internal structure, code, or program.
2. Knowledge of the code's implementation is not needed for black box testing; it is necessary for white box testing.
3. Black box testing is mostly done by software testers; white box testing is mostly done by software developers.
4. No knowledge of the implementation is needed for black box testing; knowledge of the implementation is required for white box testing.
5. Black box testing can be referred to as outer or external software testing; white box testing is inner or internal software testing.
6. Black box testing is a functional test of the software; white box testing is a structural test of the software.
7. Black box testing can be initiated based on the requirement specification document; white box testing is started after a detailed design document.
8. No knowledge of programming is required for black box testing; it is mandatory for white box testing.
9. Black box testing is behavior testing of the software; white box testing is logic testing of the software.
10. Black box testing is applicable to the higher levels of software testing; white box testing is generally applicable to the lower levels.
11. Black box testing is also called closed testing; white box testing is also called clear box testing.
12. Black box testing is the least time-consuming; white box testing is the most time-consuming.
13. Black box testing is not suitable or preferred for algorithm testing; white box testing is suitable for algorithm testing.
14. Black box testing can be done by trial-and-error methods; white box testing can better test data domains along with inner or internal boundaries.
15. Example of black box testing: searching for something on Google using keywords. Example of white box testing: checking and verifying loops by supplying inputs.
16. Black-box test design techniques include decision table testing, all-pairs testing, equivalence partitioning, and error guessing; white-box test design techniques include control flow testing, data flow testing, and branch testing.
17. Types of black box testing include functional testing, non-functional testing, and regression testing; types of white box testing include path testing, loop testing, and condition testing.
18. Black box testing is less exhaustive compared to white box testing; white box testing is comparatively more exhaustive.



White box testing

White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box
Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a software
testing method in which the internal structure/ design/ implementation of the item being
tested is known to the tester. The tester chooses inputs to exercise paths through the code
and determines the appropriate outputs. Programming know-how and the implementation
knowledge is essential. White box testing is testing beyond the user interface and into the
nitty-gritty of a system.
This method is named so because, in the eyes of the tester, the software program is like a white or transparent box into which one can clearly see.
Definition

 white-box testing: Testing based on an analysis of the internal structure of the component or system.
 white-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

EXAMPLE
A tester, usually a developer as well, studies the implementation code of a certain field on a webpage, determines all legal (valid and invalid) and illegal inputs, and verifies the outputs against the expected outcomes, which are also determined by studying the implementation code.
White Box Testing is like the work of a mechanic who examines the engine to see why the
car is not moving.
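The webpage example above can be sketched in Python; the `validate_age` function and its rules are hypothetical, and the point is that the test inputs are chosen by reading the implementation rather than the specification:

```python
# White-box sketch: the tester reads the implementation of a hypothetical
# input-validation function and chooses inputs that exercise every path.
def validate_age(text):
    if not text.isdigit():      # path 1: non-numeric input
        return "error"
    age = int(text)
    if age < 18:                # path 2: numeric but under age
        return "rejected"
    return "accepted"           # path 3: numeric and of age

# One input per path, derived by studying the code above:
assert validate_age("abc") == "error"
assert validate_age("17") == "rejected"
assert validate_age("18") == "accepted"
```

Three inputs suffice here precisely because the tester can see that the code has exactly three paths.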
WHITE BOX TESTING ADVANTAGES

 Testing can be commenced at an earlier stage. One need not wait for the GUI to be
available.
 Testing is more thorough, with the possibility of covering most paths.

WHITE BOX TESTING DISADVANTAGES

 Since tests can be very complex, highly skilled resources are required, with thorough
knowledge of programming and implementation.
 Test script maintenance can be a burden if the implementation changes too
frequently.
 Since this method of testing is closely tied to the application being tested, tools to cater to every kind of implementation or platform may not be readily available.

Black box testing



Black Box Testing, also known as Behavioral Testing, is a software testing method in
which the internal structure/design/implementation of the item being tested is not known to
the tester. These tests can be functional or non-functional, though usually functional.

This method is named so because the software program, in the eyes of the tester, is like a
black box, inside which one cannot see. This method attempts to find errors in the following
categories:

 Incorrect or missing functions
 Interface errors
 Errors in data structures or external database access
 Behavior or performance errors
 Initialization and termination errors

Definition

 black box testing: Testing, either functional or non-functional, without reference to the
internal structure of the component or system.
 black box test design technique: Procedure to derive and/or select test cases based on an
analysis of the specification, either functional or non-functional, of a component or
system without reference to its internal structure.

EXAMPLE
A tester, without knowledge of the internal structures of a website, tests the web pages by
using a browser; providing inputs (clicks, keystrokes) and verifying the outputs against the
expected outcome.

BLACK BOX TESTING TECHNIQUES

Following are some techniques that can be used for designing black box tests.

 Equivalence partitioning: It is a software test design technique that involves
dividing input values into valid and invalid partitions and selecting representative
values from each partition as test data.

 Boundary Value Analysis: It is a software test design technique that involves
determination of boundaries for input values and selecting values that are at the
boundaries and just inside/ outside of the boundaries as test data.
 Cause Effect Graphing: It is a software test design technique that involves
identifying the causes (input conditions) and effects (output conditions), producing a
Cause-Effect Graph, and generating test cases accordingly.
 Error Guessing technique: Error guessing is a technique in which there is no
specific method for identifying the error. It is based on the experience of the test
analyst, where the tester uses the experience to guess the problematic areas of the
software. It is a type of black box testing technique which does not have any defined
structure to find the error.
 State Transition: In State Transition technique changes in input conditions change
the state of the Application Under Test (AUT). This testing technique allows the
tester to test the behavior of an AUT. The tester can perform this action by entering
various input conditions in a sequence. In State transition technique, the testing team
provides positive as well as negative input test values for evaluating the system
behavior.

BLACK BOX TESTING ADVANTAGES

 Tests are done from a user’s point of view and will help in exposing discrepancies in
the specifications.
 Tester need not know programming languages or how the software has been
implemented.
 Tests can be conducted by a body independent from the developers, allowing for an
objective perspective and the avoidance of developer-bias.
 Test cases can be designed as soon as the specifications are complete.

BLACK BOX TESTING DISADVANTAGES

 Only a small number of possible inputs can be tested and many program paths will
be left untested.
 Without clear specifications, which is the situation in many projects, test cases will
be difficult to design.
 Tests can be redundant if the software designer/ developer has already run a test
case.
 Ever wondered why a soothsayer closes the eyes when foretelling events? So is
almost the case in Black Box Testing.

Test Case

The test case is defined as a group of conditions under which a tester determines whether a
software application is working as per the customer's requirements or not. Test case
design includes preconditions, case name, input conditions, and expected result. A test
case is a first-level action derived from test scenarios.

It is a detailed document that contains all possible inputs (positive as well as negative)
and the navigation steps, which are used for the test execution process. Writing test cases
is a one-time effort that can be reused in the future at the time of regression testing.

A test case gives detailed information about the testing strategy, testing process,
preconditions, and expected output. Test cases are executed during the testing process to
check whether the software application is performing the task for which it was developed or not.

A test case helps the tester in defect reporting by linking the defect with the test case ID.
Detailed test case documentation works as a foolproof safeguard for the testing team,
because if the developer missed something, it can be caught during execution of these
foolproof test cases.

To write the test case, we must have the requirements to derive the inputs, and the test
scenarios must be written so that we do not miss out on any features for testing. Then we
should have a test case template to maintain uniformity, so that every test engineer follows
the same approach to prepare the test document.

Generally, we will write the test cases while the developers are busy writing the code.

When do we write a test case?

We will write the test case when we get the following:

o When the customer gives the business needs, the developers start developing and
say that they need, for example, 3.5 months to build the product.
o In the meantime, the testing team starts writing the test cases.
o Once they are done, the test cases are sent to the Test Lead for the review process.
o When the developers finish developing the product, it is handed over to the
testing team.
o The test engineers then execute the documented test cases rather than testing from
memory, so that testing remains consistent and does not depend on the mood or
recollection of the individual test engineer.

Why do we write the test cases?

We write the test cases for the following reasons:

o To ensure consistency in test case execution
o To make sure of better test coverage
o So that testing depends on the process rather than on a person
o To avoid training every new test engineer on the product

To ensure consistency in the test case execution: we refer to the test case and start
testing the application.

To make sure of better test coverage: for this, we should cover all possible scenarios and
document them, so that we need not remember all the scenarios again and again.

It depends on the process rather than on a person: A test engineer may have tested an
application during the first and second releases and left the company at the time of the third
release. Since the test engineer understood a module and tested the application thoroughly
by deriving many values, it becomes difficult for a new person if those values are not
recorded. Hence all the derived values are documented so that they can be used in the
future.

To avoid giving training for every new test engineer on the product: When a test
engineer leaves, he/she leaves with a lot of knowledge and scenarios. Those scenarios
should be documented so that the new test engineer can test with the given scenarios and
can also write new scenarios.

Test Design Definition and Technique

Test design is a process that describes "how" testing should be done. It includes processes
for identifying test cases by enumerating the steps of the defined test conditions.

 Test design is the act of creating and writing test suites for testing software.
 Test analysis and identifying test conditions give us a general idea for testing which
covers quite a large range of possibilities. But when we come to make a test case, we
need to be very specific; we need exact and detailed specific inputs. However, just
having some values to input to the system is not a test: if you don't know what the
system is supposed to do with the inputs, you will not be able to tell whether your
test has passed or failed.

Test Design Techniques:

Black Box testing techniques

1. Error guessing technique

Error guessing is a technique in which there is no specific method for identifying the error.
It is based on the experience of the test analyst, where the tester uses the experience to guess
the problematic areas of the software. It is a type of black box testing technique which does
not have any defined structure to find the error.

In this approach, every test engineer will derive the values or inputs based on their
understanding or assumption of the requirements, and we do not follow any kind of rules to
perform error guessing technique.

The accomplishment of the error guessing technique is dependent on the ability and product
knowledge of the tester because a good test engineer knows where the bugs are most likely
to be, which helps to save lots of time.

The main purpose of this technique is to identify common errors at any level of testing by
exercising the following tasks:

o Enter blank space into the text fields.
o Null pointer exception.
o Enter invalid parameters.
o Divide by zero.
o Use maximum limit of files to be uploaded.
o Check buttons without entering values.

Purpose of Error guessing

The main purpose of the error guessing technique is to deal with all possible errors which
cannot be identified in formal testing.

o It must contain all-inclusive sets of test cases without skipping any problematic
areas and without involving redundant test cases.
o This technique covers the characteristics left incomplete during the formal
testing.

Example

A function of the application requires a mobile number which must be of 10 characters.

Now, below are the techniques that can be applied to guess errors in the mobile number
field:

o What will be the result if the entered character is other than a number?
o What will be the result if the entered characters are less than 10 digits?
o What will be the result if the mobile field is left blank?

After implementing these techniques, if the output is similar to the expected result, the
function is considered to be bug-free, but if the output is not similar to the expected result,
it is sent to the development team to fix the defects.
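The checks above can be written as executable assertions against a validator. The sketch below is hypothetical: the `validate_mobile` function and its 10-digit rule are assumptions introduced for illustration, not part of the original example.

```python
import re

def validate_mobile(value: str) -> bool:
    """Hypothetical rule: accept only a string of exactly 10 digits."""
    return bool(re.fullmatch(r"\d{10}", value))

# Error-guessing checks for the mobile number field:
assert validate_mobile("9876543210") is True   # a valid 10-digit number
assert validate_mobile("98765abc10") is False  # characters other than numbers
assert validate_mobile("98765") is False       # fewer than 10 digits
assert validate_mobile("") is False            # field left blank
```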

However, although error guessing is a key technique among the testing techniques, it
depends on the experience of the tester and does not guarantee the highest quality
benchmark. It does not provide full coverage of the software. This technique can yield a
better result if combined with other techniques of testing.

Advantages and disadvantage of Error guessing technique

Advantages

The benefits of the error guessing technique are as follows:

o It is a good approach to find the challenging parts of the software.
o It is beneficial when used in combination with other formal testing techniques.
o It is used to enhance the formal test design techniques.
o With the help of this technique, we can disclose bugs which would probably be
identified only after extensive testing; therefore, the test engineer can save lots of
time and effort.

Disadvantages

Following are the drawbacks of the error guessing technique:

o The error guessing technique is person-oriented rather than process-oriented because
it depends on the person's thinking.
o If we use this technique, we may not achieve the minimum test coverage.
o With the help of this, we may not cover all the input or boundary values.
o With this, we cannot give surety of the product quality.
o The error guessing technique can be done by people who have product
knowledge; it cannot be done by those who are new to the product.

2. Equivalence Class Partitioning

In this method, the input domain data is divided into different equivalence data classes. This
method is typically used to reduce the total number of test cases to a finite set of testable
test cases, still covering maximum requirements.

The concept behind this technique is that a test case with a representative value of each
class is equivalent to a test with any other value of the same class. It allows you to identify
valid as well as invalid equivalence classes.

Example:

Input conditions are valid between 1 to 10 and 20 to 30.

Hence there are five equivalence classes:

--- to 0 (invalid)
1 to 10 (valid)
11 to 19 (invalid)
20 to 30 (valid)
31 to --- (invalid)

You select one value from each class, i.e., -2, 3, 15, 25, 45.
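The partitioning above can be sketched in code. The `partition_of` function and its class labels are assumptions for this sketch; the class boundaries come from the example.

```python
def partition_of(value: int) -> str:
    """Classify an input into one of the five equivalence classes."""
    if value <= 0:
        return "invalid (<= 0)"
    if 1 <= value <= 10:
        return "valid (1 to 10)"
    if 11 <= value <= 19:
        return "invalid (11 to 19)"
    if 20 <= value <= 30:
        return "valid (20 to 30)"
    return "invalid (>= 31)"

# One representative value per class, as in the example above; testing any
# other value from the same class is considered equivalent.
representatives = [-2, 3, 15, 25, 45]
assert [partition_of(v) for v in representatives] == [
    "invalid (<= 0)", "valid (1 to 10)", "invalid (11 to 19)",
    "valid (20 to 30)", "invalid (>= 31)",
]
```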

Advantages:

o It is process-oriented.
o We can achieve the minimum test coverage.
o It helps to decrease the overall test execution time and also reduces the set of test data.

Disadvantages:

o All necessary inputs may not be covered.
o This technique does not consider the condition for boundary value analysis.
o The test engineer might assume that the output for the whole data set is right, which
can lead to problems during the testing process.

3. Boundary Value Analysis (BVA)
Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid
partitions. It includes maximum, minimum, inside or outside boundaries, typical values and
error values.

Boundary value analysis is one of the widely used test case design techniques for black box
testing. It is used to test boundary values because input values near the boundary have
higher chances of error.

Whenever we do testing by boundary value analysis, the tester focuses on whether the
software produces the correct output when a boundary value is entered.

Boundary values are those that contain the upper and lower limit of a variable. Assume that,
age is a variable of any function, and its minimum value is 18 and the maximum value is 30,
both 18 and 30 will be considered as boundary values.

The basic assumption of boundary value analysis is, the test cases that are created using
boundary values are most likely to cause an error.

Guidelines for Boundary Value analysis

 If an input condition is restricted between values x and y, then the test cases should
be designed with values x and y as well as values which are above and below x and
y.

 If an input condition can take a large number of values, test cases should be
developed that exercise the minimum and maximum numbers. Here, values just
above and below the minimum and maximum values are also tested.

Example:

Input condition is valid between 1 to 10

Boundary values 0,1,2 and 9,10,11
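The guideline above can be sketched as a small helper that derives the boundary test data for a range (the `boundary_values` function is an assumption for this sketch):

```python
def boundary_values(low: int, high: int) -> set[int]:
    """Values at, just inside, and just outside each boundary of a valid range."""
    return {low - 1, low, low + 1, high - 1, high, high + 1}

# For the valid range 1..10, the boundary test data is 0,1,2 and 9,10,11:
assert boundary_values(1, 10) == {0, 1, 2, 9, 10, 11}
```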

4. Decision Table Based Testing

A decision table is a good way to deal with different combinations of inputs and their
associated outputs. It is a black box test design technique to determine the test scenarios for
complex business logic.

It is also known as a Cause-Effect table because of an associated logical diagramming
technique called cause-effect graphing that is used to derive the decision table. This
software testing technique is used for testing the system behaviour for different input
combinations.

The first task is to identify functionalities where the output depends on a combination of
inputs. If there is a large set of input combinations, then divide it into smaller subsets,
which are helpful for managing a decision table.

For every function, you need to create a table and list down all combinations of
inputs and their respective outputs. This helps to identify any condition that was overlooked
by the tester.

Following are the steps to create a decision table:

 Enlist the inputs in rows
 Enter all the rules in the columns
 Fill the table with the different combinations of inputs
 In the last row, note down the output against each input combination.

Prepared By: Neetika Gupta


Example: A submit button in a contact form is enabled only when all the inputs are entered
by the end user.

In order to find the number of all possible conditions, the tester uses the 2^n formula, where
n denotes the number of inputs; in the example the number of inputs is 2, and each input has
two states (true and false).
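The example above can be sketched as code. The form with two inputs and the `submit_enabled` rule are assumptions for this sketch; the 2^n enumeration matches the formula in the text.

```python
from itertools import product

def submit_enabled(name_entered: bool, email_entered: bool) -> bool:
    """Submit is enabled only when every input is entered."""
    return name_entered and email_entered

# Enumerate all 2^n rules of the decision table (n = 2 inputs here):
table = {combo: submit_enabled(*combo) for combo in product([True, False], repeat=2)}

assert len(table) == 2 ** 2         # 4 rules in total
assert table[(True, True)] is True  # only this rule enables Submit
assert sum(table.values()) == 1
```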

5. State Transition

In State Transition technique changes in input conditions change the state of the Application
Under Test (AUT). This testing technique allows the tester to test the behavior of an AUT.
The tester can perform this action by entering various input conditions in a sequence. In
State transition technique, the testing team provides positive as well as negative input test
values for evaluating the system behavior.

Guideline for State Transition:

 State transition should be used when a testing team is testing the application for a
limited set of input values.
 This test case design technique should be used when the testing team wants to test
the sequence of events which happen in the application under test.

Example:

 In the following example, if the user enters a valid password in any of the first three
attempts, the user will be able to log in successfully. If the user enters an invalid
password in the first or second try, the user will be prompted to re-enter the
password. When the user enters the password incorrectly a 3rd time, action is taken
and the account will be blocked.
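The login example above can be sketched as a small state machine. The class and state names below are assumptions for this sketch; the transitions follow the three-attempt rule described in the text.

```python
class LoginStateMachine:
    """States: 'ask_password' (attempts 1-3), 'access_granted', 'account_blocked'."""

    def __init__(self, correct_password: str):
        self.correct_password = correct_password
        self.attempts = 0
        self.state = "ask_password"

    def try_password(self, password: str) -> str:
        if self.state != "ask_password":
            return self.state                   # terminal states do not change
        self.attempts += 1
        if password == self.correct_password:
            self.state = "access_granted"
        elif self.attempts >= 3:
            self.state = "account_blocked"      # third wrong attempt blocks the account
        return self.state

m = LoginStateMachine("secret")
assert m.try_password("wrong") == "ask_password"    # 1st failure: re-enter
assert m.try_password("wrong") == "ask_password"    # 2nd failure: re-enter
assert m.try_password("wrong") == "account_blocked"

m2 = LoginStateMachine("secret")
m2.try_password("wrong")
assert m2.try_password("secret") == "access_granted"
```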

State Transition Diagram

In this diagram, when the user gives the correct PIN number, he or she is moved to the
Access Granted state. The following table is created based on the diagram above:

State Transition Table

White box testing techniques:

1. Statement coverage

The statement coverage strategy aims to design test cases so that every statement in a
program is executed at least once. The principal idea governing the statement coverage
strategy is that unless a statement is executed, it is very hard to determine if an error
exists in that statement. Unless a statement is executed, it is very difficult to observe
whether it causes failure due to some illegal memory access, wrong result computation,
etc. However, executing some statement once and observing that it behaves properly for
that input value is no guarantee that it will behave correctly for all input values. In the
following, the design of test cases using the statement coverage strategy is shown.

Example: Consider Euclid’s GCD computation algorithm:

int compute_gcd(x, y)
int x, y;
{
1   while (x != y) {
2       if (x > y) then
3           x = x - y;
4       else y = y - x;
5   }
6   return x;
}

By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, we can exercise the
program such that all statements are executed at least once.
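As an illustration, a Python port of the algorithm (an assumption for this sketch; the original is C-like pseudocode) can be run against exactly that test set:

```python
def compute_gcd(x: int, y: int) -> int:
    while x != y:       # statement 1
        if x > y:       # statement 2
            x = x - y   # statement 3
        else:
            y = y - x   # statement 4
    return x            # statement 6

# The test set {(3,3), (4,3), (3,4)} executes every statement at least once:
assert compute_gcd(3, 3) == 3   # loop skipped: statements 1 and 6
assert compute_gcd(4, 3) == 1   # takes the x > y branch (statement 3)
assert compute_gcd(3, 4) == 1   # takes the else branch (statement 4)
```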

Statement coverage derives test case scenarios under the white box testing process, based
upon the structure of the code.

1. input (int a, int b)
2. {
3. Function to print sum of these integer values (sum = a+b)
4. If (sum>0)
5. {
6. Print ("This is a positive result")
7. } else
8. {
9. Print ("This is a negative result")
10. }
11. }

So, this is the basic structure of the program, and that is the task it is going to do.

Now, let's see the two different scenarios and calculation of the percentage of Statement
Coverage for given source code.
Scenario1:
If a = 5, b = 4

1. print (int a, int b) {

2. int sum = a+b;
3. if (sum>0)
4. print ("This is a positive result")
5. else
6. print ("This is negative result")
7. }

In scenario 1, we can see that the value of sum will be 9, which is greater than 0, and as per
the condition the result will be "This is a positive result." Statements 1, 2, 3, 4, and 7 are the
executed statements in this scenario.

To calculate statement coverage of the first scenario, take the total number of statements,
that is 7, and the number of executed statements, that is 5.

Total number of statements = 7
Number of executed statements = 5

Statement coverage = 5/7 * 100 = 500/7 = 71% (approx.)

Likewise, in scenario 2,

Scenario2:
If A = -2, B = -7

1. print (int a, int b) {
2. int sum = a+b;
3. if (sum>0)
4. print ("This is a positive result")
5. else
6. print ("This is negative result")
7. }

In scenario 2, we can see that the value of sum will be -9, which is less than 0, and as per
the condition the result will be "This is a negative result." Statements 1, 2, 3, 5, 6, and 7 are
the executed statements in this scenario.

To calculate statement coverage of the second scenario, take the total number of statements,
that is 7, and the number of executed statements, that is 6.

Total number of statements = 7
Number of executed statements = 6

Statement coverage = 6/7 * 100 = 600/7 = 85% (approx.)
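One rough, CPython-specific way to observe which statements actually execute is Python's `sys.settrace` hook; the sketch below (the `classify` function is a Python port of the pseudocode above, and the tracing helper is an assumption for illustration) records the lines of the function that run in each scenario:

```python
import sys

def classify(a, b):
    s = a + b
    if s > 0:
        return "This is a positive result"
    else:
        return "This is negative result"

executed = set()

def tracer(frame, event, arg):
    # Record each source line of classify() that actually runs.
    if frame.f_code is classify.__code__ and event == "line":
        executed.add(frame.f_lineno - classify.__code__.co_firstlineno)
    return tracer

sys.settrace(tracer)
classify(5, 4)                  # scenario 1: only the "positive" branch runs
after_scenario_1 = len(executed)
classify(-2, -7)                # scenario 2: adds the "negative" branch
sys.settrace(None)
after_both = len(executed)

# Together the two scenarios execute more statements than either one alone.
assert after_both > after_scenario_1 >= 1
```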

2. Branch coverage

In the branch coverage-based testing strategy, test cases are designed to make each
branch condition to assume true and false values in turn. Branch testing is also known as
edge testing as in this testing scheme, each edge of a program’s control flow graph is
traversed at least once.

Branch coverage technique is used to cover all branches of the control flow graph. It covers
all the possible outcomes (true and false) of each condition of a decision point at least once.
Branch coverage is a white box testing technique that ensures that every branch of
each decision point is executed.

However, the branch coverage technique and the decision coverage technique are very
similar, but there is a key difference between the two. Decision coverage covers the true and
false outcomes of each decision point, whereas branch coverage covers every edge (branch)
of the control flow graph, including unconditional branches.

In other words, decision coverage follows decision points and branch coverage follows
edges. Many different metrics can be used to find branch coverage and decision coverage,
but one of the most basic metrics is finding the percentage of branches and paths exercised
during the execution of the program.

Like decision coverage, it also uses a control flow graph to calculate the number of
branches.
Example 1: It is obvious that branch testing guarantees statement coverage and
thus is a stronger testing strategy compared to the statement coverage-based testing. For
Euclid’s GCD computation algorithm, the test cases for branch coverage can be {(x=3,
y=3), (x=3, y=2), (x=4, y=3), (x=3, y=4)}.

Example 2:
1. Read X
2. Read Y
3. IF X+Y > 100 THEN
4. Print "Large"
5. ENDIF
6. If X + Y<100 THEN
7. Print "Small"
8. ENDIF

This is the basic code structure where we took two variables X and Y and two conditions. If
the first condition is true, then print "Large" and if it is false, then go to the next condition.
If the second condition is true, then print "Small."

Control flow graph of code structure

In the above diagram, the control flow graph of the code is depicted. In the first case,
traversing through the "Yes" decision, the path is A1-B2-C4-D6-E8, and the covered edges
are 1, 2, 4, 5, 6 and 8, but edges 3 and 7 are not covered in this path. To cover these edges,
we have to traverse through the "No" decision. In the case of the "No" decision, the path is
A1-B3-C5-D7, and the covered edges are 3 and 7. So by travelling through these two paths,
all branches have been covered.

Path 1 - A1-B2-C4-D6-E8
Path 2 - A1-B3-C5-D7
Branch Coverage (BC) = Number of paths = 2

Case    Covered Branches    Path                Branch coverage

Yes     1, 2, 4, 5, 6, 8    A1-B2-C4-D6-E8      2
No      3, 7                A1-B3-C5-D7
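Example 2 can be sketched as runnable code; the `classify_sum` function below is a hypothetical Python port of the pseudocode, and the two test cases together take every branch outcome (yes and no) of both decisions:

```python
def classify_sum(x: int, y: int) -> list:
    """Port of Example 2: two independent decisions on X + Y."""
    printed = []
    if x + y > 100:        # first decision point
        printed.append("Large")
    if x + y < 100:        # second decision point
        printed.append("Small")
    return printed

# Two test cases are enough to exercise every branch outcome once:
assert classify_sum(80, 30) == ["Large"]   # decision 1 true, decision 2 false
assert classify_sum(10, 20) == ["Small"]   # decision 1 false, decision 2 true
```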

3. Condition/decision coverage

In this structural testing, test cases are designed to make each component of a composite
conditional expression to assume both true and false values. For example, in the
conditional expression ((c1.and.c2).or.c3), the components c1, c2 and c3 are each made
to assume both true and false values. Branch testing is probably the simplest condition
testing strategy where only the compound conditions appearing in the different branch
statements are made to assume the true and false values. Thus, condition testing is a
stronger testing strategy than branch testing and branch testing is stronger testing strategy
than the statement coverage-based testing. For a composite conditional expression of n
components, for condition coverage, 2ⁿ test cases are required. Thus, for
condition coverage, the number of test cases increases exponentially with the number of
component conditions. Therefore, a condition coverage-based testing technique is
practical only if n (the number of conditions) is small.
Generally, a decision point has two decision values, one true and one false, which is
why most of the time the total number of outcomes is two. The percentage of decision
coverage can be found by dividing the number of exercised outcomes by the total
number of outcomes and multiplying by 100.

In this technique, it is tough to get 100% coverage because sometimes expressions get
complicated. Due to this, there are several different methods to report decision coverage.
All these methods cover the most important combinations and very much similar to decision
coverage. The benefit of these methods is enhancement of the sensitivity of control flow.

Consider the following code to apply the decision coverage technique:
1. Test (int a)
2. {
3. If(a>4)
4. a=a*3
5. Print (a)
6. }

Scenario 1:
Value of a is 7 (a=7)

1. Test (int a=7)
2. { if (a>4)
3. a=a*3
4. print (a)
5. }

The outcome of this code is "True" if condition (a>4) is checked.

Calculation of Decision Coverage percent:

Decision Coverage = 1/2 * 100 (only "True" is exercised)
                  = 100/2
                  = 50%
Scenario 2:
Value of a is 3 (a=3)
1. Test (int a=3)
2. { if (a>4)
3. a=a*3
4. print (a)
5. }

The outcome of this code is "False" if condition (a>4) is checked.

Calculation of Decision Coverage percent:

Decision Coverage = 1/2 * 100 (only "False" is exercised)
                  = 100/2
                  = 50%
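The two scenarios can be combined into one runnable sketch; the `apply_decision` function is a hypothetical Python port of the pseudocode above:

```python
def apply_decision(a: int) -> int:
    """The code under test: a single decision point (a > 4)."""
    if a > 4:
        a = a * 3
    return a

# Scenario 1 (a=7) exercises only the True outcome; scenario 2 (a=3) only the
# False one. Each alone yields 1/2 * 100 = 50% decision coverage; together 100%.
outcomes = {a > 4 for a in (7, 3)}
decision_coverage = len(outcomes) / 2 * 100

assert apply_decision(7) == 21      # True outcome: a is tripled
assert apply_decision(3) == 3       # False outcome: a is unchanged
assert decision_coverage == 100.0
```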

4. Path coverage

The path coverage-based testing strategy requires us to design test cases such that all
linearly independent paths in the program are executed at least once. A linearly
independent path can be defined in terms of the control flow graph (CFG) of a program.

Control Flow Graph (CFG)

A control flow graph describes the sequence in which the different instructions of a
program get executed. In other words, a control flow graph describes how the control
flows through the program. In order to draw the control flow graph of a program, all the
statements of the program must be numbered first. The different numbered statements serve
as nodes of the control flow graph (as shown in fig. 10.3). An edge from one node to
another node exists if the execution of the statement representing the first node can result
in the transfer of control to the other node.

The CFG for any program can be easily drawn by knowing how to represent the
sequence, selection, and iteration types of statements in the CFG. After all, a program is
made up of these types of statements. Fig. 10.3 summarizes how the CFG for these
three types of statements can be drawn. It is important to note that for the iteration type of
constructs such as the while construct, the loop condition is tested only at the beginning
of the loop and therefore the control flow from the last statement of the loop is always to
the top of the loop. Using these basic ideas, the CFG of Euclid’s GCD computation
algorithm can be drawn as shown in fig. 10.4.

Fig. 10.3: CFG for (a) sequence, (b) selection, and (c) iteration type of constructs

Fig. 10.4: Control flow diagram

Path
A path through a program is a node and edge sequence from the starting node to a terminal
node of the control flow graph of a program. There can be more than one terminal node in a
program. Writing test cases to cover all the paths of a typical program is impractical. For this
reason, the path-coverage testing does not require coverage of all paths but only coverage of
linearly independent paths.

Linearly independent path


A linearly independent path is any path through the program that introduces at least one new
edge that is not included in any other linearly independent path. If a path has one new node
compared to all other linearly independent paths, then the path is also linearly independent.
This is because any path having a new node automatically implies that it has a new edge.
Thus, a path that is a subpath of another path is not considered to be a linearly independent path.

Control flow graph

In order to understand the path coverage-based testing strategy, it is very much necessary to
understand the control flow graph (CFG) of a program. Control flow graph (CFG) of a
program has been discussed earlier.

Linearly independent path

The path-coverage testing does not require coverage of all paths but only coverage of linearly
independent paths. Linearly independent paths have been discussed earlier.

Cyclomatic complexity

For more complicated programs it is not easy to determine the number of independent paths of
the program. McCabe’s cyclomatic complexity defines an upper bound for the number of
linearly independent paths through a program. Also, the McCabe’s cyclomatic complexity is
very simple to compute. Thus, the McCabe’s cyclomatic complexity metric provides a
practical way of determining the maximum number of linearly independent paths in a
program. Though the McCabe’s metric does not directly identify the linearly independent
paths, it informs us approximately how many paths to look for.

There are three different ways to compute the cyclomatic complexity. The answers computed
by the three methods are guaranteed to agree.

Method 1:
Given a control flow graph G of a program, the cyclomatic complexity V(G) can be
computed as:
V(G) = E – N + 2
where N is the number of nodes of the control flow graph and E is the number of edges

in the control flow graph.

For the CFG of example shown in fig. 10.4, E=7 and N=6. Therefore, the cyclomatic
complexity = 7-6+2 = 3.
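Method 1 is easy to mechanize from an edge list. In the sketch below, the helper function and the specific edge list for the GCD CFG are assumptions for illustration (the figure itself is not reproduced here); only E = 7 and N = 6 come from the text.

```python
def cyclomatic_complexity(cfg_edges):
    """Method 1: V(G) = E - N + 2, computed from a list of CFG edges."""
    nodes = {n for edge in cfg_edges for n in edge}
    return len(cfg_edges) - len(nodes) + 2

# A hypothetical edge list with E = 7 edges and N = 6 nodes, matching the
# counts given for the GCD CFG of fig. 10.4:
gcd_cfg = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 1), (1, 6)]
assert cyclomatic_complexity(gcd_cfg) == 3   # V(G) = 7 - 6 + 2 = 3
```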

Method 2:
An alternative way of computing the cyclomatic complexity of a program from an
inspection of its control flow graph is as follows:
V(G) = Total number of bounded areas + 1
In the program’s control flow graph G, any region enclosed by nodes and edges can be
called as a bounded area. This is an easy way to determine the McCabe’s cyclomatic
complexity. But, what if the graph G is not planar, i.e. however you draw the
graph, two or more edges intersect? Actually, it can be shown that structured programs
always yield planar graphs. But, presence of GOTO’s can easily add intersecting
edges. Therefore, for non-structured programs, this way of computing the McCabe’s
cyclomatic complexity cannot be used.
The number of bounded areas increases with the number of decision paths and loops.
Therefore, the McCabe’s metric provides a quantitative measure of testing difficulty
and the ultimate reliability. For the CFG example shown in fig. 10.4, from a visual
examination of the CFG the number of bounded areas is 2. Therefore the cyclomatic
complexity, computed with this method, is also 2+1 = 3. This method provides a very
easy way of computing the cyclomatic complexity of CFGs, just from a visual
examination of the CFG. On the other hand, the first method of computing the cyclomatic
complexity is more amenable to automation, i.e. it can be easily coded into a program which
can be used to determine the cyclomatic complexities of arbitrary CFGs.

Method 3:
The cyclomatic complexity of a program can also be easily computed by counting
the number of decision statements of the program. If N is the number of decision
statements of a program, then the McCabe’s metric is equal to N+1.

5. Data flow-based testing

Data flow-based testing method selects test paths of a program according to the locations of
the definitions and uses of different variables in a program.
For a statement numbered S, let

DEF(S) = {X | statement S contains a definition of X}, and
USES(S) = {X | statement S contains a use of X}

For the statement S: a = b + c;, DEF(S) = {a} and USES(S) = {b, c}. The definition of
variable X at statement S is said to be live at statement S1 if there exists a path from
statement S to statement S1 which does not contain any definition of X.

The definition-use chain (or DU chain) of a variable X is of the form [X, S, S1], where S and S1
are statement numbers, such that X ∈ DEF(S) and X ∈ USES(S1), and the definition of X in
statement S is live at statement S1. One simple data flow testing strategy is to require that
every DU chain be covered at least once. Data flow testing strategies are useful for selecting
test paths of a program containing nested if and loop statements.
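A minimal sketch of these definitions, assuming a straight-line program modelled as (statement number, defined variable, used variables) tuples; statement 1 mirrors the a = b + c example above, and the other statements are hypothetical:

```python
# Each entry models one statement: (number, DEF variable, USES set).
statements = [
    (1, "a", {"b", "c"}),   # a = b + c
    (2, "d", {"a"}),        # d = a * 2
    (3, "a", {"d"}),        # a = d - 1
]

# A DU chain [X, S, S1] pairs a definition of X at S with a use of X at S1,
# provided X is not redefined between S and S1 (liveness on straight-line code).
du_chains = []
for i, (s, defined, _) in enumerate(statements):
    for s1, d1, used in statements[i + 1:]:
        if defined in used:
            du_chains.append((defined, s, s1))
        if d1 == defined:        # redefinition kills the definition at s
            break
print(du_chains)   # -> [('a', 1, 2), ('d', 2, 3)]
```

Covering every DU chain at least once then means choosing test paths that execute statements 1-2 and 2-3.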

Advantages of Data Flow Testing:


Data Flow testing helps us to pinpoint any of the following issues:
 A variable that is declared but never used within the program.
 A variable that is used but never declared.
 A variable that is defined multiple times before it is used.
 Deallocating a variable before it is used.

6. Loop Testing: Loops are widely used and are fundamental to many algorithms;
hence, their testing is very important. Errors often occur at the beginnings and ends of
loops.

1. Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Make only one pass through the loop
3. Make 2 passes through the loop
4. Make m passes, where m < n
5. Make n-1 and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and we
start from the innermost loop. Simple loop tests are conducted for the innermost loop
and this is worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are
applied for each.
If they’re not independent, treat them like nesting.
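The simple-loop cases above can be sketched as a small helper that returns the recommended pass counts for a loop of size n (m is any interior value, chosen here as n // 2):

```python
# Recommended numbers of loop iterations to test, per the list above:
# skip, one pass, two passes, an interior value m < n, and n-1 / n+1 passes.
def simple_loop_pass_counts(n):
    m = n // 2   # any interior value with m < n works
    return [0, 1, 2, m, n - 1, n + 1]

print(simple_loop_pass_counts(10))   # -> [0, 1, 2, 5, 9, 11]
```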

What is Test Coverage in Software Testing?

The amount of testing performed by a set of test cases is called test coverage. By amount of
testing we mean which parts of the application program are exercised when we run a test suite. In
other words, test coverage is a technique that determines whether our test cases are
actually covering the application code, and how much code is exercised when we run those test
cases. When we can count certain items in an application and tell whether the test cases
cover those items, we can say that we measure the coverage: the coverage items are the things
we have been able to count and check for coverage by the tests. The coverage achieved by two
test cases can be the same, yet the input data of the first test case may find a defect while
the input data of the second does not. From this we understand that 100% coverage does not mean
100% tested.

In practice, most of the focus is put on gathering code coverage information through code-based
testing and requirement-based testing, but not much stress is put on analysing the coverage and
covering the maximum number of items in code coverage.
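The point that equal coverage does not mean equal defect-finding power can be illustrated with a hypothetical one-line function: both calls below exercise exactly the same statement, yet only one input reveals the defect.

```python
# Hypothetical function with a defect: integer division truncates.
def average(a, b):
    return (a + b) // 2   # bug: should be (a + b) / 2

# Both calls execute the same single statement, so line coverage is identical,
# but only the second input exposes the bug.
print(average(2, 4))   # -> 3   (correct by coincidence)
print(average(3, 4))   # -> 3   (wrong: the true average is 3.5)
```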



What are the advantages of Test coverage Analysis?

By test coverage analysis we can determine which parts of the code were actually exercised for a
deployment/release.

1. We can prevent defect leakage using test coverage analysis.
2. Defects can be prevented at an early stage of the application life cycle.
3. We can keep time, resources, cost, and scope under control with this technique.
4. Test coverage analysis can identify the decision points and important paths in the
application, which helps us increase the test coverage.
5. We can check which paths of the code are not tested.
6. We can determine the quality of the testing we are performing.
7. We can easily find gaps in requirements, test cases, and defects at an early stage and at
the code level.

What is Mutation Testing?

Mutation testing is a white-box method in software testing where we purposely insert errors into
a program (under test) to verify whether the existing test cases can detect them. In this
testing, a mutant of the program is created by making small modifications to the original
program.

The primary objective of mutation testing is to check whether each mutant produces an output
that differs from the output of the original program. We make only slight modifications in the
mutant program, because changing it on a massive scale would affect the overall plan.

When a mutant goes undetected, it implies that either the mutant is equivalent to the original
program or the test cases are inefficient at identifying the fault.

The purpose of mutation testing is to evaluate the quality of the test cases, which should be
able to fail the mutant code. Because it deliberately introduces errors into the program, this
method is also known as fault-based testing; in other words, mutation testing is performed to
check the efficiency of the test cases.

What is mutation?

A mutation is a small modification to a program; these minor modifications are designed to
mimic typical low-level errors that occur during the coding process.

Generally, mutation operators are expressed in the form of rules that match patterns in the
code and describe how to produce the corresponding mutants.

Mutation Testing Benefits:

Following benefits are experienced, if mutation testing is adopted:



 It brings a whole new kind of errors to the developer's attention.

 It is the most powerful method to detect hidden defects, which might be impossible to
identify using the conventional testing techniques.

 Tools such as Insure++ help us to find defects in the code using state-of-the-art techniques.

 Increased customer satisfaction index as the product would be less buggy.

 Debugging and maintaining the product would be easier than ever.

Mutation Testing Types:


 Value Mutations: An attempt to change the values to detect errors in the programs. We
usually change one value to a much larger value or one value to a much smaller value.
The most common strategy is to change the constants.

 Decision Mutations: The decisions/conditions are changed to check for the design
errors. Typically, one changes the arithmetic operators to locate the defects and also we
can consider mutating all relational operators and logical operators (AND, OR , NOT)

 Statement Mutations: Changes done to the statements by deleting or duplicating a line,
mimicking errors that might arise when a developer copy-pastes code from somewhere else.

Decision mutations

In this type of mutation testing, we check for design errors. Here, we modify arithmetic,
relational, and logical operators to detect errors in the program.

For example, we can make the following changes to arithmetic operators:

o plus(+)→ minus(-)
o asterisk(*)→ double asterisk(**)
o plus(+)→incremental operator(i++)

Or, for example, the following changes to relational operators:

o P > → P <, or P > → P >=



Example:
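The original example figure is not reproduced here; a minimal sketch of a decision mutant, using a hypothetical maximum function, might look like this:

```python
# Original program: returns the larger of two numbers.
def maximum(a, b):
    if a > b:
        return a
    return b

# Decision mutant: the relational operator '>' is mutated to '<'.
def maximum_mutant(a, b):
    if a < b:
        return a
    return b

# Any test case with a != b "kills" this mutant, because the outputs differ.
print(maximum(3, 7), maximum_mutant(3, 7))   # -> 7 3
```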

Value mutations

In this, values are modified to identify errors in the program; generally, we make the
following changes:

o Small value → higher value

o Higher value → small value.

For Example:
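The original example figure is not reproduced here; a sketch of a value mutant, using a hypothetical function and constant, might look like this:

```python
# Original program: a hypothetical price calculation with a constant.
def total_price(prices):
    discount = 10            # the constant targeted by the value mutation
    return sum(prices) - discount

# Value mutant: the constant 10 is changed to a much larger value.
def total_price_mutant(prices):
    discount = 1000
    return sum(prices) - discount

# A test case comparing against the expected total kills this mutant.
print(total_price([50, 60]), total_price_mutant([50, 60]))   # -> 100 -890
```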

Statement Mutations

Statement mutation means that we modify statements by removing or replacing a line, as we
see in the below example:
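The original example figure is not reproduced here; a sketch consistent with the description below, using hypothetical variables r and s, might look like this:

```python
# Original program: assigns r in both branches and returns it.
def original(x):
    r = 0
    s = 0
    if x > 0:
        r = 15
    else:
        r = 25
    return r

# Statement mutant: r = 15 is replaced by s = 15, and r = 25 by s = 25,
# so r is never updated and the function always returns 0.
def mutant(x):
    r = 0
    s = 0
    if x > 0:
        s = 15        # was r = 15
    else:
        s = 25        # was r = 25
    return r

print(original(1), mutant(1))   # -> 15 0
```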



In the above case, we have replaced the statement r=15 by s=15, and r=25 by s=25.

How to perform mutation testing

To perform mutation testing, we will follow the below process:

o First, we add errors into the source code of the program by producing various versions,
known as mutants. Every mutant contains a single error, which is intended to make the
mutant fail and thereby validate the efficiency of the test cases.
o After that, we run the test cases against the mutant program and the actual program to
find the errors in the code.
o Once we identify the faults, we match the output of the actual code and the mutant code.



o After comparing the outputs of the actual and the mutant programs, if the results do not
match, then the mutant is killed by the test cases; the test cases are therefore sufficient
for detecting the modification between the actual program and the mutant program.
o If the actual program and the mutant program produce the exact same result, the mutant
survives. In that case we need more effective test cases that can kill all the mutants.
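The steps above can be sketched as a tiny mutation-testing loop; the original function, the mutants, and the test inputs are all hypothetical:

```python
# Original program under test.
def original(a, b):
    return a + b

# Two hand-made mutants of the '+' operator.
mutants = {
    "m1: '+' -> '-'": lambda a, b: a - b,
    "m2: '+' -> '*'": lambda a, b: a * b,
}

# A weak test suite: with a == b == 2, a + b equals a * b, so m2 survives.
tests = [(2, 2)]

results = {}
for name, mutant in mutants.items():
    killed = any(original(a, b) != mutant(a, b) for a, b in tests)
    results[name] = "killed" if killed else "survived"
    print(name, results[name])
```

The surviving mutant m2 tells us the test suite cannot distinguish addition from multiplication; adding a test input such as (2, 3) would kill it.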

Advantages and disadvantages of Mutation Testing

Advantages

The benefits of mutation testing are as follows:

o It is a good approach for error detection for the application programmer.


o Mutation testing is an excellent method to achieve extensive coverage of the
source program.
o Mutation testing helps us deliver a more established and dependable system to the
clients.
o This technique can identify many errors in the program and also helps us discover
ambiguities in the code.

Disadvantages

The drawbacks of mutant testing are as follows:

o This testing is time-consuming and costly, because many mutant programs need to be
created.
o Mutation testing is not appropriate for black-box testing, as it involves
modification of the source code.
o Every mutant needs the same number of test cases as the actual program, so a
significant number of mutant programs may need to be tested against the real test
suite.
o As it is a tedious process, this testing requires automation tools to test the
application.



Static Analysis vs Dynamic Analysis in Software Testing

Static analysis

Static analysis involves no dynamic execution of the software under test and can detect possible
defects in an early stage, before running the program. Static analysis is done after coding and
before executing unit tests.

Static analysis can be done by a machine to automatically “walk through” the source code and
detect noncomplying rules. The classic example is a compiler which finds lexical, syntactic and
even some semantic mistakes.

Static analysis can also be performed by a person who would review the code to ensure proper
coding standards and conventions are used to construct the program. This is often called Code
Review and is done by a peer developer, someone other than the developer who wrote the code.

Static analysis is also used to force developers to not use risky or buggy parts of the
programming language by setting rules that must not be used.

When developers perform code analysis, they usually look for:

 Lines of code
 Comment frequency
 Proper nesting
 Number of function calls
 Cyclomatic complexity
 Can also check for unit tests
Quality attributes that can be the focus of static analysis:

 Reliability
 Maintainability
 Testability
 Re-usability
 Portability
 Efficiency



Advantages of Static Analysis

The main advantage of static analysis is that it finds issues with the code before it is ready for
integration and further testing.

 It can find weaknesses in the code at the exact location.

 It can be conducted by trained software assurance developers who fully understand the
code.
 Source code can be easily understood by other or future developers
 It allows a quicker turn around for fixes
 Weaknesses are found earlier in the development life cycle, reducing the cost to fix.
 Fewer defects in later tests
 Unique defects are detected that cannot, or can hardly, be detected using dynamic tests:
 Unreachable code
 Variable use (undeclared, unused)
 Uncalled functions
 Boundary value violations
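As an illustration, a toy static check in the spirit of the list above can be written with Python's standard-library ast module. It finds variables that are assigned but never used, without executing the program; real tools such as linters handle scopes, imports, and many more rules.

```python
import ast

# Hypothetical source under analysis: y is assigned but never used.
SOURCE = """
x = 1
y = 2
print(x)
"""

tree = ast.parse(SOURCE)
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)   # name appears on the left of '='
        else:
            used.add(node.id)       # name is read somewhere

unused = sorted(assigned - used)
print(unused)   # -> ['y']
```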
Static code analysis limitations:

 It is time consuming if conducted manually.


 Automated tools produce false positives and false negatives.
 There are not enough trained personnel to thoroughly conduct static code analysis.
 Automated tools can provide a false sense of security that everything is being addressed.
 Automated tools are only as good as the rules they are using to scan with.
 It does not find vulnerabilities introduced in the runtime environment.

Dynamic Analysis

In contrast to Static Analysis, where code is not executed, dynamic analysis is based on
the system execution, often using tools.

Dynamic program analysis is the analysis of computer software that is performed by executing
programs built from that software on a real or virtual processor (analysis performed without
executing programs is known as static code analysis). Dynamic program analysis tools may
require loading of special libraries or even recompilation of program code.

The most common dynamic analysis practice is executing Unit Tests against the code to find any
errors in code.

Dynamic code analysis advantages:

 It identifies vulnerabilities in a runtime environment.


 It allows for analysis of applications in which you do not have access to the actual code.
 It identifies vulnerabilities that might have been false negatives in the static code analysis.
 It permits you to validate static code analysis findings.
 It can be conducted against any application.
Dynamic code analysis limitations:

 Automated tools provide a false sense of security that everything is being addressed.
 Cannot guarantee the full test coverage of the source code
 Automated tools produce false positives and false negatives.
 Automated tools are only as good as the rules they are using to scan with.
 It is more difficult to trace the vulnerability back to the exact location in the code, taking
longer to fix the problem.
Reliability Metrics

Reliability metrics are used to quantitatively express the reliability of the software product.
The choice of which metric to use depends upon the type of system to which it applies and
the requirements of the application domain.

Some reliability metrics which can be used to quantify the reliability of the software product are
as follows:



1. Mean Time to Failure (MTTF)

MTTF is described as the average time interval between two successive failures. An MTTF of 200
means that one failure can be expected every 200 time units. The time units are entirely
dependent on the system, and they can even be stated in the number of transactions. MTTF is
suitable for systems with large transactions.

To measure MTTF, we can record the failure data for n failures. Let the failures appear at the
time instants t1, t2, ..., tn.

MTTF can then be calculated as the average of the inter-failure times:

MTTF = [(t2 - t1) + (t3 - t2) + ... + (tn - t(n-1))] / (n - 1)
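Assuming failures recorded at time instants t1, ..., tn, the computation can be sketched as the average of the n-1 inter-failure intervals:

```python
# MTTF as the mean of the intervals between successive failure instants.
def mttf(failure_times):
    gaps = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical failure log: one failure every 200 time units.
print(mttf([100, 300, 500, 700]))   # -> 200.0
```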

2. Mean Time to Repair (MTTR)

Once a failure occurs, some time is required to fix the error. MTTR measures the average time it
takes to track down the errors causing the failure and to fix them.

3. Mean Time Between Failures (MTBF)

We can merge MTTF & MTTR metrics to get the MTBF metric.

MTBF = MTTF + MTTR

Thus, an MTBF of 300 denotes that once a failure appears, the next failure is expected to
appear only after 300 hours. In this metric, the time measurements are real time, not the
execution time as in MTTF.



4. Rate of occurrence of failure (ROCOF)

It is the number of failures appearing in a unit time interval, i.e. the number of unexpected
events over a specified time of operation. ROCOF is the frequency with which unexpected
behaviour is likely to appear. A ROCOF of 0.02 means that two failures are likely to occur in
every 100 operational time-unit steps. It is also called the failure intensity metric.

5. Probability of Failure on Demand (POFOD)

POFOD is described as the probability that the system will fail when a service is requested. It is
the number of system failures given a number of system inputs.

A POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential
measure for safety-critical systems. POFOD is relevant for protection systems where services are
demanded occasionally.

6. Availability (AVAIL)

Availability is the probability that the system is available for use at a given time. It takes into
account the repair time and the restart time for the system. An availability of 0.995 means that in
every 1000 time units, the system is likely to be available for 995 of them. Availability is the
percentage of time that a system is usable, taking into account planned and unplanned downtime. If
a system is down an average of four hours out of 100 hours of operation, its AVAIL is 96%.
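The figures above can be reproduced with a short sketch. The MTTF/MTTR values are hypothetical, chosen to match the 96% example; expressing AVAIL as MTTF / MTBF is the standard steady-state formula.

```python
# Hypothetical measurements: 96 h of operation between failures, 4 h to repair.
mttf = 96.0
mttr = 4.0

mtbf = mttf + mttr          # MTBF = MTTF + MTTR, as defined above
avail = mttf / mtbf         # fraction of time the system is usable
print(f"MTBF = {mtbf} h, AVAIL = {avail:.0%}")   # -> MTBF = 100.0 h, AVAIL = 96%
```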
