Fundamentals of Testing
Testing is the process of evaluating a system or its component(s) with the intent of finding
whether it satisfies the specified requirements or not.
Testing means executing a system in order to identify any gaps, errors, or missing
requirements contrary to the actual requirements.
This tutorial will give you a basic understanding of software testing, its types, methods,
levels, and other related terminology.
Testing is a critical role in software development that requires special skills and knowledge
not commonly taught to software developers, business analysts, and project managers. This
often results in insufficient time and resources being allocated for this important function,
and quality suffers, as do the users of the software. Equally important is the need to
measure quality quickly and efficiently, because limitations in resources and schedules are
realities that aren't going away. We need to do the best we can with what we have and still
deliver high-quality, proven software.
The levels of software testing involve the different methodologies that can be used while
performing software testing.
In software testing, we have four different levels of testing, which are discussed below:
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
1. Unit testing
Unit testing is the first level of software testing; it is used to test whether individual
software modules satisfy the given requirements or not.
This first level of testing involves analyzing each unit, or individual component, of the
software application. The primary purpose of executing unit testing is to validate that each
unit component performs as intended.
Unit testing helps test engineers and developers understand the code base, enabling them to
change defect-causing code quickly. The unit tests are implemented by the developers.
Whenever the application is ready and given to the test engineer, he/she will start checking
every component of the module, or every module of the application, independently, one by
one; this process is known as unit testing or component testing.
Advantages
o Unit testing helps testers and developers to understand the code base, enabling them
to change defect-causing code quickly.
o Unit testing helps in the documentation.
o Unit testing fixes defects very early in the development phase, so fewer defects are
likely to occur in the upcoming testing levels.
o It helps with code reusability by migrating code and test cases.
Unit testing uses all white box testing techniques, as it works with the code of the software
application. A minimal example is sketched below.
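As an illustration, here is a minimal unit test sketch in C using the standard assert macro;
the function under test (add) and its expected behavior are hypothetical examples, not part
of the original notes:

#include <assert.h>
#include <stdio.h>

/* Unit under test: a trivial module function (hypothetical). */
static int add(int a, int b)
{
    return a + b;
}

int main(void)
{
    /* Each assertion checks one requirement of the unit. */
    assert(add(2, 3) == 5);   /* typical values */
    assert(add(-1, 1) == 0);  /* sign handling  */
    assert(add(0, 0) == 0);   /* zero inputs    */
    printf("All unit tests passed.\n");
    return 0;
}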
Disadvantages
o It cannot identify integration or broader-level errors, as it works on units of the code.
o In unit testing, evaluation of all execution paths is not possible, so unit testing is
not able to catch each and every error in a program.
o It is best used in conjunction with other testing activities.
2. Integration testing
Integration testing is the second level of the software testing process and comes after unit
testing. In this testing, units or individual components of the software are tested as a group.
The focus of the integration testing level is to expose defects at the time of interaction
between integrated components or units.
Unit testing uses modules for testing purposes, and these modules are combined and tested in
integration testing. The software is developed with a number of software modules that are
coded by different coders or programmers. The goal of integration testing is to check the
correctness of communication among all the modules.
Although all modules of the software application have already been tested in unit testing,
errors can still exist, typically at the interfaces where the modules interact; the hedged
sketch below illustrates a test aimed at such an interface.
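As a hedged illustration, the sketch below shows an integration test in C: two hypothetical
modules (a parser and a calculator), each of which would pass unit testing on its own, are
exercised together so that the check targets the communication between them:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Module 1 (hypothetical): converts a numeric string to an int. */
static int parse_number(const char *s)
{
    return (int)strtol(s, NULL, 10);
}

/* Module 2 (hypothetical): doubles a value. */
static int double_value(int n)
{
    return 2 * n;
}

int main(void)
{
    /* Integration test: data produced by the parser flows into the
       calculator; the assertion checks the interface between them. */
    assert(double_value(parse_number("21")) == 42);
    printf("Integration test passed.\n");
    return 0;
}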
Any testing technique (black-box, white-box, or grey-box) can be used for integration
testing; some integration approaches are described below:
Top-Down Approach
The top-down testing strategy deals with the process in which higher-level modules are
tested with lower-level modules until the testing of all the modules is successfully
completed. Major design flaws can be detected and fixed early because critical modules are
tested first. In this method, we add the modules incrementally, one by one, and check the
data flow in the same order.
In the top-down approach, we ensure that the module we are adding is the child of the
previous one, like Child C being a child of Child B, and so on.
Advantages:
o An early prototype is possible.
Disadvantages:
o Identification of defects is difficult.
Bottom-Up Method
The bottom-up testing strategy deals with the process in which lower-level modules are
tested with higher-level modules until the testing of all the modules is successfully
completed. Top-level critical modules are tested last, so a defect in them may surface late.
In other words, we add the modules from the bottom to the top and check the data flow in
the same order.
In the bottom-up method, we ensure that the module we are adding is the parent of the
previous one.
Disadvantages
o Critical modules are tested last, due to which defects in them may remain undetected
for longer.
o There is no possibility of an early prototype.
Hybrid Method
In this approach, both the top-down and bottom-up approaches are combined for testing. In
this process, top-level modules are tested with lower-level modules and lower-level modules
are tested with high-level modules simultaneously. There is less possibility of a defect
slipping through, because each module interface is tested.
Advantages
o The hybrid method provides features of both the bottom-up and top-down methods.
o It is the most time-saving method.
o It provides complete testing of all modules.
Disadvantages
o This method needs a higher level of concentration, as the process is carried out in
both directions simultaneously.
o It is a complicated method.
Big-Bang Method
In this approach, testing is done via integration of all modules at once. It is convenient for
small software systems; if used for large software systems, identification of defects is
difficult. Since this testing can be done only after completion of all modules, the testing
team has less time for executing this process, so internally linked interfaces and high-risk
critical modules can be missed easily.
Advantages:
o It is convenient for small software systems.
Disadvantages:
o Identification of defects is difficult, because finding where an error came from is a
problem; we don't know the source of the bug.
o Small modules are missed easily.
o The time provided for testing is very limited.
o We may miss testing some of the interfaces.
3. System testing
System testing includes testing of a fully integrated software system. Generally, a computer
system is made with the integration of software (any software is only a single element of a
computer system). The software is developed in units and then interfaced with other
software and hardware to create a complete computer system. In other words, a computer
system consists of a group of software components that perform various tasks, but software
alone cannot perform the task; for that, the software must be interfaced with compatible
hardware.
System testing is a series of different types of tests whose purpose is to exercise and
examine the full working of an integrated software computer system against the requirements.
Performance Testing
Performance testing is carried out to check whether the system meets the non-functional
requirements identified in the SRS document. There are several types of performance
testing; some of them are discussed below. The types of performance testing to be carried
out on a system depend on the different non-functional requirements of the system.
Regression Testing
Regression testing is performed under system testing to identify whether any defect has
appeared in the system due to modification in any other part of the system. It makes sure
that any changes done during the development process have not introduced a new defect,
and it also gives assurance that old defects will not reappear as new software is added over
time.
Load Testing
Load testing is performed under system testing to check whether the system can work under
real-time loads or not. Load testing checks the performance of an application by applying a
load that is either less than or equal to the desired load.
For example: suppose 1000 users is the desired load, which is given by the customer, and
3/second is the goal which we want to achieve while performing the load testing.
Functional Testing
Functional testing of a system is performed to find whether any function is missing from the
system. The tester makes a list of vital functions that should be in the system; these can be
added during functional testing and should improve the quality of the system.
Recovery Testing
In this testing, we test the application to check how well it recovers from crashes or
disasters.
Migration Testing
Migration testing is performed to ensure that, if the system needs to be moved to new
infrastructure, it can be modified and moved without any issue.
Usability Testing
The purpose of this testing is to make sure that the system is easy for users to work with
and that it meets the objectives it is supposed to meet.
Software and Hardware Testing
This testing of the system intends to check hardware and software compatibility. The
hardware configuration must be compatible with the software for it to run without any issue.
Compatibility provides flexibility by enabling interactions between hardware and software.
Stress Testing
Stress testing checks the behavior of an application by applying a load greater than the
desired load.
For example: if we take the above example and increase the desired load from 1000 to 1100
users, with a goal of 4/second, the scenario qualifies as stress testing because the applied
load is greater (100 users more) than the actual desired load.
Scalability Testing
Stability Testing
4. Acceptance testing
Acceptance testing is formal testing based on user requirements and function processing. It
determines whether the software conforms to the specified requirements and user
requirements or not. It is conducted as a kind of black-box testing, where the required
number of users are involved in testing the acceptance level of the system. It is the fourth
and last level of software testing.
User acceptance testing (UAT) is a type of testing done by the customer before accepting
the final product. Generally, UAT is done by the customer (a domain expert) for their
satisfaction, to check whether the application works according to the given business
scenarios and real-time scenarios.
In UAT, we concentrate only on those features and scenarios which are regularly used by
the customer, the most common user scenarios for the business, or those scenarios which
are used daily by the end-user or the customer.
Once the software has undergone unit testing, integration testing, and system testing,
acceptance testing may seem redundant, but it is required for the following reasons.
o During the development of a project, changes in requirements may not be
communicated effectively to the development team.
o Developers develop functions by examining the requirement document based on their
own understanding, and may not capture the actual requirements of the client.
o There may be some minor errors which can be identified only when the system is
used by the end-user in the actual scenario; to find these minor errors, acceptance
testing is essential.
According to the testing plan, the customer has to write the acceptance requirements in
their own words and by themselves, but customers are often not willing to do that, which
defeats part of the point of acceptance testing. And if the test cases are written by someone
else, the customer may not understand them, so the tester has to perform the inspections
themselves.
Alpha Testing
Alpha testing is conducted in the organization and performed by a representative group of
end-users at the developer's site, and sometimes by an independent team of testers.
Alpha testing is simulated or real operational testing at an in-house site. It comes after
unit testing, integration testing, etc., and is used after all other in-house testing has been
executed.
It can be white-box or black-box testing, depending on the requirements; a particular lab
environment and simulation of the actual environment are required for this testing.
o Refines the software product by finding and rectifying bugs that weren't discovered
through previous tests.
o Alpha testing allows the team to test the software in a real-world environment.
o One of the reasons to do alpha testing is to ensure the success of the software
product.
o Alpha testing validates the quality and functionality of the software, and the
effectiveness of the software, before it is released into the real world.
o One of the benefits of alpha testing is that it reduces the delivery time of the project.
o It provides a complete test plan and test cases.
o It frees the team members for other projects.
o Every feedback helps to improve software quality.
o It provides a better observation of the software's reliability and accountability.
Beta Testing
Beta testing is a type of user acceptance testing and among the most crucial forms of
testing, performed before the release of the software. Beta testing is a type of field test,
performed at the end of the software testing life cycle. It can be considered external user
acceptance testing, and it is performed by real users. It is executed after alpha testing. In
beta testing, a new version of the software is released to a limited audience to check its
accessibility, usability, functionality, and more.
o Beta testing is the last phase of testing, which is carried out at the client's or
customer's site.
o Beta testing is done in a real environment at the user's site; it helps in providing an
actual picture of the quality.
o Testing is performed by the client, stakeholders, and end-users.
o Beta testing is always done after alpha testing, and before the product is released
into the market.
o Beta testing is black-box testing.
o Beta testing is performed in the absence of the testing team and the presence of real
users.
The differences between alpha and beta testing can be summarized as follows:
o Reliability and security testing are not checked in alpha testing; reliability, security,
and robustness are checked during beta testing.
o Alpha testing ensures the quality of the product before forwarding it to beta testing;
beta testing also concentrates on the quality of the product, but it collects users'
input on the product and ensures that the product is ready for real-time users.
o Alpha testing requires a testing environment or a lab; beta testing doesn't require a
testing environment or lab.
o Developers can immediately address critical issues or fixes in alpha testing; most of
the issues or feedback collected from beta testing will be implemented in future
versions of the product.
Verification is the process of checking that a piece of software achieves its goal without
any bugs. It is the process of ensuring that the product being developed is built right: it
verifies whether the developed product fulfils the requirements that we have. Verification is
static testing.
Verification asks: "Are we building the product right?"
Validation is the process of checking whether the software product is up to the mark; in
other words, whether the product meets the high-level requirements. It is the process of
checking the validity of the product, i.e., it checks that what we are developing is the right
product. It is validation of the actual product against the expected product. Validation is
dynamic testing.
Validation asks: "Are we building the right product?"
o Verification includes checking documents, design, code, and programs; validation
includes testing and validating the actual product.
o Methods used in verification are reviews, walkthroughs, inspections, and desk-checking;
methods used in validation are black-box testing, white-box testing, and non-functional
testing.
o Verification can find bugs in the early stages of development; validation can only find
the bugs that could not be found by the verification process.
Black-box testing and white-box testing can be contrasted in a similar way:
o Black-box testing is the behavior testing of the software; white-box testing is the
logic testing of the software.
o Black-box testing is also called closed testing; white-box testing is also called
clear-box testing.
o Black-box testing can be done by trial-and-error ways and methods; in white-box
testing, data domains along with inner or internal boundaries can be better tested.
White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box
Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a software
testing method in which the internal structure/ design/ implementation of the item being
tested is known to the tester. The tester chooses inputs to exercise paths through the code
and determines the appropriate outputs. Programming know-how and implementation
knowledge are essential. White box testing is testing beyond the user interface and into the
nitty-gritty of a system.
This method is named so because the software program, in the eyes of the tester, is like a
white/transparent box, inside which one clearly sees.
EXAMPLE
A tester, usually a developer as well, studies the implementation code of a certain field on a
webpage, determines all legal (valid and invalid) and illegal inputs, and verifies the outputs
against the expected outcomes, which are also determined by studying the implementation
code.
White Box Testing is like the work of a mechanic who examines the engine to see why the
car is not moving.
WHITE BOX TESTING ADVANTAGES
Testing can be commenced at an earlier stage. One need not wait for the GUI to be
available.
Testing is more thorough, with the possibility of covering most paths.
WHITE BOX TESTING DISADVANTAGES
Since tests can be very complex, highly skilled resources are required, with thorough
knowledge of programming and implementation.
Test script maintenance can be a burden if the implementation changes too
frequently.
Since this method of testing is closely tied to the application being tested, tools to
cater to every kind of implementation/platform may not be readily available.
Black Box Testing
This method is named so because the software program, in the eyes of the tester, is like a
black box, inside which one cannot see. This method attempts to find errors in the following
categories:
o Incorrect or missing functions
o Interface errors
o Errors in data structures or external database access
o Behavior or performance errors
o Initialization and termination errors
EXAMPLE
A tester, without knowledge of the internal structures of a website, tests the web pages by
using a browser; providing inputs (clicks, keystrokes) and verifying the outputs against the
expected outcome.
BLACK BOX TESTING ADVANTAGES
Tests are done from a user's point of view and will help in exposing discrepancies in
the specifications.
Tester need not know programming languages or how the software has been
implemented.
Tests can be conducted by a body independent from the developers, allowing for an
objective perspective and the avoidance of developer-bias.
Test cases can be designed as soon as the specifications are complete.
BLACK BOX TESTING DISADVANTAGES
Only a small number of possible inputs can be tested, and many program paths will
be left untested.
Without clear specifications, which is the situation in many projects, test cases will
be difficult to design.
Tests can be redundant if the software designer/ developer has already run a test
case.
Ever wondered why a soothsayer closes their eyes when foretelling events? Such is
almost the case in black box testing.
Test Case
The test case is defined as a group of conditions under which a tester determines whether a
software application is working as per the customer's requirements or not. Test case
designing includes preconditions, case name, input conditions, and expected result. A test
case is a first-level action and is derived from test scenarios.
Test case gives detailed information about testing strategy, testing process, preconditions,
and expected output. These are executed during the testing process to check whether the
software application is performing the task for which it was developed or not.
Test cases help the tester in defect reporting by linking the defect to a test case ID.
Detailed test case documentation works as a foolproof guard for the testing team, because if
the developer missed something, it can be caught during execution of these foolproof test
cases.
To write the test cases, we must have the requirements to derive the inputs, and the test
scenarios must be written so that we do not miss any features during testing. Then we
should have a test case template to maintain uniformity, so that every test engineer follows
the same approach when preparing the test document. A sample layout is sketched below.
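As an illustration, a simple test case layout might look like the following; the field names
follow the elements mentioned above, and the concrete values are hypothetical:

Test Case ID:       TC_LOGIN_001
Test Scenario:      Verify login with valid credentials
Preconditions:      User account exists; login page is open
Test Steps:         1. Enter a valid username
                    2. Enter a valid password
                    3. Click the Login button
Test Data:          username = "user1", password = "Pass@123"
Expected Result:    User is redirected to the home page
Actual Result:      (recorded during execution)
Status (Pass/Fail): (recorded during execution)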
Generally, we write the test cases while the developers are busy writing the code.
o When the customer gives the business needs, the developers start developing and
say, for example, that they need 3.5 months to build the product.
o In the meantime, the testing team will start writing the test cases.
o Once that is done, the test cases are sent to the Test Lead for the review process.
o When the developers finish developing the product, it is handed over to the
testing team.
o While testing the product, the test engineers follow the test case document rather
than re-reading the requirements, so that testing is consistent and does not depend
on the mood of the person, but rather on the quality of the documented test cases.
To ensure consistency in test case execution: we look at the test cases and start
testing the application.
To ensure better test coverage: for this, we should cover all possible scenarios and
document them, so that we need not remember all the scenarios again and again.
To make testing depend on the process rather than on a person: suppose a test engineer
tested an application during the first and second releases and left the company at the time
of the third release. The test engineer understood a module and tested the application
thoroughly by deriving many values. If that person is not there for the third release, it
becomes difficult for the new person; hence all the derived values are documented so that
they can be used in the future.
To avoid giving training on the product to every new test engineer: when a test engineer
leaves, he/she leaves with a lot of knowledge and scenarios. Those scenarios should be
documented, so that the new test engineer can test with the given scenarios and also write
new scenarios.
Test design is the act of creating and writing test suites for testing software.
Test analysis and identifying test conditions give us a general idea for testing which
covers quite a large range of possibilities. But when we come to make a test case, we
need to be very specific. In fact, now we need the exact and detailed specific input.
But just having some values to input to the system is not a test; if you don't know
what the system is supposed to do with the inputs, you will not be able to tell
whether your test has passed or failed.
Error guessing is a technique in which there is no specific method for identifying the error.
It is based on the experience of the test analyst, where the tester uses their experience to
guess the problematic areas of the software. It is a type of black box testing technique which does not
not have any defined structure to find the error.
The accomplishment of the error guessing technique is dependent on the ability and product
knowledge of the tester because a good test engineer knows where the bugs are most likely
to be, which helps to save lots of time.
The main purpose of this technique is to identify common errors at any level of testing by
exercising the following tasks:
o It deals with all possible errors which cannot be identified by formal testing.
o It must contain all-inclusive sets of test cases, without skipping any problematic
areas and without involving redundant test cases.
o This technique covers the checks left incomplete during the formal testing.
Example: consider an input field that accepts a 10-digit mobile number.
o What will be the result, if the entered character is other than a number?
o What will be the result, if entered characters are less than 10 digits?
o What will be the result, if the mobile field is left blank?
Error guessing is a key technique among all testing techniques, but as it depends on the
experience of the tester, it does not guarantee the highest quality benchmark, and it does
not provide full coverage of the software. This technique can yield better results if
combined with other testing techniques.
Equivalence Partitioning
In equivalence partitioning, the input domain is divided into classes (partitions) of data,
and one representative value is selected from each class.
Example: suppose an input field accepts the number ranges 1 to 10 and 20 to 30.
Hence there are five equivalence classes:
-∞ to 0 (invalid)
1 to 10 (valid)
11 to 19 (invalid)
20 to 30 (valid)
31 to ∞ (invalid)
You select one value from each class, i.e.,
-2, 3, 15, 25, 45
Advantages:
o We can achieve the minimum test coverage.
o It helps to decrease the overall test execution time and also reduces the set of test data.
Disadvantages:
o This technique does not consider the boundary conditions; boundary value analysis is
needed for that.
o The test engineer might assume that the output for every value in a data set is right,
which can lead to problems during the testing process.
A minimal code sketch of this technique is given below.
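A minimal sketch in C of how the representative values above could drive a test; the
validator accept() is hypothetical and is assumed to accept exactly the valid ranges
1 to 10 and 20 to 30:

#include <assert.h>
#include <stdio.h>

/* Hypothetical function under test: accepts 1..10 and 20..30. */
static int accept(int n)
{
    return (n >= 1 && n <= 10) || (n >= 20 && n <= 30);
}

int main(void)
{
    /* One representative value per equivalence class. */
    assert(accept(-2) == 0);  /* class: below 1 (invalid)  */
    assert(accept(3)  == 1);  /* class: 1 to 10 (valid)    */
    assert(accept(15) == 0);  /* class: 11 to 19 (invalid) */
    assert(accept(25) == 1);  /* class: 20 to 30 (valid)   */
    assert(accept(45) == 0);  /* class: above 30 (invalid) */
    printf("Equivalence partitioning tests passed.\n");
    return 0;
}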
Boundary Value Analysis
Boundary value analysis is one of the most widely used test case design techniques for
black box testing. It is used to test boundary values, because input values near a boundary
have higher chances of error.
Whenever we do testing by boundary value analysis, the tester focuses on whether the
software produces the correct output when boundary values are entered.
Boundary values are those that contain the upper and lower limits of a variable. Assume
that age is a variable of some function, with a minimum value of 18 and a maximum value
of 30; then both 18 and 30 will be considered boundary values.
If an input condition is restricted between values x and y, then the test cases should
be designed with values x and y as well as values which are above and below x and
y.
If an input condition covers a large number of values, test cases should be developed
which exercise the minimum and maximum numbers. Here, values just above and below
the minimum and maximum values are also tested.
Example: for the age variable above, with a valid range of 18 to 30, boundary value
analysis yields the test values 17, 18, 19 and 29, 30, 31. A hedged test sketch follows.
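A short sketch in C of these boundary checks; the validator is_valid_age() is hypothetical
and is assumed to accept exactly the range 18 to 30:

#include <assert.h>
#include <stdio.h>

/* Hypothetical function under test: valid age range is 18..30. */
static int is_valid_age(int age)
{
    return age >= 18 && age <= 30;
}

int main(void)
{
    /* Values at, just below, and just above each boundary. */
    assert(is_valid_age(17) == 0);  /* just below lower boundary */
    assert(is_valid_age(18) == 1);  /* lower boundary            */
    assert(is_valid_age(19) == 1);  /* just above lower boundary */
    assert(is_valid_age(29) == 1);  /* just below upper boundary */
    assert(is_valid_age(30) == 1);  /* upper boundary            */
    assert(is_valid_age(31) == 0);  /* just above upper boundary */
    printf("Boundary value tests passed.\n");
    return 0;
}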
Decision Table
The first task is to identify functionalities where the output depends on a combination of
inputs. If there is a large set of input combinations, divide it into smaller subsets, which
are helpful for managing a decision table.
For every function, you need to create a table and list down all types of combinations of
inputs and their respective outputs. This helps to identify conditions that are overlooked by
the tester.
In order to find the number of all possible conditions, the tester uses the 2^n formula,
where n denotes the number of inputs; for example, with 2 inputs that can each be true or
false, there are 2^2 = 4 combinations, as illustrated below.
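For illustration, a decision table for a hypothetical login function with two inputs
("username correct?" and "password correct?") has 2^2 = 4 rules:

Rule            R1      R2      R3      R4
Username OK?    T       T       F       F
Password OK?    T       F       T       F
Outcome         Login   Error   Error   Error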
5. State Transition
In the state transition technique, changes in input conditions change the state of the
application under test (AUT). This testing technique allows the tester to test the behavior
of an AUT; the tester can perform this action by entering various input conditions in a
sequence. In the state transition technique, the testing team provides positive as well as
negative input test values for evaluating the system behavior.
State transition should be used when the testing team is testing the application for a
limited set of input values. It should also be used when the testing team wants to test the
sequence of events that happen in the application under test.
Example:
In the following example, if the user enters a valid password in any of the first three
attempts, the user will be able to log in successfully. If the user enters an invalid
password in the first or second try, the user will be prompted to re-enter the
password. When the user enters the password incorrectly a third time, action is taken
and the account is blocked.
In the corresponding state diagram, when the user enters the correct PIN, he or she moves
to the Access Granted state, and a state table is created based on the diagram. A minimal
sketch of this state machine is given below.
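A minimal sketch in C of the login state machine described above; the state names and the
transition function are hypothetical reconstructions of the example, not code from the
original notes:

#include <assert.h>
#include <stdio.h>

/* States of the hypothetical three-attempt login example. */
enum state { TRY1, TRY2, TRY3, ACCESS_GRANTED, ACCOUNT_BLOCKED };

/* Transition function: one login attempt moves the machine to the
   next state depending on whether the entered password is valid. */
static enum state attempt(enum state s, int password_ok)
{
    if (password_ok)
        return ACCESS_GRANTED;
    switch (s) {
    case TRY1: return TRY2;            /* prompt to re-enter   */
    case TRY2: return TRY3;            /* prompt to re-enter   */
    case TRY3: return ACCOUNT_BLOCKED; /* third failed attempt */
    default:   return s;               /* terminal states      */
    }
}

int main(void)
{
    /* Positive sequence: valid password on the second attempt. */
    enum state s = attempt(TRY1, 0);
    assert(s == TRY2);
    assert(attempt(s, 1) == ACCESS_GRANTED);

    /* Negative sequence: three invalid passwords block the account. */
    s = attempt(attempt(attempt(TRY1, 0), 0), 0);
    assert(s == ACCOUNT_BLOCKED);

    printf("State transition tests passed.\n");
    return 0;
}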
1. Statement coverage
The statement coverage strategy aims to design test cases so that every statement in a
program is executed at least once. The principal idea governing the statement coverage
strategy is that unless a statement is executed, it is very hard to determine if an error
exists in that statement. Unless a statement is executed, it is very difficult to observe
whether it causes failure due to some illegal memory access, wrong result computation,
etc. However, executing some statement once and observing that it behaves properly for
that input value is no guarantee that it will behave correctly for all input values. In the
following, the design of test cases using the statement coverage strategy is shown.
int compute_gcd(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, we can exercise the
program such that all statements are executed at least once.
Statement coverage derives test case scenarios under the white box testing process, based
upon the structure of the code. The small program used in this example adds two numbers
and reports whether the result is positive or negative.
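A plausible reconstruction of the example program in C, consistent with the two scenarios
below (the exact code and the way its 7 statements are counted are assumptions):

#include <stdio.h>

/* Plausible reconstruction (assumed): adds two numbers and reports
   whether the sum is positive or negative. The original notes count
   7 statements in this program in total. */
void print_result(int a, int b)
{
    int sum = a + b;
    if (sum > 0)
        printf("This is a positive result.\n");
    else
        printf("This is a negative result.\n");
}

int main(void)
{
    print_result(5, 4);    /* Scenario 1: sum = 9  */
    print_result(-2, -7);  /* Scenario 2: sum = -9 */
    return 0;
}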
Now, let's see two different scenarios and the calculation of the percentage of statement
coverage for the given source code.
Scenario 1:
If a = 5, b = 4
In scenario 1, the value of sum will be 9, which is greater than 0, and as per the
condition the result will be "This is a positive result." Only the statements on that
execution path are exercised in this scenario.
To calculate the statement coverage of the first scenario, take the total number of
statements, which is 7, and the number of executed statements, which is 5:
statement coverage = 5/7 × 100 ≈ 71%.
Likewise, in scenario 2:
Scenario 2:
If a = -2, b = -7
In scenario 2, the value of sum will be -9, which is less than 0, and as per the
condition the result will be "This is a negative result." Again, only the statements on
that execution path are exercised.
To calculate the statement coverage of the second scenario, take the total number of
statements, which is 7, and the number of executed statements, which is 6:
statement coverage = 6/7 × 100 ≈ 86%.
2. Branch coverage
In the branch coverage-based testing strategy, test cases are designed to make each
branch condition to assume true and false values in turn. Branch testing is also known as
edge testing as in this testing scheme, each edge of a program’s control flow graph is
traversed at least once.
The branch coverage technique is used to cover all branches of the control flow graph. It
covers all the possible outcomes (true and false) of each decision point at least once.
The branch coverage technique is a white-box testing technique that ensures that every
branch of each decision point is executed.
The branch coverage technique and the decision coverage technique are very similar, but
there is a key difference between the two: decision coverage covers the true and false
outcomes of each decision point, whereas branch coverage covers every edge (branch) of the
control flow graph, including unconditional ones. In other words, decision coverage follows
the decision points, and branch coverage follows the edges. Many different metrics can be
used to measure branch coverage and decision coverage, but the most basic metric is the
percentage of branches or paths exercised during the execution of the program.
Like decision coverage, it also uses a control flow graph to calculate the number of
branches.
Example 1: It is obvious that branch testing guarantees statement coverage and is
thus a stronger testing strategy than statement coverage-based testing. For Euclid's GCD
computation algorithm, the test cases for branch coverage can be {(x=3, y=3), (x=3, y=2),
(x=4, y=3), (x=3, y=4)}.
Example 2:
1. Read X
2. Read Y
3. IF X+Y > 100 THEN
4. Print "Large"
5. ENDIF
6. IF X + Y < 100 THEN
7. Print "Small"
8. ENDIF
This is the basic code structure where we took two variables X and Y and two conditions. If
the first condition is true, then print "Large" and if it is false, then go to the next condition.
If the second condition is true, then print "Small."
In the control flow graph of this code, traversing through the "Yes" decision in the first
case, the path is A1-B2-C4-D6-E8, and the covered edges are 1, 2, 4, 5, 6, and 8, but edges
3 and 7 are not covered on this path. To cover these edges, we have to traverse through the
"No" decision. In the case of the "No" decision, the path is A1-B3-C5-D7, and the covered
edges are 3 and 7. So, by traveling through these two paths, all branches are covered.
Path 1 - A1-B2-C4-D6-E8
Path 2 - A1-B3-C5-D7
Branch Coverage (BC) = Number of paths = 2

Decision   Edges covered      Path
Yes        1, 2, 4, 5, 6, 8   A1-B2-C4-D6-E8
No         3, 7               A1-B3-C5-D7
3. Condition/decision coverage
In this structural testing, test cases are designed to make each component of a composite
conditional expression to assume both true and false values. For example, in the
conditional expression ((c1.and.c2).or.c3), the components c1, c2 and c3 are each made
to assume both true and false values. Branch testing is probably the simplest condition
testing strategy where only the compound conditions appearing in the different branch
statements are made to assume the true and false values. Thus, condition testing is a
stronger testing strategy than branch testing and branch testing is stronger testing strategy
than the statement coverage-based testing. For a composite conditional expression of n
components, for condition coverage, 2ⁿ test cases are required. Thus, for
condition coverage, the number of test cases increases exponentially with the number of
component conditions. Therefore, a condition coverage-based testing technique is
practical only if n (the number of conditions) is small.
Generally, a decision point has two decision values, one true and one false, which is why
the total number of outcomes is usually two. The percentage of decision coverage can be
found by dividing the number of exercised outcomes by the total number of outcomes and
multiplying by 100:
Decision Coverage = (Number of exercised outcomes / Total number of outcomes) × 100
In this technique, it is tough to get 100% coverage, because sometimes expressions get
complicated. Due to this, there are several different methods for reporting decision
coverage. All these methods cover the most important combinations and are very similar to
decision coverage. The benefit of these methods is the enhancement of the sensitivity of
control flow.
Scenario 1:
The value of a is 7 (a = 7).
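A minimal sketch of a program fragment that fits this scenario, assuming a single decision
point if (a > 5); the code is an illustration, not the original example:

#include <stdio.h>

int main(void)
{
    int a = 7;             /* Scenario 1: a = 7                   */
    if (a > 5)             /* decision point: outcomes true/false */
        printf("big\n");   /* executed: true outcome exercised    */
    else
        printf("small\n"); /* not executed: false outcome missed  */
    /* Decision coverage = 1 exercised outcome / 2 outcomes = 50%;
       a second scenario (e.g., a = 2) would be needed for 100%.  */
    return 0;
}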
4. Path coverage
The path coverage-based testing strategy requires us to design test cases such that all
linearly independent paths in the program are executed at least once. A linearly
independent path can be defined in terms of the control flow graph (CFG)of a program.
The CFG for any program can be easily drawn by knowing how to represent the
sequence, selection, and iteration types of statements in the CFG. After all, a program is
made up of these types of statements. Fig. 10.3 summarizes how the CFG for these
three types of statements can be drawn. It is important to note that for the iteration type of
constructs such as the while construct, the loop condition is tested only at the beginning
of the loop and therefore the control flow from the last statement of the loop is always to
the top of the loop. Using these basic ideas, the CFG of Euclid’s GCD computation
algorithm can be drawn as shown in fig. 10.4.
In order to understand the path coverage-based testing strategy, it is very much necessary to
understand the control flow graph (CFG) of a program. Control flow graph (CFG) of a
program has been discussed earlier.
The path-coverage testing does not require coverage of all paths but only coverage of linearly
independent paths. Linearly independent paths have been discussed earlier.
Cyclomatic complexity
For more complicated programs it is not easy to determine the number of independent paths of
the program. McCabe’s cyclomatic complexity defines an upper bound for the number of
linearly independent paths through a program. Also, the McCabe’s cyclomatic complexity is
very simple to compute. Thus, the McCabe’s cyclomatic complexity metric provides a
practical way of determining the maximum number of linearly independent paths in a
program. Though the McCabe's metric does not directly identify the linearly independent
paths, it informs us approximately how many paths to look for.
There are three different ways to compute the cyclomatic complexity. The answers computed
by the three methods are guaranteed to agree.
Method 1:
Given a control flow graph G of a program, the cyclomatic complexity V(G) can be
computed as:
V(G) = E – N + 2
where N is the number of nodes of the control flow graph and E is the number of edges.
For the CFG of example shown in fig. 10.4, E=7 and N=6. Therefore, the cyclomatic
complexity = 7-6+2 = 3.
Method 2:
An alternative way of computing the cyclomatic complexity of a program from an
inspection of its control flow graph is as follows:
V(G) = Total number of bounded areas + 1
In the program’s control flow graph G, any region enclosed by nodes and edges can be
called as a bounded area. This is an easy way to determine the McCabe’s cyclomatic
complexity. But what if the graph G is not planar, i.e., however you draw the
graph, two or more edges intersect? Actually, it can be shown that structured programs
always yield planar graphs, but the presence of GOTOs can easily add intersecting
edges. Therefore, for non-structured programs, this way of computing the McCabe's
cyclomatic complexity cannot be used.
The number of bounded areas increases with the number of decision paths and loops.
Therefore, the McCabe’s metric provides a quantitative measure of testing difficulty
and the ultimate reliability. For the CFG example shown in fig. 10.4, from a visual
examination of the CFG the number of bounded areas is 2. Therefore the cyclomatic
complexity, computing with this method is also 2+1 = 3. This method provides a very
easy way of computing the cyclomatic complexity of CFGs, just from a visual
examination of the CFG. On the other hand, the first method (V(G) = E - N + 2) is more
amenable to automation, i.e., it can be easily coded into a program which can be used to
determine the cyclomatic complexities of arbitrary CFGs.
Method 3:
The cyclomatic complexity of a program can also be easily computed by counting the
number of decision statements in the program. If N is the number of decision statements of
a program, then the McCabe's metric is equal to N + 1.
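As a consistency check, all three methods can be applied to the CFG of the compute_gcd
program given earlier (fig. 10.4). Method 1: V(G) = E - N + 2 = 7 - 6 + 2 = 3. Method 2:
V(G) = number of bounded areas + 1 = 2 + 1 = 3. Method 3: the program contains two
decision statements (the while condition and the if condition), so V(G) = 2 + 1 = 3. As
expected, the three answers agree.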
Data flow-based testing method selects test paths of a program according to the locations of
the definitions and uses of different variables in a program.
For a statement numbered S, let DEF(S) denote the set of variables defined (assigned a
value) by S, and USES(S) the set of variables used (read) by S. For the statement
S: a = b + c;, DEF(S) = {a} and USES(S) = {b, c}. The definition of variable X at
statement S is said to be live at statement S1 if there exists a path from statement S to
statement S1 which does not contain any definition of X.
The definition-use chain (or DU chain) of a variable X is of the form [X, S, S1], where S
and S1 are statement numbers such that X is in DEF(S) and X is in USES(S1), and the
definition of X at statement S is live at statement S1.
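For example, in the fragment S: x = 5; followed later by S1: y = x + 1; with no
redefinition of x in between, the definition of x at S is live at S1, and [x, S, S1] is a
DU chain for x.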
6. Loop Testing: Loops are widely used and these are fundamental to many algorithms
hence, their testing is very important. Errors often occur at the beginnings and ends of
loops.
1. Simple loops: For simple loops of size n, test cases are designed that (see the
sketch after this list):
1. Skip the loop entirely
2. Make only one pass through the loop
3. Make 2 passes
4. Make m passes, where m < n
5. Make n-1 and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and we
start from the innermost loop. Simple loop tests are conducted for the innermost loop
and this is worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are
applied for each.
If they’re not independent, treat them like nesting.
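A sketch in C of simple-loop testing; the function under test, sum_first(), is hypothetical,
with a loop that runs n times, and the chosen pass counts follow the list for simple loops
above:

#include <assert.h>
#include <stdio.h>

/* Hypothetical function under test: sums the first n integers
   using a simple loop of n passes. */
static int sum_first(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}

int main(void)
{
    int n = 10;  /* nominal loop size used for the tests */

    assert(sum_first(0) == 0);       /* skip the loop entirely    */
    assert(sum_first(1) == 1);       /* one pass through the loop */
    assert(sum_first(2) == 3);       /* two passes                */
    assert(sum_first(5) == 15);      /* m passes, where m < n     */
    assert(sum_first(n - 1) == 45);  /* n-1 passes                */
    assert(sum_first(n + 1) == 66);  /* n+1 passes                */
    printf("Loop tests passed.\n");
    return 0;
}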
Test Coverage
The amount of testing performed by a set of test cases is called test coverage. By amount
of testing we mean what parts of the application program are exercised when we run a test
suite. In other words, test coverage is a technique which determines whether our test cases
are actually covering the application code, and how much code is exercised when we run
those test cases. When we can count things in an application and also tell whether the test
cases are covering those things, then we can say that we measure the coverage. So
basically, the coverage items are the items we have been able to count and for which we
can see which ones have been covered by the tests. The test coverage of two executed test
cases can be the same, but the input data of one test case can find a defect while the input
data of the second cannot. With this we understand that 100% coverage does not mean
100% tested.
Mainly, more focus is put on getting code coverage information through code-based testing
and requirement-based testing, but not much stress is put on analyzing the code coverage by
covering the maximum number of items in code coverage.
By test coverage we can actually determine which parts of the code were touched for the
deployment/release.
Mutation Testing
Mutation testing is a white box method in software testing where we insert errors purposely
into a program (under test) to verify whether the existing test cases can detect the errors
or not. In this testing, mutants of the program are created by making some modifications to
the original program.
The primary objective of mutation testing is to check whether the test cases make each
mutant produce an output that is different from the output of the original program. We make
only slight modifications in the mutant program, because if we change it on a massive
scale, then it will affect the overall plan.
When a mutant goes undetected, it implies that either the mutant is equivalent to the
original program or the test cases are inefficient at identifying the fault.
The purpose of mutation testing is to evaluate the quality of the test cases, which should
be able to fail the mutant code. Hence this method is also known as fault-based testing, as
it is used to produce errors in a program; that is why we can say that mutation testing is
performed to check the efficiency of the test cases.
What is mutation?
A mutation is a small modification in a program; these minor modifications are modeled on
the typical low-level errors which happen during the coding process.
Generally, mutation operators are expressed in the form of rules which match parts of the
program and generate mutants from them.
o It is the most powerful method to detect hidden defects, which might be impossible to
identify using conventional testing techniques.
o Tools such as Insure++ help us to find defects in the code using state-of-the-art
techniques.
o Debugging and maintaining the product becomes easier than ever.
Decision Mutations: The decisions/conditions are changed to check for design
errors. Typically, one changes the arithmetic operators to locate defects, and we
can also consider mutating all relational operators and logical operators (AND, OR, NOT).
Decision mutations
In this type of mutation testing, we check for design errors. Here, we modify the
arithmetic and logical operators to detect errors in the program, for example:
o plus (+) → minus (-)
o asterisk (*) → double asterisk (**)
o plus (+) → incremental operator (i++)
Value mutations
In this type, values are modified to identify errors in the program; generally, we change
the constant values used in the program.
For example (illustrative): changing an initialization such as int count = 0; into
int count = 1;.
Statement Mutations
Statement mutations mean that we modify the statements themselves, by removing or
replacing a line, as shown in the sketch below.
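The original example was shown as an image; the following is a minimal, hypothetical
sketch in C of a statement mutation and a test case that kills it:

#include <assert.h>
#include <stdio.h>

/* Original function (hypothetical example). */
static int max_original(int x, int y)
{
    int max = y;
    if (x > y)
        max = x;
    return max;
}

/* Statement mutant: the assignment inside the if has been removed. */
static int max_mutant(int x, int y)
{
    int max = y;
    if (x > y) {
        /* statement deleted by the mutation */
    }
    return max;
}

int main(void)
{
    /* A test case with x > y kills the mutant: the outputs differ. */
    assert(max_original(5, 2) == 5);
    assert(max_mutant(5, 2) != max_original(5, 2));
    printf("The test case detects (kills) the statement mutant.\n");
    return 0;
}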
o In this, firstly, we add the errors into the source code of the program by producing
various versions, which are known as mutants. Every mutant contains a single error,
which makes the mutant version faulty and lets us validate the efficiency of the test
cases.
o After that, we run the test cases on the mutant programs as well as on the actual
application to find the errors in the code.
o Once we identify the faults, we match the output of the actual code and the mutant code.
Disadvantages
o This testing is a time-consuming and costly process, because many mutant
programs need to be created.
o Mutation testing is not appropriate for black-box testing, as it involves
modification of the source code.
o Every mutant must be run against the test suite just like the actual program;
therefore, a significant number of mutant programs may need to be tested
against the real test suite.
o As it is a tedious process, this testing requires automation tools to test the
application.
Static analysis
Static analysis involves no dynamic execution of the software under test and can detect possible
defects in an early stage, before running the program. Static analysis is done after coding and
before executing unit tests.
Static analysis can be done by a machine to automatically “walk through” the source code and
detect noncomplying rules. The classic example is a compiler which finds lexical, syntactic and
even some semantic mistakes.
Static analysis can also be performed by a person who would review the code to ensure proper
coding standards and conventions are used to construct the program. This is often called Code
Review and is done by a peer developer, someone other than the developer who wrote the code.
Static analysis is also used to prevent developers from using risky or buggy parts of the
programming language, by setting rules about constructs that must not be used.
Properties that static analysis can measure or check include:
Lines of code
Comment frequency
Proper nesting
Number of function calls
Cyclomatic complexity
Can also check for unit tests
Quality attributes that can be the focus of static analysis:
Reliability
Maintainability
Testability
Re-usability
Portability
Efficiency
The main advantage of static analysis is that it finds issues with the code before it is ready for
integration and further testing.
It can be conducted by trained software assurance developers who fully understand the
code.
Source code can be easily understood by other or future developers
It allows a quicker turn around for fixes
Weaknesses are found earlier in the development life cycle, reducing the cost to fix.
Fewer defects in later tests
Unique defects are detected that cannot (or can hardly) be detected using dynamic tests,
such as:
Unreachable code
Variable use (undeclared, unused)
Uncalled functions
Boundary value violations
Dynamic Analysis
In contrast to static analysis, where code is not executed, dynamic analysis is based on
system execution, often using tools.
Dynamic program analysis is the analysis of computer software that is performed by
executing programs built from that software on a real or virtual processor (analysis
performed without executing programs is known as static analysis).
The most common dynamic analysis practice is executing unit tests against the code to find
any errors in the code.
Limitations of dynamic analysis, particularly when using automated tools:
Automated tools provide a false sense of security that everything is being addressed.
Cannot guarantee the full test coverage of the source code
Automated tools produce false positives and false negatives.
Automated tools are only as good as the rules they are using to scan with.
It is more difficult to trace the vulnerability back to the exact location in the code, taking
longer to fix the problem.
Reliability Metrics
Reliability metrics are used to quantitatively express the reliability of the software
product. The choice of which metric to use depends upon the type of system to which it
applies and the requirements of the application domain.
Some reliability metrics which can be used to quantify the reliability of the software
product are as follows:
1. Mean Time to Failure (MTTF)
MTTF is described as the time interval between two successive failures. An MTTF of 200
means that one failure can be expected every 200 time units. The time units are entirely
dependent on the system, and MTTF can even be stated in terms of the number of
transactions. MTTF is suitable for systems with long transactions.
To measure MTTF, we can record the failure data for n failures. Let the failures appear at
the time instants t1, t2, ..., tn. Then MTTF is the average interval between successive
failures:
MTTF = (sum of (t(i+1) - t(i)), for i = 1 to n-1) / (n - 1)
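As an illustration, here is a small sketch in C that computes MTTF from recorded failure
instants; the sample timestamps are hypothetical:

#include <stdio.h>

int main(void)
{
    /* Hypothetical failure instants t1..tn (in time units). */
    double t[] = { 120.0, 310.0, 530.0, 740.0, 960.0 };
    int n = sizeof t / sizeof t[0];

    /* MTTF = (sum of (t(i+1) - t(i))) / (n - 1), i.e. the average
       interval between successive failures.                      */
    double sum = 0.0;
    for (int i = 0; i < n - 1; i++)
        sum += t[i + 1] - t[i];
    printf("MTTF = %.1f time units\n", sum / (n - 1));  /* 210.0 */
    return 0;
}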
2. Mean Time to Repair (MTTR)
Once a failure occurs, some time is required to fix the error. MTTR measures the average
time it takes to track down the errors causing the failure and to fix them.
3. Mean Time Between Failure (MTBF)
We can combine the MTTF and MTTR metrics to get the MTBF metric: MTBF = MTTF + MTTR.
Thus, an MTBF of 300 denotes that once a failure appears, the next failure is expected to
appear only after 300 hours. In this metric, the time measurements are real time and not
the execution time as in MTTF.
4. Rate of Occurrence of Failure (ROCOF)
ROCOF is the number of failures appearing in a unit time interval, i.e., the number of
unexpected events over a specific time of operation. ROCOF is the frequency with which
unexpected behavior is likely to appear. A ROCOF of 0.02 means that two failures are
likely to occur in each 100 operational time unit steps. It is also called the failure
intensity metric.
5. Probability of Failure on Demand (POFOD)
POFOD is described as the probability that the system will fail when a service is
requested; it is the number of system failures divided by the number of system inputs. A
POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential
measure for safety-critical systems and is relevant for protection systems where services
are demanded occasionally.
6. Availability (AVAIL)
Availability is the probability that the system is available for use at a given time. It
takes into account the repair time and the restart time for the system. An availability of
0.995 means that in every 1000 time units, the system is expected to be available for 995
of them. Availability is the percentage of time that a system is available for use, taking
into account planned and unplanned downtime. If a system is down an average of four hours
out of 100 hours of operation, its AVAIL is 96%.