Bcse301l Se Module-5a Smsatapathy

This document discusses software testing concepts like verification, validation, black box testing, white box testing, equivalence partitioning, and boundary value analysis. It provides details on different testing strategies and approaches for designing test cases.


SOFTWARE ENGINEERING

L T P C
BCSE301L - 3 0 0 3
BCSE301P - 0 0 2 1

Dr. S M SATAPATHY
Associate Professor Sr.,
School of Computer Science and Engineering,
VIT Vellore, TN, India – 632 014.
Module – 5

Verification and Validation

1. Strategic Approach to Testing

2. Testing Fundamentals

3. Testing of Different Applications

2
Module - 5
3

 Important concepts in program testing
 Black-box testing:
  equivalence partitioning
  boundary value analysis
 White-box testing
 Debugging
 Unit, Integration, and System testing
Module - 5
4

 Introduction to Testing.
 White-box testing:
 statement coverage
 path coverage
 branch testing
 condition coverage
 Cyclomatic complexity
Module - 5
5

 Data flow testing


 Mutation testing
 Cause effect graphing
 Performance testing.
 Test summary report
How do you test a program?
6

Input test data to the program.


Observe the output:
 Check if the program behaved as expected.
How do you test a system?
7
How do you test a system?
8

 If the program does not behave as expected:
  note the conditions under which it failed.
  later debug and correct.
Error, Faults, and Failures
9

A failure is a manifestation of an error (aka defect or bug):
 the mere presence of an error may not lead to a failure.
Error, Faults, and Failures
10

 A fault is an incorrect state entered during program execution:
  a variable value is different from what it should be.
  A fault may or may not lead to a failure.
Consequences of Bugs
11
Test cases and Test suites
12

 Test a software using a set of carefully designed test cases:
  the set of all test cases is called the test suite.
Test cases and Test suites
13

 A test case is a triplet [I,S,O], where:
  I is the data to be input to the system,
  S is the state of the system at which the data will be input,
  O is the expected output of the system.
Verification versus Validation
14

 Verification is the process of determining:
  whether the output of one phase of development conforms to its previous phase.
 Validation is the process of determining:
  whether a fully developed system conforms to its SRS document.
Verification versus Validation
15

 Verification is concerned with phase containment of errors,
  whereas the aim of validation is that the final product be error free.
 Verification: "Are we building the product right?"
 Validation: "Are we building the right product?"
Verification versus Validation
16
Verification vs. Validation:
 Verification is the process of finding whether the software meets the specified requirements for a particular phase; validation checks whether the software meets the requirements and expectations of the customer.
 Verification evaluates an intermediate product; validation evaluates the final product.
 The objective of verification is to check whether the software is constructed according to the requirement and design specifications; the objective of validation is to check whether the specifications are correct and satisfy the business need.
 Verification checks whether the outputs are as per the inputs; validation checks whether the product is accepted by the user.
 Verification is done before validation.
 Plans, requirements, specifications, and code are evaluated during verification; the actual product or software is tested during validation.
 Verification is a manual checking of files and documents; validation is a computer-based execution of the developed program.
 Verification activities: reviews, walkthroughs, inspections. Validation activities: testing.
Who tests the Software?
17
Testing Myths
18

 Software Testing is Monotonous/Boring
  Think of testing as an information-gathering activity done with the intent of exploring and discovering answers, not just discovering flaws or bugs in the software.
 A Tester Should Be Able To Test Everything
  This is indeed a foolish notion. The reasons justifying this perception can be many: lack of enough time, lack of available infrastructure to test something, the vastness of all permutations and combinations that can exist, etc.
Testing Myths
19

 Testers only find Bugs
  Along with finding bugs, testers also analyze the requirements, review the product architecture, provide ideas to make the product more user-friendly, validate the help documents, and do a lot of other things.
 Testers Add No Value to the Software
  A skilled tester is often an expert on the system and gets a better chance to demonstrate their understanding of the product in a way that adds value to the product.
 Test Automation will make Human Testers obsolete
  Under certain contexts, automation tools can act as supplementary tools to aid human testers, NOT to replace them.
Testing Strategy
20

 We begin by 'testing-in-the-small' and move toward 'testing-in-the-large'.
 For conventional software:
  The module (component) is our initial focus.
  Integration of modules follows.
 For OO software:
  Our focus when "testing in the small" changes from an individual module (the conventional view) to an OO class that encompasses attributes and operations and implies communication and collaboration.
21
Design of Test Cases
 Exhaustive testing of any non-trivial
system is impractical:
 input data domain is extremely large.
 Design an optimal test suite:
of reasonable size and
uncovers as many errors as possible.
22
Design of Test Cases
 If test cases are selected randomly:
  many test cases would not contribute to the significance of the test suite,
  i.e. they would not detect errors not already detected by other test cases in the suite.
 The number of test cases in a randomly selected test suite:
  is not an indication of the effectiveness of testing.
23
Design of Test Cases
 Testing a system using a large number of
randomly selected test cases:
 does not mean that many errors in the system
will be uncovered.
 Consider an example for finding the
maximum of two integers x and y.
24
Design of Test Cases
 The code has a simple programming error:
  if (x>y) max = x;
  else max = x;   /* error: should be max = y */
 The test suite {(x=3,y=2), (x=2,y=3)} can detect the error,
 whereas the larger test suite {(x=3,y=2), (x=4,y=3), (x=5,y=1)} does not detect the error.
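The effect of the two suites can be checked mechanically. A minimal sketch (the function names are mine; the buggy body is the one from the slide):

```c
#include <assert.h>

/* Buggy version from the slide: the else branch assigns x instead of y. */
static int max_buggy(int x, int y) {
    int max;
    if (x > y) max = x;
    else max = x;          /* bug: should be max = y */
    return max;
}

/* Reference behaviour for comparison. */
static int max_correct(int x, int y) {
    return (x > y) ? x : y;
}
```

The suite {(3,2),(2,3)} exposes the bug because max_buggy(2,3) returns 2 instead of 3; the larger suite {(3,2),(4,3),(5,1)} only ever exercises the x>y branch, so every case passes.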
25
Design of Test Cases
 Systematic approaches are
required to design an optimal test
suite:
each test case in the suite should
detect different errors.
26
Design of Test Cases
 There are essentially two main
approaches to design test cases:
 Black-box approach
 White-box (or glass-box) approach
Black-box Testing
27

 Test cases are designed using only


functional specification of the software:
 without any knowledge of the internal
structure of the software.
 For this reason, black-box testing is also
known as functional testing.
White-box Testing
28

 Designing white-box test cases:


requires knowledge about the internal
structure of software.
white-box testing is also called
structural testing.
Black-box Testing
29

 There are essentially two main


approaches to design black box
test cases:
 Equivalence class partitioning
 Boundary value analysis
Equivalence Class Partitioning
30

 Input values to a program are


partitioned into equivalence classes.
 Partitioning is done such that:
program behaves in similar ways to
every input value belonging to an
equivalence class.
Why define equivalence classes?
31

 Test the code with just one


representative value from each
equivalence class:
as good as testing using any other
values from the equivalence classes.
Equivalence Class Partitioning
32

 How do you determine the


equivalence classes?
examine the input data.
few general guidelines for
determining the equivalence
classes can be given
Equivalence Class Partitioning
33

 If the input data to the program is


specified by a range of values:
  e.g. numbers between 1 and 5000.
  one valid and two invalid equivalence classes are defined.
Equivalence Class Partitioning
34

 If input is an enumerated set of values:


 e.g. {a, b, c}
 one equivalence class for valid input values

 another equivalence class for invalid input


values should be defined.
35
Example
 A program reads an input value in the range of 1 and 5000:
  computes the square root of the input number.
36
Example (cont.)
 There are three equivalence classes:
 the set of negative integers,
 set of integers in the range of 1 and 5000,

 integers larger than 5000.
37
Example (cont.)
 The test suite must include:
  representatives from each of the three equivalence classes:
  a possible test suite can be: {-5, 500, 6000}.
Boundary Value Analysis
38

 Some typical programming errors occur:


 at boundaries of equivalence classes
 might be purely due to psychological
factors.
 Programmers often fail to see:
 special processing required at the boundaries of equivalence classes.
Boundary Value Analysis
39

 Programmers may improperly use <


instead of <=
 Boundary value analysis:

select test cases at the boundaries


of different equivalence classes.
40
Example
 For a function that computes the
square root of an integer in the
range of 1 and 5000:
test cases must include the values: {0, 1, 5000, 5001}.
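A sketch of the example under test, combining the equivalence-class representatives {-5, 500, 6000} with the boundary values {0, 1, 5000, 5001}. The function name and the -1 error convention are my assumptions, not from the slides:

```c
#include <assert.h>

/* Accept an integer in [1, 5000] and return its integer square root;
   return -1 for values in the invalid equivalence classes. */
static int sqrt_in_range(int n) {
    if (n < 1 || n > 5000) return -1;    /* the two invalid classes */
    int r = 0;
    while ((r + 1) * (r + 1) <= n) r++;  /* truncated square root */
    return r;
}
```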
41
Debugging
 Once errors are identified:
  it is necessary to identify the precise location of the errors and to fix them.
 Each debugging approach has its own
advantages and disadvantages:
 each is useful in appropriate circumstances.
Debugging Process
42
Brute-force method
43

 This is the most common method of


debugging:
 least efficient method.
 program is loaded with print statements

 print the intermediate values

 hope that some of printed values will help


identify the error.
Symbolic Debugger
44

 Brute force approach becomes more


systematic:
 with the use of a symbolic debugger,
 symbolic debuggers get their name for
historical reasons
 early debuggers let you only see values
from a program dump:
 determine which variable it corresponds to.
Symbolic Debugger
45

 Using a symbolic debugger:


 values of different variables can be easily
checked and modified
 single stepping to execute one instruction at
a time
 break points and watch points can be set to
test the values of variables.
46
Backtracking
 This is a fairly common approach.
 Beginning at the statement where an
error symptom has been observed:
source code is traced backwards until
the error is discovered.
47
Example
#include <stdio.h>
int main(){
    int i,j,s;
    i=1;
    while(i<=10){
        s=s+i;       /* bug: s is used uninitialized */
        i++; j=j++;  /* bug: j is used uninitialized; j=j++ is undefined */
    }
    printf("%d",s);
}
48
Backtracking
 Unfortunately, as the number of source
lines to be traced back increases,
 the number of potential backward paths
increases
 becomes unmanageably large for complex
programs.
Cause-elimination method
49

 Determine a list of causes:


 which could possibly have contributed to
the error symptom.
 tests are conducted to eliminate each.

 A related technique of identifying error


by examining error symptoms:
 software fault tree analysis.
50
Program Slicing
 This technique is similar to back tracking.
 However, the search space is reduced by defining
slices.
 A slice is defined for a particular variable at a
particular statement:
 set of source lines preceding this statement which can
influence the value of the variable.
51
Example
#include <stdio.h>
int main(){
    int i,s;
    i=1; s=1;
    while(i<=10){
        s=s+i;
        i++;
    }
    printf("%d",s);  /* slice for s here: the lines defining s and i above */
    printf("%d",i);
}
52
Debugging Guidelines
 Debugging usually requires a thorough
understanding of the program design.
 Debugging may sometimes require full redesign of
the system.
 A common mistake novice programmers often make:
 not fixing the error but the error symptoms.
53
Debugging Guidelines
 Be aware of the possibility:
an error correction may introduce new
errors.
 After every round of error-fixing:
regression testing must be carried out.
Program Analysis Tools
54

 An automated tool:
 takes program source code as input
 produces reports regarding several
important characteristics of the program,
 such as size, complexity, adequacy of
commenting, adherence to programming
standards, etc.
Program Analysis Tools
55

 Some program analysis tools:


 produce reports regarding the adequacy of
the test cases.
 There are essentially two categories of
program analysis tools:
 Static
analysis tools
 Dynamic analysis tools
56
Static Analysis Tools
 Static analysis tools:
assess properties of a program
without executing it.
Analyze the source code
provide analytical conclusions.
57
Static Analysis Tools
 Whether coding standards have been adhered to?
 Commenting is adequate?
 Programming errors such as:
  uninitialized variables
  mismatch between actual and formal parameters
  variables declared but never used, etc.
58
Static Analysis Tools
 Code walk through and inspection can
also be considered as static analysis
methods:
however, the term static program
analysis is generally used for automated
analysis tools.
59
Static Analysis Tools
Inspection vs. Walkthrough:
 An inspection is formal; a walkthrough is informal.
 An inspection is initiated by the project team; a walkthrough is initiated by the author.
 An inspection is a planned meeting with fixed roles assigned to all the members involved; a walkthrough is unplanned.
 In an inspection, a reader reads the product code and everyone inspects it and comes up with defects; in a walkthrough, the author reads the product code and his team mates come up with defects or suggestions.
 In an inspection, a recorder records the defects; in a walkthrough, the author makes a note of the defects and suggestions offered by team mates.
 An inspection has a moderator who makes sure that the discussions proceed on productive lines; a walkthrough is informal, so there is no moderator.
Dynamic Analysis Tools
60

 Dynamic program analysis tools require the program to be executed:
  its behavior is recorded.
  They produce reports such as the adequacy of test cases.
61
Testing
 The aim of testing is to identify all
defects in a software product.
 However, in practice even after
thorough testing:
one cannot guarantee that the software
is error-free.
62
Testing
 The input data domain of most
software products is very large:
it is not practical to test the software exhaustively with each input data value.
63
Testing
 Testing does however expose many
errors:
testing provides a practical way of reducing defects in a system
and increases the users' confidence in a developed system.
64
Testing
 Testing is an important development
phase:
 requires the maximum effort among all development phases.
 In a typical development organization:
 maximum number of software engineers can be found
to be engaged in testing activities.
65
Testing
 Many engineers have the wrong
impression:
testing is a secondary activity
it is intellectually not as stimulating
as the other development
activities, etc.
66
Testing
 Testing a software product is in fact:
as much challenging as initial
development activities such as
specification, design, and coding.
 Also, testing involves a lot of
creative thinking.
67
Testing
 Software products are tested at
three levels:
Unit testing
Integration testing

System testing
68
Unit testing
 During unit testing, modules are
tested in isolation:
If all modules were to be tested together:
 it may not be easy to determine which module has the error.
69
Unit testing
 Unit testing reduces debugging
effort several folds.
Programmers carry out unit testing
immediately after they complete
the coding of a module.
70
Unit testing
71
Unit testing
Integration testing
72

 After different modules of a system


have been coded and unit tested:
modules are integrated in steps
according to an integration plan
partially integrated system is tested at
each integration step.
Integration testing
73
74
System Testing
 System testing involves:
validating a fully developed system against its requirements.
Integration Testing
75

 Develop the integration plan by


examining the structure chart :
big bang approach
top-down approach

bottom-up approach

mixed approach
Example Structured Design
76
Big bang Integration Testing
77

 Big bang approach is the simplest


integration testing approach:
all the modules are simply put
together and tested.
this technique is used only for very
small systems.
Big bang Integration Testing
78

 Main problems with this approach:


 if an error is found:
 itis very difficult to localize the error
 the error may potentially belong to any of the
modules being integrated.
 debugging errors found during big bang
integration testing are very expensive to
fix.
Bottom-up Integration Testing
79

 Integrate and test the bottom level


modules first.
 A disadvantage of bottom-up testing:

 when the system is made up of a large


number of small subsystems.
 This extreme case corresponds to the big
bang approach.
Bottom-up Integration Testing
80
Top-down integration testing
81

 Top-down integration testing starts with the main


routine:
 and one or two subordinate routines in the system.
 After the top-level 'skeleton’ has been tested:
 immediate subordinate modules of the 'skeleton' are combined with it and tested.
Top-down integration testing
82
Mixed integration testing
83

 Mixed (or sandwiched) integration


testing:
uses both top-down and bottom-up
testing approaches.
Most common approach
Mixed integration testing
84
85
Integration Testing
 In the top-down approach:
  testing waits till all top-level modules are coded and unit tested.
 In the bottom-up approach:
  testing can start only after the bottom-level modules are ready.
86
System Testing
 There are three main kinds of
system testing:
Alpha Testing
Beta Testing

Acceptance Testing
87
Alpha Testing
 System testing is carried out by the
test team within the developing
organization.
88
Beta Testing
 System testing performed by a
select group of friendly customers.
Acceptance Testing
89

 System testing performed by the


customer himself:
to determine whether the system
should be accepted or rejected.
90
Stress Testing
 Stress testing (aka endurance testing):
 impose abnormal input to stress the
capabilities of the software.
 Input data volume, input data rate, processing
time, utilization of memory, etc. are tested
beyond the designed capacity.
91
Smoke Testing
 A common approach for creating “daily builds” for product software
 Smoke testing steps:
 Software components that have been translated into code are
integrated into a “build.”
 A build includes all data files, libraries, reusable modules, and
engineered components that are required to implement one or more
product functions.
 A series of tests is designed to expose errors that will keep the build
from properly performing its function.
 The intent should be to uncover “show stopper” errors that have the
highest likelihood of throwing the software project behind schedule.
 The build is integrated with other builds and the entire product (in its
current form) is smoke tested daily.
 The integration approach may be top down or bottom up.
How many errors are still remaining?
92

 Seed the code with some known


errors:
artificial
errors are introduced into the
program.
Check how many of the seeded errors
are detected during testing.
93
Error Seeding
 Let:
N be the total number of errors in the
system
n of these errors be found by testing.
 S be the total number of seeded errors,

 s of the seeded errors be found during


testing.
94
Error Seeding
 n/N = s/S
 N = S n/s
 remaining defects: N - n = n ((S - s)/s)
95
Example
 100 seeded errors were introduced.
 90 of these seeded errors were found during testing.
 50 other (unseeded) errors were also found.
 Remaining errors = 50 (100-90)/90 ≈ 6
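The arithmetic above can be sketched as a small helper; the function name and the rounding up to a whole defect are my choices:

```c
#include <assert.h>

/* Error-seeding estimate: S errors seeded, s of them found during testing,
   and n unseeded errors found. Estimated remaining defects: n*(S-s)/s,
   rounded up to the next whole defect. */
static int remaining_defects(int n, int S, int s) {
    return (n * (S - s) + s - 1) / s;
}
```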
96
Error Seeding
 The kind of seeded errors should match
closely with existing errors:
 However, it is difficult to predict the types of
errors that exist.
 Categories of remaining errors:
 canbe estimated by analyzing historical data
from similar projects.
White-box Testing
97

 Designing white-box test cases:


 requires knowledge about the internal structure of the software.
 white-box testing is also called structural testing.
Black-box Testing
98

 Two main approaches to design black box test


cases:
 Equivalence class partitioning
 Boundary value analysis
White-Box Testing
99

 There exist several popular white-box testing


methodologies:
 Statement coverage
 branch coverage

 path coverage

 condition coverage

 mutation testing

 data flow-based testing


Statement Coverage
100

Statement coverage
methodology:
design test cases so that
every statement in a program is
executed at least once.
Statement Coverage
101

 The principal idea:


 unless a statement is executed,
 we have no way of knowing if an error exists in that
statement.
Statement coverage criterion
102

 Based on the observation:


an error in a program can not
be discovered:
unless the part of the program
containing the error is executed.
Statement coverage criterion
103

 Observing that a statement behaves properly for


one input value:
 no guarantee that it will behave correctly for all input
values.
Example
104

int f1(int x, int y){
1   while (x != y){
2     if (x>y)
3       x=x-y;
4     else y=y-x;
5   }
6   return x; }
Euclid's GCD computation algorithm
105

 By choosing the test set {(x=3,y=3), (x=4,y=3), (x=3,y=4)}:
  all statements are executed at least once.
Branch Coverage
106

Test cases are designed such


that:
different branch conditions
given true and false values in turn.
Branch Coverage
107

Branch testing guarantees


statement coverage:
a stronger testing compared to
the statement coverage-based
testing.
Stronger testing
108

 Test cases are a superset of a


weaker testing:
discovers at least as many errors as a
weaker testing
contains at least as many significant
test cases as a weaker test.
Example
109

int f1(int x, int y){
1   while (x != y){
2     if (x>y)
3       x=x-y;
4     else y=y-x;
5   }
6   return x; }
Example
110

 Test cases for branch coverage can be:


 {(x=3,y=3),(x=3,y=2), (x=4,y=3), (x=3,y=4)}
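A compilable version of the function (with the pseudo-keyword "then" dropped), checked against the slide's branch-coverage suite: (3,3) makes the loop condition false at once, (3,2) and (4,3) take the x>y branch, and (3,4) takes the else branch.

```c
#include <assert.h>

/* Euclid's GCD computation from the slides. */
static int f1(int x, int y) {
    while (x != y) {   /* loop branch: needs both true and false */
        if (x > y)     /* if branch: needs both true and false */
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
```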
Condition Coverage
111

 Test cases are designed such that:


 each component of a composite conditional expression
 given both true and false values.
Example
112

 Consider the conditional expression


 ((c1.and.c2).or.c3):
 Each of c1, c2, and c3 are exercised at least
once,
 i.e. given true and false values.
Branch testing
113

 Branch testing is the simplest condition testing


strategy:
 compound conditions appearing in different branch
statements
 are given true and false values.
Branch testing
114

 Condition testing
 stronger testing than branch testing:
 Branch testing
 stronger than statement coverage testing.
Condition coverage
115

Consider a boolean expression having n components:
 for condition coverage we require 2^n test cases.
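For ((c1 and c2) or c3) from the earlier example, n = 3, so condition coverage needs 2^3 = 8 cases. A sketch that enumerates all of them (the helper name is mine):

```c
#include <assert.h>

/* Enumerate all 2^3 combinations of (c1, c2, c3) and count how many make
   ((c1 && c2) || c3) true; every component condition gets both truth
   values along the way. */
static int count_true_cases(void) {
    int true_count = 0;
    for (int bits = 0; bits < (1 << 3); bits++) {  /* 2^n cases, n = 3 */
        int c1 = (bits >> 2) & 1;
        int c2 = (bits >> 1) & 1;
        int c3 = bits & 1;
        if ((c1 && c2) || c3) true_count++;
    }
    return true_count;
}
```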
Condition coverage
116

Condition coverage-based
testing technique:
practical only if n (the number
of component conditions) is small.
Path Coverage
117

 Design test cases such that:


all linearly independent paths in
the program are executed at
least once.
118
Linearly independent paths
 Defined in terms of
control
flow graph (CFG) of a
program.
119
Path coverage-based testing
 To understand the path coverage-based testing:
 we need to learn how to draw control flow graph of a
program.
120
Control flow graph (CFG)
 A control flow graph (CFG) describes:
 the sequence in which different instructions of a
program get executed.
 the way control flows through the program.
How to draw Control flow graph?
121

 Number all the statements of a program.


 Numbered statements:
 represent nodes of the control flow graph.
How to draw Control flow graph?
122

 An edge from one node to another node exists:


 ifexecution of the statement representing the first
node
 can result in transfer of control to the other node.
Example
123

int f1(int x, int y){
1   while (x != y){
2     if (x>y)
3       x=x-y;
4     else y=y-x;
5   }
6   return x; }
Example Control Flow Graph
124

(figure: CFG of f1 with nodes 1-6; edges 1→2, 2→3, 2→4, 3→5, 4→5, 5→1, 1→6)
How to draw Control flow graph?
125

 Sequence:
  1 a=5;
  2 b=a*b-1;
  (CFG: 1 → 2)
How to draw Control flow graph?
126

 Selection:
  1 if(a>b)
  2   c=3;
  3 else c=5;
  4 c=c*c;
  (CFG: 1 → 2, 1 → 3, 2 → 4, 3 → 4)
How to draw Control flow graph?
127

Iteration:
 1 while(a>b){
 2   b=b*a;
 3   b=b-1;}
 4 c=b+d;
 (CFG: 1 → 2, 2 → 3, 3 → 1, 1 → 4)
128
Path
 A path through a program:
a node and edge sequence from the starting node to a
terminal node of the control flow graph.
 There may be several terminal nodes for a program.
Independent path
129

 Any path through the program:
  that introduces at least one new node
  not included in any other independent path.
Independent path
130

 It is straightforward to identify linearly independent paths of simple programs.
 For complicated programs:
  it is not so easy to determine the number of independent paths.
131
McCabe's cyclomatic metric
 An upper bound:
forthe number of linearly independent
paths of a program
 Provides a practical way of
determining:
the maximum number of linearly
independent paths in a program.
132
McCabe's cyclomatic metric
 Given a control flow graph G,
cyclomatic complexity V(G):
 V(G) = E - N + 2
  where E is the number of edges and N is the number of nodes in G.
133
Example Control Flow Graph
(figure: CFG of the GCD function, nodes 1-6)
Example
134

Cyclomatic complexity =

7-6+2 = 3.
Cyclomatic complexity
135

 Another way of computing cyclomatic


complexity:
 inspect the control flow graph
 determine the number of non-overlapping bounded areas in the graph
 V(G) = Total number of non-overlapping
bounded areas + 1
Bounded area
136

Any region enclosed by a node-and-edge sequence.
137
Example Control Flow Graph
(figure: CFG of the GCD function with its two bounded areas)
138
Example
 From a visual examination of the CFG:
 the number of bounded areas is 2.
 cyclomatic complexity = 2+1=3.
Cyclomatic complexity
139

 Another way of computing cyclomatic


complexity:
 inspect the total number of decision/branch statements
 V(G) = Total number of decision/branch statements + 1
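For the example CFG of the GCD function, the three ways of computing V(G) agree. A small check, with the counts read off that graph (7 edges, 6 nodes, 2 bounded areas, 2 decision statements):

```c
#include <assert.h>

/* Cyclomatic complexity of the example CFG, computed three ways. */
static int v_edges_nodes(void)   { return 7 - 6 + 2; }  /* E - N + 2 */
static int v_bounded_areas(void) { return 2 + 1; }      /* areas + 1 */
static int v_decisions(void)     { return 2 + 1; }      /* decisions + 1 */
```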
Cyclomatic complexity
140

 McCabe's metric provides:


a quantitative measure of testing difficulty
and the ultimate reliability
 Intuitively,
number of bounded areas increases
with the number of decision nodes and
loops.
Cyclomatic complexity
141

 The first method of computing V(G) is


amenable to automation:
you can write a program which
determines the number of nodes and
edges of a graph
applies the formula to find V(G).
Cyclomatic complexity
142

 The cyclomatic complexity of a


program provides:
a lower bound on the number of test
cases to be designed
to guarantee coverage of all linearly
independent paths.
Cyclomatic complexity
143

 Defines the number of independent paths in a


program.
 Provides a lower bound:
 for the number of test cases for path coverage.
Cyclomatic complexity
144

 Knowing the number of test cases


required:
does not make it any easier to derive
the test cases,
only gives an indication of the minimum
number of test cases required.
Path testing
145

 The tester proposes:


 an initial set of test data using his experience and
judgement.
146
Path testing
 A dynamic program analyzer is used:
to indicate which parts of the program
have been tested
the output of the dynamic analysis
used to guide the tester in selecting
additional test cases.
147
Derivation of Test Cases
 Let us discuss the steps:
to derive path coverage-based
test cases of a program.
148
Derivation of Test Cases
 Draw control flow graph.
 Determine V(G).

 Determine the set of linearly

independent paths.
 Prepare test cases:

to force execution along each path.


149
Example
int f1(int x, int y){
1   while (x != y){
2     if (x>y)
3       x=x-y;
4     else y=y-x;
5   }
6   return x; }
Example Control Flow Diagram
150
(figure: CFG of the GCD function, nodes 1-6)
Derivation of Test Cases
151

 Number of independent paths: 3
  1, 6: test case (x=1, y=1)
  1, 2, 3, 5, 1, 6: test case (x=2, y=1)
  1, 2, 4, 5, 1, 6: test case (x=1, y=2)
An interesting application of
152
cyclomatic complexity
 Relationship exists between:
 McCabe's metric
 the number of errors existing in the code,

 the time required to find and correct the errors.


Cyclomatic complexity
153

 Cyclomatic complexity of a program:


 also indicates the psychological complexity of a
program.
 difficulty level of understanding the program.
Cyclomatic complexity
154

 From the maintenance perspective:
  limit the cyclomatic complexity of modules to some reasonable value.
 Good software development organizations:
  restrict the cyclomatic complexity of functions to a maximum of ten or so.
Automated Testing Tools
 Mercury Interactive
 Quick Test Professional: Regression testing

 WinRunner: UI testing

 IBM Rational
 Rational Robot

 Functional Tester

 Borland
 Silk Test

 Compuware
 QA Run

 AutomatedQA
 TestComplete
Data Flow-Based Testing
156

 Selects test paths of a program:


according to the locations of
definitionsand uses of different
variables in a program.
Data Flow-Based Testing
157

 For a statement numbered S:
  DEF(S) = {X | statement S contains a definition of X}
  USES(S) = {X | statement S contains a use of X}
 Example: 1: a=b; DEF(1)={a}, USES(1)={b}.
 Example: 2: a=a+b; DEF(2)={a}, USES(2)={a,b}.
Data Flow-Based Testing
158

 A variable X is said to be live at statement S1 if:
  X is defined at a statement S, and
  there exists a path from S to S1 not containing any definition of X.
DU Chain Example
159

1 X(){
2   a=5;        /* defines variable a */
3   while(C1) {
4     if (C2)
5       b=a*a;  /* uses variable a */
6     a=a-1;    /* defines variable a */
7   }
8   print(a); } /* uses variable a */
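A concrete instantiation of the skeleton above (the predicates C1 and C2 are my choices, not from the slides) so the definition-use pairs can actually execute:

```c
#include <assert.h>

/* Concrete version of the DU-chain example program. */
static int du_example(void) {
    int a = 5, b = 0;        /* statement 2: defines a */
    while (a > 1) {          /* C1 */
        if (a % 2 == 1)      /* C2 */
            b = a * a;       /* statement 5: uses a */
        a = a - 1;           /* statement 6: defines a */
    }
    return a + b;            /* statement 8: uses a */
}
```

Covering every DU chain means choosing inputs so that each definition of a reaches each of its uses along some executed path at least once.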
Definition-use chain (DU chain)
160

 A DU chain [X,S,S1] is a triple where:
  S and S1 are statement numbers,
  X is in DEF(S),
  X is in USES(S1), and
  the definition of X in statement S is live at statement S1.
Data Flow-Based Testing
161

 One simple data flow testing strategy:


every DU chain in a program be
covered at least once.
Data Flow-Based Testing
162

 Data flow testing strategies:


useful for selecting test paths of a program containing nested if and loop statements.
Data Flow-Based Testing
163

1 X(){
2   B1;             /* defines variable a */
3   while(C1) {
4     if (C2)
5       if(C4) B4;  /* uses variable a */
6       else B5;
7     else if (C3) B2;
8     else B3; }
9   B6 }
Data Flow-Based Testing
164

 [a,1,5] is a DU chain.
 Assume:
  DEF(a) = {B1, B2, B3, B4, B5}
  USES(a) = {B2, B3, B4, B5, B6}
 There are 25 DU chains.
 However, only 5 paths are needed to cover these chains.
Mutation Testing
165

 The software is first tested:


 using an initial testing method based on white-box
strategies we already discussed.
 After the initial testing is complete,
 mutation testing is taken up.
 The idea behind mutation testing:
 make a few arbitrary small changes to a program at a
time.
Mutation Testing
166

 Each time the program is changed:
  it is called a mutated program,
  and the change is called a mutant.
Mutation Testing
167

 A mutated program:
 tested against the full test suite of the program.
 If there exists at least one test case in the test suite
for which:
a mutant gives an incorrect result,
 then the mutant is said to be dead.
Mutation Testing
168

 If a mutant remains alive:
 even after all test cases have been exhausted,
 the test suite is enhanced to kill the mutant.
 The process of generation and killing of mutants:
 can be automated by predefining a set of primitive changes that can be applied to the program.
Mutation Testing
169

 The primitive changes can be:
 altering an arithmetic operator,
 changing the value of a constant,
 changing a data type, etc.
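The generate-and-kill loop described on the last few slides can be sketched in a few lines. This is an illustrative sketch only — the program under test, the test cases, and the mutant set are invented for the example; each mutant applies one primitive change (an altered arithmetic operator) and is run against the full test suite.

```python
import operator

def run_tests(program):
    """A (deliberately weak) test suite for a tiny 'add' program."""
    cases = [((0, 0), 0), ((2, 2), 4)]
    return all(program(a, b) == expected for (a, b), expected in cases)

original = operator.add  # program under test

# Mutants: each differs from the original by a single primitive change.
mutants = {"add -> sub": operator.sub, "add -> mul": operator.mul}

for name, mutant in mutants.items():
    # A mutant is "dead" if at least one test case gives an incorrect result.
    print(name, "dead" if not run_tests(mutant) else "alive")
```

Here "add -> sub" dies but "add -> mul" stays alive, signalling that the suite must be enhanced (e.g. with the case (2, 3) → 5, where addition and multiplication differ) to kill it — exactly the feedback mutation testing is meant to provide.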
Mutation Testing
170

 A major disadvantage of mutation testing:
 it is computationally very expensive,
 since a large number of possible mutants can be generated.
Cause and Effect Graphs
171

 Testing would be a lot easier:
 if we could automatically generate test cases from requirements.
 Work done at IBM:
 Can requirements specifications be systematically used to design functional test cases?
Cause and Effect Graphs
172

 Examine the requirements:
 restate them as logical relations between inputs and outputs.
 The result is a Boolean graph representing the relationships,
 called a cause-effect graph.
Cause and Effect Graphs
173

 Convert the graph to a decision table:
 each column of the decision table corresponds to a test case for functional testing.
Steps to create cause-effect graph
174

 Study the functional requirements.
 Mark and number all causes and effects.
 Numbered causes and effects:
 become nodes of the graph.
Steps to create cause-effect graph
175

 Draw causes on the LHS.
 Draw effects on the RHS.
 Draw logical relationships between causes and effects as edges in the graph.
 Extra nodes can be added to simplify the graph.
Drawing Cause-Effect Graphs
176

 A → B : if A, then B
 (A AND B) → C : if (A and B), then C
Drawing Cause-Effect Graphs
177

 (A OR B) → C : if (A or B), then C
 NOT (A AND B) → C : if (not (A and B)), then C
Drawing Cause-Effect Graphs
178

 NOT (A OR B) → C : if (not (A or B)), then C
 NOT A → B : if (not A), then B
Cause effect graph- Example
179

 Situation:
 “Print message” is software that reads two characters and, depending on their values, prints messages.
 The first character must be an “A” or a “B”.
 The second character must be a digit.
 If the first character is an “A” or a “B” and the second character is a digit, the file must be updated.
 If the first character is incorrect (not an “A” or a “B”), message X must be printed.
 If the second character is incorrect (not a digit), message Y must be printed.
Cause effect graph- Example
180

 The causes, designated by the letter “C”, are as follows:
 C1 – First character is “A”
 C2 – First character is “B”
 C3 – Second character is a digit
Cause effect graph- Example
181

 The effects, designated by the letter “E”, are as follows:
 E1 – Update the file
 E2 – Print message “X”
 E3 – Print message “Y”
 Build up a cause-effect graph.
Cause effect graph- Example
182

 Let’s start with effect E1, which is to update the file. The file is updated when:
 – The first character is “A” and the second character is a digit.
 – The first character is “B” and the second character is a digit.
 – The first character can be either “A” or “B”, but not both.
 Now let’s put these 3 points in symbolic form. For E1 to be true, the causes are:
 – C1 and C3 should be true.
 – C2 and C3 should be true.
 – C1 and C2 cannot be true together; C1 and C2 are mutually exclusive.
Cause effect graph- Example
183–186

 (Figures: step-by-step construction of the cause-effect graph for effects E1, E2, and E3 — images not reproduced in the text.)
Cause effect graph- Example
187

 Put a row in the decision table for each cause or effect:
 in the example, there are three rows for causes and three for effects.
Cause effect graph- Example
188

 The columns of the decision table correspond to test cases.
 Define the columns by examining each effect:
 list each combination of causes that can lead to that effect.
Cause effect graph- Example
189

 We can determine the number of columns of the decision table
 by examining the lines flowing into the effect nodes of the graph.
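For the print-message example, the cause-effect relations can be turned into a decision table mechanically. The sketch below is an illustration, not part of the slides — the function and variable names are assumptions. It encodes E1 = (C1 or C2) and C3, E2 = not (C1 or C2), E3 = not C3, treats C1-and-C2 as infeasible (mutually exclusive), and emits one column per feasible cause combination.

```python
from itertools import product

def effects(c1, c2, c3):
    """Map one cause combination to its effects; None if infeasible."""
    if c1 and c2:
        return None  # C1 and C2 are mutually exclusive
    return {"E1": (c1 or c2) and c3,   # update the file
            "E2": not (c1 or c2),      # print message "X"
            "E3": not c3}              # print message "Y"

# Each feasible cause combination becomes one column of the decision table,
# i.e. one functional test case.
columns = [(causes, effects(*causes))
           for causes in product([True, False], repeat=3)
           if effects(*causes) is not None]

for causes, effs in columns:
    print(causes, effs)   # 6 feasible columns out of 2^3 = 8
```

The two combinations with C1 and C2 both true are dropped, leaving six columns — consistent with reading the column count off the lines flowing into the effect nodes.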
Cause effect graph
190

 Not practical for systems which:
 include timing aspects, or
 use feedback from some processes as input to other processes.
Phased versus Incremental Integration Testing
191

 Integration can be incremental or phased.
 In incremental integration testing,
 only one new module is added to the partial system each time.
Phased versus Incremental Integration Testing
192

 In phased integration,
 a group of related modules is added to the partially integrated system each time.
 Big-bang testing:
 a degenerate case of phased integration testing.
Phased versus Incremental Integration Testing
193

 Phased integration requires fewer integration steps:
 compared to the incremental integration approach.
 However, when failures are detected,
 it is easier to debug with incremental testing,
 since errors are very likely to be in the newly integrated module.
System Testing
194

 System tests are designed to validate a fully developed system:
 to assure that it meets its requirements.
Performance Testing
195

 Addresses non-functional requirements.
 May sometimes involve testing hardware and software together.
 There are several categories of performance testing.
Stress testing
196

 Evaluates system performance when stressed for short periods of time.
 Stress testing is also known as endurance testing.
Stress testing
197

 Stress tests are black-box tests:
 designed to impose a range of abnormal and even illegal input conditions,
 so as to stress the capabilities of the software.
Stress Testing
198

 If the requirement is to handle a specified number of users or devices:
 stress testing evaluates system performance when all users or devices are busy simultaneously.
Stress Testing
199

 If an operating system is supposed to support 15 multiprogrammed jobs,
 the system is stressed by attempting to run 15 or more jobs simultaneously.
 A real-time system might be tested
 to determine the effect of simultaneous arrival of several high-priority interrupts.
Stress Testing
200

 Stress testing usually involves an element of time or size,
 such as the number of records transferred per unit time,
 the maximum number of users active at any time, input data size, etc.
 Therefore stress testing may not be applicable to many types of systems.
Volume Testing
201

 Addresses handling large amounts of data in the system:
 whether data structures (e.g. queues, stacks, arrays, etc.) are large enough to handle all possible situations;
 fields, records, and files are stressed to check if their size can accommodate all possible data volumes.
Configuration Testing
202

 Analyze system behavior:
 in the various hardware and software configurations specified in the requirements;
 sometimes systems are built in various configurations for different users,
 for instance, a minimal system may serve a single user,
 other configurations serve additional users.
Compatibility Testing
203

 These tests are needed when the system interfaces with other systems:
 check whether the interface functions as required.
Compatibility testing
204

 Example:
 If a system is to communicate with a large database system to retrieve information:
 a compatibility test examines speed and accuracy of retrieval.
Recovery Testing
205

 These tests check response to:
 the presence of faults, or the loss of data, power, devices, or services;
 subject the system to loss of resources,
 and check if the system recovers properly.
Maintenance Testing
206

 Diagnostic tools and procedures:
 help find the source of problems.
 It may be required to supply:
 memory maps,
 diagnostic programs,
 traces of transactions,
 circuit diagrams, etc.
Maintenance Testing
207

 Verify that:
 all required artifacts for maintenance exist, and
 they function properly.
Documentation tests
208

 Check that required documents exist and are consistent:
 user guides,
 maintenance guides,
 technical documents.
Documentation tests
209

 Sometimes requirements specify:
 the format and audience of specific documents;
 documents are evaluated for compliance.
Usability tests
210

 All aspects of user interfaces are tested:
 display screens,
 messages,
 report formats,
 navigation and selection problems.
Environmental test
211

 These tests check the system’s ability to perform at the installation site.
 Requirements might include tolerance for:
 heat,
 humidity,
 chemical presence,
 portability,
 electrical or magnetic fields,
 disruption of power, etc.
Test Summary Report
212

 Generated towards the end of the testing phase.
 Covers each subsystem:
 a summary of the tests which have been applied to the subsystem.
Test Summary Report
213

 Specifies:
 how many tests have been applied to a subsystem,
 how many tests have been successful,
 how many have been unsuccessful, and the degree to which they have been unsuccessful,
 e.g. whether a test was an outright failure,
 or whether some expected results of the test were actually observed.
Regression Testing
214

 Does not belong to unit, integration, or system testing.
 Instead, it is a separate dimension to these three forms of testing.
Regression testing
215

 Regression testing is the running of the test suite:
 after each change to the system or after each bug fix;
 it ensures that no new bug has been introduced due to the change or the bug fix.
Regression testing
216

 Regression tests assure:
 the new system’s performance is at least as good as the old system’s;
 always used during phased system development.
Regression testing
217

 As integration testing proceeds, the number of regression tests can grow quite large.
 Design the regression test suite to include only those tests that address one or more classes of errors in each of the major program functions.
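The core idea of regression testing can be sketched as follows — the test suite, the two program versions, and all names below are invented for illustration, not taken from the slides: rerun the full suite after a change and report any test that passed before but fails now.

```python
def run_suite(suite, program):
    """Run every test in the suite against the given program version."""
    return {name: test(program) for name, test in suite.items()}

def regressions(before, after):
    """Tests that passed before the change but fail after it."""
    return [name for name in before if before[name] and not after[name]]

suite = {
    "handles_zero": lambda f: f(0) == 0,
    "doubles_input": lambda f: f(3) == 6,
}

old_version = lambda x: 2 * x
new_version = lambda x: 2 * x if x else 1   # a "fix" that broke the zero case

print(regressions(run_suite(suite, old_version),
                  run_suite(suite, new_version)))   # -> ['handles_zero']
```

The change passes its own new behavior but the rerun flags "handles_zero" as a regression — the bug the change introduced, exactly what regression testing is meant to catch.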