
ISTQB Advanced Level Test Manager Study Material

The ISTQB Advanced Level Test Manager exam tests your knowledge in 7 key areas ranging
from testing processes, test planning to people skills. Anyone who aspires to be a Test / QA
Manager should have knowledge of these topics.

The below study material covers all the topics which are part of the ISTQB Advanced Level
Test Manager certification. This certification is beneficial for Test Managers. Even if you are
not planning to take up the certification, studying the topics will improve your knowledge of the
subject.

1. Testing Process
1. What is Test Planning? What are Work Products in Testing?
2. What is Test Monitoring and Test Control?
3. What is Test Condition & Test Analysis? Advantages, Disadvantages & Level of Detail
4. What is Test Design? When to create Test Design?
5. What is Test Implementation? Advantages & Disadvantage of early implementation
6. What is Test Execution?
7. What Are Test Closure Activities? Evaluating Exit Criteria and Reporting

2. Test Management
8. How to identify testing stakeholders?
9. How Do Software Development Lifecycle Activities & Work Products Affect Testing?
10. How to align software testing activities with product / development lifecycle activities?
11. How to manage Non-Functional Testing?
12. How to manage Experience Based Testing? What are Test Sessions?
13. What is Risk Based Testing? Identifying, Assessing, Minimizing & Managing Risks
14. What are the different Risk Based Testing (RBT) Techniques?
15. Test selection techniques – Requirement / Model based, Checklists, Reactive testing
16. How to perform test prioritization & effort allocation in test process?
17. What is Test Policy? What does it contain?
18. What is Test Strategy? Types of strategies with examples
19. What are Master Test Plans & Level Test Plan? Examples, When to use
20. What is test estimation? Related Factors, Estimation Techniques
21. How to define, track, report & validate metrics in software testing?
22. Cost of quality
23. What are Distributed, Outsourced and Insourced Testing?
24. How to manage & apply industry standards to software testing projects?

3. Reviews
25. How to manage formal reviews & management audits? Skills, metrics, responsibilities
4. Defect Management
26. Complete guide to defect management for Test / QA Managers

5. Improving the Testing Process


27. Software Testing Process Improvements for Test / QA Managers
28. What is the IDEAL model for test process improvement?
29. Software testing process improvement models – TMMi, TPI Next, CTP, STEP

6. Test Tools and Automation


30. How to select a testing tool? Open Source, Vendor Tools & Custom Development
31. What is Testing Tool ROI? One time/Recurring Costs & Risks related to tools?
32. What are the parameters for selecting a testing tool?
33. How to manage software testing tool lifecycle and tool metrics?

7. People Skills
34. How to assess, manage & develop skills in testers as a Test Manager?
35. How to manage hiring & team dynamics as a Test Manager?
36. How to manage testing team at different levels of independence?
37. How to motivate software testing team as a Test / QA Manager?
38. How to communicate effectively as a Test / QA Manager?
What is Test Planning? What are Work Products in Testing?
In very simple terms, test planning refers to planning the activities that must be performed during
testing in order to achieve the objectives of the test. Test planning for every test level of the
project begins at the starting of that level’s testing process and goes on upto the end of closing
activities of the level.

The ISTQB Advanced Level Test Manager syllabus breaks out some of the test process tasks separately, compared to the ISTQB Foundation Level syllabus, to allow finer-grained monitoring and control across the software development lifecycle. The test tasks now take this form:

• Planning, monitoring and control
• Analysis
• Design
• Implementation
• Execution
• Evaluating exit criteria and reporting
• Test closure activities

Test planning activities


The test planning process includes the following activities:

• Ascertaining test objectives
• Enumerating test activities
• Identifying resources required
• Determining test metrics
• Identifying procedures for collecting and tracking test metrics
• Fixing benchmarks to assess adherence to test plan
• Identifying parameters to measure achievement of objectives

Zeroing in on software testing metrics during planning enables the test team to choose appropriate testing tools, design training schedules and put documentation guidelines in place.

Selecting a test strategy helps in listing out the test tasks in the planning phase itself.

• For example, in case of risk based testing, focus is on identifying activities that can
diminish risks & coming up with a contingency plan. Product risk analysis data acts as
guide for prioritizing testing activities here.
• Similarly, if the product is likely to have security risks, testing focuses on security testing
and during the planning phase multiple security tests are designed and developed.
• If a product is likely to have design defects, testing focuses on design specifications and
additional design specification reviews (static testing) are planned and built into the
testing process.
• There are so many testing activities to be carried out that they must be prioritized. Data
provided by risk analysis plays an important role in determining this priority list.
• For example, if there is a high risk of system performance issues or system performance
is critical, performance testing is given highest priority when the code is delivered for
testing.
• In case of reactive testing, planning for creating test charters & developing dynamic
testing methods like exploratory testing are the top priority tasks.
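The risk-based prioritization described above can be sketched in code. The following is a minimal illustration, assuming a simple product risk score of likelihood × impact (both on a 1–5 scale); the item names and scales are hypothetical examples, not part of the ISTQB material.

```python
def risk_score(likelihood, impact):
    """Combine likelihood and impact into a single product risk score."""
    return likelihood * impact

def prioritize(test_items):
    """Order test activities so the highest-risk items are tested first."""
    return sorted(
        test_items,
        key=lambda item: risk_score(item["likelihood"], item["impact"]),
        reverse=True,
    )

# Hypothetical product risk analysis data.
items = [
    {"name": "checkout flow",  "likelihood": 4, "impact": 5},
    {"name": "help page text", "likelihood": 2, "impact": 1},
    {"name": "login security", "likelihood": 3, "impact": 5},
]

for item in prioritize(items):
    print(item["name"], risk_score(item["likelihood"], item["impact"]))
# checkout flow 20, login security 15, help page text 2
```

In practice the scoring model would come from the project's risk analysis sessions; the point is only that the risk analysis output directly drives the ordering of test activities.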

During the test planning phase, the Test Manager is responsible for identifying:

• Testing strategy to be employed
• Test levels, with their measurable outcomes
• Test techniques

For example, for risk based testing of avionics systems, the Test Manager must plan for the level
to which code must be tested and the testing technique to be employed to achieve that level of
testing.

Work Products in Testing


Software testing process typically has these three work products, among others:

• Test basis like product risks or its requirements
• Test conditions
• Tests that provide coverage

Understanding the work products listed above and the relationship between them is essential for
efficient test planning, selection of right test tools, monitoring of test processes and overall
control of the testing process.

As you know, each team has its own work products, which may or may not be final deliverables. The test team's work products may be related to the development team's work products.

The traceability matrix could record relationships between these three elements:

• Detailed design specifications from system designers
• Business related requirements given by business analysts
• Work products of testing, created by testing team

When low-level test cases are expected, the test plan could require the development team to provide a detailed design document, and could require that document to be approved before creation of test cases begins.
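The traceability matrix relating design specifications, business requirements and test work products can be represented very simply. This is a minimal sketch with hypothetical identifiers (REQ-*, COND-*, CASE-*), showing the kind of forward-traceability check a Test Manager would want from such a matrix.

```python
# Hypothetical traceability matrix: requirement -> linked test conditions
# and test cases. In a real project this would come from a test
# management tool rather than a hand-built dictionary.
traceability = {
    "REQ-01": {"conditions": ["COND-01"], "cases": ["CASE-01", "CASE-02"]},
    "REQ-02": {"conditions": ["COND-02"], "cases": []},
}

def uncovered_requirements(matrix):
    """Requirements that cannot yet be traced forward to any test case."""
    return [req for req, links in matrix.items() if not links["cases"]]

print(uncovered_requirements(traceability))  # ['REQ-02']
```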

If an Agile methodology is being followed, information may be shared between teams informally before actual testing starts.
The test plan can also explicitly enumerate those software features which are in scope & those
that cannot be covered by its scope.

A risk analysis document may be used to determine this. After that, depending on the formality level and documentation required for the project, every software feature that lies within the test scope can be traced back to its matching test design description.

In this stage of test planning, Test Managers may need to collaborate with project architects
on following aspects:

• Specifying initial environment for testing
• Verifying availability of required resources
• Ensuring availability and dedication of trained personnel
• Understanding time and cost implications
• Gaining insights about processes required to set up test environment

Lastly, the test manager needs to identify every external dependency as well as its corresponding
service level agreement (SLA) since these need to be considered while planning.

Some of the external dependencies include:

• Third party resource requests
• Dependencies on other project(s) running simultaneously within the program
• Deployment team
• External vendors
• External development partners
• Database administrators
What is Test Monitoring and Test Control?
A Test Manager can ensure effective test control only if a testing schedule and a monitoring framework are in place, so that test work products and available resources can be mapped against the plan. The Test Manager must be able to monitor test progress accurately and efficiently.

What is Test Monitoring?


The test monitoring framework must consist of exhaustive steps and specific targets required
to correlate the current status of test work products and tasks to the planned and strategic
objectives.

• Comparing the current status of test work products and tasks, against the plan and
strategic objectives is simpler for projects that are small or less complex.
• However, for other projects, more detailed objectives must be defined to accomplish this.
• These can include the targets and measures needed to achieve the test objectives and the required test basis coverage.

It is very important to map the test basis to the status of activities and test related work
products, in a way that is easy to understand, logical and apt for the business stakeholders as
well as the project.

This can be achieved by clearly outlining the test goals and assessing progress against
predefined set of test conditions by mapping work products related to testing with test basis
through test conditions.

As you can see, a complex correlation exists between the work products of testing, the work products of development, and the test basis.

This complexity can be reduced by correctly structuring traceability and incorporating the
capability to report accurately on the status of traceability in the design.

Many times the business stakeholders require you to monitor measures and targets that are not
directly related to a system process or specification.

This can occur especially when formal documentation is not available. For instance, a design
requirement is detailed in terms of system processes but the stakeholder wants you to include
analysis of operational business cycle in your test scope.

Working closely with business stakeholders in the initial stages of a project can help you in

• Identifying the measures and targets correctly
• Ensuring better control during the test phase
• Influencing testing activities during the whole process
For example, accurate monitoring of test progress can be easily done if test designs and
implementation work products have been structured on the basis of stakeholder targets and
measures. These targets also enable you to deliver traceability for single or multiple test levels.
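A concrete way to map test execution status against the plan is to compute a completion ratio per test condition. This is a minimal sketch, assuming planned and executed test counts per condition are available (for example, exported from a test management tool); the condition names are hypothetical.

```python
def progress_report(planned, executed):
    """Per-condition completion ratio: executed tests / planned tests."""
    report = {}
    for condition, total in planned.items():
        done = executed.get(condition, 0)  # conditions not started yet default to 0
        report[condition] = done / total
    return report

# Hypothetical counts per test condition.
planned = {"login": 10, "checkout": 20}
executed = {"login": 10, "checkout": 5}

print(progress_report(planned, executed))  # {'login': 1.0, 'checkout': 0.25}
```

A report like this lets the Test Manager spot at a glance which conditions lag the plan and where test control actions may be needed.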

What is Test Control?


Test control must be a continuous exercise. It includes comparison of actual test progress
versus planned progress and taking remedial actions if required.

In simple terms, test control refers to controlling the testing process in order to meet the goal of
the project like achieving a target percentage for test coverage or completing testing on a specific
date etc.

• It manages the testing process to accomplish defined objectives, strategies and overall
goal.
• In the process, you may need to revisit the test planning activities and modify them
suitably.
• Some of the possible actions that can be taken to control the testing process and bring it
back on track could be – addition of extra resources, reducing the scope of the release,
splitting the release into multiple releases etc.
• The action that needs to be taken to control the test process will depend on a number of
factors like stakeholders, development lifecycle, budget, project complexity etc.

Data provided by test control can be acted on correctly only if comprehensive planning information is available.

What should be included in test planning document and test control activities is discussed in
detail under Test Management.
What are Test Conditions & Test Analysis? Advantages,
Disadvantages & Level of Detail
In the ISTQB Foundation Level syllabus, test analysis and design are grouped into one topic.
However, the ISTQB Test Manager syllabus treats them as separate activities that can be
implemented together, in parallel, or as iterative tasks to produce the desired work products in
the test design phase.

Test analysis describes “what” should be tested, in terms of test conditions. In simple terms, a test condition is something that can be tested.

These conditions are recognized by analyzing these three factors:

1. Test objectives
2. Test basis
3. Product risks

These factors are the detailed targets and measures for success. It must be possible to trace the test analysis back to these three factors, as well as to any other success criteria specified by the stakeholders.

Conversely, it should be possible to trace the test conditions forward to test designs and other test work products as soon as they are created.

Test analysis for a given test level can be considered complete once the test conditions for that level have been defined.

Some of the techniques used to identify the test conditions include:

• Formal testing
• Risk based strategy
• Requirement based strategy

Both risk-based and requirement-based test strategies are analytical in nature. Depending on the testing level, the test conditions may or may not specify these:

• Variable values
• Information that forms basis of defining conditions
• Degree of documentation granularity or depth

Factors that determine the level of detail of Test Condition


When determining the level of detailing required for test conditions, a number of factors need
to be considered:
• Testing levels
• Detailing level and quality of test conditions
• System or software difficulty level
• Product risk and Project risk
• Correlation between test condition, what is being tested and method of testing
• Software development lifecycle being used
• Test management tool in use
• Detailing level of test design and other test work products like test documentation
• Understanding level and abilities of the test analysts
• Experience level of the organization as well as the test process (detailing level is directly proportional to experience)
• Accessibility of other stakeholders for discussion in case of difficulties

If test conditions are described in great depth, a huge number of test conditions will be created. Let us take the example of testing the checkout process of an e-commerce application.

In a general test condition, this will be specified as a single condition – “Test checkout”.

However, in detailed test condition documentation, this will be broken down into multiple test conditions – for example, one per payment option, currency or country.
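The expansion from one general condition into many detailed ones can be sketched as a simple cross product. This is only an illustration, assuming payment option and currency are the varying factors; the actual values are hypothetical.

```python
from itertools import product

# Hypothetical factors that vary across the checkout flow.
payment_options = ["card", "paypal", "cod"]
currencies = ["USD", "EUR"]

# The single general condition "Test checkout" expands into one detailed
# condition per combination of factors.
detailed_conditions = [
    f"Test checkout with {pay} payment in {cur}"
    for pay, cur in product(payment_options, currencies)
]

print(len(detailed_conditions))  # 6 detailed conditions instead of 1
```

This also shows why detailed documentation grows quickly: each additional factor multiplies the number of conditions.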

Advantages of describing test conditions in a detailed manner
• Greater flexibility in correlating other test work products like test cases to test conditions
and objectives. This in turn enables better and in-depth control and observation for the
Test Manager.
• Helps in defect prevention, as explained at Foundation Level, because test conditions are defined quite early in a project, sometimes before detailed designs and the system architecture are available.
• Explains testing work products in terms that are easily understood by the stakeholders.
They may not comprehend the test cases, test basis or the basic figures like number of
times a test case has been executed.
• Impacts other testing as well as development activities.
• Optimizes test design, test implementation, test execution and test work products by
covering specified measures and targets in detail.
• Enables transparent horizontal traceability in the test level

Disadvantages of describing test conditions in a detailed manner
• Detailing is time consuming
• Sticking to the plan is difficult in a dynamic environment
• Defining and implementing test levels consistently across the team is challenging
When to describe test conditions in great detail?
• Lightweight test design documentation, such as checklists, is being used due to constraints like time, cost, or the development lifecycle
• Formal requirements documents or development work products that could serve as the basis for defining test conditions are unavailable
• The project is so complex that the required level of control cannot be achieved just by specifying test cases

When to describe test conditions in less detail?


A low level of detail in test conditions is appropriate when the test basis can easily be related to the test design work products.

These are some of the situations where this may be the case:

• Testing at component level
• Simple projects having hierarchical relations between test conditions and test cases
• Acceptance testing, where tests can be defined with the help of use cases
What is Test Design? When to create Test Design?
Test design is the process that describes “how” testing should be done. It identifies test cases by enumerating the steps of the defined test conditions, using the testing techniques defined in the test strategy or test plan.

The test cases may be linked to the test conditions and project objectives directly or indirectly
depending upon the methods used for test monitoring, control and traceability.

The objectives consist of test objectives, strategic objectives and stakeholder definition of
success.

When to create test design?


Test design for a specified level can be created once the test conditions are defined and sufficient information is available to create high- or low-level test cases.

For lower level testing, test analysis and design are a combined activity. For higher level testing, test analysis is performed first, followed by test design.

There are some activities that routinely take place when the test is implemented. These activities
may also be incorporated into the design process when the tests are created in an iterative
manner.

An example of such a case is creation of test data.

Test data will definitely be created during the test implementation. So it is better to incorporate it
in the test design itself.

This approach enables optimization of test condition scope by creating low or high level test
cases automatically.
What is Test Implementation? Advantages & Disadvantages of early implementation
Test implementation is the practice of organizing and prioritizing tests. This is carried out by
Test Analysts who implement the test designs as real test cases, test processes and test data.

If you are following the IEEE 829 standard, you must also define these parameters:

• Test inputs
• Expected results for each test case
• Steps to be followed for each test process

These three are documented together and test data is stored in the form of database tables, flat
files, etc.

Ensuring that the team is prepared for executing the test design is an important part of test
implementation process.

Some of the checks that could be performed to confirm that the team is ready to execute tests include:

• Ensuring that the test environment is in place
• Ensuring every test case is well documented and reviewed
• Putting test environment in a state of readiness
• Checking against explicit and implicit entry criteria for the specified test level
• Describing test environment as well as test data in great detail
• Performing code acceptance check by running it on test environment

The granularity, and related complexity, of the tasks taken up during test implementation is often influenced by the granularity of test work products such as test cases and test conditions.

For example, if the tests are to be documented for reuse in future regression testing, the test documents will record a step-by-step description of how to execute each test.

This detailed explanation will enable other software testers to conduct the test reliably and
consistently irrespective of their expertise.

Similarly, if regulatory guidelines are applicable to the test procedure, test compliance to
relevant standards must be documented as evidence.

Test Managers are also responsible for creating a schedule for test execution, detailing the order of execution of both automated and manual tests.
They should diligently check for constraints like risks and priorities that may necessitate the tests being executed in a specific order or using specific equipment. Test Managers should also check for dependencies on test data or the test environment.
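The scheduling concern above – higher-priority tests first, but never before their dependencies – can be sketched as a small ordering routine. This is a simplified illustration; the test names, priorities and dependency structure are hypothetical.

```python
def schedule(tests):
    """Build an execution order that respects dependencies while running
    higher-priority tests as early as possible.

    tests: {name: {"priority": int, "depends_on": [names]}}
    """
    done, order = set(), []
    while len(order) < len(tests):
        # Tests whose dependencies have all been scheduled already.
        ready = [n for n, t in tests.items()
                 if n not in done and all(d in done for d in t["depends_on"])]
        if not ready:
            raise ValueError("cyclic dependency in test schedule")
        nxt = max(ready, key=lambda n: tests[n]["priority"])
        order.append(nxt)
        done.add(nxt)
    return order

tests = {
    "smoke":       {"priority": 9, "depends_on": []},
    "performance": {"priority": 8, "depends_on": ["smoke"]},
    "ui_polish":   {"priority": 2, "depends_on": ["smoke"]},
}
print(schedule(tests))  # ['smoke', 'performance', 'ui_polish']
```

A real schedule would also account for shared test data and environment availability, but the same dependency-aware ordering applies.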

Disadvantages of early test implementation


Implementing the tests early may have some disadvantages too.

• For example, if Agile lifecycle has been adopted for product development, the code itself
may undergo drastic changes between consecutive iterations. This will render the whole
test implementation useless.
• In fact, any iterative development lifecycle will affect the code between iterations, even if
it is not as drastic as that in the Agile lifecycle.
• This will make pre-defined tests obsolete or require continuous and resource intensive
maintenance.
• Similarly, even in case of sequential lifecycles, if the project is badly managed and
requirements keep changing even when project is in an advanced state, early test
implementation can be rendered obsolete.

Therefore, before starting the test implementation process, Test Manager should consider
these important points:

• Software development life cycle being used
• Features that need to be tested
• Probability of change in requirement late into project lifecycle
• Possibility of changes in code between two iterations

Advantages of early test implementation


Early test implementation offers some advantages too.

• Concrete tests, for example, deliver ready examples of appropriate behavior of the
software if documented according to the test conditions.
• Domain experts find it easier to verify the concrete tests rather than non-concrete
business rules, which further enables them to detect faults in software specifications.
What is Test Execution?
After a test object has been delivered and entry conditions for test execution are met, the test
execution phase begins. Ideally, tests should be conducted as per the defined test cases.
However, the Test Manager may allow the testers to perform additional tests for interesting
and new behaviors observed during testing.

For effective and efficient test execution, these conditions must be fulfilled before actual test
execution begins:

• Designing or defining the tests must be complete
• Test tools, especially test management tools, must be ready
• Processes for tracking the test results, including metrics, must be working
• Every team member must understand the data to be tracked
• Criteria for logging tests and reporting defects must be published and available to all team
members

If the test strategy in use is even partly reactive, extra time should be allocated for applying defect-based and experience-based techniques.

Obviously, if any failure is detected during such unplanned testing, variations from the
predefined test cases must be documented, so that they may be reproduced in future.

Automated tests always follow the written instructions, without any deviation.

The fundamental responsibility of a Test Manager in the test execution phase is to monitor test progress and map it against the test plan.

If any deviation is observed, the Test Manager must initiate and execute test control activities
to pilot the testing towards a successful finish, in accordance with the defined objectives,
strategies and mission.

The Test Manager may employ backward traceability from test results to test conditions, test
basis and finally test objectives and forward traceability from test objectives to test results.
What Are Test Closure Activities? Evaluating Exit Criteria
and Reporting
Test closure activities are those activities performed at the end of the testing process, usually after the product has been delivered – generating the final test report, for example. It is essential to ensure that the processes for delivering the source information needed to evaluate exit criteria and report on them are available and effective.

Test planning, monitoring and control also includes explanation of information requirements
and techniques used for collecting the information.

Right from test analysis phase to execution, through design and implementation, Test Manager
must ascertain that the information is being provided by the team members in a correct and
timely way. This is necessary for efficient assessment and reporting.

The frequency and depth of reports depend on the project as well as the organization. Both should be decided during test planning, after discussions with the concerned project stakeholders.

What Are Test Closure Activities?


After the test execution phase is declared finished, the important outputs should be captured and archived or handed over to the concerned people.

Together these tasks form the test closure activities, which fall into these four key groups:

1. Checking for completion of tests

Here the Test Manager ensures that all test work has actually been completed. For instance, each planned test must have been either run or deliberately skipped.

All known bugs must be corrected, deferred, or acknowledged as permanent limitations. Where bugs have been corrected, the corrections must be tested as well.
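The completion check can be expressed as a simple audit over the test and defect records. This is a minimal sketch; the status and resolution values are hypothetical labels, not a standard scheme.

```python
def completion_issues(planned_tests, bugs):
    """Return a list of items that block test closure: planned tests that
    were neither run nor deliberately skipped, and bugs with no accepted
    resolution."""
    issues = [f"test not executed: {t['id']}"
              for t in planned_tests
              if t["status"] not in ("run", "skipped")]
    issues += [f"bug unresolved: {b['id']}"
               for b in bugs
               if b["resolution"] not in ("fixed", "deferred", "known limitation")]
    return issues

planned = [{"id": "T1", "status": "run"}, {"id": "T2", "status": "pending"}]
bugs = [{"id": "B1", "resolution": "fixed"}]
print(completion_issues(planned, bugs))  # ['test not executed: T2']
```

An empty result from a check like this is what lets the Test Manager declare the completion criterion met.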

2. Handing over test objects

The relevant work products must be passed on to the relevant people. For instance, known
bugs must be conveyed to the system maintenance team.

Tests and its setup information must be conveyed to the maintenance testing team. Sets of
manual as well as automated regression tests must be recorded and passed on to the system
maintenance team.
3. Lessons Learned

An important component of test closure activities is the meetings that discuss and document
lessons learned from the testing as well as the complete software development life cycle.

These discussions focus on ensuring that good processes are repeated and poor ones are
eliminated in future. If some issues remain unresolved, they become part of project plans.

Some of the areas considered in future test plans include:

1. Was a broad spectrum of users involved in the quality risk analysis? For example, unexpected defects are often discovered late in the project.
o This could have been avoided with broader user representation in the quality risk analysis sessions.
o So, in future projects, more users would be included in these sessions.
2. Were the test estimates accurate? If, for example, estimates were significantly off the mark, future estimation tasks must address the underlying reasons, such as inefficient testing.
3. What were the outcomes of the cause-and-effect analysis of the defects, and what trends did they display?
o For instance, if change requests were proposed late in the project, affecting the quality of analysis and development, the Test Manager should investigate the trends that suggest flawed methods.
o These trends could be anything: skipping a test level that could have identified defects sooner, use of new technologies, changes in team members, lack of expertise, etc.
4. Is there any scope for improving test processes?
5. Was there any unexpected deviation from the test plan that should be incorporated into future test planning?

4. Archiving

Test documents, such as test reports and logs, and work products must be archived in the configuration management system.

For example, both the test plan and the project plan – with a clear relationship to the version of the system that was tested – must be available in the planning archive.

The tasks mentioned above are very important but they are usually missed by the testing teams.
So, they must be clearly built into the test plan.

One or more of these tasks may be left out due to any of these reasons:

• Untimely reassignment
• Removal of team members
• Demand for resources for other projects
• Team fatigue
To ensure inclusion of these tasks in the test plan, the contract must explicitly mention them.

Who are the stakeholders in software testing? How to identify them?
In general, a stakeholder is someone who has an interest or is concerned with the outcome of
the project or activity or decision. This could be an individual, a group or an organization.
Stakeholders can impact and / or be impacted by the outcome of the project. Test Managers must
be able to identify software testing stakeholders and manage them effectively.

The primary responsibility of a Test Manager is to obtain resources like people, infrastructure,
hardware, software, etc. and make maximum use of them to perform the testing processes.

These processes are usually part of an internal project led by Engineering or IT managers, which
has to deliver the system or software or application to be used internally or externally.

Test Managers are concerned only with the testing processes. As these test processes enhance
value of the product by supporting its overall success or averting severe failure, Test Manager
needs to plan and control the test activities with this in mind.

The test processes, associated tasks and work products must be derived from the stakeholders' requirements, the requirement specifications and the software development life cycle.

People who are interested in any of the following activities are the testing stakeholders:

• Testing activities on the whole
• Testing work products
• Final product quality

A stakeholder may be involved in any of these ways:

• Having an explicit or implicit interest in test activities
• Receiving test work products
• Being explicitly or implicitly affected by the deliverable quality

Who the stakeholders are in a testing process depends on the organization, project, product, etc.

Some of the roles that may be stakeholders include:

• Product development leads, team and managers – They are responsible for implementing the
tests, receiving the test results and taking actions, like fixing bugs, based on the results.
• System and Database designers and architects – They design the software, receive test results
and take action based on test results.
• Business and marketing analysts – They define the product features and their expected quality.
They also contribute in defining test coverage, analyzing test results and taking decisions on the
basis of those results.
• Senior-level management, product managers and project sponsors – They contribute in
defining test coverage, analyzing test results and taking decisions on the basis of those results.
• Project Managers – They lead their projects to successful completion while achieving the
objectives, balancing required quality, and staying within schedule and budget. They are also
responsible for acquiring the necessary resources for testing activities and working in
coordination with Test Managers in planning, monitoring and controlling test activities.
• Technical, customer and help desk support staff – They provide support to the customers and
end-users who use the delivered software.
• Indirect and direct users – They are the users who use the software directly, receive output
from the software or get support from the software.

The stakeholders list discussed here is not exhaustive. As a Test Manager you need to identify
the stakeholders who are relevant and necessary for your project.

There may be others, like account managers and business unit heads, who are also
stakeholders in the project. Apart from the successful delivery of the project, the business unit
head may also have an additional interest in the profitability of the project while minimizing the
cost of quality and other costs.

The Test Manager also needs to evaluate the specific relationship each stakeholder has with the
testing process and how the test team is fulfilling the stakeholder's requirements.

Besides finding the stakeholders, a Test Manager must make a list of other software development
life cycle activities and work products that have an effect over or are affected by testing process.
If this is not done, the test process may not be able to realize its optimal efficiency.
How Do Software Development Lifecycle Activities & Work Products Affect Testing?
Software testing is required to assess the quality of some or all of the work products
delivered by the software development lifecycle activities. The testing process itself exists within
the larger context of the development lifecycle.

Therefore, the Test Manager needs to design and pilot the test tasks in the context of how
development lifecycle activities and work products affect testing and vice-versa.

Take the example of organizations employing Agile development methodology.

• Here developers often practice test-driven development, creating automated unit tests and
continuously adding the tests and the code into a configuration management system.
• So the Test Manager, in coordination with development manager, must ensure that the testing
team is a part of these activities and aligned to them.

Reviewing the unit tests enables the testers to understand the software and its implementation,
as well as to suggest improvements to increase the tests' scope and impact.

The testers can also assess how they can incorporate their legacy automated tests, particularly
functional regression tests, into the configuration management systems.

As discussed earlier, the exact relationship between test tasks, software development lifecycle
tasks, stakeholders in testing and work products depends upon a variety of factors like
organization, project, software development lifecycle, etc. Testing is closely related to these:

• Requirements gathering and management – While determining test scope and estimating test
effort, the Test Manager should also watch for changes in requirements later in the project,
undertaking control activities to incorporate the changes in testing as well. It is advisable to
make Technical Test Analysts and Test Analysts a part of the requirement reviews.
• Project Management – Test Manager is responsible for finalizing the test schedule and resource
requirement in coordination with Technical Test analyst and Test Analyst and making it available
to the Project Manager. If there are any changes in project plan, Project Manager and Test
Manager must work together to take up test control activities to incorporate changes in project
plan.
• Managing product configuration, release and change – The test team, led by the Test Manager, is
responsible for outlining and describing testing processes and methods and defining them in the test
plan. The Test Manager may also ask the analysts to create build verification tests and
ensure version control throughout test implementation.
• Software development and maintenance – Test Manager is responsible for coordinating with
Development Managers to deliver test objects – complete with test content and test release
dates – and getting involved in defect management.
• Technical support – Test Manager must coordinate with Technical Support Manager to deliver
test results accurately during test closure and to review production failures so that
improvements in test process may be implemented. The technical support team must be aware
of observed failures and their solutions.
• Technical documentation development – Test Manager is responsible for working in
coordination with the Technical Documentation Manager to ascertain documents for testing are
provided on time and to manage defects pointed out in the documents.


Other Work Products In Testing
Senior management and test managers also create documents like Test Policy, Test Strategy,
Master Test Plan and Level Test Plan which are discussed in detail in later topics.

Many work products get created in the complete testing process, especially through Test
Analysts. Examples include specifications for test cases, defects reports, test logs, etc.

The Test Manager supports the Test Analysts in ensuring quality and consistency through
these steps:

• Determining metrics like rejected defects percentage for evaluating work product quality and
monitoring their correct usage during testing
• Selecting and customizing templates for documenting the work products
• Establishing standards for work products, like degree of detailing required
• Getting test work products reviewed by appropriate stakeholders using correct methods
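As a quick illustration of the first step, the rejected-defects percentage could be computed from a list of defect reports like this (a minimal sketch; the status values are assumptions, not a prescribed scheme):

```python
def rejected_defect_percentage(defect_reports):
    """Percentage of reported defects that were rejected,
    e.g. not reproducible or working as intended."""
    if not defect_reports:
        return 0.0
    rejected = sum(1 for d in defect_reports if d["status"] == "rejected")
    return 100.0 * rejected / len(defect_reports)

# Hypothetical defect reports from a test cycle
reports = [
    {"id": 1, "status": "fixed"},
    {"id": 2, "status": "rejected"},
    {"id": 3, "status": "open"},
    {"id": 4, "status": "rejected"},
]
print(rejected_defect_percentage(reports))  # 50.0
```

A high value can signal problems with test basis understanding or with defect reporting quality, which is why the Test Manager monitors the metric's correct usage during testing.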

The type of test documentation, its degree of detail and its specificity are influenced by
these factors:

• Development lifecycle used
• Standards to be followed
• Organizational and other regulations to be followed
• Product quality
• Project risks

Getting industry-standard templates for documenting testing work products is an important
consideration for any Test Manager. IEEE 829 is one of the most important sources because it
can be used in any industry.

However, it includes many levels of detail, so it must be customized as per an
organization's standards.

Regular use of templates enables seamless alignment of processes across the organization and
decreases the need to train staff on test work product documentation.

Test reports are created by the Test Manager. They are discussed under Testing Metrics.

How to align software testing activities with product / development lifecycle activities?
Testing must be an essential part of any project, irrespective of whether it uses any models for
software development or not. Test Manager should completely understand the system
development lifecycle being used in the organization so that testing activities may be correctly
aligned to the lifecycle.

Let us look at some of the models that may be used for developing software.

Table of contents

1. Sequential models
2. Iterative models
3. Agile model
4. Spiral model

Sequential models
Examples include V-model, W-model, Waterfall model, etc. As you must be aware, every model
has many developmental phases. Each phase has multiple activities associated with it, like
requirement gathering, system design, design implementation and finally testing – unit testing,
integration testing, system testing and acceptance testing.

In a sequential model, all the developmental activities of a phase must complete before the next
phase can begin.

Testing activities like planning, analysis, design and implementation for products using such
models happen simultaneously with project activities like planning, requirements analysis,
software design, database design and coding.

The extent of overlapping is decided according to the testing level. As explained in ISTQB
Foundation Level syllabus as well as in previous topics, test execution occurs sequentially.

Iterative models
They are also called incremental models. Examples include Rapid Application Development
(RAD), Rational Unified Process (RUP), etc. In iterative model, features that are to be
implemented are grouped together on the basis of some criteria like business priority, business
risk, etc.

After the grouping, the project phases as well as their work products and task are implemented
for each group, together. The phases for each feature group may occur sequentially or
simultaneously and the iterations may be sequential or overlapped.

When the project is initiated, testing activities like high-level planning and analysis are done
simultaneously with project activities like planning and business or requirements analysis.

Often the test levels themselves are implemented in an overlapping fashion.

This means that test levels start as early as practically possible and can continue even after
higher test levels have commenced.

Agile model
Examples include SCRUM, Extreme Programming (XP), etc. Agile product development
model has incremental lifecycle where increments are quite short, sometimes just two to four
weeks.

As in the case of iterative or incremental models, activities and work products for each iteration
must finish before the next iteration can start.

Testing in Agile models is same as that for iterative models but there is a greater overlapping of
the test activities, including execution, with product development activities.

However, all tasks of an iteration, which also comprise testing, must finish before the next
iteration can begin. Test Managers in an Agile environment must play a technical consultant's
role rather than a direct administrative role.

Spiral model
In Spiral model, prototyping is done initially to ascertain project feasibility as well as to try out
new design and implementation ideas.

The prototyping is done depending on the business priority level and the technical risks involved.
Prototypes help the teams identify technical difficulties that might otherwise go undetected.

After these problems are fixed, the project can adopt an iterative or sequential model.

Let us take an example of applying ISTQB testing process to a system using V-model for
development. The test process can be executed as follows:
• Project planning activities and system test planning tasks for the system can take place
together at the same time. Until system test execution and test closure activities are complete,
test control will continue to be in effect.
• Analysis and designing of system test activities can happen with defining specification for
requirements, system, high-level (architectural) design and low-level (component) design.
• With the progress of system design, activities related to system test implementation may also
begin. However, most of the activities would take place during programming and component
testing. A few days before system test execution starts, system test implementation tasks come
to an end.
• After system test entry conditions for the system are fulfilled or deliberately waived, system
test execution tasks can start. This means that at this stage, testing for individual components
as well as component integration is complete. System test execution extends until the test exit
conditions are also fulfilled.
• Assessment of system test exit conditions and reporting and recording of test results take
place continuously, all the way through the system test execution. The frequency of assessment
and reporting increases as the project deadline approaches.
• After system test exit conditions are fulfilled and system test execution is announced to be
complete, system test closure tasks are carried out. Sometimes acceptance testing may be
completed before system test closure related tasks can be completed.
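The entry-condition check described above could be sketched as follows; the criteria names and the waiver mechanism are illustrative assumptions, not part of the syllabus:

```python
# Hypothetical entry criteria for system test execution
entry_criteria = {
    "component testing complete": True,
    "component integration testing complete": True,
    "test environment ready": True,
    "smoke test of test object passed": False,
}

def may_start_execution(criteria, waived=()):
    """Entry conditions must be fulfilled or deliberately waived
    before system test execution tasks can start."""
    return all(met or name in waived for name, met in criteria.items())

print(may_start_execution(entry_criteria))  # False
# A conscious management decision to waive a condition:
print(may_start_execution(entry_criteria,
                          waived={"smoke test of test object passed"}))  # True
```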

Here we took the example of V-model above.

For an incremental or iterative model, the same activities are performed, but their scheduling and
extent may vary.

• For instance, it may be inefficient to implement the complete test environment at the start of
the project, rather than as needed, iteration by iteration.
• For any incremental or iterative model, the earlier planning is done, the greater the scope of
the fundamental test activities.
• Besides planning, test execution and then reporting for each project may be decided by the
lifecycle model that the team is using.

For instance, in an incremental lifecycle, reporting and post-iteration review activities for each
iteration should be done before the next iteration begins.

• Each iteration is treated as a small project in itself.
• This enables the team to adjust each iteration on the basis of what happened during the
previous iteration.
• As each iteration may be short and have time constraints, reporting and post-iteration reviews
could be short in terms of time and effort.
• However, they must not be skipped completely, so that overall test process may be tracked and
problem areas may be identified as soon as possible.
• Issues arising out of these processes can affect the next iteration if remedial actions are not
taken.

The test strategy may contain information about how testing activities are aligned with other
lifecycle activities.

The Test Manager is responsible for ensuring this alignment for each project at every test level,
as well as for any selected combination of development lifecycle and test processes. The
combinations may be selected during the test planning or project planning phase.

According to the requirements of the project, product or organization, test levels in addition to
those explained in the ISTQB Foundation Level syllabus may be needed, such as:

• Integration testing of hardware-software
• System integration testing
• Feature interaction testing
• Customer product integration testing

The Test Manager must ensure that every test level has these features defined:

• Feasible goals for every test objective
• Test items and the scope of the test
• Basis of the test and traceability or a method for evaluating test basis coverage
• Criteria or condition for test entry and test exit
• Deliverables of testing like reports of test results
• Test techniques to be used, clearly defined, along with documentation of how each technique
ensures the correct degree of coverage
• Test entry and test exit conditions, reporting of results along with measurement and test
metrics applicable to test objectives.
• Where relevant, test tools for each test task
• Test environments and any other resources that will be required
• People both internal and external to the test team who are responsible for the testing
• Where relevant, necessary compliance with standards, regulatory requirements, organizational
standards etc.
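A Test Manager might record these per-level definitions in a simple structure; the sketch below mirrors the checklist, and all field names are assumptions rather than a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TestLevel:
    """Attributes that must be defined for each test level (illustrative)."""
    name: str
    objectives: list = field(default_factory=list)   # feasible goals
    scope: str = ""                                  # test items and scope
    test_basis: str = ""                             # basis and traceability
    entry_criteria: list = field(default_factory=list)
    exit_criteria: list = field(default_factory=list)
    deliverables: list = field(default_factory=list) # e.g. test result reports
    techniques: list = field(default_factory=list)
    tools: list = field(default_factory=list)
    environment: str = ""
    responsible: list = field(default_factory=list)  # people internal and external
    standards: list = field(default_factory=list)    # compliance, where relevant

system_test = TestLevel(
    name="System test",
    objectives=["Verify end-to-end functional behavior"],
    entry_criteria=["Component integration testing complete"],
    exit_criteria=["No open critical defects"],
)
print(system_test.name)  # System test
```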
How to manage Non-Functional Testing?
Test Managers need to conduct non-functional testing as well. Failing to do so may result in
serious quality issues being detected only after the system's release, which could prove
catastrophic for the product. Common examples of non-functional testing include
performance testing, security testing, etc.

Many non-functional tests need a large investment in terms of time, effort and money. So,
depending upon the potential risks and the resources available, the QA Manager / Test Manager has to
decide which tests are relevant to an application.

Test Manager should distribute some of the test planning duties to Technical Test Analysts if
he/she is not technically equipped to handle all planning aspects. The general points the manager
may assign to the Technical Test Analyst include:

• Relevant test tools
• Stakeholder requirements
• Security
• Test environment
• Organization factors

How non-functional tests integrate with the software development lifecycle is another aspect that
Test Managers need to consider.

One of the most common mistakes here is conducting non-functional testing only after the
functional testing is complete. This can lead to delayed identification of severe non-functional
problems.

Therefore, the prioritization and sequencing of non-functional tests must be done as per the risks
involved. Normally, non-functional risks can be reduced most easily in the early phases of software
testing and development.

For instance, if user interface prototyping is done in the system design phase, its usability
assessment can enable early identification of defects. If usability assessment is deferred, the
potential defects can delay testing schedules.

In incremental lifecycles like Agile, one factor that negatively affects non-functional tests
needing sophisticated testing frameworks is the rate of change across iterations. Test design and
test implementation tasks that take longer than a single iteration must be done independently of
the iteration itself.
What is Non-functional testing (Testing of software product characteristics)?
In non-functional testing, the quality characteristics of the component or system are tested. Non-
functional refers to aspects of the software that may not be related to a specific function or user
action, such as scalability or security, e.g. how many people can log in at once. Like functional
testing, non-functional testing is performed at all test levels.

Non-functional testing includes:

• Reliability testing
• Usability testing
• Efficiency testing
• Maintainability testing
• Portability testing
• Baseline testing
• Compliance testing
• Documentation testing
• Endurance testing
• Load testing
• Performance testing
• Compatibility testing
• Security testing
• Scalability testing
• Volume testing
• Stress testing
• Recovery testing
• Internationalization testing and Localization testing

• Reliability testing: Reliability Testing is about exercising an application so that failures are
discovered and removed before the system is deployed. The purpose of reliability testing is to
determine product reliability, and to determine whether the software meets the customer’s
reliability requirements.
• Usability testing: In usability testing, the testers basically test the ease with which the user
interfaces can be used, i.e., whether the application or the product built is user-friendly
or not.

Usability testing includes the following five components:
1. Learnability: How easy is it for users to accomplish basic tasks the first time they
encounter the design?
2. Efficiency: How fast can experienced users accomplish tasks?
3. Memorability: When users return to the design after a period of not using it, does the
user remember enough to use it effectively the next time, or does the user have to start
over again learning everything?
4. Errors: How many errors do users make, how severe are these errors and how easily can
they recover from the errors?
5. Satisfaction: How much does the user like using the system?
• Efficiency testing: Efficiency testing measures the amount of code and testing resources required by a
program to perform a particular function. Software test efficiency is the number of test cases
executed per unit of time (generally per hour).
• Maintainability testing: It assesses how easy it is to maintain the system, i.e., how easy it is
to analyze, change and test the application or product.
• Portability testing: It refers to the process of testing the ease with which a computer software
component or application can be moved from one environment to another, e.g. moving an
application from Windows 2000 to Windows XP. This is usually measured in terms of the
maximum amount of effort permitted. Results are measured in terms of the time required to
move the software and complete the documentation updates.
• Baseline testing: It refers to the validation of documents and specifications on which test cases
would be designed. The requirement specification validation is baseline testing.
• Compliance testing: It is related with the IT standards followed by the company and it is the
testing done to find the deviations from the company prescribed standards.
• Documentation testing: As per the IEEE, this concerns documentation describing plans for, or
results of, the testing of a system or component. Types include test case specification, test
incident report, test log, test plan, test procedure and test report. Testing of all the
above-mentioned documents is known as documentation testing.
• Endurance testing: Endurance testing involves testing a system with a significant load extended
over a significant period of time, to discover how the system behaves under sustained use. For
example, in software testing, a system may behave exactly as expected when tested for 1 hour
but when the same system is tested for 3 hours, problems such as memory leaks cause the system
to fail or behave randomly.
• Load testing: A load test is usually conducted to understand the behavior of the application under
a specific expected load. Load testing is performed to determine a system’s behavior under both
normal and at peak conditions. It helps to identify the maximum operating capacity of an
application as well as any bottlenecks and determine which element is causing degradation, e.g.,
if the number of users is increased, how much CPU and memory will be consumed, and what will the
network and bandwidth response times be.
• Performance testing: Performance testing is testing that is performed, to determine how fast
some aspect of a system performs under a particular workload. It can serve different purposes
like it can demonstrate that the system meets performance criteria. It can compare two systems
to find which performs better. Or it can measure what part of the system or workload causes the
system to perform badly.
• Compatibility testing: Compatibility testing is basically the testing of the application or the
product built with the computing environment. It tests whether the application or the software
product built is compatible with the hardware, operating system, database or other system
software or not.
• Security testing: Security testing basically checks whether the application or the product
is secure: could anyone hack the system or log in to the application
without any authorization? It is a process to determine that an information system protects data
and maintains functionality as intended.
• Scalability testing: It is the testing of a software application for measuring its capability to scale
up in terms of any of its non-functional capability like load supported, the number of transactions,
the data volume etc.
• Volume testing: Volume testing refers to testing a software application or the product with a
certain amount of data. E.g., if we want to volume test our application with a specific database
size, we need to expand our database to that size and then test the application’s performance on
it.
• Stress testing: It involves testing beyond normal operational capacity, often to a breaking point,
in order to observe the results. It is a form of testing that is used to determine the stability of a
given system. It puts greater emphasis on robustness, availability, and error handling under a heavy
load, rather than on what would be considered correct behavior under normal circumstances. The
goals of such tests may be to ensure the software does not crash in conditions of insufficient
computational resources (such as memory or disk space).
• Recovery testing: Recovery testing is done in order to check how fast and better the application
can recover after it has gone through any type of crash or hardware failure etc. Recovery testing
is the forced failure of the software in a variety of ways to verify that recovery is properly
performed. For example, when an application is receiving data from a network, unplug the
connecting cable. After some time, plug the cable back in and analyze the application’s ability to
continue receiving data from the point at which the network connection was lost. Restart
the system while a browser has a definite number of sessions and check whether the browser is
able to recover all of them or not.
• Internationalization testing and Localization testing: Internationalization is a process of
designing a software application so that it can be adapted to various languages and regions
without any changes. Whereas Localization is a process of adapting internationalized software for
a specific region or language by adding local specific components and translating text.
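As a tiny illustration of automating one of these checks, a crude performance assertion can time an operation against a response budget; the workload and the two-second budget below are assumptions, standing in for a real transaction and its requirement:

```python
import time

def within_response_budget(operation, budget_seconds):
    """Run the operation once and check it finishes within the budget."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds

# Hypothetical workload standing in for the transaction under test
print(within_response_budget(lambda: sum(range(1_000_000)), 2.0))  # True
```

Real performance, load or stress testing would of course use dedicated tools, many measurements and realistic workloads rather than a single timed call.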

How to manage Experience Based Testing? What are Test Sessions?
Experience based tests are adept at identifying defects that the other testing techniques might
have missed, thus acting as checks on comprehensiveness of the test techniques that were
employed.

But it poses challenges to the test management team as well. So the Test Manager must know
about the advantages and also the challenges of experience based testing methods, especially
exploratory testing.

Determining the coverage of these tests is difficult because the advance preparation is usually
minimal. If you need to reproduce these tests, you need to pay extra attention to
test management, especially if more than one tester is involved.

Test sessions
If you are focusing on exploratory testing, an effective way to handle experience based testing
would be to split the job into small periods known as test sessions of thirty minutes to two hours.
Test sessions enable the testers to focus on the tests that should be executed and the manager to
schedule and monitor the tests being conducted. Each test session must cover a single test
charter that has been defined by the Test Manager / QA Manager, verbally or in writing.

The charter outlines one or more test conditions to be tested in each session, enabling the
testers to focus on the job at hand even more. It also avoids overlapping if more than one person
is doing the exploratory testing concurrently.

Experience based testing may also be managed by incorporating such spontaneous and self-
directed testing techniques in the traditional predefined test sessions.

For instance, testers may be allowed and given time to test beyond the inputs, defined steps and
projected results. If these test sessions succeed in detecting defects or possible areas for more
testing, they may be incorporated into the predefined tests.

• At the onset of an exploratory test session, testers are responsible for confirming and executing the
required setup activities for these tests.
• While carrying out the tests, the testers gain information about the application and then design
tests and execute tests accordingly.
• The testers record the information gained regarding the application, as well as any defect
investigation that has been done and its results.
• If the testers feel that they or another set of testers may need to repeat the defect tests
conducted by them, they also capture test inputs, test conditions and test events in the log.

Usually a debriefing takes place after the test session to decide how subsequent test sessions
should be held.
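The session workflow above could be tracked with a minimal record per session; the field names below are assumptions, loosely following session-based test management practice:

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """One time-boxed exploratory session against a single charter."""
    charter: str                  # the test conditions to explore
    duration_minutes: int         # typically 30 to 120
    tester: str
    notes: list = field(default_factory=list)    # information learned
    defects: list = field(default_factory=list)  # defect investigations

    def log_defect(self, summary, steps):
        # Capture inputs and conditions so another tester can repeat the test
        self.defects.append({"summary": summary, "steps": steps})

session = TestSession(
    charter="Explore the checkout flow with invalid payment data",
    duration_minutes=90,
    tester="A. Tester",
)
session.log_defect("Crash on empty card number",
                   ["Open checkout", "Leave card field blank", "Submit"])
print(len(session.defects))  # 1
```

The debriefing can then review each session's notes and defects to decide how subsequent charters should be framed.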

What is Risk Based Testing? Identifying, Assessing, Mitigating & Managing Risks
Risk can be defined as the possibility of an adverse or unwanted result or occurrence. If a problem
or issue can potentially reduce stakeholders', users' or customers' opinion of the project's quality
or of its successful completion, risk is said to exist.

If the main impact of risk is on quality of the product, the problems are termed product risks,
quality risks or product quality risks.

Table of contents

1. What is Risk Based Testing?
2. How to identify risks?
3. How to assess risk?
4. How to mitigate risks?
5. How to manage lifecycle risk?
6. Project Risk Management

If the main impact of the problem is on success of the project, the problems are termed
planning risks or project risks.

A common challenge in test management lies in correct selection of a limited set of test
conditions from an almost unlimited set of tests, then allocating resources, effort and prioritizing
the tests effectively.

After the test conditions are chosen, the team must allocate requisite resources to create test
cases, then decide a sequence for the test cases so that overall test efficiency and effectiveness is
optimized.

Test Managers can use risk analysis to decide on appropriate test cases and their sequencing, but
many variable factors and constraints must be taken into account, even if it
means compromising on the test solution.

What is Risk Based Testing?
In risk based testing, the selection of test conditions is guided by the risks identified to
product quality. These product quality risks are also used to determine the allocation of effort
for testing the test conditions and for prioritizing the created test cases.

Many types of testing techniques – depending on documentation and formality level – are
available to perform risk based testing.

The main aim of risk based testing is to minimize quality risks to an acceptable level.

Achieving zero quality risk is close to impossible.

While carrying out risk based testing, the product risks or quality risks are detected and reviewed
during risk analysis of product quality with stakeholders.

After risk analysis, test team carries out test design, test implementation and test execution
activities with the objective of minimizing the risks.

Here, quality refers to all the product features, characteristics and behaviors that have the
potential to affect satisfaction experienced by users, customers and other stakeholders.

When defects are identified before product release, testing has reduced quality risk by identifying
the defects and providing methods to handle them.

Here are some examples of quality risks:

• Delayed response to user actions: Non-functional risk pertaining to performance
• Reports with wrong result or incorrect calculations: Functional risk pertaining to precision
• Complicated UI and input fields: Non-functional risk pertaining to usability & system adoption

If testing does not find defects, it has reduced quality risk by confirming that the
system performs correctly under the tested conditions.

Risk based testing has several techniques for implementation which differ in the documentation
level, type of documentation and formality level.

There are four main steps in risk based testing:

1. Identifying risks
2. Assessing the risks
3. Mitigating risks
4. Managing risk

The activities listed here are not sequential but overlapping. In the subsequent sections we will
look at all these activities in detail. The costs incurred in these activities are part of the cost
of quality.

For maximum efficiency, the team undertaking risk identification and assessment must comprise
members from all stakeholder groups, whether from the project as a whole or the product development
team.

However, in reality, some stakeholders take on the responsibility of representing some additional
stakeholders.

Let us consider a scenario: when software is being developed for a mass market, a small
representative sample of prospective customers can help identify the defects that would most
affect their use of the software.

Here, this small sample of prospective customers represents the complete customer base.

The testers should also be a part of risk identification process because of their experience in
performing quality risks analysis and defect identification.

How to identify risks?


The stakeholders can use any of the methods given below to identify risks:

• Interviewing experts
• Making independent reviews
• Existing templates of risks
• Retrospective meetings in projects
• Conducting risk workshops
• Brainstorming with all stakeholders
• Creating and using checklists
• Revisiting previous experiences

If the widest possible cross-section of stakeholders is used, the risk identification process is likely
to detect the large majority of the important product quality risks. The stakeholders' role in risk
based testing is thus quite significant.

The risk identification process also delivers by-products, i.e. it surfaces issues that are not
really risks to the quality of the product, for example generic concerns about the product or issues
in documents such as requirements specifications.

Project risks are important for the overall testing effort, and managing them should not be
limited to risk based testing.

How to assess risk?


After risks have been identified, their assessment can start. Risk assessment is the analysis and
evaluation of identified risks.

Risk assessment usually involves these activities:

• Classifying each risk


• Identifying probability of each risk occurring
• Impact of each risk
• Identifying or assigning risk properties like risk owner

Risk is classified on the basis of different parameters like performance, functionality,
reliability, etc.

Many companies classify risks using the quality characteristics of the ISO 25000 series of
standards, which replaced ISO 9126.

Other classification models are used as well. The checklist used in the identification of risks is
usually used in the classification of risks too.

There are many general checklists available freely, which are customized by the organizations
for their own use. If checklists are used for risk identification, risk classification is also done
simultaneously.

To find the risk level of a risk, you need to ascertain the probability of that risk occurring and
its impact should it occur. The probability of occurrence is the likelihood that the underlying
problem exists in the application under test.

Likelihood can also be determined by evaluating the technical risk level.


Issues affecting this likelihood or probability include:

• The technology being used and the teams being employed
• Training level of business analysts, project managers, architects and developers
• Level of disagreement among team members
• Teams physically spread across the country or globe
• Old versus modern approaches
• Testing tools and techniques
• Strength of technical and managerial leadership
• Non-availability of previous quality assurance reports
• High rate of change
• High rate of early defects
• Interfacing and integration problems

The impact of a risk when it occurs is the importance of its effect on all stakeholders like users
and consumers.

Some of the factors that influence product as well as project risks include:

• Rate at which the risky feature is used
• How crucial the feature is for achieving business objectives
• Loss of reputation
• Damage to business
• Probable financial, ecological or societal losses or liabilities
• Probability of criminal or civil legal sanctions
• Annulment of license
• Safety
• Non-availability of feasible workarounds
• Negative publicity because the product failure is prominent

Risk level can be evaluated qualitatively or quantitatively.

If the probability and impact of a risk can be quantified, the two numbers can be multiplied to
obtain a quantitative risk priority number.
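As a sketch, qualitative ratings such as "very low" through "very high" are often mapped to an ordinal scale so that a risk priority number can still be computed. The 1-to-5 mapping below is an assumption; organizations choose their own scales.

```python
# Assumed ordinal mapping for the qualitative scale (1..5).
SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_priority(probability: str, impact: str) -> int:
    """Multiply the two ratings to obtain a risk priority number."""
    return SCALE[probability] * SCALE[impact]

print(risk_priority("high", "very high"))  # 4 * 5 = 20
```

A higher number means the risk deserves earlier and more thorough testing.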

However, usually risk level can be determined only qualitatively. So probability of a risk
occurring can be called very low, low, medium, high or very high. But a percentage value for the
probability cannot be calculated to any degree of precision.

In the same way, the impact of a risk can be rated very low, low, medium, high or very high even
when it cannot be expressed in financial numbers. However, qualitative evaluation of risk levels
should not be considered inferior to quantitative methods.

In fact, incorrect use of quantitative evaluations of the risk levels can result in misguiding the
stakeholders regarding level to which the risks are actually understood and can be managed.
Risk analysis which isn’t based on broad and statistically validated risk data will be based on
stakeholders’ perspectives.

The stakeholders – project managers, architects, business analysts, programmers and testers –
tend to have their own subjective view of the risk probability and its impact.

So they will have different and at times highly varied opinions about each risk.

The process of risk analysis must incorporate some method of reaching a consensus, or at least an
agreed risk level. This agreed level could be set by management mandate or by calculating a
mathematical summary such as the mean, median or mode of the individual ratings.
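For instance, a sketch of deriving an agreed level from divergent stakeholder opinions (the ratings below are hypothetical), using Python's standard `statistics` module:

```python
from statistics import mean, median, mode

# Hypothetical likelihood ratings (1..5) from five stakeholders for one risk.
ratings = [2, 4, 4, 5, 3]

print(mean(ratings))    # 3.6
print(median(ratings))  # 4
print(mode(ratings))    # 4
```

The median or mode is often preferred over the mean because a single extreme opinion then cannot skew the agreed level.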

The risk levels should also be well distributed across the available range, so that the ratings can
provide clear guidance when deciding the sequence, priority and effort allocation for test cases.
Otherwise, risk levels cannot act as guideposts for risk mitigation.

How to mitigate risks?


The first step is analysis of quality risks i.e. identification and then evaluation of risks to
product quality. All test plans are based on this analysis of quality risks.

Test design, test implementation and test execution are carried out to mitigate the risks as per the
test plan. The effort allocated to developing, implementing and executing a test should be directly
proportional to the risk level.

This implies that thorough techniques, such as pairwise testing, are used for higher-level risks,
while less detailed techniques, such as time-boxed exploratory testing or equivalence
partitioning, are used for lower-level risks.
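A minimal sketch of effort allocation proportional to risk level (the feature names, risk levels and hour budget are invented for illustration):

```python
def allocate_effort(risk_levels: dict, total_hours: float) -> dict:
    """Distribute a test-effort budget in proportion to each area's risk level."""
    total_risk = sum(risk_levels.values())
    return {area: total_hours * level / total_risk
            for area, level in risk_levels.items()}

hours = allocate_effort({"payments": 20, "reporting": 10, "help pages": 2}, 160)
print(hours)  # {'payments': 100.0, 'reporting': 50.0, 'help pages': 10.0}
```

In practice the Test Manager would round the results and adjust them for minimum sensible coverage, but the proportionality principle is the same.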

Development and execution priority of a test also depends upon risk level.

Risk level should also influence these decisions:

• Should the project artifacts and test documents be reviewed?
• How independent should the testers be?
• What should be the experience level of the tester?
• How much retesting should be done?
• How much regression testing should be done?

While the project is under way, new information may change the quality risks the test team was
working with, or the impact level of those risks.

The test team must stay alert to such information and adjust the tests as required. Adjustments
such as identifying fresh risks, re-evaluating risk levels and assessing the efficacy of completed
risk mitigation tasks must be made at key milestones of the project.
For example, if a risk identification and assessment session was held on the basis of the
requirements specification during the requirements phase, it is advisable to re-assess the risks
after the design specification is complete.

Let us look at one more example. If the number of defects found in some part of the product is
much higher than anticipated during testing, the testers can conclude that the probability of
defects in that area is higher than originally estimated.

So, the probability of risks must be revised upward for that part of the product. This would also
raise the extent of testing to be done for this part of the product.

Risks in product quality can be minimized even before execution of test cases begins.

For instance, if issues concerning the requirements are detected during risk identification
itself, they can be mitigated right away through requirements reviews.

As this happens before the subsequent phases of the product development lifecycle, it reduces the
number of tests needed during the subsequent quality risk testing process.

How to manage lifecycle risk?


Ideally, managing risk is a continuous effort through the complete product development
lifecycle. If available, documents pertaining to test policy or test strategy must explain the
following:

• The process to be followed for managing project as well as product risks in testing
• How risk management is integrated into all test levels
• The impact of risk management on each test level

In an experienced organization, the project team is highly aware of risks and committed to
managing risks at multiple levels, not only for testing.

In such a scenario, critical risks are dealt with at all test levels, as soon as they are identified.

For instance, if performance is a risk factor for product quality, performance testing is
performed at multiple levels, such as during design evaluation, unit testing and integration
testing.

Experienced organizations do not stop at identifying the risks. They also identify the source of
risks as well as the consequences.

Often, root cause analysis is performed to understand risk source in depth and apply
improvements in processes to avoid the defects from occurring. Risk mitigation is performed all
through the lifecycle.

In mature organizations, risk analysis takes into account the following factors:
• Related work activities
• System behavior analysis
• Risk assessment based on cost
• Analysis of product risk
• Analysis of risk related to end user
• Analysis of liability risk

In such organizations, risk analysis goes beyond software testing. The testing team proposes, and
becomes part of, risk analysis covering the whole program.

Most of the risk based testing techniques combine some methods to utilize risk level for
sequencing and prioritizing the tests.

In this way they also ensure that most of the important defects are identified at the time of test
execution and most of the essential components of the product are also covered.

Sometimes, all high-risk tests are executed before any low-risk test, in strict descending order
of risk, starting from the highest risk. This is termed the depth-first approach to risk based
testing.

Sometimes a breadth-first approach is used, i.e. a sample of selected tests is run across all
the identified risk areas. This ensures that every risk area is covered at least once.

Whether the test process adopts a breadth-first or a depth-first approach, the time allotted
for testing might run out before all the tests can be executed.
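The two orderings can be sketched as follows (the test names, risk areas and risk levels are hypothetical):

```python
from itertools import chain, zip_longest

tests = [("T1", "payments", 20), ("T2", "payments", 12),
         ("T3", "reporting", 15), ("T4", "login", 6)]

# Depth-first: run everything strictly in descending order of risk level.
depth_first = sorted(tests, key=lambda t: t[2], reverse=True)

# Breadth-first: take one test per risk area first, so every area is
# covered at least once, then continue down each area's list.
by_area = {}
for t in depth_first:
    by_area.setdefault(t[1], []).append(t)
breadth_first = [t for t in chain(*zip_longest(*by_area.values())) if t]

print([t[0] for t in depth_first])    # ['T1', 'T3', 'T2', 'T4']
print([t[0] for t in breadth_first])  # ['T1', 'T3', 'T4', 'T2']
```

If time runs out after three tests, depth-first would have skipped the "login" area entirely, while breadth-first has already touched it once.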

After executing risk based tests, the testers report to management on the level of risk that
remains untested.

This information helps management decide whether or not to do more testing.

If more testing is not done, the remaining risks must be handled by the customers, end users,
technical support staff, help desk and operational staff or a combination of one or more of them.

While tests are being executed, risk based testing enables the project manager, product
manager, senior managers and other stakeholders to monitor and control the development
lifecycle.

Keeping a tab on the development lifecycle enables them to make product release decisions
based on the residual risk levels.

However, to allow the stakeholders to monitor the status, the Test Manager must provide risk
based testing results in a way that is easily understandable.

Project Risk Management


Planning for project testing must also cover potential risks associated with a project. The
procedure for identifying such risks is explained above in the section on identifying risks.

The detected risks must be communicated to the project manager so that he/she takes steps to
mitigate them as much as possible.

The test team may not be able to mitigate all the risks but these risks can be taken care of:

• Testing environment readiness
• Testing tool readiness
• Availability of well-qualified testing staff
• Unavailability of testing standards, techniques and rules

Managing project risks includes the following:

• Organizing testware
• Testing the test environments before they are actually used
• Testing preliminary product versions
• Using difficult test entry conditions
• Strict adherence to testability requirements
• Being part of review teams for preliminary project work products
• Managing changes to project based on defect detection
• Reviewing project progress and product quality

After risk identification and analysis, these are the four ways to manage risk:

1. Taking preventive measures that decrease the risk's likelihood and/or impact
2. Creating contingency plans to handle the risk if it actually occurs
3. Transferring the responsibility of risk management to a third party
4. Ignoring or accepting the risk

Some of the factors that should be considered while choosing the best possible option out of
these four include:

• Advantages and disadvantages of the option
• Cost of implementing the option
• Extra risks related to choosing the option

In case of contingency plans, risk trigger as well as plan owner must be predetermined.

If you are interested, you can review Risk Based Testing from the perspective of the Foundation
Level exam.
What are the different Risk Based Testing (RBT) Techniques?
Several techniques are available for Risk Based Testing (RBT). Many of them are quite
informal. For example, in some informal approaches the tester investigates risks to quality while
doing exploratory testing. This may drive the whole testing process, but it tends to focus more
on the likelihood of a defect occurring than on its impact.

Also, such approaches may not take into account input from cross-functional stakeholders. These
tests also run the risk of being subjective, relying on the experience and expertise of the
individual tester. So they are unable to take full advantage of risk based testing techniques.

Table of contents

1. Requirements and risk
2. Risk identification and analysis techniques
3. Stakeholders' role in risk management
4. Closure activities in risk based testing

To get the full advantage of RBT techniques while minimizing costs, testers and test managers
use lightweight approaches to risk based testing.

Such approaches combine the flexibility offered by informal techniques with the consensus-building
capacity found in formal approaches.

Some risk based testing techniques are given below:

• Product Risk Management (PRisMa)
• Pragmatic Risk Analysis and Management (PRAM)
• Systematic Software Testing (SST)

Some of the additional characteristics they possess are:

• They have matured over time inside organizations where efficiency was the most important
parameter, through testers' experience with risk based testing techniques.
• They depend heavily on including a wide cross-section of people from all stakeholder groups,
covering both business and technical aspects, in the early stages of risk identification and
analysis. They are most effective when applied during the initial phase of the project.
• At that stage they can reduce quality risks significantly, and the work products of risk
analysis may influence the design specification as well as the implementation.
• The output generated is usually in a matrix or tabular form, which can readily act as the
foundation for test planning and test case creation.
• This also supports later test monitoring and analysis.
• It also makes reporting on residual risks easier.
Requirements and risk
It must be noted that some techniques like Systematic Software Testing can be used only after
requirement specifications are available to act as input. This ensures that all the requirements
have been covered in risk based testing.

However, requirement specifications may not include all potential risks, especially the non-
functional ones. It is the responsibility of Test Managers that these risks do not remain neglected.

• If the requirements specification contains well-written, prioritized requirements, a strong
relationship between requirement priority and risk level is usually observed.
• PRAM and PRisMa use a judicious combination of risk based and requirements based
strategies.
• They primarily use requirements specifications as input but can use stakeholder inputs as
well.
• The risk identification and analysis process can also be used to build agreement among
stakeholders on the right test approach to adopt.
• However, the stakeholders need to allocate time for participating in group discussions or
one-on-one sessions.
• If too few stakeholders, or too few stakeholder groups, participate, gaps in the risk analysis
may result.
• Also, stakeholders may not always agree on the risk levels.

So the group leader needs to guide the discussion in such a way that a consensus is reached
amicably. The leader must possess all the qualities of an expert moderator to do this
successfully.

Risk identification and analysis techniques


Lightweight risk analysis techniques are quite similar to the formal ones. They highlight the
technical or business risks by weighing the probability of each risk occurring and the factors
affecting that probability.

However, the only two factors used by these techniques are the likelihood of a risk and its impact.
Also, they use simple qualitative judgements and scales to reach a decision.

This lightweight approach makes the techniques flexible, applicable to a wide range of
industries, accessible to teams with varied experience and expertise level.

In case of more formal and heavyweight approach, test manager can use any of the following
risk identification and analysis techniques:

• Cost of exposure – here, three pieces of information are gathered about each quality risk.
First, the likelihood (as a percentage) that the risk turns into an actual failure. Second, the cost
of the loss if that failure occurs in production. Third, the cost of testing for that failure.
• Hazard analysis – it tries to detect the hazards underlying each risk.
• Quality Function Deployment (QFD) – it addresses the quality risks that arise when the
requirements gathering team fails to correctly understand the requirements of the users or
customers.
• Fault Tree Analysis (FTA) – here the actual failures observed during the complete
development lifecycle are put through rigorous root cause analysis until all the causes
are known.
• Failure Mode and Effect Analysis (FMEA) – FMEA has many variants too. Here the potential
risks to quality, their probable causes and their effects are ascertained. Then ratings for
detection, priority and impact are also assigned.
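As a hedged sketch of the cost-of-exposure calculation (all figures below are invented), testing for a risk pays off when the expected production loss exceeds the cost of testing for it:

```python
def cost_of_exposure(likelihood: float, loss_if_failure: float,
                     cost_of_testing: float) -> dict:
    """Compare the expected production loss with the cost of testing the risk."""
    expected_loss = likelihood * loss_if_failure
    return {"expected_loss": expected_loss,
            "testing_pays_off": expected_loss > cost_of_testing}

# A 10% likelihood of a failure costing 50,000, versus 2,000 of testing effort.
result = cost_of_exposure(likelihood=0.10, loss_if_failure=50_000,
                          cost_of_testing=2_000)
print(result)  # {'expected_loss': 5000.0, 'testing_pays_off': True}
```

Real cost-of-exposure analyses compare several mitigation options, but the core arithmetic is this expected-loss comparison.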

The technique to be utilized and the level of formality for performing risk based testing are
decided by considering many factors, like the project specifications, the processes to be adopted
and the product to be delivered.

For instance, an informal technique like the exploratory method is suited to testing a quick fix
or patch. In an Agile approach, however, quality risk analysis must be fully integrated into each
iteration at its onset. The risks must also be documented and tracked as part of each user story.

For projects where safety or the outcome is critical, risk based testing must be highly formal
and documentation-intensive.

The technique chosen also decides the inputs required, processes to be followed and outputs
obtained.

Some examples of inputs used in risk based testing include:

• Project specifications
• Stakeholders experience
• Data from previous projects

Some examples of processes in risk based testing are:

• Identifying risks
• Assessing risks
• Controlling risks

Some examples of outputs in risk based testing are:

• List of risks to quality
• List of risks to the project
• List of risks to testing
• Risk level for each risk
• Optimum distribution of resources for testing
• Shortcomings in the input documents
• Points to be considered for each risk
Stakeholders' role in risk management
Having the right group of stakeholders engaged during risk identification and assessment is
essential for accomplishing risk based testing successfully. This is because each stakeholder has
his or her own perception of what a quality product is, along with their own priority items and
areas of importance, depending on their category.

There are two categories of stakeholders – business and technical.

End users, customers, help desk personnel, technical support executives, operations staff, etc. are
the business stakeholders. They have knowledge about the users and customers and hence
understand their concerns and expectations. So they can assist in detecting business risks and
their impact.

Product developers, database administrators, network administrators, system architects, etc. are
the technical stakeholders. They know the undocumented behaviors (or simply the specific
sequences of steps) that can cause the software to fail. So they can identify technical risks and
the likelihood of their occurrence.

Some of the stakeholders like Subject Matter Experts (SME) can possess both technical and
business view. This happens when they have knowledge about both business and technical
aspects because of their work profile.

During risk identification, the list of risks can grow quite long as each stakeholder shares his or
her own perception of a risk item.

It is unnecessary to argue over whether a risk should be included in the list: if even one person
considers an item a potential risk, it is a risk. What is important is to reach agreement on the
level of that risk.

In lightweight test techniques, ranking each listed risk is an essential part of the whole
procedure. All stakeholders must use the same rating scale, and the ranking is assigned once the
group agrees on it.

Engaging the stakeholders in the analysis of quality risks is advantageous to the Test Manager. In
case of projects with incomplete requirement specifications, stakeholders can provide guidance,
thereby improving the capability of risk detection teams.

This is because both requirement specifications and stakeholder inputs are being used in the
analysis process, making it comprehensive.

Closure activities in risk based testing


After the risk based testing is complete, teams must evaluate their success level at the time of
test closure.
To do this, they must answer the following questions:

• Did testing detect a higher percentage of the important defects than of the less important ones?
• Were the important defects detected in the initial stages of test execution?
• Was the team capable of explaining the detected risks to the stakeholders?
• If the team left out any tests, did they have a lower risk level than the tests that were
executed?

These questions must be answered by the whole team using predefined metrics.

If the testing was successful, the answer to all these questions must be yes.

Test Managers must strive to establish processes that improve upon the metrics used to answer
these four questions and improve the risk analysis process efficiency.
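A sketch of computing such metrics from a defect log (the importance labels and detection days below are invented for illustration):

```python
# Each entry: (importance of the defect, day of test execution on which it was found)
defects = [("important", 1), ("important", 2), ("important", 3),
           ("minor", 8), ("minor", 9)]

important = [day for label, day in defects if label == "important"]
minor = [day for label, day in defects if label == "minor"]

share_important = len(important) / len(defects)      # share of important defects
avg_day_important = sum(important) / len(important)  # how early they were found
avg_day_minor = sum(minor) / len(minor)

print(share_important)                    # 0.6
print(avg_day_important < avg_day_minor)  # True: important defects found earlier
```

In this hypothetical log, the majority of defects found were important ones and they were found earlier than the minor ones, which is what successful risk based testing should show.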

The manager also needs to consider the correlation among the predefined metrics, critical goals
accomplished by the testing team and management behavior with respect to test outcomes.

Test selection techniques – Requirements based, Model based, Checklists, Reactive testing
There are several techniques that can be used for choosing test conditions. The risk based
testing strategy, one of the techniques most commonly used by Test Managers for choosing tests,
was discussed in the previous section.

Requirements based testing is another important alternative for building and prioritizing test
conditions. Some of these techniques are discussed below.

Test selection techniques

• Requirements based testing
• Model based testing
• Checklists
• Reactive testing

Requirements based testing


It employs methods like cause-effect graphing, test condition analysis and ambiguity analysis.
Usually a checklist of common requirements defects is used to detect ambiguities in the
requirements and then remove them.

To determine the test conditions that must be covered, one must make an in-depth study of the
requirements document.

• If the requirements have a pre-allocated order of importance, that order can guide effort
distribution and test case ordering.
• If the order is not pre-allocated, risk based testing must be combined with the requirements
based approach to arrive at a correct effort allocation.
• Cause-effect graphing reduces a large testing problem into a manageable number of test cases.
• At the same time, it provides 100% coverage of the functionality in the test basis.
• The graph is also capable of revealing gaps during test case design itself, which helps in
early detection of defects in the overall software development lifecycle.
• However, producing the graphs with manual tools is very laborious.

A familiar problem in requirements based testing is requirements specifications that are
confusing, incomplete, difficult to test or at times not available at all. If the organization does
not address these issues, the testers must choose some other testing strategy.
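A minimal traceability check (the requirement IDs and test conditions below are hypothetical) that flags requirements not yet covered by any test condition:

```python
# Hypothetical test basis and traceability data.
requirements = {"REQ-1": "Login", "REQ-2": "Report totals", "REQ-3": "Export"}
test_conditions = {"TC-01": ["REQ-1"], "TC-02": ["REQ-2"]}

# Every requirement referenced by at least one test condition is covered.
covered = {req for refs in test_conditions.values() for req in refs}
uncovered = sorted(set(requirements) - covered)
print(uncovered)  # ['REQ-3'] still needs a test condition
```

Test management tools maintain this mapping automatically, but the coverage question they answer is exactly this set difference.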

Model based testing


• If the available requirements are incomplete, they can be augmented by creating models that
depict the actual usage of the system.
• Such models make use of user profiles, use cases, inputs to the system, outputs provided by the
system, etc.
• This enables the testers to cover reliability, functional, performance, security and
interoperability testing of the system.
• User profiles are created during test planning and review, on the basis of data available about
real product usage together with stakeholder inputs.
• They are then covered through test cases. Even if the profiles do not model the system exactly,
they are sufficient for testing purposes.

Checklists
Creating checklists in order to decide what is to be tested, in what quantities and the testing
order is also very common among Test Managers.

If the product is stable, the Test Manager may choose to create checklist that includes the
following items:

• Functional areas to be tested
• Non-functional aspects to be tested
• The database of existing test cases

These checklists also provide empirical values for the test data. However, the approach is
effective only for minor adjustments.
Reactive testing
• Reactive testing is also a common testing approach employed by Test Managers.
• As the name suggests, the test team concentrates on testing around the groups of bugs that are
discovered after delivery.
• Decisions about prioritizing the tests and allocating resources are fully dynamic, made as the
need comes up.
• This technique is effective only when combined with other testing techniques.
• If used alone, it may fail to detect issues in important application areas that happen not to
contain many defects.

How to perform test prioritization & effort allocation in the test process?
The techniques chosen by Test Managers should be integrated into both the project and testing
processes. Consider sequential development lifecycles like the V-model, where test selection,
effort allocation and test prioritization are done in the requirements phase itself, with scope for
adjustments as the project progresses.

On the other hand, in iterative lifecycles like Agile, testing progresses iteration by iteration.
Planning and test control strategies must anticipate how risks, project requirements, user
profiles, usage, etc. can evolve, and be ready to handle those changes accordingly.

Test effort allocation and test prioritization is usually decided during the test planning
phase itself so that it may guide the subsequent test processes successfully.

While executing the tests, software testers should follow the test sequencing fixed during the
planning phase. However, they should be open to updating the testing sequence depending on
knowledge gained after the initial plan was formally written.

Similarly, Test Manager / QA Manager should evaluate test results and test exit scenarios based
on factors like project risks, stakeholder requirements, usage, checklists, etc. that were used to
sequence the tests in the planning phase.

While reporting on the test results obtained and evaluating the test exit criteria, the Manager
should quantify how much testing has been completed.

This can be done by tracing the detected defects and the executed tests back to the test basis. So,
if defects are detected in risk based testing, the testers can evaluate the residual risks and decide
product release dates accordingly.

Testing reports must talk about the risks that were covered and those that remain uncovered.

While closing the testing process, Test Manager must measure the metrics that evaluate testing
outcomes against product quality and stakeholders’ expectations.

What is Test Policy? What does it contain?


Every organization has its own nomenclature and scope for the test documents created for test
management. The Test Policy is one of the key documents that exist in an organization. These are
some of the broad categories of documents that exist in most testing projects.

• Test Policy – It explains the goals that the organization wishes to achieve through testing
activities.
• Test Strategy – This document details the general testing methods used by the organization.
These test methods are independent of any project.
• Master Test Plan – Also called the project test plan, it explains project specific testing strategy
and test implementation.
• Level Test Plan – Also referred to as the phase test plan, this document gives details about the
testing activities to be performed for each test level.

The actual existence of each of these documents differs across projects and organizations. In
larger projects and organizations they may be split across multiple documents, while in smaller
projects and organizations they may be combined into a single document. Each document is
explained in detail here.

Larger, more formal organizations and larger projects tend to have more detailed documents
than smaller, less formal organizations and smaller projects.

Test Policy
The overarching objective of an organization in performing test activities is described in the
Test Policy document. It is created by senior test management in association with senior
managers from the stakeholder groups.

Sometimes, test policy is part of a wider quality policy adopted by the organization. In such
cases the quality policy will explain the overall aim of the management with respect to quality.

Test policy contents

Test policy is a short document, summarized at a high level that contains the following:

• Outline the advantages of testing and the business value delivered to the organization, which
justifies the cost of quality
• Define test objectives such as building confidence, detecting defects and reducing quality risks
• Describe how to measure test effectiveness and efficiency in fulfilling the test objectives
• Summarize the test processes used, e.g. with reference to the ISTQB fundamental test process
• Describe ways for the organization to improve its testing processes

The test policy must also cover the testing activities needed both for new development projects
and for maintenance of existing products.

What is Test Strategy? Types of strategies with examples


The testing techniques and methodologies most commonly used by the organization are
described in its test strategy. The Test Manager should be able to choose a suitable testing
strategy for the project based on the project requirements as well as the organization's needs.

In simple terms, test strategy contains the following information:

• How to use testing for managing project and product risks?


• How to divide testing process into different test levels?
• What are the high level testing activities?
• Which testing strategy should be used in which situation? Strategies can differ based on project
requirements like regulatory requirements, risk levels and different methodologies of software
development.
• General test entry and test exit conditions
• The activities and processes mentioned in the Test Strategy should align with the
organization's Test Policy.

Table of contents

1. Types of testing strategies


1. Analytical strategy
2. Model based strategy
3. Methodical strategy
4. Standards or process compliant strategy
5. Reactive strategy
6. Consultative strategy
7. Regression averse strategy
2. Details included in test strategy
3. Test strategy selection

Types of testing strategies


Some of the testing methodologies that may be part of an organization’s testing strategy are:
• Analytical strategy
• Model based strategy
• Methodical strategy
• Standards compliant or Process compliant strategy
• Reactive strategy
• Consultative strategy
• Regression averse strategy

Analytical strategy

Examples include risk-based testing and requirements-based testing. Here the testing team defines
the test conditions to be covered after analyzing the test basis, be it risks, requirements,
etc.

So, in case of testing based on requirements, requirements are analyzed to derive the test
conditions. Then tests are designed, implemented and executed to meet those requirements.

Even the results are recorded with respect to requirements, like requirements tested and passed,
those that were tested but failed and those requirements which are not fully tested, etc.

Model based strategy

In this technique, the testing team chooses an existing or expected situation and creates a model
for it, taking into account inputs, outputs, processes and possible behavior.

The models are also developed according to existing software, hardware, data speeds,
infrastructure, etc.

Let us consider the scenario of mobile application testing. To carry out its performance testing,
models may be developed to emulate outgoing and incoming traffic on mobile network, number
of active/inactive users, projected growth, etc.

Methodical strategy

Here test teams follow a predefined quality standard (such as ISO 25000), checklists or simply a
set of test conditions. Standard checklists can exist for specific types of testing (such as security
testing) or for particular application domains.

For instance, in the case of maintenance testing, a checklist describing important functions, their
attributes, etc. is sufficient to perform the tests.

Standards or process compliant strategy

Medical systems following US Food and Drug Administration (FDA) standards are good
examples of this technique.
Here the testers follow the processes or guidelines established by a standards committee or a
panel of industry experts to identify test conditions, define test cases and put the testing team in
place.

In the case of a project following the Scrum Agile framework, testers will create the complete test
strategy, from identifying test criteria and defining test cases to executing tests and reporting
status, around each user story.

Reactive strategy

Here tests are designed and implemented only after the real software is delivered, so testing is
based on defects found in the actual system.

Consider a scenario where exploratory testing is being used. Test charters are developed based
on the existing features and functionalities. These test charters are updated based on the results of
the testing by testers. Exploratory testing can be applied to Agile development projects as well.

Consultative strategy

As the name suggests, this testing technique uses consultations from key stakeholders as input
to decide the scope of test conditions as in the case of user directed testing.

Let us consider a situation where the compatibility of a web-based application with possible
browsers is to be tested. Here the application owner would provide a list of browsers and their
versions in order of priority.

They may also provide a list of connection types, operating systems, anti malware software, etc.
against which they want the application to be tested.

The testers may then use techniques such as pairwise testing or equivalence partitioning,
depending upon the priority of the items in the provided lists.

Regression averse strategy

Here testing strategies focus on reducing regression risks for functional or non-functional
product parts.

Continuing our previous example of web application, if the application needs to be tested for
regression issues, testing team can create test automation for both typical and exceptional use
cases.

They can even use GUI based automation tools so that the tests can be run whenever the
application is changed.
It is not necessary to use only one of the techniques listed above for a testing project.
Depending on the product and the organization's requirements, two or more techniques may be
combined.

Details included in test strategy


The final test strategy should include details about these factors:

• Test levels
• Entry as well as exit conditions for each test level
• Relationships between the test levels
• Procedure to integrate different test levels
• Techniques for testing
• Degree of independence of each test
• Compulsory as well as non-compulsory standards that must be adhered to
• Testing environment
• Level of automation for testing
• Tools to be used in testing
• Confirmation and regression testing
• Re-usability of both software and testing work products
• Controlling testing
• Reporting on test results
• Metrics and measurements to be evaluated during testing
• Managing defects detected
• Managing test tools and infrastructure configuration
• Test team members roles and responsibilities

Test strategy selection


The testing strategy selection may depend on these factors:

• Is the strategy a short term or long term one?


• Organization type and size
• Project requirements – Safety and security related applications require rigorous strategy
• Product development model

You can review how Test Strategy was covered as part of the Foundation Level syllabus.
What are Master Test Plans & Level Test Plan? Examples,
When to use
The master test plan is a document that describes in detail how the testing is being planned and
how it will be managed across different test levels. It gives a bird’s eye view of the key decisions
taken, the strategies to be implemented and the testing effort involved in the project.

Details provided by Master Test Plan


In simple terms, the master test plan for software testing provides the following details:

• List of tests to be performed


• Testing levels to be covered
• Relationship among different test levels and related coding activity
• Test implementation strategy
• Explanation of the testing effort, which is a component of the overall project
• Master test plan should align with test policy and test strategy. It should list any exceptions or
deviations and their possible impact

Example – If regression testing is always carried out in an organization prior to the release of a
product but the project at hand will have no regression testing, the master plan must point this
out with justification for skipping regression testing as well as impact, if any. It should also list
steps taken to mitigate any risks that arise from skipping regression testing, like releasing a
maintenance patch after two weeks.

The exact structure of the master test plan and its content depends on the following factors:

• Type of organization
• Documentation standards followed by the organization
• Level of project formality

Content of Master Test Plan


All master test plans for testing must include these:

• List of things to be tested


• List of items that will not be tested
• List of quality characteristics that will be tested
• List of product quality characteristics that will not be tested
• Test execution cycles
• Test budgets according to project or overall operational budget
• Testing schedule
• Correlation between testing cycles and release plan
• Interrelation between other departments and testing team
• Scope of each test item
• Predefined entry, continuation and exit conditions for each test level
• Risks associated with test project
• Management and control of testing
• Team members responsible for each test level
• Inputs and outputs for each test level

For small projects, only one or two formal test levels exist. Most other testing is informal
and may even be done as a beta testing process. In such cases the master plan may include the
test plans for the formal test levels.

Let us consider a situation where the only formal level is system testing, developers are
performing integration testing and customers are informally performing acceptance testing for a
beta release. In such cases the elements described here can be included in the system test plan.

Testing is never an isolated process; it always functions in relation to the other activities of the
project. If those project activities are not documented properly, the testing master plan may
need to include details about them as well.

To understand the above, let us consider a scenario where the process for test object delivery is
not formally documented due to undocumented configuration management process. The testing
master plan will then specify how the test team will receive the test objects.

Level Test Plans


As the name suggests, a level test plan explains in detail the testing activities that must be
performed for every test level or, sometimes, test type. Normally, the test levels listed in the
master test plan are expanded in the level test plans.

They would provide the testing schedule, milestones, test activities, test templates etc., which are
not given in the master plan.

Less formal projects usually have a single test plan that covers both the master and level test
plans.

Agile projects can have iteration testing plans for each iteration rather than plan for each test
level.

You can review the purpose and importance of test plans as it was explained in the Foundation
Level exam.

In the next topic we will delve into test estimation and estimation techniques
What is test estimation? Related Factors, Estimation
Techniques
It is necessary to have at least a rough estimation of total costs for testing activities as well as
test completion dates. Test estimation is used to estimate the effort, cost and timelines for
testing. Accurately estimating testing effort and timeline helps in planning the project better.

The test estimates must:

• Include inputs from experienced testers


• Have been approved by or have consensus of all participants
• Furnish quantified values of costs to be incurred, resources to be needed, tasks to be performed
and people to be involved
• Provide cost, time and effort estimation for every activity

Best practices in project management for estimating effort are well established. Test estimation
uses the same best practices in test activities required for a project.

It should include all testing activities given below:

• Test Planning, Test monitoring and control


• Test Analysis
• Test Design
• Test Implementation
• Test Execution
• Evaluating exit criteria and reporting
• Test closure activities

Management teams are usually most interested in test execution timelines because software
testing is usually on the critical path of a project plan.

But test execution estimates are difficult to produce and unreliable if software quality is
low or, worse, unknown.

The estimates are also affected by how familiar the estimator is with the system.

The number of test cases that will be required during testing is usually guesstimated, which
works only if the product has few defects.

So, all assumptions used in the estimation process must be documented.


Factors affecting Test Estimation
Many factors can affect cost incurred, effort required and duration of testing. Estimates must be
made by considering all possible factors, some of which are:

• Expected quality level of the product


• Size of system that must be tested
• Statistics from earlier testing projects, enhanced with standard data from test projects of other
organizations or industry standards
• Different processes like testing strategy, product lifecycle, precision of project estimates, etc.
• Infrastructure like testing tools, data and environments, project documents, work products from
testing that may be reused
• People like managers, technical and non-technical leaders, commitment level of senior
management, project team skill and stability, correlation with other teams, domain knowledge,
etc.
• Degree of complication in processes, technology used and stakeholder number and involvement
• Need for skill enhancement or infrastructure upgradation
• Requirement of new processes, tools or techniques to be developed
• Requirement of procuring customized hardware or large quantities of testware
• Compliance with new documentation standards
• Time sensitive test data

While making estimates, Test Manager should take into consideration the quality and stability
of software that will be delivered to them for testing.

If best practices like continuous integration and unit test automation are used during
development, around 50% of defects can be removed before the product reaches
testing.

Agile methodologies have also been reported to produce higher-quality products for testing by
removing defects early.

Test Estimation Techniques


Test Manager may take a top-down or bottom-up approach to test estimation using one or
more of these methods:

1. Prior experience and knowledgeable guesses


2. Work Breakdown Structure (WBS)
3. Holding multiple sessions like Wide Band Delphi on estimation
4. Organizational standards and adopted best practices
5. Ratio of number of project staff to number of testers
6. Historical data regarding cost, effort, duration, regression cycles, etc. from previous testing
projects
7. Industry benchmarks and models like function points, etc.
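As an illustration, two of the techniques above can be combined into a single reconciled figure. The sketch below is a hedged example in Python; the 40% historical ratio, the expert estimates and the averaging rule are all invented assumptions, not values prescribed by the syllabus:

```python
# Hypothetical sketch combining two estimation inputs: a historical
# ratio of test effort to development effort, and a Wide Band Delphi
# round of expert estimates. All figures are illustrative.

def ratio_based_estimate(dev_effort_days, historical_test_ratio):
    """Estimate test effort as a fraction of development effort,
    where the fraction comes from previous projects."""
    return dev_effort_days * historical_test_ratio

def delphi_estimate(expert_estimates_days):
    """Aggregate independent expert estimates by averaging after
    dropping the lowest and highest values."""
    trimmed = sorted(expert_estimates_days)[1:-1]
    return sum(trimmed) / len(trimmed)

dev_effort = 200                                   # person-days of development
ratio = ratio_based_estimate(dev_effort, 0.4)      # 80.0 person-days
delphi = delphi_estimate([70, 85, 90, 75, 120])    # mean of 75, 85, 90
final = (ratio + delphi) / 2                       # reconciled estimate
```

In practice the two figures would be discussed with the team rather than mechanically averaged, but the point stands: using more than one technique exposes disagreement between estimates early.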
Once an estimation is done, it must be reported to the management, together with supporting data
for arriving at those estimates.

There might be some discussions after this, resulting in estimate revisions.

In an ideal scenario, the final estimate should balance the expectations of both the
organization and the project team in terms of product quality, project schedules, budgets and
product features.

It must be remembered that any estimate is based on the data available at that point in time. In
the preliminary stages of a project, the amount of available information is quite limited.

If more relevant information is received as the project progresses, the estimates must be
updated to keep them correct.

You can review the explanation for estimation techniques in software testing given in the
Foundation level for testers against the one provided above for Test Managers, to see how they
differ in the knowledge expected from each level.

How to define, track, report & validate metrics in software testing?
Metrics help in measuring the current state of an activity or process. They help us set
benchmarks and targets. Measuring where you are currently helps you establish how much
further you need to go in order to achieve your goals. Test Managers must be able to define,
track and report test progress metrics.

"What gets measured gets done" is a common saying. It can safely be inferred that if something
is not measured, it risks not being done. So it is essential to establish quantifiable metrics for testing.

Table of contents

1. 4 categories of test activity metrics


2. Test progress metrics
3. Product quality risks metrics
4. Defect metrics
5. Test metrics
6. Test coverage metrics
7. Test planning, monitoring and control metrics
8. Test analysis metrics
9. Test design metrics
10. Test implementation metrics
11. Test execution metrics
12. Test progress metrics
13. Test closure metrics
14. Test control activities

4 categories of test activity metrics


Metrics for software testing activities can be grouped into the following:

• Project metrics – They measure how well the project is moving towards its exit conditions.
Examples include the percentage of test cases that passed, failed or were executed.
• Product metrics – They measure product characteristics like density of defects or degree of
testing.
• Process metrics – They measure the capability of the product development or testing processes.
Examples include the number of defects that testing has been able to uncover.
• People metrics – They measure the skill levels and ability of team members or whole teams.
Examples include adherence to schedule for test case implementation.

Any metric can belong to more than one of the categories listed above. For instance, if a chart is
prepared to record the rate at which defects are being reported every day, it can serve as a project,
product or process metric.

If no defects are reported for seven days, the project can safely be said to be moving toward exit
condition.

If no more defects are found in the product, that is a measure of product quality. If a huge
number of defects are detected in early phases of testing, it measures the test process ability.

It is very important to handle people metrics very carefully as they can easily be confused with
process metrics.

If that happens, the whole testing process can fail and people may lose confidence in their
managers as well as organizational capabilities. We will look at how to motivate testing teams
and assess them against established metrics in upcoming topics.

ISTQB Advanced Level Test Manager exam deals mostly with project metrics. A number of
these metrics measuring testing progress also measure process and product.

Metrics assist the testers in reporting test results and tracking testing progress consistently.
Test Managers usually present these metrics at meetings attended by stakeholders of different
levels.

As these metrics may be used to assess the overall project progress, it is necessary to be careful
while determining what metrics must be tracked, techniques for preparing reports on the
metrics and frequency of presenting the reports.
These are some of the points a Test / QA Manager should keep in mind:

• Metrics definition – Metrics that are defined, should be useful. Unimportant metrics should be
ignored, keeping in view the four categories discussed above. All stakeholders must concur with
the metrics definition to avoid any confusion when discussing measurements. Since a single
metric might give the wrong idea, metrics should be defined such that they balance each other
and provide the complete picture.
• Metrics tracking – Tracking, merging and reporting of metric results should be automated
to the extent feasible, to minimize the effort spent on these activities. Test
Managers must watch for deviations from expected results and incorporate them into
the report. If possible, the causes of the deviations should also be mentioned.
• Metrics reporting – Metrics are reported to the senior management for project management.
So, the report should ideally be in presentation form and highlight the important metrics values
as well as evolution of metrics over a period of time.
• Metrics validation – Test Manager is responsible for verification of data and values being
presented in the reports. Test Managers should also analyze it for correctness as well as trends
being reported.

Test progress metrics


Test progress is observed based on these 5 factors:

• Risks to product quality


• Product defects
• Tests conducted
• Test coverage
• Confidence in test activities

Product defects, risks, tests and scope are normally reported in a predetermined format. If these
metrics are correlated to predefined exit conditions, a benchmark to assess the test effort can be
developed.

Confidence can be measured subjectively, through surveys, or objectively, using metrics such as
coverage.

Metrics that can be defined for product quality risks


• Fraction of risks that were fully covered by tests
• Fraction of risks where all of or at least some of the tests failed
• Fraction of risks that could not be tested fully
• Fraction of risks that were tested or sorted according to category of risks
• Fraction of risks that were detected post preliminary analysis of risks to product quality

Metrics that can be defined for defects


• Ratio of total number of defects detected to number of defects resolved
• Mean time between failure or rate of failure being reported
• Categorization of defects according to these factors:
o Product components to be tested
o Defect causes
o Defect sources like addition of new features, regression, requirement specification
o Test releases
o Defect levels
o Defect priority or severity
o Duplicated or rejected reports
o Time lag between defect detection and resolution
• Daughter bugs, i.e. defects caused by fixing another defect
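A minimal sketch of how two of these defect metrics might be computed from a defect log; the log structure, field names and dates below are hypothetical, chosen only to make the calculation concrete:

```python
# Illustrative computation of two defect metrics from a simple defect
# log. The log format and field names are assumptions for this sketch.
from datetime import datetime

defects = [
    {"id": 1, "found": "2024-01-02", "status": "resolved"},
    {"id": 2, "found": "2024-01-05", "status": "open"},
    {"id": 3, "found": "2024-01-09", "status": "resolved"},
]

# Ratio of defects detected to defects resolved
resolved = sum(1 for d in defects if d["status"] == "resolved")
detected_to_resolved = len(defects) / resolved        # 3 / 2 = 1.5

# Mean time between failures, approximated as the average gap in days
# between consecutive detection dates
dates = sorted(datetime.strptime(d["found"], "%Y-%m-%d") for d in defects)
gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
mtbf_days = sum(gaps) / len(gaps)                     # (3 + 4) / 2 = 3.5
```

The same log could be grouped by severity, component or source to produce the categorized views listed above.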

Metrics that can be defined for tests


• Number of planned tests
• Number of implemented and executed tests
• Number of tests that were blocked, skipped, failed or successful
• Status, trends and values for regression as well as confirmation tests
• Ratio of planned testing hours to actual testing hours daily
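For instance, the status counts and the planned-versus-actual ratio above could be derived from raw execution results as sketched below; the status values and hour figures are invented for the example:

```python
# A minimal sketch deriving test-status metrics from raw execution
# results. Status values and hour figures are invented for the example.
from collections import Counter

results = ["passed", "failed", "passed", "blocked", "skipped", "passed"]
counts = Counter(results)      # passed: 3, failed: 1, blocked: 1, skipped: 1

# Pass rate among the tests that actually ran (blocked/skipped excluded)
executed = counts["passed"] + counts["failed"]
pass_rate = counts["passed"] / executed * 100          # 75.0

# Ratio of planned testing hours to actual testing hours for the day
planned_hours, actual_hours = 40, 50
planned_to_actual = planned_hours / actual_hours       # 0.8
```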

Metrics that can be defined for test coverage


• Coverage of requirements and design documents
• Coverage of risks
• Coverage of testing environment or configuration
• Coverage of product code

Test Managers must be proficient in interpreting and using coverage metrics when reporting
on testing status. Coverage of design and requirements documents is needed for the higher
test levels like integration testing, system testing and acceptance testing.

Code coverage is required for the lower test levels like unit testing and component-level
testing. Higher-level testing results should not include code coverage.

It is necessary to note that despite 100% coverage goals at lower levels, defects will be detected
at the higher levels and fixed accordingly.

Testing metrics may be related to the primary testing activities. This will help in monitoring test
progress against the stated project objectives.

Metrics that can be defined for test planning, monitoring and control


• Scope of test basis elements like risk, product requirements, etc.
• Defect detection
• Ratio of estimated number of hours required in test development and execution to total
number of hours required

Metrics that can be defined for test analysis


• How many test conditions were known?
• How many defects were detected?

Metrics that can be defined for test design


• Fraction of test conditions that had test cases
• Number of defects detected during the test design process (for example, while using the test
basis to develop tests)

Metrics that can be defined for test implementation


• Proportion of testing environments that were setup
• Proportion of records of test data that were uploaded
• Proportion of test cases that were automated

Metrics that can be defined for test execution


• Percentage of test cases that were run, successful or failed
• Ratio of testing criteria covered by test cases that were run successfully
• Ratio of expected defects to actual defects that were reported or resolved
• Ratio of estimated test coverage to real coverage that could be achieved

Metrics defined to observe progress of testing


Metrics defined to observe progress of the testing activities must map to the project milestones,
test entry conditions and test exit conditions. Some of these metrics could be:

• Number of predefined testing cases, conditions or specifications that were executed, with their
results (pass or fail)
• Detected defects categorized according to severity, importance, affected product component,
etc.
• Details of modifications required and their status (incorporated and/or tested)
• Estimated versus real cost
• Estimated versus real testing duration
• Estimated dates for project milestone testing versus the real dates
• Estimated milestone dates of testing activities versus their real dates
• Details of risks to product quality categorized into mitigated and unmitigated
• Key risk components
• Risks detected subsequent to test analysis
• Loss in testing time and effort because of unexpected or planned events
• Status of regression testing and confirmation testing

Metrics to measure test closure tasks


Following metrics may be designed for measuring tasks involved in test closure:

• Number of test cases


o For these categories – run, passed, failed, skipped and blocked
o That became part of reusable repository of test cases
o That were to be automated versus the real number of cases that were automated
• Number of defects that were resolved or not resolved
• Number of archived work products of testing

The test process is also monitored through other management techniques like the work breakdown
structure. Products developed using Agile techniques monitor testing progress on a burn-down
chart. In Lean management, test progress is monitored by moving testing story cards across
the columns of a Kanban board.

For a group of metrics that are predefined, reports may be generated verbally, in form of tabular
data or visually as graphs and charts. The metrics values obtained can be utilized for many
things, like:

• Understanding causes of defects and identifying trends, if any


• Creating reports for project management team and other stakeholders
• Modifying testing process if needed and monitoring the execution of modified tests

There is no one correct way to collect and analyze the metrics or to create reports using them;
it depends on the project goals and requirements and on the type and skill level of the people using the reports.

The metrics gathered through the testing process must assist the Test Manager in monitoring the
testing effort and leading it towards successful completion.

Therefore, the metrics, the amount and frequency of data collection, its complexity and the risk
associated with it must be established in the planning phase itself.

Project conditions can keep changing throughout the software development lifecycle.

Test control activities


Test control must be able to modify the tests according to changing project environment and data
provided by the tests.

As an example, consider a scenario where dynamic testing of a product reveals a cluster of
defects in areas that had been assumed to be defect free, while the time available for testing is
reduced due to delays in development. This would require revision of the risk analysis and the
plan: tests would have to be re-prioritized and the effort allocation for testing would have to be
revisited.

Keeping in view the new information, test plan should be revisited, new test cases created and
testing effort redistributed.

If reports on testing progress point to deviation from test plan, test control activities must be
performed to steer the test towards the correct path.

Some of the actions that may be considered for this include:

• Modifying test sequencing or test plan


• Reviewing quality risk analysis
• Increasing testing efforts
• Rescheduling product release date
• Changing test exit condition
• Modifying project scope as required

All stakeholders and project managers must agree before any of these steps could be
implemented.

The data provided in any test report depends on the intended audience. A report for a project
manager would include a detailed defect report, whereas a report for a business manager should
focus on product risks.

What is cost of quality in software testing?


Every project activity has a cost and must yield good business value, and software testing is
no exception. Test Managers should optimize testing so that its business value is
maximized. Too much testing causes unnecessary delays and ends up incurring more
cost.

Too little software testing will cause highly defective products to be handed to end users. The
Test Manager / QA Manager must choose a middle path and explain it to the stakeholders
too.

Testing is considered valuable by organizations, but Test Managers find it difficult to explain
that value in easily understandable and quantifiable terms. They concentrate on technical
details and overlook the strategic aspects of testing, which managers consider more
important.
Cost of quality
Cost of quality is one of the most established and effective measures for quantifying the
business value of testing. To measure it, the project and its budgeted expenses must be
classified into these four categories:

• Prevention costs – This includes cost of training developers on writing secure and easily
maintainable code
• Detection costs – This includes cost of creating test cases, setting up testing environments,
revisiting testing requirements.
• Internal failure costs – This includes costs incurred in fixing defects just before delivery
• External failure costs – This includes product support costs incurred by delivering poor quality
software

Normally, the cost of detecting defects is the major part of the total cost; it is incurred even if
no defects are found by the testing team. The remainder is incurred on fixing the defects found
before delivery, i.e. the internal failure cost.

However, the total of these two costs is typically significantly less than the external failure
costs, so testing provides good business value.

Test Managers should be able to create a strong case for taking up testing activities by evaluating
these four categories of costs.

Business value of software testing


It is the responsibility of the Test Manager to identify the business value that needs to be provided
and communicate it to other teams and senior management. Ensuring that senior management
is aware of the business value of testing helps alleviate any concerns regarding the cost of
quality.

These are some of the measurable ways in which testing can deliver business value:

• Detecting defects that can be corrected before product release


• Documenting defects that cannot be fixed, with details on how to handle them
• Providing status reports and test metrics on project progress, process implementation and
product development

These are some of the subjective ways in which testing can deliver business value:

• Enhancing reputation for product and process quality


• Ensuring predictable product releases in an organized manner
• Increasing confidence in product quality
• Decreasing risks to product or team members using risk based testing, other techniques or
simply by testing the functionality
• Hedging against legal liabilities
What are Distributed, Outsourced and Insourced Testing?
Testing teams may be distributed over different locations and may not even be employed by the
same organization. The Test Manager must know how to manage test teams that work in different
locations or belong to different organizations.

Test effort by such teams is of three types:

• Distributed – The test team is distributed across multiple locations


• Outsourced – The test team comprises members employed by a different company who do not
work at the same location as the team working on the project
• Insourced – The test team comprises members employed by a different company but who
work at the same location as the team working on the project

Clear communication among the project and testing team members is necessary for success in
all three types of test effort. Deliverables, tasks and goals must be stated clearly.

These are the factors that shape the communication channels in such teams:

• Informal communication, like hallway and social conversations, should not be depended on
for sharing information
• Ways of communication must be predefined, especially for escalating issues, the kind of details
that should be shared, acceptable communication channels, etc.
• Every team member must understand his/her role and responsibilities
• Cultural differences, time zone issues and geographic concerns must be addressed
• No team member should have unworkable expectations of any other team member

Using common testing methodologies is very important in avoiding problems. Say the
product is being developed using an Agile methodology but the testing provider uses a testing
technique that needs input in a sequential way. Here, the two teams will have serious issues in
delivering as well as accepting test items.

In the case of distributed testing, testing objectives for each group must be clearly defined.
Otherwise, even the most skilled teams may not be able to deliver what is expected of them. It is
the responsibility of the test management team to distribute testing tasks correctly so that there is
no overlap of activities.

Last but not least, all the individual teams must have complete faith that the other teams will
execute their share of the work effectively and efficiently. Any degree of mistrust will reduce
efficiency, cause unnecessary delays due to verification of testing processes and promote
organizational politics.

The Test Manager must be able to motivate the testing team irrespective of the organization they
belong to.
How to manage & apply industry standards to software
testing projects?
Test managers must know about relevant industry standards in software testing and their
applicability to their particular organization or project. They should be able to determine which
standards are applicable for their project and which ones may be an overkill.

There are several industry standards available for:

• Software development lifecycle
• Software quality benchmark
• Software testing
• Defect management
• Testing reviews

These standards may be national, international or domain/industry specific, with their own
objectives and benchmarks. Two of the most prominent international bodies issuing standards for
various industries are the International Organization for Standardization (ISO) and the
Institute of Electrical and Electronics Engineers (IEEE).

ISO consists of members from different countries, who represent the national standards bodies
of their respective nations. Some of the most useful standards for software testers include
ISO 9126, ISO 12207, ISO 15504 and ISO 25000.

ISO 25000 is fast replacing ISO 9126. ISO 25000 is also known as SQuaRE (System and
Software Quality Requirements and Evaluation), and it provides a framework for evaluating
software quality.

IEEE is headquartered in the US but has representatives from over 100 countries. Some of its
most useful standards used by software testers are IEEE 1028 and IEEE 829. IEEE 829 acts
as a standard for creating a master test plan. IEEE 1028 provides the standard for software
reviews and audits.

Many standards adopted by individual nations are applicable to software testers. For example, the
BS 7925-2 standard from the UK covers test design techniques in detail.

There are many standards which were created for specific industries but have been found
useful in testing software. The US Federal Aviation Administration's (FAA) DO-178B standard is a
case in point. It was developed for the avionics industry but has found use in software that is
used by civilian aircraft, especially in certain critical areas. Its EU equivalent is ED 12B.

The US Food and Drug Administration (FDA) provides another example: Title 21 CFR
Part 820 is a standard for medical systems, but it also recommends some functional
as well as structural testing techniques. The recommended testing techniques are in
line with the ISTQB syllabus.
Sometimes testing activities may be subject to software development process standards. For
example, two important process areas of the CMMI software process improvement framework –
validation and verification – influence testing strategy as well as test levels.

ITIL, PRINCE2 and PMI’s PMBOK are three other frameworks for project management that
greatly influence testing activities. However, the terms and activities used in them are very different
from the ISTQB syllabus and glossary. So a Test Manager handling projects developed
using these frameworks must understand them and how they affect the testing process.

Please note that standards are produced and reviewed by a set of experienced professionals.
While they capture those professionals' experience, they also reflect their limitations. Whatever the
standard being used, Test Managers must understand how it applies in the context of the
specific project.

If more than one standard is being used, the Test Manager must check whether they are consistent
with each other. The Test Manager must also decide whether the standards will be
useful in testing or prove to be an obstacle. However, it should be noted that standards can
provide an easy reference for best practices that are customary in the industry and act as a
starting point for the testing process.

Sometimes, it is mandatory to ensure compliance with standards. In such a situation, test
processes should be planned and built to comply with such requirements.

How to manage formal reviews & management audits?

Skills, metrics, responsibilities
Reviews can be considered a type of static testing, so Test Managers may be made
accountable for the success of the review process, especially for testing work products. This
accountability should be fixed by the organization as part of its organizational policy or test
policy.

Reviews are mentioned in the ISTQB Foundation Level syllabus as static testing tasks for
software products. You should note that management reviews and audits lay emphasis on the
software development process and not on its work products.

Table of contents

1. Review leader
2. Examples of reviews
3. Management Audits and Reviews
4. Managing Reviews
1. Skills of a reviewer
2. Factors affecting review planning
3. Review leaders responsibilities during review
4. Review leaders responsibilities after review
5. Reviews Metrics
6. Managing Formal Reviews

Review leader
In any organization formal reviews are done before as well as during software development
projects. So, Test Manager, QA Manager or Review Coordinator may be responsible for the
review process. The person responsible for the review is referred to as review leader.

The review leader is responsible for creating a favorable environment for successful reviews. A
review leader must develop metrics to measure the business value provided by the process.

Because product testers understand the software requirements and functional features of the final
product well, they must be involved in any review process.

Training must be provided to all reviewers so that they clearly understand their role in the review
process. It is essential that all reviewers be dedicated to ensuring a successful review.

If reviews are done in the right manner, they prove to be the single most cost-effective
technique for improving the quality of the final product. Therefore, it is essential that review leaders
incorporate effective reviews in projects and prove their advantages too.

Examples of reviews
These are some of the items that may be reviewed in a project:

• Review of contract
• Project requirements – Functional as well as non-functional
• Top level designs
• Detailed design
• Code reviews
• Test work products like test plan, test criteria, risks to product quality, test data, test results,
test environments, etc.
• Test entry and exit at each level
• Product acceptance in the form of approval by customer/stakeholder

Each of these reviews should be initiated as soon as the necessary documents are available
for review, so that review results may be incorporated as soon as possible.

As you can see, a product undergoes multiple reviews at different stages of development.
However, the review leader should ensure that besides these reviews, some form of dynamic
testing as well as other types of static testing is also done to enhance test coverage and
detect more defects.

A combination of testing techniques should be used because each technique has a different focus
and ends up detecting different types of defects.

Consider the example of a requirements review, which can detect issues at the project inception
stage so that they do not get transmitted to the coding level.

Static review can assist in ensuring adherence to coding benchmarks and detect problems which
would be difficult to fix later on. Reviews can also train team members in avoiding defects in the
work products they create.

The ISTQB Foundation syllabus discusses these review types:

• Walkthrough
• Informal review
• Inspection
• Technical review

Besides these review types, Test Managers may also do audits and management reviews.

Management Audits and Reviews


Management reviews help in tracking project progress and assessing project status. They also
provide support for taking decisions about further course of action like changing project
resources, incorporating corrective activities, modifying project scope etc.

Management reviews primarily:

• Are conducted by the managers directly responsible for the application or project, or by
a skilled reviewer on their behalf
• Are also done by a decision maker like a director or other stakeholders themselves, or by a
skilled reviewer on their behalf
• Check actual project progress against planned progress for consistency
• Evaluate risks to the project
• Assess effect of various actions and how to measure the effect
• Create list of issues that need to be fixed, decisions that need to be taken, and actions that must
be carried out

When management reviews of processes are done, they end up improving the overall process.

It is necessary for Test Managers to be part of management reviews and sometimes even to
initiate the review process.

Audits are conducted to ensure compliance with certain standards, regulations, or contractual
requirements. They are always independent of any other review or testing process that the
organization or project may have.

Here are some of the main audit features:

• Audits are usually done and/or led by a principal auditor
• Compliance is tested by sourcing and analyzing documents, conducting interviews, and
witnessing activities
• Work products of audits include recommendations, observations, corrective measures,
success/failure evaluations, etc.

Managing Reviews
Reviews must be scheduled at natural milestones that occur in the project. Usually they should
be done after requirements gathering and top-level design, starting from the business
objectives and going down to the lowest design level.

Management reviews may also form part of verification done before, during and after testing as
well as other important project phases. It is essential to integrate the review strategy with the test
strategy.

The review leader must consider these factors before creating a project-level review plan:

• Processes and work products to be reviewed
• People involved in the review process
• Risks to be covered

The review leader needs to identify the project areas that must be reviewed in the early stages of
project planning. The suitable review type – walkthrough, informal review, technical review or
inspection – and the formality level must be specified as well.

Depending on the type and formality level, and on the budgeted time and money, the review
leader can ask for training of the reviewers.

The budget must account for the risks covered and the ROI (return on investment).

Review ROI is the difference between the cost incurred in carrying out the review and the cost of
handling defects detected later in the lifecycle had the review not been performed. This
figure can be obtained using the cost of quality calculation discussed under the business value of
testing.
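As an illustration, this calculation can be sketched in a few lines of Python. The function name and all cost figures are hypothetical, not taken from the text:

```python
# Sketch of the review ROI calculation described above.
# Function name and all cost figures are hypothetical illustrations.

def review_roi(review_cost, defects_found, avg_cost_to_fix_later):
    """Return ROI: cost avoided by the review minus its cost, relative to its cost."""
    cost_avoided = defects_found * avg_cost_to_fix_later
    return (cost_avoided - review_cost) / review_cost

# A review costing 2,000 finds 10 defects that would each have cost
# 1,000 to fix later in the lifecycle.
roi = review_roi(review_cost=2000, defects_found=10, avg_cost_to_fix_later=1000)
print(f"Review ROI: {roi:.1f}x")  # (10 * 1000 - 2000) / 2000 = 4.0
```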

The required time for reviews can be estimated based on these factors:

• Format of the content to be reviewed
• Skill level of the reviewers
• Time frame for delivery of the reviewed item
• Time needed for the review
The review leader pre-determines the metrics for evaluating review effectiveness at the test
planning phase itself. In the case of inspection, the review is performed as and when sections of
the documents are completed, on the author's request.

Test planning must also define review objectives like performing an effective review, achieving
maximum efficiency, and reaching unanimous decisions.

Often, the whole project is reviewed, requiring reviews of project subsystems as well as
individual software units. The type and number of reviews, the reviewers involved, their
complexity, etc. are decided by project size, product complexity, and probable risks.

Skills of a reviewer

To achieve required efficiency levels, the reviewers must have these skills:

• Technical knowledge
• Operational knowledge
• Attention to detail
• Thoroughness
• Ability to provide clear and actionable comments
• Understanding of their own role and responsibility

Factors affecting review planning

Review planning needs to take care of these factors:

• Technical risks – Reviewers must have appropriate technical knowledge.
• Organizational risks – Sufficient time must be given to each reviewing organization, with
participation from review team members in discussions.
• People issues – This includes availability of skilled reviewers, commitment by each review team,
and a backup plan for replacing reviewers if needed.
• Time constraints – Adequate time must be provided for training.

Review leaders responsibilities during review

These are the things review leader needs to ensure during the review:

• Sufficient data points are captured in the review for evaluating review efficiency
• Checklists are generated and maintained for enhancing future reviews
• Each detected defect has a severity and priority associated with it, and defect severity and
priority are defined before the review

Review leaders responsibilities after review

These are the actions review leader must take after completion of review:
• Gather the metrics generated by the review
• Ensure detected issues are fixed to meet the review objectives
• Determine the ROI of reviews using the metrics generated
• Offer feedback to reviewers
• Share review results with relevant stakeholders

Test Managers should compare the review reports with the actual results obtained in tests done
after the review to measure the usefulness of the reviews conducted.

If there is a discrepancy between the two results, the review leader must investigate and analyze
why the defects could not be detected during the review. Some of the reasons could be:

• Incorrect entry and/or exit conditions
• Unsuitable team members
• Insufficient or inaccurate tools for review
• Inadequate preparation time
• Too little time scheduled for review meetings

If a pattern is found in the defects that were left undetected, this points to issues in the review
process itself. In this scenario, the review leader must investigate the causes and fix the problems.

The review process itself must be reviewed and its errors fixed if reviews start losing
their efficacy over a period of time.

Whatever the scenario, review metric values should be utilized for identifying issues in
processes, not for reprimanding reviewers or the people whose work introduced the defects. Test
Managers should be able to motivate the testing team to perform the review to the best of their
abilities.

Metrics for Reviews


Review leaders are responsible for providing metrics for:

• Review team quality evaluation
• Review costs assessment
• Assessment of downstream benefits of reviews conducted

Metrics may be used to calculate ROI and review efficacy, create reports, and suggest
process improvements.

The below metrics may be evaluated and reported for every work product:

• Time required to prepare for the review
• Size of the work product
• Time taken in the review process
• Time required for fixing defects
• Number of defects reported and their severity
• Areas having greater occurrence of defects
• Review type
• Defect density
• Density estimate of defects left undetected

Each review should evaluate and report these metrics for evaluation of the process:

• Effectiveness of detecting defects
• Effort and time required for review
• Percentage of review done as compared to planned coverage
• Types of detected defects with their severity
• Surveys measuring perception of review efficacy and impact
• Number of reviewers
• Relation between the type of review and the types of defects detected
• Cost of quality measurements for defects caught in the review as compared with testing and
product defects found later
• Savings in project time
• Average effort needed for detecting each defect
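Two of the metrics above, defect density and effectiveness of detecting defects, can be sketched as simple calculations. The helper names and all figures are hypothetical:

```python
# Hypothetical helpers for two of the review metrics listed above:
# defect density (defects per page reviewed) and defect detection
# effectiveness (share of all known defects that the review caught).

def defect_density(defects_found, pages_reviewed):
    return defects_found / pages_reviewed

def review_effectiveness(found_in_review, found_later):
    # "found_later" counts defects that escaped the review and were
    # caught in subsequent testing or production.
    return found_in_review / (found_in_review + found_later)

print(defect_density(12, 40))       # 0.3 defects per page
print(review_effectiveness(12, 4))  # 0.75
```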

The metrics discussed here are for evaluation of the product. However, they can be used to
evaluate the processes as well.

Managing Formal Reviews


Any review has these phases:

• Planning the review
• Initiating the review process
• Preparation and Training
• Review meeting and discussion
• Rework
• Follow up

Review leaders must ensure adherence to all these steps for review success.

These are the characteristics of formal reviews:

• Pre-determined exit and entry conditions
• Availability of review checklists
• Deliverables like review summary, evaluation sheets, reports etc.
• Metrics for providing information about review progress, efficacy, and effectiveness

Before a review is started, the review leader must ensure that the prerequisites for conducting the
review are met. If these conditions are not met, the review leader can choose to refer to a more
competent authority for any of these:
• Redefining the review objectives
• Taking remedial actions so the review can start
• Postponing the review

In order to manage formal reviews, the reviews above are observed from the higher perspective
of the program, so they are linked to project quality assurance. They need to be kept under
constant observation, and feedback in the form of product and/or process metrics must be produced.

Complete guide to defect management for Test / QA Managers
After detecting defects, managing them is the most important activity for any organization,
not just for the testing team but for everyone engaged in the software development or project
management process. The processes and tools used for managing defects are critically
important because they can identify the areas that need improvement for overall enhancement of
the development and testing processes.

Table of contents

1. The Defect Lifecycle and the Software Development Lifecycle
2. Defect States and Workflow
3. Managing Duplicate Defects and Invalid Defect Reports
4. Cross Functional Defect Management
5. Defect Report
6. Defect data
7. Assessing Process Capability with Defect Report Information
8. Examples of process improvements using defect report information

The Test Manager or QA Manager needs to know which data must be collected at all costs and
promote correct usage of the processes and tools used for defect management.

The Defect Lifecycle and the Software Development Lifecycle
As you know, when anyone commits an error during product development or maintenance, a
corresponding defect is introduced into a work product. The defect may be introduced into any
work product like a requirements specification, technical document, use case, test case, code, etc.

As defects may occur in any work product, defect detection and removal must be an integral part
of every step of the software development lifecycle. The sooner the defects are identified and
fixed, the lower the total cost of quality of the whole system.
As discussed in the Foundation Level syllabus, the static testing process detects defects
directly, without the need for debugging. This reduces the cost of fixing defects considerably.

In contrast, in dynamic tests like unit testing, system testing or integration testing, defects are
discovered only after they cause system failures. Once the tester observes the failure, a defect
report is filed and further investigation into the defect starts.

In the Test Driven Development (TDD) technique, automated tests are built into the product design
itself. The specified product development phase is not considered over till the automated tests
pass.

It must be noted that in TDD, tests will fail until the complete unit is developed. So there is no
need to track failures till the unit is complete.

Defect States and Workflow


Many organizations that conduct software testing use a tool that keeps track of defects
throughout the defect lifecycle and manages defect reports.

Usually, there is one owner of the defect report at each state of the defect lifecycle, who is also
responsible for completing a task that would move the defect report to the subsequent state.

In the last phases of the defect lifecycle, the defect report may not have an owner in these situations:

• The defect has been fixed and tested and hence defect report considered closed
• Defect report is cancelled because it is invalid
• Defect report is deferred if the defect won’t be fixed as part of the project
• Defect report is considered not reproducible if the defect cannot be observed any more

If defects are detected during testing, the testing team must take action to manage them in
these three states:

• Initial state – Also referred to as the new or open state. Here one or multiple testers are
responsible for collecting the information necessary for fixing the defect.
• Returned state – Also called the rejected or clarification state. Here the person
receiving the report rejects it and asks the report creator to give more information.
Testers have the option of providing more information or accepting the rejection of the report.
If many reports are rejected, the Test Manager should look out for shortcomings in the
preliminary information collection process itself.
• Confirmation state – Also called the verified or resolved state. Here the tester does
confirmation testing to ensure that the defect has really been resolved, by repeating
the steps that detected the defect during testing. The report is closed if the defect was fixed.
If the defect was not fixed, the report is reopened and sent back to the previous owner of the
defect report for fixing.
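The states and transitions described above could be modelled along these lines. The state names, transitions and class shape are illustrative only; real defect tracking tools define their own lifecycles:

```python
# Minimal sketch of a defect-report workflow built from the states
# discussed above. All state names and transitions are illustrative.

ALLOWED_TRANSITIONS = {
    "open": {"returned", "confirmation"},
    "returned": {"open", "cancelled"},       # tester adds info or accepts rejection
    "confirmation": {"closed", "reopened"},  # confirmation testing passes or fails
    "reopened": {"confirmation"},
    "closed": set(),                         # terminal: no owner, no transitions
    "cancelled": set(),
}

class DefectReport:
    def __init__(self, summary, owner):
        self.summary = summary
        self.owner = owner
        self.state = "open"

    def move_to(self, new_state, new_owner=None):
        # Reject transitions the lifecycle does not permit.
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.owner = new_owner  # terminal states are left without an owner

report = DefectReport("Login fails with valid password", owner="tester-1")
report.move_to("confirmation", new_owner="tester-1")
report.move_to("closed")
print(report.state)  # closed
```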

Managing Duplicate Defects and Invalid Defect Reports


A tester may detect an irregularity due to an issue in the test environment, testware or test data,
or due to the tester's own mistake. If it is reported as a defect and later found to be unrelated to
any work product defect, it is called a false-positive result.

The report is closed after terming it an invalid defect, or it is directly cancelled.

Sometimes one defect can lead to results that may look like multiple unrelated issues to testers,
so that more than one defect is reported with a common root cause. In such situations, one
defect report remains open while the others are closed and marked as duplicates.

Though duplicate and invalid reports indicate inefficiency, Test Managers must recognize
them as inevitable. If they try to eliminate such reports completely, the number of false-negative
reports increases because testers become unwilling to file reports for all the anomalies they detect.

This in turn decreases the organization's testing effectiveness.

Cross Functional Defect Management


Usually defect management is owned by the Test Manager and the testing organization.
However, a cross-functional team, comprising members from product development,
product management, project management, etc., should be accountable for managing the defects
reported by test teams.

As defects are reported through the tools used for defect management, the defect management
committee must consider the costs, risks and benefits associated with each defect to decide
whether it should be fixed or deferred.

If a defect needs to be fixed, its priority must be finalized with respect to other tasks in the
project. The committee can consult Test Manager & the testing team members to discuss the
priority and severity level of any defect.

It must be remembered that no tracking tool can take the place of proper communication.
Similarly, defect committee consultations cannot be a replacement for optimal use of the
defect tracking tool.

To ensure efficient and effective defect management, all these factors are necessary:

• Good communication
• Efficient usage of a defect tracking tool
• Sufficient support for the tool
• Clearly established defect lifecycle
• Proactive committee for defect management

Defect Report
When static testing detects a defect or dynamic testing observes a failure, the testers must
record it in a defect report. This is necessary for:

• Managing reports through each level of the defect lifecycle
• Assessing project status in the context of testing progress and product quality
• Assessing process capability

Information needed for managing defect report or assessing project status depends upon the
lifecycle stage in which the defect has been reported. As the lifecycle progresses, more
information is needed for defect report management.

Nevertheless, the core data collected must be constant across not just the current lifecycle but
all projects, so that meaningful process defect data comparisons can be made.

Gathering defect data helps in monitoring and controlling test progress and evaluating test exit
conditions. For instance, defect data can provide insights into defect density, trends in
defect detection and resolution, the average time needed to fix a defect and the intensity of
observed failures.
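For instance, defect density and average fix time could be derived from such data along these lines. All figures and field names are hypothetical:

```python
# Illustrative calculation of two defect metrics mentioned above:
# defect density per KLOC and mean time to fix. All data is hypothetical.

from datetime import date

defects = [
    {"opened": date(2024, 1, 2), "closed": date(2024, 1, 5)},
    {"opened": date(2024, 1, 3), "closed": date(2024, 1, 10)},
]
kloc = 12.5  # thousand lines of code in the item under test

density = len(defects) / kloc
mean_days_to_fix = sum(
    (d["closed"] - d["opened"]).days for d in defects
) / len(defects)
print(f"defect density: {density:.2f} per KLOC, "
      f"mean fix time: {mean_days_to_fix:.1f} days")
```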

Defect data
Information pertaining to defect data can include these:

• Name of the tester who detected the defect
• Tester role, like developer, business analyst, technical support analyst, etc.
• Testing type that caught the defect – like Regression testing, smoke testing, usability testing etc
• Problem summary
• Problem description
• Testing steps to recreate the observed failure, including expected and actual results,
screenshots, logs, database dumps, etc.
• Lifecycle phase where defect was introduced, detected and removed along with test level where
required
• Work product where defect was initiated
• Severity of impact on system / stakeholder
• Defect priority (generally dependent on the issue’s business impact)
• Component or subsystem where the defect was found – for analyzing defect cluster
• Project task which was in progress when the issue was found
• Method of identification used that detected the defect – like regular production use, static
analysis, review, dynamic testing etc
• Defect type and its taxonomy, if used
• Quality attribute impacted by the bug
• Test environment where the bug was detected, if applicable
• Product and project that has the defect
• Defect owner who at present has been allotted the issue
• Work products where defect was detected and where it was fixed. This should include release
numbers, specific test item details etc.
• Defect report’s current state
• Recommendation, approval and conclusion for corrective action performed or deferred
• Costs, benefits, risks and opportunities linked to whether defect is fixed or not
• Dates for various changes in the lifecycle of the defect. Report owner of each lifecycle transition
/ change. Activities undertaken to isolate and fix the defect and verify the solution.
• Description for defect resolution and details of how the fix can be tested
• Other references like tests performed to identify defects and test basis elements like risk linked
to the bug
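As a sketch, a subset of these fields could be modelled as a simple record. The field names here are hypothetical; real schemas typically follow the defect tracking tool or standards such as IEEE 1044:

```python
# Illustrative subset of the defect report fields listed above,
# modelled as a dataclass. Field names are hypothetical.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional, Tuple

@dataclass
class DefectReport:
    summary: str
    description: str
    reported_by: str
    severity: str                 # impact on system / stakeholder
    priority: str                 # business impact
    phase_introduced: str
    phase_detected: str
    component: str                # supports defect-cluster analysis
    state: str = "open"
    owner: Optional[str] = None
    history: List[Tuple[date, str]] = field(default_factory=list)

d = DefectReport("Crash on save", "App crashes when saving an empty file",
                 reported_by="tester-1", severity="high", priority="urgent",
                 phase_introduced="coding", phase_detected="system test",
                 component="file-io")
d.history.append((date.today(), d.state))  # record the lifecycle transition
```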

Test Manager can refer to many documents and standards like ISO 25000, IEEE 1044, IEEE 829
and Orthogonal Defect Classification to decide what data must be collected for reporting the
defect.

Testers must enter complete, accurate, precise, timely and relevant defect information. If such
information is not entered, even manual intervention or correct communication cannot undo the
effect, and it will affect evaluation of project status, testing progress and process capability.

Assessing Process Capability with Defect Report Information
Under Test Management, we discussed in detail how defect reports assist in monitoring project
status and creating related reports.

Test Managers must understand the role of defect reports in evaluating the capability of the
software development and testing processes.

Besides the information used for monitoring test progress, discussed in Test Management as well
as in Defect Report, the information captured in the defect report must support process
improvement initiatives.

Examples of process improvements using defect report information
• Using information about phases in which the defects were introduced, detected and removed to
evaluate phase containment while proposing how defect detection efficiency can be enhanced
in all phases
• Using information about phases where defects were introduced in Pareto Analysis of phases
where maximum defects were detected, so that targeted improvement measures can be taken
to reduce defects
• Using information about the root cause of defects to find why the defect was
introduced, so that the process can be improved to decrease the overall number of defects
• Utilizing information about phases in which the defects were introduced, detected and removed
to carry out quality costs analysis and decrease cost of defects.
• Using information about defective product component to analyze defect clusters and
understand the technical risks in a better way and help in re-building of defective components.
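The Pareto analysis mentioned above can be sketched as follows; the phase names and defect counts are hypothetical:

```python
# Sketch of a Pareto analysis over the phase in which defects were
# introduced, as described above. Counts are hypothetical.

from collections import Counter

defects_by_phase = Counter({
    "requirements": 18, "design": 11, "coding": 34, "integration": 7,
})

total = sum(defects_by_phase.values())
cumulative = 0
# Walk phases from most to fewest defects, accumulating coverage.
for phase, count in defects_by_phase.most_common():
    cumulative += count
    print(f"{phase:<13} {count:>3}  cumulative {cumulative / total:.0%}")
```

Phases at the top of the list account for the bulk of defects and are the first candidates for targeted improvement measures.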
Sometimes, teams may decide not to track the defects detected for some or all of the SDLC.

Though this is apparently done for efficiency and to decrease process overheads, it diminishes
insights into the capabilities of the software development and testing processes. It also hinders
the process improvements recommended earlier, due to a dearth of trustworthy data.

Software Testing Process Improvements for Test / QA Managers
After an organization’s test process is established, it should be subjected to periodic reviews and
enhancements. Usually Test Managers / QA Managers have process improvement as one of their
goals on which they are evaluated during their appraisals. This article discusses some of the
models that are useful in implementing process improvement in software testing.

Table of contents

1. Test Process Improvement
2. Process Improvement
3. Types of Process Improvements

Test Managers are responsible for the overall process and must keep up-to-date with latest
techniques prevalent in the industry. These techniques are discussed in the article.

This article also discusses the general issues that may crop up in this and suggests some models
that may be used.

Test Process Improvement


You have learnt till now that testing should be used to enhance software quality as well as the
final product. Just as the techniques of process improvement are used in software development,
they can also be used to improve the test process.

There are many methods to enhance software testing and the testing process itself. They provide
the guidelines for improvements as well as areas that need improvement.

Even though the cost of testing is a significant portion of overall cost of the project, focus on
testing process in process improvement models like CMMI is not proportional. Process
improvements in testing can potentially be used to reduce the cost of quality.

Specific models for test improvement have been developed to tackle this issue. Some of them
are:
• Test Maturity Model integration (TMMi)
• Critical Testing Processes (CTP)
• Test Process Improvement Next (TPI Next)
• Systematic Test & Evaluation Process (STEP)

If these models are used correctly, they can assist in gathering cross-organizational test metrics
for carrying out standardized comparisons.

The process improvement models have been discussed in subsequent topics to explain how they
function and what their scope is. They should not be treated as recommendations for process
improvement.

Process Improvement
Improvements to processes are essential for both software development and testing. If
the organization learns from its own mistakes, the processes used to develop the product as well
as the testing process can be improved.

A case in point is the Deming improvement cycle – plan, do, check, act – which is still in use
decades after it was first proposed.

An assumption in favor of improving processes is the conviction that if the product development
process is improved, overall quality also improves. If software quality improves, fewer
resources are required for software maintenance, and those resources can then be used to create
better solutions.

Models for process improvement evaluate an organization's capabilities and then propose an
improvement framework.

The first step in process improvement must be an assessment, which determines the
organization's process capability and provides further motivation for improving the process and
ramping up its abilities.

This may also lead to more assessments to evaluate the consequences of improvement measures.

Types of Process Improvements


Using models for process improvement provides a benchmark for improving the testing
processes using established practices. There are two types of models for process improvement:

1. Process reference model – This model measures process maturity to assess organizational
capabilities. Based on the assessment, the model provides a process improvement road map.
2. Content reference model – This model performs business driven assessments, sometimes
measuring against established industry averages. This assessment then provides a process
improvement road map.
The testing process can be improved without using these models. Techniques like retrospective
review meetings, improving the defect management process, better test implementation and test
execution, and other analytical methods can be utilized in such cases.

What is the IDEAL model for test process improvement?


The models for improving test processes can enable the overall IT industry to achieve greater
maturity and professionalism. These models enable the development of cross-organizational
parameters that are useful for comparisons.

Table of contents

1. IDEAL model for Process Improvement
2. Initiate the process improvement
3. Diagnose the present circumstances
4. Establish a plan for improving the testing process
5. Action to execute the improvements
6. Learn lessons from improvements

As the necessity of process improvement was felt in software testing organizations, several
industry standard models have been proposed like Systematic Test and Evaluation Process
(STEP), Test Process Improvement (TPI) Next, Critical Testing Processes (CTP) and Testing
Maturity Model integration (TMMi). These models are discussed in the next topic.

CMMI and TMMi are staged models and define benchmarks for different organizations to
compare against each other. Based on assessment results, a process improvement road map is
drawn.

STEP, TPI Next and CTP are continuous models, which assist organizations in dealing with
the issues that have the highest priority and allow them to choose the order in which the issues
are addressed.

These models are discussed in more detail in the next topic on process improvement models.

Each of the above models helps the organization assess its test process efficiency. In order
to improve the software testing process, TPI Next and TMMi recommend road maps
once the assessment is performed.

In case CTP or STEP is being implemented, after the assessment is complete, these models
offer methods to evaluate which process improvement will yield maximum returns.
The road map for process improvement has to be selected by the organization. Apart from
improvements in efficiency, reduction in cost of quality can be one of the factors influencing the
choice.

The organization must be flexible enough to accommodate the changes suggested by the model
like improvements to defects management, better tracking of test metrics or stricter monitoring
and control of testing process. Test implementation and test execution processes could also
require changes if they are found to be lacking.

Whatever the technique used, all the models enable an organization to evaluate its existing test
processes.

IDEAL model for Process Improvement


After the organization is convinced that its test processes need review and improvement, the
implementation steps must be defined as per the IDEAL model:

• Initiate the process improvement
• Diagnose the present circumstances
• Establish a plan for improving the testing process
• Action to execute the improvements
• Learn lessons from implementing the above process improvements

Initiate the process improvement

To start the process improvement the stakeholders must carry out the following activities:

• Setting goals and objectives for process improvement
• Defining scope as well as coverage for process improvement
• Selecting an industry standard model mentioned earlier (like CTP, STEP etc) for process improvement or developing one internally
• Defining success criteria
• Establishing metrics for evaluating success and defining how it is to be measured

Diagnose the present circumstances

In this phase the current software testing process is evaluated based on an approved
evaluation method. After the assessment, a report containing evaluation of the existing testing
process as well as suggested improvements is generated.

Establish a plan for improving the testing process

In this phase, all the suggested process improvements from the diagnosis phase are prioritized
based on these criteria:

• Product and quality risks
• How well they align with the organization's strategies
• Return on investment
• Measurable benefits, qualitative and/or quantitative

After the list of improvements is prioritized, the plan for delivering those improvements is
prepared.
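The prioritization step above can be sketched as a simple weighted-scoring exercise. This is a hypothetical illustration: the weights, criterion scores and improvement names are all assumptions, not part of the IDEAL model itself.

```python
# Hypothetical weighted scoring of suggested process improvements.
# Criteria mirror the list above: risk, strategic alignment, ROI, benefits.
WEIGHTS = {"risk": 0.35, "alignment": 0.25, "roi": 0.25, "benefits": 0.15}

def priority_score(scores: dict) -> float:
    """Weighted sum of per-criterion scores (each rated 1-5)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

improvements = {
    "Improve defect management": {"risk": 5, "alignment": 4, "roi": 4, "benefits": 3},
    "Automate regression suite": {"risk": 4, "alignment": 5, "roi": 3, "benefits": 4},
    "Introduce review meetings": {"risk": 2, "alignment": 3, "roi": 2, "benefits": 2},
}

# Rank improvements from highest to lowest priority score.
ranked = sorted(improvements, key=lambda name: priority_score(improvements[name]),
                reverse=True)
for name in ranked:
    print(f"{priority_score(improvements[name]):.2f}  {name}")
```

In practice the weights would be agreed with stakeholders during the Establish phase, so the ranking reflects organizational strategy rather than one person's judgment.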

Action to execute the improvements

As the name suggests, the plan for test process improvement is implemented in this phase.
This also involves mentoring and training the people involved, if required, process piloting and
then full implementation.

Learn lessons from improvements

After the improvement processes are completely deployed, the outcomes must be compared
against the previously estimated benefits to check whether the expected benefits, and any
unexpected ones, were achieved. In addition, it is necessary to check whether the process
improvements have met all their success criteria.

Based on the process model implemented, observation for the next maturity level can start at this
stage.

Depending on the lessons learned from implementing the plan, a decision is usually made either
to start the process improvement cycle again or to stop.

Software testing process improvement models – TMMi, TPI Next, CTP, STEP
There are several industry standard models that have been created in order to improve the testing
process. These software testing process improvement models are specifically tailored for
testing and hence, they are better suited than other process improvement models which are
usually meant for software development.

Test Managers should be familiar with the models and their high level features, for the ISTQB
Advanced Level Test Manager exam.

Table of contents

1. Software Testing Process Improvement Models
2. Testing Maturity Model integration (TMMi)
3. Maturity levels defined for TMMi
4. Test Process Improvement (TPI) Next
5. Critical Testing Processes (CTP)
6. Systematic Test and Evaluation Process (STEP)

In the previous topics we discussed process improvement in testing. Here we will take a look at
some of the models that Test Managers can use for improving the test process.

Software Testing Process Improvement Models


• Testing Maturity Model integration (TMMi)
• Test Process Improvement (TPI) Next
• Critical Testing Processes (CTP)
• Systematic Test and Evaluation Process (STEP)

Testing Maturity Model integration (TMMi)


Testing Maturity Model integration (TMMi) complements the CMMI model and consists of five
maturity levels. Each maturity level has predefined process areas with both generic and
specific goals.

The organization can move to a higher maturity level only after these goals are at
least 85% achieved.
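The 85% threshold can be sketched as a simple check. The goal names and percentages below are hypothetical; a real TMMi assessment rates goal achievement formally per process area.

```python
# Hypothetical check of whether a maturity level's goals meet the
# 85% achievement threshold before moving to the next TMMi level.
THRESHOLD = 0.85

def level_attained(goal_achievement: dict) -> bool:
    """Every goal of the level must be at least 85% achieved."""
    return all(pct >= THRESHOLD for pct in goal_achievement.values())

level_2_goals = {
    "Test policy and strategy": 0.90,
    "Test planning": 0.88,
    "Test monitoring and control": 0.80,  # below threshold, so level not attained
}
print(level_attained(level_2_goals))
```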

Maturity levels defined for TMMi

• Level 1: Initial
In this level, testing is considered equivalent to debugging and its aim is to simply prove that the
software functions as expected. The testing process is not properly structured or even
documented officially. The tests themselves are introduced in an unplanned manner as and
when required, once coding is complete.
• Level 2: Managed
If the testing process is separate from the debugging process, the organization reaches the
managed level. To attain this level, test goals and test policy must be defined clearly. Basic steps
seen in test process like creating a test plan, implementing testing methods and techniques,
must be put into practice.
• Level 3: Defined
In this level, testing is an integral part of the overall software development process. Test
processes have formally defined standards, methods and processes that are documented. There
is a distinct test function for software testing that is monitored and controlled and reviews occur
periodically.
• Level 4: Measured
This maturity level is attained when the test process can be efficiently measured and controlled
at the organizational level for the benefit of individual projects.
• Level 5: Optimized
This highest maturity level is said to be reached if data obtained as a result of testing process is
used to minimize defects. The focus at this level is to optimize the existing test processes.

Test Process Improvement (TPI) Next


In the TPI Next model, each facet of the testing process – like test planning, test metrics, test
environment, etc. – is covered by 16 predefined key areas. This model has 4 maturity levels:

• Initial
• Controlled
• Efficient
• Optimizing

Each of the 16 key areas is assessed using predefined checkpoints at each maturity level. Based
on assessment results, a maturity matrix is developed to assist in visualizing and summarizing
key areas.
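A maturity matrix of this kind could be derived from checkpoint results roughly as follows. The key areas, levels and checkpoint data here are illustrative assumptions, not the actual TPI Next checkpoints.

```python
# Hypothetical TPI Next-style assessment: a key area attains a maturity
# level only if all its checkpoints for that level (and the levels below) are met.
LEVELS = ["Controlled", "Efficient", "Optimizing"]

checkpoints = {   # key area -> level -> list of checkpoint results
    "Test planning": {"Controlled": [True, True], "Efficient": [True, False],
                      "Optimizing": [False]},
    "Test environment": {"Controlled": [True, True], "Efficient": [True, True],
                         "Optimizing": [False]},
}

def maturity(area: str) -> str:
    """Highest consecutive level whose checkpoints are all satisfied."""
    attained = "Initial"
    for level in LEVELS:
        if all(checkpoints[area][level]):
            attained = level
        else:
            break
    return attained

# The maturity matrix summarizes the attained level per key area.
matrix = {area: maturity(area) for area in checkpoints}
print(matrix)  # {'Test planning': 'Controlled', 'Test environment': 'Efficient'}
```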

The definition of improvement objectives and their execution is customized according to the
testing organization's needs and capacities.

Because of its generic nature, the TPI Next model is independent of any software development
process improvement model. It covers test engineering as well as management decision support.

Critical Testing Processes (CTP)


The Critical Testing Processes (CTP) model assumes that certain testing processes are critical. If
executed properly, these critical processes will successfully support the test teams.

On the other hand, if these processes are carried out poorly, even the most experienced and
skilled testers and Test Managers are likely to fail.

CTP is mainly a content reference model and identifies 12 test processes that are critical. The
CTP assessment model can be tailored as per the needs of the organizations to include:

• Identifying specific challenges
• Recognizing characteristics of good test processes
• Prioritizing suggested improvements that are important to the organization

The CTP model can be integrated within any SDLC model. It uses metrics to compare
organizations against industry averages and best practices, which are derived from
interviews with participants.

Systematic Test and Evaluation Process (STEP)


In the Systematic Test and Evaluation Process (STEP) and CTP, it is not necessary for
improvements to take place in a predefined sequence, unlike TPI Next and TMMi.

STEP is a content reference model based on the premise that the testing process starts in the
requirements gathering phase of the software product and continues until the system retires. It
lays emphasis on “testing before coding” through a requirements-based test strategy. This
ensures that test cases are developed early, which in turn confirms that the requirements are
correct before design and coding begin.

These are some of the basic assumptions of STEP model:

• Test strategy based on requirements


• Testing starts as the software development lifecycle begins
• Tests are aligned to requirements as well as usage
• Software design is led by testware design
• Defects must be identified in the initial stages or avoided completely
• Defects must be analyzed thoroughly
• Testers must work as a team with the developers

Sometimes, the STEP model is merged with the TPI Next model.

How to select a testing tool? Open Source, Vendor Tools & Custom Development
Investing in software testing tools is a significant decision for many organizations. The Test
Manager may be called upon to drive the selection / recommend a tool to be used. Selection of a
testing tool should be done after due process, considering the long term strategy of the
organization, total cost of ownership (TCO), risks and several other parameters, apart from
meeting the functional requirements.

In this article and the subsequent ones, we discuss many of the aspects that a Test Manager
needs to keep in mind while selecting software testing tools and deciding about automation.

Table of contents

1. Tool Selection
2. Open-Source Tools vs Vendor tools
3. Developing Custom Tools

Tool Selection
A Test Manager / QA Manager needs to study many issues when choosing testing tools. The
usual course of action is to buy a test tool from an external vendor; sometimes that is the only
option available. Otherwise, custom-developed or open-source tools are also viable options.

Irrespective of how the tool has to be sourced, the Test Manager needs to determine the Total
Cost of Ownership (TCO) using a cost-benefit analysis for the entire expected lifetime for
the tool. This issue has been explored in detail in the section on return on investment (ROI).

While one may think that the purchase of tools may add to the cost of quality of the project, tools
generally improve the efficiency of the team and reduce the cost of quality in the long run. Tools
can also make it easy to collect test metrics.

Open-Source Tools vs Vendor tools


Open-source tools can be used for all the phases of testing like test implementation, test
execution, test monitoring and for activities like test case creation, defect management, testing
automation, etc.

An important consideration when acquiring an open-source tool is that even though its
acquisition cost is quite low, formal tool support is usually unavailable. In some cases, though,
devoted supporters of the tool may be ready to provide informal support.

Most of the open-source tools are developed to resolve a particular problem. This applies to
open-source testing tools as well. So the Test Manager must consider carefully if the selected
open-source tool would be able to fulfill all testing needs.

In comparison, vendor tools usually perform more activities than a corresponding open-source
tool.

• An important advantage of using an open-source test tool is that it can be customized as per
the organizational needs, by its users.
• If skilled people are available, two or more open-source tools can be integrated.
• Open source tools can be enhanced to meet testing demands or the original tool can be
customized to address the needs.
• However, it must be ensured that open-source tools are not being used just for the sake of it,
because extending or customizing them adds testing overheads.
• Test Manager must ensure that ROI on using the open-source tools is positive.

Different open-source tools have different variations of the generic GNU General Public
License. Some may require the developers to share their modifications with all other tool users.

Therefore, the Test Manager needs to understand the licensing terms and conditions
thoroughly and legal repercussions of distributing the software after modifications.

• The quality of the selected open-source tool must also be examined carefully, especially if
mission-critical software or product safety needs to be tested.
• The precision of open-source tools is not generally certified, but tools from vendors are
usually certified for both relevance to the task at hand (for example, DO-178B) and correctness.
• Even if the open-source tool is good, certifying it may be left to the testing team that is using it,
which leads to additional overhead for the team.

Developing Custom Tools


Sometimes no open-source or vendor tool meets the organization’s requirements. This may be
due to use of customized environment, proprietary hardware or customized processes.

If the team has the necessary skills, the Test Manager might consider developing a custom testing tool.

These are some of the advantages of having a custom tool developed:

• Meets the exact needs of the testing team
• Functions efficiently as per requirements
• Can be designed to interact with additional tools
• Can produce data and output in a format which the team requires
• May be reused for subsequent projects

These are some of the disadvantages of developing custom tools:

• Custom tools rely on the developers who built them
• Tools need sufficient documentation for maintenance by others
• A tool may be abandoned for lack of documentation when its developers quit the project
• If its scope is expanded in future, the quality of the tool could be compromised, giving rise to
false-negative defect reports, inaccurate data generation, etc.
• Like any other software, it must be designed, developed and tested in order to confirm that it
works

What is Testing Tool ROI? One time/Recurring Costs & Risks related to tools?
There are several aspects that a Test Manager has to consider before a testing tool can be
selected for purchase and deployment. A Test Manager /QA Manager is responsible for ensuring
that all testing tools meet these criteria:

• Enhance value of work done by testing team
• Give positive Return On Investment (ROI)
• Have real and prolonged benefit

To this end, the Test Manager must perform cost-benefit analysis before a tool is sourced or
built. The ROI analysis should include recurring as well as one-time expenses, which may be
measured in terms of money, time, resources or risks covered.

Table of contents

1. One time costs of using tools
2. Recurring costs of tools
3. Advantages of using tools
4. Risks associated with tools

Return On Investment (ROI) on a software testing tool is the difference between the business
value derived from using the tool (such as the advantages listed below: reduction in test
implementation and management effort, cost savings from a decline in human error, savings
from test automation, etc.) and the total cost incurred over the lifetime of the tool (including the
cost of hardware infrastructure, software licenses, the one-time and recurring costs listed below,
training, people cost, etc.).

It is difficult to derive the exact ROI that can be achieved, since some of the benefits could be
intangible and calculations of future savings are estimates.
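The ROI definition above can be illustrated with a small worked example; all cost and benefit figures here are made up purely for illustration.

```python
# Hypothetical lifetime ROI calculation for a test tool:
# business value derived minus total cost of ownership.
one_time_costs = {"evaluation": 5_000, "acquisition": 20_000,
                  "training": 8_000, "integration": 7_000}
recurring_costs_per_year = {"licensing_support": 6_000, "maintenance": 4_000}
annual_benefits = {"automation_savings": 25_000, "fewer_errors": 5_000}
years = 3  # expected lifetime of the tool

total_cost = (sum(one_time_costs.values())
              + years * sum(recurring_costs_per_year.values()))
total_benefit = years * sum(annual_benefits.values())

roi = (total_benefit - total_cost) / total_cost  # fractional ROI over the lifetime
print(f"cost={total_cost}, benefit={total_benefit}, ROI={roi:.0%}")
```

Note that the one-time costs are paid once while recurring costs and benefits scale with the lifetime, which is why a tool may show negative ROI in its first year and positive ROI over the full lifecycle.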

In the long run, a tool should ideally help reduce the cost of quality. This may not be true for all
tools, since there could be cases where a tool is required for regulatory or compliance purposes.

One time costs of using tools


These are some of the one time costs:

• Specifying requirements which the tool needs to fulfill in order to achieve the team's goals
• Multiple tool evaluations and vendor selection
• Acquiring the tool by purchasing, customizing, or developing
• Conducting on-boarding training
• Integrating with additional tools
• Acquiring required hardware and software for supporting the tool

Recurring costs of tools


Some of the recurring costs are:

• Tool ownership – support and licensing fees, maintenance, continuous training, maintaining tool
generated artifacts
• Porting the tool for use in different environments
• Customizing for future requirements
• Optimizing quality and process improvement to use the tool efficiently
Every tool has an opportunity cost associated with it, which includes time required in activities
like obtaining, managing, training, using, etc. This opportunity cost must be considered by the
Test Manager to allocate the required resources well in time.

Advantages of using tools


It is necessary for the Test Manager to consider the advantages of using a tool. Some of the
benefits could be:

• Decrease in repetitive work
• Decline in cycle time for testing (example – running regression tests using automation)
• Decline in costs incurred for test execution
• Increase in certain types of testing, like automated regression testing
• Decline in human errors – more accurate comparisons, less invalid data generation, correct
data entry, etc.
• Reduced effort to get testing information from reports
• Reuse of testing assets like test cases, scripts and data
• More testing that could not be done without tools, like load testing, performance testing etc
• Enhanced testers’ status and that of the testing team, as they demonstrate better
understanding, knowledge and use of advanced tools

Risks associated with tools


It should be noted that all tools have risks associated with them and all of them may not be able
to provide benefits that offset the risks. The risks associated with a testing tool have been
covered in the syllabus for Foundation Level.

Test Managers must think about these risks when deciding ROI:

• Is the organization mature enough to put the tool to optimum use?
• Maintenance of artifacts generated/used by the tool when the application or software being
tested is changed. For example: Automation script may be auto generated or written manually
in order to use an automation tool to test an application. If that application functionality is
modified, the script may have to be updated accordingly.
• Business value associated with testing may be diminished as Test Analysts’ contribution
decreases. Example – If only automated testing is performed, efficiency of defect detection may
decrease.

Testing teams generally use a combination of test tools, so the return on investment (ROI) is a
combination of the ROI of every tool being used. As the tools must interact with each other and
share information, the organization must have a long-term strategy for test tools.
What are the parameters for selecting a testing tool?
Investment in testing tools usually extends over many iterations of the same project and
may be useful to many other projects too. So the Test Manager needs to look at test tool
investment from multiple perspectives when selecting a tool.

Some of these are:

• Positive ROI – From a business perspective, this is one of the most essential viewpoints to
adopt. To ensure high ROI, the selected test tools as well as non-test-related tools must be
able to work together well. Sometimes, teams need to extend tool functionality to accomplish
this.
• Effective – From a project’s perspective, it is essential to ascertain that the selected tool is
effective in what it does, like minimizing mistakes in manual testing. It is not necessary that the
tool gives a positive ROI in the first instance of use. The Test Manager must consider the
complete lifecycle to judge when the tool will give a positive ROI. This means that the business
perspective of tool selection must never be lost sight of.
• Support for the team – From the team’s perspective, the selected tool must enable team
members in performing their activities effectively and efficiently. The Test Manager must
consider the amount of learning necessary to start using the tool.

To ensure that all three of these viewpoints are included, a road map must be created for
introducing the testing tool.

The syllabus for Foundation Level has already covered how to select a testing tool. Here is a
recap for you:

• Assessment of organizational maturity level
• Specifying requirements which the tool must fulfill
• Evaluation of available tools, vendors and support services
• Identifying training and mentoring needs of teams using the tools
• Performing cost-benefit analysis as described earlier

Irrespective of the testing phase where the test tool will be used, the Test Manager must take note
of these capabilities of the tool:

• Analysis
o Is the selected tool suitable for the job?
o Will the selected tool make sense of the inputs provided?
• Design
o Is it possible to generate the design automatically?
o Will the tool be able to design testware according to information provided?
o Will the testware code be generated partly or fully and is it easy to maintain and use?
o Is it possible to generate test data automatically using this tool?
• Data and test selection
o By what logic does this tool decide which data is needed? Example: Which set of test
data is to be used for a given test case?
o Will the tool be able to accept the criteria for selection, both manually and
automatically?
o Can production data be “scrubbed” by the tool, depending on input data selected?
o Will the tool be able to select the necessary tests according to coverage criteria?
Example: Can the tool decide which test cases are to be executed based on the
requirements by analyzing the traceability matrix?
• Execution
o Can this tool function automatically without manual involvement?
o In what way can the tool restart and/or stop?
o Does the tool need to monitor for relevant incidents that affect its operations? Example:
If a defect is closed, should the status of the test case be updated automatically by a
test management tool?
• Evaluation
o By what logic does this tool decide that it has been provided the correct result?
o What is this tool’s ability to recover after an error?
o Does this tool offer sufficient reporting and logging?
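The traceability-based test selection asked about under “Data and test selection” above might work roughly like this. This is a minimal sketch: the matrix layout, requirement IDs and test case IDs are hypothetical.

```python
# Minimal sketch: pick test cases to run from a traceability matrix
# when a set of requirements has changed.
traceability = {                      # requirement -> covering test cases
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-02", "TC-03"],
    "REQ-3": ["TC-04"],
}

def select_tests(changed_requirements):
    """Return the de-duplicated, sorted set of tests covering the changed requirements."""
    selected = set()
    for req in changed_requirements:
        selected.update(traceability.get(req, []))
    return sorted(selected)

print(select_tests(["REQ-1", "REQ-2"]))  # ['TC-01', 'TC-02', 'TC-03']
```

A real test management tool applies the same idea at scale, typically layering coverage criteria and priority on top of the raw requirement-to-test mapping.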

How to manage software testing tool lifecycle and tool metrics?
Any software has a finite lifetime during which it is useful to the user. A software testing tool
goes through a lifecycle from the time it is acquired to the time it is retired.

The Test Manager needs to ensure that the tool is managed well during its lifecycle
so that it is available to the testing team and can be used efficiently when required.

Table of contents

1. Software Testing Tool Lifecycle
   1. Acquisition
   2. Maintenance and Support
   3. Evolution
   4. Retirement
2. Tool Metrics

Software Testing Tool Lifecycle
The Test Manager needs to manage these four stages of a testing tool's lifecycle:

Acquisition

In this stage the software testing tool should be procured as per the considerations discussed in
tool selection. Once the decision to acquire a tool is taken, a tool administrator is assigned by
Test Manager.

Usually a Technical Test Analyst or Test Analyst is the tool administrator.

The administrator is responsible for taking decisions like tool usage, tool roll out timelines,
naming conventions, storage for tool output, etc.

Taking these decisions in advance improves the testing tool ROI in the long run. In this stage,
training is also imparted, if needed.

Maintenance and Support

The selected tools need continuous maintenance and support. It is provided by the tool
administrator, who might constitute a dedicated group for this purpose.

Some of the points to be considered here include interface with other processes, if required, data
backup and tool restoration in case of failure or crash.

Evolution

With time, the tool may need to be extended, modified, updated or converted according to
business, environment or vendor considerations (vendor releases a new version).

Changes in regulatory policies may require the tool to be upgraded to a newer version. Some of
the upgrades may not be backward compatible or may not work with other applications.

Based on the level of involvement of the tool in business processes, the Test Manager must
ensure uninterrupted service / availability of the tool.

Retirement

No tool can be around forever. Once it has outlived its utility, it must be retired. This can happen
because the tool's lifecycle has come to an end, because its cost is too high compared to its
business value, or because the risks associated with the tool outweigh the benefits.

However, the functions provided by the tool must be replaced and its data archived for a
successful tool retirement.
The onus of managing the tool for efficient and continued performance lies with the Test
Manager for the lifetime of the tool.

Tool Metrics
The tools used by Test Analysts and Technical Test Analysts can provide valuable
metrics and gather data in real time. This can considerably decrease data collection effort.
The Test Manager can use this data to manage the test effort.

Data gathered by various tools is different because their focus is different. Here are some
examples of data that can be collected using different tools:

• Test management tools – Examples of data collected include traceability matrix for
requirements and test cases, metrics on coverage provided by automated scripts, currently
planned tests, available tests, execution status, etc.
• Defect management tools – Defect information like status, priority, severity, density of
occurrence, rate of escape, phase of defect introduction etc. are some information gathered by
the defect management tools. This information enables the Test Manager to implement process
improvements in the team / project.
• Static analysis tools – Static analysis tools can assist in detecting issues related to
maintainability and reporting them.
• Performance tools – Performance testing tools provide data on system scalability and help
determine if the system will scale.
• Coverage tools – These tools gather data about what portion of the system has actually
been tested.
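As an example of the kind of metric a defect management tool could compute automatically from such data, here is a sketch of defect detection percentage (DDP); the record fields and values are hypothetical.

```python
# Hypothetical computation of defect detection percentage (DDP):
# defects found during testing versus those that escaped to production.
defects = [
    {"id": 1, "found_in": "system test"},
    {"id": 2, "found_in": "system test"},
    {"id": 3, "found_in": "production"},   # escaped defect
    {"id": 4, "found_in": "system test"},
]

found_in_test = sum(1 for d in defects if d["found_in"] != "production")
escaped = len(defects) - found_in_test
ddp = found_in_test / len(defects)  # share of all known defects caught before release

print(f"DDP={ddp:.0%}, escaped={escaped}")  # DDP=75%, escaped=1
```

Tracking a metric like this over releases is one concrete way the defect data feeds the process improvements discussed earlier.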

Reports to be created by the tool must be specified during tool selection itself. It is also
essential to implement these requirements during the configuration phase so that the reports
generated can be easily understood and put to correct use by the stakeholders. Use of tools can
also reduce time taken to gather test metrics.

How to assess, manage & develop skills in testers as a Test Manager?
Any leader is only as good as their team. A Test Manager should hire teams with the right blend
of skills and keep them updated as skill requirements change over time. The teams
should be given training on new skills and opportunities to grow, both to improve their
performance and to retain them.

The Test Manager should have the skills to function effectively under pressure in a fast-moving
setting. Test Managers should also have excellent people skills.
Table of contents

1. Individual Skills
2. Skills required by Test Manager
3. Assessing and Managing Skills
4. Skill Development Planning

In this article we will discuss skill assessment techniques that enable Test Managers
to fill gaps in the team and make it effective for the organization. The next articles discuss
motivating testing teams and communicating effectively as a QA Manager.

Individual Skills
Anyone can learn software testing through training and education or by experience and go on to
become a good software tester.

They can improve their software testing expertise by:

• Using software systems
• Gaining business or domain knowledge
• Participating in all phases of software development, from analysis and development to support
and maintenance
• Performing testing related tasks

End users have good knowledge about these aspects:

• How the system works
• What the potential areas of failure are
• Which areas suffer most due to failures
• How the system should react in different situations

If the users are also domain experts, they are aware of the areas that matter most to the
business requirements. This information is useful in setting priorities for testing tasks,
developing test cases, creating test data and verifying use cases.

A grasp of the various phases of the software development lifecycle, such as requirement
gathering, system design, system analysis and coding, enables one to understand:

• How defects might be introduced due to errors
• Which areas may have defects
• How to avoid defects being introduced into the system

Technical support expertise gives information about user expectations, experience and
usability requirements.

Expertise in software development is necessary in the following situations:

• Using testing tools that require programming and design knowledge
• Static code analysis
• Unit testing
• Reviewing code
• Technical integration testing

The testing skills usually include:

• Analyzing a test specification
• Participating in risk analysis
• Designing test cases
• The persistence required for running tests and documenting their results

Skills required by Test Manager


Test Managers should have the necessary skills, expertise and knowledge of project
management, because test management comprises many similar activities, such as planning,
monitoring progress and stakeholder reporting.

If the project manager is unavailable for any reason, the Test Manager might need to perform
both roles, especially during the final stages of the project. These skills are required in
addition to those discussed in the Foundation Level syllabus and in this one.

Besides technical skills, people in testing must have other skills to succeed. Some of these
skills are:

• People skills
• Providing and getting constructive feedback
• Influencing
• Negotiation
• Well-organized
• Detail oriented
• Strong communication skills, both verbal and written

A perfect team would have the right blend of experience and skills. Members of the team must
be ready to learn from their team mates and to teach them too.

This is necessary because different situations lay emphasis on different sets of skills. For
instance, in technical testing, programming skills are appreciated more, but for black-box
tests domain knowledge is more important.

Assessing and Managing Skills


To promote a teaching and learning culture among team members, the Test Manager can create a
skill assessment spreadsheet listing every skill required for the task at hand as well as for
the position.
Some of the areas in which testers may be assessed include domain knowledge, software testing,
systems development, software systems, interpersonal skills etc.

• Each team member is rated on a scoring scale of, say, 1 through 5 for each skill listed.
• The chart produced at the end of scoring reveals each person's strengths and weaknesses.
• Based on this knowledge, the Test Manager can create training programs, set performance
goals and define individual assessment criteria.
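The scoring approach above can be sketched as a small script. This is a minimal illustration, not a prescribed format; the names, skills and ratings are invented for the example:

```python
# Minimal sketch of a skill assessment matrix (illustrative names, skills and scores).
# Each team member is rated 1 (novice) to 5 (expert) per skill.
ratings = {
    "Asha":  {"domain knowledge": 4, "test design": 5, "automation": 2, "SQL": 3},
    "Ben":   {"domain knowledge": 2, "test design": 3, "automation": 5, "SQL": 4},
    "Carla": {"domain knowledge": 5, "test design": 2, "automation": 3, "SQL": 2},
}

skills = sorted(next(iter(ratings.values())))

# Team average per skill reveals collective gaps to target with training.
team_avg = {s: sum(r[s] for r in ratings.values()) / len(ratings) for s in skills}

# Individual weaknesses: skills rated below 3 become candidate training goals.
training_plan = {name: [s for s in skills if r[s] < 3] for name, r in ratings.items()}

for skill, avg in sorted(team_avg.items(), key=lambda kv: kv[1]):
    print(f"{skill:<17} team average {avg:.1f}")
for name, gaps in training_plan.items():
    print(f"{name}: needs development in {', '.join(gaps) or 'none'}")
```

A real spreadsheet would add role-specific weights, but even this simple form makes strengths and gaps visible at a glance.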

People are hired not just for one project but for the long term. Test Managers should promote
a culture of learning so that team members are inspired to gain more knowledge and prepare
for subsequent projects.

• Test Managers need to remember that hiring the perfect person is close to impossible.
• Even someone ideal for this project might not be suited for the subsequent one.
• So Test Managers should employ people who are intelligent, willing to learn, team players,
adaptable and able to pick up new skills.
• Finding all of these characteristics in every team member may not always be possible.
• A strong team can be established by bringing together individuals with complementary
strengths and weaknesses.

Skill Development Planning


The Test Manager should use the skill assessment spreadsheet to address the team's weaknesses
and develop a plan to enhance their skills. Some possible approaches are listed here:

• Training – Team members may be provided in-house, external or customized training.
• Self-study – People can be motivated to read books, participate in webinars or use internet
resources.
• Cross-training – A person needing a particular skill is teamed with another team member who
has that skill, so that knowledge transfer can occur.
• Mentoring – A new hire is generally paired with a senior team member experienced in the
role, so that the new person can be trained for it.

Test Managers must not forget to use the strengths displayed by the team members while
creating training plans.
How to manage hiring & team dynamics as a Test Manager?
Hiring the right people to build a strong team is an important responsibility of any
management role. While making a hire, besides looking for the specific skills needed for the
role, Test Managers must examine whether the new member's skills and personality type would
complement the team's existing ones.

A good team has a variety of technical skills and personalities, so that it can handle
projects of different complexities and interact with other teams successfully.

Software testing is a highly stressful and demanding activity throughout the software testing
life cycle, usually due to tight software development timelines and unreasonable expectations
from stakeholders.

• The Test Manager must hire team members who can work in a high-pressure environment without
getting frustrated, and perform even within seemingly impossible deadlines.
• Though the Test Manager is supposed to handle issues of scheduling and stakeholder
expectations directly, in practice some of this pressure is transferred to team members too.
• When hiring new team members, the Test Manager must ensure the personality type of the new
hire matches the work environment.

For example, if less time is allocated for each task than is needed, the Test Manager must
have achievers on the team who are always willing to complete the current task and move on to
the next one without complaint.

People tend to work harder and take on more responsibility when they feel valued and needed.
The Test Manager should work towards creating an environment where each team member feels
important and knows their contribution is essential to the team's success.

• In such an environment, cross-training and work balancing are carried out by members of the
team, freeing the Test Manager to handle external issues.
• New hires must be absorbed into the team immediately, given a specific role, trained
adequately and supported whenever needed.
• The role and training required can be decided after the individual's skills are assessed.
• The aim should be to ensure that, besides enhancing personal skills, the new team member
contributes to the team constructively.
• This can be done by assigning roles that suit the personality type, not just the technical
skills.

An objective assessment of a potential hire is essential to decide whether the person is
suitable for the role and the team. The assessment can be accomplished through interviews,
hands-on testing exercises, reviews of prior work samples, and reference checks.

Evaluating a candidate’s technical skills


These are some of the ways in which a candidate's technical skills may be evaluated:

• Creating test cases on the basis of requirements
• Reviewing technical documents like program code, requirement specifications etc.
• Giving objective and clear reviews
• Applying the appropriate technique for a given testing scenario
• Evaluating and documenting a failure
• Knowledge of defect classification
• Expertise in root cause analysis of detected defects
• Performing both positive and negative tests for a given API
• Finding information in a database using SQL and modifying it to test a specific scenario
• Performing test execution and failure troubleshooting
• Creating test specifications and test plans
• Creating a test summary, containing the test assessment, for a testing report
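The SQL-related skill in the list above can be exercised in a short, self-contained sketch using Python's built-in sqlite3 module; the `orders` table and its rows are hypothetical:

```python
import sqlite3

# Hypothetical exercise: locate data with SQL, then modify it so the application's
# failure-handling path can be tested against a known state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO orders (customer, status) VALUES (?, ?)",
    [("alice", "SHIPPED"), ("bob", "PENDING"), ("carol", "PENDING")],
)

# Find the records relevant to the scenario under test.
pending = conn.execute("SELECT id FROM orders WHERE status = ?", ("PENDING",)).fetchall()

# Force the first pending order into a cancelled state to exercise that code path.
conn.execute("UPDATE orders SET status = 'CANCELLED' WHERE id = ?", (pending[0][0],))
conn.commit()

rows = conn.execute("SELECT customer, status FROM orders ORDER BY id").fetchall()
print(rows)
```

Parameterized queries (the `?` placeholders) are the habit worth checking for in a candidate, since string-concatenated SQL is both fragile and unsafe.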

Desirable soft skills in a candidate


These are some of the desirable soft skills in a candidate:

• Presenting information about a delayed testing project
• Explaining a defect report that is not acceptable to the developer
• Training a team member
• Convincing management about an ineffective process
• Reviewing and commenting on a test scenario developed by a team member
• Team interviews
• Praising a developer

These lists are by no means complete; they only cover the most desirable skills in a new
hire. The actual requirements will differ for each project environment or organization.

The Test Manager should develop incisive interview questions and allow skill demonstrations
to evaluate a candidate and judge their strengths and weaknesses. The same evaluation
framework can be applied to current team members to identify their training needs and areas
of growth.

Besides the team members, Test Managers themselves should have excellent communication skills
and be highly diplomatic. They should prevent contentious issues from escalating, use a
suitable communication medium and concentrate on creating and nurturing relationships inside
the company.
How to manage a testing team at different levels of independence?
Testing can fit into the structure of an organization in different ways. While everyone involved
with software development is responsible for product quality, a testing team is capable of making
important contributions to the overall product quality. A Test Manager must know how to
manage the testing team and its deliverables at different levels of independence.

The level of independence of the testing team can vary significantly from one organization to
another.

The different levels of independence of the testing team are listed below from the least
independent to the most:

• No independent testers; the same developer codes and tests
o Testing is done by the same programmer who wrote the code
o Given the required time, the developer checks whether the code runs as intended, which
may not match the requirements that were actually given
o Defects can be fixed quickly by the developer
• A developer tests code written by another developer
o Very little or no independence of the developer-tester from the developer who wrote the
code
o The tester might be reluctant to report defects in code written by a fellow developer
o Developers usually focus on positive test cases
• The testing team or tester is a subset of the development team
o The testing team or individual tester reports to the development / project manager
o The focus is on testing adherence to requirements
o Development responsibilities might be added on top of the tester's testing responsibilities
o The test manager may be motivated to meet project schedules rather than test quality goals
o Development may be awarded higher status than testing
o Testers or the test team may not have sufficient authority to influence quality goals or
ensure adherence to them
• Testing is performed by test experts from user groups, business teams and other technical
departments not involved in development
o Testing results are reported accurately to stakeholders
o The chief focus of this group is on product quality
o Training and skill development are performed based on testing requirements
o Testing is considered a career path
o There is no fixed management for ensuring the testing team's quality focus
• Particular types of tests are performed by external test experts
o Expert tests are performed for product areas where general testing may not be sufficient
o Such test types include security testing, performance testing, usability testing or any
other area where expertise is essential
o Each team member focuses on quality, but only in his or her area of expertise
o For example, an application may achieve good performance, yet a performance testing
expert may not detect its functional defects
• Testing is performed by an external company outside the organization
o The independence between the developers and the software testing team is maximal in this
model
o Knowledge transfer may not be adequate, resulting in ineffective testing
o A pre-defined communication protocol and unambiguous requirements are essential
o Periodic auditing of the external testing organization's quality is necessary

The list given here is derived from the points on which individuals usually focus; it may not
apply to every organization. A person's title and role do not necessarily determine their
focus, so apart from Test Managers, even Development Managers may focus on quality.

If an independent Test Manager reports to a management chain that focuses on schedule, the
Test Manager may become schedule-focused too, rather than quality-focused. Test Managers need
to understand organizational goals to determine their team's position within the organization.

Testing and development teams work under varying levels of independence. A trade-off may be
involved: more independence can result in a higher degree of isolation and lower levels of
knowledge transfer.

If there is more dependence between the two teams, there may be more exchange of information,
but it may result in goal conflicts too. The model used for software development also
determines how independent the testing team is.

In case of Agile development, testers are usually embedded in the developing team as an
integral part.

An organization may perform testing at any of the independence levels discussed above, or at
a combination of two or more levels. For example, testing done by internal teams may be
further certified by external specialists or organizations.

Whatever the testing style adopted, it is essential to understand the responsibilities
associated with every testing phase, as well as to specify the requirements for maximizing
product quality while adhering to development schedules and project budgets.
How to motivate a software testing team as a Test / QA Manager?
A Test Manager / QA Manager should possess the skill of motivating his or her team. They
should be able to motivate the testing team to perform at its best and successfully deliver
challenging projects on tight schedules.

A tester can be motivated in many ways, some of which are:

• Praise for completed tasks
• Management approval
• Respect from peers and team members
• More responsibilities and independence
• Appropriate rewards for accomplishments, such as salary increases, recognition and added
responsibilities

Motivating the testing team as a Test / QA Manager


Many aspects of the project may make it difficult to implement these motivational measures.

Take the example of a project with an impractical deadline. Suppose the tester works very
hard, putting in extra effort and time, but the product is delivered before all tests are
complete, eventually resulting in a product of poor quality.

If the testers' efforts are not recognized and measured irrespective of the success of the
final product, it can be a huge demotivator.

• Suitable test metrics must be tracked to show that testers did an excellent job of testing,
identifying defects, mitigating risks and correctly documenting test results.
• If this information is not collected and published, the team may become demotivated by the
lack of due recognition.
• Recognition should come not only in intangible forms like management approval and peer
respect, but also in tangible forms like promotions, added responsibilities and merit
increases.
• Many opportunities can disappear if the test group lacks the respect of its peers.
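The kind of metric tracking described above can be sketched in a few lines of code; the test names, results and severities are illustrative, not from any real project:

```python
# Illustrative tracking of test metrics that make a tester's contribution visible.
executed = [
    {"test": "login_ok", "result": "pass"},
    {"test": "login_bad_pw", "result": "pass"},
    {"test": "checkout", "result": "fail", "severity": "critical"},
    {"test": "search", "result": "fail", "severity": "minor"},
    {"test": "report_export", "result": "pass"},
]

total = len(executed)
passed = sum(1 for t in executed if t["result"] == "pass")
pass_rate = 100 * passed / total

# Defects surfaced by testing, grouped by severity for stakeholder reporting.
by_severity = {}
for t in executed:
    if t["result"] == "fail":
        by_severity[t["severity"]] = by_severity.get(t["severity"], 0) + 1

print(f"Executed {total} tests, pass rate {pass_rate:.0f}%")
print("Defects by severity:", by_severity)
```

Published regularly, even simple counts like these demonstrate the team's output independently of whether the final product shipped on time.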

Appreciation and respect for good testers can be gained when it is obvious that they
contribute to project or business value. This is best accomplished by making testers a
continuous part of the project from its beginning.

Through their positive contribution, testers are bound to gain the respect of other
stakeholders. The testers' contribution should also be measured in terms of reduced cost of
quality and product risks.
A Test Manager must motivate the individual team members and also promote them at the
organizational level. Besides praising, the manager is responsible for giving honest and
timely feedback when mistakes are made.

A just and reliable Test Manager should always lead by example.

How to communicate effectively as a Test / QA Manager?


A Test Manager / QA Manager should be proficient in communication. They should be able to
communicate professionally with internal and external teams, customers, stakeholders and
users. A Test Manager should be skilled in sharing details crisply, stating facts and being
persuasive when required.

Members of a testing team mainly communicate through:

• Test product documentation – Examples of documents include master test plan, test strategy,
scenarios, defect reports, summary reports etc.
• Document review and feedback – Feedback may be provided for requirements document,
testing use cases, functional specs, documentation of component testing etc.
• All hands meetings – This includes interaction between team members, with development
team, management etc.

When testers communicate with other stakeholders, they should be professional and effective
and stick to facts, so that respect is built and maintained.

• When giving feedback on work products generated by others, care must be taken that it is
constructive and delivered diplomatically and objectively.
• All communication must lead to accomplishing testing objectives, enhancing product quality
and optimizing software development processes.
• Test Managers need to communicate with a plethora of people: end users, customers, members
of the project team, management, external testers etc.
• So the communication needs to be relevant to all of them.

Say a defect report has been created by the test team for the development team. This report
will have detailed information about trends, severity, defect-prone areas etc. However, when
the same report is presented to management, it must be brief and to the point, without going
into too much detail.

As the Test Manager usually gives the presentation, it is essential for him or her to decide
on the level of detail required by the audience. A manager might want to know the number and
severity of defects, whereas executive management may appreciate defect trends presented in a
graphical format.
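The idea of tailoring the same defect data to different audiences can be sketched as follows; the defect records are hypothetical:

```python
# One defect dataset, two presentation levels (hypothetical data).
defects = [
    {"id": 101, "area": "payments", "severity": "critical", "status": "open"},
    {"id": 102, "area": "payments", "severity": "major", "status": "fixed"},
    {"id": 103, "area": "search", "severity": "minor", "status": "open"},
]

# Detailed view for the development team: every defect, every field.
for d in defects:
    print(f"#{d['id']} [{d['severity']:^8}] {d['area']:<9} {d['status']}")

# Executive view: a single summary line of counts by severity.
summary = {}
for d in defects:
    summary[d["severity"]] = summary.get(d["severity"], 0) + 1
print("Summary:", ", ".join(f"{n} {sev}" for sev, n in summary.items()))
```

The underlying data is identical; only the level of aggregation changes per audience.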
Efficient and crisp communication holds the audience's attention while conveying correct
information. Test Managers should treat each presentation as an occasion to promote quality
and quality processes.

• Communication with those outside the test team is called outward communication.
• Communication within the test group is called inward communication.
• The Test Manager needs to carry out inward communication efficiently to share the latest
updates, instructions, priority changes and other necessary information.
• The manager also needs to perform upward communication with those higher up the management
chain, and downward communication with those below him or her in the chain.
• Whatever the direction, the same rules apply: communication must be appropriate, effective
and easy to understand.

Test Managers / QA Managers use several methods of communication, such as e-mail, formal
meetings, informal meetings, verbal interactions, formal reports, informal reports, defect
management tools etc.

QA Managers must be adept at using all these means to maintain professional and factual
communication. They must proofread each communication for both content and quality before
sending it, even under the most pressing deadlines.

Written communication has a life much longer than the project itself, so it is essential to create
professional content that boosts the organization’s image as being quality conscious.

Test Managers must be persuasive in their communication when required, for example when
justifying the cost of quality or test effort estimates to senior management.

They should be able not only to persuade senior management but also to influence peers and
subordinates when steering decisions regarding the master test plan or the test strategy to
be used, or when arriving at a consensus on test metrics with the development team.
