Software Engg PCC CS-601

Software engineering is the application of scientific principles to develop reliable software products through systematic processes. It addresses the complexities of large programming projects, reduces costs, and ensures quality management. The Software Development Life Cycle (SDLC) outlines the stages of software development, from planning and requirement analysis to maintenance, with various models like the Waterfall model guiding the process.

What is Software Engineering?

• The term software engineering is the product of two words: software and engineering.
• Software is a collection of integrated programs.
• Software consists of carefully organized instructions and code written by developers in any of various programming languages.
• Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks, processes, etc.
Software Engineering is an engineering branch associated with the development of software products using well-defined scientific principles, techniques, and procedures. The result of software engineering is an effective and reliable software product.
Need of Software Engineering

•Huge Programming: It is simpler to build a wall than a house or building; similarly, as the size of programs becomes large, engineering has to step in to give the work a scientific process.
•Cost: The hardware industry has demonstrated its skills, and large-scale manufacturing has brought down the cost of computers and electronic hardware. But the cost of programming remains high if a proper process is not adopted.
•Quality Management: A better software development procedure provides a better-quality software product.
Characteristics of a good software engineer

• Exposure to systematic methods, i.e., familiarity with software engineering principles.
• Good technical knowledge of the project domain (domain knowledge).
• Good programming abilities.
• Good communication skills. These skills comprise oral, written, and interpersonal skills.
• High motivation.
• Sound knowledge of the fundamentals of computer science.
• Intelligence.
• Ability to work in a team.
• Discipline, etc.
The importance of Software Engineering is as follows:
Reduces complexity:

• Big software is always complicated and challenging to develop.
• Software engineering offers a good way to reduce the complexity of any project.
• Software engineering divides big problems into various small issues.
• Each small issue is then solved one by one, and all these small problems are solved independently of each other.
To minimize software cost:
• Software needs a lot of hard work, and software engineers are highly paid experts.
• A lot of manpower is required to develop software with a large amount of code.
• In software engineering, however, programmers plan everything and remove the things that are not needed. In turn, the cost of software production becomes lower than that of software developed without a software engineering method.
To decrease time:
• If you are making large software, you may need to write a lot of code.
• This is a very time-consuming procedure, and if it is not well handled, it can take a lot of time.
• If you build your software according to the software engineering method, it will save a lot of time.
Handling big projects:
• Big projects are not done in a couple of days; they need patience, planning, and management.
• Investing six or seven months of a company's time in a project requires a lot of planning, direction, testing, and maintenance.
• No one can say that they have spent four months of a company's time on a task while the project is still in its first stage, because the company has committed many resources to the plan and it should be completed. So, to handle a big project without problems, the company has to follow a software engineering method.
Reliable software:
• Software should be reliable, meaning that once delivered, it should work for at least its stated period or subscription.
• If any bugs appear in the software, the company is responsible for fixing them.
• Because testing and maintenance are built into software engineering, there is no worry about its reliability.
Effectiveness:
• Effectiveness comes when something is made according to standards.
• Meeting software standards is a major target for companies that want to make their software more effective.
• So software becomes more effective with the help of software engineering.
Software Processes and their activities

A software process is the set of activities and associated outcomes that produce a software product. Software engineers mostly carry out these activities. There are four key process activities, which are common to all software processes. These activities are:
• Software specification: The functionality of the software and the constraints (limitations) on its operation must be defined.
• Software development: The software meeting the requirements (SRS) must be produced.
• Software validation: The software must be validated (checked for accuracy) to ensure that it does what the customer wants.
• Software evolution: The software must evolve to meet changing client needs.
Software Crisis Due to

• Size: Software is becoming more expensive and more complex with the growing complexity of, and expectations from, software. For example, the code in consumer products is doubling every couple of years.

• Quality: Many software products have poor quality, i.e., the products exhibit defects after being put into use, due to ineffective testing techniques. For example, software testing typically finds 25 errors per 1000 lines of code.

• Cost: Software development is costly in terms of both the time taken to develop it and the money involved. For example, development of the FAA's Advanced Automation System cost over $700 per line of code.

• Delayed Delivery: Serious schedule overruns are common. Very often the software takes longer than the estimated time to develop, which in turn leads to costs shooting up. For example, one in four large-scale development projects is never completed.
Software
3 components (needed to be built in an SE environment) of software: programs, documentation, and operating procedures.
Program vs. Software

• Software is more than programs. Any program is a subset of software, and it becomes software only when documentation and operating procedure manuals are prepared. Operating procedures are a set of step-by-step instructions compiled by an organization to help workers carry out complex routine operations.
Types of documentation manuals
Formal specification: In computer science, formal specifications are mathematically based techniques whose purpose is to help with the implementation of systems and software.
Context diagram: A diagram that defines the boundary between the system, or part of a system, and its environment, showing the entities that interact with it. This diagram is a high-level view of a system.
• Source code listing: An output produced by a compiler or assembler, consisting of the source program neatly laid out and accompanied by error messages.
• Cross-reference listing: Provides a list of all data references, procedure-name references, and program-name references, by statement number, within the source program.
• Test data: Data which has been specifically identified for use in tests, typically of a computer program. Some data may be used in a confirmatory way, typically to verify that a given set of inputs to a given function produces some expected result.
• A standard operating procedure (SOP) is a set of step-by-step instructions compiled by an
organization to help workers carry out routine operations. SOPs aim to achieve efficiency,
quality output and uniformity of performance, while reducing miscommunication and failure
to comply with industry regulations.
• The operations manual is the documentation by which an organization provides guidance for
members and employees to perform their functions correctly and reasonably efficiently.
• A user manual is a technical communication document intended to give assistance to people
on how to use a product. A good user manual assists users on how to use a product safely,
healthily and effectively.
• The System Overview describes the system requirements, operating environment, system
and subsystem architecture, files and database design, input formats, output layouts,
human-machine interfaces, detailed design, processing logic, and external interfaces.
• Reference materials are various sources that provide background information or quick facts
on any given topic.
• System administration guide: This document is a non-technical, practical guide to performing
the duties and practices inherent in taking on the responsibilities for and maintaining a
server.
Software Development Life Cycle (SDLC)

• A life cycle model maps the various activities performed on a software product from its inception to retirement.
• Different life cycle models may plan the necessary development activities and phases in different ways.
• During any life cycle stage, more than one activity may also be carried out.
SDLC Cycle

• The SDLC cycle represents the process of developing software. The SDLC framework includes the following steps:
Need of SDLC

• The development team must determine a suitable life cycle model for a particular project and then adhere to it.
• Without using an exact life cycle model, the development of a software product would not proceed in a systematic and disciplined manner.
• When a team is developing a software product, there must be a clear understanding among team members about when to do what. Otherwise, it would lead to chaos and project failure.
• This problem can be illustrated with an example. Suppose a software development task is divided into various parts and the parts are assigned to the team members. From then on, suppose each team member is allowed the freedom to develop the parts assigned to them in whatever way they like. It is possible that one member might start writing the code for their part, another might choose to prepare the test documents first, and some other engineer might begin with the design phase of the parts assigned to them. This would be a perfect recipe for project failure. A software life cycle model defines entry and exit criteria for each phase. A phase can begin only if its stage-entry criteria have been fulfilled. So without a software life cycle model, the entry and exit criteria for a stage cannot be recognized. Without software life cycle models, it becomes tough for software project managers to monitor the progress of the project.
The stages of SDLC are as follows:
Stage 1: Planning and requirement analysis
• Requirement analysis is the most important and necessary stage in the SDLC.
• The senior members of the team perform it with inputs from all the stakeholders and domain experts or SMEs (a Subject Matter Expert is an authority in a particular technology) in the industry.
• Planning for the quality assurance requirements and identification of the risks associated with the project is also done at this stage.
• The business analyst and project organizer set up a meeting with the client to gather all the data, such as what the customer wants to build, who the end user will be, and what the objective of the product is. Before creating a product, a core understanding or knowledge of the product is very necessary.
• For example, a client wants an application that handles money transactions. In this case, the requirements have to be precise: what kind of operations will be done, how they will be done, in which currency they will be done, etc.
• Once the requirements are gathered, an analysis is carried out to audit (conduct an official inspection of) the feasibility of developing the product. In case of any ambiguity, the issue is flagged for further discussion.
• Once the requirements are understood, the SRS (Software Requirement Specification) document is created. The developers should thoroughly follow this document, and it should also be reviewed by the customer for future reference.
• Stage 2: Defining Requirements
• Once the requirement analysis is done, the next stage is to clearly represent and document the software requirements and get them accepted by the project stakeholders.
• This is accomplished through the "SRS" (Software Requirement Specification) document, which contains all the product requirements to be constructed and developed during the project life cycle.
• Stage 3: Designing the Software
• The next phase brings together all the knowledge of requirements and analysis into the design of the software project. This phase builds on the output of the previous two, i.e., inputs from the customer and requirement gathering.
• Stage 4: Developing the project
• In this phase of the SDLC, the actual development begins and the product is built. The design is implemented in code. Developers have to follow the coding guidelines defined by their management, and programming tools such as compilers, interpreters, and debuggers are used to develop and implement the code.
• Stage 5: Testing
• During this stage, unit testing, integration testing, system testing, and acceptance testing are done.
• Stage 6: Deployment
• Once the software is certified and no bugs or errors are reported, it is deployed.
• Then, based on the assessment, the software may be released as it is or with suggested enhancements.
• After the software is deployed, its maintenance begins.
• Stage 7: Maintenance
• Once the client starts using the developed system, the real issues come up and need to be solved from time to time.
• This procedure, where care is taken of the developed product, is known as maintenance.
SDLC Models

• The Software Development Life Cycle (SDLC) is a model used in project management that defines the stages included in an information system development project, from an initial feasibility study to the maintenance of the completed application.
• There are different software development life cycle models used to specify and design the software, which are followed during the software development phase. These models are also called "Software Development Process Models." Each process model follows a series of phases unique to its type to ensure success in the process of software development.
Waterfall model

• Winston Royce introduced the Waterfall Model in 1970. This model has five phases: requirements analysis and specification; design; implementation and unit testing; integration and system testing; and operation and maintenance. The steps always follow this order and do not overlap. The developer must complete every phase before the next phase begins. The model is named the "Waterfall Model" because its diagrammatic representation resembles a cascade of waterfalls.
• 1. Requirements analysis and specification phase: The aim of this phase is to understand the exact requirements of the customer and to document them properly. Both the customer and the software developer work together to document all the functional, performance, and interfacing requirements (how the user and the software will interact) of the software. It describes the "what" of the system to be produced and not the "how". In this phase, a large document called the Software Requirement Specification (SRS) is created, which contains a detailed description of what the system will do, written in common language.
• 2. Design Phase: This phase aims to transform the requirements
gathered in the SRS into a suitable form which permits further
coding in a programming language. It defines the overall software
architecture together with high level and detailed design. All this
work is documented as a Software Design Document (SDD).
• 3. Implementation and unit testing: During this phase, design is
implemented. If the SDD is complete, the implementation or coding
phase proceeds smoothly, because all the information needed by
software developers is contained in the SDD.
• During testing, the code is thoroughly examined and modified. Small
modules are tested in isolation initially. After that these modules are
tested by writing some code to check the interaction between these
modules and the flow of intermediate output.
• 4. Integration and System Testing: This phase is highly
crucial as the quality of the end product is determined by
the effectiveness of the testing carried out. The better
output will lead to satisfied customers, lower
maintenance costs, and accurate results. Unit testing
determines the efficiency of individual modules. However,
in this phase, the modules are also tested for their
interactions with each other and with the system.
• 5. Operation and maintenance phase: Maintenance consists of the tasks performed for every user once the software has been delivered to the customer, installed, and made operational.
When to use SDLC Waterfall Model?

Some circumstances where the use of the Waterfall model is most suited are:
• When the requirements are constant and do not change regularly.
• The project is short.
• The situation is calm (the software is not complex in terms of functionality).
• Where the tools and technology used are consistent and not changing.
• When resources (software and hardware) are well prepared and available for use.

Advantages of Waterfall model

• This model is simple to implement, and the number of resources required for it is minimal.
• The requirements are simple and explicitly declared; they remain unchanged during the entire project development.
• The start and end points for each phase are fixed, which makes it easy to track progress.
• The release date for the complete product, as well as its final cost, can be determined before development.
• It is easy to control and gives clarity to the customer.
Disadvantages of Waterfall model

• In this model, the risk factor is higher, so it is not suitable for large and complex projects.
• This model cannot accommodate changes in requirements during development.
• It is tough to go back to an earlier phase. For example, if the application has moved on to the coding phase and there is a change in requirements, it becomes tough to go back and change it.
• Since testing is done at a later stage, challenges and risks cannot be identified in the earlier phases, so a risk-reduction strategy is difficult to prepare.
RAD (Rapid Application Development) Model

• A software project can be implemented using this model if the project can be broken down into small modules, where each module can be assigned independently to a separate team. These modules can finally be combined to form the final product. RAD (Rapid Application Development) is based on the idea that products can be developed faster and with higher quality through:
• Gathering requirements using workshops or focus groups
• Prototyping and early, reiterative user testing of designs
• The re-use of software components
• A rigidly paced schedule that defers design improvements to the next product version
• Less formality in reviews and other team communication
The various phases of RAD are as follows:
• 1. Business Modeling: The information flow among business functions is defined by answering questions such as: what data drives the business process, what data is generated, who generates it, where does the information go, who processes it, and so on.
• 2. Data Modeling: The data collected from business modeling is refined into a set of data objects (entities) that are needed to support the business. The attributes (characteristics of each entity) are identified, and the relations between these data objects (entities) are defined.
• 3. Process Modeling: The data objects defined in the data modeling phase are transformed to achieve the data flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
• 4. Application Generation: Automated tools are used to facilitate construction of the software, often using fourth-generation (4GL) techniques.
• 5. Testing & Turnover: Many of the programming components have already been tested, since RAD emphasizes reuse. This reduces the overall testing time. But the new parts must be tested, and all interfaces must be fully exercised.
When to use RAD Model?

• When a system that can be modularized needs to be created in a short span of time (2-3 months).
• When the requirements are well known.
• When the technical risk is limited.
• When there is a need to build a system that can be modularized within a 2-3 month period.
• It should be used only if the budget allows the use of automated code-generating tools.
Advantage of RAD Model

• This model is flexible to change.
• In this model, changes are easy to adopt.
• Each phase in RAD delivers the highest-priority functionality to the customer.
• It reduces development time.
• It increases the reusability of features.
Disadvantage of RAD Model

• It requires highly skilled designers.
• Not all applications are compatible with RAD.
• The RAD model cannot be used for smaller projects.
• It is not suitable when the technical risk is high.
• It requires user involvement.
Spiral Model

• The Spiral Model is one of the most important Software Development Life Cycle models and provides support for risk handling.
• In its diagrammatic representation, it looks like a spiral with many loops.
• The exact number of loops of the spiral is unknown and can vary from project to project.
• Each loop of the spiral is called a phase of the software development process.
• The exact number of phases needed to develop the product can be varied by the project manager depending upon the project risks.
• Since the project manager dynamically determines the number of phases, the project manager plays an important role in developing a product using the spiral model.
• The radius of the spiral at any point represents the expenses (cost) of the project so far, and the angular dimension represents the progress made so far in the current phase.
Each phase of the Spiral Model is divided into four quadrants, as shown in the figure above. The functions of these four quadrants are discussed below:
• Objective determination and identification of alternative solutions: Requirements are gathered from the customers, and the objectives are identified, elaborated, and analyzed at the start of every phase. Alternative solutions possible for the phase are then proposed in this quadrant.
• Identify and resolve Risks: During the second quadrant, all the possible
solutions are evaluated to select the best possible solution. Then the risks
associated with that solution are identified and the risks are resolved using
the best possible strategy. At the end of this quadrant, the Prototype is built
for the best possible solution.
• Develop next version of the Product: During the third quadrant, the
identified features are developed and verified through testing. At the end of
the third quadrant, the next version of the software is available.
• Review and plan for the next Phase: In the fourth quadrant, the Customers
evaluate the so far developed version of the software. In the end, planning
for the next phase is started.
Risk Handling in Spiral Model
• A risk is any adverse situation that might affect the successful completion of a software project.
• The most important feature of the spiral model is handling the unknown risks that arise after the project has started.
• Such risks are more easily resolved by developing a prototype.
• The spiral model supports coping with risks by providing scope to build a prototype in every phase of the software development.
• The Prototyping Model also supports risk handling, but the risks must be identified completely before the start of the development work of the project.
• In a real-life project, however, risks may arise after development work starts; in that case, the Prototyping Model cannot be used.
• In each phase of the Spiral Model, the features of the product are elaborated and analyzed, and the risks at that point in time are identified and resolved through prototyping. Thus, this model is much more flexible compared to other SDLC models.
Why Spiral Model is called Meta Model?

• The Spiral model is called a meta-model because it includes all the other SDLC models. For example, a single-loop spiral actually represents the Iterative Waterfall Model.
• The spiral model incorporates the stepwise approach of the Classical Waterfall Model.
• The spiral model uses the approach of the Prototyping Model by building a prototype at the start of each phase as a risk-handling technique.
• Also, the spiral model can be considered as supporting the Evolutionary model: the iterations along the spiral can be considered as evolutionary levels through which the complete system is built.
Advantages of Spiral Model:

• Risk Handling: For projects with many unknown risks that surface as development proceeds, the Spiral Model is the best development model to follow, due to the risk analysis and risk handling done in every phase.
• Good for large projects: The Spiral Model is recommended for large and complex projects.
• Flexibility in Requirements: Change requests in the requirements at a later phase can be incorporated accurately by using this model.
• Customer Satisfaction: Customers can see the development of the product at an early phase of the software development and thus become accustomed to the system by using it before completion of the total product.
Disadvantages of Spiral Model:

• Complex: The Spiral Model is much more complex than other SDLC models.
• Expensive: The Spiral Model is not suitable for small projects as it is expensive.
• Too much dependence on risk analysis: The successful completion of the project depends heavily on risk analysis. Without highly experienced experts, a project developed using this model is likely to fail.
• Difficulty in time management: As the number of phases is unknown at the start of the project, time estimation is very difficult.
Choosing the right Software development life cycle model

• Selecting a Software Development Life Cycle (SDLC) methodology is a challenging task for many organizations and software engineers.
• What tends to make it challenging is the fact that few organizations know what criteria to use when selecting a methodology that adds value to the organization.
• Fewer still understand that a methodology might apply to more than one life cycle model. Before considering a framework for selecting a given SDLC methodology, we need to define the different types and illustrate the advantages and disadvantages of those models.
How to select the right SDLC

• Selecting the right SDLC is a process in itself, which the organization can implement internally or bring in consultants for. There are some steps to make the right selection.
STEP 1: Learn about the SDLC models
• SDLC models are not all alike in their usage. In order to select the right SDLC, you should have enough experience, be familiar with the SDLCs being considered, and understand them correctly.
• As described in the software development life cycle models article, models are like tools: it is important to know the usage of each tool to know which context it fits into.
• Imagine the image below by Jacob Lawrence: if the carpenter did not know the tools he was using, what would the result be? Did you visualize the disaster?
STEP 2: Assess the needs of stakeholders
• We must study the business domain, stakeholders' concerns and requirements, business priorities, our technical capability, and technology constraints to be able to choose the right SDLC against the selection criteria.
STEP 3: Define the criteria
Some of the selection criteria or arguments that you may use to select an SDLC are:
• Is the SDLC suitable for the size of our team and their skills?
• Is the SDLC suitable for the technology selected for implementing the solution?
• Is the SDLC suitable for client and stakeholders' concerns and priorities?
• Is the SDLC suitable for the geographical situation (distributed team)?
• Is the SDLC suitable for the size and complexity of our software? (Software complexity is a way to describe a specific set of characteristics of your code; these characteristics all focus on how your code interacts with other pieces of code.)
• Is the SDLC suitable for the type of projects we do?
• Is the SDLC suitable for our software engineering capability?
• Is the SDLC suitable for project risk and quality assurance?
What are the criteria?
• Here are the recommended criteria (factors) for deciding which model to use:
• Complexity: A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, the human brain, and communication systems.
• Reliability: The probability that a system, including all hardware, firmware, and software, will satisfactorily perform the task for which it was designed or intended.
• Project visibility: The term project visibility refers to the big-picture view of a project.
• Software reusability: An attribute that refers to the expected reuse potential of a software component.
STEP 4: Decide

• Once you have defined the criteria and the arguments you need to discuss with the team, build a decision matrix: give each criterion a defined weight and a score for each option. After analyzing the results, document the decision in the project artifacts and share it with the related stakeholders.
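As a minimal sketch of the weighted decision matrix described above (the criteria names, weights, and scores below are hypothetical, not taken from the course material), the selection could be computed like this:

# Hypothetical weighted decision matrix for choosing an SDLC model.
# Criteria, weights, and scores are illustrative only.

criteria_weights = {
    "requirements_stability": 0.30,
    "project_size": 0.25,
    "risk_level": 0.25,
    "team_experience": 0.20,
}

# Score each candidate model against each criterion (1 = poor fit, 5 = good fit).
scores = {
    "Waterfall": {"requirements_stability": 5, "project_size": 3, "risk_level": 2, "team_experience": 4},
    "Spiral":    {"requirements_stability": 3, "project_size": 5, "risk_level": 5, "team_experience": 3},
    "Agile":     {"requirements_stability": 2, "project_size": 3, "risk_level": 4, "team_experience": 5},
}

def weighted_total(model_scores, weights):
    """Sum of (score * weight) over all criteria."""
    return sum(model_scores[c] * w for c, w in weights.items())

for model, model_scores in scores.items():
    print(f"{model:10s} -> {weighted_total(model_scores, criteria_weights):.2f}")

# The model with the highest weighted total is the candidate to document and share.

The weights force the team to state its priorities explicitly before comparing options, which is the point of the decision matrix step.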

STEP 5: Optimize
• You can always optimize (add to or change) the SDLC during project execution. If you notice that upcoming changes do not fit the selected SDLC, it is okay to adapt and cope with the changes. You can even create your own SDLC model that is optimal for your organization or the type of projects you are involved in.
V-Model

• The V-Model is also referred to as the Verification and Validation Model. In it, each phase of the SDLC must be completed before the next phase starts. It follows a sequential design process, the same as the waterfall model. Testing of the product is planned in parallel with the corresponding stage of development.
• Verification: This involves a static analysis method (review) done without executing code. It is the process of evaluating the product development process to determine whether the specified requirements are met.
• Validation: This involves dynamic analysis methods (functional, non-functional); testing is done by executing the code. Validation is the process of evaluating the software after the completion of the development process to determine whether it meets customer expectations and requirements.
• So the V-Model contains the Verification phases on one side and the Validation phases on the other. The Verification and Validation processes are joined by the coding phase, forming a V shape; thus it is known as the V-Model.
The various phases of the Verification side of the V-model are:
• Business requirement analysis: This is the first step, where the product requirements are understood from the customer's perspective. This phase involves detailed communication to understand the customer's expectations and exact requirements.
• System Design: In this stage, system engineers analyze and interpret the business of the proposed system by studying the user requirements document.
• Architecture Design: The baseline in selecting the architecture is that it should realize all the requirements; it typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.
• Module Design: In the module design phase, the system is broken down into small modules. The detailed design of the modules is specified, which is known as Low-Level Design (LLD).
• Coding Phase: After designing, the coding phase starts. Based on the requirements, a suitable programming language is decided. There are guidelines and standards for coding. Before check-in to the repository, the final build is optimized for better performance, and the code goes through many code reviews to check its performance.
The various phases of the Validation side of the V-model are:
• Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module design phase. These UTPs are executed to eliminate errors at the code or unit level. A unit is the smallest entity that can exist independently, e.g., a program module. Unit testing verifies that the smallest entity can function correctly when isolated from the rest of the code/units (a minimal sketch is shown after this list).
• Integration Testing: Integration Test Plans are developed during the Architecture Design phase. These tests verify that groups created and tested independently can coexist and communicate among themselves.
• System Testing: System Test Plans are developed during the System Design phase. Unlike Unit and Integration Test Plans, System Test Plans are composed by the client's business team. System testing ensures that expectations from the developed application are met.
• Acceptance Testing: Acceptance testing is related to the business requirement analysis phase. It involves testing the software product in the user environment. Acceptance tests reveal compatibility problems with the other systems available within the user environment. They also discover non-functional problems, such as load and performance defects, in the real user environment.
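To make the unit-testing idea concrete, here is a minimal sketch using Python's built-in unittest module. The function under test (a hypothetical discount calculator) is invented for illustration and is not part of the course material.

import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount (illustrative unit under test)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """Unit tests exercise the smallest entity in isolation from the rest of the system."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()

Each test checks one behaviour of the unit in isolation, which is exactly what a Unit Test Plan in the V-Model specifies at module-design time.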
When to use V-Model?

• When the requirements are well defined and unambiguous.
• The V-shaped model should be used for small to medium-sized projects where requirements are clearly defined and fixed.
• The V-shaped model should be chosen when ample technical resources are available with the essential technical expertise.
Advantage (Pros) of V-Model:

• Easy to understand.
• Testing activities such as planning and test design happen well before coding.
• This saves a lot of time, hence a higher chance of success than the waterfall model.
• Avoids the downward flow of defects.
• Works well for small projects where requirements are easily understood.
Disadvantage (Cons) of V-Model:
• Very rigid and least flexible.
• Not good for complex projects.
• Software is developed during the implementation stage, so no early prototypes of the software are produced.
• If any changes happen midway, then the test documents, along with the requirement documents, have to be updated.
Incremental Model

• The Incremental Model is a process of software development where the requirements are divided into multiple standalone modules of the software development cycle. In this model, each module goes through the requirements, design, implementation, and testing phases. Every subsequent release of the module adds functionality to the previous release. The process continues until the complete system is achieved.
The various phases of incremental model
are as follows:
• 1. Requirement analysis: In the first phase of the incremental model, the product analysis experts identify the requirements, and the system's functional requirements are understood by the requirement analysis team. This phase plays a crucial role in developing software under the incremental model.
• 2. Design & Development: In this phase of the incremental model of the SDLC, the design of the system functionality and the development method are completed successfully. Whenever the software adds new functionality, the incremental model goes through the design and development phase.
• 3. Testing: In the incremental model, the testing phase checks the performance of each existing function as well as the additional functionality. In the testing phase, various methods are used to test the behavior of each task.
• 4. Implementation: The implementation phase enables the coding of the development system. It involves the final coding of the design produced in the design and development phase, and testing of the functionality in the testing phase. After completion of this phase, the working product is enhanced and upgraded up to the final system product.
When do we use the Incremental Model?

• When the requirements are well understood.
• A project has a lengthy development schedule.
• When the software team is not very well skilled or trained.
• When the customer demands a quick release of the product.
• When you can develop prioritized requirements first.
Agile Model

• The meaning of Agile is swift or versatile. "Agile process model" refers to a software development approach based on iterative development. Agile methods break tasks into smaller iterations, or parts, and do not directly involve long-term planning. The project scope and requirements are laid down at the beginning of the development process. Plans regarding the number of iterations, and the duration and scope of each iteration, are clearly defined in advance.
• Each iteration is considered a short time "frame" in the Agile process model, typically lasting from one to four weeks. Dividing the entire project into smaller parts helps to minimize the project risk and to reduce the overall project delivery time. Each iteration involves a team working through a full software development life cycle, including planning, requirements analysis, design, coding, and testing, before a working product is demonstrated to the client.
Phases of Agile Model:

The phases in the Agile model are as follows:
• Requirements gathering
• Design the requirements
• Construction/iteration
• Testing/quality assurance
• Deployment
• Feedback
• 1. Requirements gathering: In this phase, you must define the requirements. You should explain the business opportunities and plan the time and effort needed to build the project. Based on this information, you can evaluate technical and economic feasibility.
• 2. Design the requirements: When you have identified the project, work with stakeholders to define the requirements. You can use a user flow diagram or a high-level UML diagram to show the working of new features and how they will apply to your existing system.
• 3. Construction/iteration: When the team has defined the requirements, the work begins. Designers and developers start working on the project, which aims to deploy a working product. The product will undergo various stages of improvement, so it starts with simple, minimal functionality.
• 4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks for bugs.
• 5. Deployment: In this phase, the team releases the product into the user's work environment.
• 6. Feedback: After releasing the product, the last step is feedback. In this step, the team receives feedback about the product and works through that feedback.
Agile Testing Methods:

•Scrum
•Crystal
•Dynamic Systems Development Method (DSDM)
•Feature Driven Development (FDD)
•Lean Software Development
•eXtreme Programming (XP)
When to use the Agile Model?

• When frequent changes are required.
• When a highly qualified and experienced team is available.
• When a customer is ready to have a meeting with the software team all the time.
• When the project size is small.
Scrum

SCRUM is an agile development process focused primarily on ways to manage tasks in team-based development conditions.
There are three roles in it, and their responsibilities are:
Scrum Master: The Scrum Master sets up the team, arranges the meetings, and removes obstacles for the process.
Product Owner: The Product Owner creates the product backlog, prioritizes the backlog, and is responsible for the delivery of functionality in each iteration.
Scrum Team: The team manages and organizes its own work to complete the sprint or cycle.
eXtreme Programming (XP)

• This type of methodology is used when customers' demands or requirements are constantly changing, or when they are not sure about the system's performance.
Crystal:

There are three concepts in this method:
• Chartering: Multiple activities are involved in this phase, such as forming a development team, performing feasibility analysis, developing plans, etc.
• Cyclic delivery: Under this, two or more delivery cycles take place, during which:
– The team updates the release plan.
– The integrated product is delivered to the users.
• Wrap up: According to the user environment, this phase performs deployment and post-deployment activities.
Advantages (Pros) of Agile Method:
• Frequent delivery.
• Face-to-face communication with clients.
• Efficient design that fulfils the business requirements.
• Changes are acceptable at any time.
• It reduces total development time.
Disadvantages (Cons) of Agile Model:
• Due to the shortage of formal documents, confusion can arise, and crucial decisions taken throughout the various phases can be misinterpreted at any time by different team members.
• Due to the lack of proper documentation, once the project is completed and the developers are allotted to another project, maintenance of the finished project can become difficult.
Iterative Model

• In this model, you can start with some of the software specifications and develop the first version of the software. If, after the first version, there is a need to change the software, then a new version of the software is created in a new iteration. Every release of the Iterative Model finishes in an exact and fixed period called an iteration.
• The Iterative Model allows revisiting the earlier phases, in which variations are made. The final output of the project is renewed at the end of the Software Development Life Cycle (SDLC) process.
The various phases of Iterative model are as follows:

• 1. Requirement gathering & analysis: In this phase, requirements are gathered from customers and checked by an analyst to see whether they can be fulfilled, and whether they can be achieved within budget. After this, the software team moves to the next phase.
• 2. Design: In the design phase, the team designs the software using different diagrams, such as data flow diagrams, activity diagrams, class diagrams, state transition diagrams, etc.
• 3. Implementation: In the implementation phase, the requirements are written in a programming language and transformed into computer programs, which are called software.
• 4. Testing: After completing the coding phase, software testing starts using different test methods. There are many test methods, but the most common are the white-box, black-box, and grey-box test methods.
• 5. Deployment: After completing all the phases, the software is deployed to its working environment.
• 6. Review: In this phase, after product deployment, a review is performed to check the behavior and validity of the developed product. If any errors are found, the process starts again from requirement gathering.
• 7. Maintenance: In the maintenance phase, after deployment of the software in the working environment, there may be some bugs or errors, or new updates may be required. Maintenance involves debugging and adding new options.
When to use the Iterative Model?

• When requirements are defined clearly and are easy to understand.
• When the software application is large.
• When there is a requirement for changes in the future.
Advantages (Pros) of Iterative Model:
• Testing and debugging during a smaller iteration is easy.
• Risks are identified and resolved during each iteration.
• Limited time is spent on documentation and extra time on designing.
Disadvantage(Cons) of Iterative Model:

• It is not suitable for smaller projects.
• More resources may be required.
• The design can change again and again because of imperfect requirements.
• Requirement changes can cause the project to go over budget.
• The project completion date is not confirmed because of changing requirements.
Big Bang Model

• In this model, developers do not follow any specific process. Development begins with the necessary funds and efforts in the form of inputs, and the result may or may not be as per the customer's requirements, because in this model even the customer requirements are not defined.
• This model is ideal for small projects such as academic or practical projects. One or two developers can work together on this model.
When to use Big Bang Model?

• As discussed above, this model is suitable when the project is small, like an academic or practical project. This method is also used when the developer team is small, when the requirements are not defined, and when the release date is not confirmed or given by the customer.
Advantages (Pros) of Big Bang Model:
• No planning is required.
• Simple model.
• Few resources are required.
• Easy to manage.
• Flexible for developers.
Disadvantages (Cons) of Big Bang Model:
• There is high risk and uncertainty.
• Not acceptable for large projects.
• If the requirements are not clear, the project can become very expensive.
Prototype Model
• The prototype model requires that, before carrying out the development of the actual software, a working prototype of the system be built.
• A prototype is a toy implementation of the system. A prototype usually turns out to be a very crude version of the actual system, possibly exhibiting limited functional capability, low reliability, and inefficient performance compared to the actual software.
• In many instances, the client only has a general view of what is expected from the software product.
• In such a scenario, where there is an absence of detailed information regarding the inputs to the system, the processing needs, and the output requirements, the prototyping model may be employed.

Steps of Prototype Model


• Requirement Gathering and Analysis
• Quick Decision
• Build a Prototype
• Assessment or User Evaluation
• Prototype Refinement
• Engineer Product
Advantage of Prototype Model

• Reduces the risk of incorrect user requirements.
• Good where requirements are changing or uncommitted.
• A regular, visible process aids management.
• Reduces maintenance cost.
• Errors can be detected much earlier, as the system is built side by side.
Disadvantage of Prototype Model

• An unstable or badly implemented prototype often becomes the final product.
• Requires extensive customer collaboration:
– Costs the customer money.
– Needs a committed customer.
– Difficult to finish if the customer withdraws.
• Difficult to know how long the project will last.
• Prototyping tools are expensive, e.g., Figma, InVision Studio, Adobe XD, Webflow, Axure RP 9, Origami Studio, Justinmind, Sketch.
• Special tools and techniques are required to build a prototype.
• It is a time-consuming process.
Evolutionary Process Model

• The evolutionary process model resembles the iterative enhancement model.
• The same phases that are defined for the waterfall model occur here in a cyclical fashion.
• This model differs from the iterative enhancement model in the sense that it does not require a useful product at the end of each cycle.
Benefits of Evolutionary Process Model

• Use of Evo (Evolutionary Delivery), i.e., delivering more business value in less time, brings a significant reduction in risk for software projects.
• Evo can reduce costs by providing a structured, disciplined avenue for experimentation.
• Evo allows the marketing department access to early deliveries, facilitating the development of documentation and demonstrations.
• It accelerates sales cycles through early customer exposure.
What is Project?

• A project is a group of tasks that need to be completed to reach a clear result. A project is also defined as a set of inputs and outputs required to achieve a goal. Projects can vary from simple to difficult and can be operated by one person or a hundred.
• Projects are usually described and approved by a project manager or team executive. They set out the expectations and aims, and it is up to the team to handle the logistics (logistics is generally the detailed organization and implementation of a complex operation) and complete the project on time. For good project development, some teams split the project into specific tasks so they can manage responsibility and utilize team strengths.
What is software project management?

• Software project management is the art and discipline of planning and supervising software projects. It is a sub-discipline of project management in which software projects are planned, implemented, monitored, and controlled.
• It is a procedure of managing, allocating, and timing resources to develop computer software that fulfils the requirements.
• In software project management, the client and the developers need to know the length, duration, and cost of the project.
Prerequisites of software project management

• There are three needs of software project management. These are:
• Time
• Cost
• Quality
• It is an essential part of a software organization to deliver a quality product, keep the cost within the client's budget, and deliver the project as per schedule. There are various factors, both external and internal, which may impact this triple constraint. Any one of the three factors can severely affect the other two.
Project Manager

• A project manager is a person who has the overall responsibility for the planning, design, execution, monitoring, controlling, and closure of a project. A project manager plays an essential role in the success of projects.
• A project manager is a person who is responsible for making decisions in both large and small projects. The project manager manages risk and minimizes uncertainty. Every decision the project manager makes must directly benefit their project.

Role of a Project Manager:

1. Leader
• A project manager must lead his team and should provide them direction to make them understand what is expected of all of them.
2. Medium:
• The project manager is a medium between his clients and his team. He must coordinate and transfer all the appropriate information from the clients to his team and report to senior management.
3. Mentor:
• He should be there to guide his team at each step and make sure that the team stays cohesive. He provides recommendations to his team and points them in the right direction.
Responsibilities of a Project Manager:

• Managing risks and issues.
• Creating the project team and assigning tasks to the team members.
• Activity planning and sequencing.
• Monitoring and reporting progress.
• Modifying the project plan to deal with the situation.
The list of activities of a Project Manager is as follows:

• Project Planning and Tracking
• Project Resource Management
• Scope Management
• Estimation Management
• Project Risk Management
• Scheduling Management
• Project Communication Management
• Configuration Management
Activities of a project manager:
• 1. Project Planning: It is a set of multiple processes, or we can say a set of tasks that are performed before the construction of the product starts.
• 2. Scope Management: It describes the scope of the project. Scope management is important because it clearly defines what you will do and what you will not.
• 3. Estimation Management: This is not only about cost estimation; whenever we start to develop software, we also estimate its size (lines of code), effort, time, and cost.
• If we talk about size, LOC depends upon the user or software requirements.
• If we talk about effort, we should know the size of the software, because based on the size we can quickly estimate how big a team is required to produce the software.
• If we talk about time, once size and effort are estimated, the time required to develop the software can be easily determined.
And if we talk about cost, it includes all the elements such as:
• Size of the software
• Quality
• Hardware
• Communication
• Training
• Additional software and tools
• Skilled manpower
A small estimation sketch follows.
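As a hedged illustration of how size drives effort and time estimates, the sketch below uses the classic Basic COCOMO organic-mode formulas (effort = 2.4 * KLOC^1.05 person-months, time = 2.5 * effort^0.38 months). COCOMO is not named in these slides, so treat the constants as one standard choice rather than the course's prescribed estimation method; the KLOC values are hypothetical.

# Basic COCOMO (organic mode) sketch: size -> effort -> time -> team size.
# The coefficients are the standard Basic COCOMO organic-mode constants;
# they are illustrative here, not prescribed by the course material.

def estimate(kloc: float):
    effort_pm = 2.4 * (kloc ** 1.05)        # effort in person-months
    time_months = 2.5 * (effort_pm ** 0.38) # development time in months
    avg_team_size = effort_pm / time_months # average staffing level
    return effort_pm, time_months, avg_team_size

if __name__ == "__main__":
    for size in (10, 50, 100):  # size in thousands of lines of code (KLOC)
        effort, duration, team = estimate(size)
        print(f"{size:4d} KLOC -> effort {effort:6.1f} PM, "
              f"time {duration:5.1f} months, team ~{team:4.1f} people")

This mirrors the order described above: size first, then effort from size, then time from effort, with team size falling out of effort divided by time.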
• 4. Scheduling Management: Scheduling management in software refers to completing all the activities in the specified order and within the time slotted to each activity. Project managers define the multiple tasks and arrange them keeping various factors in mind.
For scheduling, it is necessary to:
• Find out the multiple tasks and correlate them.
• Divide time into units.
• Assign the respective number of work-units to every job.
• Calculate the total time from start to finish.
• Break down the project into modules.
A small scheduling sketch follows.
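A minimal sketch of assigning work-units to jobs and calculating the total time from start to finish. The module names and work-unit figures are hypothetical, and for simplicity the jobs are assumed to run one after another rather than in parallel.

# Hypothetical breakdown of a project into modules and jobs,
# with work-units assigned to every job (1 work-unit = e.g. half a day).
project = {
    "Login module":   {"design": 4, "coding": 10, "testing": 6},
    "Reports module": {"design": 6, "coding": 14, "testing": 8},
    "Admin module":   {"design": 3, "coding": 8,  "testing": 5},
}

grand_total = 0
for module, jobs in project.items():
    module_total = sum(jobs.values())
    grand_total += module_total
    print(f"{module:16s} {module_total:3d} work-units")

# Assuming the jobs are done one after another (no parallel work),
# the total time from start to finish is the sum of all work-units.
print(f"{'Total':16s} {grand_total:3d} work-units")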
• 5. Project Resource Management: In software development, all the elements used on the project are referred to as resources. These can be human resources or productive tools.

Resource management includes:
• Creating a project team and assigning responsibilities to every team member.
• Developing a resource plan derived from the project plan.
• Adjustment of resources.
• 6. Project Risk Management: Risk management consists of all the activities of identifying, analyzing, and preparing the plan for predictable and unpredictable risks in the project.

Several points that indicate risk in a project:
• The experienced team leaves the project, and a new team joins it.
• Changes in requirements.
• Changes in technology and the environment.
• 7. Project Communication Management: Communication is an essential factor in the success of a project. It is a bridge between the client, the organization, the team members, and other stakeholders of the project, such as hardware suppliers.
• From planning to closure, communication plays a vital role. In all phases, communication must be clear and understood. Miscommunication can create a big blunder in the project.
• 8. Project Configuration Management: Configuration management is about controlling the changes in the software, such as changes in the requirements, design, and development of the product.
• The primary goal is to increase productivity with fewer errors.

Some reasons for configuration management:
• Several people work on software that needs to be continually updated (the synchronization problem).
• It helps to build coordination among suppliers.
• Changes in requirements, budget, and schedule need to be accommodated.
• The software should run on multiple systems.
Project Management Tools

• To manage projects adequately and efficiently, we use project management tools.
Here are some standard tools:
Gantt chart
• The Gantt chart was first developed by Henry Gantt in 1917. Gantt charts are usually utilized in project management, and they are one of the most popular and helpful ways of showing activities displayed against time. Each activity is represented by a bar.
• A Gantt chart is a useful tool when you want to see the entire landscape of either one or multiple projects. It helps you to see which tasks are dependent on one another and which events are coming up.
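As a minimal sketch of drawing activities against time as horizontal bars, the snippet below uses matplotlib (assumed to be available); the task names, start days, and durations are hypothetical.

import matplotlib.pyplot as plt

# Hypothetical activities: (name, start day, duration in days).
activities = [
    ("Requirements", 0, 5),
    ("Design",       5, 7),
    ("Coding",      12, 15),
    ("Testing",     22, 8),   # overlaps the end of coding
    ("Deployment",  30, 2),
]

fig, ax = plt.subplots()
names = [a[0] for a in activities]
starts = [a[1] for a in activities]
durations = [a[2] for a in activities]

# Each activity is one horizontal bar positioned at its start and sized by its duration.
ax.barh(names, durations, left=starts)
ax.invert_yaxis()                 # first activity at the top, like a typical Gantt chart
ax.set_xlabel("Project day")
ax.set_title("Simple Gantt chart")
plt.tight_layout()
plt.show()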
• PERT is an acronym for Program Evaluation Review Technique. It was developed in the 1950s by the U.S. Navy to handle the Polaris submarine missile program.
• In project management, a PERT chart is represented as a network diagram with a number of nodes, which represent events.
• The direction of the lines indicates the sequence of tasks. In the example above, the tasks between "Task 1 to Task 9" must be completed in sequence; these are known as dependent or serial tasks. The tasks between Task 4 and 5, and Task 4 and 6, are not dependent on each other and can be undertaken simultaneously; these are known as parallel or concurrent tasks.
• Tasks that must be completed in sequence but that don't require resources or completion time are considered to have event dependency. These are represented by dotted lines with arrows and are called dummy activities. For example, the dashed arrow linking nodes 6 and 9 indicates that the system files must be converted before the user test can take place, but that the resources and time required to prepare for the user test (writing the user manual and user training) are on another path. Numbers on the opposite sides of the vectors indicate the time allotted for the task.
Logic Network
• The Logic Network shows the order of activities over time. It shows the sequence in which activities are to be done. Moreover, it will help with understanding task dependencies, a timescale, and overall project workflow.
• Product Breakdown Structure
Work Breakdown Structure
• It is an important project deliverable that classifies the team's work into flexible segments. "Project
Management Body of Knowledge (PMBOK)" is a group of terminology that describes the work breakdown
structure as a "deliverable-oriented hierarchical breakdown of the work which is performed by the project
team."
• There are two ways to generate a Work Breakdown Structure: the top-down and the bottom-up approach.
• In the top-down approach, the WBS is derived by decomposing the overall project into subprojects or lower-level tasks.
• The bottom-up approach is more like a brainstorming exercise in which team members are asked to make a list of the step-by-step tasks required to complete the project. In many instances this can turn quite chaotic if the tasks identified by the team are not all at the same level. It can also be time consuming to ensure that all tasks at a given level have been completely identified.
• This approach is resource intensive since it assumes that all members of the team have sufficient domain knowledge and a complete understanding of the project requirements to be able to identify and integrate tasks at different levels. The biggest disadvantage of bottom-up estimating is that almost always more than a few low-level tasks are inadvertently omitted because team members are either not knowledgeable about or not sensitive to all parts of the project. The bottom-up approach is therefore not recommended unless the WBS is created by a group of experts who have very detailed knowledge of the project and its decomposed elements.
Resource Histogram
• The resource histogram is essentially a bar chart used for displaying the amount of time that a resource is scheduled to work over a pre-arranged, specific period. Resource histograms can also show the related information of resource availability, used for comparison purposes.
Critical Path Analysis
• Critical path analysis is a technique used to identify the activities required to complete a task, the time needed to finish each activity, and the relationships between the activities. It is also called the critical path method (CPM). CPA helps in predicting whether a project will finish on time.
Software Metrics
• A software metric is a measure of software characteristics which are measurable or
countable. Software metrics are valuable for many reasons, including measuring software
performance, planning work items, measuring productivity, and many other uses.
• Within the software development process, there are many metrics that are all interrelated. Software metrics relate to the four functions of management: planning, organization, control, and improvement.
Classification of Software Metrics
Software metrics can be classified into two types as follows:
• 1. Product Metrics: These are the measures of various characteristics of the software product. The two important software characteristics are:
• Size and complexity of software.
• Quality and reliability of software.
• These metrics can be computed for different stages of SDLC.
• 2. Process Metrics: These are the measures of various
characteristics of the software development process. For
example, the efficiency of fault detection. They are used to
measure the characteristics of methods, techniques, and tools
that are used for developing software.
Types of Metrics

• Internal metrics: Internal metrics are metrics that are of greater importance to the software developer, e.g., the Lines of Code (LOC) measure.
• External metrics: External metrics are the metrics used for measuring properties that are of greater
importance to the user, e.g., portability (Being able to move software from one machine platform
to another. It refers to system software or application software that can be recompiled for a
different platform or to software that is available for two or more different platforms), reliability
(Software reliability is the probability of failure-free operation of a computer program for a
specified period in a specified environment), functionality (Functionality is the ability of the system
to do the work for which it was intended), usability (usability is the degree to which a software can
be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and
satisfaction in a quantified context of use), etc.
• Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource metrics.
For example, cost per FP where FP stands for Function Point Metric.
• Project metrics: Project metrics are the metrics used by the project manager to check the project's
progress. Data from the past projects are used to collect various metrics, like time and cost; these
estimates are used as a base of new software. Note that as the project proceeds, the project
manager will check its progress from time-to-time. Also understand that these metrics are used to
decrease the development costs, time, efforts and risks. The project quality can also be improved. As
quality improves, the number of errors and time, as well as cost required, is also reduced.
Advantage of Software Metrics
• Comparative study of various design methodologies of software systems.
• For analysis, comparison, and critical study of different programming languages with respect to their characteristics.
• In comparing and evaluating the capabilities and productivity of people involved in software development.
• In the preparation of software quality specifications.
• In making inferences about the effort to be put into the design and development of the software systems.
• In getting an idea about the complexity of the code.
• In deciding whether further division of a complex module is required or not.
• In guiding resource managers towards proper utilization of resources.
• In providing feedback to software managers about progress and quality during various phases of the software development life cycle.
• In the allocation of testing resources for testing the code.
Disadvantage of Software Metrics
• The application of software metrics is not always easy, and in some cases it is difficult and costly.
• The verification and justification of software metrics are based on historical/empirical data whose validity is difficult to verify.
• They are useful for managing software products but not for evaluating the performance of the technical staff.
• The definition and derivation of software metrics are usually based on assumptions that are not standardized and may depend upon the tools available and the working environment.
• Most of the predictive models rely on estimates of certain variables which are often not known precisely.
Size Oriented Metrics

LOC Metrics
• It is one of the earliest and simplest metrics for calculating the size of a computer program. It is generally used in calculating and comparing the productivity of programmers. These metrics are derived by normalizing quality and productivity measures with respect to the size of the product.
Following are the points regarding LOC measures:
• It is an older method that was developed when FORTRAN and COBOL programming were very popular.
• Productivity is defined as KLOC / EFFORT, where effort is measured in person-months.
• Size-oriented metrics depend on the programming language used.
• As productivity depends on KLOC, assembly language code will show higher productivity for the same functionality.
• The LOC method of measurement does not apply to projects that deal with visual (GUI-based) programming, since Graphical User Interfaces (GUIs) basically use forms; the LOC metric is not applicable there.
• It requires that all organizations use the same method for counting LOC. This is because some organizations count only executable statements, some include comments, and some do not. Thus, a standard needs to be established.
• These metrics are not universally accepted.
Based on the LOC/KLOC count of software, many other metrics can be computed, as illustrated in the sketch after this list:
• Errors/KLOC.
• $/KLOC.
• Pages of documentation/KLOC.
• Errors/PM.
• Productivity = KLOC/PM (effort is measured in person-months).
• $/Page of documentation.
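As a minimal illustration (all project figures below are hypothetical, introduced only for this sketch), these derived metrics can be computed directly from a project's LOC count, effort, cost, defect count, and documentation pages:

# Size-oriented (LOC/KLOC based) metrics; the project figures are hypothetical.
loc = 12_500          # delivered lines of code
effort_pm = 24        # effort in person-months
cost = 168_000        # total cost in dollars
errors = 134          # defects found
doc_pages = 365       # pages of documentation

kloc = loc / 1000
print(f"Errors/KLOC            : {errors / kloc:.2f}")
print(f"$/KLOC                 : {cost / kloc:.2f}")
print(f"Pages of doc/KLOC      : {doc_pages / kloc:.2f}")
print(f"Errors/PM              : {errors / effort_pm:.2f}")
print(f"Productivity (KLOC/PM) : {kloc / effort_pm:.2f}")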
Advantages of LOC
• Simple to measure
Disadvantage of LOC
• It is defined on the code. For example, it cannot measure the size of the specification.
• It characterizes only one specific view of size, namely length; it takes no account of functionality or complexity.
• Bad software design may cause an excessive number of lines of code.
• It is language dependent.
• Users cannot easily understand it.
Halstead's Software Metrics
• According to Halstead, "A computer program is an implementation of an algorithm considered to be a collection of tokens which can be classified as either operators or operands."
Token Count
• In these metrics, a computer program is considered to be a collection of tokens, which may be classified as either operators or operands. All software science metrics can be defined in terms of these basic symbols, which are called tokens.
The basic measures are:
• n1 = count of unique operators.
n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrences of operands.
• In terms of the total tokens used, the size of the program can be expressed as N = N1 + N2.
Halstead metrics are:
Program Volume (V)
The unit of measurement of volume is the standard unit for size, "bits". It is the actual size of a program if a uniform binary encoding for the vocabulary is used (each ASCII character is encoded in binary using 7 bits, e.g., A = 65 = 1000001).
V = N * log2(n)

Potential Minimum Volume (V*) - The potential minimum volume V* is defined as the volume of the most succinct (briefly and clearly expressed) program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters.

Program Level (L)
The value of L ranges between zero and one, with L = 1 representing a program written at the highest possible level (i.e., with minimum size).
L = V* / V

Program Difficulty (D)
The difficulty level or error-proneness (D) of the program is proportional to the number of unique operators in the program.
D = (n1 / 2) * (N2 / n2)
Programming Effort (E)
The unit of measurement of E is elementary mental discriminations.
E = V / L = D * V
Estimated Program Length (N^)
According to Halstead, the first hypothesis of software science is that the length of a well-structured program is a function only of the number of unique operators and operands.
N = N1 + N2
The estimated program length is denoted by N^:
N^ = n1 * log2(n1) + n2 * log2(n2)
Potential Minimum Volume
• The potential minimum volume V* is defined as the volume of the most succinct program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters (a special kind of variable used in a subroutine).
Size of Vocabulary (n)
The size of the vocabulary of a program, which consists of the number of unique tokens used to build the program, is defined as:
n = n1 + n2
where
n = vocabulary of the program
n1 = number of unique operators
n2 = number of unique operands
Counting rules for the C language
• Comments are not considered.
• Identifier and function declarations are not considered.
• All the variables and constants are considered operands.
• Global variables used in different modules of the same program are counted as multiple occurrences of the same variable.
• Local variables with the same name in different functions are counted as unique operands.
• Function calls are considered operators.
• All looping statements, e.g., do {...} while ( ), while ( ) {...}, for ( ) {...}, and all control statements, e.g., if ( ) {...}, if ( ) {...} else {...}, etc., are considered operators.
• In the control construct switch ( ) {case:...}, switch as well as all the case statements are considered operators.
• The reserved words like return, default, continue, break, sizeof, etc., are considered operators.
• All the brackets, commas, and terminators are considered operators.
• GOTO is counted as an operator, and the label is counted as an operand.
• The unary and binary occurrences of "+" and "-" are dealt with separately. Similarly, "*" (the multiplication operator) is dealt with separately.
• In array variables such as "array-name [index]", "array-name" and "index" are considered operands and [ ] is considered an operator.
• In structure variables such as "struct-name.member-name" or "struct-name -> member-name", struct-name and member-name are considered operands and '.', '->' are taken as operators. Member names appearing in different structure variables are counted as unique operands.
• Example: Consider the sorting program shown in the figure. List out the operators and operands and also calculate the values of software science measures like n, N, V, E, λ, etc.
• Solution: The list of operators and operands is given in the following table.
Operators    Occurrences    Operands    Occurrences
int          4              SORT        1
()           5              x           7
,            4              n           3
[]           7              i           8
if           2              j           7
<            2              save        3
;            11             im1         3
for          2              2           2
=            6              1           3
-            1              0           1
<=           2              -           -
++           2              -           -
return       2              -           -
{}           3              -           -
n1=14        N1=53          n2=10       N2=38
• Here N1 = 53 and N2 = 38. The program length N = N1 + N2 = 53 + 38 = 91.
• Vocabulary of the program n = n1 + n2 = 14 + 10 = 24.
• Volume V = N * log2(n) = 91 * log2(24) = 417 bits.
• The estimated program length N^ of the program
  = 14 log2(14) + 10 log2(10)
  = 14 * 3.81 + 10 * 3.32
  = 53.34 + 33.2 = 86.54
• Conceptually unique input and output parameters are represented by n2*.
  n2* = 3 {x: the array holding the integers to be sorted, used as both input and output; n: the size of the array to be sorted}
• The potential volume V* = 5 * log2(5) = 11.6.
• Program level L = V*/V = 11.6/417 = 0.027
  D = 1/L = 1/0.027 = 37.03
  Estimated program level L^ = (2/n1) * (n2/N2) = (2/14) * (10/38) = 0.0376
• We may use another formula:
  V^ = V * L^ = 417 * 0.0376 = 15.67
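The figures above can be cross-checked with a short script that plugs the operator/operand counts from the table into Halstead's formulas. This is only a sketch: the counts are taken as given in the table, not extracted from source code.

import math

# Counts taken from the operator/operand table of the SORT example.
n1, n2 = 14, 10        # unique operators, unique operands
N1, N2 = 53, 38        # total operator and operand occurrences
n2_star = 3            # unique input/output parameters, as given in the example

N = N1 + N2                                        # program length
n = n1 + n2                                        # vocabulary
V = N * math.log2(n)                               # volume (bits)
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated length
V_star = (2 + n2_star) * math.log2(2 + n2_star)    # potential minimum volume
L = V_star / V                                     # program level
D = 1 / L                                          # difficulty
L_hat = (2 / n1) * (n2 / N2)                       # estimated program level
E = V / L                                          # effort (elementary mental discriminations)

print(f"N = {N}, n = {n}, V = {V:.1f} bits")
print(f"N^ = {N_hat:.2f}, V* = {V_star:.2f}")
print(f"L = {L:.3f}, D = {D:.1f}, L^ = {L_hat:.4f}, E = {E:.0f}")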
Functional Point (FP) Analysis

• Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM, and it has been further modified by the International Function Point Users Group (IFPUG). FPA is used to make an estimate of the software project, including its testing, in terms of the functionality or functional size of the software product. Functional point analysis may also be used for the test estimation of the product. The functional size of the product is measured in terms of the function point, which is a standard unit of measurement for sizing a software application.
Objectives of FPA
• The basic and primary purpose of functional point analysis is to measure and provide the functional size of the software application to the client, customer, and stakeholders on their request. Further, it is used to measure software project development along with its maintenance, consistently throughout the project irrespective of the tools and technologies used.
Following are the points regarding FPs:
• FPs of an application are found out by counting the number and types of functions used in the application. The various functions used in an application can be put under five types, as shown in the table:
Types of FP Attributes
Measurement Parameters                     Examples
1. Number of External Inputs (EI)          Input screens and tables
2. Number of External Outputs (EO)         Output screens and reports
3. Number of External Inquiries (EQ)       Prompts and interrupts
4. Number of Internal Files (ILF)          Databases and directories
5. Number of External Interfaces (EIF)     Shared databases and shared routines
• All these parameters are then individually assessed for complexity.
• The FPA functional units are shown in the figure.
1. FPs of an application are found out by counting the number and types of functions used in the application. The various functions used in an application can be put under five types, as shown in the table above.
2. FP characterizes the complexity of the software system and hence
can be used to depict the project time and the manpower
requirement.
3. The effort required to develop the project depends on what the
software does.
4. FP is programming language independent.
5. FP method is used for data processing systems, business systems
like information systems.
6. The five parameters mentioned above are also known as
information domain characteristics.
7. All the parameters mentioned above are assigned some weights
that have been experimentally determined and are shown in Table
Weights of 5-FP Attributes
Measurement Parameter                     Low    Average    High
1. Number of external inputs (EI)          7       10        15
2. Number of external outputs (EO)         5        7        10
3. Number of external inquiries (EQ)       3        4         6
4. Number of internal files (ILF)          4        5         7
5. Number of external interfaces (EIF)     3        4         6
• The functional complexities are multiplied
with the corresponding weights against each
function, and the values are added up to
determine the UFP (Unadjusted Function
Point) of the subsystem.
• The weighing factor will be simple (low), average, or complex (high) for each measurement parameter type. The Function Point (FP) is then calculated with the following formula:
• FP = Count-total * [0.65 + 0.01 * ∑(fi)]
     = Count-total * CAF
  where Count-total is the sum of all the weighted counts obtained from the above table.
• CAF = [0.65 + 0.01 * ∑(fi)]
• and ∑(fi) is the sum of all 14 questionnaires and gives the complexity adjustment value/factor CAF (where i ranges from 1 to 14). Usually, a student is provided with the value of ∑(fi).
• Also note that ∑(fi) ranges from 0 to 70, i.e.,
• 0 <= ∑(fi) <= 70
• and CAF ranges from 0.65 to 1.35 because
• when ∑(fi) = 0, CAF = 0.65
• when ∑(fi) = 70, CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35
• Based on the FP measure of software many other metrics can be computed:
• Errors/FP
• $/FP.
• Defects/FP
• Pages of documentation/FP
• Errors/PM.
• Productivity = FP/PM (effort is measured in person-months).
• $/Page of Documentation.
• 8. LOCs of an application can be estimated from FPs. That is, they are
interconvertible. This process is known as backfiring. For example, 1 FP is equal to
about 100 lines of COBOL code.
• 9. FP metrics are used mostly for measuring the size of Management Information System (MIS) software.
• 10. The function points obtained above are unadjusted function points (UFPs). The UFPs of a subsystem are further adjusted by considering some General System Characteristics (GSCs). There is a set of 14 GSCs that need to be considered. The procedure for adjusting UFPs is as follows:
• (a) The Degree of Influence (DI) for each of these 14 GSCs is assessed on a scale of 0 to 5. If a particular GSC has no influence, its weight is taken as 0, and if it has a strong influence, its weight is 5.
• (b) The scores of all 14 GSCs are totaled to determine the Total Degree of Influence (TDI).
• (c) The Value Adjustment Factor (VAF) is then computed from TDI using the formula: VAF = (TDI * 0.01) + 0.65
• Remember that the value of VAF lies between 0.65 and 1.35 because
• when TDI = 0, VAF = 0.65
• when TDI = 70, VAF = 1.35
• (d) VAF is then multiplied with the UFP to get the final FP count: FP = VAF * UFP
• Example: Compute the function point, productivity, documentation, cost
per function for the following data:
• Number of user inputs = 24
• Number of user outputs = 46
• Number of inquiries = 8
• Number of files = 4
• Number of external interfaces = 2
• Effort = 36.9 PM
• Technical documents = 265 pages
• User documents = 122 pages
• Cost = $7744/ month
• Various processing complexity factors are: 4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4,
5.
Solution:
Measurement Parameter                     Count   Weighing factor
1. Number of external inputs (EI)          24 * 4 = 96
2. Number of external outputs (EO)         46 * 4 = 184
3. Number of external inquiries (EQ)        8 * 6 = 48
4. Number of internal files (ILF)           4 * 10 = 40
5. Number of external interfaces (EIF)      2 * 5 = 10
Count-total →                              378
• So sum of all fi (i ← 1 to 14) = 4 + 1 + 0 + 3 + 3 + 5 + 4 + 4 +
3 + 3 + 2 + 2 + 4 + 5 = 43
• FP = Count-total * [0.65 + 0.01 *∑(fi)]
= 378 * [0.65 + 0.01 * 43]
= 378 * [0.65 + 0.43]
= 378 * 1.08 = 408
• Total pages of documentation = technical documents + user documents
  = 265 + 122 = 387 pages
• Documentation = pages of documentation / FP
  = 387 / 408 = 0.95 pages per FP
• The remaining quantities asked for (productivity and cost per FP) are worked out in the sketch below.
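A short sketch, using the counts, weights, and complexity factors from this example, reproduces the FP value and fills in the productivity and cost-per-FP figures. It assumes the usual definitions productivity = FP/effort and cost per FP = total cost / FP, which are not spelled out on the slide.

# Sketch reproducing the FP example; the weights are the ones used in the
# solution table above, and the productivity/cost formulas are assumed.
counts_and_weights = {
    "EI":  (24, 4),
    "EO":  (46, 4),
    "EQ":  (8, 6),
    "ILF": (4, 10),
    "EIF": (2, 5),
}
fi = [4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5]   # 14 complexity adjustment factors

count_total = sum(c * w for c, w in counts_and_weights.values())
caf = 0.65 + 0.01 * sum(fi)
fp = count_total * caf

effort_pm = 36.9
doc_pages = 265 + 122
cost_per_month = 7744

print(f"Count-total   = {count_total}")            # 378
print(f"CAF           = {caf:.2f}")                # 1.08
print(f"FP            = {fp:.0f}")                 # ~408
print(f"Documentation = {doc_pages / fp:.2f} pages/FP")
print(f"Productivity  = {fp / effort_pm:.2f} FP/PM")
print(f"Cost per FP   = ${cost_per_month * effort_pm / fp:.2f}")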
Differentiate between FP and LOC
FP                                  LOC
1. FP is specification based.       1. LOC is analogy based.
2. FP is language independent.      2. LOC is language dependent.
3. FP is user-oriented.             3. LOC is design-oriented.
4. It is extendible to LOC.         4. It is convertible to FP (backfiring).

• Extended Function Point (EFP) Metrics
FP metric has been further extended to compute:
• Feature points.
• 3D function points.
• Feature Points
• The feature point is a superset of the function point measure that can be applied to systems and engineering software applications.
• Feature points are used in applications in which the algorithmic complexity is high, such as real-time systems with time constraints, embedded systems, etc.
• Feature points are computed by counting the information domain values, and each is weighted by only a single weight.
• The feature point includes another measurement parameter: ALGORITHM.
• The table for the computation of feature points is as follows:
Feature Point Calculations
Measurement Parameter                      Count        Weighing factor
1. Number of external inputs (EI)            -     * 4        -
2. Number of external outputs (EO)           -     * 5        -
3. Number of external inquiries (EQ)         -     * 4        -
4. Number of internal files (ILF)            -     * 7        -
5. Number of external interfaces (EIF)       -     * 7        -
6. Algorithms used                           -     * 3        -
Count-total →
The feature point is thus calculated with the following formula:
• FP = Count-total * [0.65 + 0.01 * ∑(fi)]
     = Count-total * CAF
  where Count-total is obtained from the above table,
  CAF = [0.65 + 0.01 * ∑(fi)],
  and ∑(fi) is the sum of all 14 questionnaires and gives the complexity adjustment value/factor CAF (where i ranges from 1 to 14). Usually, a student is provided with the value of ∑(fi).
• 6. Function point and feature point both represent a system's functionality only.
• 7. For real-time applications that are very complex, the feature point count is between 20 and 35% higher than the count determined using function points alone.
• 3D Function Points
• 1. Three dimensions may be used to represent 3D function points: the data dimension, the functional dimension, and the control dimension.
• 2. The data dimension is evaluated as FPs are calculated. Herein, counts are made for inputs, outputs, inquiries, external interfaces, and files.
• 3. The functional dimension adds another feature, Transformation, that is, the sequence of steps which transforms input to output.
• 4. The control dimension adds another feature, Transition, which is defined as the total number of transitions between states. A state represents some externally observable mode.
• Now fi for the average case = 3, so the sum of all fi (i = 1 to 14) = 14 * 3 = 42.
  FP = Count-total * [0.65 + 0.01 * ∑(fi)]
     = 618 * [0.65 + 0.01 * 42]
     = 618 * [0.65 + 0.42]
     = 618 * 1.07 = 661.26
• and feature point = (32 * 4 + 60 * 5 + 24 * 4 + 80 + 14) * 1.07 + (12 * 15 * 1.07) = 853.86
• Example: Compute the 3D-function point value for an embedded system with the following
characteristics:
• Internal data structures = 6
• External data structures = 3
• No. of user inputs = 12
• No. of user outputs = 60
• No. of user inquiries = 9
• No. of external interfaces = 3
• Transformations = 36
• Transitions = 24
• Assume complexity of the above counts is high.
• Solution: We draw the Table first. For
embedded systems, the weighting factor is
complex and complexity is high. So,
Data Structure Metrics
• Essentially, the purpose of software development and other activities is to process data. Some data is input to a system, program, or module; some data may be used internally; and some data is the output from a system, program, or module.
Example:
Program            Data Input                           Internal Data                 Data Output
Payroll            Name / Social Security No. /         Withholding rates,            Gross pay, withholding,
                   Pay rate / Number of hours worked    overtime factors,             net pay, pay ledgers
                                                        insurance premium rates
Spreadsheet        Item names / Item amounts /          Cell computations             Spreadsheet of items,
                   Relationships among items                                          subtotals and totals
Software Planner   Program size / Number of             Model parameter               Estimated project effort,
                   software developers on team          constants, coefficients       estimated project duration
• That is why an important set of metrics captures the amount of data input to, processed within, and output from the software. A count based on these data structures is called a data structure metric. Here the concentration is on the variables (and given constants) within each module, ignoring the input-output dependencies.
• There are several data structure metrics to compute the effort and time required to complete the project. These metrics are:
• The amount of data.
• The usage of data within a module.
• Program weakness.
• The sharing of data among modules.
1. The Amount of Data: To measure the amount of data, there are several different metrics:
• Number of variables (VARS): In this metric, the number of variables used in the program is counted.
• Number of operands (η2): In this metric, the number of operands used in the program is counted.
  η2 = VARS + Constants + Labels
• Total number of occurrences of variables (N2): In this metric, the total number of occurrences of the variables is computed.
2. The Usage of Data within a Module: To measure this metric, the average number of live variables is computed. A variable is live from its first to its last reference within the procedure.
• For example, to characterize the average number of live variables for a program having m modules, we can use the equation:
  LV(program) = (Σ LVi) / m, for i = 1 ... m
  where LVi is the average live variable metric computed for the ith module. A similar equation can be used to compute the average span size (SP) for a program of n spans.
• 3. Program Weakness: Program weakness depends on its modules' weakness. If the modules are weak (less cohesive), the effort and time required to complete the project increase.
• Module weakness (WM) = LV * γ
• A program is normally a combination of various modules; hence, program weakness can be a useful measure and is defined as (see the sketch after this definition):
• WP = (∑ WMi, i = 1 ... m) / m
  where
• WMi: weakness of the ith module
• WP: weakness of the program
• m: number of modules in the program
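A minimal sketch of the program-weakness computation follows. The per-module LV values and the γ factors below are hypothetical, since the slides do not give numerical examples or a definition of γ here; they are used only to show how WM and WP combine.

# Program weakness WP from module weaknesses WM = LV * gamma.
# The per-module LV values and gamma factors are hypothetical.
modules = [
    {"name": "parse",  "lv": 4.2, "gamma": 1.3},
    {"name": "update", "lv": 6.8, "gamma": 1.1},
    {"name": "report", "lv": 3.5, "gamma": 1.6},
]

weaknesses = [m["lv"] * m["gamma"] for m in modules]   # WM_i = LV_i * gamma_i
wp = sum(weaknesses) / len(weaknesses)                 # WP = (sum of WM_i) / m

for m, wm in zip(modules, weaknesses):
    print(f"WM({m['name']}) = {wm:.2f}")
print(f"Program weakness WP = {wp:.2f}")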
• 4. The Sharing of Data among Modules: As the data sharing between modules increases (higher coupling), the number of parameters passed between modules also increases; as a result, more effort and time are required to complete the project. So sharing of data among modules is an important metric for calculating effort and time.
Information Flow Metrics
• The other set of metrics we would like to consider is known as Information Flow Metrics. The basis of information flow metrics is the following idea: the simplest system consists of components, and it is the work that these components do and how they are fitted together that determines the complexity of the system. The following working definitions are used in information flow:
Component: Any element identified by decomposing a (software) system into its constituent parts.
Cohesion: The degree to which a component performs a single function.
Coupling: The term used to describe the degree of linkage between one component and others in the same system.
• Information flow metrics deal with this type of complexity by observing the flow of information among system components or modules. This metric was given by Henry and Kafura, so it is also known as Henry and Kafura's metric.
• This metric is based on the measurement of the information flow among system modules. It is sensitive to the complexity due to interconnections among system components. The complexity of a software module is defined to be the sum of the complexities of the procedures included in the module. A procedure contributes to the complexity due to the following two factors:
• The complexity of the procedure code itself.
• The complexity due to the procedure's connections to its environment. The effect of the first factor is included through the LOC (Lines Of Code) measure. For the quantification of the second factor, Henry and Kafura defined two terms, namely FAN-IN and FAN-OUT.
FAN-IN: The FAN-IN of a procedure is the number of local flows into that procedure plus the number of data structures from which this procedure retrieves information.
FAN-OUT: The FAN-OUT of a procedure is the number of local flows from that procedure plus the number of data structures which that procedure updates.
• Procedure Complexity = Length * (FAN-IN * FAN-OUT)^2
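A small sketch of the Henry and Kafura measure follows; the procedure names, lengths, and fan-in/fan-out counts below are hypothetical and serve only to show how the formula is applied and summed per module.

# Henry & Kafura information flow complexity:
#   procedure complexity = length * (fan_in * fan_out) ** 2
# Lengths (in LOC) and fan-in/fan-out values are hypothetical.
procedures = [
    {"name": "read_input",   "length": 40,  "fan_in": 1, "fan_out": 3},
    {"name": "compute_pay",  "length": 120, "fan_in": 4, "fan_out": 2},
    {"name": "write_ledger", "length": 60,  "fan_in": 3, "fan_out": 1},
]

module_complexity = 0
for p in procedures:
    c = p["length"] * (p["fan_in"] * p["fan_out"]) ** 2
    module_complexity += c          # module complexity = sum of procedure complexities
    print(f"{p['name']:<13} complexity = {c}")

print(f"Module complexity = {module_complexity}")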
Cyclomatic Complexity

• Cyclomatic complexity is a software metric used to measure the complexity of a program. Thomas J. McCabe developed this metric in 1976. McCabe interprets a computer program as a strongly connected directed graph: nodes represent parts of the source code having no branches, and arcs represent possible control flow transfers during program execution. The notion of a program graph has been used for this measure, and it is used to measure and control the number of paths through a program. The complexity of a computer program can be correlated with the topological complexity of its graph.
How to Calculate Cyclomatic Complexity?
• McCabe proposed the cyclomatic number V(G) of graph theory as an indicator of software complexity. The cyclomatic number is equal to the number of linearly independent paths through a program in its graph representation. For a program control graph G, the cyclomatic number V(G) is given as:
V(G) = E - N + 2 * P
• E = the number of edges in graph G
• N = the number of nodes in graph G
• P = the number of connected components in graph G
Properties of Cyclomatic Complexity:
Following are the properties of cyclomatic complexity (a small computational sketch follows this list):
• V(G) is the maximum number of independent paths in the graph.
• V(G) >= 1.
• G will have one path if V(G) = 1.
• McCabe recommended keeping the cyclomatic complexity of a module to at most 10.
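A minimal sketch of V(G) = E - N + 2P, using a hypothetical control flow graph given as an edge list:

# Cyclomatic complexity V(G) = E - N + 2 * P for a control flow graph.
# The edge list describes a hypothetical program graph with one connected
# component (P = 1): an if/else at node 1 followed by a loop between 4 and 5.
edges = [
    (1, 2), (1, 3),      # branch at node 1
    (2, 4), (3, 4),      # branches rejoin at node 4
    (4, 5), (5, 4),      # loop between 4 and 5
    (5, 6),              # exit
]
nodes = {n for edge in edges for n in edge}
P = 1                    # number of connected components

E, N = len(edges), len(nodes)
v_g = E - N + 2 * P
print(f"E = {E}, N = {N}, V(G) = {v_g}")   # 7 - 6 + 2 = 3 independent paths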
CASE Tools For Software Metrics

• Many CASE tools (Computer Aided Software Engineering tools) exist for measuring
software. They are either open source or are paid tools. Some of them are listed below:
• Analyst4j tool is based on the Eclipse platform and available as a stand-alone Rich Client
Application or as an Eclipse IDE plug-in. It features search, metrics, analyzing quality, and
report generation for Java programs.
• CCCC is an open source command-line tool. It analyzes C++ and Java lines and generates
reports on various metrics, including Lines of Code and metrics proposed by Chidamber &
Kemerer and Henry & Kafura.
• Chidamber & Kemerer Java Metrics is an open source command-line tool. It calculates the
C&K object-oriented metrics by processing the byte-code of compiled Java.
• Dependency Finder is an open source. It is a suite of tools for analyzing compiled Java
code. Its core is a dependency analysis application that extracts dependency graphs and
mines them for useful information. This application comes as a command-line tool, a
Swing-based application, and a web application.
• Eclipse Metrics Plug-in 1.3.6 by Frank Sauer is an open source metrics calculation and
dependency analyzer plugin for the Eclipse IDE. It measures various metrics and detects
cycles in package and type dependencies.
• Eclipse Metrics Plug-in 3.4 by Lance Walton is open source. It calculates various metrics during build cycles and warns, via the problems view, of metric 'range violations'.
• OOMeter is an experimental software metrics tool developed by Alghamdi. It accepts Java/C# source code and UML models in XMI and calculates various metrics.
• Semmle is an Eclipse plug-in. It provides an SQL-like querying language for object-oriented code, which allows searching for bugs, measuring code metrics, etc.
Software Project Planning

• A software project is the complete procedure of software development, from requirement gathering to testing and maintenance, carried out according to defined execution methodologies in a specified period of time to achieve the intended software product.
• Need of Software Project Management
• Software development is a relatively new stream in world business, and there is very little experience in building software products. Most software products are tailor-made to fit the customer's requirements. More significantly, the underlying technology changes and advances so frequently and rapidly that experience gained on one product may not apply to another. All such business and environmental constraints bring risk to software development; hence, it is essential to manage software projects efficiently.
Software Project Manager

• The software project manager is responsible for planning and scheduling project development. They manage the work to ensure that it is completed to the required standard. They monitor progress to check that the work is on time and within budget. Project planning must incorporate the major issues like size and cost estimation, scheduling, project monitoring, personnel selection and evaluation, and risk management. To plan a successful software project, we must understand:
• The scope of the work to be completed
• The risks to be analyzed
• The resources required
• The task to be accomplished
• The schedule to be followed
• Software project planning starts before the technical work starts. Among the various planning activities, the size is the crucial parameter for the estimation of the other activities. Resource requirements are estimated based on cost and development time. The project schedule, which in turn depends on resources and development time, may prove to be very useful for controlling and monitoring the progress of the project.
Software Cost Estimation

• For any new software project, it is necessary to know how much it will cost to develop and how much development time it will take. These estimates are needed before development is initiated, but how is this done? Several estimation procedures have been developed, and they have the following attributes in common:
• Project scope must be established in advance.
• Software metrics are used as a support from which evaluation is made.
• The project is broken into small pieces which are estimated individually.
To achieve reliable cost and schedule estimates, several options arise:
• Delay estimation until late in the project.
• Use relatively simple decomposition techniques to generate project cost and schedule estimates.
• Acquire one or more automated estimation tools.
Uses of Cost Estimation
• During the planning stage, one needs to choose how many engineers are required for the project and to develop a schedule.
• In monitoring the project's progress, one needs to assess whether the project is progressing according to the plan and take corrective action, if necessary.
Cost Estimation Models
• A model may be static or dynamic. In a static model, a single variable is taken as the key element for calculating cost and time. In a dynamic model, all variables are interdependent, and there is no single basic variable.
• Static, Single-Variable Models: When a model makes use of a single variable to calculate desired values such as cost, time, effort, etc., it is said to be a single-variable model. The most common equation is:
  C = a * L^b
  where C = cost, L = size, and a and b are constants.
• The Software Engineering Laboratory established a model called the SEL model for estimating its software production. This model is an example of a static, single-variable model:
  E = 1.4 * L^0.93
  DOC = 30.4 * L^0.90
  D = 4.6 * L^0.26
  where E = effort (in person-months)
  DOC = documentation (number of pages)
  D = duration (in months)
  L = number of lines of code (in thousands, i.e., KLOC)
• Static, Multivariable Models: These models are based on method (1); they depend on several variables describing various aspects of the software development environment. In some models, several variables are needed to describe the software development process, and the selected equation combines these variables to give an estimate of time and cost. These models are called multivariable models.
• WALSTON and FELIX developed their model at IBM; it provides the following relationship between lines of source code and effort:
  E = 5.2 * L^0.91
• In the same manner, the duration of development is given by
  D = 4.1 * L^0.36
• The productivity index uses 29 variables which are found to be highly correlated with productivity, as follows:
  I = ∑ Wi * Xi  (i = 1 to 29)
  where Wi is the weight factor for the ith variable and Xi = {-1, 0, +1}; the estimator gives Xi one of the values -1, 0, or +1 depending on whether the variable decreases, has no effect on, or increases the productivity.
• Example: Compare the Walston-Felix model with the SEL model on a software development project expected to involve 8 person-years of effort.
  (a) Calculate the number of lines of source code that can be produced.
  (b) Calculate the duration of the development.
  (c) Calculate the productivity in LOC/PY.
  (d) Calculate the average manning.
• Solution:
  The amount of manpower involved = 8 PY = 96 person-months.
  (a) The number of lines of source code can be obtained by reversing the effort equations:
  L(SEL) = (96/1.4)^(1/0.93) = 94.264 KLOC ≈ 94264 LOC
  L(W-F) = (96/5.2)^(1/0.91) = 24.632 KLOC ≈ 24632 LOC
  (b) The duration in months can be calculated by means of the duration equations:
  D(SEL) = 4.6 * (L)^0.26 = 4.6 * (94.264)^0.26 = 15 months
  D(W-F) = 4.1 * (L)^0.36 = 4.1 * (24.632)^0.36 = 13 months
  (c) Productivity is the lines of code produced per person-month (or per person-year).
  (d) Average manning is the average number of persons required per month in the project.
  Parts (c) and (d) are worked out numerically in the sketch below.
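A short sketch comparing the two models for this example; it assumes productivity = L / effort and average manning = effort / duration, as the definitions above describe.

# SEL vs Walston-Felix for an effort of 8 person-years (96 person-months).
# L is in KLOC throughout.
E_pm = 8 * 12            # effort in person-months
E_py = 8                 # effort in person-years

# (a) size: invert E = a * L**b  =>  L = (E / a) ** (1 / b)
L_sel = (E_pm / 1.4) ** (1 / 0.93)
L_wf = (E_pm / 5.2) ** (1 / 0.91)

# (b) duration: D = c * L**d
D_sel = 4.6 * L_sel ** 0.26
D_wf = 4.1 * L_wf ** 0.36

# (c) productivity = L / effort, (d) average manning = effort / duration
for name, L, D in (("SEL", L_sel, D_sel), ("Walston-Felix", L_wf, D_wf)):
    print(f"{name:>13}: L = {L * 1000:6.0f} LOC, D = {D:4.1f} months, "
          f"productivity = {L * 1000 / E_py:5.0f} LOC/PY, "
          f"average manning = {E_pm / D:4.1f} persons")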
COCOMO Model
• Boehm proposed COCOMO (Constructive Cost Model) in 1981. COCOMO is one of the most widely used software estimation models in the world. COCOMO predicts the effort and schedule of a software product based on the size of the software.
The necessary steps in this model are:
• Get an initial estimate of the development effort from the evaluation of thousands of delivered lines of source code (KDLOC).
• Determine a set of 15 multiplying factors from various attributes of the project (e.g., required software reliability, size of the application database, complexity of the product).
• Calculate the effort estimate by multiplying the initial estimate with all the multiplying factors, i.e., multiply the values in step 1 and step 2.
• The initial estimate (also called the nominal estimate) is determined by an equation of the form used in the static single-variable models, using KDLOC as the measure of size. To determine the initial effort Ei in person-months, the equation used is of the type shown below:
  Ei = a * (KDLOC)^b
• The values of the constants a and b depend on the project type.
In COCOMO, projects are categorized into
three types
1. Organic: A development project can be treated as of the organic type if the project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects. Examples of this type of project are simple business systems, simple inventory management systems, and data processing systems.

2. Semidetached: A development project can be treated as of the semidetached type if the development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience with related systems but may be unfamiliar with some aspects of the system being developed. Examples of semidetached systems include developing a new operating system (OS), a Database Management System (DBMS), and a complex inventory management system.

3. Embedded: A development project is treated as of the embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational method exist. Examples: ATM, air traffic control.

For the three product categories, Boehm provides a different set of expressions to predict effort (in units of person-months) and development time from the size estimate given in KLOC (Kilo Lines of Code). The effort estimation takes into account the productivity loss due to holidays, weekly offs, coffee breaks, etc.
According to Boehm, software cost estimation should be done through three stages:
• Basic Model
• Intermediate Model
• Detailed Model

1. Basic COCOMO Model: The basic COCOMO model gives an approximate estimate of the project parameters. The following expressions give the basic COCOMO estimation model:
  Effort = a1 * (KLOC)^a2 PM
  Tdev = b1 * (Effort)^b2 months
  where
  KLOC is the estimated size of the software product expressed in Kilo Lines of Code,
  a1, a2, b1, b2 are constants for each category of software product,
  Tdev is the estimated time to develop the software, expressed in months,
  Effort is the total effort required to develop the software product, expressed in person-months (PMs).
Estimation of development effort
For the three classes of software products, the formulas for estimating the effort based on the code size are shown below:
• Organic: Effort = 2.4(KLOC)^1.05 PM
• Semi-detached: Effort = 3.0(KLOC)^1.12 PM
• Embedded: Effort = 3.6(KLOC)^1.20 PM
Estimation of development time
For the three classes of software products, the formulas for estimating the development time based on the effort are given below:
• Organic: Tdev = 2.5(Effort)^0.38 months
• Semi-detached: Tdev = 2.5(Effort)^0.35 months
• Embedded: Tdev = 2.5(Effort)^0.32 months
• Some insight into the basic COCOMO model can be obtained by plotting the estimated characteristics for different software sizes. A plot of estimated effort versus product size shows that the effort is somewhat superlinear in the size of the software product. Thus, the effort required to develop a product increases very rapidly with project size.
• The plot also indicates that the effort taken by the organic type of project is less than that of the semi-detached or embedded type of project.
• From the effort estimation, the project cost can be obtained by
multiplying the required effort by the manpower cost per month. But,
implicit in this project cost computation is the assumption that the entire
project cost is incurred on account of the manpower cost alone. In
addition to manpower cost, a project would incur costs due to hardware
and software required for the project and the company overheads for
administration, office space, etc.
• It is important to note that the effort and the duration estimations
obtained using the COCOMO model are called a nominal effort estimate
and nominal duration estimate. The term nominal implies that if anyone
tries to complete the project in a time shorter than the estimated
duration, then the cost will increase drastically.
• But, if anyone completes the project over a longer period of time than the
estimated, then there is almost no decrease in the estimated cost value.
• Example 1: Suppose a project is estimated at 400 KLOC. Calculate the effort and development time for each of the three modes, i.e., organic, semi-detached, and embedded (a sketch of this calculation follows the solution).
• Solution: The basic COCOMO equations take the form
  Effort = a1 * (KLOC)^a2 PM
  Tdev = b1 * (Effort)^b2 months
  Estimated size of project = 400 KLOC
  (i) Organic mode
  E = 2.4 * (400)^1.05 = 1295.31 PM
  D = 2.5 * (1295.31)^0.38 = 38.07 months
  (ii) Semidetached mode
  E = 3.0 * (400)^1.12 = 2462.79 PM
  D = 2.5 * (2462.79)^0.35 = 38.45 months
  (iii) Embedded mode
  E = 3.6 * (400)^1.20 = 4772.81 PM
  D = 2.5 * (4772.81)^0.32 = 38 months
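As a sketch, the calculations of Example 1 can be reproduced with a few lines of code; the constants are the basic COCOMO coefficients given in the formulas above.

# Basic COCOMO: Effort = a1 * KLOC**a2 (PM), Tdev = b1 * Effort**b2 (months)
BASIC_COCOMO = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = BASIC_COCOMO[mode]
    effort = a1 * kloc ** a2          # person-months
    tdev = b1 * effort ** b2          # months
    return effort, tdev

for mode in BASIC_COCOMO:
    effort, tdev = basic_cocomo(400, mode)
    print(f"{mode:>12}: E = {effort:7.2f} PM, Tdev = {tdev:5.2f} months")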
• Example 2: A project of size 200 KLOC is to be developed. The software development team has average experience on similar types of projects. The project schedule is not very tight. Calculate the effort, development time, average staff size, and productivity of the project.
• Solution: The semidetached mode is the most appropriate mode, keeping in view the size, schedule, and experience of the development team.
  Hence E = 3.0 * (200)^1.12 = 1133.12 PM
  D = 2.5 * (1133.12)^0.35 = 29.3 months
  Average staff size SS = E/D = 1133.12/29.3 = 38.67 persons
  Productivity P = KLOC/E = 200/1133.12 = 0.1765 KLOC/PM = 176 LOC/PM
• 2. Intermediate Model: The basic COCOMO model considers that the effort is only a function
of the number of lines of code and some constants calculated according to the various
software systems. The intermediate COCOMO model recognizes these facts and refines the
initial estimates obtained through the basic COCOMO model by using a set of 15 cost drivers
based on various attributes of software engineering.
• Classification of Cost Drivers and their attributes:
(i) Product attributes -
• Required software reliability extent
• Size of the application database
• The complexity of the product
(ii) Hardware attributes -
• Run-time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnabout time (Turnaround time (TAT) is the amount of time taken to complete a
process or fulfill a request)
(iii) Personnel attributes -
• Analyst capability
• Applications experience
• Software engineering capability
• Virtual machine experience
• Programming language experience
(iv) Project attributes -
• Use of software tools
• Application of software engineering methods
• Required development schedule
The cost drivers are thus divided into the four categories above.
• Intermediate COCOMO equations:
  E = ai * (KLOC)^bi * EAF
  D = ci * (E)^di
• Coefficients for intermediate COCOMO are given below.
Multiplier Values for Effort Calculations
Cost Drivers       Very Low   Low    Nominal   High   Very High   Extra High
Product attributes
RELY               0.75       0.88   1.00      1.15   1.40        -
DATA               -          0.94   1.00      1.08   1.16        -
CPLX               0.70       0.85   1.00      1.15   1.30        1.65
Computer attributes
TIME - - 1.00 1.11 1.30 1.66
STOR - - 1.00 1.06 1.21 1.56
VIRT - 0.87 1.00 1.15 1.30 -
TURN - 0.87 1.00 1.07 1.15 -
Personnel attributes
ACAP 1.46 1.19 1.00 0.86 0.71 -
AEXP 1.29 1.13 1.00 0.91 0.82 -
PCAP 1.42 1.17 1.00 0.86 0.70 -
VEXP 1.21 1.10 1.00 0.90 - -
LEXP 1.14 1.07 1.00 0.95 - -
Project attributes
MODP 1.24 1.10 1.00 0.91 0.82 -
TOOL 1.24 1.10 1.00 0.91 0.83 -
SCED 1.23 1.08 1.00 1.04 1.10 -
EAF = Effort Adjustment Factor
E = ai * (KLOC)^bi * EAF   (E = effort)
D = ci * (E)^di            (D = development time)
SS = E/D persons           (SS = average staff size)
P = KLOC/E                 (P = productivity)
ai, bi, ci, di = coefficients
Coefficients for Intermediate COCOMO
Project              ai     bi      ci     di
Organic mode         3.2    1.05    2.5    0.38
Semidetached mode    3.0    1.12    2.5    0.35
Embedded mode        2.8    1.20    2.5    0.32
Example: A new project with an estimated 400 KLOC embedded system has to be developed. The project manager has a choice of hiring from two pools of developers: developers with very high application experience and very little experience in the programming language being used, or developers with very low application experience but a lot of experience with the programming language. What is the impact of hiring all developers from one or the other pool? (A sketch of this calculation follows the solution.)
Solution
This is the case of the embedded mode.
Hence E = ai * (KLOC)^bi * EAF and D = ci * (E)^di.
Case 1: Developers with very high application experience and very little experience in the programming language being used.
EAF = 0.82 * 1.14 = 0.9348
E = 2.8 * (400)^1.20 * 0.9348 = 3470 PM
D = 2.5 * (3470)^0.32 = 33.9 months
Case 2: Developers with very low application experience but a lot of experience with the programming language.
EAF = 1.29 * 0.95 = 1.22
E = 2.8 * (400)^1.20 * 1.22 = 4528 PM
D = 2.5 * (4528)^0.32 = 36.9 months
Case 2 requires more effort and time. Hence, low application experience combined with high programming language experience cannot match very high application experience with very little programming language experience.
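A minimal sketch reproducing the two cases above, using the embedded-mode intermediate COCOMO coefficients and the AEXP/LEXP multipliers from the table; all other cost drivers are taken as nominal (1.00), as in the example.

# Intermediate COCOMO, embedded mode: E = a_i * KLOC**b_i * EAF, D = c_i * E**d_i.
# Only the AEXP and LEXP cost drivers differ between the two hiring pools.
a_i, b_i, c_i, d_i = 2.8, 1.20, 2.5, 0.32
kloc = 400

cases = {
    "very high AEXP, very low LEXP": 0.82 * 1.14,
    "very low AEXP, high LEXP":      1.29 * 0.95,
}

for label, eaf in cases.items():
    effort = a_i * kloc ** b_i * eaf
    duration = c_i * effort ** d_i
    print(f"{label}: EAF = {eaf:.4f}, E = {effort:6.0f} PM, D = {duration:4.1f} months")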
Coefficients for Basic COCOMO (for comparison)
Project         ai     bi      ci     di
Organic         2.4    1.05    2.5    0.38
Semidetached    3.0    1.12    2.5    0.35
Embedded        3.6    1.20    2.5    0.32


• 3. Detailed COCOMO Model: Detailed COCOMO incorporates all the characteristics of the intermediate version with an assessment of the cost drivers' impact on each step of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into multiple modules, and then COCOMO is applied to the various modules to estimate effort; the efforts are then summed.
• The six phases of detailed COCOMO are:
• The Six phases of detailed COCOMO are:
Planning and requirements
System structure
Complete structure
Module code and test
Integration and test
Cost Constructive model
The effort is determined as a function of program size, and a set of cost drivers is given according to each phase of the software lifecycle.
Putnam Resource Allocation Model
• The Lawrence Putnam model describes the time and effort required to finish a software project of a specified size. Putnam makes use of the so-called Norden/Rayleigh curve to estimate project effort, schedule, and defect rate.
• Putnam noticed that software staffing profiles followed the well-known Rayleigh distribution. Putnam used his observations about productivity levels to derive the software equation:
  L = Ck * K^(1/3) * td^(4/3)
The various terms of this expression are as follows:
• K is the total effort expended (in PM) in product development, and L is the product size in KLOC.
• td corresponds to the time of system and integration testing. Therefore, td can be approximately considered as the time required for developing the product.
• Ck is the state-of-technology constant and reflects constraints that impede the progress of the program.
  Typical values are Ck = 2 for a poor development environment,
  Ck = 8 for a good software development environment, and
  Ck = 11 for an excellent environment (in which, in addition to following software engineering principles, automated tools and techniques are used).
• The exact value of Ck for a specific task can be computed from the historical data of the organization developing it.
• Putnam proposed that the optimal staff build-up on a project should follow the Rayleigh curve. Only a small number of engineers are required at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work becomes necessary, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls.
• Effect of a Schedule Change on Cost
• By rearranging the software equation, Putnam derived the following expression:
  K = L^3 / (Ck^3 * td^4)
  where K is the total effort expended (in PM) in the product development,
  L is the product size in KLOC,
  td corresponds to the time of system and integration testing, and
  Ck is the state-of-technology constant that reflects constraints which impede the progress of the program.
• From the above expression it follows that, for the same product size, C = L^3 / Ck^3 is a constant, so K = C / td^4.
  (As project development effort is directly proportional to project development cost.)
• From the above expression, it can be easily observed that when
the schedule of a project is compressed, the required
development effort as well as project development cost increases
in proportion to the fourth power of the degree of compression. It
means that a relatively small compression in delivery schedule can
result in a substantial penalty of human effort as well as
development cost.
• For example, if the estimated development time is 1 year, then to
develop the product in 6 months, the total effort required to
develop the product (and hence the project cost) increases 16
times.
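A minimal sketch of this schedule-compression effect, using the rearranged form K = L^3 / (Ck^3 * td^4); the size and Ck value below are hypothetical, and the absolute effort figures matter less than the 16x ratio.

# Putnam: K = L**3 / (Ck**3 * td**4). For fixed L and Ck, compressing the
# schedule from td to td_new multiplies effort by (td / td_new) ** 4.
# Units follow the slide definitions (L in KLOC, td in years); the absolute
# values depend on how Ck is calibrated.
def putnam_effort(size_kloc, ck, td_years):
    return size_kloc ** 3 / (ck ** 3 * td_years ** 4)

L, Ck = 20, 8             # hypothetical product size and technology constant
k_nominal = putnam_effort(L, Ck, td_years=1.0)
k_compressed = putnam_effort(L, Ck, td_years=0.5)

print(f"Nominal effort (1 year)   : {k_nominal:8.1f}")
print(f"Compressed effort (6 mo)  : {k_compressed:8.1f}")
print(f"Effort ratio              : {k_compressed / k_nominal:.0f}x")   # (1/0.5)**4 = 16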
What is Risk?

• "Tomorrow problems are today's risk." Hence, a clear definition of a


"risk" is a problem that could cause some loss or threaten the progress of
the project, but which has not happened yet.
• These potential issues might harm cost, schedule or technical success of
the project and the quality of our software device, or project team
morale.
• Risk Management is the system of identifying, addressing and eliminating
these problems before they can damage the project.
• We need to differentiate risks, as potential issues, from the current
problems of the project.
• Different methods are required to address these two kinds of issues.
• For example, staff storage, because we have not been able to select
people with the right technical skills is a current problem, but the threat
of our technical persons being hired away by the competition is a risk.
Risk Management

• A software project can be affected by a large variety of risks. In order to be able to systematically identify the significant risks which might affect a software project, it is essential to classify risks into different classes. The project manager can then check which risks from each class are relevant to the project.
• There are three main classifications of risks which can affect a software project:
• Project risks
• Technical risks
• Business risks
1. Project risks: Project risks concern different forms of budgetary, schedule, personnel, resource, and customer-related problems. A vital project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project; it is very difficult to control something which cannot be identified. For any manufacturing project, such as the manufacturing of cars, the project executive can see the product taking shape.
2. Technical risks: Technical risks concern potential design, implementation, interfacing, testing, and maintenance issues. They also include ambiguous specifications, incomplete specifications, changing specifications, technical uncertainty, and technical obsolescence. Most technical risks appear due to the development team's insufficient knowledge about the project.
3. Business risks: This type of risk includes the risk of building an excellent product that no one needs, losing budgetary or personnel commitments, etc.
Other risk categories
• 1. Known risks: Those risks that can be uncovered after careful assessment of the project plan, the business and technical environment in which the project is being developed, and other reliable information sources (e.g., an unrealistic delivery date).
• 2. Predictable risks: Those risks that can be predicted from previous project experience (e.g., staff turnover).
• 3. Unpredictable risks: Those risks that can and do occur, but are extremely difficult to identify in advance.
Principle of Risk Management

• Global perspective: In this, we review the bigger system description, design, and implementation. We look at the chance of the risk occurring and the impact it is going to have.
• Take a forward-looking view: Consider the threats which may appear in the future and create plans for directing the coming events.
• Open communication: This is to allow the free flow of communication between the client and the team members so that they have certainty about the risks.
• Integrated management: In this approach, risk management is made an integral part of project management.
• Continuous process: In this approach, the risks are tracked continuously throughout the risk management paradigm.
Risk Management Activities
Risk management consists of three main activities, as shown in fig:
Risk Assessment
• The objective of risk assessment is to rank the risks in terms of their loss-causing potential. For risk assessment, first, every risk should be rated in two ways:
• The likelihood of a risk coming true (denoted as r).
• The consequence of the problems associated with that risk (denoted as s).
• Based on these two factors, the priority of each risk can be computed (see the sketch after this section):
  p = r * s
  where p is the priority with which the risk must be controlled, r is the probability of the risk becoming true, and s is the severity of the loss caused due to the risk becoming true. If all identified risks are prioritized, then the most likely and most damaging risks can be handled first, and more comprehensive risk abatement methods can be designed for these risks.
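A minimal sketch of this prioritisation step; the risks, probabilities r, and severities s below are hypothetical.

# Risk assessment: priority p = r * s, where r is the probability of the
# risk occurring and s is the severity of the loss. The entries are hypothetical.
risks = [
    {"risk": "key developer leaves",       "r": 0.30, "s": 8},
    {"risk": "requirements change late",   "r": 0.60, "s": 5},
    {"risk": "third-party tool withdrawn", "r": 0.10, "s": 9},
]

for item in risks:
    item["p"] = item["r"] * item["s"]

# Handle the most likely and most damaging risks first.
for item in sorted(risks, key=lambda x: x["p"], reverse=True):
    print(f"p = {item['p']:.2f}  {item['risk']}")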
1. Risk Identification:
• The project organizer needs to anticipate the risks in the project as early as possible so that the impact of the risks can be reduced by making effective risk management plans.
• A project can be affected by a large variety of risks. To identify the significant risks which might affect a project, it is necessary to categorize risks into different classes.
There are different types of risks which can affect a software project:
• Technology risks: Risks that arise from the software or hardware technologies that are used to develop the system.
• People risks: Risks that are associated with the people in the development team.
• Organizational risks: Risks that arise from the organizational environment where the software is being developed.
• Tools risks: Risks that arise from the software tools and other support software used to create the system.
• Requirement risks: Risks that arise from changes to the customer requirements and the process of managing the requirements change.
• Estimation risks: Risks that arise from the management estimates of the resources required to build the system.
2. Risk Analysis:
• During the risk analysis process, you have to consider every identified risk and make a judgment about the probability and seriousness of that risk.
• There is no simple way to do this. You have to rely on your own judgment and experience of previous projects and the problems that arose in them.
• It is not possible to make an exact numerical estimate of the probability and seriousness of each risk. Instead, you should assign the risk to one of several bands:
• The probability of the risk might be assessed as very low (0-10%), low (10-25%), moderate (25-50%), high (50-75%), or very high (above 75%).
• The effect of the risk might be assessed as catastrophic (threatens the survival of the project), serious (would cause significant delays), tolerable (delays are within the allowed contingency), or insignificant.
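A small sketch that maps a numeric probability estimate onto the qualitative bands listed above (band boundaries follow the text; the example values are hypothetical):

```python
# Sketch: classify a probability estimate into the bands given above.

def probability_band(p: float) -> str:
    """Return the qualitative band for a probability value in [0, 1]."""
    if p <= 0.10:
        return "very low"
    if p <= 0.25:
        return "low"
    if p <= 0.50:
        return "moderate"
    if p <= 0.75:
        return "high"
    return "very high"

print(probability_band(0.40))   # moderate
print(probability_band(0.80))   # very high
```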
Risk Control
• Risk control is the process of managing risks to achieve the desired outcomes. After all the identified risks of a project have been assessed, plans must be made to contain the most harmful and the most likely risks. Different risks need different containment methods. In fact, most risks require ingenuity on the part of the project manager in tackling them.
• There are three main strategies to plan for risk management:
• Avoid the risk: This may take several forms, such as discussing with the client to change the requirements to decrease the scope of the work, giving incentives to the engineers to avoid the risk of human resource turnover, etc.
• Transfer the risk: This strategy involves getting the risky element developed by a third party, buying insurance cover, etc.
• Risk reduction: This means planning ways to contain the loss due to the risk. For instance, if there is a risk that some key personnel might leave, new recruitment can be planned.
Risk Leverage:
• To choose between the various methods of handling a risk, the project manager must weigh the cost of controlling the risk against the corresponding reduction in risk exposure. For this, the risk leverage of the various risks can be estimated.
• Risk leverage is the change in risk exposure divided by the cost of reducing the risk.
• Risk leverage = (risk exposure before reduction -
risk exposure after reduction) / (cost of reduction)
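A sketch of the risk-leverage formula stated above; the exposure and cost figures are illustrative, not taken from a real project:

```python
# Sketch of the risk-leverage calculation described above.

def risk_leverage(exposure_before: float,
                  exposure_after: float,
                  cost_of_reduction: float) -> float:
    """Leverage = (exposure before - exposure after) / cost of reduction."""
    return (exposure_before - exposure_after) / cost_of_reduction

# A mitigation that cuts exposure from 50,000 to 20,000 at a cost of
# 10,000 has a leverage of 3.0, so it is worth doing.
print(risk_leverage(50_000, 20_000, 10_000))
```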
1. Risk planning: The risk planning process considers each of the key risks that have been identified and develops strategies to manage these risks.
• For each of the risks, you have to think of the actions that you might take to minimize the disruption to the project if the problem identified in the risk occurs.
• You should also think about the information that you might need to collect while monitoring the project so that problems can be anticipated.
• Again, there is no easy process that can be followed for contingency planning. It relies on the judgment and experience of the project manager.
2. Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the product, process, and business risks have not changed.
Project Scheduling
Project-task scheduling is a significant project planning activity. It involves deciding which tasks will be taken up when. To schedule the project, a software project manager needs to do the following:
• Identify all the tasks required to complete the project.
• Break down large tasks into small activities.
• Determine the dependency among the various activities.
• Establish the most likely estimates for the time duration required to complete the activities.
• Allocate resources to activities.
• Plan the starting and ending dates for the different activities.
• Determine the critical path. The critical path is the chain of activities that determines the duration of the project.
• The first step in scheduling a software project involves identifying all the tasks required to complete the project. A good knowledge of the intricacies of the project and of the development process helps the manager to identify the essential tasks of the project effectively.
• Next, the large tasks are broken down into a logical set of small activities which can be assigned to different engineers. The work breakdown structure formalism helps the manager to break down the tasks systematically. After the project manager has broken down the tasks and constructed the work breakdown structure, he has to find the dependency among the activities.
• Dependency among the various activities determines the order in which the activities are carried out. If an activity A needs the results of another activity B, then activity A must be scheduled after activity B. In general, the task dependencies define a partial ordering among the tasks, i.e., each task may precede a subset of other tasks, but some tasks might not have any precedence ordering defined between them (called concurrent tasks). The dependency among the activities is represented in the form of an activity network (a small dependency-graph sketch appears at the end of this section).
• Once the activity network representation has been worked out, resources are allocated to every activity. Resource allocation is usually done using a Gantt chart. After resource allocation is completed, a PERT chart representation is developed. The PERT chart representation is useful for program monitoring and control.
• For task scheduling, the project manager needs to decompose the project tasks into a set of activities. The time frame in which every activity is to be performed is then determined. The end of each activity is called a milestone. The project manager tracks the progress of the project by monitoring the timely completion of the milestones. If he observes that the milestones start getting delayed, then he has to handle the activities carefully so that the overall deadline can still be met.
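The dependency-graph sketch referred to above: a minimal Python example (activities A-D and their durations are hypothetical) that represents an activity network as predecessor lists and computes the project duration decided by the critical path.

```python
# Sketch: an activity network as predecessor lists, plus the earliest
# finish times that give the critical-path length. Activities A-D and
# their durations are hypothetical.

durations = {"A": 3, "B": 5, "C": 2, "D": 4}
# predecessors[x] lists the activities that must finish before x starts.
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

earliest_finish = {}

def finish_time(activity: str) -> int:
    """Earliest finish = latest predecessor finish + own duration."""
    if activity not in earliest_finish:
        start = max((finish_time(p) for p in predecessors[activity]), default=0)
        earliest_finish[activity] = start + durations[activity]
    return earliest_finish[activity]

# The project duration is decided by the critical path (here A -> B -> D).
print(max(finish_time(a) for a in durations))   # 12
```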
Personnel Planning
• Personnel planning deals with staffing. Staffing deals with appointing personnel for the positions that are identified by the organizational structure.
It involves:
• Defining the requirement for personnel
• Recruiting (identifying, interviewing, and selecting candidates)
• Compensating
• Developing and promoting personnel
• For personnel planning and scheduling, it is helpful to have effort and schedule estimates for the subsystems and basic components in the system.
• At planning time, when the system design has not been completed, the planner can only expect to know about the large subsystems in the system and possibly the major modules in these subsystems.
• Once the project is estimated and the effort and schedule of the various phases and tasks are known, staff requirements can be determined.
• From the cost and overall duration of the project, the average staff size can be determined by dividing the total effort (in person-months) by the whole project duration (in months); a small numeric sketch follows this list.
• Typically, the staff required for the project is small during requirements and design, maximum during implementation and testing, and drops again during the final stage of integration and testing.
• Using the COCOMO model, the average staff requirement for the various phases can be calculated, since the effort and schedule for each phase are known.
• When the schedule and average staff level for every activity are known, the overall personnel allocation for the project can be planned.
• This plan will indicate how many people will be required for different activities at different times for the duration of the project.
• The total effort for each month and the total effort for each phase can easily be calculated from this plan.
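The numeric sketch referred to above: average staff size as total effort divided by duration. The effort and duration figures are illustrative and are not tied to any particular COCOMO estimate.

```python
# Average staff size = total effort (person-months) / project duration (months).
# The numbers below are illustrative.

total_effort_pm = 60.0      # e.g. an effort estimate in person-months
duration_months = 12.0      # overall project duration in months

average_staff = total_effort_pm / duration_months
print(f"Average staff size: {average_staff:.1f} persons")   # 5.0 persons
```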
Team Structure
• Team structure addresses the issue of organization of the individual project teams. There are several ways in which the different project teams can be organized. There are primarily three formal team structures: chief programmer, ego-less or democratic, and mixed team organizations, although several other variations of these structures are possible. Problems of various complexities and sizes often need different team structures for their solution.
Ego-Less or Democratic Teams
• Ego-less teams consist of a small group of programmers. The objectives of the group are set by consensus, and input from every member is taken for significant decisions. Group leadership rotates among the group members. Due to this nature, ego-less teams are also known as democratic teams.
• The structure allows input from all members, which can lead to better decisions on difficult problems. This suggests that this structure is well suited for long-term, research-type projects that do not have time constraints.
Chief Programmer Team
• A chief-programmer team, in contrast to the ego-less team, has a hierarchy. It consists of a chief programmer, who has a backup programmer, a program librarian, and some programmers.
• The chief programmer is responsible for all major technical decisions of the project.
• He does most of the design, and he assigns coding of the different parts of the design to the programmers.
• The backup programmer assists the chief programmer in making technical decisions and takes over as chief programmer if the chief programmer falls sick or leaves.
• The program librarian is responsible for maintaining the documentation and other communication-related work.
• This structure considerably reduces interpersonal communication. The communication paths are as shown in fig:
Controlled Decentralized Team
(Hierarchical Team Structure)
• A third team structure, known as the controlled decentralized team, tries to combine the strengths of the democratic and chief programmer teams.
• It consists of a project leader who has a group of senior programmers under him, while under every senior programmer is a group of junior programmers.
• The group of a senior programmer and his junior programmers behaves like an ego-less team, but communication among different groups occurs only through the senior programmers of the groups.
• The senior programmers also communicate with the project leader.
• Such a team has fewer communication paths than a democratic team but more paths compared to a chief programmer team.
• This structure works best for large projects that are reasonably straightforward. It is not well suited for simple projects or research-type projects.
Software Requirement Specifications
• The output of the requirements phase of the software development process is the Software Requirements Specification (SRS), also called the requirements document. This report lays a foundation for software engineering activities and is constructed once all requirements have been elicited and analyzed. The SRS is a formal report which acts as a representation of the software and enables the customers to review whether it (the SRS) is according to their requirements. It also comprises the user requirements for a system as well as detailed specifications of the system requirements.
• The SRS is a specification for a specific software product, program, or set of applications that perform particular functions in a specific environment. It serves several goals depending on who is writing it. First, the SRS could be written by the client of a system. Second, the SRS could be written by a developer of the system. The two cases create entirely different situations and establish different purposes for the document. In the first case, the SRS is used to define the needs and expectations of the users. In the second case, the SRS is written for a different purpose and serves as a contract document between customer and developer.
Characteristics of good SRS
Following are the features of a good SRS document:
1. Correctness: User review is used to ensure the accuracy of the requirements stated in the SRS. The SRS is said to be correct if it covers all the needs that are truly expected from the system.
2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(1). All essential requirements, whether relating to functionality, performance, design, constraints, attributes, or external interfaces.
(2). Definition of the responses of the software to all realizable classes of input data in all realizable classes of situations.
• Note: It is essential to specify the responses to both valid and invalid values.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements described in it conflict. There are three types of possible conflict in the SRS:
• (1). The specified characteristics of real-world objects may conflict. For example,
• (a) The format of an output report may be described in one requirement as
tabular but in another as textual.
• (b) One condition may state that all lights shall be green while another states
that all lights shall be blue.
• (2). There may be a logical or temporal conflict between two specified actions. For example,
• (a) One requirement may determine that the program will add two inputs,
and another may determine that the program will multiply them.
• (b) One condition may state that "A" must always follow "B," while another requires that "A" and "B" occur simultaneously.
• (3). Two or more requirements may define the same real-world object but
use different terms for that object. For example, a program's request for user input may be called a "prompt" in one requirement and a "cue" in another.
The use of standard terminology and descriptions promotes consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one interpretation. This means that each requirement is interpreted in only one way. If a term with multiple meanings is used, the SRS should clarify the intended meaning so that it is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and
stability if each requirement in it has an identifier to indicate either the significance
or stability of that particular requirement.
• Typically, all requirements are not equally important. Some requirements may be essential, especially for life-critical applications, while others may be desirable. Each requirement should be identified to make these differences clear and explicit.
Another way to rank requirements is to distinguish classes of items as essential,
conditional, and optional.
6. Modifiability: The SRS should be made as modifiable as possible and should be capable of easily accommodating changes to the system to some extent. Modifications should be properly indexed and cross-referenced.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a cost-effective process to determine whether the final software meets those requirements. The requirements are verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it facilitates the referencing of each requirement in future development or enhancement documentation.
There are two types of Traceability:
1. Backward Traceability: This depends upon each requirement explicitly
referencing its source in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having a
unique name or reference number.
• The forward traceability of the SRS is especially crucial when the software product enters the operation and maintenance phase. As code and design documents are modified, it is necessary to be able to ascertain the complete set of requirements that may be affected by those modifications.
9. Design Independence: There should be an option to select from
multiple design alternatives for the final system. More specifically, the
SRS should not contain any implementation details.
10. Testability: An SRS should be written in such a method that it is
simple to generate test cases and test plans from the report.
11. Understandable by the customer: An end user may be an expert in his/her own domain but might not be trained in computer science. Hence, the use of formal notations and symbols should be avoided as far as possible. The language should be kept simple and clear.
12. The right level of abstraction: If the SRS is written for the requirements stage, the details should be stated explicitly, whereas for a feasibility study, less detail is needed. Hence, the level of abstraction varies according to the objective of the SRS.
Properties of a good SRS document
The essential properties of a good SRS document are the following:
• Concise: The SRS report should be concise and, at the same time, unambiguous, consistent, and complete. Verbose and irrelevant descriptions decrease readability and also increase error possibilities.
• Structured: It should be well-structured. A well-structured document is simple to understand and modify. In practice, the SRS document undergoes several revisions to cope with the user requirements. Often, user requirements evolve over a period of time. Therefore, to make modifications to the SRS document easy, it is vital to make the report well-structured.
• Black-box view: It should only define what the system should do and refrain from stating how to do it. This means that the SRS document should define the external behavior of the system and not discuss implementation issues. The SRS report should view the system to be developed as a black box and should define the externally visible behavior of the system. For this reason, the SRS report is also known as the black-box specification of a system.
• Conceptual integrity: It should show conceptual integrity so that the reader can easily understand it.
• Response to undesired events: It should characterize acceptable responses to undesired events. These are called system responses to exceptional conditions.
• Verifiable: All requirements of the system, as documented in the SRS document, should be verifiable. This means that it should be possible to decide whether or not a requirement has been met in an implementation.
Requirements Analysis
• Requirements analysis is a significant and essential activity after elicitation. We analyze, refine, and scrutinize the gathered requirements to make them consistent and unambiguous. This activity reviews all requirements and may provide a graphical view of the entire system. After the completion of the analysis, it is expected that the understandability of the project will improve significantly. Here, we may also use the interaction with the customer to clarify points of confusion and to understand which requirements are more important than others.
The various steps of requirement analysis are shown in fig:
(i) Draw the context diagram: The context diagram is a simple model that defines the boundaries and interfaces of the proposed system with the external world. It identifies the entities outside the proposed system that interact with the system. The context diagram of the student result management system is given below:
(ii) Development of a Prototype (optional): One effective way to find out what the
customer wants is to construct a prototype, something that looks and preferably
acts as part of the system they say they want.
• We can use their feedback to modify the prototype continuously until the customer is satisfied. Hence, the prototype helps the client to visualize the proposed system and increases the understanding of the requirements. When developers and users are not sure about some of the elements, a prototype may help both parties to take a final decision.
• Some projects are developed for the general market. In such cases, the prototype should be shown to a representative sample of the population of potential purchasers. Even though a person who tries out a prototype may not buy the final system, their feedback may allow us to make the product more attractive to others.
• The prototype should be built quickly and at a relatively low cost. Hence it will
always have limitations and would not be acceptable in the final system. This is an
optional activity.
(iii) Model the requirements: This process usually consists of various
graphical representations of the functions, data entities, external
entities, and the relationships between them. The graphical view may
help to find incorrect, inconsistent, missing, and superfluous
requirements. Such models include the Data Flow diagram, Entity-
Relationship diagram, Data Dictionaries, State-transition diagrams, etc.
(iv) Finalize the requirements: After modeling the requirements, we
will have a better understanding of the system behavior. The
inconsistencies and ambiguities have been identified and corrected.
The flow of data amongst various modules has been analyzed. Elicitation and analysis activities have provided better insight into the system. Now we finalize the analyzed requirements, and the next step is to document these requirements in a prescribed format.
Data Flow Diagrams
• A Data Flow Diagram (DFD) is a traditional visual representation of the information flow within a system. A neat and clear DFD can depict the right amount of the system requirements graphically. The system it models can be manual, automated, or a combination of both.
• It shows how data enters and leaves the system, what changes the information, and where data is stored.
• The objective of a DFD is to show the scope and boundaries of a system as a whole. It may be used as a communication tool between a system analyst and any person who plays a part in the system, and it acts as a starting point for redesigning a system. The DFD is also called a data flow graph or bubble chart.
The following observations about DFDs are essential:
• All names should be unique. This makes it easier to refer to elements in the DFD.
• Remember that a DFD is not a flow chart. Arrows in a flow chart represent the order of events; arrows in a DFD represent flowing data. A DFD does not imply any order of events.
• Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in a DFD, suppress that urge! A diamond-shaped box is used in flow charts to represent decision points with multiple exit paths of which only one is taken. This implies an ordering of events, which makes no sense in a DFD.
• Do not become bogged down with details. Defer error conditions and error handling until the end of the analysis.
Standard symbols for DFDs are derived from the electric circuit diagram
analysis and are shown in fig:
• Circle: A circle (bubble) shows a process that transforms
data inputs into data outputs.
• Data Flow: A curved line shows the flow of data into or out
of a process or data store.
• Data Store: A set of parallel lines shows a place for the collection of data items. A data store indicates that the data is stored and can be used at a later stage or by other processes in a different order. The data store can have an element or group of elements.
• Source or Sink: Source or Sink is an external entity and acts
as a source of system inputs or sink of system outputs.
Levels in Data Flow Diagrams (DFD)
• The DFD may be used to represent a system or software at any level of abstraction. In fact, DFDs may be partitioned into levels that represent increasing information flow and functional detail. Levels in a DFD are numbered 0, 1, 2 or beyond. Here, we will see primarily three levels in the data flow diagram: 0-level DFD, 1-level DFD, and 2-level DFD.
0-level DFD
• Also known as the fundamental system model or context diagram, it represents the entire software requirement as a single bubble with input and output data denoted by incoming and outgoing arrows. The system is then decomposed and described as a DFD with multiple bubbles. Parts of the system represented by each of these bubbles are then decomposed and documented as more and more detailed DFDs. This process may be repeated at as many levels as necessary until the program at hand is well understood. It is essential to preserve the number of inputs and outputs between levels; this concept is called leveling by DeMarco (a small balancing check is sketched at the end of this subsection). Thus, if bubble "A" has two inputs x1 and x2 and one output y, then the expanded DFD that represents "A" should have exactly two external inputs and one external output, as shown in fig:
• The Level-0 DFD, also called context diagram
of the result management system is shown in
fig. As the bubbles are decomposed into less and less abstract bubbles, the corresponding data flows may also need to be decomposed.
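The balancing check mentioned above, as a minimal sketch: the external inputs and outputs of an expanded bubble must match those of its parent. The flow names are hypothetical.

```python
# Sketch of a DFD leveling (balancing) check. Flow names are hypothetical.

parent_bubble = {"inputs": {"x1", "x2"}, "outputs": {"y"}}

# Flows of the expanded (child) diagram that cross its boundary.
child_external_inputs = {"x1", "x2"}
child_external_outputs = {"y"}

balanced = (parent_bubble["inputs"] == child_external_inputs
            and parent_bubble["outputs"] == child_external_outputs)
print("DFD is balanced" if balanced else "Leveling rule violated")
```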
1-level DFD
• In 1-level DFD, a context diagram is
decomposed into multiple bubbles/processes.
In this level, we highlight the main objectives
of the system and breakdown the high-level
process of 0-level DFD into sub-processes.
2-Level DFD
• 2-level DFD goes one process deeper into
parts of 1-level DFD. It can be used to project
or record the specific/necessary detail about
the system's functioning.
Data Dictionaries
• A data dictionary is a file or a set of files that includes a database's metadata. The data dictionary holds records about other objects in the database, such as data ownership, data relationships to other objects, and other data. The data dictionary is an essential component of any relational database. Ironically, because of its importance, it is invisible to most database users. Typically, only database administrators interact with the data dictionary.
The data dictionary, in general, includes information about the following:
• Name of the data item
• Aliases
• Description/purpose
• Related data items
• Range of values
• Data structure definition/Forms
• The name of the data item is self-explanatory.
• Aliases include other names by which this data item is called, e.g., DEO for Data Entry Operator and DR for Deputy Registrar.
• Description/purpose is a textual description of what the data item is used for or why it exists.
• Related data items capture relationships between data items, e.g., total_marks must always equal internal_marks plus external_marks.
• Range of values records all possible values, e.g., total_marks must be positive and between 0 and 100.
• Data structure definition/Forms: Data flows capture the names of the processes that generate or receive the data item. If the data item is primitive, then the data structure form captures the physical structure of the data item. If the data item is itself a data aggregate, then the data structure form captures the composition of the data item in terms of other data items.
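As a small illustration of the fields listed above, one data-dictionary entry can be recorded as a plain Python dictionary; the field contents below are examples only, not a prescribed format.

```python
# Sketch of a single data-dictionary entry; the values are examples.

total_marks_entry = {
    "name": "total_marks",
    "aliases": ["TM"],
    "description": "Aggregate marks obtained by a student in a course",
    "related_data_items": "total_marks = internal_marks + external_marks",
    "range_of_values": "integer, 0 to 100",
    "data_structure": "primitive (integer)",
}

for field, value in total_marks_entry.items():
    print(f"{field}: {value}")
```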
The mathematical operators used within the data dictionary are defined in the table:
Entity-Relationship Diagrams
• ER modeling is a data modeling method used in software engineering to produce a conceptual data model of an information system. Diagrams created using this ER modeling method are called Entity-Relationship Diagrams, ER diagrams, or ERDs.
Purpose of ERD
• The database analyst gains a better understanding of the data to be contained in the database through the process of constructing the ERD.
• The ERD serves as a documentation tool.
• Finally, the ERD is used to communicate the logical structure of the database to users. In particular, the ERD effectively communicates the logic of the database to users.
Components of an ER Diagram
1. Entity
• An entity can be a real-world object, either animate or inanimate, that can be easily identified. An entity is denoted as a rectangle in an ER diagram. For example, in a school database, students, teachers, classes, and courses offered can be treated as entities. All these entities have some attributes or properties that give them their identity.
Entity Set
• An entity set is a collection of related types of entities. An entity set may include entities with attributes sharing similar values. For example, a Student set may contain all the students of a school; likewise, a Teacher set may include all the teachers of a school from all faculties. Entity sets need not be disjoint.
2. Attributes
• Entities are described using their properties, known as attributes. All attributes have values. For example, a student entity may have name, class, and age as attributes.
• There exists a domain or range of values that can
be assigned to attributes. For example, a student's
name cannot be a numeric value. It has to be
alphabetic. A student's age cannot be negative,
etc.
There are five types of attributes:
• Key attribute
• Composite attribute
• Single-valued attribute
• Multi-valued attribute
• Derived attribute
1. Key attribute: Key is an attribute or collection of
attributes that uniquely identifies an entity among the
entity set. For example, the roll_number of a student
makes him identifiable among students.
There are mainly three types of keys:
• Super key: A set of attributes that collectively
identifies an entity in the entity set.
• Candidate key: A minimal super key is known
as a candidate key. An entity set may have
more than one candidate key.
• Primary key: A primary key is one of the
candidate keys chosen by the database
designer to uniquely identify the entity set.
2. Composite attribute: An attribute that is a
combination of other attributes is called a
composite attribute. For example, in a student entity, the student address is a composite attribute, as an address is composed of other characteristics such as pin code, state, and country.
3. Single-valued attribute: A single-valued attribute contains a single value. For example, Social_Security_Number.
4. Multi-valued Attribute: If an attribute can
have more than one value, it is known as a
multi-valued attribute. Multi-valued attributes
are depicted by the double ellipse. For
example, a person can have more than one
phone number, email-address, etc.
5. Derived attribute: Derived attributes are attributes that do not exist in the physical database, but their values are derived from other attributes present in the database. For example, age can be derived from date_of_birth. In the ER diagram, derived attributes are depicted by a dashed ellipse.
3. Relationships
The association among entities is known as a relationship. Relationships are
represented by the diamond-shaped box. For example, an employee
works_at a department, a student enrolls in a course. Here, Works_at and
Enrolls are called relationships.
Relationship set
• A set of relationships of a similar type is known as a
relationship set. Like entities, a relationship too can have
attributes. These attributes are called descriptive attributes.
Degree of a relationship set
• The number of participating entities in a relationship
describes the degree of the relationship. The three most
common relationships in E-R models are:
• Unary (degree 1)
• Binary (degree 2)
• Ternary (degree 3)
1. Unary relationship: This is also called a recursive relationship. It is a relationship between the instances of one entity type. For example, one person is married to only one person.
2. Binary relationship: It is a relationship between the instances of two entity types. For example, a Teacher teaches a Subject.
3. Ternary relationship: It is a relationship amongst instances of three entity types. In the fig, the relationship "may have" provides the association of three entities, i.e., TEACHER, STUDENT, and SUBJECT. All three entities are many-to-many participants. There may be one or many participants in a ternary relationship.
• In general, "n" entities can be related by the same relationship, which is known as an n-ary relationship.
Cardinality
• Cardinality describes the number of entities in one entity set which can be associated with the number of entities of another set via a relationship set.
Types of Cardinalities
1. One to One: One entity from entity set A can be associated with at most one entity of entity set B and vice versa. Let us assume that each student has only one student ID, and each student ID is assigned to only one person. So, the relationship will be one to one.
Using Sets, it can be represented as:
2. One to many: When a single instance of an entity is associated with more than one instance of another entity, it is called a one-to-many relationship. For example, a client can place many orders, but an order cannot be placed by many clients.
Using Sets, it can be represented as:
3. Many to One: More than one entity from entity set A can be associated with at
most one entity of entity set B, however an entity from entity set B can be
associated with more than one entity from entity set A. For example - many
students can study in a single college, but a student cannot study in many colleges
at the same time.
• Using Sets, it can be represented as:
4. Many to Many: One entity from A can be
associated with more than one entity from B
and vice-versa. For example, the student can
be assigned to many projects, and a project
can be assigned to many students.
Using Sets, it can be represented as:
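A small sketch of how these cardinalities can be represented programmatically; the entity names and values are hypothetical examples, not a prescribed data model.

```python
# One-to-many: a client places many orders; each order belongs to one client.
orders_by_client = {
    "client_1": ["order_101", "order_102"],
    "client_2": ["order_103"],
}

# Many-to-many: students and projects, kept as a set of (student, project) pairs.
student_project = {
    ("student_A", "project_X"),
    ("student_A", "project_Y"),
    ("student_B", "project_X"),
}

# Projects assigned to student_A.
projects_of_a = {p for s, p in student_project if s == "student_A"}
print(sorted(projects_of_a))   # ['project_X', 'project_Y']
```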
Software Configuration Management
• When we develop software, the product (software) undergoes many changes during its maintenance phase; we need to handle these changes effectively.
• Several individuals (programmers) work together to achieve these common goals. These individuals produce several work products (SC Items), e.g., intermediate versions of modules, test data used during debugging, and parts of the final product.
• The elements that comprise all information produced as a part of the software process are collectively called a software configuration.
• As software development progresses, the number of Software Configuration Items (SCIs) grows rapidly.
These are handled and controlled by SCM. This is where we require software configuration management.
• A configuration of the product refers not only to the product's constituent components but also to a particular version of each component.
Therefore, SCM is the discipline which
• Identifies changes
• Monitors and controls changes
• Ensures the proper implementation of changes made to the items
• Audits and reports on the changes made.
• Configuration Management (CM) is a technique of identifying, organizing, and controlling modifications to software being built by a programming team. The objective is to maximize productivity by minimizing mistakes (errors).
• CM is essential because of the inventory management, library management, and update management of the items needed for the project.
Why do we need Configuration Management?
• Multiple people work on software which is constantly being updated. Multiple versions, branches, and authors may be involved in a software project, and the team may be geographically distributed and working concurrently. Changes in user requirements, policy, budget, and schedules need to be accommodated.
Importance of SCM
• It is practical in controlling and managing access to the various SCIs, e.g., by preventing two members of a team from checking out the same component for modification at the same time.
• It provides the tools to ensure that changes are being properly implemented.
• It has the capability of describing and storing the various constituents of the software.
• SCM helps keep a system in a consistent state by automatically producing derived versions upon modification of a component.
SCM Process
It uses tools to ensure that the necessary changes have been implemented adequately in the appropriate components. The SCM process defines a number of tasks:
• Identification of objects in the software configuration
• Version Control
• Change Control
• Configuration Audit
• Status Reporting
Identification
• Basic Object: Unit of Text created by a software engineer during analysis, design, code, or
test.
• Aggregate Object: A collection of basic objects and other aggregate objects. A design specification is an aggregate object.
• Each object has a set of distinct characteristics that identify it uniquely: a name, a
description, a list of resources, and a "realization."
• The interrelationships between configuration objects can be described with a Module
Interconnection Language (MIL).
Version Control
• Version Control combines procedures and tools to handle the different versions of configuration objects that are generated during the software process.
• Clemm defines version control in the context of SCM: Configuration management allows a
user to specify the alternative configuration of the software system through the selection
of appropriate versions. This is supported by associating attributes with each software
version, and then allowing a configuration to be specified [and constructed] by describing
the set of desired attributes.
Change Control
• James Bach describes change control in the context of SCM as follows: Change control is vital. But the forces that make it essential also make it annoying.
• We worry about change because a small confusion in the code can create a big failure in the
product. But it can also fix a significant failure or enable incredible new capabilities.
• We worry about change because a single rogue developer could sink the project, yet brilliant
ideas originate in the mind of those rogues, and
• A burdensome change control process could effectively discourage them from doing creative
work.
• A change request is submitted and evaluated to assess technical merit, potential side effects, the overall impact on other configuration objects and system functions, and the projected cost of the change.
• The results of the evaluations are presented as a change report, which is used by a change
control authority (CCA) - a person or a group who makes a final decision on the status and
priority of the change.
• The "check-in" and "check-out" process implements two necessary elements of change control: access control and synchronization control (a small check-out/check-in sketch follows below).
• Access Control governs which software engineers have the authority to access and modify a particular configuration object.
• Synchronization Control helps to ensure that parallel changes, performed by two different people, don't overwrite one another.
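The check-out/check-in sketch mentioned above: a toy illustration of access control (who may modify an item) and synchronization control (locking so that parallel changes do not overwrite each other). The names and rules are hypothetical and do not represent any real SCM tool's API.

```python
# Toy sketch of check-out / check-in with access and synchronization control.

authorized = {"config_item_A": {"alice", "bob"}}   # access control list
checked_out_by = {}                                # lock table (synchronization)

def check_out(item: str, engineer: str) -> bool:
    if engineer not in authorized.get(item, set()):
        return False                               # access control refuses
    if item in checked_out_by:
        return False                               # someone else holds the lock
    checked_out_by[item] = engineer
    return True

def check_in(item: str, engineer: str) -> bool:
    if checked_out_by.get(item) == engineer:
        del checked_out_by[item]                   # release the lock
        return True
    return False

print(check_out("config_item_A", "alice"))   # True
print(check_out("config_item_A", "bob"))     # False: alice has it checked out
print(check_in("config_item_A", "alice"))    # True
```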
• Configuration Audit
• SCM audits verify that the software product satisfies the baseline requirements and ensure that what is built is what is delivered.
• SCM audits also ensure that traceability is maintained between all CIs and that all work requests are associated with one or more CI modifications.
• SCM audits are the "watchdogs" that ensure that the integrity of the project's scope is preserved.
• Status Reporting
• Configuration Status Reporting (sometimes also called status accounting) provides accurate status and current configuration data to developers, testers, end users, customers, and stakeholders through admin guides, user guides, FAQs, Release Notes, Installation Guides, Configuration Guides, etc.
Software Quality Assurance
What is Quality?
• Quality refers to any measurable characteristic such as correctness, maintainability, portability, testability, usability, reliability, efficiency, integrity, reusability, and interoperability.
• There are two kinds of quality:
• Quality of Design: Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.
• Quality of conformance: Quality of conformance is the degree to which the design specifications are followed during manufacturing. The greater the degree of conformance, the higher the level of quality of conformance.
• Software Quality: Software quality is defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and inherent characteristics that are expected of all professionally developed software.
• Quality Control: Quality control involves a series of inspections, reviews, and tests used throughout the software process to ensure each work product meets the requirements placed upon it. Quality control includes a feedback loop to the process that created the work product.
• Quality Assurance: Quality assurance is the preventive set of activities that provide greater confidence that the project will be completed successfully.
• Quality assurance focuses on how the engineering and management activities will be done.
• As everyone is interested in the quality of the final product, it should be assured that we are building the right product.
• This can be assured only when we inspect and review the intermediate products; if there are any bugs, they are debugged. In this way, quality can be enhanced.
• Importance of Quality
• We would expect quality to be a concern of all producers of goods and services. However, the distinctive characteristics of software, in particular its intangibility and complexity, make special demands.
• Increasing criticality of software: The final customer or user is naturally concerned about the general quality of software, especially its reliability. This is increasingly the case as organizations become more dependent on their computer systems and software is used more and more in safety-critical areas, for example, to control aircraft.
• The intangibility of software: This makes it challenging to know that a
particular task in a project has been completed satisfactorily. The
results of these tasks can be made tangible by demanding that the
developers produce 'deliverables' that can be examined for quality.
• Accumulating errors during software development: As computer system development is made up of several steps where the output from one level is input to the next, errors in the earlier 'deliverables' will be added to those in the later stages, leading to accumulated detrimental effects. In general, the later in a project that an error is found, the more expensive it will be to fix. In addition, because the number of errors in the system is unknown, the debugging phases of a project are particularly challenging to control.
Software Quality Assurance
• Software quality assurance is a planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.
• It is a set of activities designed to evaluate the process by which the products are developed or manufactured.
SQA Encompasses
• A quality management approach
• Effective software engineering technology (methods and tools)
• Formal technical reviews that are applied throughout the software process
• A multitier testing strategy
• Control of software documentation and the changes made to it
• A procedure to ensure compliance with software development standards
• Measuring and reporting mechanisms.
SQA Activities
Software quality assurance is composed of a variety of functions associated with two different constituencies: the software engineers who do technical work, and an SQA group that has responsibility for quality assurance planning, record keeping, analysis, and reporting.
The following activities are performed by an independent SQA group:
• Prepares an SQA plan for a project: The plan is developed during project planning and is reviewed by all stakeholders. The plan governs quality assurance activities performed by the software engineering team and the SQA group. The plan identifies evaluations to be performed, audits and reviews to be conducted, standards that apply to the project, techniques for error reporting and tracking, documents to be produced by the SQA team, and the amount of feedback provided to the software project team.
• Participates in the development of the project's software process description: The
software team selects a process for the work to be performed. The SQA group reviews
the process description for compliance with organizational policy, internal software
standards, externally imposed standards (e.g. ISO-9001), and other parts of the
software project plan.
• Reviews software engineering activities to verify compliance with the
defined software process: The SQA group identifies, reports, and tracks
deviations from the process and verifies that corrections have been made.
• Audits designated software work products to verify compliance with
those defined as a part of the software process: The SQA group reviews
selected work products, identifies, documents and tracks deviations, verify
that corrections have been made, and periodically reports the results of its
work to the project manager.
• Ensures that deviations in software work and work products are documented and handled according to a documented procedure: Deviations may be encountered in the project plan, process description, applicable standards, or technical work products.
• Records any noncompliance and reports to senior management: Non-
compliance items are tracked until they are resolved.
Quality Assurance v/s Quality Control
• QA: Quality Assurance (QA) is the set of actions, including facilitation, training, measurement, and analysis, needed to provide adequate confidence that processes are established and continuously improved to produce products or services that conform to specifications and are fit for use.
  QC: Quality Control (QC) is described as the processes and methods used to compare product quality to requirements and applicable standards, and the actions taken when a nonconformance is detected.
• QA: QA is an activity that establishes and evaluates the processes that produce the product. If there is no process, there is no role for QA.
  QC: QC is an activity that demonstrates whether or not the product produced met standards.
• QA: QA helps establish processes.
  QC: QC relates to a particular product or service.
• QA: QA sets up a measurement program to evaluate processes.
  QC: QC verifies whether particular attributes exist, or do not exist, in a specific product or service.
• QA: QA identifies weaknesses in processes and improves them.
  QC: QC identifies defects with the primary goal of correcting errors.
• QA: Quality Assurance is a managerial tool.
  QC: Quality Control is a corrective tool.
• QA: Verification is an example of QA.
  QC: Validation is an example of QC.
Project Monitoring and Control
• Monitoring and Controlling are processes needed to track, review, and regulate the progress and performance of the project. It also identifies any areas where changes to the project management method are required and initiates the required changes.
• The Monitoring & Controlling process group includes eleven processes, which are:
• Monitor and control project work: The generic step under which all other monitoring and controlling
activities fall under.
• Perform integrated change control: The functions involved in making changes to the project plan.
When changes to the schedule, cost, or any other area of the project management plan are necessary,
the program is changed and re-approved by the project sponsor.
• Validate scope: The activities involved with gaining approval of the project's deliverables.
• Control scope: Ensuring that the scope of the project does not change and that unauthorized activities
are not performed as part of the plan (scope creep).
• Control schedule: The functions involved with ensuring the project work is performed according to the
schedule, and that project deadlines are met.
• Control costs: The tasks involved with ensuring the project costs stay within the approved budget.
• Control quality: Ensuring that the quality of the project’s deliverables is to the standard defined in the
project management plan.
• Control communications: Providing for the communication needs of each project stakeholder.
• Control Risks: Safeguarding the project from unexpected events that negatively impact the project's
budget, schedule, stakeholder needs, or any other project success criteria.
• Control procurements: Ensuring the project's subcontractors and vendors meet the project goals.
• Control stakeholder engagement: The tasks involved with ensuring that all of the project's stakeholders
are left satisfied with the project work.
Software Quality
• The quality of a software product is defined in terms of its fitness of purpose. That is, a quality product does precisely what the users want it to do. For software products, fitness of purpose is generally interpreted in terms of satisfaction of the requirements laid down in the SRS document. Although "fitness of purpose" is a satisfactory interpretation of quality for many products, such as a car, a table fan, or a grinding machine, for software products "fitness of purpose" is not a wholly satisfactory definition of quality.
Example: Consider a functionally correct software product that performs all the tasks specified in the SRS document but has an almost unusable user interface. Even though it may be functionally correct, we cannot consider it to be a quality product.
• The modern view of quality associates several quality factors with a software product, such as the following:
Portability: A software product is said to be portable if it can easily be made to work in various operating system environments, on multiple machines, with other software products, etc.
Usability: A software product has better usability if various categories of users can easily invoke the functions
of the product.
Reusability: A software product has excellent reusability if different modules of the product can quickly be
reused to develop new products.
Correctness: A software product is correct if various requirements as specified in the SRS document have been
correctly implemented.
Maintainability: A software product is maintainable if bugs can be easily corrected as and when they show up,
new tasks can be easily added to the product, and the functionalities of the product can be easily modified,
etc.
Software Quality Management System
• A quality management system (quality system) is the principal methodology used by organizations to ensure that the products they develop have the desired quality.
• A quality system consists of the following:
• Managerial Structure and Individual Responsibilities: A quality system is the responsibility of the organization as a whole. However, every organization has a separate quality department to perform various quality system activities. The quality system of an organization should have the support of the top management. Without support for the quality system at a high level in a company, few members of staff will take the quality system seriously.
• Quality System Activities: The quality system activities encompass the following:
• Auditing of projects
• Review of the quality system
• Development of standards, procedures, and guidelines, etc.
• Production of reports for the top management summarizing the effectiveness of the quality system in the organization.
Evolution of Quality Management System
• Quality systems have evolved rapidly over the last five decades. Before World War II, the usual method to produce quality products was to inspect the finished products and remove the defective ones. Since that time, the quality systems of organizations have undergone four stages of evolution, as shown in the fig. The initial product inspection task gave way to quality control (QC).
• Quality control focuses not only on detecting defective products and removing them but also on determining the causes of the defects. Thus, quality control aims at correcting the causes of defects and not just rejecting the defective products. The next breakthrough in quality methods was the development of quality assurance methods.
• The basic premise of modern quality assurance is that if an organization's processes are proper and are followed rigorously, then the products are bound to be of good quality. The modern quality assurance functions include guidance for recognizing, defining, analyzing, and improving the production process.
• Total quality management (TQM) advocates that the processes followed by an organization must be continuously improved through process measurements. TQM goes a step further than quality assurance and aims at continuous process improvement. TQM goes beyond documenting processes to optimizing them through redesign. A term related to TQM is Business Process Reengineering (BPR).
• BPR aims at reengineering the way business is carried out in an organization. From the above discussion, it can be stated that over the years, the quality paradigm has shifted from product assurance to process assurance, as shown in fig.
• ISO 9000 Certification
• ISO (International Standards Organization) is a group or consortium of 63 countries established to plan and foster standardization. ISO declared its 9000 series of standards in 1987. It serves as a reference for contracts between independent parties. The ISO 9000 standard determines the guidelines for maintaining a quality system. The ISO standard mainly addresses operational aspects and organizational aspects such as responsibilities, reporting, etc. ISO 9000 defines a set of guidelines for the production process and is not directly concerned with the product itself.
• Types of ISO 9000 Quality Standards
• The ISO 9000 series of standards is based on the assumption that if a proper process is followed for production, then good quality products are bound to follow automatically. The types of industries to which the various ISO standards apply are as follows.
• ISO 9001: This standard applies to organizations engaged in the design, development, production, and servicing of goods. This is the standard that applies to most software development organizations.
• ISO 9002: This standard applies to those organizations which do not design products but are only involved in production. Examples of this category include steel and car manufacturing industries that buy the product and plant designs from external sources and are engaged only in manufacturing those products. Therefore, ISO 9002 does not apply to software development organizations.
• ISO 9003: This standard applies to organizations that are involved only in the installation and testing of products. For example, gas companies.
How to get ISO 9000 Certification?
• An organization that decides to obtain ISO 9000 certification applies to a registrar's office for registration. The process consists of the following stages:
• Application: Once an organization decides to go for ISO certification, it applies to the registrar for registration.
• Pre-Assessment: During this stage, the registrar makes a rough assessment of the organization.
• Document review and Adequacy Audit: During this stage, the registrar reviews the documents submitted by the organization and suggests improvements.
• Compliance Audit: During this stage, the registrar checks whether the organization has complied with the suggestions made during the document review.
• Registration: The registrar awards the ISO certification after the successful completion of all the phases.
• Continued Inspection: The registrar continues to monitor the organization from time to time.
Software Engineering Institute Capability Maturity
Model (SEI CMM)
• The Capability Maturity Model (CMM) is a procedure used to develop and refine an organization's software development process.
• The model defines a five-level evolutionary path of increasingly organized and consistently more mature processes.
• CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD).
• The Capability Maturity Model is used as a benchmark to measure the maturity of an organization's software process.
• Methods of SEI CMM
• There are two methods of SEI CMM:
• Capability Evaluation: Capability evaluation provides a way to assess the software process capability of an organization. The results of a capability evaluation indicate the likely contractor performance if the contractor is awarded the work. Therefore, the results of the software process capability assessment can be used to select a contractor.
• Software Process Assessment: Software process assessment is used by an organization to improve its process capability. Thus, this type of assessment is for purely internal use.
• SEI CMM categorizes software development organizations into the following five maturity levels. The various levels of SEI CMM have been designed so that it is easy for an organization to slowly build its quality system starting from scratch.
Level 1: Initial
• A software development organization at this level is characterized by ad hoc activities. Very few or no processes are defined and followed. Since software production processes are not defined, different engineers follow their own processes, and as a result, development efforts become chaotic. Therefore, it is also called the chaotic level.
Level 2: Repeatable
• At this level, the fundamental project management practices like tracking cost and schedule are established. Size and cost estimation methods, like function point analysis, COCOMO, etc., are used.
Level 3: Defined
• At this level, the processes for both management and development activities are defined and documented. There is a common organization-wide understanding of activities, roles, and responsibilities. Though the processes are defined, the process and product qualities are not measured. ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.
• Product metrics measure the features of the product being developed, such as its size, reliability, time complexity, understandability, etc.
• Process metrics track the effectiveness of the process being used, such as average defect correction time, productivity, the average number of defects found per hour of inspection, the average number of failures detected during testing per LOC, etc. The software process and product quality are measured, and quantitative quality requirements for the product are met. Various tools like Pareto charts, fishbone diagrams, etc. are used to measure product and process quality. The process metrics are used to analyze whether a project performed satisfactorily. Thus, the outcome of process measurements is used to evaluate project performance rather than improve the process.
Level 5: Optimizing
• At this phase, process and product metrics are collected. Process and product
measurement data are evaluated for continuous process improvement.
• Key Process Areas (KPA) of a software
organization
• Except for SEI CMM Level 1, each maturity level is characterized by several Key Process Areas (KPAs) that identify the areas an organization should focus on to improve its software process to the next level. The focus of each level and the corresponding key process areas are shown in the fig.
• SEI CMM provides a series of key areas on which to focus to take an organization
from one level of maturity to the next. Thus, it provides a method for gradual quality
improvement over various stages. Each step has been carefully designed such that
one step enhances the capability already built up.
• People Capability Maturity Model (PCMM)
• PCMM is a maturity framework that focuses on continuously improving the management and development of the human assets of an organization.
• It defines an evolutionary improvement path from ad hoc, inconsistently performed practices to mature, disciplined, and continuously improving development of the knowledge, skills, and motivation of the workforce that enhances strategic business performance.
• The People Capability Maturity Model (PCMM) is a framework that helps organizations successfully address their critical people issues. Based on current best practices in fields such as human resources, knowledge management, and organizational development, the PCMM guides organizations in improving their processes for managing and developing their workforces.
• The People CMM defines an evolutionary improvement path from ad hoc, inconsistently performed workforce practices to a mature infrastructure of practices for continuously elevating workforce capability.
• The PCMM consists of five maturity levels that lay successive foundations for continuously improving talent, developing effective methods, and successfully directing the people assets of the organization. Each maturity level is a well-defined evolutionary plateau that institutionalizes a level of capability for developing the talent within the organization.
• The five steps of the People CMM framework are:
Initial Level: Maturity Level 1
• The Initial Level of maturity includes no process areas. Although the workforce practices performed in Maturity Level 1 organizations tend to be inconsistent or ritualistic, virtually all of these organizations perform processes that are described in the Maturity Level 2 process areas.
Managed Level: Maturity Level 2
• To achieve the Managed Level, Maturity Level 2, managers start to perform necessary people management practices such as staffing, managing performance, and adjusting compensation as a repeatable management discipline. The organization establishes a culture focused at the unit level for ensuring that people can meet their work commitments. In achieving Maturity Level 2, the organization develops the capability to manage skills and performance at the unit level. The process areas at Maturity Level 2 are Staffing, Communication and Coordination, Work Environment, Performance Management, Training and Development, and Compensation.
Defined Level: Maturity Level 3
• The fundamental objective of the Defined Level is to help an organization gain a competitive benefit from developing the different competencies that must be combined in its workforce to accomplish its business activities. These workforce competencies represent critical pillars supporting the strategic business plan; by tying workforce competencies to current and future business objectives, the improved workforce practices implemented at Maturity Level 3 become crucial enablers of business strategy.
Predictable Level: Maturity Level 4
• At the Predictable Level, the organization manages and exploits the capability developed by its framework of workforce competencies. The organization is now able to manage its capability and performance quantitatively. The organization can predict its capability for performing work because it can quantify the ability of its workforce and of the competency-based processes they use in performing their assignments.
Optimizing Level: Maturity Level 5
• At the Optimizing Level, the entire organization is focused on continual improvement. These improvements are made to the capability of individuals and workgroups, to the performance of competency-based processes, and to workforce practices and activities.
Six Sigma
• Six Sigma is the process of improving the quality of output by identifying and eliminating the causes of defects and reducing variability in manufacturing and business processes. The maturity of a manufacturing process can be described by a sigma rating indicating the percentage of defect-free products it creates. A Six Sigma process is one in which 99.99966% of all the opportunities to produce some feature of a component are statistically expected to be free of defects (3.4 defective features per million opportunities).
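• The sketch below (not from the slides; the production figures are invented for illustration) shows how the defects-per-million-opportunities (DPMO) figure behind a sigma rating is computed.

// Hypothetical C++ sketch: computing DPMO and defect-free yield.
#include <cstdio>

int main() {
    long units = 500000;        // units produced (made-up figure)
    long opportunities = 20;    // defect opportunities per unit (made-up figure)
    long defects = 34;          // defects actually observed (made-up figure)

    long total = units * opportunities;                  // total opportunities
    double dpmo  = (double)defects / total * 1000000.0;  // defects per million
    double yield = 100.0 * (1.0 - (double)defects / total);

    // With these figures DPMO = 3.4 and yield = 99.99966%, the Six Sigma
    // threshold quoted above.
    std::printf("DPMO = %.1f, defect-free yield = %.5f%%\n", dpmo, yield);
    return 0;
}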
History of Six Sigma
• Six Sigma is a set of methods and tools for process improvement. It was introduced by engineer Bill Smith while working at Motorola in 1986. In the 1980s, Motorola was developing Quasar televisions, which were popular, but at the time there were many defects arising from picture quality and sound variations.
• Using the same raw material, machinery, and workforce, a Japanese firm took over Quasar television production, and within a few months it produced Quasar TV sets with far fewer defects. This was achieved by improving management techniques.
• Six Sigma was adopted by Bob Galvin, the CEO of Motorola, in 1986 and was registered as a Motorola trademark on December 28, 1993; Motorola then became a quality leader.
• Characteristics of Six Sigma
• The Characteristics of Six Sigma are as follows:
• Statistical Quality Control: Six Sigma is derived from the Greek letter σ (sigma), which is used to denote standard deviation in statistics. Standard deviation is used to measure variability, which is an essential tool for measuring non-conformance as far as the quality of output is concerned.
• Methodical Approach: Six Sigma is not merely a quality improvement strategy in theory; it features a well-defined systematic approach of application in DMAIC and DMADV, which can be used to improve the quality of production. DMAIC is an acronym for Define-Measure-Analyze-Improve-Control. The alternative method, DMADV, stands for Define-Measure-Analyze-Design-Verify.
• Fact and Data-Based Approach: The statistical and methodical aspect of Six Sigma shows the scientific basis of the technique. This accentuates an essential element of Six Sigma: it is fact- and data-based.
• Project and Objective-Based Focus: The Six Sigma process is implemented for an organization's projects tailored to its specification and requirements. The process is flexed to suit the requirements and conditions in which the projects operate, to get the best results.
• Customer Focus: The customer focus is fundamental to the Six
Sigma approach. The quality improvement and control standards
are based on specific customer requirements.
• Teamwork Approach to Quality Management: The Six Sigma process requires organizations to get organized when it comes to controlling and improving quality. Six Sigma involves a lot of training, depending on the role of an individual in the quality management team.
• Six Sigma Methodologies
• Six Sigma projects follow two project methodologies:
• DMAIC
• DMADV
• DMAIC
• It specifies a data-driven
quality strategy for improving
processes. This methodology is
used to enhance an existing
business process.
• The DMAIC project
methodology has five phases:
• Define: It covers the process mapping and flow-charting, project charter
development, problem-solving tools, and so-called 7-M tools.
• Measure: It includes the principles of measurement, continuous and discrete
data, and scales of measurement, an overview of the principle of variations and
repeatability and reproducibility (RR) studies for continuous and discrete data.
• Analyze: It covers establishing a process baseline, how to determine process
improvement goals, knowledge discovery, including descriptive and exploratory
data analysis and data mining tools, the basic principle of Statistical Process
Control (SPC), specialized control charts, process capability analysis, correlation
and regression analysis, analysis of categorical data, and non-parametric
statistical methods.
• Improve: It covers project management, risk assessment, process simulation, and
design of experiments (DOE), robust design concepts, and process optimization.
• Control: It covers process control planning, using SPC for operational control and
PRE-Control.
• DMADV
• It specifies a data-driven quality strategy for designing products and processes. This method is used to create new product designs or process designs in such a way that the result is a more predictable, mature, and defect-free performance.
• The DMADV project methodology has five
phases:
• Define: It defines the problem or project goal that
needs to be addressed.
• Measure: It measures and determines the
customer's needs and specifications.
• Analyze: It analyzes the process to meet customer
needs.
• Design: It can design a process that will meet
customer needs.
• Verify: It can verify the design performance and
ability to meet customer needs.
Software Design
• Software design is a mechanism to transform user requirements into some suitable form, which helps the programmer in software coding and implementation. It deals with representing the client's requirements, as described in the SRS (Software Requirement Specification) document, in a form that is easily implementable using a programming language.
• The software design phase is the step in the SDLC (Software Development Life Cycle) that moves the concentration from the problem domain to the solution domain. In software design, we consider the system to be a set of components or modules with clearly defined behaviors and boundaries.
Objectives of Software Design
The purposes of software design are as follows:
• Correctness: Software design should be correct as per the requirements.
• Completeness: The design should have all the components, like data structures, modules, and external interfaces, etc.
• Efficiency: Resources should be used efficiently by the program.
• Flexibility: The design should be able to accommodate changing needs.
• Consistency: There should not be any inconsistency in the design.
• Maintainability: The design should be simple enough that it can be easily maintained by other designers.
Software Design Principles
• Software design principles are concerned with providing means to handle the complexity of the design process effectively. Effectively managing this complexity not only reduces the effort needed for design but also reduces the scope for introducing errors during design.
Following are the principles of Software Design
Problem Partitioning
• For a small problem, we can handle the entire problem at once, but for a significant problem we divide and conquer: the problem is divided into smaller pieces so that each piece can be handled separately.
• For software design, the goal is to divide the problem into manageable pieces.
• Benefits of Problem Partitioning
• Software is easy to understand
• Software becomes simple
• Software is easy to test
• Software is easy to modify
• Software is easy to maintain
• Software is easy to expand
• These pieces cannot be entirely independent of each other as they together form
the system. They have to cooperate and communicate to solve the problem. This
communication adds complexity.
Abstraction
• An abstraction is a tool that enables a designer to consider a component at an abstract level without bothering about the internal details of its implementation. Abstraction can be used for existing elements as well as for the component being designed.
• There are two common abstraction mechanisms:
• Functional Abstraction
• Data Abstraction
• Functional Abstraction
• A module is specified by the function it performs.
• The details of the algorithm used to accomplish the function are not visible to the user of the function.
• Functional abstraction forms the basis for function-oriented design approaches.
• Data Abstraction
• Details of the data elements are not visible to the users of the data. Data abstraction forms the basis for object-oriented design approaches.
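• The following C++ sketch (not from the slides; the function and class names are invented for illustration) contrasts the two mechanisms: callers of average() rely only on what it computes, and users of Counter see only its operations, not its internal representation.

// Hypothetical sketch: functional abstraction vs. data abstraction.
#include <iostream>
#include <numeric>
#include <vector>

// Functional abstraction: the caller knows WHAT average() does,
// not HOW the sum is accumulated internally.
double average(const std::vector<double>& values) {
    if (values.empty()) return 0.0;
    return std::accumulate(values.begin(), values.end(), 0.0) / values.size();
}

// Data abstraction: users of Counter see only its operations;
// the internal representation (a single int) is hidden.
class Counter {
public:
    void increment() { ++count_; }
    int value() const { return count_; }
private:
    int count_ = 0;
};

int main() {
    std::cout << average({2.0, 4.0, 6.0}) << '\n';  // prints 4
    Counter c;
    c.increment();
    std::cout << c.value() << '\n';                 // prints 1
}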
Modularity
• Modularity refers to the division of software into separate modules that are differently named and addressed and are integrated later to obtain the complete, functional software. It is the one property that allows a program to be intellectually manageable. Single large programs are difficult to understand and read due to the large number of reference variables, control paths, global variables, etc.
• The desirable properties of a modular system are:
• Each module is a well-defined system that can be used with other applications.
• Each module has a single, specified objective.
• Modules can be separately compiled and saved in a library.
• Modules should be easier to use than to build.
• Modules are simpler from outside than inside.
• Advantages and Disadvantages of Modularity
• In this topic, we discuss the various advantages and disadvantages of modularity.
Advantages of Modularity
• There are several advantages of modularity:
• It allows large programs to be written by several different people.
• It encourages commonly used routines to be placed in a library and used by other programs.
• It simplifies the overlay procedure of loading a large program into main storage.
• It provides more checkpoints to measure progress.
• It provides a framework for complete testing, making the program more accessible to test.
• It produces well-designed and more readable programs.
• Disadvantages of Modularity
• There are several disadvantages of modularity:
• Execution time may be, though not necessarily, longer.
• Storage size may be, though not necessarily, increased.
• Compilation and loading time may be longer.
• Inter-module communication problems may be increased.
• More linkage is required, run time may be longer, more source lines must be written, and more documentation has to be done.
Modular Design
• Modular design reduces the design complexity and results in easier and faster implementation by allowing parallel development of various parts of a system. We discuss the different aspects of modular design in detail in this section:
• 1. Functional Independence: Functional independence is achieved by developing functions that perform only one kind of task and do not interact excessively with other modules. Independence is important because it makes implementation easier and faster. Independent modules are easier to maintain and test, reduce error propagation, and can be reused in other programs as well. Thus, functional independence is a good design feature which ensures software quality.
• It is measured using two criteria:
• Cohesion: It measures the relative function strength of a module.
• Coupling: It measures the relative interdependence among modules.
• 2. Information hiding: The principle of information hiding suggests that modules should be characterized by design decisions that are hidden from the others; in other words, modules should be specified and designed so that data contained within a module is inaccessible to other modules that have no need for that information.
• The use of information hiding as a design criterion for modular systems provides its most significant benefits when modifications are required during testing and later during software maintenance. Because most data and procedures are hidden from other parts of the software, inadvertent errors introduced during modifications are less likely to propagate to other locations within the software.
• Strategy of Design
• A good system design strategy is to organize the program modules in such a way that they are easy to develop and, later, to change. Structured design methods help developers deal with the size and complexity of programs. Analysts generate instructions for the developers about how code should be written and how pieces of code should fit together to form a program.
• To design a system, there are two possible approaches:
• Top-down Approach
• Bottom-up Approach
• To design a system, there are two possible approaches:
• Top-down Approach
• Bottom-up Approach
• 1. Top-down Approach: This approach starts
with the identification of the main
components and then decomposing them into
their more detailed sub-components.
• 2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the hierarchy, as shown in fig. This approach is suitable in the case of an existing system.
Coupling and Cohesion
• Module Coupling
• In software engineering, coupling is the degree of interdependence between software modules. Two modules that are tightly coupled are strongly dependent on each other, whereas two modules that are loosely coupled are not strongly dependent on each other. Uncoupled modules have no interdependence at all between them.
• The various types of coupling techniques are
shown in fig:
• A good design is one that has low coupling. Coupling is measured by the number of relations between the modules; that is, coupling increases as the number of calls between modules increases or the amount of shared data grows. Thus, it can be said that a design with high coupling will have more errors.
Types of Module Coupling
• 1. No Direct Coupling: There is no direct coupling between M1 and M2. In this case, the modules are subordinate to different modules; therefore, there is no direct coupling.
• 2. Data Coupling: When data of one module is passed to another module, this is called data coupling.
• 3. Stamp Coupling: Two modules are stamp coupled if they
communicate using composite data items such as structure, objects, etc.
When the module passes non-global data structure or entire structure
to another module, they are said to be stamp coupled. For example,
passing structure variable in C or object in C++ language to a module.
• 4. Control Coupling: Control Coupling exists among two modules if data
from one module is used to direct the structure of instruction execution
in another.
• 5. External Coupling: External Coupling arises when two modules share
an externally imposed data format, communication protocols, or device
interface. This is related to communication to external tools and devices.
• 6. Common Coupling: Two modules are common coupled if they share
information through some global data items.
• 7. Content Coupling: Content Coupling exists among two modules if they
share code, e.g., a branch from one module into another module.
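• The short C++ sketch below (not from the slides; the names and figures are invented for illustration) shows a few of these coupling types side by side: a global shared by several functions (common coupling), a scalar passed as an argument (data coupling), a whole structure passed across a call (stamp coupling), and a flag that directs the callee's logic (control coupling).

// Hypothetical sketch of some coupling types.
#include <iostream>
#include <string>

int gTaxRate = 18;                       // common coupling: shared global data

struct Employee { std::string name; double basic; double overtime; };

double computeBonus(double basic) {      // data coupling: plain data passed in
    return basic * 0.1;
}

double grossPay(const Employee& e) {     // stamp coupling: whole structure passed
    return e.basic + e.overtime;
}

void printAmount(double amount, bool asInteger) {  // control coupling: the flag
    if (asInteger)                                 // directs the callee's logic
        std::cout << static_cast<long>(amount) << '\n';
    else
        std::cout << amount << '\n';
}

int main() {
    Employee e{"Asha", 30000.0, 2500.0};
    printAmount(computeBonus(e.basic), true);
    printAmount(grossPay(e) * (1 + gTaxRate / 100.0), false);
}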
Module Cohesion
• In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion measures the strength of the relationships between pieces of functionality within a given module. For example, in highly cohesive systems, functionality is strongly related.
• Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or "low cohesion."
Functional Cohesion: Functional cohesion is said to exist if the different elements of a module cooperate to achieve a single function.
Sequential Cohesion: A module is said to possess sequential cohesion if the elements of the module form the components of a sequence, where the output from one component of the sequence is input to the next.
Communicational Cohesion: A module is said to have communicational cohesion if all tasks of the module refer to or update the same data structure, e.g., the set of functions defined on an array or a stack.
Procedural Cohesion: A module is said to have procedural cohesion if the functions of the module are all parts of a procedure in which a particular sequence of steps has to be carried out to achieve a goal, e.g., the algorithm for decoding a message.
Temporal Cohesion: When a module includes functions that are associated only by the fact that they must all be executed at the same time, the module is said to exhibit temporal cohesion.
Logical Cohesion: A module is said to be logically cohesive if all the elements of
the module perform a similar operation. For example Error handling, data input
and data output, etc.
Coincidental Cohesion: A module is said to have coincidental cohesion if it
performs a set of tasks that are associated with each other very loosely, if at all.
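• The following C++ sketch (not from the slides; the functions are invented for illustration) contrasts a functionally cohesive module, whose every element serves one computation, with a coincidentally cohesive one, which bundles unrelated tasks only for convenience.

// Hypothetical sketch: functional cohesion vs. coincidental cohesion.
#include <cmath>
#include <iostream>
#include <string>

// Functional cohesion: every statement cooperates to compute one thing.
double distanceBetween(double x1, double y1, double x2, double y2) {
    double dx = x2 - x1, dy = y2 - y1;
    return std::sqrt(dx * dx + dy * dy);
}

// Coincidental cohesion: unrelated tasks bundled into one "utility" module.
void miscUtilities(const std::string& msg) {
    std::cout << msg << '\n';                 // logging
    std::cout << "today is day 15" << '\n';   // an unrelated date chore
    // ... unrelated formatting or file clean-up would typically follow here
}

int main() {
    std::cout << distanceBetween(0, 0, 3, 4) << '\n';  // prints 5
    miscUtilities("hello");
}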
Function Oriented Design
Function-oriented design is a method of software design where the model is decomposed into a set of interacting units or modules, where each unit or module has a clearly defined function. Thus, the system is designed from a functional viewpoint.
Design Notations
Design notations are primarily meant to be used during the process of design and are used to represent design or design decisions. For a function-oriented design, the design can be represented graphically or mathematically by the following:
Data Flow Diagram
Data-flow design is concerned with designing a series of functional transformations that convert system inputs into the required outputs. The design is described as data-flow diagrams. These diagrams show how data flows through a system and how the output is derived from the input through a series of functional transformations.
Data-flow diagrams are a useful and intuitive way of describing a system. They are generally understandable without specialized training, notably if control information is excluded. They show end-to-end processing; that is, the flow of processing from when data enters the system to where it leaves the system can be traced. Data-flow design is an integral part of several design methods, and most CASE tools support data-flow diagram creation. Different notations may use different icons to represent data-flow diagram entities, but their meanings are similar.
The report generator produces a report which describes all of the named entities
in a data-flow diagram. The user inputs the name of the design represented by
the diagram. The report generator then finds all the names used in the data-flow
diagram. It looks up a data dictionary and retrieves information about each
name. This is then collated into a report which is output by the system.
Data Dictionaries
A data dictionary lists all data elements appearing in the DFD model of a system. The data items listed include all data flows and the contents of all data stores appearing on the DFDs in the DFD model of the system.
A data dictionary lists the purpose of all data items and the definition of all composite data elements in terms of their component data items. For example, a data dictionary entry may state that the data item grossPay consists of the components regularPay and overtimePay.
grossPay = regularPay + overtimePay
For the smallest units of data elements, the data dictionary lists their name and
their type.
A data dictionary plays a significant role in any software development process
because of the following reasons:
• A data dictionary provides a standard language for all relevant information for use by the engineers working on a project. A consistent vocabulary for data items is essential since, in large projects, different engineers tend to use different terms to refer to the same data, which unnecessarily causes confusion.
• The data dictionary provides the analyst with a
means to determine the definition of various data
structures in terms of their component elements.
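• As a small, purely illustrative sketch (not from the slides), the grossPay entry above could be recorded in a machine-readable dictionary as simple name-to-definition pairs:

// Hypothetical sketch of a tiny data dictionary in C++.
#include <iostream>
#include <map>
#include <string>

int main() {
    // name -> definition in terms of component data items or a base type
    std::map<std::string, std::string> dataDictionary = {
        {"grossPay",    "regularPay + overtimePay"},
        {"regularPay",  "real"},
        {"overtimePay", "real"}
    };
    for (const auto& [name, definition] : dataDictionary)
        std::cout << name << " = " << definition << '\n';
}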
Structured Charts
• A structure chart partitions a system into black boxes. A black box is a component whose functionality is known to the user without knowledge of its internal design.
A structure chart is a graphical representation which shows:
• System partitions into modules
• Hierarchy of component modules
• The relation between processing modules
• Interaction between modules
• Information passed between modules
• Pseudo-code
• Pseudo-code notations can be used in both the preliminary and detailed design phases. Using pseudo-code, the designer describes system characteristics using short, concise, English-language phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
Object-Oriented Design
In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities). State is distributed among the objects, and each object handles its own state data. For example, in a Library Automation Software, each library member may be a separate object with its own data and functions to operate on that data. The functions defined for one object cannot refer to or change the data of other objects. Objects have their own internal data, which represents their state. Similar objects form a class; in other words, each object is a member of some class. Classes may inherit features from a superclass.
The different terms related to object
design are:
• Objects: All entities involved in the solution design are known as objects. For example, persons, banks, companies, and users are considered objects. Every entity has some attributes associated with it and some methods that operate on those attributes.
• Classes: A class is a generalized description of an object. An object is an instance of a class. A class defines all the attributes which an object can have and the methods which represent the functionality of the object.
• Messages: Objects communicate by message passing. Messages consist of the identity of the target object, the name of the requested operation, and any other information needed to perform the function. Messages are often implemented as procedure or function calls.
• Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is the removal of the irrelevant and the amplification of the essentials.
• Encapsulation: Encapsulation is also called the information hiding concept. The data and operations are linked into a single unit. Encapsulation not only bundles the essential information of an object together but also restricts access to the data and methods from the outside world.
• Inheritance: OOD allows similar classes to stack up in a hierarchical manner where the lower classes or sub-classes can import, implement, and re-use allowed variables and functions from their immediate superclasses. This property of OOD is called inheritance. It makes it easier to define a specific class and to create generalized classes from specific ones.
• Polymorphism: OOD languages provide a mechanism where methods performing similar tasks but varying in arguments can be assigned the same name. This is known as polymorphism, which allows a single interface to perform functions for different types. Depending upon how the service is invoked, the respective portion of the code gets executed.
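• The brief C++ sketch below (not from the slides; the Library Automation classes and borrowing limits are invented for illustration) ties several of these terms together: Member is a class with encapsulated state, Faculty inherits from it, and the maxBooks() call is resolved polymorphically per object.

// Hypothetical Library Automation sketch of class, encapsulation,
// inheritance, and polymorphism.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

class Member {                           // a class: generalized description
public:
    explicit Member(std::string name) : name_(std::move(name)) {}
    virtual ~Member() = default;
    virtual int maxBooks() const { return 2; }          // polymorphic operation
    const std::string& name() const { return name_; }   // controlled access
private:
    std::string name_;                   // encapsulation: data hidden from outside
};

class Faculty : public Member {          // inheritance: Faculty reuses Member
public:
    using Member::Member;
    int maxBooks() const override { return 10; }
};

int main() {
    std::vector<std::unique_ptr<Member>> members;
    members.push_back(std::make_unique<Member>("Ravi"));
    members.push_back(std::make_unique<Faculty>("Dr. Rao"));
    for (const auto& m : members)        // the same "message" (maxBooks) is
        std::cout << m->name() << " may borrow "   // answered per object type
                  << m->maxBooks() << " books\n";
}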
• User Interface Design
• The user interface is the visual part of a computer application or operating system through which a user interacts with the computer or the software. It determines how commands are given to the computer or the program and how data is displayed on the screen.
• Types of User Interface
• There are two main types of User Interface:
• Text-Based User Interface or Command Line Interface
• Graphical User Interface (GUI)
• Text-Based User Interface: This method relies primarily on the keyboard. A typical
example of this is UNIX.
• Advantages
• Many and easier customization options.
• Typically capable of more powerful tasks.
• Disadvantages
• Relies heavily on recall rather than recognition.
• Navigation is often more difficult.
• Graphical User Interface (GUI): A GUI relies much more heavily on the mouse. A typical example of this type of interface is any version of the Windows operating system.
• GUI Characteristics
• Windows: Multiple windows allow different information to be displayed simultaneously on the user's screen.
• Icons: Icons represent different types of information. On some systems, icons represent files; on others, icons represent processes.
• Menus: Commands are selected from a menu rather than typed in a command language.
• Pointing: A pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window.
• Graphics: Graphics elements can be mixed with text on the same display.
• Advantages
• Less expert knowledge is required to use it.
• Easier to Navigate and can look through folders quickly
in a guess and check manner.
• The user may switch quickly from one task to another
and can interact with several different applications.
• Disadvantages
• Typically decreased options.
• Usually less customizable. Not easy to use one button
for tons of different variations.
UI Design Principles
• Structure: The design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with the overall user interface architecture.
• Simplicity: The design should make simple, common tasks easy, communicating clearly and directly in the user's language, and providing good shortcuts that are meaningfully related to longer procedures.
• Visibility: The design should make all required options and materials for a given
function visible without distracting the user with extraneous or redundant data.
• Feedback: The design should keep users informed of actions or interpretation, changes
of state or condition, and bugs or exceptions that are relevant and of interest to the
user through clear, concise, and unambiguous language familiar to users.
• Tolerance: The design should be flexible and tolerant, decreasing the cost of errors and
misuse by allowing undoing and redoing while also preventing bugs wherever possible
by tolerating varied inputs and sequences and by interpreting all reasonable actions.
Coding
• Coding is the process of transforming the design of a system into a computer language format. This phase of software development is concerned with translating the design specification into source code. It is necessary to write source code and internal documentation so that conformance of the code to its specification can be easily verified.
• Coding is done by coders or programmers, who may be different people from the designer. The goal is not to reduce the effort and cost of the coding phase, but to cut the cost of later stages: the cost of testing and maintenance can be significantly reduced with efficient coding.
• Goals of Coding
• To translate the design of the system into a computer language format: Coding transforms the design of a system into a computer language format which can be executed by a computer and which performs the tasks specified by the design produced during the design phase.
• To reduce the cost of later phases: The cost of testing and maintenance can be significantly reduced with efficient coding.
• Making the program more readable: The program should be easy to read and understand. Having readability and understandability as clear objectives of the coding activity can itself help in producing more maintainable software.
• For implementing our design in code, we require a high-level language. A programming language should have the following characteristics:
Characteristics of Programming Language
• Following are the characteristics of a programming language:
• Readability: A good high-level language will allow programs to be written in ways that resemble a quite-English description of the underlying functions. The coding may be done in an essentially self-documenting way.
• Portability: High-level languages, being virtually machine-independent, make it easy to develop portable software.
• Generality: Most high-level languages allow the writing of a vast collection of programs, thus relieving the programmer of the need to become an expert in many diverse languages.
• Brevity: A language should have the ability to implement an algorithm with a small amount of code. Programs written in high-level languages are often significantly shorter than their low-level equivalents.
• Error checking: A programmer is likely to make many errors in the development of a computer program. Many high-level languages provide extensive error checking both at compile time and at run time.
• Cost: The ultimate cost of a programming language is a function of many of its characteristics.
• Quick translation: It should permit quick translation.
• Efficiency: It should permit the generation of efficient object code.
• Modularity: It is desirable that programs can be
developed in the language as several separately
compiled modules, with the appropriate structure for
ensuring self-consistency among these modules.
• Widely available: Language should be widely available,
and it should be feasible to provide translators for all the
major machines and all the primary operating systems.
• A coding standard lists several rules to be followed
during coding, such as the way variables are to be
named, the way the code is to be laid out, error return
conventions, etc.
Coding Standards
• General coding standards refer to how the developer writes code; here we discuss some essential standards regardless of the programming language being used.
• The following are some representative coding standards:
• Indentation: Proper and consistent indentation is essential in producing easy to read and
maintainable programs.
Indentation should be used to:
– Emphasize the body of a control structure such as a loop or a select statement.
– Emphasize the body of a conditional statement
– Emphasize a new scope block
• Inline comments: Inline comments describing the functioning of the subroutine or key aspects of the algorithm shall be frequently used.
• Rules for limiting the use of globals: These rules list what types of data can be declared global and what cannot.
• Structured Programming: Structured (or modular) programming methods shall be used. "GOTO" statements shall not be used, as they lead to "spaghetti" code, which is hard to read and maintain, except as outlined in the FORTRAN Standards and Guidelines.
• Naming conventions for global variables, local variables, and constant identifiers: A possible naming convention can be that global variable names always begin with a capital letter, local variable names are made of small letters, and constant names are always capital letters.
• Error return conventions and exception handling system: The way different functions in a program report error conditions should be standard within an organization. For example, different functions, on encountering an error condition, should either return a 0 or a 1 consistently.
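• The fragment below is a minimal sketch (not from the slides; the sensor example and retry limit are invented) of code following two of the standards above: the naming convention for globals, locals, and constants, and a single 0/1 error return convention used consistently.

// Hypothetical sketch applying the naming and error-return standards above.
#include <cstdio>

const int MAX_RETRIES = 3;      // constant: all capital letters
int TotalRequests = 0;          // global: begins with a capital letter

int openSensor(int id) {        // returns 0 on success, 1 on error, the single
    int attempts = 0;           // convention used throughout; locals in small letters
    while (attempts < MAX_RETRIES) {
        ++attempts;
        if (id >= 0) {          // stand-in for a real open/initialize call
            ++TotalRequests;
            return 0;
        }
    }
    return 1;
}

int main() {
    if (openSensor(-1) != 0)
        std::fprintf(stderr, "error: could not open sensor after %d tries\n",
                     MAX_RETRIES);
    return 0;
}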
• Coding Guidelines
• General coding guidelines provide the programmer with a set of best practices which can be used to make programs easier to read and maintain. Most of the examples use the C language syntax, but the guidelines can be applied to all languages.
• The following are some representative coding guidelines recommended by many software development organizations.
• The following are some representative coding
guidelines recommended by many software
development organizations.
• 1. Line Length: It is considered a good practice to keep the length of source code lines at or below 80
characters. Lines longer than this may not be visible properly on some terminals and tools. Some printers
will truncate lines longer than 80 columns.
• 2. Spacing: The appropriate use of spaces within a line of code can improve readability.
• Example:
• Bad: cost=price+(price*sales_tax)
fprintf(stdout ,"The total cost is %5.2f\n",cost);
• Better: cost = price + ( price * sales_tax )
fprintf (stdout,"The total cost is %5.2f\n",cost);
• 3. The code should be well-documented: As a rule of thumb, there should be at least one comment line, on average, for every three source lines.
• 4. The length of any function should not exceed 10 source lines: A very lengthy function is generally very difficult to understand, as it probably carries out many different functions. For the same reason, lengthy functions are likely to have a disproportionately larger number of bugs.
• 5. Do not use goto statements: Use of goto statements makes a program unstructured and very tough to understand.
• 6. Inline Comments: Inline comments promote readability.
• 7. Error Messages: Error handling is an essential aspect of computer programming. This does not only
include adding the necessary logic to test for and handle errors but also involves making error messages
meaningful.
Programming Style
• Programming style refers to the technique used in writing the source code for a computer program. Most programming styles are designed to help programmers quickly read and understand the program as well as avoid making errors. (Older programming styles also focused on conserving screen space.) A good coding style can overcome many deficiencies of a first programming language, while a poor style can defeat the intent of an excellent language.
• The goal of good programming style is to provide understandable, straightforward, elegant code. The programming style used in a particular program may be derived from the coding standards or code conventions of a company or other computing organization, as well as from the preferences of the actual programmer.
• Some general rules or guidelines in respect of programming style:
• 1. Clarity and simplicity of expression: The program should be designed in such a manner that its objectives are clear.
• 2. Naming: In a program, you are required to name modules, processes, variables, and so on. Care should be taken that the naming style is not cryptic or non-representative.
• For example: a = 3.14 * r * r
area_of_circle = 3.14 * radius * radius;
(the second form makes the intent clear)
• 3. Control constructs: It is desirable that, as far as possible, single-entry, single-exit constructs be used.
• 4. Information hiding: The information held in data structures should be hidden from the rest of the system where possible. Information hiding can decrease the coupling between modules and make the system more maintainable.
• 5. Nesting: Deep nesting of loops and conditions greatly harms the static and dynamic behavior of a program. It also makes the program logic difficult to understand, so it is desirable to avoid deep nesting.
• 6. User-defined types: Make heavy use of user-defined data
types like enum, class, structure, and union. These data types
make your program code easy to write and easy to understand.
• 7. Module size: The module size should be uniform. The size of
the module should not be too big or too small. If the module
size is too large, it is not generally functionally cohesive. If the
module size is too small, it leads to unnecessary overheads.
• 8. Module Interface: A module with a complex interface should
be carefully examined.
• 9. Side-effects: When a module is invoked, it sometimes has the side effect of modifying the program state. Such side effects should be avoided wherever possible.
Structured Programming
• In structured programming, we subdivide the whole program into small modules so that the program becomes easy to understand. The purpose of structured programming is to linearize control flow through a computer program so that the execution sequence follows the sequence in which the code is written. The dynamic structure of the program then resembles the static structure of the program. This enhances the readability, testability, and modifiability of the program. This linear flow of control can be managed by restricting the set of allowed constructs to single-entry, single-exit formats.
• Why we use Structured Programming?
• We use structured programming because it allows the programmer to understand the
program easily. If a program consists of thousands of instructions and an error occurs then it
is complicated to find that error in the whole program, but in structured programming, we
can easily detect the error and then go to that location and correct it. This saves a lot of
time.
• These are the following rules in structured programming:
• Structured Rule One: Code Block
• If the entry conditions are correct, but the exit conditions are wrong, the error must be in
the block. This is not true if the execution is allowed to jump into a block. The error might be
anywhere in the program. Debugging under these circumstances is much harder.
• Rule 1 of Structured Programming: A code block is structured, as shown in the figure. In flow-charting terms, a box with a single entry point and a single exit point is structured. Structured programming is a method of making it evident that the program is correct.
Structure Rule Two: Sequence
• A sequence of blocks is correct if the exit conditions of each block match the entry conditions of the following block. Execution enters each block at the block's entry point and leaves through the block's exit point. The whole series can be regarded as a single block, with an entry point and an exit point.
• Rule 2 of Structured Programming: Two or more code blocks in sequence are structured, as shown in the figure.
Structured Rule Three: Alternation
• If-then-else is frequently called alternation (because there are alternative options). In structured programming, each choice is a code block. If alternation is organized as in the flowchart at right, then there is one entry point (at the top) and one exit point (at the bottom). The structure should be coded so that if the entry conditions are fulfilled, then the exit conditions are satisfied (just like a code block).
• Rule 3 of Structured Programming: The alternation of two code blocks is structured, as shown in the figure.
• Rule 3 of Structured Programming: The alternation of two code
blocks is structured, as shown in the figure.
• An example of an entry condition for an alternation method is:
register $8 includes a signed integer. The exit condition may be:
register $8 includes the absolute value of the signed number. The
branch structure is used to fulfill the exit condition.
• Structured Rule 4: Iteration
• Iteration (while-loop) is organized as at right. It also has one entry point and one exit point. The entry point has conditions that must be satisfied, and the exit point has requirements that will be fulfilled. There are no jumps into the structure from external points of the code.
• Rule 4 of Structured Programming: The iteration
of a code block is structured, as shown in the
figure.
• Structured Rule 5: Nested Structures
• In flowcharting terms, any code block can be expanded into any of the structures. If there is a portion of the flowchart that has a single entry point and a single exit point, it can be summarized as a single code block.
• Rule 5 of Structured Programming: A structure (of any size)
that has a single entry point and a single exit point is equivalent
to a code block. For example, we are designing a program to go
through a list of signed integers calculating the absolute value
of each one. We may (1) first regard the program as one block,
then (2) sketch in the iteration required, and finally (3) put in
the details of the loop body, as shown in the figure.
• The other control structures, such as case, do-until, do-while, and for, are not strictly needed. However, they are sometimes convenient and are usually regarded as part of structured programming. In assembly language, they add little convenience.
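• The small C++ sketch below (not from the slides) works through the Rule 5 example quoted above: it walks a list of signed integers and replaces each with its absolute value using only single-entry, single-exit constructs and no goto.

// Hypothetical sketch of the absolute-value example using structured constructs.
#include <iostream>

int main() {
    int values[] = {-3, 7, -12, 0, 5};
    const int n = sizeof(values) / sizeof(values[0]);

    int i = 0;
    while (i < n) {              // iteration block: one entry, one exit
        if (values[i] < 0) {     // alternation block nested inside it
            values[i] = -values[i];
        }
        ++i;
    }

    for (int v : values)         // sequence: this block starts only after
        std::cout << v << ' ';   // the loop above has exited
    std::cout << '\n';
}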
Software Reliability
• Software reliability means operational reliability. It is described as the ability of a system or component to perform its required functions under stated conditions for a specified period.
• Software reliability is also defined as the probability that a software system fulfills its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the inputs are free of error.
• Software reliability is an essential attribute of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will be hard to bring to a certain level of reliability, system developers tend to push complexity into the software layer, with the rapid growth of system size and the ease of doing so by upgrading the software.
• For example, large next-generation aircraft will have over 1 million source lines of software on-
board; next-generation air traffic control systems will contain between one and two million lines;
the upcoming International Space Station will have over two million lines on-board and over 10
million lines of ground support software; several significant life-critical defense systems will have
over 5 million source lines of software. While the complexity of software is inversely associated
with software reliability, it is directly related to other vital factors in software quality, especially
functionality, capability, etc.
• Software Failure Mechanisms
• The software failure can be classified as:
• Transient failure: These failures only occur with specific inputs.
• Permanent failure: This failure appears on all inputs.
• Recoverable failure: System can recover without operator help.
• Unrecoverable failure: System can recover with operator help only.
• Non-corruption failure: Failure does not corrupt system state or data.
• Corrupting failure: It damages the system state or data.
• Software failures may be due to bugs, ambiguities, oversights or
misinterpretation of the specification that the software is supposed to
satisfy, carelessness or incompetence in writing code, inadequate testing,
incorrect or unexpected usage of the software or other unforeseen
problems.
Hardware Reliability vs. Software Reliability
• Hardware faults are mostly physical faults, whereas software faults are design faults, which are harder to visualize, classify, detect, and correct.
• Hardware components generally fail due to wear and tear, whereas software components fail due to bugs.
• In hardware, design faults may also exist, but physical faults generally dominate. In software, we can hardly find a strict counterpart of "manufacturing" in the sense of the hardware manufacturing process, if the simple action of uploading software modules into place does not count. Therefore, the quality of the software will not change once it is uploaded into storage and starts running.
• Hardware exhibits the failure features shown in the bathtub curve in the following figure: periods A, B, and C stand for the burn-in phase, the useful life phase, and the end-of-life phase, respectively. Software reliability does not show the same features as hardware; a possible curve is obtained if we project software reliability on the same axes, as shown in the following figure.
There are two significant differences between the hardware and software curves:
One difference is that, in the last phase, software does not have an increasing failure rate as hardware does. In this phase, the software is approaching obsolescence; there is no motivation for any upgrades or changes to the software, so the failure rate will not change.
The second difference is that, in the useful-life phase, software will experience a sharp increase in failure rate each time an upgrade is made. The failure rate then levels off gradually, partly because the defects introduced by the upgrade are found and fixed after the update.
The upgrades in the above figure signify feature upgrades, not upgrades for reliability. For feature upgrades, the complexity of the software is likely to increase, since the functionality of the software is enhanced. Even bug fixes may be a reason for more software failures, if the fix induces other defects into the software. For reliability upgrades, a drop in the software failure rate is likely if the objective of the upgrade is enhancing software reliability, such as a redesign or reimplementation of some modules using better engineering approaches, such as the clean-room method.
A partial list of the distinct features of software
compared to hardware is listed below:
• Failure cause: Software defects are primarily designed defects.
• Wear-out: Software does not have an energy-related wear-out
phase. Bugs can arise without warning.
• Repairable system: Periodic restarts can help fix software problems.
• Time dependency and life cycle: Software reliability is not a function of operational time.
• Environmental factors: Do not affect Software reliability, except
it may affect program inputs.
• Reliability prediction: Software reliability cannot be predicted
from any physical basis since it depends entirely on human
factors in design.
• Redundancy: It cannot improve Software reliability if identical
software elements are used.
• Interfaces: Software interfaces are purely conceptual rather than physical.
• Failure rate motivators: Failure rates are usually not predictable from analyses of separate statements.
• Built with standard components: Well-understood and extensively tested standard components help improve maintainability and reliability. But in the software industry, we have not observed this trend. Code reuse has been around for some time, but only to a minimal extent. There are no standard components for software, except for some standardized logic structures.
Software Reliability Measurement Techniques
Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric is to be used depends upon the type of system to which it applies and the requirements of the application domain.
Measuring software reliability is a hard problem because we do not have a good understanding of the nature of software. It is difficult to find a suitable way to measure software reliability, and most of the aspects connected to software reliability cannot be measured directly; even basic software measures have no uniform definition. If we cannot measure reliability directly, something can be measured that reflects the features related to reliability.
The current methods of software reliability
measurement can be divided into four categories:
1. Product Metrics
Product metrics are those which are used to build the artifacts, i.e., requirement specification documents, system design documents, etc. These metrics help in assessing whether the product is good enough, through records of attributes like usability, reliability, maintainability, and portability. These measurements are taken from the actual body of the source code.
• Software size is thought to be reflective of complexity, development effort, and reliability. Lines of Code (LOC), or LOC in thousands (KLOC), is an initial, intuitive approach to measuring software size. The basis of LOC is that program length can be used as a predictor of program characteristics such as effort and ease of maintenance.
• The function point metric is a technique to measure the functionality of proposed software development based on the count of inputs, outputs, master files, inquiries, and interfaces. It measures the functional complexity of the program and is independent of the programming language.
• Test coverage metrics estimate fault content and reliability by performing tests on software products, assuming that software reliability is a function of the portion of software that is successfully verified or tested.
• Complexity is directly linked to software reliability, so representing complexity is essential. Complexity-oriented metrics are a way of determining the complexity of a program's control structure by simplifying the code into a graphical representation. The representative metric is McCabe's cyclomatic complexity, computed from the control-flow graph as V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
• Quality metrics measure the quality at various steps of software product development. A vital quality metric is Defect Removal Efficiency (DRE), the ratio of defects removed before delivery to the total defects (those removed before delivery plus those reported by users after delivery). DRE provides a measure of quality because of the different quality assurance and control activities applied throughout the development process.
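• As a small illustration (not from the slides; the defect counts are invented), the sketch below computes DRE from the two counts named above.

// Hypothetical sketch computing Defect Removal Efficiency (DRE).
#include <iostream>

int main() {
    int removedBeforeDelivery = 190;   // defects found and fixed during development
    int foundAfterDelivery    = 10;    // defects reported by users after release
    double dre = static_cast<double>(removedBeforeDelivery) /
                 (removedBeforeDelivery + foundAfterDelivery);
    std::cout << "DRE = " << dre * 100 << "%\n";   // prints 95% for these counts
}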
2. Project Management Metrics
• Project metrics define project characteristics and execution. If the project is properly managed, this helps us to achieve better products. A relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives. Costs increase when developers use inadequate processes. Higher reliability can be achieved by using better development, risk management, and configuration management processes.
• These metrics are:
• Number of software developers
• Staffing pattern over the life-cycle of the software
• Cost and schedule
• Productivity
3. Process Metrics
• Process metrics quantify useful attributes of the software development process and its environment. They tell whether the process is functioning optimally, as they report on characteristics like cycle time and rework time. The goal of process metrics is to do the right job the first time through the process. The quality of the product is a direct function of the process, so process metrics can be used to estimate, monitor, and improve the reliability and quality of software. Process metrics describe the effectiveness and quality of the processes that produce the software product.
• Examples are:
• The effort required in the process
• Time to produce the product
• Effectiveness of defect removal during development
• Number of defects found during testing
• Maturity of the process
• 4. Fault and Failure Metrics
• A fault is a defect in a program which appears when the programmer makes an error and causes a failure when executed under particular conditions. These metrics are used to determine the failure-free execution of the software.
• To achieve this objective, a number of faults found during testing
and the failures or other problems which are reported by the user
after delivery are collected, summarized, and analyzed. Failure
metrics are based upon customer information regarding faults
found after release of the software. The failure data collected is
therefore used to calculate failure density, Mean Time between
Failures (MTBF), or other parameters to measure or predict
software reliability.
Reliability Metrics
Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric is to be used depends upon the type of system to which it applies and the requirements of the application domain.
Some reliability metrics which can be used to quantify the reliability of the software product are as follows:
1. Mean Time to Failure (MTTF)
• To measure MTTF, we can record the failure data for n failures. Let the failures appear at the time instants t1, t2, ..., tn.
• MTTF can then be calculated as the average of the inter-failure times:
MTTF = [(t2 - t1) + (t3 - t2) + ... + (tn - t(n-1))] / (n - 1)
2. Mean Time to Repair (MTTR)
Once a failure occurs, some time is required to fix the error. MTTR measures the average time it takes to track down the errors causing the failure and to fix them.
3. Mean Time Between Failures (MTBF)
We can combine the MTTF and MTTR metrics to get the MTBF metric:
MTBF = MTTF + MTTR
Thus, an MTBF of 300 denotes that once a failure appears, the next failure is expected to appear only after 300 hours. Here the time measurements are real time and not execution time as in MTTF.
4. Rate of occurrence of failure (ROCOF)
ROCOF is the number of failures appearing in a unit time interval, i.e., the number of unexpected events over a specific period of operation. It is the frequency with which unexpected behavior is likely to appear. A ROCOF of 0.02 means that two failures are likely to occur in every 100 operational time units. It is also called the failure intensity metric.
• 5. Probability of Failure on Demand (POFOD)
• POFOD is described as the probability that the system will fail when a service is requested. It is the number of system failures divided by the number of service requests made.
• A POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential measure for safety-critical systems and is relevant for protection systems where services are demanded occasionally.
• 6. Availability (AVAIL)
• Availability is the probability that the system is available for use at a given time. It
takes into account the repair time and the restart time for the system. An availability
of 0.995 means that in every 1000 time units, the system is likely to be available
for 995 of them. It is the percentage of time that a system is available for use, taking
into account planned and unplanned downtime. If a system is down an average of
four hours out of 100 hours of operation, its AVAIL is 96%.
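A minimal sketch, with invented operational data, of how ROCOF, POFOD, and AVAIL might be computed from such observations:

```python
# Hypothetical observations for one operating period.
operating_hours   = 1_000     # total time in service
failures_observed = 8         # unexpected failures in that period
service_requests  = 5_000     # demands made on the system
failed_requests   = 12        # demands on which the system failed
downtime_hours    = 5.0       # repair plus restart time in the period

rocof = failures_observed / operating_hours        # failures per operational hour
pofod = failed_requests / service_requests         # probability of failure on demand
avail = (operating_hours - downtime_hours) / operating_hours

print(f"ROCOF = {rocof:.3f} failures/hour")        # 0.008
print(f"POFOD = {pofod:.4f}")                      # 0.0024
print(f"AVAIL = {avail:.1%}")                      # 99.5%
```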
Software Metrics for Reliability
• These metrics are used to improve the reliability
of the system by identifying problem areas in the
requirements, the design and code, and the testing.
• The different types of such metrics are:
• Requirements Reliability Metrics
• Requirements denote what features the
software must include. They specify the
functionality that must be contained in the
software. The requirements must be written
such that there is no misconception between the
developer and the client. The requirements
must have a valid structure to avoid the loss
of valuable data.
• The requirements should be thorough and detailed so that the design stage is straightforward. The
requirements should not include inadequate data. Requirements reliability metrics evaluate these
quality factors of the requirements document.
• Design and Code Reliability Metrics
• The quality attributes that exist in design and coding are complexity, size, and modularity. Complex
modules are tough to understand and there is a high probability of bugs occurring in them. Reliability is reduced if
modules have a combination of high complexity and large size or high complexity and small size. These metrics
are also applicable to object-oriented code, but additional metrics are required there to evaluate the quality.
• Testing Reliability Metrics
• These metrics use two methods to evaluate reliability.
• First, they check that the system provides the functions specified in the requirements. Because of
this, the number of bugs due to missing functionality is reduced.
• The second method is analyzing the code, finding the bugs and fixing them. To ensure that the system includes
the functionality specified, test plans are written that include multiple test cases. Each test case is based on
one system state and tests some tasks that are based on an associated set of requirements. The goal of an
effective verification program is to ensure that every element is tested, the implication being that, if the system
passes the test, the requirement's functionality is contained in the delivered system.
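• A minimal sketch of the kind of requirements-coverage check described above, using hypothetical requirement and test-case identifiers (not part of the original slides):

```python
# Hypothetical mapping from each test case to the requirement(s) it exercises.
test_to_requirements = {
    "TC-01": {"REQ-1", "REQ-2"},
    "TC-02": {"REQ-3"},
    "TC-03": {"REQ-2", "REQ-4"},
}
all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4", "REQ-5"}

covered   = set().union(*test_to_requirements.values())
uncovered = all_requirements - covered

print(f"Requirements coverage: {len(covered) / len(all_requirements):.0%}")  # 80%
print(f"Untested requirements: {sorted(uncovered)}")                         # ['REQ-5']
```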
Software Fault Tolerance
• Software fault tolerance is the ability of software to detect and recover
from a fault that is happening, or has already happened, in either the
software or the hardware of the system in which the software is running, so
that it can continue to provide service according to the specification.
• Software fault tolerance is a necessary component for constructing the next
generation of highly available and reliable computing systems, from
embedded systems to data warehouse systems.
• To adequately understand software fault tolerance, it is important to
understand the nature of the problem that software fault tolerance is
supposed to solve.
• Software faults are all design faults. Software manufacturing, the
reproduction of software, is considered to be perfect. Having design faults as
the sole source of the problem makes software very different from almost any
other system in which fault tolerance is the desired property.
Software Fault Tolerance Techniques
1. Recovery Block
• The recovery block method is a simple technique developed by Randell. A recovery block
operates with an adjudicator, which checks the results of different implementations of the
same algorithm. In a system with recovery blocks, the system is broken down into fault-
recoverable blocks.
• The entire system is constructed of these fault-tolerant blocks. Each block contains at least a
primary, a secondary, and exceptional-case code, along with an adjudicator. The adjudicator is
the component that determines the correctness of the various alternates that are tried.
• The adjudicator should be kept reasonably simple to maintain execution speed and aid in
correctness. Upon first entering a unit, the primary alternate is executed and its result checked
by the adjudicator. (There may be N alternates in a unit which the adjudicator may try.) If the
adjudicator determines that the primary alternate failed, it rolls back the state of the system
and tries the secondary alternate.
• If the adjudicator does not accept the results of any of the alternates, it invokes the
exception handler, which indicates that the software could not perform the
requested operation.
• The recovery block technique increases the pressure on the specification to be precise
enough to allow the creation of multiple alternatives that are functionally equivalent. This problem is
discussed further in the context of the N-version software method.
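• A minimal Python sketch of the recovery-block structure described above; the alternates, the acceptance test, and the checkpointed state are hypothetical stand-ins, not part of the original material:

```python
class RecoveryBlockFailure(Exception):
    """Raised when no alternate passes the acceptance test."""

def recovery_block(state, alternates, adjudicator):
    """Try the primary alternate first, rolling back state before each retry."""
    checkpoint = dict(state)                  # save state on entering the block
    for alternate in alternates:              # primary, then secondaries
        try:
            result = alternate(state)
            if adjudicator(result):           # acceptance test
                return result
        except Exception:
            pass                              # a crash is treated like a rejected result
        state.clear()
        state.update(checkpoint)              # roll back before the next alternate
    raise RecoveryBlockFailure("no alternate passed the acceptance test")

# Hypothetical alternates for computing a square root, plus a simple acceptance test.
primary   = lambda s: s["x"] ** 0.5
secondary = lambda s: abs(s["x"]) ** 0.5      # deliberately different fallback
accept    = lambda r: isinstance(r, float) and r >= 0.0

print(recovery_block({"x": 9.0}, [primary, secondary], accept))  # 3.0
```

The exception handler mentioned in the prose corresponds to whatever catches RecoveryBlockFailure in the calling code.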
2. N-Version Software
• The N-version software method attempts to parallel the traditional hardware
fault tolerance concept of N-way redundant hardware. In an N-version
software system, every module is implemented in up to N different ways. Each
variant accomplishes the same function but, ideally, in a different way. Each
version then submits its answer to a voter or decider, which determines the
correct answer and returns it as the result of the module.
• Such a system can hopefully overcome the design faults present in most
software by relying upon the design diversity concept. An essential
characteristic of N-version software is that the system may combine
multiple types of hardware running the different versions of the software.
• N-version software can only succeed and tolerate faults if
the required design diversity is achieved. The dependence on appropriate
specifications in N-version software (and recovery blocks) cannot be
stressed enough.
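• A minimal sketch of the voter idea, with three hypothetical, independently written versions of the same function (not part of the original slides):

```python
from collections import Counter

def n_version_vote(inputs, versions):
    """Run every version on the same inputs and return the majority answer."""
    answers = [version(*inputs) for version in versions]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes <= len(versions) // 2:
        raise RuntimeError("no majority agreement among the versions")
    return answer

# Three hypothetical variants of the same module; the third contains a design fault.
v1 = lambda a, b: a + b
v2 = lambda a, b: sum((a, b))
v3 = lambda a, b: a + b + 1

print(n_version_vote((2, 3), [v1, v2, v3]))  # 5 (the faulty version is outvoted)
```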
3. N-Version Software and Recovery Blocks
• The differences between the recovery block technique and the N-version
technique are not numerous, but they are essential. In traditional recovery
blocks, each alternative would be executed serially until an acceptable solution
is found, as determined by the adjudicator. The recovery block method has
been extended to include concurrent execution of the various alternatives.
• The N-version techniques have always been designed to be implemented
using N-way hardware running concurrently. In a serial-retry system, the cost in time of
trying multiple alternatives may be too high, especially for a real-time
system. Conversely, concurrent systems require the expense of N-way hardware
and a communications network to connect them.
• The recovery block technique requires that a specific adjudicator be built for each
module; in the N-version method, a single decider may be used. The
recovery block technique, assuming that the programmer can create a
sufficiently simple adjudicator, will create a system that is difficult to
drive into an incorrect state.
Software Reliability Models
• A software reliability model indicates the form of a
random process that describes the behaviour of software
failures with respect to time.
• Software reliability models have appeared as people try
to understand the features of how and why software
fails, and attempt to quantify software reliability.
• Over 200 models have been established since the early
1970s, but how to quantify software reliability remains
largely unsolved.
• There is no individual model that can be used in all
situations. No model is complete or even representative.
• Most software reliability models contain the following
parts:
• Assumptions
• Factors
• A mathematical function that relates
reliability to the factors. The
mathematical function is generally a higher-
order exponential or logarithmic function.
Software Reliability Modeling Techniques
• There are two kinds of modelling methods: prediction
models and estimation models. Both are based on
observing and accumulating failure data and
analyzing it with statistical inference.
Differences between software reliability prediction models and
software reliability estimation models
• Data reference: Prediction models use historical information;
estimation models use data from the current software development effort.
• When used in the development cycle: Prediction models are usually applied
before the development or test phases, and can be used as early as the concept
phase; estimation models are usually applied later in the life cycle, after some
failure data have been collected, and are not typically used in the concept or
development phases.
• Time frame: Prediction models predict reliability at some future time;
estimation models estimate reliability at either the present time or some future time.
Reliability Models
• A reliability growth model is a mathematical model of software reliability which
predicts how software reliability should improve over time as errors are
discovered and repaired. These models help the manager in deciding how
much effort should be devoted to testing. The objective of the project
manager is to test and debug the system until the required level of reliability
is reached.
• A number of such reliability growth models have been proposed; one
commonly cited example is sketched below.
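• The widely cited Goel-Okumoto growth model assumes that the expected cumulative number of failures after t hours of testing is m(t) = a(1 - e^(-bt)), where a is the total number of faults expected and b is the fault-detection rate. The short sketch below uses invented parameter values:

```python
import math

def expected_failures(t_hours: float, a: float, b: float) -> float:
    """Goel-Okumoto mean value function: expected cumulative failures by time t."""
    return a * (1.0 - math.exp(-b * t_hours))

# Hypothetical fitted parameters: a = 120 expected total faults, b = 0.01 per hour.
a, b = 120.0, 0.01
for t in (100, 300, 600):
    remaining = a - expected_failures(t, a, b)
    print(f"after {t:>3} h of testing: about {remaining:.0f} faults expected to remain")
```

A project manager can use such a fitted curve to estimate how much additional testing is needed before the expected number of remaining faults falls below an agreed threshold.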
Software Maintenance
• Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to
modify and update a software application after delivery in order to correct errors and improve
performance. Software is a model of the real world; when the real world changes, the
software requires alteration wherever possible.
• Software maintenance is an inclusive activity that includes error correction, enhancement of
capabilities, deletion of obsolete capabilities, and optimization.
• Need for Maintenance
• Software maintenance is needed in order to:
• Correct errors
• Adapt to changes in user requirements over time
• Accommodate changing hardware and software requirements
• Improve system efficiency
• Optimize the code to run faster
• Modify components
• Reduce unwanted side effects
• Thus, maintenance is required to ensure that the system continues to satisfy user
requirements.
Types of Software Maintenance
• 1. Corrective Maintenance
• Corrective maintenance aims to correct any remaining errors, regardless of where
they occur: in the specifications, design, coding, testing, or documentation.
• 2. Adaptive Maintenance
• It involves modifying the software to match changes in its ever-changing
environment.
• 3. Preventive Maintenance
• It is the process by which we prevent our system from becoming obsolete. It involves
the concepts of re-engineering and reverse engineering, in which an old system built with
old technology is re-engineered using new technology. This maintenance
prevents the system from dying out.
• 4. Perfective Maintenance
• It means improving processing efficiency or performance, or restructuring the
software to enhance changeability. This may include enhancement of existing
system functionality, improvement in computational efficiency, etc.
Causes of Software Maintenance Problems
• Lack of Traceability
• Codes are rarely traceable to the requirements and design specifications.
• It makes it very difficult for a programmer to detect and correct a critical defect affecting
customer operations.
• Like a detective, the programmer pores over the program looking for clues.
• Life Cycle documents are not always produced even as part of a development project.
• Lack of Code Comments
• Much software system code lacks adequate comments, and sparse comments are often
of little help when maintaining the code.
• Obsolete Legacy Systems
• In most countries, the legacy systems that provide the backbone of the
nation's critical industries (e.g., telecommunications, medical, transportation, and utility services)
were not designed with maintenance in mind.
• They were not expected to last for a quarter of a century or more!
• As a consequence, the code supporting these systems is devoid of traceability to the
requirements and compliance with design and programming standards, and often includes dead,
extra, and uncommented code, all of which make the maintenance task next to impossible.
Software Maintenance Process
• Program Understanding
The first step consists of analyzing the program in order to understand it.
• Generating a Particular Maintenance Proposal
• The second step consists of creating a particular maintenance proposal to accomplish
the implementation of the maintenance goals.
• Ripple Effect
• The third step consists of accounting for all of the ripple effects that result from
program modifications.
• Modified Program Testing
• The fourth step consists of testing the modified program to ensure that the revised
application has at least the same reliability level as before.
• Maintainability
• Each of these four steps and their associated software quality attributes are critical to the
maintenance process. All of these methods must be combined to achieve maintainability.
Software Maintenance Cost Factors
• There are two types of cost factors involved in
software maintenance.
• These are:
• Non-Technical Factors
• Technical Factors
• Non-Technical Factors
• 1. Application Domain
• If the application domain of the program is defined and well understood, the system
requirements may be definitive and maintenance due to changing needs minimized.
• If the application is entirely new, it is likely that the initial requirements will be modified
frequently as users gain experience with the system.
• 2. Staff Stability
• It is easier for the original writer of a program to understand and change it
than for some other person who must understand the program by
studying its reports and code listings.
• If the developers of a system also maintain that system, maintenance costs will
be reduced.
• In practice, the nature of the programming profession is such that people change
jobs regularly. It is unusual for one person to develop and maintain an
application throughout its useful life.
• 3. Program Lifetime
• Programs are scrapped when the functions they perform become obsolete, when their
original hardware is replaced, or when conversion costs exceed rewriting costs.
• 4. Dependence on External Environment
• If an application is dependent on its external environment, it must be
modified as that environment changes.
• For example:
• Changes in a taxation system might require payroll, accounting, and stock
control programs to be modified.
• Taxation changes are fairly frequent, and maintenance costs for these
programs are related to the frequency of those changes.
• A program used in mathematical applications does not typically depend
on humans changing the assumptions on which the program is based.
• 5. Hardware Stability
• If an application is designed to operate on a specific
hardware configuration and that configuration does not
change during the program's lifetime, no maintenance
costs due to hardware changes will be incurred.
• However, hardware develops so rapidly that this situation
is rare.
• The application must usually be changed to use new hardware that
replaces obsolete equipment.
• Technical Factors
• Technical Factors include the following:
• Module Independence
• It should be possible to change one program unit of a system without
affecting any other unit.
• Programming Language
• Programs written in a high-level programming language are generally
easier to understand than programs written in a low-level language.
• Programming Style
• The way in which a program is written contributes to its
understandability and hence the ease with which it can be modified.
• Program Validation and Testing
• Generally, the more time and effort spent on design validation and
program testing, the fewer the bugs in the program and, consequently,
the lower the maintenance costs resulting from bug correction.
• Maintenance costs due to bug correction are governed by the type of
fault to be repaired.
• Coding errors are generally relatively cheap to correct; design errors are
more expensive, as they may require the rewriting of one or more
program units.
• Bugs in the software requirements are usually the most expensive to
correct because of the drastic redesign that is generally involved.
• Documentation
• If a program is supported by clear, complete
yet concise documentation, the task of
understanding the application can be
relatively straightforward.
• Program maintenance costs tend to be lower
for well-documented systems than for systems
supplied with inadequate or incomplete
documentation.
Configuration Management Techniques
• One of the essential costs of maintenance is
keeping track of all system documents and
ensuring that these are kept consistent.
• Effective configuration management can help
control these costs.