Software Engg PCC CS-601
• Size: Software is becoming larger and more complex as the expectations placed on it grow. For example, the amount of code in consumer products is doubling every couple of years.
• Quality: Many software products have poor quality, i.e., products show defects after being put into use because of ineffective testing techniques. For example, software testing typically finds 25 errors per 1000 lines of code.
• Cost: Software development is costly, both in the time taken to develop it and in the money involved. For example, development of the FAA's Advanced Automation System cost over $700 per line of code.
• Delayed Delivery: Serious schedule overruns are common. Very often the software
takes longer than the estimated time to develop, which in turn leads to cost shooting
up. For example, one in four large-scale development projects are never completed.
Software
Three components of software (to be built in a software engineering environment):
Program vs. Software
• The development team must determine a suitable life cycle model for a particular project and then adhere to it.
• Without using an exact life cycle model, the development of a software product would not proceed in a systematic and disciplined manner.
• When a team is developing a software product, there must be a clear understanding among team members about what to do and when. Otherwise, it would lead to chaos and project failure.
• This problem can be illustrated with an example. Suppose a software development problem is divided into various parts and the parts are assigned to the team members. From then on, suppose the team members are allowed the freedom to develop the parts assigned to them in whatever way they like. It is possible that one member might start writing the code for his part, another might choose to prepare the test documents first, and some other engineer might begin with the design of the part assigned to him. This would be a sure recipe for project failure. A software life cycle model describes the entry and exit criteria for each phase. A phase can begin only if its entry criteria have been fulfilled. So without a software life cycle model, the entry and exit criteria for a phase cannot be recognized. Without software life cycle models, it becomes tough for software project managers to monitor the progress of the project.
The stages of SDLC are as follows:
Stage1: Planning and requirement analysis
• Requirement Analysis is the most important and necessary stage in SDLC.
• The senior members of the team perform it with inputs from all the stakeholders and
domain experts or SMEs (A Subject Matter Expert is an authority in a particular technology)
in the industry.
• Planning for the quality assurance requirements and identifications of the risks associated
with the projects is also done at this stage.
• The business analyst and project organizer set up a meeting with the client to gather all the information, such as what the customer wants to build, who the end users will be, and what the objective of the product is. Before creating a product, a core understanding or knowledge of the product is very necessary.
• For example, a client wants to have an application that handles money transactions. In this case, the requirements have to be precise: what kind of operations will be done, how they will be done, in which currency they will be done, etc.
• Once the required functions are known, an analysis is carried out to audit the feasibility of developing the product. In case of any ambiguity, a call is set up for further discussion.
• Once the requirement is understood, the SRS (Software Requirement Specification) document is created. The developers should thoroughly follow this document, and it should also be reviewed by the customer for future reference.
• Stage2: Defining Requirements
• Once the requirement analysis is done, the next stage is to clearly represent and document the software requirements and get them approved by the project stakeholders.
• This is accomplished through "SRS"- Software
Requirement Specification document which
contains all the product requirements to be
constructed and developed during the project
life cycle.
• Stage3: Designing the Software
• The next phase is to bring together all the knowledge of requirements and analysis and work out the design of the software project. This phase uses the outputs of the previous two stages: the inputs from the customer and the requirements gathered.
• Stage 4: Developing the project
• In this phase of SDLC, the actual development begins and the product is built. The design is implemented by writing code. Developers have to follow the coding guidelines defined by their management, and programming tools like compilers, interpreters, debuggers, etc. are used to develop and implement the code.
• Stage 5: Testing
• During this stage, unit testing, integration
testing, system testing, acceptance testing are
done.
• Stage 6: Deployment
• Once the software is certified and no bugs or errors are reported, then it is deployed.
• Then based on the assessment, the software
may be released as it is or with suggested
enhancement.
• After the software is deployed, then its
maintenance begins.
• Stage 7: Maintenance
• Once the client starts using the developed system, real issues come up and need to be resolved from time to time.
• This process of taking care of the developed product is known as maintenance.
SDLC Models
• Winston Royce introduced the Waterfall Model in 1970. This model has five
phases: requirements analysis and specification; design; implementation and unit testing; integration and system testing; and operation and maintenance. The
steps always follow in this order and do not overlap. The developer must
complete every phase before the next phase begins. This model is named
"Waterfall Model", because its diagrammatic representation resembles a
cascade of waterfalls.
• 1. Requirements analysis and specification phase: The aim of this phase is to understand the exact requirements of the customer and to document them properly. Both the customer and the software developer work together to document all the functional, performance, and interfacing requirements (how the user and the software will interact) of the software. It describes the "what" of the system to be produced and not the "how". In this phase, a large document called the Software Requirement Specification (SRS) document is created, which contains a detailed description of what the system will do in plain language.
• 2. Design Phase: This phase aims to transform the requirements
gathered in the SRS into a suitable form which permits further
coding in a programming language. It defines the overall software
architecture together with high level and detailed design. All this
work is documented as a Software Design Document (SDD).
• 3. Implementation and unit testing: During this phase, design is
implemented. If the SDD is complete, the implementation or coding
phase proceeds smoothly, because all the information needed by
software developers is contained in the SDD.
• During testing, the code is thoroughly examined and modified. Small
modules are tested in isolation initially. After that these modules are
tested by writing some code to check the interaction between these
modules and the flow of intermediate output.
• 4. Integration and System Testing: This phase is highly
crucial as the quality of the end product is determined by
the effectiveness of the testing carried out. The better
output will lead to satisfied customers, lower
maintenance costs, and accurate results. Unit testing
determines the efficiency of individual modules. However,
in this phase, the modules are also tested for their
interactions with each other and with the system.
• 5. Operation and maintenance phase: Maintenance comprises the tasks performed after the software has been delivered to the customer, installed, and made operational.
When to use SDLC Waterfall Model?
Some Circumstances where the use of the Waterfall model is most suited are:
• When the requirements are stable and do not change regularly.
• The project is short.
• The situation is calm (the software is not complex in terms of functionality).
• Where the tools and technology used are consistent and not changing.
• When resources (software and hardware) are well prepared and available for use.
Advantages of Spiral Model:
• Risk Handling: In projects with many unknown risks that surface as the development proceeds, the Spiral Model is the best development model to follow, because risk analysis and risk handling are done at every phase.
• Good for large projects: It is recommended to use the Spiral Model in
large and complex projects.
• Flexibility in Requirements: Change requests in the Requirements at
later phase can be incorporated accurately by using this model.
• Customer Satisfaction: Customers can see the development of the product at an early phase of software development and thus become familiar with the system by using it before the complete product is finished.
Disadvantages of Spiral Model:
• Complex: The Spiral Model is much more complex than other SDLC
models.
• Expensive: Spiral Model is not suitable for small projects as it is expensive.
• Too much dependence on Risk Analysis: The successful completion of
the project is very much dependent on Risk Analysis. Without very highly
experienced experts, it is going to be a failure to develop a project using
this model.
• Difficulty in time management: As the number of phases is unknown at
the start of the project, so time estimation is very difficult.
Choosing the right Software development life cycle model
• When you define the criteria and the arguments you need to discuss with
the team, you will need to have a decision matrix and give each criterion
a defined weight and score for each option. After analyzing the results,
you should document this decision in the project artifacts and share it
with the related stakeholders.
STEP 5: Optimize
• You can always optimize (add to or change) the SDLC during project execution. If you notice that upcoming changes do not fit the selected SDLC, it is okay to adapt and cope with them. You can even make your own SDLC model that is optimal for your organization or the type of projects you are involved in.
V-Model
Advantages (Pros) of V-Model:
• Easy to understand.
• Testing Methods like planning, test designing happens well before coding.
• This saves a lot of time. Hence a higher chance of success over the waterfall
model.
• Avoids the downward flow of the defects.
• Works well for small projects where requirements are easily understood.
Disadvantage (Cons) of V-Model:
• Very rigid and least flexible.
• Not a good for a complex project.
• Software is developed during the implementation stage, so no early
prototypes of the software are produced.
• If any changes happen midway, then the test documents along with the requirement documents have to be updated.
Incremental Model
•Scrum
•Crystal
•Dynamic Software Development Method (DSDM)
•Feature Driven Development (FDD)
•Lean Software Development
•eXtreme Programming (XP)
When to use the Agile Model?
• 1. Requirement gathering & analysis: In this phase, requirements are gathered from customers and checked by an analyst to see whether they can be fulfilled or not. The analyst also checks whether the needs can be achieved within budget. After all of this, the software team moves to the next phase.
• 2. Design: In the design phase, the team designs the software using different diagrams such as data flow diagrams, activity diagrams, class diagrams, state transition diagrams, etc.
• 3. Implementation: In the implementation, requirements are written in the coding language
and transformed into computer programs which are called Software.
• 4. Testing: After completing the coding phase, software testing starts using different test
methods. There are many test methods, but the most common are white box, black box, and
grey box test methods.
• 5. Deployment: After completing all the phases, software is deployed to its work
environment.
• 6. Review: In this phase, after product deployment, a review is performed to check the behavior and validity of the developed product. If any errors are found, the process starts again from requirement gathering.
• 7. Maintenance: In the maintenance phase, after deployment of the software in the working environment, some bugs or errors may surface or new updates may be required. Maintenance involves debugging and adding new options.
When to use the Iterative Model?
• A project manager is the person who has the overall responsibility for the planning, design, execution, monitoring, controlling and closure of a project. A project manager plays an essential role in the success of projects.
• A project manager is responsible for making decisions, in both large and small projects. The project manager works to manage risk and minimize uncertainty. Every decision the project manager makes must directly benefit the project.
• The Experienced team leaves the project, and the new team
joins it.
• Changes in requirement.
• Change in technologies and the environment.
• 7. Project Communication
Management: Communication is an essential factor
in the success of the project. It is a bridge between
client, organization, team members and as well as
other stakeholders of the project such as hardware
suppliers.
• From the planning to closure, communication plays
a vital role. In all the phases, communication must
be clear and understood. Miscommunication can
create a big blunder in the project.
• 8. Project Configuration Management: Configuration management is about controlling the changes in software, such as changes to the requirements, design, and development of the product.
• The Primary goal is to increase productivity with fewer errors.
• Internal metrics: Internal metrics are the metrics that measure properties of greater importance to the software developer. For example, the Lines of Code (LOC) measure.
• External metrics: External metrics are the metrics used for measuring properties that are of greater
importance to the user, e.g., portability (Being able to move software from one machine platform
to another. It refers to system software or application software that can be recompiled for a
different platform or to software that is available for two or more different platforms), reliability
(Software reliability is the probability of failure-free operation of a computer program for a
specified period in a specified environment), functionality (Functionality is the ability of the system
to do the work for which it was intended), usability (usability is the degree to which a software can
be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and
satisfaction in a quantified context of use), etc.
• Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource metrics.
For example, cost per FP where FP stands for Function Point Metric.
• Project metrics: Project metrics are the metrics used by the project manager to check the project's
progress. Data from the past projects are used to collect various metrics, like time and cost; these
estimates are used as a base of new software. Note that as the project proceeds, the project
manager will check its progress from time-to-time. Also understand that these metrics are used to
decrease the development costs, time, efforts and risks. The project quality can also be improved. As
quality improves, the number of errors and time, as well as cost required, is also reduced.
Advantage of Software Metrics
LOC Metrics
• It is one of the earliest and simplest metrics for calculating the size of a computer program. It is generally used for calculating and comparing the productivity of programmers.
These metrics are derived by normalizing the
quality and productivity measures by
considering the size of the product as a metric.
Following are the points regarding LOC measures:
• Errors/KLOC.
• $/ KLOC.
• Pages of documentation/KLOC.
• Errors/PM.
• Productivity = KLOC/PM (effort is measured in
person-months).
• $/ Page of documentation.
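As an illustration, a minimal Python sketch of how LOC and the KLOC-based productivity measure might be computed; the file name, the "#" comment convention, and the effort figure are assumptions made only for this example:

# Minimal sketch: count physical lines of code and derive a KLOC-based
# productivity figure. File name and comment convention are illustrative
# assumptions, not part of the course material.
def count_loc(path):
    """Count non-blank, non-comment lines in a source file."""
    loc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

def productivity(kloc, person_months):
    """Productivity = KLOC / PM (effort measured in person-months)."""
    return kloc / person_months

if __name__ == "__main__":
    kloc = count_loc("module.py") / 1000      # hypothetical source file
    print("Size (KLOC):", kloc)
    print("Productivity (KLOC/PM):", productivity(kloc, person_months=2.0))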
Advantages of LOC
• Simple to measure
Disadvantage of LOC
• It is defined on the code. For example, it cannot measure
the size of the specification.
• It characterizes only one specific view of size, namely length; it takes no account of functionality or complexity.
• Bad software design may cause excessive lines of code.
• It is language dependent.
• Users cannot easily understand it
Halstead's Software Metrics
Potential Minimum Volume (V*) – The potential minimum volume V* is defined as the volume of the most
succinct (briefly and clearly expressed) program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters
Program Difficulty
The difficulty level or error-proneness (D) of the program is proportional to the number of the unique operator in
the program.
D = (n1/2) * (N2/n2)
Programming Effort (E)
The unit of measurement of E is elementary mental
discriminations.
E = V / L = D * V
Estimated Program Length
According to Halstead, The first Hypothesis of software science
is that the length of a well-structured program is a function
only of the number of unique operators and operands.
N = N1 + N2
And the estimated program length is denoted by N^:
N^ = n1*log2(n1) + n2*log2(n2)
Potential Minimum Volume
• The potential minimum volume V* is defined as the volume of the shortest program in which a problem can be coded.
V* = (2 + n2*) * log2 (2 + n2*)
Here, n2* is the count of unique input and output parameters (special kind of
variable used in a subroutine)
Size of Vocabulary (n)
The size of the vocabulary of a program, which consists of the number of
unique tokens used to build a program, is defined as:
n=n1+n2
where
n=vocabulary of a program
n1=number of unique operators
n2=number of unique operands
Counting rules for C language
Operator    Occurrences        Operand    Occurrences
int         4                  SORT       1
()          5                  x          7
,           4                  n          3
[]          7                  i          8
if          2                  j          7
<           2                  save       3
;           11                 im1        3
for         2                  2          2
=           6                  1          3
-           1                  0          1
<=          2                  -          -
++          2                  -          -
return      2                  -          -
{}          3                  -          -
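Using the operator and operand counts in the table above (a hypothetical C sorting routine), a small Python sketch shows how the Halstead measures follow from those counts:

# Halstead measures computed from the operator/operand counts tabulated above.
import math

operators = {"int": 4, "()": 5, ",": 4, "[]": 7, "if": 2, "<": 2, ";": 11,
             "for": 2, "=": 6, "-": 1, "<=": 2, "++": 2, "return": 2, "{}": 3}
operands = {"SORT": 1, "x": 7, "n": 3, "i": 8, "j": 7, "save": 3,
            "im1": 3, "2": 2, "1": 3, "0": 1}

n1, n2 = len(operators), len(operands)                    # unique operators, operands
N1, N2 = sum(operators.values()), sum(operands.values())  # total occurrences

n = n1 + n2                                      # vocabulary
N = N1 + N2                                      # observed program length
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)  # estimated length
V = N * math.log2(n)                             # volume
D = (n1 / 2) * (N2 / n2)                         # difficulty
E = D * V                                        # effort (elementary mental discriminations)

print(f"n1={n1}, n2={n2}, N1={N1}, N2={N2}")
print(f"n={n}, N={N}, N^={N_hat:.2f}, V={V:.2f}, D={D:.2f}, E={E:.2f}")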
Measurement Parameter                      Simple    Average    Complex
1. Number of external inputs (EI)             3          4          6
2. Number of external outputs (EO)            4          5          7
3. Number of external inquiries (EQ)          3          4          6
4. Number of internal files (ILF)             7         10         15
5. Number of external interfaces (EIF)        5          7         10
• The functional complexities are multiplied
with the corresponding weights against each
function, and the values are added up to
determine the UFP (Unadjusted Function
Point) of the subsystem.
• Here that weighing factor will be simple, average, or complex for a measurement parameter type.
The Function Point (FP) is thus calculated with the following formula.
• FP = Count-total * [0.65 + 0.01 * ∑(fi)]
  = Count-total * CAF
where Count-total is obtained from the above table,
• CAF = [0.65 + 0.01 * ∑(fi)],
• and ∑(fi) is the sum of the ratings of all 14 questions (where i ranges from 1 to 14), which gives the complexity adjustment factor (CAF). Usually, a student is provided with the value of ∑(fi).
• Also note that ∑(fi) ranges from 0 to 70, i.e.,
• 0 <= ∑(fi) <=70
• and CAF ranges from 0.65 to 1.35 because
• When ∑(fi) = 0 then CAF = 0.65
• When ∑(fi) = 70 then CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35
• Based on the FP measure of software many other metrics can be computed:
• Errors/FP
• $/FP.
• Defects/FP
• Pages of documentation/FP
• Errors/PM.
• Productivity = FP/PM (effort is measured in person-months).
• $/Page of Documentation.
• 8. LOCs of an application can be estimated from FPs. That is, they are
interconvertible. This process is known as backfiring. For example, 1 FP is equal to
about 100 lines of COBOL code.
• 9. FP metrics is used mostly for measuring the size of Management Information
System (MIS) software.
• 10. But the function points obtained above are unadjusted function points (UFPs).
These (UFPs) of a subsystem are further adjusted by considering some more
General System Characteristics (GSCs). It is a set of 14 GSCs that need to be
considered. The procedure for adjusting UFPs is as follows:
• (a) Degree of Influence (DI) for each of these 14 GSCs is assessed on a scale of 0 to 5. (b) If a particular GSC has no influence, then its weight is taken as 0, and if it has a strong influence then its weight is 5.
• The score of all 14 GSCs is totaled to determine Total Degree of Influence (TDI).
• Then Value Adjustment Factor (VAF) is computed from TDI by using the
formula: VAF = (TDI * 0.01) + 0.65
• Remember that the value of VAF lies within 0.65 to 1.35 because
• When TDI = 0, VAF = 0.65
• When TDI = 70, VAF = 1.35
• VAF is then multiplied with the UFP to get the final FP count: FP = VAF * UFP
• Example: Compute the function point, productivity, documentation, cost
per function for the following data:
• Number of user inputs = 24
• Number of user outputs = 46
• Number of inquiries = 8
• Number of files = 4
• Number of external interfaces = 2
• Effort = 36.9 PM
• Technical documents = 265 pages
• User documents = 122 pages
• Cost = $7744/ month
• Various processing complexity factors are: 4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4,
5.
Solution:
Measurement Parameter                      Count     Weight     Total
1. Number of external inputs (EI)            24    *    4    =    96
2. Number of external outputs (EO)           46    *    4    =   184
3. Number of external inquiries (EQ)          8    *    6    =    48
4. Number of internal files (ILF)             4    *   10    =    40
5. Number of external interfaces (EIF)        2    *    5    =    10
Count-total →                                                    378
• So sum of all fi (i ← 1 to 14) = 4 + 1 + 0 + 3 + 3 + 5 + 4 + 4 +
3 + 3 + 2 + 2 + 4 + 5 = 43
• FP = Count-total * [0.65 + 0.01 *∑(fi)]
= 378 * [0.65 + 0.01 * 43]
= 378 * [0.65 + 0.43]
= 378 * 1.08 = 408
• Total pages of documentation = technical document + user
document
= 265 + 122 = 387pages
• Documentation = Pages of documentation/FP
= 387/408 = 0.94
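The same worked example expressed as a short Python sketch (the weights are the ones used in the solution table above; the last two outputs show one reasonable way to derive the productivity and cost-per-FP figures asked for in the problem):

# Function point computation for the worked example above.
counts_and_weights = {         # parameter: (count, weight used in the solution)
    "EI":  (24, 4),
    "EO":  (46, 4),
    "EQ":  (8, 6),
    "ILF": (4, 10),
    "EIF": (2, 5),
}
fi = [4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5]   # 14 processing complexity factors

count_total = sum(c * w for c, w in counts_and_weights.values())   # 378
caf = 0.65 + 0.01 * sum(fi)                                        # 1.08
fp = count_total * caf                                             # ~408

effort_pm = 36.9
pages = 265 + 122
cost_per_month = 7744

print("FP:", round(fp))
print("Productivity (FP/PM):", round(fp / effort_pm, 2))
print("Documentation (pages/FP):", round(pages / fp, 2))
print("Cost per FP ($):", round(cost_per_month * effort_pm / fp, 2))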
Differentiate between FP and LOC
Feature point counting weights (used in the feature point formula below):
Measurement Parameter                      Count        Weight
1. Number of external inputs (EI)            -      *      4
2. Number of external outputs (EO)           -      *      5
3. Number of external inquiries (EQ)         -      *      4
4. Number of internal files (ILF)            -      *      7
5. Number of external interfaces (EIF)       -      *      7
6. Algorithms used                           -      *      3
Count-total →                                -
The feature point is thus calculated with the following formula:
Feature Point = Count-total * [0.65 + 0.01 * ∑(fi)]
[Table: comparison of estimation approaches — columns: Software, Planner, Program size / No. of software developers on team, Model constants, Parameter coefficients, Estimated project effort, Estimated project duration.]
• That is why an important set of metrics captures the amount of data input to, processed in, and output from the software. A count of these data structures is called Data Structure Metrics. These metrics concentrate on the variables (and given constants) within each module and ignore the input-output dependencies.
• There are several Data Structure metrics used to compute the effort and time required to complete the project. These metrics are:
• The Amount of Data.
• The Usage of data within a Module.
• Program weakness.
• The sharing of Data among Modules.
1. The Amount of Data: To measure the amount of Data,
there are further many different metrics, and these are:
• Number of variables (VARS): In this metric, the number of variables used in the program is counted.
• Number of operands (η2): In this metric, the number of operands used in the program is counted.
η2 = VARS + Constants + Labels
• Total number of occurrences of the variables (N2): In this metric, the total number of occurrences of the variables is computed.
2. The Usage of data within a Module: To
measure this metric, the average numbers of
live variables are computed. A variable is live
from its first to its last references within the
procedure.
• For example, if we want to characterize the average number of live variables for a program having m modules, we can use the equation:
(LV)program = (∑ (LV)i, i = 1…m) / m
• where (LV)i is the average live variable metric computed for the ith module. A similar equation can be used to compute the average span size (SP) for a program of n spans:
SP = (∑ SPi, i = 1…n) / n
• 3. Program weakness: Program weakness depends on the weakness of its modules. If modules are weak (less cohesive), the effort and time required to complete the project increase.
• Module Weakness (WM) = LV * γ
• A program is normally a combination of
various modules; hence, program weakness
can be a useful measure and is defined as:
• WP = (∑ WMi, i = 1…m) / m
where
• WMi: Weakness of the ith module
• WP: Weakness of the program
• m: No of modules in the program
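A tiny Python sketch of the module and program weakness computation; the per-module LV and γ values are illustrative assumptions only:

# Module weakness WM = LV * gamma; program weakness WP = (sum of WM_i) / m.
# The (LV, gamma) pairs below are made-up illustrative values.
modules = [(3.2, 4.0), (2.5, 6.0), (4.1, 3.5)]   # (average live variables, average span)

wm = [lv * gamma for lv, gamma in modules]       # weakness of each module
wp = sum(wm) / len(wm)                           # weakness of the program

print("Module weaknesses:", [round(x, 2) for x in wm])
print("Program weakness WP:", round(wp, 2))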
• 4. The Sharing of Data among Modules: As the data sharing between modules increases (higher coupling), the number of parameters passed between modules also increases. As a result, more effort and time are required to complete the project. So sharing of data among modules is an important metric for calculating effort and time.
Information Flow Metrics
• The other set of metrics we would like to consider is known as Information Flow Metrics. The basis of information flow metrics is the concept that a system consists of components, and it is the work that these components do and how they are fitted together that determines the complexity of the system. The following are the working definitions used in information flow:
Component: Any element identified by decomposing a (software) system into its constituent parts.
Cohesion: The degree to which a component performs a single function.
Coupling: The term used to describe the degree of linkage between one
component to others in the same system.
• Information Flow metrics deal with this type of complexity by observing the flow of information among system components or modules. This metric was proposed by Henry and Kafura, so it is also known as Henry and Kafura's metric.
• This metric is based on the measurement of the information flow among system modules. It is sensitive to the complexity due to interconnection among system components. In this measure, the complexity of a software module is defined to be the sum of the complexities of the procedures included in the module. A procedure contributes to the complexity due to the following two factors.
• The complexity of the procedure code itself.
• The complexity due to the procedure's connections to its environment. The effect
of the first factor has been included through LOC (Line Of Code) measure. For the
quantification of the second factor, Henry and Kafura have defined two terms,
namely FAN-IN and FAN-OUT.
FAN-IN: The FAN-IN of a procedure is the number of local flows into that procedure plus the number of data structures from which this procedure retrieves information.
FAN-OUT: The FAN-OUT of a procedure is the number of local flows from that procedure plus the number of data structures that the procedure updates.
• Procedure Complexity = Length * (FAN-IN * FAN-OUT)^2
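A brief Python sketch of the Henry and Kafura measure; the procedure names, lengths, and fan values are illustrative assumptions:

# Henry and Kafura information flow complexity:
#   procedure complexity = length * (FAN-IN * FAN-OUT) ** 2
# Module complexity is taken here as the sum over its procedures.
procedures = {
    "read_input":   (40, 1, 3),   # (length in LOC, FAN-IN, FAN-OUT) -- assumed values
    "update_index": (25, 2, 2),
    "write_report": (60, 3, 1),
}

module_complexity = 0
for name, (length, fan_in, fan_out) in procedures.items():
    complexity = length * (fan_in * fan_out) ** 2
    module_complexity += complexity
    print(f"{name}: {complexity}")

print("Module complexity:", module_complexity)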
Cyclomatic Complexity
• Many CASE tools (Computer Aided Software Engineering tools) exist for measuring
software. They are either open source or are paid tools. Some of them are listed below:
• Analyst4j tool is based on the Eclipse platform and available as a stand-alone Rich Client
Application or as an Eclipse IDE plug-in. It features search, metrics, analyzing quality, and
report generation for Java programs.
• CCCC is an open source command-line tool. It analyzes C++ and Java files and generates
reports on various metrics, including Lines of Code and metrics proposed by Chidamber &
Kemerer and Henry & Kafura.
• Chidamber & Kemerer Java Metrics is an open source command-line tool. It calculates the
C&K object-oriented metrics by processing the byte-code of compiled Java.
• Dependency Finder is an open source. It is a suite of tools for analyzing compiled Java
code. Its core is a dependency analysis application that extracts dependency graphs and
mines them for useful information. This application comes as a command-line tool, a
Swing-based application, and a web application.
• Eclipse Metrics Plug-in 1.3.6 by Frank Sauer is an open source metrics calculation and
dependency analyzer plugin for the Eclipse IDE. It measures various metrics and detects
cycles in package and type dependencies.
• Eclipse Metrics Plug-in 3.4 by Lance Walton is open
source. It calculates various metrics during build cycles
and warns, via the problems view, of metrics 'range
violations'.
• OOMeter is an experimental software metrics tool
developed by Alghamdi. It accepts Java/C# source code
and UML models in XMI and calculates various metrics.
• Semmle is an Eclipse plug-in. It provides an SQL-like querying language for object-oriented code, which allows searching for bugs, measuring code metrics, etc.
Software Project Planning
• The software project manager is responsible for planning and scheduling project development. They manage the work to ensure that it is completed to the required standard. They monitor progress to check that development is on time and within budget. The project planning must incorporate the major issues like size and cost estimation, scheduling, project monitoring, personnel selection and evaluation, and risk management. To plan a successful software project, we must understand:
• Scope of work to be completed
• Risk analysis
• The resources required
• The project to be accomplished
• Record of being followed
• Software Project planning starts before technical work start. The various steps of planning
activities are:
• The size is the crucial parameter for the estimation of other activities. Resource requirements are estimated based on cost and development time. The project schedule may prove to be very useful for controlling and monitoring the progress of the project. This is dependent on resources and development time.
Software Cost Estimation
• For any new software project, it is necessary to know how much it will cost to develop and how much development time it will take. These estimates are needed before development is initiated, but how is this done? Several estimation procedures have been developed, and they have the following attributes in common.
• Project scope must be established in advance.
• Software metrics are used as a support from which evaluation is made.
• The project is broken into small pieces which are estimated individually.
To achieve reliable cost and schedule estimates, several options arise:
• Delay estimation until late in the project.
• Use simple decomposition techniques to generate project cost and schedule estimates.
• Acquire one or more automated estimation tools.
Uses of Cost Estimation
• Boehm proposed COCOMO (Constructive Cost Model) in 1981. COCOMO is one of the most widely used software estimation models in the world. COCOMO predicts the effort and schedule of a software product based on the size of the software.
The necessary steps in this model are:
• Get an initial estimate of the development effort from evaluation of thousands of delivered
lines of source code (KDLOC).
• Determine a set of 15 multiplying factors from various attributes (for e.g., Required
Software Reliability, Size of Application Database, Complexity of The Product) of the
project.
• Calculate the effort estimate by multiplying the initial estimate with all the multiplying
factors i.e., multiply the values in step1 and step2.
• The initial estimate (also called the nominal estimate) is determined by an equation of the form used in the static single-variable models, using KDLOC as the measure of size. To determine the initial effort Ei in person-months, the equation used is of the type shown below:
Ei = a * (KDLOC)^b
• The value of the constant a and b depends on the project type.
In COCOMO, projects are categorized into
three types
1. Organic: A development project can be treated as of the organic type if the project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar kinds of projects. Examples of this type of project are simple business systems, simple inventory management systems, and data processing systems.
2. Semidetached: A development project can be treated as of the semidetached type if the development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience with related systems and may be unfamiliar with some aspects of the system being developed. Examples of semidetached systems include developing a new operating system (OS), a Database Management System (DBMS), and a complex inventory management system.
3. Embedded: A development project is considered to be of the embedded type if the software being developed is strongly coupled to complex hardware, or if stringent constraints on the operational procedures exist. Examples include ATM software and air traffic control software.
For the three product categories, Boehm provides different sets of expressions to predict effort (in units of person-months) and development time from the size estimate in KLOC (kilo lines of code). The effort estimation takes into account the productivity loss due to holidays, weekly offs, coffee breaks, etc.
According to Boehm, software cost estimation should be done through three stages:
• Basic Model
• Intermediate Model
• Detailed Model
1. Basic COCOMO Model: The basic COCOMO model gives an approximate estimate of the project parameters. The following expressions give the basic COCOMO estimation model:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
where
KLOC is the estimated size of the software product indicated in Kilo Lines of Code,
a1,a2,b1,b2 are constants for each group of software products,
Tdev is the estimated time to develop the software, expressed in months,
Effort is the total effort required to develop the software product, expressed in person
months (PMs).
Estimation of development effort
For the three classes of software products, the formulas for estimating the effort based on
the code size are shown below:
• Organic: Effort = 2.4(KLOC)^1.05 PM
• Semi-detached: Effort = 3.0(KLOC)^1.12 PM
• Embedded: Effort = 3.6(KLOC)^1.20 PM
Estimation of development time
For the three classes of software products, the formulas for estimating the development
time based on the effort are given below:
• Organic: Tdev = 2.5(Effort)^0.38 months
• Semi-detached: Tdev = 2.5(Effort)^0.35 months
• Embedded: Tdev = 2.5(Effort)^0.32 months
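The basic model can be captured in a few lines of Python using the constants tabulated above; running it for a 400 KLOC product reproduces the figures of Example 1 below:

# Basic COCOMO: Effort = a1 * KLOC^a2 (PM), Tdev = b1 * Effort^b2 (months).
COCOMO_BASIC = {                      # project class: (a1, a2, b1, b2)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class):
    a1, a2, b1, b2 = COCOMO_BASIC[project_class]
    effort = a1 * kloc ** a2          # person-months
    tdev = b1 * effort ** b2          # months
    return effort, tdev

for cls in COCOMO_BASIC:
    e, t = basic_cocomo(400, cls)
    print(f"{cls}: effort = {e:.2f} PM, Tdev = {t:.2f} months")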
• Some insight into the basic COCOMO model can be obtained by plotting the estimated characteristics for different software sizes. The figure shows a plot of estimated effort versus product size. From the figure, we can observe that the effort is somewhat superlinear in the size of the software product. Thus, the effort required to develop a product increases very rapidly with project size.
• The graph indicates that the effort taken by
organic type of projects is less than the semi-
detached or embedded type of project
• From the effort estimation, the project cost can be obtained by
multiplying the required effort by the manpower cost per month. But,
implicit in this project cost computation is the assumption that the entire
project cost is incurred on account of the manpower cost alone. In
addition to manpower cost, a project would incur costs due to hardware
and software required for the project and the company overheads for
administration, office space, etc.
• It is important to note that the effort and the duration estimations
obtained using the COCOMO model are called a nominal effort estimate
and nominal duration estimate. The term nominal implies that if anyone
tries to complete the project in a time shorter than the estimated
duration, then the cost will increase drastically.
• But, if anyone completes the project over a longer period of time than the
estimated, then there is almost no decrease in the estimated cost value.
• Example1: Suppose a project was estimated to be 400
KLOC. Calculate the effort and development time for each
of the three model i.e., organic, semi-detached &
embedded.
• Solution: The basic COCOMO equations take the form
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
Estimated size of project = 400 KLOC
(i) Organic Mode
E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 M
(ii) Semidetached Mode
E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 M
(iii) Embedded Mode
E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.8)^0.32 = 38 M
• Example 2: A project size of 200 KLOC is to be
developed. Software development team has average
experience on similar type of projects. The project
schedule is not very tight. Calculate the Effort,
development time, average staff size, and productivity of
the project.
• Solution: The semidetached mode is the most appropriate mode, keeping in view the size, schedule and experience of the development team.
Hence E = 3.0(200)^1.12 = 1133.12 PM
D = 2.5(1133.12)^0.35 = 29.3 M
Average staff size = E/D = 1133.12/29.3 ≈ 38.67 persons
P = KLOC/E = 200/1133.12 = 0.1765 KLOC/PM ≈ 176 LOC/PM
• 2. Intermediate Model: The basic COCOMO model assumes that effort is only a function of the number of lines of code and some constants chosen according to the class of software system. In reality, other project attributes also influence the effort. The intermediate COCOMO model recognizes this fact and refines the initial estimate obtained through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of software engineering.
• Classification of Cost Drivers and their attributes:
(i) Product attributes -
• Required software reliability extent
• Size of the application database
• The complexity of the product
(ii) Hardware attributes -
• Run-time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnabout time (Turnaround time (TAT) is the amount of time taken to complete a
process or fulfill a request)
(iii) Personnel attributes -
• Analyst capability
• Applications experience
• Software engineering capability
• Virtual machine experience
• Programming language experience
(iv) Project attributes -
• Use of software tools
• Application of software engineering methods
• Required development schedule
The cost drivers are divided into four categories:
• Intermediate COCOMO equation:
E = ai * (KLOC)^bi * EAF
D = ci * (E)^di
• Coefficients for intermediate COCOMO
[Table: effort multiplier values for each cost driver (product, hardware, personnel, and project attributes), rated Very Low / Low / Nominal / High / Very High / Extra High.]
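A short sketch of how the intermediate model scales the nominal estimate by the Effort Adjustment Factor (EAF); the coefficient values (3.2, 1.05 for an organic project) and the three multiplier ratings are assumptions used only for illustration, since the slides do not reproduce the full multiplier table:

# Intermediate COCOMO: E = a_i * KLOC^b_i * EAF, where EAF is the product of
# the selected cost-driver multipliers. Coefficients and multipliers below are
# illustrative assumptions.
import math

def intermediate_effort(kloc, a_i, b_i, multipliers):
    eaf = math.prod(multipliers)          # Effort Adjustment Factor
    return a_i * kloc ** b_i * eaf

multipliers = [1.15, 0.94, 1.08]          # e.g. reliability, tool use, experience (assumed)
print(round(intermediate_effort(32, 3.2, 1.05, multipliers), 2), "PM")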
• Putnam's software equation relates product size to effort and development time: L = Ck * K^(1/3) * td^(4/3), where
• K is the total effort expended (in PM) in product development, and L is the product size estimate in KLOC.
• td corresponds to the time of system and integration testing. Therefore, td can reasonably be considered as the time required for developing the product.
• Ck is the state-of-technology constant and reflects constraints that impede the progress of the program.
• Typical values of Ck = 2 for poor development environment
• Ck= 8 for good software development environment
• Ck = 11 for an excellent environment (in addition to following software
engineering principles, automated tools and techniques are used).
• The exact value of Ck for a specific task can be computed from the historical data
of the organization developing it.
• Putnam proposed that the optimal staff build-up on a project should follow the Rayleigh curve. Only a small number of engineers are required at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work is necessary, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls.
• Effect of a Schedule change on Cost
• Putnam derived the following expression from the software equation:
K = L^3 / (Ck^3 * td^4)
• Where K is the total effort expended (in PM) in the product development,
• L is the product size in KLOC,
• td corresponds to the time of system and integration testing, and
• Ck is the state-of-technology constant and reflects constraints that impede the progress of the program.
• Now, by using the above expression, it is obtained that K = C / td^4,
• where, for the same product size, C = L^3 / Ck^3 is a constant.
• (Project development effort is directly proportional to project development cost.)
• From the above expression, it can be easily observed that when
the schedule of a project is compressed, the required
development effort as well as project development cost increases
in proportion to the fourth power of the degree of compression. It
means that a relatively small compression in delivery schedule can
result in a substantial penalty of human effort as well as
development cost.
• For example, if the estimated development time is 1 year, then to
develop the product in 6 months, the total effort required to
develop the product (and hence the project cost) increases 16
times.
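The 16-times figure follows directly from K = C / td^4; a tiny sketch verifying the ratio (the constant C cancels, so its value is arbitrary):

# Effect of schedule compression in the Putnam model: K = C / td^4,
# so compressing the schedule by a factor f multiplies the effort by f^4.
def effort(c, td_years):
    return c / td_years ** 4

C = 1.0                          # arbitrary; cancels in the ratio
k_nominal = effort(C, 1.0)       # estimated development time of 1 year
k_compressed = effort(C, 0.5)    # schedule forced down to 6 months

print("Effort increase factor:", k_compressed / k_nominal)   # 16.0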
What is Risk?
• The project organizer needs to anticipate the risk in the project as early as possible so
that the impact of risk can be reduced by making effective risk management planning.
• A project can be affected by a large variety of risks. To identify the significant risks that might affect a project, it is necessary to categorize risks into different classes.
There are different types of risks which can affect a software project:
• Technology risks: Risks that arise from the software or hardware technologies that are used to develop the system.
• People risks: Risks that are associated with the people in the development team.
• Organizational risks: Risks that arise from the organizational environment where the software is being developed.
• Tools risks: Risks that arise from the software tools and other support software used to create the system.
• Requirement risks: Risks that arise from changes to the customer requirements and the process of managing the requirements change.
• Estimation risks: Risks that arise from the management estimates of the resources required to build the system.
2. Risk Analysis:
• During the risk analysis process, you have to consider every identified risk and form a judgment about the probability and seriousness of that risk.
• There is no simple way to do this. You have to rely on your perception
and experience of previous projects and the problems that arise in
them.
• It is not possible to make an exact numerical estimate of the probability and seriousness of each risk. Instead, you should assign the risk to one of several bands:
• The probability of the risk might be determined as very low (0-10%), low
(10-25%), moderate (25-50%), high (50-75%) or very high (+75%).
• The effect of the risk might be determined as catastrophic (threatens the survival of the project), serious (would cause significant delays), tolerable (delays are within the allowed contingency), or insignificant.
Risk Control
• It is the process of managing risks to achieve desired outcomes. After all the identified risks of a project are assessed, plans must be made to contain the most harmful and the most likely risks. Different risks need different containment methods. In fact, most risks need ingenuity on the part of the project manager in tackling the risk.
• There are three main methods to plan for risk management:
• Avoid the risk: This may take several ways such as discussing with the client
to change the requirements to decrease the scope of the work, giving
incentives to the engineers to avoid the risk of human resources turnover, etc.
• Transfer the risk: This method involves getting the risky element developed
by a third party, buying insurance cover, etc.
• Risk reduction: This means planning ways to contain the loss due to a risk. For instance, if there is a risk that some key personnel might leave, new recruitment can be planned.
Risk Leverage:
• To choose between the various methods of handling a risk, the project manager must consider the cost of controlling the risk and the corresponding reduction of risk. For this, the risk leverage of the various risks can be estimated.
• Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.
• Risk leverage = (risk exposure before reduction -
risk exposure after reduction) / (cost of reduction)
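For instance, with illustrative (assumed) figures:

# Risk leverage = (risk exposure before reduction - risk exposure after
# reduction) / cost of reduction. All figures below are assumed for illustration.
exposure_before = 50_000     # expected loss before mitigation ($)
exposure_after = 10_000      # expected loss after mitigation ($)
reduction_cost = 8_000       # cost of the mitigation ($)

leverage = (exposure_before - exposure_after) / reduction_cost
print("Risk leverage:", leverage)    # 5.0 -> the reduction is worthwhile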
1. Risk planning: The risk planning method considers each of the key risks that have been identified and develops ways to manage these risks.
• For each of the risks, you have to think of the actions that you may take to minimize the disruption to the plan if the issue identified in the risk occurs.
• You also should think about data that you might need to collect while
monitoring the plan so that issues can be anticipated.
• Again, there is no easy process that can be followed for contingency planning. It relies on the judgment and experience of the project manager.
2. Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the product, process, and business risks have not changed.
Project Scheduling
• Personnel Planning deals with staffing. Staffing deals with appointing personnel to the positions that are identified by the organizational structure.
It involves:
• Defining requirements for personnel
• Recruiting (identifying, interviewing, and selecting candidates)
• Compensating
• Developing and promoting personnel
• For personnel planning and scheduling, it is helpful to have effort and schedule estimates for the subsystems and basic components of the system.
• At planning time, when the system design has not been completed, the planner can only expect to know about the large subsystems in the system and possibly the major modules in these subsystems.
• Once the project plan is estimated, and the effort and schedule of the various phases and functions are known, staff requirements can be determined.
• From the cost and overall duration of the projects, the average staff size for the
projects can be determined by dividing the total efforts (in person-months) by the
whole project duration (in months).
• Typically the staff required for the project is small during requirement and design, the
maximum during implementation and testing, and drops again during the last stage
of integration and testing.
• Using the COCOMO model, average staff requirement for various phases can be
calculated as the effort and schedule for each method are known.
• When the schedule and average staff level for every action are well-known, the
overall personnel allocation for the project can be planned.
• This plan will indicate how many people will be required for different activities at
different times for the duration of the project.
• The total effort for each month and the total effort for each step can easily be
calculated from this plan.
Team Structure
• The DFD may be used to represent a system or software at any level of abstraction. In fact, DFDs may be partitioned into levels that represent increasing information flow and functional detail. Levels in a DFD are numbered 0, 1, 2 or beyond. Here, we will see primarily three levels in the data flow diagram: 0-level DFD, 1-level DFD, and 2-level DFD.
0-level DFD
• It is also known as the fundamental system model or context diagram. It represents the entire software requirement as a single bubble with input and output data denoted by incoming and outgoing arrows. The system is then decomposed and described as a DFD with multiple bubbles. Parts of the system represented by each of these bubbles are then decomposed and documented as more and more detailed DFDs. This process may be repeated at as many levels as necessary until the program at hand is well understood. It is essential to preserve the number of inputs and outputs between levels; this concept is called leveling by DeMarco. Thus, if bubble "A" has two inputs x1 and x2 and one output y, then the expanded DFD that represents "A" should have exactly two external inputs and one external output, as shown in the figure.
• The Level-0 DFD, also called context diagram
of the result management system is shown in
fig. As the bubbles are decomposed into less
and less abstract bubbles, the corresponding
data flow may also be needed to be
decomposed.
1-level DFD
• In 1-level DFD, a context diagram is
decomposed into multiple bubbles/processes.
In this level, we highlight the main objectives
of the system and breakdown the high-level
process of 0-level DFD into sub-processes.
2-Level DFD
• 2-level DFD goes one process deeper into
parts of 1-level DFD. It can be used to project
or record the specific/necessary detail about
the system's functioning.
Data Dictionaries
What is Quality?
• Quality refers to any measurable characteristic such as correctness, maintainability, portability, testability, usability, reliability, efficiency, integrity, reusability, and interoperability.
• There are two kinds of Quality:
• Quality of Design: Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.
• Quality of conformance: Quality of conformance is the degree to
which the design specifications are followed during
manufacturing. Greater the degree of conformance, the higher is
the level of quality of conformance.
• Software Quality: Software Quality is defined as the conformance
to explicitly state functional and performance requirements,
explicitly documented development standards, and inherent
characteristics that are expected of all professionally developed
software.
• Quality Control: Quality control involves a series of inspections, reviews, and tests used throughout the software process to ensure each work product meets the requirements placed upon it. Quality control includes a feedback loop to the process that created the work product.
• Quality Assurance: Quality Assurance is the preventive set of activities
that provide greater confidence that the project will be completed
successfully.
• Quality Assurance focuses on how the engineering and management activities will be done.
• As anyone is interested in the quality of the final product, it should be assured that we are building the right product.
• This can be assured only when we inspect and review the intermediate products; if any bugs are found, they are debugged. In this way quality can be enhanced.
• Importance of Quality
• We would expect the quality to be a concern of all producers of goods
and services. However, the distinctive characteristics of software and
in particular its intangibility and complexity, make special demands.
• Increasing criticality of software: The final customer or user is naturally concerned about the general quality of software, especially its reliability. This is increasingly the case as organizations become more dependent on their computer systems and software is used more and more in safety-critical areas, for example, to control aircraft.
• The intangibility of software: This makes it challenging to know that a
particular task in a project has been completed satisfactorily. The
results of these tasks can be made tangible by demanding that the
developers produce 'deliverables' that can be examined for quality.
• Accumulating errors during software development: As computer system development is made up of several steps where the output from one level is the input to the next, errors in the earlier 'deliverables' will be added to those introduced in the later stages, leading to accumulated detrimental effects. In general, the later in a project an error is found, the more expensive it will be to fix. In addition, because the number of errors in the system is unknown, the debugging phases of a project are particularly challenging to control.
Software Quality Assurance
• Software quality assurance is a planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.
• It is a set of activities designed to evaluate the process by which the products are developed or manufactured.
SQA Encompasses
Quality Assurance (QA) is the set of activities, including facilitation, training, measurement, and analysis, needed to provide adequate confidence that processes are established and continuously improved to produce products or services that conform to specifications and are fit for use. Quality Control (QC) is described as the processes and methods used to compare product quality to requirements and applicable standards, and the actions taken when a nonconformance is detected.
QA is an activity that establishes and evaluates the processes that produce the product; if there is no process, there is no role for QA. QC is an activity that demonstrates whether or not the product produced met standards.
QA identifies weaknesses in processes and improves them. QC identifies defects, with the primary goal of correcting errors.
• The quality of a software product is defined in terms of its fitness of purpose. That is, a quality product does precisely what the users want it to do. For software products, fitness of purpose is generally explained in terms of satisfaction of the requirements laid down in the SRS document. Although "fitness of purpose" is a satisfactory interpretation of quality for many devices such as a car, a table fan, or a grinding machine, for software products "fitness of purpose" is not a wholly satisfactory definition of quality.
Example: Consider a functionally correct software product. That is, it performs all tasks as specified in the SRS
document. But, has an almost unusable user interface. Even though it may be functionally right, we cannot
consider it to be a quality product.
• The modern view of quality associates a software product with several quality factors, such as the following:
Portability: A software device is said to be portable, if it can be freely made to work in various operating
system environments, in multiple machines, with other software products, etc.
Usability: A software product has better usability if various categories of users can easily invoke the functions
of the product.
Reusability: A software product has excellent reusability if different modules of the product can quickly be
reused to develop new products.
Correctness: A software product is correct if various requirements as specified in the SRS document have been
correctly implemented.
Maintainability: A software product is maintainable if bugs can be easily corrected as and when they show up,
new tasks can be easily added to the product, and the functionalities of the product can be easily modified,
etc.
Software Quality Management System
• Quality systems have evolved considerably over the last five decades. Before World War II, the usual method of producing quality products was to inspect the finished products and remove defective ones. Since that time, the quality systems of organizations have undergone four stages of evolution, as shown in the figure. The initial product inspection method gave way to quality control (QC).
• Quality control focuses not only on detecting defective products and removing them but also on determining the causes behind the defects. Thus, quality control aims at correcting the causes of bugs and not just rejecting the defective products. The next breakthrough in quality methods was the development of quality assurance methods.
• The primary premise of modern quality assurance is that if an organization's processes are proper
and are followed rigorously, then the products are obligated to be of good quality. The new quality
functions include guidance for recognizing, defining, analyzing, and improving the production
process.
• Total quality management (TQM) advocates that the processes followed by an organization must be continuously improved through process measurements. TQM goes a step further than quality assurance and aims at continuous process improvement. TQM goes beyond documenting processes to optimizing them through redesign. A term related to TQM is Business Process Reengineering (BPR).
• BPR aims at reengineering the way business is carried out in an organization. From the above discussion, it can be stated that over the years the quality paradigm has shifted from product assurance to process assurance, as shown in the figure.
• ISO 9000 Certification
• ISO (International Standards Organization) is a group or consortium of 63 countries established to plan and foster standardization. ISO declared its 9000 series of standards in 1987. It serves as a reference for contracts between independent parties. The ISO 9000 standard determines the guidelines for maintaining a quality system. The ISO standard
guidelines for maintaining a quality system. The ISO standard
mainly addresses operational methods and organizational
methods such as responsibilities, reporting, etc. ISO 9000
defines a set of guidelines for the production process and is
not directly concerned about the product itself.
• Types of ISO 9000 Quality Standards
• The ISO 9000 series of standards is based on the assumption that if a proper process is followed for production, then good quality products are bound to follow automatically. The types of industries to which the various ISO standards apply are as follows.
standards apply are as follows.
• ISO 9001: This standard applies to the organizations engaged in design,
development, production, and servicing of goods. This is the standard that
applies to most software development organizations.
• ISO 9002: This standard applies to those organizations which do not design products but are only involved in production. Examples of this category of industries include steel and car manufacturing industries that buy the product and plant designs from external sources and are engaged only in manufacturing those products. Therefore, ISO 9002 does not apply to software development organizations.
• ISO 9003: This standard applies to organizations that are involved only in the
installation and testing of the products. For example, Gas companies.
How to get ISO 9000 Certification?
Problem Partitioning
• For a small problem, we can handle the entire problem at once, but for a large problem we must
divide and conquer: the problem is broken down into smaller pieces so that each piece can be
solved separately.
• For software design, the goal is to divide the problem into manageable pieces.
• Benefits of Problem Partitioning
• Software is easy to understand
• Software becomes simple
• Software is easy to test
• Software is easy to modify
• Software is easy to maintain
• Software is easy to expand
• These pieces cannot be entirely independent of each other as they together form
the system. They have to cooperate and communicate to solve the problem. This
communication adds complexity.
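• As an illustration (a hypothetical C sketch, not taken from these notes), a small billing problem can be partitioned into an input piece, a computation piece, and a reporting piece that cooperate only through small, simple interfaces:

#include <stdio.h>

/* Piece 1: obtain the input data (stubs standing in for real input routines). */
static double read_price(void)    { return 100.0; }
static double read_tax_rate(void) { return 0.18; }

/* Piece 2: perform the computation. */
static double compute_cost(double price, double tax_rate)
{
    return price + (price * tax_rate);
}

/* Piece 3: report the result. */
static void print_cost(double cost)
{
    printf("The total cost is %5.2f\n", cost);
}

int main(void)
{
    /* The pieces communicate only through parameters and return values,
       which keeps the added communication (and complexity) small.        */
    print_cost(compute_cost(read_price(), read_tax_rate()));
    return 0;
}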
Abstraction
• Module Coupling
• In software engineering, the coupling is the degree
of interdependence between software modules.
Two modules that are tightly coupled are strongly
dependent on each other. However, two modules
that are loosely coupled are not dependent on each
other. Uncoupled modules have no
interdependence at all between them.
• The various types of coupling techniques are
shown in fig:
• A good design is one that has low coupling. Coupling is measured by the number of
interconnections between the modules: coupling increases as the number of calls between
modules increases or as the amount of shared data grows. Thus, it can be said that a design
with high coupling will have more errors.
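• The contrast can be sketched in C (an illustrative example; the module and variable names are assumed): the first pair of functions is tightly coupled through a shared global, while the second communicates only through parameters and return values:

#include <stdio.h>

/* Tightly coupled: both functions depend on the same global variable, so a
   change to how Shared_total is managed in one module can break the other. */
double Shared_total = 0.0;

void   add_to_total(double amount) { Shared_total += amount; }
double report_total(void)          { return Shared_total; }

/* Loosely coupled: data passes only through parameters and return values,
   so each function can be understood, tested, and replaced independently.  */
double add(double total, double amount) { return total + amount; }

int main(void)
{
    add_to_total(10.0);
    printf("tightly coupled total: %5.2f\n", report_total());

    double total = add(0.0, 10.0);
    printf("loosely coupled total: %5.2f\n", total);
    return 0;
}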
Types of Module Coupling
• 1. No Direct Coupling: There is no direct
coupling between M1 and M2.
Coding
• Coding is the process of transforming the design of a system into a computer language format. This
phase of software development is concerned with translating the design specification into source
code. It is necessary to write source code and internal documentation so that conformance of the
code to its specification can be easily verified.
• Coding is done by coders or programmers, who may be different people from the designers. The
goal is not merely to reduce the effort and cost of the coding phase, but to cut the cost of later
stages: the cost of testing and maintenance can be significantly reduced with efficient coding.
• Goals of Coding
• To translate the design of system into a computer language format: The coding is the process of
transforming the design of a system into a computer language format, which can be executed by a
computer and that perform tasks as specified by the design of operation during the design phase.
• To reduce the cost of later phases: The cost of testing and maintenance can be significantly
reduced with efficient coding.
• Making the program more readable: The program should be easy to read and understand. Having
readability and understandability as a clear objective of the coding activity can itself help in
producing more maintainable software.
• For implementing our design into code, we require a high-level programming language. A
programming language should have the following characteristics:
Characteristics of Programming Language
Coding Standards
• General coding standards refer to how the developer writes code; here we will
discuss some essential standards regardless of the programming language being
used.
• The following are some representative coding standards:
• Indentation: Proper and consistent indentation is essential in producing easy-to-read and
maintainable programs (see the sketch at the end of this list).
Indentation should be used to:
– Emphasize the body of a control structure such as a loop or a select statement.
– Emphasize the body of a conditional statement
– Emphasize a new scope block
• Inline comments: Inline comments that explain the functioning of a subroutine or key aspects of the
algorithm shall be used frequently.
• Rules for limiting the use of globals: These rules state what types of data can be declared global and
what cannot.
• Structured Programming: Structured (or Modular) Programming methods shall be used. "GOTO"
statements shall not be used as they lead to "spaghetti" code, which is hard to read and maintain,
except as outlined in the FORTRAN Standards and Guidelines.
• Naming conventions for global variables, local variables, and constant identifiers: A possible
naming convention can be that global variable names always begin with a capital letter, local variable
names are made of small letters, and constant names are always capital letters.
• Error return conventions and exception handling system: The way different functions in a program
report error conditions should be standard within an organization. For example, on encountering an
error condition, functions should consistently return either 0 or 1.
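• A short C sketch pulling several of these standards together (the particular indentation, naming, and error-return conventions shown are illustrative assumptions; every organization fixes its own):

#include <stdio.h>

/* Constant: all capital letters (assumed convention).                 */
#define MAX_ITEMS 100

/* Global: name begins with a capital letter (assumed convention).     */
double Grand_total = 0.0;

/* Error return convention (assumed): return 0 on success, 1 on error. */
int add_item(double price)
{
    if (price < 0.0) {
        return 1;              /* error: a negative price is rejected  */
    }
    Grand_total += price;      /* local names are all lower case       */
    return 0;                  /* success                              */
}

int main(void)
{
    if (add_item(-5.0) != 0) {
        fprintf(stderr, "add_item: invalid price supplied\n");
    }
    return 0;
}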
• Coding Guidelines
• General coding guidelines provide the programmer
with a set of best practices which can be used
to make programs easier to read and
maintain. Most of the examples use the C language
syntax, but the guidelines can be applied to all
languages.
• The following are some representative coding
guidelines recommended by many software
development organizations.
• 1. Line Length: It is considered a good practice to keep the length of source code lines at or below 80
characters. Lines longer than this may not be visible properly on some terminals and tools. Some printers
will truncate lines longer than 80 columns.
• 2. Spacing: The appropriate use of spaces within a line of code can improve readability.
• Example:
• Bad:    cost=price+(price*sales_tax);
          fprintf(stdout ,"The total cost is %5.2f\n",cost);
• Better: cost = price + (price * sales_tax);
          fprintf(stdout, "The total cost is %5.2f\n", cost);
• 3. The code should be well-documented: As a rule of thumb, there should be at least one comment line on
average for every three source lines.
• 4. The length of any function should not exceed 10 source lines: A very lengthy function is generally
difficult to understand, as it probably carries out many different tasks. For the same reason, lengthy
functions are likely to have a disproportionately larger number of bugs. (The sketch after this list
illustrates guidelines 3, 4, and 7.)
• 5. Do not use goto statements: Use of goto statements makes a program unstructured and very tough to
understand.
• 6. Inline Comments: Inline comments promote readability.
• 7. Error Messages: Error handling is an essential aspect of computer programming. This does not only
include adding the necessary logic to test for and handle errors but also involves making error messages
meaningful.
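• A small C sketch illustrating guidelines 3, 4, and 7 (the function and the error message are illustrative assumptions):

#include <stdio.h>

/* Returns the average of count readings.                                 */
/* The function is short (well under 10 source lines) and carries roughly */
/* one comment line for every three source lines.                         */
static double average(const double readings[], int count)
{
    double sum = 0.0;
    /* Accumulate every reading before dividing. */
    for (int i = 0; i < count; i++) {
        sum += readings[i];
    }
    return sum / count;
}

int main(void)
{
    double readings[3] = { 2.0, 4.0, 6.0 };
    int count = 3;
    /* Guideline 7: the error message states what failed and with which value. */
    if (count <= 0) {
        fprintf(stderr, "average: invalid number of readings (%d)\n", count);
        return 1;
    }
    printf("average = %5.2f\n", average(readings, count));
    return 0;
}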
Programming Style
• In structured programming, we subdivide the whole program into small modules so that the
program becomes easy to understand. The purpose of structured programming is to
linearize control flow through a computer program so that the execution sequence follows
the sequence in which the code is written. The dynamic structure of the program then
resembles the static structure of the program. This enhances the readability, testability, and
modifiability of the program. This linear flow of control is achieved by restricting the
set of allowed program constructs to single-entry, single-exit formats.
• Why we use Structured Programming?
• We use structured programming because it allows the programmer to understand the
program easily. If a program consists of thousands of instructions and an error occurs then it
is complicated to find that error in the whole program, but in structured programming, we
can easily detect the error and then go to that location and correct it. This saves a lot of
time.
• The following are the rules of structured programming:
• Structured Rule One: Code Block
• If the entry conditions are correct, but the exit conditions are wrong, the error must be in
the block. This is not true if the execution is allowed to jump into a block. The error might be
anywhere in the program. Debugging under these circumstances is much harder.
• Rule 1 of Structured Programming: A code block is
structured, as shown in the figure. In flowcharting terms,
a box with a single entry point and a single exit point is
structured. Structured programming is a method of making it
evident that the program is correct.
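• A minimal C sketch of the rule (all names are illustrative; the forbidden, unstructured fragment is shown only inside a comment):

#include <stdio.h>

/* Unstructured (what the rule forbids): a goto that jumps into the middle
   of a block means a wrong exit condition could be caused almost anywhere.
       if (restart) goto middle;
       total = 0;
   middle:
       total += next_value;
*/

/* Structured: the block has a single entry and a single exit, so if the
   entry conditions are correct and the exit conditions are wrong, the
   fault must lie inside this block.                                      */
static int sum_values(const int values[], int count)
{
    int total = 0;                      /* single entry: initialization   */
    for (int i = 0; i < count; i++) {
        total += values[i];             /* body of the loop               */
    }
    return total;                       /* single exit                    */
}

int main(void)
{
    int values[4] = { 1, 2, 3, 4 };
    printf("sum = %d\n", sum_values(values, 4));
    return 0;
}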
Structure Rule Two: Sequence
Software Fault Tolerance
• Software fault tolerance is the ability of software to detect and recover
from a fault that is happening or has already happened, in either the
software or the hardware of the system in which the software is running,
in order to keep providing service according to the specification.
• Software fault tolerance is a necessary component to construct the next
generation of highly available and reliable computing systems from
embedded systems to data warehouse systems.
• To adequately understand software fault tolerance it is important to
understand the nature of the problem that software fault tolerance is
supposed to solve.
• Software faults are all design faults. Software manufacturing, the
reproduction of software, is considered to be perfect. The fact that the
source of the problem is solely design faults makes software very different
from almost any other system in which fault tolerance is the desired property.
Software Fault Tolerance Techniques
1. Recovery Block
• The recovery block method is a simple technique developed by Randell. The recovery block
operates with an adjudicator, which confirms the results of various implementations of the
same algorithm. In a system with recovery blocks, the system view is broken down into fault-
recoverable blocks.
• The entire system is constructed of these fault-tolerant blocks. Each block contains at least a
primary, secondary, and exceptional case code along with an adjudicator. The adjudicator is
the component, which determines the correctness of the various blocks to try.
• The adjudicator should be kept somewhat simple to maintain execution speed and to aid
correctness. Upon first entering a unit, the adjudicator executes the primary alternate.
(There may be N alternates in a unit which the adjudicator may try.) If the adjudicator
determines that the primary alternate failed, it then tries to roll back the state of the system
and tries the secondary alternate.
• If the adjudicator does not accept the results of any of the alternates, it then invokes the
exception handler, which then indicates the fact that the software could not perform the
requested operation.
• The recovery block technique increases the pressure on the specification to be precise
enough that multiple alternatives which are functionally the same can be created. This problem is
discussed further in the context of the N-version software method.
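• A minimal sequential C sketch of a recovery block (the two alternates, the acceptance test, and the function names are illustrative assumptions; a real recovery block would also checkpoint and restore program state between attempts):

#include <stdio.h>

/* Two independently written alternates for the same task: a square root.  */
static double primary_sqrt(double x)
{
    return (x < 0.0) ? -1.0 : x / 2.0;   /* a deliberately poor alternate  */
}

static double secondary_sqrt(double x)
{
    double r = (x > 1.0) ? x : 1.0;
    for (int i = 0; i < 25; i++) {
        r = (r + x / r) / 2.0;           /* Newton-Raphson iteration       */
    }
    return r;
}

/* Adjudicator (acceptance test): kept simple to preserve speed and trust. */
static int acceptable(double x, double result)
{
    double err = result * result - x;
    return err > -0.01 && err < 0.01;
}

/* The recovery block: try the primary alternate, check it, roll back and
   try the secondary alternate, and raise an exception if none is accepted. */
int recovery_block_sqrt(double x, double *out)
{
    double r = primary_sqrt(x);
    if (acceptable(x, r)) { *out = r; return 0; }
    /* (a real system would restore any state changed by the primary here) */
    r = secondary_sqrt(x);
    if (acceptable(x, r)) { *out = r; return 0; }
    return 1;                            /* exception: no alternate accepted */
}

int main(void)
{
    double result;
    if (recovery_block_sqrt(2.0, &result) == 0)
        printf("accepted result: %f\n", result);
    else
        fprintf(stderr, "recovery block could not satisfy the request\n");
    return 0;
}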
2. N-Version Software
• The differences between the recovery block technique and the N-version
technique are not too numerous, but they are essential. In traditional recovery
blocks, each alternative would be executed serially until an acceptable solution
is found as determined by the adjudicator. The recovery block method has
been extended to contain concurrent execution of the various alternatives.
• The N-version techniques have always been designed to be implemented
using N-way hardware concurrently. In a serial retry system, the cost in time of
trying multiple methods may be too expensive, especially for a real-time
system. Conversely, concurrent systems need the expense of N-way hardware
and a communications network to connect them.
• The recovery block technique requires that each module build a specific
adjudicator; in the N-version method, a single decider may be used. The
recovery block technique, assuming that the programmer can create a
sufficiently simple adjudicator, will create a system, which is challenging to
enter into an incorrect state.
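• A minimal sequential C sketch of N-version voting (real N-version systems run the versions concurrently on separate hardware; the three versions and the decider below are illustrative stand-ins):

#include <stdio.h>

/* Three independently developed versions of the same computation (stand-ins). */
static int version_a(int x) { return x * x; }

static int version_b(int x)
{
    int r = 0;
    for (int i = 0; i < x; i++) { r += x; }   /* squaring by repeated addition */
    return r;
}

static int version_c(int x) { return x * x + 1; }   /* this version is faulty  */

/* Single decider: accept any result on which at least two versions agree.     */
int n_version_square(int x, int *out)
{
    int r[3] = { version_a(x), version_b(x), version_c(x) };
    for (int i = 0; i < 3; i++) {
        for (int j = i + 1; j < 3; j++) {
            if (r[i] == r[j]) { *out = r[i]; return 0; }   /* majority found   */
        }
    }
    return 1;                                  /* no two versions agreed       */
}

int main(void)
{
    int result;
    if (n_version_square(5, &result) == 0)
        printf("voted result: %d\n", result);
    else
        fprintf(stderr, "n-version decider: the versions disagreed\n");
    return 0;
}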
Software Reliability Models
Software Maintenance
• Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to
modify and update the software application after delivery, to correct errors and to improve
performance. Software is a model of the real world; when the real world changes, the
software requires alteration wherever possible.
• Software Maintenance is an inclusive activity that includes error corrections, enhancement of
capabilities, deletion of obsolete capabilities, and optimization.
• Need for Maintenance
• Software Maintenance is needed for:-
• Correct errors
• Change in user requirement with time
• Changing hardware/software requirements
• To improve system efficiency
• To optimize the code to run faster
• To modify the components
• To reduce any unwanted side effects.
• Thus the maintenance is required to ensure that the system continues to satisfy user
requirements.
Types of Software Maintenance
• 1. Corrective Maintenance
• Corrective maintenance aims to correct any remaining errors, regardless of whether
they arise in the specifications, design, coding, testing, or documentation.
• 2. Adaptive Maintenance
• It contains modifying the software to match changes in the ever-changing
environment.
• 3. Preventive Maintenance
• It is the process by which we prevent our system from being obsolete. It involves
the concept of reengineering & reverse engineering in which an old system with
old technology is re-engineered using new technology. This maintenance
prevents the system from dying out.
• 4. Perfective Maintenance
• It means improving processing efficiency or performance, or restructuring the
software to enhance changeability. This may include enhancement of existing
system functionality, improvement in computational efficiency, etc.
Causes of Software Maintenance Problems
• Lack of Traceability
• Codes are rarely traceable to the requirements and design specifications.
• It makes it very difficult for a programmer to detect and correct a critical defect affecting
customer operations.
• Like a detective, the programmer pores over the program looking for clues.
• Life Cycle documents are not always produced even as part of a development project.
• Lack of Code Comments
• Most software system code lacks adequate comments. Too few comments leave the maintainer
guessing at the intent of the code (a small traceability sketch follows this list).
• Obsolete Legacy Systems
• In most countries worldwide, the legacy systems that provide the backbone of the
nation's critical industries, e.g., telecommunications, medical, and transportation utility services,
were not designed with maintenance in mind.
• They were not expected to last for a quarter of a century or more!
• As a consequence, the code supporting these systems is devoid of traceability to the
requirements and of compliance to design and programming standards, and often includes dead,
extra, and uncommented code, which all make the maintenance task next to impossible.
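• As a small illustration (the requirement and design identifiers below are a hypothetical convention, not a standard), a comment can tie a piece of code back to the requirement and design element it implements, providing exactly the traceability that maintenance programmers usually lack:

/* Implements requirement SRS-3.2.1 ("apply sales tax to every invoice line"). */
/* Design reference: module BILLING, section 4.3 of the design document.       */
double apply_sales_tax(double price, double tax_rate)
{
    return price + (price * tax_rate);
}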
Software Maintenance Process
• Program Understanding
• The first step consists of analyzing the program in order to understand it.
• Generating a Particular Maintenance Proposal
• The second step consists of creating a particular maintenance proposal to accomplish
the implementation of the maintenance goals.
• Ripple Effect
• The third step consists of accounting for all of the ripple effects as a consequence of
program modifications.
• Modified Program Testing
• The fourth step consists of testing the modified program to ensure that the revised
application has at least the same reliability level as before.
• Maintainability
• Each of these four steps and their associated software quality attributes is critical to the
maintenance process. All of these steps must be combined to achieve maintainability.
Software Maintenance Cost Factors