
Factors Influencing Software Development Productivity—State-of-the-Art and Industrial Experiences

ADAM TRENDOWICZ
Fraunhofer Institute for Experimental
Software Engineering, Fraunhofer-Platz 1,
67663 Kaiserslautern, Germany

JÜRGEN MÜNCH
Fraunhofer Institute for Experimental
Software Engineering, Fraunhofer-Platz 1,
67663 Kaiserslautern, Germany

Abstract
Managing software development productivity is a key issue in software organi-
zations. Business demands for shorter time-to-market while maintaining high
product quality force software organizations to look for new strategies to
increase development productivity.
Traditional, simple delivery rates employed to control hardware production
processes have turned out not to work when simply transferred to the software
domain. The productivity of software production processes may vary across
development contexts dependent on numerous influencing factors. Effective
productivity management requires considering these factors. Yet, there are
thousands of possible factors and considering all of them would make no
sense from the economical point of view. Therefore, productivity modeling
should focus on a limited number of factors with the most significant impact
on productivity.
In this chapter, we present a comprehensive overview of productivity factors
recently considered by software practitioners. The study results are based on
the review of 126 publications as well as international experiences of the

Fraunhofer Institute, including the most recent 13 industrial projects, four work-
shops, and eight surveys on software productivity. The aggregated results show
that the productivity of software development processes still depends signifi-
cantly on the capabilities of developers as well as on the tools and methods
they use.

1. Introduction
2. Design of the Study
   2.1. Review of Industrial Experiences
   2.2. Review of Related Literature
   2.3. Aggregation of the Review Results
3. Related Terminology
   3.1. Context Versus Influence Factors
   3.2. Classification of Influence Factors
4. Overview of Factors Presented in Literature
   4.1. Crosscontext Factors
   4.2. Context-Specific Factors
   4.3. Reuse-Specific Factors
   4.4. Summary of Literature Review
5. Overview of Factors Indicated by Industrial Experiences
   5.1. Demographics
   5.2. Cross-Context Factors
   5.3. Context-Specific Factors
   5.4. Summary of Industrial Experiences
6. Detailed Comments on Selected Productivity Factors
   6.1. Comments on Selected Context Factors
   6.2. Comments on Selected Influence Factors
7. Considering Productivity Factors in Practice
   7.1. Factor Definition and Interpretation
   7.2. Factor Selection
   7.3. Factor Dependencies
   7.4. Model Quantification
8. Summary and Conclusions
Acknowledgments
References

1. Introduction

Rapid growth in the demand for high-quality software and increased investment
in software projects show that software development is one of the key markets
worldwide [2, 122]. Together with the increased distribution of software, its variety
and complexity are growing constantly. A fast changing market demands software
products with ever more functionality, higher reliability, and higher performance.
Software project teams must strive to achieve these objectives by exploiting the
impressive advances in processes, development methods, and tools. Moreover, to
stay competitive and gain customer satisfaction, software providers must ensure that
software products with a certain functionality are delivered on time, within budget,
and to an agreed level of quality, or even with reduced development costs and time.
This illustrates the necessity for reliable methods to manage software development
productivity, which has traditionally been the basis of successful software manage-
ment. Numerous companies have already measured software productivity [3] or
planned to measure it for the purpose of improving their process efficiency, reducing
costs, improving estimation accuracy, or making decisions about outsourcing their
development.
Traditionally, the productivity of industrial production processes has been
measured as the ratio of units of output divided by units of input [4]. This
perspective was transferred into the software development context and is usually
defined as productivity [5] or efficiency [6]. As observed during an international
survey performed in 2006 by the Fraunhofer Institute for Experimental Software
Engineering, 80% of software organizations adapt this industrial perspective on
productivity to the context of software development, where inputs consist of the
effort expended to produce software deliverables (outputs). The assumption those
organizations make is that measuring software productivity is similar to measuring
any other form of productivity. Yet, software production processes seem
to be significantly more difficult than production processes in other industries
[7, 8, 120]. This is mainly because software organizations typically develop new
products as opposed to fabricating the same product over and over again. More-
over, software development is a human-based (‘‘soft’’) activity with extreme
uncertainties from the outset. This leads to many difficulties in the reliable
definition of software productivity. Some of the most critical practical conse-
quences are that software development productivity measures based simply on
size and effort are hardly comparable [7, 125], and that size-based software
estimates are not adequate [9].
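
Stated as a formula, the industrial definition referred to above amounts to the
following ratio (a minimal formalization; the concrete units, e.g., size in lines of
code or function points and effort in person-months, are assumptions that vary by
organization):

    \text{Productivity} = \frac{\text{Output}}{\text{Input}} = \frac{\text{Software Size}}{\text{Development Effort}}

The difficulties listed above arise precisely because neither the size measure nor
the interpretation of this ratio is directly comparable across contexts.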

These difficulties are related to a variety of practical issues. One of these is
software sizing.1 Large numbers of associated, mutually interacting, and usually
unknown factors influencing software productivity (besides size) are another critical
issue. Even if reliable and consistent size measurement is in place, the key question
of software productivity measurement usually remains and may be formulated as
follows: ‘‘What makes software development productivity vary across different
contexts?’’ In other words, which characteristics of the project environment capture
the reasons why projects consumed different amounts of resources when normalized
for size?
To answer this question, characteristics that actually differentiate software pro-
jects and their impact on development productivity have to be considered. Those
characteristics include personnel involved in the project, products developed, as
well as processes, technologies and methods applied.
Yet, there are several pragmatic problems to be considered when analyzing
software project characteristics and their impact on productivity. One is, as already
mentioned, the practically infinite quantity of potential factors influencing software
productivity [11]. Even though it is possible to select a limited subset of the most
significant factors [12, 13], both factors and their impact on productivity may differ
depending on productivity measurement level and context [7]. This also means that
different project characteristics may have different impacts on the levels of single
developer, team, project, and whole organization. Moreover, even if considered on
the same level, various productivity factors may play different roles in the embedded
software, in Web applications, and in waterfall or incremental development. This
implies, for example, that factors and their ratings covered by the COCOMO model
[14] would require reappraisal when applied in a context other than that in which the
model was built [15].
To address this issue, considerable research has been directed at identifying
factors that have a significant impact on software development productivity. This
includes (1) studies dealing directly with identifying productivity factors [16];
(2) studies aiming at building cost and productivity modeling and measurement
techniques, methods, and tools [8]; (3) data collection projects intending to build
software project data repositories useful for benchmarking productivity and predict-
ing cost [17–20]; and (4) studies providing influence factors hidden in best practices
to reduce development cost (increase productivity) [13, 123].

1
Software sizing (size measurement) as a separate large topic is beyond the scope of this chapter.
For more details on the issue of size measurement in the context of productivity measurement, see, for
example, [10].

Although a large amount of work has been directed recently at identifying
significant software productivity factors, many software organizations actually still
use the simplified definition of productivity provided in [12]. In the context of high-
maturity organizations [3], where production processes are stable and projects
largely homogeneous, a simple productivity measure may work, provided that it is
used to compare similar projects. Yet, in the case of crosscontext measurement of
heterogeneous projects with unstable processes, using simple productivity may lead
to serious problems. In consequence, organizations that have measured simplified
productivity find it difficult to interpret and/or of little benefit [7].
Software organizations do not consider productivity factors because they often do
not know which ones they should start with—which are the most significant ones.
Existing publications provide hardly any support in this matter. Published results are
distributed over hundreds of publications, which in many cases lack a systematic
approach to presenting influence factors (e.g., context information, relative impact
on productivity) and/or present significant factors implicitly as cost drivers in a cost
model, project attributes in a data repository, or best practices for improving
development processes.
In this study, we try to provide a systematic overview of those productivity factors
that have the most significant impact on software productivity. In doing so, we
assume that software cost is a derivative of software productivity and, therefore, we
do not treat factors influencing cost (also called cost drivers) separately. Besides
commonly available published studies investigating productivity factors, the over-
view also includes internal experiences gained in recent years at the Fraunhofer
Institute for Experimental Software Engineering (IESE) in several industrial pro-
jects, workshops, and surveys performed in the area of software productivity
measurement and cost modeling. Yet, due to confidentiality reasons, we do not
disclose some sensitive data such as company name or titles of respective internal
reports that may indicate the names of the involved industry partners. Instead, we
refer to certain characteristics of the companies. Such context information might be
useful when selecting relevant factors for similar contexts, based on the overview
presented in this chapter.
The remainder of the chapter is organized as follows. Section 2 presents the
design of the study. Section 3 provides brief definitions of the terminology used.
Sections 4 and 5 provide a summary of productivity factors based on the literature
review and the authors’ industrial experiences gained at Fraunhofer IESE, respec-
tively. An in-depth discussion on selected productivity factors presented in the
literature is given in Section 6. Section 7 provides an overview of several practical
issues to be considered when identifying productivity factors and using them to
model software development productivity and cost. Finally, Section 8 summarizes
the presented results and discusses open issues.

2. Design of the Study

The review of productivity factors presented in this chapter consists of two parts.
First, we present a review of the authors’ individual experiences gained during a
number of industrial initiatives. Here, we include factors identified for the
purpose of cost and productivity modeling as well as factors acquired during surveys
and workshops performed by Fraunhofer IESE in the years 1998–2006 (also referred
to as IESE studies). The second part presents an overview of publications regarding
software project characteristics that have an influence on the cost and productivity of
software development.

2.1 Review of Industrial Experiences


The review includes the following industrial initiatives:
- International commercial projects regarding cost and/or productivity model-
ing performed in the years 1998–2006 at medium and large software organiza-
tions in Europe (mainly Germany, e.g., [21]), Japan, India, and Australia
(e.g., [22]).
- International workshops on cost and/or productivity modeling performed in the
years 2005–2006 in Germany and Japan (e.g., [7, 24, 124]). The workshop
results include factors explicitly mentioned by representatives of various
international software organizations. The considered factors consist of both
factors that were considered by experts as having a significant impact on
productivity but not already measured, and factors already included in the
organizational measurement system for managing development cost and
productivity.
- Surveys on productivity measurement practices performed in various interna-
tional companies in the years 2004–2006. These surveys concerned productivity/
cost modeling and included questions about the most significant factors influen-
cing software productivity. The 2004 study was a state-of-the-practice survey
performed across 12 business units of a global software organization dealing
mainly with embedded software. The four 2005 studies were performed across
Japanese units of internationally operating software organizations dealing both
with embedded and business applications. Finally, the 2006 survey was per-
formed across 25 software organizations all over the world and covered various
software domains.

2.2 Review of Related Literature


2.2.1 Review Scope and Criteria
The design of the review is based on the guidelines for structural reviews in
software engineering proposed by Kitchenham [25]. Yet, based on the conclusions
of Jørgensen and Shepperd [10], we relied on a manual rather than an automatic
search of relevant references. The automatic search through the INSPEC repository
was complemented by a manual search through references found in reviewed papers
and by using a generic Web search engine (http://www.google.com). The automatic
search criteria were specified as follows:

INSPEC: (SOFTWARE AND ((COST OR EFFORT OR PRODUCTIVITY)
WITH (FACTORS OR INDICATORS OR DRIVERS OR MEASURE))).TX
= 417 documents

The available papers were selected for inclusion (full review) based on the title
and abstract; unfortunately, some publications could not be obtained through the
library service of Fraunhofer IESE. The review was done by one researcher. The
criteria used to decide whether to include or exclude papers were as follows:
- Availability and age. The papers had to be published no earlier than 1995. Since
the type and impact of influence factors may change over time [26], we limited
our review to the past decade. We made an exception for papers presenting
software project data sets: we did (indirectly) include those that were collected
and first published before 1995 but that were applied to validate cost/productivity
models published after 1995.
- Relevancy. The papers had to report factors with a potential impact on software
development cost and/or productivity. Implicit and explicit factors were con-
sidered. Explicit factors are those considered in the context of productivity
modeling/measurement. Implicit factors include those already included in
public-domain cost models and software project data repositories. Algorithmic
cost estimation models, for example, include so-called cost drivers to adjust the
gross effort estimated based only on software size (e.g., [121]). Analogy-based
methods, on the other hand, use various project characteristics in a distance
measure, which is used to find the best analogues on which to base the final
estimate (see the sketch after this list). Common software project data repositories
indirectly suggest a certain set of attributes that should be measured to assure
quality estimates.
- Novelty. We did not consider studies that adopt a complete set of factors from
another reference and do not provide any novel findings (e.g., a transparent model)
on a factor’s impact on productivity. We do not, for instance, consider models
(e.g., those based on neural networks) that use as their input the whole set of
COCOMO I factors and do not provide any insight into the relative impact of
each factor on software cost.
- Redundancy. We did not consider publications presenting the results of the
same study (usually presented by the same authors). In such cases, only one
publication was included in the review.
- Perspective. As already mentioned in the introduction, there are several possi-
ble perspectives on productivity in the context of software development. In this
review, we will focus on project productivity, that is, factors that make devel-
opment productivity differ across various projects. Therefore, we will, for
example, not consider the productivity of individual development processes
such as inspections, coding, or testing. This perspective is the one most com-
monly considered in the reviewed literature (although not stated directly) and is
the usual perspective used by software practitioners [7].
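
To illustrate the analogy-based mechanism mentioned under the Relevancy
criterion, the following Python sketch finds the best analogue via an unweighted
Euclidean distance. All project characteristics, names, and values here are
hypothetical; real methods typically normalize and weight the attributes:

    from math import sqrt

    # Hypothetical historical project base, characteristics scaled to [0, 1].
    historical = [
        {"team_exp": 0.8, "complexity": 0.3, "effort_pm": 14.0},
        {"team_exp": 0.4, "complexity": 0.7, "effort_pm": 31.0},
    ]
    new_project = {"team_exp": 0.7, "complexity": 0.4}

    def distance(a, b, attributes=("team_exp", "complexity")):
        # Distance over the project characteristics used by the method.
        return sqrt(sum((a[k] - b[k]) ** 2 for k in attributes))

    # The closest historical project (the best analogue) drives the estimate.
    analogue = min(historical, key=lambda p: distance(p, new_project))
    estimate = analogue["effort_pm"]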
Based on the aforementioned criteria, 136 references were included in the full
review. After a full review, 122 references in total were considered in the results
presented in this chapter. Several references (mostly books) contained well-
separated sections where productivity factors were considered in separate contexts.
In such cases, we considered these parts as separate references. This resulted in the
final count of 142 references. For each included reference, the following information
was extracted:
- Bibliographic information: title, authors, source and year of publication, etc.;
- Factor selection context: implicit, explicit, cost model, productivity measure,
project data repository;
- Factor selection method: expert assessment, literature survey, data analysis;
- Domain context: embedded software (Emb), management information systems
(MIS), and Web applications (Web);
- Size of factor set: initial, finally selected;
- Factor characteristics: name, definition, weighting (if provided).

2.2.2 Study Limitations


During the literature review, we faced several problems that may limit the validity
of this study and its conclusions. First, the description of factors given in the
literature is often incomplete and limited to a factor’s name only. Even if both
name and definition are provided, factors often differ with respect to their name
although, according to the attached definitions, they refer to the same phenomenon
(factor).
Second, published studies often define factors representing a multidimensional
phenomenon, which in other studies is actually decomposed into several separate
factors. For instance, the software performance constraints factor is sometimes
defined as time constraints, sometimes as storage constraints, and sometimes as
both time and storage constraints.
Moreover, instead of factors influencing development productivity, some studies
considered a specific factor’s value and its specific impact on productivity. For
instance, some studies identified the life cycle model applied to developing software
as having an impact on productivity, while others directly pointed out incremental
development as increasing overall development productivity.
Finally, factors identified implicitly within cost models (especially data-driven
models) are burdened with a certain bias, since in most cases, data-driven models are
built using the same commonly available (public) data repositories (e.g., [1, 14, 19,
27, 28]). In consequence, factors identified by such cost models are limited a priori
to the specific set of factors covered by the data repository used.
To moderate the impact of those problems on the study results, we undertook the
following steps:
- We excluded from the analysis those models that were built on one of the
publicly available project repositories and simply used all available factors. We
focused on models that selected a specific subset of factors or were built on
their ‘‘own,’’ study-specific data.
- We compared factors with respect to their definition. If this was not available,
we used a factor’s name as its definition.
- In our study, we decomposed a complex factor into several base factors
according to the factor’s definition.
- We generalized factors that actually referred to a specific factor's value to a
more abstract level. For instance, the ''iterative development'' factor actually
refers to a specific value of the development life cycle model factor; we classified
this factor as ''life cycle model.''
- Finally, we excluded (skipped) factors whose meaning (based on the name and/
or definition provided) was not clear.

2.2.3 Demographic Information


Among the reviewed references, 33 studies identified factors directly in the
context of development productivity modeling, 82 indirectly in the context of cost
modeling (e.g., estimation, factor selection), 14 in the context of a project data
repository (for the purpose of cost/productivity estimation/benchmarking), nine
in the context of software process improvement, two in the context of schedule
estimation, and, finally, two that did not explicitly specify the modeling context (see Fig. 1).
Regarding the domain, 27 references considered productivity factors in the
context of embedded software and software systems, 50 in the context of manage-
ment information systems, and four in the context of Web applications (see Fig. 2).
The remaining references either considered multiple domains (18) or did not explic-
itly specify it (43).
With respect to development type, 19 references analyzed productivity factors in
the context of new development, 44 in the context of enhancement/maintenance, and
15 in the context of software reuse and COTS development (see Fig. 3).

FIG. 1. Literature distribution regarding modeling context (percentage of references):
cost modeling 57.7%, productivity modeling 23.2%, project data repository 9.9%,
process improvement 6.3%, schedule estimation 1.4%, not specified 1.4%.

FIG. 2. Literature distribution regarding domain (percentage of references):
management information systems 35.2%, embedded and software systems 19.0%,
multiple domains 12.7%, Web applications 2.8%, not specified 30.3%.


FIG. 3. Literature distribution regarding development type (percentage of references):
not specified 45.1%, enhancement and maintenance 31.0%, new development 13.4%,
reuse and COTS 10.6%.

Most of the analyzed studies started with some initial set of potential productivity
factors and applied various selection methods to identify only the most significant
ones. The analyzed references started with between 1 and 200 initially identified
factors, which were later reduced to the 1–31 most significant factors.
In total, the reviewed references mentioned 246 different factors, with 64 repre-
senting abstract phenomena (e.g., team experience and skills) and the remaining 178
concrete characteristics (e.g., analyst capability or domain experience).

2.3 Aggregation of the Review Results


Due to space limitations, we do not provide the complete lists of factors we
identified in this study. Instead, we report those factors that are most commonly
considered as having the greatest impact on software development productivity.
Both in the literature and in the IESE studies, various methods were used to
determine the significance of the considered factors. Several authors present a
simple ranking of factors with respect to their significance [21]; others used various
weighting schemas to present the relative impact of each considered factor on
productivity or cost [16].
We aggregated the results presented in various studies using the following procedure:
- We ranked factors with respect to the weightings they were given in the source
study. The factor having the greatest weight (impact on productivity) was
ranked with ‘‘1,’’ the next one as ‘‘2,’’ and so on. If the source study reported
factors without distinguishing their impact/importance, we considered all
factors as having rank ‘‘1.’’ The disadvantage of the applied aggregation
procedure is that the information regarding the ‘‘distance’’ between factors
with respect to their weight was lost.

- We aggregated the results for each factor using two measures: (1) the number of
studies a given factor was considered in (Frequency) and (2) the median of the
factor's ranks over all the studies it was considered in (Median rank). We assume
ranks to be on an equidistant ordinal scale, which allows applying the median
operation [29].
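
To make the aggregation concrete, the following Python sketch (with purely
hypothetical study data and factor names) computes both measures:

    from collections import defaultdict
    from statistics import median

    # Each ranked study lists its factors ordered by impact, most significant first.
    ranked_studies = [
        ["team capabilities", "software complexity", "tool usage"],
        ["schedule pressure", "team capabilities"],
    ]
    # Studies reporting factors without weightings: every factor gets rank 1.
    unranked_studies = [["reuse level", "team size"]]

    ranks = defaultdict(list)
    for study in ranked_studies:
        for position, factor in enumerate(study, start=1):
            ranks[factor].append(position)
    for study in unranked_studies:
        for factor in study:
            ranks[factor].append(1)

    # Frequency = number of studies mentioning the factor;
    # Median rank = median of the factor's ranks over those studies.
    aggregated = {f: (len(r), median(r)) for f, r in ranks.items()}

As noted above, this loses the distance between factor weights; only the ordering
within each study survives the aggregation.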

3. Related Terminology

3.1 Context Versus Influence Factors


In practice, it is difficult to build a reliable productivity model that would be
applicable across a variety of environments. Therefore, usually only a limited
number of factors influencing productivity are considered within a model; the rest
is kept constant and described as the so-called context for which the model is built
and in which it is applicable. Building, for instance, a model for business application
and embedded real-time software would require covering a large variety of factors
that play a significant role in both domains. Alternatively, one may build simpler
models for each domain separately. In that case, the factor ‘‘application domain’’
would be constant for each model. We would refer to factors that describe a
modeling context as context factors. On the other hand, factors that are included
in the model to explain productivity variance within a certain context will be called
influence factors. Moreover, we say that context factors determine influence
factors; that is, depending on the project context, different factors may have a
different impact on productivity and may interact with each other differently.
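
As a small illustration of the distinction (hypothetical data and attribute names),
treating ''application domain'' as a context factor amounts to partitioning the
project base and modeling each partition separately, so that only the remaining
influence factors must explain productivity variance:

    # Hypothetical project records; "domain" acts as the context factor.
    projects = [
        {"domain": "embedded", "size_kloc": 12.0, "effort_pm": 40.0},
        {"domain": "embedded", "size_kloc": 30.0, "effort_pm": 95.0},
        {"domain": "MIS", "size_kloc": 12.0, "effort_pm": 18.0},
        {"domain": "MIS", "size_kloc": 25.0, "effort_pm": 35.0},
    ]

    # One simple productivity baseline (kLOC per person-month) per context.
    by_domain = {}
    for p in projects:
        by_domain.setdefault(p["domain"], []).append(p["size_kloc"] / p["effort_pm"])
    baselines = {d: sum(v) / len(v) for d, v in by_domain.items()}
    # Within each partition, influence factors (e.g., team experience,
    # complexity) would then explain the variance around the baseline.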

3.2 Classification of Influence Factors


To gain better visibility, we further categorized (after several other authors [22,
30, 31]) the identified influence factors into four groups: product, personnel, project,
and process factors.
Product factors cover the characteristics of software products being developed
throughout all development phases. These factors refer to such products as soft-
ware code, requirements, documentation, etc. and their characteristics, such as
complexity, size, volatility, etc.
Personnel factors reflect the characteristics of personnel involved in the software
development project. These factors usually consider the experience and capabilities
of such project stakeholders as development team members (e.g., analysts,
designers, programmers, project managers) as well as software users, customers,
maintainers, subcontractors, etc.


Project factors regard various qualities of project management and organization,
development constraints, working conditions, or staff turnover.
Process factors are concerned with the characteristics of software processes as
well as methods, tools, and technologies applied during a software development
project. They include, for instance, the effectiveness of quality assurance, testing
quality, quality of analysis and documentation methods, tool quality and usage,
quality of process management, or the extent of customer participation.

4. Overview of Factors Presented in Literature

In total, 246 different factors were identified in the analyzed literature. This
section presents the most commonly used productivity factors we found in the
reviewed literature. First, we present factors that are commonly selected across all
development contexts. Then, we present the most popular factors selected in the
context of a specific model, development type, and domain. Additionally, factors
specific for software reuse are analyzed.
Regarding the studies where project data repositories were presented, we ana-
lyzed publicly available data sets that contain more than only size and effort data
(see, e.g., [32] for an overview).

4.1 Crosscontext Factors


Table I presents the top four productivity factors and the three context factors that
were found to be the most significant ones across all reviewed studies.
The most commonly identified factors represent complex phenomena, which
might be decomposed into basic subfactors. For each complex factor, we selected (at
most) the three most commonly identified subfactors. Team capability and experience,
for instance, is considered to be the most influential productivity factor. Specifically,
programming language experience, application experience and familiarity, and
project manager experience and skills are those team capabilities that were most
often selected in the reviewed studies.

Table I
TOP CROSSCONTEXT PRODUCTIVITY FACTORS

Influence factors                             Frequency (no. of references)   Median rank
Team capabilities and experience              64                              1
  Programming language experience             16                              1
  Application experience and familiarity      16                              1
  Project manager experience and skills       15                              1
Software complexity                           42                              1
  Database size and complexity                9                               1
  Architecture complexity                     9                               1
  Complexity of interface to other systems    8                               1.5
Project constraints                           41                              1
  Schedule pressure                           43                              1
  Decentralized/multisite development         9                               1
Tool usage and quality/effectiveness          41                              1
  CASE tools                                  12                              1
  Testing tools                               5                               1
Context factors
  Programming language                        29                              1
  Domain                                      14                              1
  Development type                            11                              1

4.2 Context-Specific Factors


In Tables II–IV, each cell contains information in the form X/Y, where X is the
number of studies in which a certain factor was selected (Frequency) and Y is the
median rank given to the factor over those studies (Median rank). Moreover, the
most significant factors for a certain context are marked with bold font, and cells
containing factors that were classified as the most significant ones in two or more
contexts are gray-filled. An empty cell means that a factor did not appear in a certain
context at all. For each considered context, the number of relevant references is
given in the table header (in brackets).

4.2.1 Model-Specific Factors


Table II presents the most common factors selected in the context of cost model-
ing (CM), productivity measurement (PM), project data repositories (DB), and
studies on software process improvement (SPI).

4.2.2 Development-Type-Specific Factors


Table III presents the most common productivity factors selected in the context of
new development (Nd) and maintenance/enhancement projects (Enh/Mtc). We con-
sidered maintenance and enhancement together because the considered references
did not make a clear difference between those two types of development.

Table II
TOP MODEL-SPECIFIC PRODUCTIVITY FACTORS

Influence factors                             PM (33)   CM (82)   DB (14)   SPI (9)
Team capabilities and experience              14/1      39/1      6/1       3/1
  Overall personnel experience                5/1       4/1       –         1/1
  Project manager experience and skills       3/2       6/7       3/1       3/1
  Task-specific expertise                     3/6       2/9.5     –         –
  Application experience and familiarity      2/1       12/1      1/1       –
  Programming language experience             3/9       11/1      2/1       –
  Tool experience                             1/1       4/1       2/1       –
  Teamwork capabilities                       1/1       2/1       2/1       –
  Training level                              –         4/1       1/1       2/1
Tool usage and quality/effectiveness          12/2      22/1      5/1       2/1
  CASE tools                                  4/5.5     3/1       4/1       1/1
  Testing tools                               1/1       4/3.5     –         –
Team size                                     8/1       14/1      5/1       1/1
Reuse                                         8/1       9/1       5/1       2/2.5
  Reuse level                                 5/1       7/1       3/1       2/2.5
  Quality of reused assets                    2/1       –         –         –
Software complexity                           7/2.3     25/1      8/1       1/1
  Architecture complexity                     2/6.5     6/1       1/1       –
  Complexity of interface to other systems    1/1       6/2       1/1       –
  Database size and complexity                1/1       4/10      4/1       –
  Code complexity                             1/1       2/1       4/1       –
Team organization                             6/2       21/1      3/1       3/1
  Team cohesion/communication                 3/3       14/1      –         1/3
  Staff turnover                              –         11/1      2/1       2/1.5
  Team structure                              4/1       4/3.5     1/1       2/1
Project constraints                           9/3       21/3      6/1       3/1
  Schedule pressure                           9/1       16/5      5/1       2/1
  Decentralized development                   1/16      5/7       2/1       1/1
Process maturity and stability                2/1       7/1       –         4/1
Methods usage                                 7/6       16/1      5/1       3/1
  Reviews and inspections                     2/11      4/1       2/1       2/1
  Testing                                     1/5       2/1       1/1       2/1
  Requirements management                     –         2/1       1/1       2/1
Context factors
  Programming language                        8/1       12/1      8/1       1/1
  Life cycle model                            2/1       3/1       3/1       1/1
  Domain                                      2/4.5     7/1       4/1       1/1
  Development type                            –         7/1       3/1       1/1
  Target platform                             1/1       3/1       3/1       –

Table III
TOP DEVELOPMENT-TYPE-SPECIFIC PRODUCTIVITY FACTORS

Influence factors                                  Nd (19)   Enh/Mtc (43)
Team capabilities and experience                   8/1       26/1
  Task-specific experience                         3/11      1/5
  Application experience and familiarity           1/1       11/1
  Programmer capability                            1/1       5/3
  Programming language experience                  –         10/1.5
  Analyst capabilities                             –         6/3
Project constraints                                7/6       17/1
  Schedule pressure                                7/5       13/2
  Distributed/multisite development                3/8       4/1
Reuse                                              5/1       11/4
  Reuse level                                      4/1       7/4
  Quality of reusable assets                       1/1       1/1
Management quality and style                       5/1       5/8
Team motivation and commitment                     5/1       4/4.5
Product complexity                                 5/7       17/2
  Interface complexity to hardware and software    –         6/2
  Architecture complexity                          1/1       6/2
  Database size and complexity                     –         4/10
  Logical problem complexity                       –         3/1
Required software quality                          4/2       15/4
  Required reliability                             4/2       13/4
  Required maintainability                         –         3/4
Tool usage                                         3/1       17/2
  Testing tools                                    –         3/5
  Tool quality and effectiveness                   1/6       2/1.5
Method usage                                       2/4.3     13/4
  Review and inspections                           1/8       4/1
  Testing                                          –         3/1
Context factors
  Programming language                             4/1       10/1
  Target platform                                  1/1       3/1
  Domain                                           –         8/3
  Development type                                 –         6/1

4.2.3 Domain-Specific Factors


Table IV presents the most common productivity factors selected in the context of
specific software domains: embedded systems (Emb), management and information
systems (MIS), and Web applications (Web).

Table IV
TOP DOMAIN-SPECIFIC PRODUCTIVITY FACTORS

Influence factors                             Emb (27)   MIS (50)   Web (4)
Team capabilities and experience              14/1       20/1       3/3
  Programming language experience             5/1        4/3        1/3
  Training level                              3/1        1/1        –
  Application experience and familiarity      1/1        9/1        –
  Programmer capability                       1/2        6/3.5      –
  Design experience                           –          1/3        1/1
  IT technology experience                    1/1        1/1        1/1
Team size                                     8/1        9/1        1/3
Reuse                                         3/1        9/1        1/5
  Reuse level                                 2/3.5      6/1        1/5
Tools usage and quality                       12/1       9/1        3/2
  CASE tools                                  3/1        4/1        –
Methods usage                                 12/1       8/1.5      –
  Reviews and inspections                     4/1        1/1        –
  Testing methods                             2/3        1/1        1/2
  Modern programming practices                2/6        2/5        –
Project constraints                           9/1        16/2.2     –
  Schedule pressure                           7/1        12/3       –
  Distributed/multisite development           1/1        5/7        –
Requirements quality                          8/1        9/2        1/2
  Requirements quality                        4/1        –          –
  Requirements volatility                     4/1        8/2.5      1/2
Required software quality                     8/1        12/3.5     2/4.5
  Required software reliability               7/1        10/4       –
  Required software maintainability           –          4/3.5      –
Software complexity                           3/1        17/1       3/3
  Database size and complexity                –          7/1        –
  Source code complexity                      1/1        4/1        –
  Complexity of interface to other systems    3/1        2/2.5      1/3
  Architecture complexity                     3/1        3/1        1/1
Context factors
  Programming language                        7/1        10/1       1/1
  Domain                                      4/1        4/2        –
  Target platform                             2/1        4/1        –
  Life cycle model                            2/1        3/1        –
  Development type                            1/1        4/1        1/1
  Business sector                             1/14       4/1        –

4.3 Reuse-Specific Factors


From among the reviewed references, 15 focused specifically on reuse-oriented
software development, that is, development with and for reuse (including COTS—
commercial-off-the-shelf components). Table V summarizes the most common
factors influencing development productivity in that context.

Table V
TOP REUSE-SPECIFIC PRODUCTIVITY FACTORS

Factor                                        Frequency (no. of references)   Median weight
Quality of reusable assets                    12                              1
  Maturity                                    7                               1
  Volatility                                  5                               1
  Reliability                                 3                               1
Asset complexity                              9                               1
  Interface complexity                        5                               1
  Architecture complexity                     4                               1
Team capabilities and experience              6                               1
  Integrator experience and skills            3                               1
  Integrator experience with the asset        3                               1
  Integrator experience with COTS process     3                               1
Product support (from supplier)               6                               1
  Supplier support                            6                               1
  Provided training                           5                               1
Required software quality                     4                               1
  Required reliability                        3                               1
  Required performance                        3                               1
Context factors
  Life cycle model                            2                               1
  Programming language                        1                               1
  Domain                                      1                               1

4.4 Summary of Literature Review


The review of the software engineering publications presented here shows that
software development productivity still depends on the capabilities of people and
tools involved (Fig. 4).
The productivity factors selected by researchers and software practitioners con-
firm requirements specification, coding, and testing as the traditionally acknowl-
edged essential phases for the success of the whole development process. Moreover,
the high importance of project constraints and project manager’s skills suggests
project management as another key factor for project success. Finally, as might have
FACTORS INFLUENCING SOFTWARE DEVELOPMENT PRODUCTIVITY 203

Team capability and experience 45.1%

Software product complexity 29.6%


Productivity factor

Tool usage & quality/effectiveness 28.9%

Project constraints 28.9%

Programming language 20.4%

Domain 9.9%

Development type 7.7%

0.0% 10.0% 20.0% 30.0% 40.0% 50.0% 60.0%


Percentage of usage

FIG. 4. Literature review: most common productivity factors.

been expected, the internal (architecture, data) and external (interfaces) complexity
of software is a significant determinant of development productivity.
Yet, software complexity and programming language are clearly factors preferred in
the context of project data repositories. This most probably reflects a common intuition
of repository designers that those factors have a significant impact on development
productivity and numerous software qualities (e.g., reliability, maintainability).
As already mentioned in the introduction, the importance of a certain productivity
factor varies depending on the project context. The skills of software programmers
and analysts, for instance, seem to play a more important role in enhancement/
maintenance projects. Similarly, tool/method usage seems to be less important in
new development projects. On the other hand, software development that is not a
continuation of a previous product/release seems to significantly rely on the quality
of project management and team motivation.
The results presented in the literature support the claim made by Fenton and
Pfleeger [30] who suggest that software complexity might have a positive impact on
productivity in the context of new development (and thus is not perceived as a factor
worth considering) and a negative impact in case of maintenance and enhancement.
What might be quite surprising is that the domain is not considered in new
development projects at all (Fig. 5).
Regarding software domain-specific factors, there are several significant differ-
ences between factors playing an important role in the embedded and MIS domains.
The productivity of embedded software development depends more on tools and
methods used, whereas that of MIS depends more on the product complexity (Fig. 6).
Finally, reuse is not as significant a productivity factor as commonly believed.
Less than 17% of publications mention reuse as having a significant influence on
development productivity. This should not be a surprise, however, if we consider the
complex nature of the reuse process and the numerous factors determining its
success (Fig. 7).

Productivity factor               New development   Enhancement and maintenance
Team capability and experience    42.1%             60.5%
Project constraints               36.8%             39.5%
Product complexity                26.3%             39.5%
Required software quality         21.1%             34.9%
Tool usage                        15.8%             39.5%
Reuse                             26.3%             25.6%
Programming language              21.1%             23.3%
Method usage                      10.5%             30.2%
Management quality and style      26.3%             11.6%
Team motivation and commitment    26.3%             9.3%
Domain                            –                 18.6%

FIG. 5. Literature review: development-specific factors (percentage of usage).

Productivity factor               Embedded   Management and information systems
Team capability and experience    51.9%      40.0%
Project constraints               33.3%      32.0%
Tools usage and quality           44.4%      18.0%
Methods usage                     44.4%      16.0%
Required software quality         29.6%      24.0%
Requirements characteristics      29.6%      18.0%
Team size                         29.6%      18.0%
Programming language              25.9%      20.0%
Software complexity               11.1%      34.0%
Reuse                             11.1%      18.0%

FIG. 6. Literature review: domain-specific factors (percentage of usage).



FIG. 7. Literature review: reuse success factors (percentage of usage): quality of
reusable assets 80.0%, asset complexity 60.0%, product support (from supplier)
40.0%, team capability and experience 40.0%, required software quality 26.7%, life
cycle model 13.3%, domain 6.7%, programming language 6.7%.

According to the reviewed studies, the complexity and quality of reusable assets
are key success factors for software reuse. Furthermore, the support of the asset’s
supplier and the capabilities of the development team (for integrating the asset into the
developed product) significantly influence the potential benefits/losses of reuse (see
Section 6.2.4 for a comprehensive overview of key reuse success factors).

5. Overview of Factors Indicated by Industrial Experiences

This section summarizes the most commonly used productivity factors indicated
by our industrial experiences gained in recent years at Fraunhofer IESE. The
studies summarized here are subject to a nondisclosure agreement, thus only
aggregated results without any details on specific companies are presented.

5.1 Demographics
The IESE studies considered in this section include:
- Thirteen industrial projects on cost and productivity modeling performed for
international software organizations in Europe (mostly), Japan, India, and
Australia.
- Four international workshops on software cost and productivity modeling,
which took place in the years 2005–2006 in Germany and Japan. The workshop
participants came from both academic (universities, research institutes) and
industrial (software and system) contexts.
- Eight worldwide surveys on cost and productivity modeling. This includes one
crosscompany survey where we talked to a single representative of various
software organizations, and seven surveys where we talked to a number of
software engineers within a single company.
The studies considered here covered a variety of software domains (Fig. 8) and
development types (Fig. 9).
In total, we identified 167 different factors, with 31 of them representing complex
phenomena and 136 basic concepts (subfactors).

FIG. 8. Industrial experiences: application domains considered (percentage of
studies): embedded and real-time 36.0%, multiple domains 28.0%, management
information systems 28.0%, Web applications 8.0%.

FIG. 9. Industrial experiences: development types considered (percentage of
studies): multiple 48.0%, outsourcing 24.0%, new development 12.0%, reuse 8.0%,
enhancement and maintenance 8.0%.

5.2 Cross-Context Factors


Table VI and Figure 10 present the productivity factors that were selected most
often across all IESE studies (commercial projects, surveys, and workshops).
Similarly to published studies, development team capabilities, project constraints,
and method usage were considered as the most significant factors influencing
software development productivity. Unlike other practitioners, software engineers
involved in IESE undertakings selected requirements novelty and stability, customer
involvement, and required software reliability and maintainability as high-
importance productivity factors. The high importance of requirements volatility and
customer involvement might be connected to the fact that most of the IESE studies
regarded outsourcing projects where stable requirements and communication
between software supplier and contractor are essential for project success.
The fact that programming language does not occur at all as a context factor
results from the homogeneous context of most IESE studies (most of them regard a
single business unit or group where a single programming language is in use).

Table VI
INDUSTRIAL EXPERIENCES: CROSS-CONTEXT PRODUCTIVITY FACTORS

Factor                                        Frequency (no. of references)   Median rank
Requirements quality                          23                              1
  Requirements volatility                     20                              1
  Requirements novelty                        11                              1
Team capabilities and experience              23                              3
  Project manager experience and skills       10                              4.5
  Programming language experience             10                              17
  Teamwork and communication skills           9                               3
  Domain experience and knowledge             9                               5
Project constraints                           20                              4.6
  Schedule pressure                           13                              2
  Distributed/multisite development           11                              6
Customer involvement                          18                              2
Method usage and quality                      18                              4.3
  Requirements management                     10                              2.5
  Reviews and inspections                     7                               5
Required software quality                     18                              8.5
  Required software reliability               16                              4
  Required software maintainability           10                              15
Context factors
  Life cycle model                            8                               3
  Development type                            3                               5
  Domain                                      2                               1.5

FIG. 10. Industrial experiences: cross-context factors (percentage of usage): team
capability and experience 92%, requirements quality 92%, project constraints 80%,
method usage 72%, customer/user participation 72%, required product quality 72%,
life cycle type 32%, development type 12%, domain 8%.

5.3 Context-Specific Factors


In this section, we look at the variances in selected productivity factors across
different contexts they were selected in.

Each cell of Tables VII and IX contains information in the form X/Y, where X is
the number of studies where a certain factor was selected (Frequency) and Y means
the median rank given to the factor over those studies (Median rank). Moreover, the
most significant factors for a certain context are marked with bold font and cells
containing factors that were classified as the most significant ones in two or more
contexts are gray-filled. An empty cell means that a factor did not appear in a certain
context at all.
For each context considered, the number of relevant references is given in the
table header (in brackets).

5.3.1 Study-Specific Factors


In Table VII, the most commonly selected factors across various study types are
presented.
Notice that practically no context factors were considered in the context of
commercial projects, which, in fact, took place in a homogeneous context where
all usually significant context factors were constant (thus had no factual impact on
productivity).
Moreover, product qualities and constraints such as reliability, maintainability,
and performance (execution time and storage) are also mentioned only in the context
of specific commercial projects.

Table VII
INDUSTRIAL EXPERIENCES: STUDY-SPECIFIC PRODUCTIVITY FACTORS
(C&P = commercial projects; Srv = surveys; Wsk = workshops)

Factor                                        C&P (13)   Srv (8)   Wsk (4)
Requirements quality                          13/3       8/1       2/1
  Requirements volatility                     12/1       6/1       2/1
  Requirements novelty                        7/9        3/1       1/1
Required software quality                     13/13.5    4/2       1/1
  Required software reliability               12/5       3/3       1/1
  Required software maintainability           9/16       1/1       –
  Required software usability                 7/19       –         –
Project constraints                           12/7.5     5/3       3/1
  Schedule pressure                           6/2.5      5/3       2/1
  Distributed/multisite development           5/14       4/3.5     2/1
  Budget constraints                          –          –         2/1
Team capability and experience                12/7.3     8/1       3/1
  Programming language experience             8/20.5     1/15      1/1
  Project manager experience and skills       7/8        2/4.5     1/1
  Platform (virtual machine) experience       5/14       –         –
  Teamwork and communication skills           3/4        6/2       –
  Domain experience and knowledge             4/18.5     4/3       1/1
  Training level                              3/19       1/4       2/1
Software product constraints                  11/12      2/8       1/1
  Execution time constraints                  11/11      2/7       –
  Storage constraints                         6/18.5     1/1       –
Customer/user involvement                     10/2       7/2       1/1
Method usage                                  10/8.5     7/3       1/1
  Reviews and inspections                     3/9        4/1.5     –
  Requirements management                     6/1.5      4/3       –
Tool usage and quality                        9/10       6/4.7     1/1
  Testing tools                               1/35       2/3       –
  Project management tools                    1/42       2/10.5    –
Reuse                                         10/14      4/2       3/1
  Reuse level                                 6/13.5     3/3       2/1
  Required software reusability               6/12       1/18      2/3
Context factors
  Life cycle model                            2/18       3/5       3/1
  Domain                                      –          2/1.5     –
  Development type                            –          2/6       1/1

5.3.2 Development-Type-Specific Factors


For most of the studies considered here, the development type was either not clear
(not explicitly stated) or productivity factors were given as valid for multiple
development types. Since too few data are available to analyze productivity factors
for most of the development types considered, we only take a closer look at factors
that are characteristic for organizations outsourcing their software development
(Table VIII).
It might be no surprise that in outsourcing projects, requirements quality and
customer involvement are key factors driving development productivity. Stable
requirements and communication with the software customer (especially during
early development phases) are commonly believed to have crucial impact on the
success of outsourcing projects.

Table VIII
INDUSTRIAL EXPERIENCES: PRODUCTIVITY FACTORS IN THE OUTSOURCING CONTEXT

Factor                                         Frequency (no. of references)   Median weight
Requirements quality                           6                               6.2
  Requirements volatility                      6                               1.5
  Requirements novelty                         5                               11
Customer/user involvement                      6                               4
Project constraints                            6                               8.5
  Distributed/multisite development            3                               14
  Concurrent hardware development              3                               12
  Legal constraints                            3                               30
Software complexity                            6                               12.5
  Database size and complexity                 6                               22
  Complexity of device-dependent operations    2                               8.5
  Complexity of control operations             2                               9
Context factors
  Life cycle model                             1                               18

5.3.3 Domain-Specific Factors


Unfortunately, most of the IESE studies considered either regarded multiple
domains or the domain was not clearly (explicitly) specified. Therefore, we analyzed
only factors specific for the embedded systems (Emb) and management and informa-
tion systems (MIS) domains, for which a reasonable number of inputs were available.
Similar to what we found in the related literature, the usage of tools (especially
testing) is characteristic of the embedded domain. Regarding method usage,
different methods play an important role in the embedded and MIS domains. Use of
early quality assurance methods (reviews and inspections) is regarded as having a
significant impact on productivity in the MIS domain, whereas use of late methods,
such as testing, counts more in embedded software development.
Requirements management as well as configuration and change management
activities are significant productivity factors only in the MIS domain. Software
practitioners do not relate them to productivity variance in the embedded domain
because they are usually consistently applied across development projects. According
to our observation, however, the effectiveness of those activities varies widely and
therefore should be considered as a significant influence factor (Table IX).

Table IX
INDUSTRIAL EXPERIENCES: DOMAIN-SPECIFIC PRODUCTIVITY FACTORS

Factor                                           Emb (9)   MIS (7)
Requirements quality                             9/5       7/1
  Requirements volatility                        9/1       7/1
  Requirements novelty                           6/10      –
  Requirements complexity                        –         2/1
Software complexity                              9/5       5/9
  Architecture complexity                        3/7       –
  Database size and complexity                   3/22      2/16.5
Team capabilities and experience                 9/8.4     7/3
  Programming language experience                7/19      –
  Project manager experience and skills          5/9       3/3
  Teamwork and communication skills              2/1       5/3
Customer/user involvement                        8/2       7/2
  Involvement in design reviews                  2/19.5    –
Tool usage and quality                           8/9       3/15
  Testing tools                                  2/18      –
Method usage                                     8/9.6     7/3
  Requirements management                        3/3       7/2
  Reviews and inspections                        2/6.5     4/7
  Testing methods                                4/5.5     2/8
  Documentation methods                          5/20      –
  Configuration management and change control    2/22.5    3/4
Project constraints                              7/9       6/3.7
  Schedule pressure                              3/9       5/2
  Distributed/multisite development              3/14      4/8.5
Context factors
  Life cycle model                               3/14      1/9
  Domain                                         1/1       1/2
  Application type                               –         1/2

5.4 Summary of Industrial Experiences


The productivity factors observed in the most recent IESE studies do not differ
much from those indicated in the reviewed literature. Capabilities of the development
team, project constraints, and methods usage are the main impact factors. The
outsourcing character of the projects considered in the majority of the IESE studies
gave priority to some additional factors such as requirements quality (volatility,
complexity, and novelty), required product quality (reliability, maintainability, and
performance), and customer involvement.
There were slight differences between the factors considered across various
domains. As in the reviewed literature, tool and method usage were regarded as
more significant in the embedded domain.
The IESE studies considered referred to a rather narrow context (organization,
business unit, group, etc.), where such characteristics as domain and programming
language were quite homogeneous. That is most probably why common context
factors from the literature (see Section 4.1) do not actually count in the context of
IESE studies. Constant factors do not introduce any variance across projects and
thus do not explain productivity variance, which in practice makes their consider-
ation useless (Fig. 10).
One quite surprising observation is that the implementation of early quality
assurance techniques such as reviews and inspections does not seem to have a
deciding impact on development productivity. In only 6 out of 25 studies, this factor
was stated explicitly as having a significant impact on productivity. At the same
time, our practical experiences indicate that those activities are usually ineffective or
are not applied at all. This contradicts the common belief that early quality assurance
activities contribute significantly to the improvement of development productivity
(through decreased testing and rework effort).
Another interesting observation is that while requirements volatility is considered to
have a significant impact on development productivity, requirements management
212 A. TRENDOWICZ AND J. MÜNCH

Table IX
INDUSTRIAL EXPERIENCES: DOMAIN-SPECIFIC PRODUCTIVITY FACTORS

Factor                                          Em (9)    MIS (7)
Requirements quality                            9/5       7/1
Requirements volatility                         9/1       7/1
Requirements novelty                            6/10      –
Requirements complexity                         –         2/1
Software complexity                             9/5       5/9
Architecture complexity                         3/7       –
Database size and complexity                    3/22      2/16.5
Team capabilities and experience                9/8.4     7/3
Programming language experience                 7/19      –
Project manager experience and skills           5/9       3/3
Teamwork and communication skills               2/1       5/3
Customer/user involvement                       8/2       7/2
Involvement in design reviews                   2/19.5    –
Tool usage and quality                          8/9       3/15
Testing tools                                   2/18      –
Method usage                                    8/9.6     7/3
Requirements management                         3/3       7/2
Reviews and inspections                         2/6.5     4/7
Testing methods                                 4/5.5     2/8
Documentation methods                           5/20      –
Configuration management and change control     2/22.5    3/4
Project constraints                             7/9       6/3.7
Schedule pressure                               3/9       5/2
Distributed/multisite development               3/14      4/8.5
Context factors:
Life cycle model                                3/14      1/9
Domain                                          1/1       1/2
Application type                                –         1/2

Finally, an interesting and, at the same time, surprising observation we made is that very mature organizations (e.g., two organizations at CMMI L-5 and one ISO-9001-certified) consider factors such as clarity of development team roles and responsibilities, which are actually typical concerns of rather immature organizations, to be very significant.

6. Detailed Comments on Selected Productivity Factors
This section presents a brief overview of publications presenting in-depth inves-
tigations on rationales underlying the influence of certain factors on software
development productivity as well as dependencies between various factors. The
summary presented here includes empirical research studies as well as industrial
experiences.

6.1 Comments on Selected Context Factors


This section presents an overview of literature regarding the experiences with
context factors, that is, factors that are used to limit the context of productivity
modeling and analysis (see Section 3.1).

6.1.1 Programming Language


The analysis of the factors presented in the reviewed literature showed that
programming language is the most common context factor (Table I). The impact
of a programming language on development productivity is considered to be so large
that some authors present average development productivities for certain languages,
independent of other potential factors [33]. Moreover, a single organization or
business unit usually uses a limited number of distinct languages. Therefore, con-
sidering this factor when analyzing productivity within a single organization usually
makes no sense. This conclusion is confirmed by several data analysis studies (e.g., [34, 35, 126]) in which programming language had a significant impact on productivity when analyzed on crossorganization data, whereas no influence was observed on organization-specific data.

6.1.2 Application Domain


According to the literature review presented (Table I), the application domain is
considered as the second most significant context factor.
The studies presented in the literature provide evidence not only that factors
influencing productivity vary across different application domains, but also that,
in principle, the magnitude of productivity alone varies across different domains.
Putnam and Myers [36, 119], for example, analyzed the QSM data repository [37]
and found out that there is a common set of factors influencing software process

productivity. They also found out that the analyzed projects tend to cluster around a certain discrete value of productivity (the standard deviation of process productivity within clusters ranged from 1 to 4), and that except for a limited number of projects (outliers), each cluster actually represents a certain application domain.
Therefore, although an exact productivity measurement would require considering
other influence factors within a certain context (cluster), general conclusions about
productivity within a certain context can already be drawn. The authors provided
evidence that projects in a real-time domain tend to be less productive, in general,
than in the business systems domain. Jones attributes productivity variance across domains to the different levels of project documentation generally required in each domain. He observed [31] that the level of documentation varies widely across software domains. The productivity of military projects, for instance, suffers due to
the extremely high level of documentation required (at least three times more
documentation per function point is required than in MIS and Software Systems
domains) [31].

6.1.3 Development Type


Development type is another important factor that determines how other project
characteristics influence development productivity.
Fenton and Pfleeger [30] suggest, for instance, that software complexity might
have a positive impact on productivity in the context of new development and a
negative one in the case of maintenance and enhancement. A development team is able, for example, to produce more output (higher productivity) even though the output is badly structured (high complexity). Yet, while producing badly structured output may be quite productive, maintaining it is likely to be difficult and thus unproductive. This can be observed well in the context of software reuse, where the required quality and documentation of reusable
artifacts have a significant impact on the productivity of their development (negative
impact) and reuse (positive impact).
In the context of maintenance projects, the purpose of software change has a
significant impact on productivity. Bug fixes, for instance, are found to be more
difficult (and therefore less productive) than comparably sized additions of new
code by approximately a factor of 1.8 [38]. Bug fixes tend to be more difficult than
additions of new code even when additions are significantly larger.
Corrective maintenance (bug fixes), however, seems to be much more productive
than perfective maintenance (enhancements) [39]. This supports the common intui-
tion that error corrections usually consist of a number of small isolated changes, while enhancements include larger changes to the functionality of the system.


In both types of changes, the extent of coupling between modified software units
may amplify or moderate the impact of change type on productivity. Changing a
certain software component requires considering the impact of the change on all
coupled components. This usually includes reviewing, modifying, and retesting
related (coupled) components. Therefore, the more coupled components the more
additional work is required (lower productivity).
The time span between software release and change (the so-called ‘‘aging’’ or ‘‘decay’’ effect) is also considered a significant productivity factor in enhancement and maintenance projects. Graves [38], for instance, confirms the findings of several earlier works. He provides statistically significant evidence that delaying a change by one year costs about 20% more effort than making a similar change earlier.
Finally, maintenance productivity has traditionally been influenced by the
personal capabilities of software maintainers. It was, for example, found that one
developer may require 2.75 times as much effort as another developer to perform
comparable changes [38]. Yet, it is not always clear exactly which personnel char-
acteristics are determinant here (overall experience, domain experience, experience
with changed software, extent of parallel work, etc.).

6.1.4 Process Maturity


Process maturity is rarely identified directly as a factor influencing development
productivity. Yet, numerous companies use project productivity as the indirect
measure of software process maturity and/or use productivity improvement as a
synonym of process improvement. Rico [40] reports, for instance, that 22% of all
measures applied for the purpose of software process improvement are productivity
measures. Another 29% are development effort, cost, and cycle time measures,
which, in practice, are strongly related to development productivity.
A common belief that pushes software organizations toward process improvement
is that high-maturity organizations (e.g., as measured according to the CMMI model
[41]) are characterized by high-productivity processes [3, 17, 42, 43]. In fact, there
are several studies that provide quantitative evidence for that belief. Diaz and King [17] report, for instance, that moving a certain software organization from CMM level 2 to level 5 in the years 1997–2001 consistently increased project productivity by a factor of 2.8. A threefold increase in productivity within organizations approaching higher CMM levels has been confirmed by several other software companies worldwide [3]. (Putnam [42] analyzed the QSM database and showed that there seems to be a strong correlation between an organization's CMM level and its productivity index.) Harter et al. [44] investigated the relationships between process
maturity measured on the CMM scale, development cycle time, and effort for
30 software products created by a major IT company over a period of 12 years.
They found that, at the average values of process maturity and software quality, a 1% improvement in process maturity leads to a 0.32% net reduction in cycle time and a 0.17% net reduction in development effort (taking into account the positive direct effects and the negative indirect effects through quality).
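To illustrate how such elasticity figures translate into absolute savings, the following sketch (our own illustration, not from Harter et al.; it assumes the reported coefficients behave as constant elasticities over the whole range) computes the predicted reductions for larger maturity improvements:

```python
# Illustrative only: interpret the reported net elasticities as constant, so a
# relative maturity improvement scales cycle time and effort by
# (1 + improvement) ** (-elasticity).

def predicted_reduction(maturity_improvement: float, elasticity: float) -> float:
    """Net relative reduction for a given relative maturity improvement."""
    return 1.0 - (1.0 + maturity_improvement) ** (-elasticity)

CYCLE_TIME_ELASTICITY = 0.32  # net elasticity of cycle time (from the study)
EFFORT_ELASTICITY = 0.17      # net elasticity of development effort

for improvement in (0.01, 0.10, 0.50):
    ct = predicted_reduction(improvement, CYCLE_TIME_ELASTICITY)
    ef = predicted_reduction(improvement, EFFORT_ELASTICITY)
    print(f"maturity +{improvement:.0%}: cycle time -{ct:.1%}, effort -{ef:.1%}")
```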
Yet, according to other studies (e.g., [45, 46]), overall organization maturity can
probably not be considered as a significant factor influencing productivity. One may
say that high maturity entails a certain level of project characteristics (e.g., CMMI
key practices) positively influencing productivity; however, which characteristics
influence productivity, and to what extent, most probably varies between different
maturity levels. In that sense, process maturity should be considered as a context
rather than as an influence factor. Influence factors should then refer to single key
practices rather than to whole maturity levels.
Overall maturity improvement should be selected if the main objective is a long-
term one, for example, high-quality products delivered on time and within budget. In
that case, increased productivity is not the most important benefit obtained from
improved maturity. The more important effects of increased maturity are stable
processes, which may, in turn, facilitate the achievement of short-term objectives,
such as effective productivity improvement (Fig. 11). One must be aware that
although increasing the maturity of a process will not hurt productivity in the
long-term perspective, it may hurt it during the transition to higher maturity levels.
It has been commonly observed (e.g., [48]) that the introduction of procedures, new
tools, and methods is, in the short-term perspective, detrimental to productivity
(so-called learning effect). Boehm and Sullivan [47], for instance, illustrate
productivity behavior when introducing new technologies (Fig. 11).
This learning effect can be moderated by introducing a so-called delta team consisting of very skilled personnel who are able to alleviate the short-term, negative effect on productivity of implementing certain key process areas (KPAs). This is, however, nothing other than preventing the productivity decrease caused by investments in implementing certain KPAs by improving factors that have proved to have a significant positive impact on productivity (in this case, team capabilities).
Benefits from process improvement and from introducing new technologies can also
be gained faster by sharing experiences and offering appropriate training and
management support [49, 50].
An alternative way of moderating the short-term, negative impact of process
improvement on productivity would be to first implement KPAs that have a
short-term, positive impact on productivity (e.g., team capabilities, personnel continuity) in order to offset the negative impact of implementing other KPAs. Example practices (KPAs) that have proved to make the greatest contribution to organizational maturity (highest benefits) can be found in [3].

FIG. 11. Short-term effect on productivity of introducing new technology [47]. (The figure plots relative productivity and estimation error against time and growing domain understanding, across technology stages: unprecedented systems, precedented systems, component-based development, COTS, VHLL, and systems of systems.)
Yet, a certain maturity level (optionally confirmed by a respective certification) is a ‘‘side effect’’ (a consequence) of process improvement (e.g., driven by a high-productivity objective) rather than an objective for its own sake, which is the right sequence [51].
Finally, the maturity of measurement processes should be the subject of special
attention since inconsistent measurements usually lead to misleading conclusions
with respect to productivity and its influencing factors. In that sense, one may say
that mature and rigorous measurement processes have a significant impact on
productivity [52]. Boehm and Sullivan [47] claim that there is about a 15%
range of variation in effort estimates between projects and organizations due to
the counting rules for data. Niessink and van Vliet [53] observed, in turn, that the
existence of a consistently applied (repeatable) process is an important prerequisite
for a successful measurement program. If, for instance, a process exists in multiple
variants, it is important to identify those factors that differentiate various variants
and to know which variant is applied when.

6.1.5 Development Life Cycle


Only a few studies selected the development life cycle model as a significant
factor influencing software productivity. This factor is, however, usually addressed
indirectly as a context factor. Existing empirical studies and field experience (e.g., [54]) confirm, for instance, the common belief that iterative and incre-
mental software development significantly increases development productivity
(as compared to the traditional waterfall model).

6.2 Comments on Selected Influence Factors


In this section, we present an overview of comments on and experiences with
selected factors influencing software development productivity.

6.2.1 Development Team Characteristics


Team size and structure is an important team-related factor influencing develop-
ment productivity. In principle, the software engineering literature indicates signifi-
cant benefits from using small teams. According to Carmel and Bird [55], a common
justification for small teams is not that a small team is so advantageous, but that
a large team is disadvantageous. However, this does not seem to be consistent for the
whole development life cycle. Brooks [56] observes, for instance, that adding people at the end of a project drives productivity down. Blackburn et al. [57] support
this thesis claiming that the small ‘‘tiger teams’’ in later stages of software develop-
ment (after requirements specification) tend to increase productivity. Those experi-
ences support the manpower Rayleigh buildup curve proposed in [58] and adapted in
[36] where a larger team is used during early development phases and the team size
drops during later phases.
However, the impact of team size on productivity has much to do with the
communication and coordination structure [30, 59, 60]. In other words, the impact
of team size depends significantly on the degree to which team members work
together or separately, and on the communication structure [61] rather than simply
on the number of people in a team. It has been observed that the main reason why
large teams are usually so unproductive is primarily the burden of maintaining
numerous communication links between the team members working together
[55]. Thus, even large teams, where developers work mostly independently, do
not seem to have a significant negative impact on productivity compared to small
teams [61].
Even if large teams are indispensable, their negative influence on productivity can
be moderated by an effective team (communication) structure and communication

media. For instance, even in large-scale projects, minimizing the level of work
concurrency by keeping several small, parallel teams (with some level of indepen-
dence) instead of one large team working together seems to be a good strategy for
preventing a drop in productivity [62]. That is, for example, why agile methods
promoting two-person teams working independently (pair programming) may be
(however only in certain conditions) more productive than large teams working
concurrently [63]. Yet, as further reported in [63], well-defined development pro-
cesses are not without significance in agile development. A large industrial case
study proved, for example, that even pairs of experienced developers working
together need a collaborative, role-based protocol to achieve productivity [63].
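The communication-overhead argument can be made concrete: a fully connected team of n members has to maintain n(n-1)/2 pairwise communication links, so the burden grows quadratically with team size. The following minimal sketch (our own illustration) shows why splitting one large team into several small, largely independent teams reduces the total number of links:

```python
# Pairwise communication links in a fully connected team of size n: n*(n-1)/2.
# Splitting one large team into independent subteams sharply reduces the total
# number of links that have to be maintained.

def links(n: int) -> int:
    return n * (n - 1) // 2

for n in (8, 16, 32):
    # One team of n members versus four independent subteams of n/4 members.
    print(f"one team of {n:>2}: {links(n):>3} links; "
          f"4 teams of {n // 4}: {4 * links(n // 4):>3} links")
```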
Observing social factors in hyperproductive organizations, Cain et al. [64] found
the developer close to the core of the organization. Average and low productive
organizations exhibit much higher prestige values for management functions than
for those who directly add value to the product (developers). Collaborations in these
organizations almost always flow from the manager in the center to other roles in the
organization. Even though a star structure (‘‘chief surgical team’’) is mostly the
favored communication structure, it seems that its impact on productivity depends
on who plays the central role, managerial (outward control flow) or technical
(inward control flow) staff. The authors point to architect as an especially presti-
gious role in a team within highly productive organizations.
The use of proper communication media may also significantly improve team
communication and, in consequence, development productivity. Face-to-face com-
munication still seems to be the most efficient way of communication. Carey and
Kacmar [65], for instance, observed that although simple tasks can be accomplished
successfully using electronic communication media (e.g., telephone, email, etc.)
complex tasks may result in lower productivity and larger dissatisfaction with
electronic medium used. Agile software development considers one of the most
effective ways for software developers to communicate to be standing around a
whiteboard, talking, and sketching [66]. Ambler [51] goes further and proposes to
define a specific period during the day (5 h) during which team members must be
present together in the workroom. Outside that period, they may go back to their
regular offices and work individually. A more radical option would be to gather the
development team in a common workroom for the whole duration of the project.
Teasley et al. [67] conclude their empirical investigation by saying that having
development teams reside in their own large room (an arrangement called radical
collocation) positively affected system development. The collocated projects had
significantly higher productivity and shorter schedules than both the performance
of similar past projects considered in the study and industry benchmarks. Yet, face-
to-face communication does not guarantee higher productivity and may still vary
widely depending on the team communication structure (especially for larger teams).

Finally, it was observed that, besides a core team structure, close coupling to the
quality assurance staff and the customer turned out to be a pattern within highly
productive software organizations [64].
Staff turnover is another team-related project characteristic having a major impact
on development productivity. Collofello et al. [68] present an interesting study
where, based on a process simulation experiment, they investigated the impact of
various project strategies on team attrition. Taking no action appeared to be the most effective strategy when schedule pressure is high and cost containment is a priority. Replacing a team member who leaves alleviates team exhaustion (mitigates the effects of increased attrition). Even though overstaffing is an expensive option,
replacing a team member who left with several new members minimizes the
duration of the project. Thus, this strategy should be considered for projects in
which the completion date has been identified as the highest priority. It is, however,
not clear how those results (especially regarding overstaffing) relate to Brooks’ real-
life observation that putting more staff at the end of a project makes it even more
late [56].
Task assignment is the next reported human-related factor differentiating software
development productivity. Boehm [43], for instance, introduces in his COCOMO II
model the task assignment factor to reflect the observation that proper assignment
of people to corresponding tasks has a great impact on development productivity.
Hale et al. [61, 69] go further and investigate related subfactors such as intensity,
concurrency, and fragmentation. They observed that the more time is spent on the
task by the same developer (intensity), the higher the productivity. Further, the more
team members work on the same task (concurrency), the lower the productivity
(communication overhead disproportionately larger than task size). Finally, the
more fragmented the developer’s time over various tasks (fragmentation), the
lower the productivity (due to overhead to switch work context).
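These three subfactors can be operationalized on ordinary time-tracking records. The sketch below shows one possible set of measure definitions (ours, for illustration; the actual metrics of Hale et al. [61, 69] may differ in detail):

```python
# Hypothetical operationalization of intensity, concurrency, and fragmentation
# from time-tracking records; the measure definitions are ours, not Hale et al.'s.
from collections import defaultdict

# (task, developer, hours) records, e.g., exported from a time-tracking tool.
records = [
    ("T1", "alice", 12.0), ("T1", "bob", 3.0),
    ("T2", "alice", 2.0), ("T2", "carol", 6.0), ("T2", "bob", 1.0),
]

task_hours = defaultdict(lambda: defaultdict(float))
dev_hours = defaultdict(lambda: defaultdict(float))
for task, dev, hours in records:
    task_hours[task][dev] += hours
    dev_hours[dev][task] += hours

for task, per_dev in task_hours.items():
    total = sum(per_dev.values())
    intensity = max(per_dev.values()) / total   # share of the dominant developer
    concurrency = len(per_dev)                  # developers working on the task
    print(f"{task}: intensity={intensity:.2f}, concurrency={concurrency}")

for dev, per_task in dev_hours.items():
    # Fragmentation: how evenly the developer's time is spread over tasks
    # (0 = all time on one task, values near 1 = highly fragmented).
    total = sum(per_task.values())
    shares = [h / total for h in per_task.values()]
    fragmentation = 1.0 - sum(s * s for s in shares)
    print(f"{dev}: fragmentation={fragmentation:.2f}")
```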
Finally, team capabilities and skills is traditionally acknowledged as the most
significant personnel characteristic that influences development productivity. Team
capabilities also have an indirect impact on productivity by influencing other factors
with a potential impact on development productivity. The impact of the team
structure on productivity, for instance, usually assumes highly skilled technical
people. Moreover, the full benefits of tool support and technological improvements
cannot be achieved without capable personnel [70]. On the other hand, the impact
of team capabilities on productivity may be amplified or moderated by other
factors. An appropriate management and communication structure can, for instance,
moderate the negative impact of a low-experienced team on productivity; it can,
however, never compensate for the lack of experience [57].
Researchers and practitioners agree about the high importance of team capabil-
ities regarding development productivity. Yet, it is not clear exactly which skills
have an impact on productivity and which do not. The evidence presented in Sections 4 and 5 suggests that programming language skills, domain experience, and project
management skills are essential characteristics of a highly productive development
team. Krishnan [59], however, claims that whereas a higher level of domain experience of the software team is associated with a reduction in the number of field defects in the product (higher quality), there is no significant direct association between either the language or the domain experience of the software team and the dollar costs incurred in developing the product. This does not, however, necessarily imply no impact on productivity. Software teams with low experience (<2 years)
may cost more due to lower productivity, whereas highly experienced teams
(>10 years) may cost more due to higher salaries. Here we can again see that
when observing development productivity and its influence factors, a careful defi-
nition of respective measures is crucial for the validity of the conclusions drawn.
Although not reflected by the results presented in this chapter (Sections 4 and 5),
significant team capabilities do not only cover technical skills. There are several
nontechnical skills that are found to be essential with respect to development
productivity. Examples are ability and willingness to learn/teach quickly, multiarea
(general) specialization, knowledge of fundamentals, and flexibility [71]. It has been
observed (especially in a packaged software development environment) that it is the
innovation, creativity, and independence of the developers that determine develop-
ment productivity and project success [69, 72]. Yet, as warned in [73], an emphasis
on creativity and independence creates an atmosphere that contributes to a general
reluctance to implement accurate effort and time reporting. This, in turn, limits the
reliability of the collected productivity data and may lead to wrong conclusions
regarding productivity and its impact factors.

6.2.2 Schedule/Effort Distribution


The right schedule and work distribution are the next significant parameters
influencing development productivity. Looking at the published investigations,
one may get the impression that the development schedule should be neither too
long nor too short. Thus, both schedule over- and underestimation may have a
negative impact on productivity.
Several authors have observed the negative impact of schedule compression on
productivity (so-called deadline effect) [43, 74]. A schedule compression of 25%
(which is considered as very low compression) may, for instance, lead to a 43%
increase in development costs [75]. Schedule compression is recognized as a key
element by practically all of the most popular commercial cost estimation tools (e.g.,
PRICE-S [76], SLIM [36, 119], SEER-SEM [77], and COCOMO I/II [14, 43]),
which implement various productivity penalties related to it [75].
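As an illustration of how such penalties are typically modeled, the sketch below applies a COCOMO-II-style SCED (required development schedule) effort multiplier; the multiplier values are quoted from COCOMO II from memory and should be treated as illustrative rather than normative:

```python
# Illustrative COCOMO-II-style schedule-compression penalty: the SCED effort
# multiplier as a function of the schedule relative to the nominal
# (unconstrained) schedule. Values quoted from memory; treat as illustrative.
SCED_MULTIPLIER = {
    0.75: 1.43,  # very low: schedule compressed to 75% of nominal
    0.85: 1.14,  # low: 85% of nominal
    1.00: 1.00,  # nominal
    1.30: 1.00,  # high: stretch-out adds no effort penalty in COCOMO II
    1.60: 1.00,  # very high
}

nominal_effort_pm = 100.0  # person-months, before the schedule constraint

for ratio, multiplier in sorted(SCED_MULTIPLIER.items()):
    effort = nominal_effort_pm * multiplier
    print(f"schedule at {ratio:.0%} of nominal -> effort {effort:.0f} PM "
          f"({multiplier - 1:+.0%})")
```

Note that compressing the schedule to 75% of nominal yields the +43% effort penalty cited above, which matches the figure reported in [75].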

Others show that schedule expansion also seems to have a negative impact on
development productivity [26, 57, 75]. One of the possible explanations of this
phenomenon is the so-called Parkinson's Law, which says that ‘‘the cost of the project will expand to consume all available resources’’ [78].
Yet, the right total schedule is only one part of success. The other part is its
right distribution across single development phases. An investigation on productiv-
ity shown in [57] concludes that some parts of the process simply need to be
‘‘unproductive,’’ that is, should take more time. This empirical insight confirms
the common software engineering wisdom that more effort (and time) spent on early
development stages bears fruit, for example, through less rework in later stages and
finally increases overall project productivity. Apparently, the initial higher analysis
costs of highly productive projects are more than compensated for by the overall
shorter time spent on the total project [23, 79]. If the analysis phase, for example, is
not done properly in the first place, it takes longer for programmers to add, modify, or delete code. Norden's Rayleigh curve [58] may be taken here as a reference
shape of the most effective effort and time distribution over the software develop-
ment life cycle. Yet, the simple rule of ‘‘the more the better’’ does not apply here and
after exceeding a certain limit of time spent on requirements specification, the
overall development productivity (and product quality) may drop [16]. The results
of a case study at Ericsson [80] determined, for example, that the implementation
phase had the largest improvement potential in the two studied projects, since it caused a large fault slip-through to later phases, accounting for 85 and 86% of the total improvement potential of the respective projects.

6.2.3 Use of Tools and Methods


The effect of the usage of certain tools and technologies on software develop-
ment productivity is widely acknowledged (see Sections 4 and 5), but at the same
time, paradoxically not well understood [81]. The opinions regarding the impact of
tool usage vary largely across individual studies. Following the factors in Boehm’s
COCOMO II model [43], numerous organizations consider tool usage as a signifi-
cant means for improving development productivity. Yet, empirical studies have
reported their positive and negative effects [57] as well as little (no significant)
direct effect [60]. This paradox might be explained by findings that the effect
of tool usage on development productivity is coupled with the impact of other
factors and that isolated use of tools makes little difference in the productivity
results [1, 71, 82].
It has been observed, for instance, that due to the significant costs involved in
learning the CASE tools, the effect of tools in some projects is negative, that is, they
increase the effort required for developing software products [83]. Other significant

factors affecting the relationship between tool usage and development productivity
are the degree of tool integration, tool maturity, tool training and experience,
appropriate support when selecting and introducing corresponding tools (task-
technology fit), team coordination/interaction, structured methods use, documenta-
tion quality, and project size [54, 82–85]. For instance, when teams receive both
tool-specific operational training and more general software development training,
higher productivity is observed [83]. The same study reports on a 50% increase in
productivity if tools and formal structured methods are used together.
Abdel-Hamid [1] observed, moreover, that studies conducted in laboratory set-
tings using relatively small pilot projects tend to report productivity improvements
of 100–600%, whereas when the impact of CASE tools is assessed in real organiza-
tional settings, much more modest productivity gains (15–30%) or no gains at all
were found. The author aptly summarizes that project managers are the main source of failed CASE tool adoption, because most often they do not institute
rudimentary management practices needed before and during the introduction of
new tools. What they usually do is to look for a solution they can buy. Actually, in a
simulation presented by Abdel-Hamid, almost half the productivity gains from new
software development tools were squandered by bad management. Moreover,
Ambler [51] stresses the need to be flexible regarding tool usage. He observed that each
development team works on different things, and each individual has different ways
of working. Forcing inappropriate tools on people will not only hamper progress, it
can destroy team morale.
The impact of project size and process maturity on the benefits gained from
introducing tools is shown in [86]. The authors observed at IBM software solutions
that in the context of advanced processes, productivity gains through requirements
planning tools may vary between 107.9% for a small project (five features) and
23.5% for a large project (80 features). Yet, in the context of a small project
(10 features) such a gain may vary between 26.1 and 56.4% when regular and
advanced processes are applied, respectively. A replicated study confirmed this
trend—productivity decreases with growing project size and higher process com-
plexity. The productivity loss in the larger project was due to additional overhead for
processing and maintaining a larger amount of data produced by a newly introduced
tool. Higher process complexity, on the other hand, brings more overhead related to
newly introduced activities (processes). Yet, measuring at the macrolevel makes it
difficult to separate the impact of the tool from other confounding variables (such as
team experience and size/complexity of a single feature). Therefore, the results of
Bruckhaus [86] should be interpreted cautiously [87].
The use of software tools to improve project productivity is usually interpreted in
terms of automated tools that assist with project planning, tracking, and manage-
ment. Yet, nonautomated tools such as checklists, templates, or guidelines that

help software engineers interpret and comply with development processes can be
considered as supporting the positive impact of high-maturity processes on
improved project productivity [71].

6.2.4 Software Reuse


Reuse of software products, processes, and other software artifacts is considered
the technological key to enabling the software industry to achieve increased levels of
productivity and quality [7, 88].
Reuse contributes to an increase in productivity in both new development and
software maintenance [89]. There are two major sources of savings generated by
reuse (1) less constructive work for developing software (reuse of similar and/or
generic components) and (2) less analytical work for testing software and rework to
correct potential defects (reuse of high-quality components). In other words, soft-
ware reuse catalyzes improvements in productivity by avoiding redevelopment and
improvements in quality by incorporating components whose reliability has already
been established [90]. That is, for example, why reuse benefits are greater for more
complex reusable components [91] (however, this positive effect levels off beyond a
certain component size).
Nevertheless, reuse is not for free and may, at least at the beginning, bring
negative savings (productivity loss). A high-quality, reusable (generic) software
component first needs to be developed, which usually costs more than developing
specific software (not intended to be reused). Populating the repository of reusable
artifacts alone contributes to an initial loss of productivity [91]. Nor should the repository expand indefinitely, due to the additional maintenance costs it incurs. On the one hand,
creating and maintaining rarely used, small repositories of small components tends
to cost more than the reuse savings they generate. As the repository size increases,
opportunities for reuse tend to increase, generating more development savings. On
the other hand, maintaining and searching through very large repositories may again
generate negative reuse savings.
Later on, to actually reuse a certain component, it has to be identified as available
in the repository and retrieved. Then, the feasibility of the component to be reused in
a specific context has to be determined (a component has to be analyzed and
understood). Even if accepted, a component often cannot be reused as is, but has
to be modified in order to integrate it into the new product. At the end, it has to be
(re)tested in a new context. In each step of reuse, a number of factors influencing
its productivity have to be considered. Finally, if the total cost for retrieving,
evaluating, and integrating a component is less than the cost of writing it from
scratch, it makes economic sense to reuse the component.
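The decision rule from the preceding paragraph is easy to state operationally; the following minimal sketch (with purely hypothetical numbers) compares the two options:

```python
# Minimal sketch of the reuse decision rule stated above: reuse pays off when
# retrieving, evaluating, adapting, and retesting a component costs less than
# writing it from scratch. All numbers are hypothetical.

def reuse_cost(retrieve: float, evaluate: float, adapt: float, retest: float) -> float:
    return retrieve + evaluate + adapt + retest

cost_from_scratch = 40.0  # person-hours to develop the component anew
cost_reuse = reuse_cost(retrieve=2.0, evaluate=6.0, adapt=15.0, retest=8.0)

if cost_reuse < cost_from_scratch:
    print(f"reuse ({cost_reuse:.0f} h) beats new development ({cost_from_scratch:.0f} h)")
else:
    print(f"new development ({cost_from_scratch:.0f} h) beats reuse ({cost_reuse:.0f} h)")
```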

The success of reuse depends on numerous factors. Rine and Sonnemann [92]
used an industrial survey to identify several leading reuse success factors (so-called
reuse capability indicators):
l Use of product-line approach
l Architecture that standardizes interfaces and data formats
l Common software architecture across the product line
l Design for manufacturing approach
l Domain engineering
l Management that understands reuse issues
l Software reuse advocate(s) in senior management
l Use of state-of-the-art tools and methods
l Precedence of reusing high-level software artifacts such as requirements and
design versus just code reuse
l Tracing end-user requirements to the components that support them
Atkins et al. [87, 93] confirm part of these results in an empirical study in which the change effort for large telecommunication software was reduced by about 40% through the use of a version-sensitive code editor, and by about a factor of four when domain engineering technologies were applied.
The impact of reuse on development productivity, like most other influence
factors, is strongly coupled with other project characteristics. It should thus not be
simplified by taking the positive impact of reuse on productivity for granted. Frakes
and Succi [94] observed, for instance, a certain inconsistency regarding the relation-
ship between reuse and productivity across various industrial data sets, with some
relationships being positive and others negative.
One of the factors closely coupled with reuse is the characteristics of the person-
nel involved in reuse (development of reusable assets and their reuse). Morisio et al.
[95] observed that the more familiar developers are with reused, generic software
(framework), the more benefit is gained when reusing it. The authors report that
although developing a reusable asset may cost 10 times as much as ‘‘traditional’’
development, the observed productivity gain of each development where the asset is
reused reached 280%. Thus, the benefit from creating and maintaining reusable
(generic) assets increases with the number of times they are reused.
Another factor influencing the impact of reuse on development productivity is the
existence of defined processes. Soliman [96], for example, identifies the lack of a
strategic plan available to managers to implement software reuse as a major factor
affecting the extent of productivity benefits gained from reuse. Major issues for
managers to consider include commitments from top management, training for

software development teams, reward systems to encourage reuse, measurement


tools for measuring the degree of success of the reuse program, necessary reuse
libraries, rules regarding the development of reusable codes, domain analysis, and
an efficient feedback system.
Software reuse in the context of object-oriented (OO) software development is
one specific type of reuse that proved to be especially effective in increasing
development productivity [97, 98]. Yet, the common belief that using the OO
paradigm is sufficient for improving development productivity does not have
much quantitative evidence supporting it [98]. OO reuse requires considering
several aspects, such as the type of reuse or the domain context of reuse. Results
of experiments in the context of C++ implementation show, for instance, that while black-box reuse (class reuse without modification) increases programmer productivity, the benefits of white-box reuse (reuse by deriving a new class from an existing one through inheritance) are not so clear (especially for reuse of third-party library
classes) [29]. Finally, introducing the OO paradigm, like introducing any new
technology, requires considering and often adjusting the business context (model)
to gain the expected productivity benefits [74]. For example, inappropriate schedule
planning (constrained or too long) or lack of process control (e.g., over the effects of
newly introduced technology) may completely cancel out any benefit from the
applied new technology, including the object-oriented paradigm [74].
The use of COTS components is closely related to software reuse and might, in
practice, be classified as a subclass of software reuse. Yet, software reuse differs
from COTS software in three significant ways [1]: (a) reuse components are not
necessarily able to operate as standalone entities (as is assumed to be the case with
most components defined as COTS software); (b) reuse software is generally
acquired internally within the software developing organization (by definition,
COTS components must come from outside); and (c) reused software usually
requires access to the source code, whereas with COTS components, access to the
source code is rare (references to so-called ‘‘white-box’’ COTS notwithstanding).
From that perspective, the productivity of software development in the context of
software reuse and use of COTS is influenced by overlapping sets of factors.
In principle, the factors presented in the literature focus on the productivity of
integrating COTS into developed software [1]. Abts [99] postulates, for instance, that
increasing the number of COTS components is economically beneficial only up to a
certain point, where the savings resulting from the use of COTS components break
even with the maintenance costs arising from the volatility of such components.
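Abts' break-even argument can be illustrated with a toy cost model (all parameters below are invented for illustration): savings grow linearly with the number of COTS components, while integration and volatility-driven maintenance overhead grows faster, so total cost has a minimum at some intermediate number of components:

```python
# Hypothetical illustration of Abts' break-even argument [99]: each COTS
# component saves development effort but adds volatility-driven maintenance
# effort that grows with the number of components. All parameters are invented.

def total_cost(n_cots: int, base_dev: float = 1000.0) -> float:
    dev_saving = 60.0 * n_cots                 # effort saved by not building n parts
    glue_and_maintenance = 8.0 * n_cots ** 2   # integration/volatility overhead
    return base_dev - dev_saving + glue_and_maintenance

costs = {n: total_cost(n) for n in range(0, 9)}
best = min(costs, key=costs.get)
print(f"cheapest option uses {best} COTS components "
      f"({costs[best]:.0f} vs {costs[0]:.0f} person-hours without COTS)")
```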
Finally, software product lines and domain engineering as the most recent
incarnations of the reuse paradigm should be considered as a potential factor
influencing productivity of software development in general, and software changes
in particular. In fact, they may potentially reduce the cost of software change

three- to fourfold (telecommunication switch system) [100, 101]. Such a gain is in


line with the generally accepted view that domain engineering techniques improve
software productivity two- to 10-fold. Yet, the cost of change is also influenced by
other factors. Besides the quite obvious size and complexity of a change, the type
of change also has a substantial impact on change productivity. Similarly to [38], the
authors observed, for instance, that changes that fix bugs are more difficult than
comparably sized additions of new code (see also Section 6.1.3).
Concluding, there are various aspects of software reuse (besides simply the level
of reuse) that have to be taken into account when considering reuse as a way to
increase development productivity in a certain context. In practice, it is recom-
mended to collect organization-specific data related to reuse and productivity, and
use these data to guide reuse projects. However, one has to be careful when defining
both productivity and reuse measures for that purpose, since improper measure
definition might lead to wrong conclusions [102].

6.2.5 Software Outsourcing


Outsourcing has recently been gaining a lot of attention from the software
community as a means for reducing development costs. According to SPR data
[13], the level of outsourcing tripled in the years 1989–2000. In 2000, about 15% of
all information systems were produced under contract or outsource agreements (2%
international outsource).
Now, it is not clear if reduced cost is related to lower manpower costs or higher
development productivity at organizations providing outsourcing services. Analyz-
ing the ISBSG data [19], Carbonneau [103] confirmed the results of earlier studies
(e.g., [104]) that outsourced projects do not have significantly different productivity
(in terms of functionality delivered per effort invested) than in-source projects.
Therefore, software development outsourcing will only lead to significant cost
savings when the outsourcing provider has access to significantly cheaper labor.
This is consistent with prior research [105], which concludes that ‘‘an external
developer must have a considerable cost advantage over an internal developer in
order to have the larger net value.’’
Nowadays, companies are more and more interested not only in lower labor cost
but also in increased productivity of outsourcing projects. A study conducted by Fraunhofer IESE across 15 business units of a large international software organization showed that 42% of the respondents plan to use productivity
measurement to actually manage outsourcing projects. For that purpose, they need
to identify factors that influence productivity in the context of outsourcing.
In case of outsourcing projects, communication between software provider and
outsourcing organization seems to be a crucial aspect influencing development

productivity. As already mentioned in Section 6.2.1, the number of involved people


(team size) as well as communication structure have a significant impact on devel-
opment productivity. This is especially true for outsourced projects [46, 106], since
outsourcing projects usually suffer from a geographical and mental distance between
the involved parties. Software might be outsourced to an organization in the same
country (near-shore) or abroad (far-shore). Due to geographical, temporal, and
cultural distances, international outsourcing is found to be between 13 and 38%
more expensive than national outsourcing [107]. In that context, communication
facilities play an essential role. A summary of communication means in the context
of offshoring projects can be found, for instance, in Moczadlo [108] (Fig. 12).
Team and task distribution is another project aspect related to communication
between project stakeholders (see also Section 6.2.1) and thus should be considered
in the context of software outsourcing. de Neve and Ebert [109] strongly advise
building coherent and collocated teams of fully allocated engineers. Coherence
means splitting the work during development according to feature content and
assembling a team that can implement a set of related functionality. Collocation
means that engineers working on such a set of coherent functionality should sit
in the same building, perhaps within the same room. Full allocation implies that
engineers working on a project should not be distracted by different tasks in
other projects.

FIG. 12. Importance of communication channels [108]. (The figure reports the percentages of respondents rating email, telephone/fax, face-to-face communication, telephone conferences, inter- and intranet applications, discussion forums/chats, workflow systems, video conferences, and virtual meeting rooms as very important, neither, or less important/unimportant.)



Another already known factor that influences the productivity of outsourcing


projects is project management, including realistic, detailed, and well-structured
project planning [109, 110]. Those aspects cover factors such as schedule pressure
and distribution (see Section 6.2.2), as well as task assignment (see Section 6.2.1).
Customer/contractor involvement, which is less important in case of in-house
development (see results in Sections 4 and 5), becomes essential in outsourcing,
where a software product is developed completely by a service provider [110]. It is
recommended, for instance, that ‘‘at least one technical staff from the contracting
organization should be involved in the details of the work, to form a core of technical
expertise for in-house maintenance’’ [110].

7. Considering Productivity Factors in Practice

There are several essential issues that must be considered when selecting factors
for the purpose of modeling software development productivity, effort/cost,
schedule, etc.
This section presents a brief overview of the most important aspects to be
considered in practice.

7.1 Factor Definition and Interpretation


Despite similar naming, the productivity factors presented in the literature usually
differ significantly across various studies with respect to the phenomenon they
actually represent. Moreover, our experience (e.g., [111]) is that, in practice, even
if factor definitions suggest similar underlying phenomena, these may be interpreted
differently by various experts—dependent on their background and experiences.
Software reliability, for instance, is sometimes defined (or at least tacitly inter-
preted) as including safety and security and sometimes not.
Furthermore, a high-level view on abstract productivity factors that aggregate a
number of specific indicators within a single factor may mask potential productivity
problems, as poor results in one area (indicator) can be offset by excellence in another
[73]. It is therefore recommended to identify the project aspects influencing productivity at a level of granularity that allows potentially offsetting factors to be distinguished. Team capabilities, for instance, may cover numerous specific skills such as
communication or management skills. It may thus be that low communication skills
of developers are not visible because they are compensated by great management
skill of the project manager. In such a case, considering team capabilities as a
productivity indicator would not bring much value.

Therefore, to maximize the validity of the collected inputs (factor’s significance,


factor's value, and factor's impact on productivity), the factors as well as the related measures have to be defined precisely in the first place. Blindly adopting
published factors and assuming that everyone understands them may consistently
(and usually does) lead to large disappointments. Unclear factor definition results in
invalid data and inaccurate and instable models. In consequence, after investing
significant effort, software organizations finally completely give up quantitative
project management.

7.2 Factor Selection


One of the major purposes of the overview presented in this chapter is to support
software practitioners in selecting factors relevant in their specific context. We are,
however, far from suggesting uncritical adoption of the ‘‘top’’ factors presented here.
This chapter is supposed to increase a software practitioner’s understanding of
possible sources of productivity drivers rather than bias his thinking while selecting
factors relevant in his specific context. The overview presented here should, at most,
be taken as a starting point and reference for selecting context-specific factors.
As with any decision support technology, the prerequisite when selecting produc-
tivity factors is that they maximize potential benefit while minimizing related
costs. On the business level, they should then contribute to the achievement of
organizational goals, such as productivity control and improvement, while generat-
ing minimal additional costs. On the software project level, the selected factors
should cover (explain) a possibly large part of the observed productivity variance
and be of minimal quantity to assure an acceptable cost of collecting, interpreting,
and maintaining respective project data.
Therefore, an effective factor selection approach is an essential step in modeling
productivity or any productivity-related phenomena.
Selecting an optimal set of productivity factors (a minimal number of factors that meets specified cost- and benefit-related criteria, for example, effective productivity control at minimal modeling cost) is not a trivial task. In principle,
we distinguish three major selection approaches: based on experts’ assessments,
based on data analysis, and a hybrid approach where both data- and expert-based
methods are combined. Industrial experiences (e.g., [111]) indicate that none of the
first two methods is able to provide us with an optimal set of factors.
Data-based factor selection techniques provide a subset of already measured
factors that has usually been selected arbitrarily, for example, based on one of the
popular cost estimation models [112, 113]. Data-based selection can usually be easily automated and thus does not cost much in terms of manpower (although many factor selection techniques contain NP-hard algorithms (e.g., [114]), which limits their practical applicability). One
significant limitation is that data-based selection simply reduces the set of factors
given a priori as input. This means in practice that if input data does not cover certain
relevant productivity factors, a data-based approach will not identify them. It may at
most exclude irrelevant factors. Maximizing the probability of covering all relevant
factors would require collecting a significant amount of data, hopefully covering all
relevant factors, which would most probably disqualify such an approach due to
high data collection costs.
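As a minimal illustration of data-based selection (the project data and factor names below are invented), a simple filter ranks the already-measured candidate factors by the strength of their correlation with productivity; note that a factor absent from the data can never surface this way:

```python
# A minimal data-based factor (pre)selection sketch: rank candidate factors by
# the absolute correlation of their project data with productivity. Real
# studies use richer techniques; the point here is that only factors already
# present in the data can ever be selected.
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical project data: measured factors and observed productivity.
factors = {
    "team_experience_years": [1.0, 3.0, 5.0, 2.0, 8.0, 6.0],
    "requirements_volatility": [0.4, 0.3, 0.1, 0.5, 0.1, 0.2],
    "tool_usage_level": [2.0, 2.0, 3.0, 1.0, 3.0, 2.0],
}
productivity = [0.8, 1.1, 1.6, 0.7, 1.9, 1.5]

ranked = sorted(factors, key=lambda f: -abs(pearson(factors[f], productivity)))
for name in ranked:
    print(f"{name:<26} r = {pearson(factors[name], productivity):+.2f}")
```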
On the other hand, expert-based factor selection techniques seem to be more
robust, since experts are able to identify (based on their experience) factors unmea-
sured so far. However, experts tend to be very inconsistent in their assessments,
depending, for example, on personal knowledge and expertise. Across 17 IESE
studies where we asked experts to rank identified factors with respect to their impact
on productivity and where we measured Kendall's coefficient of concordance W ∈ (0, 1) [115] to quantify the experts' agreement, in about half of the cases (46%) the experts disagreed significantly (W ≤ 0.3 at a significance level p = 0.05).
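For reference, Kendall's W can be computed directly from a matrix of expert rankings; the sketch below implements the standard formula without tie correction (the ranking data is made up):

```python
# Kendall's coefficient of concordance W for expert rankings (no tie
# correction). W is 1 for perfect agreement and approaches 0 for none.

def kendalls_w(rankings: list[list[int]]) -> float:
    """rankings[i][j] = rank that expert i assigned to factor j (1..n)."""
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(expert[j] for expert in rankings) for j in range(n)]
    mean = m * (n + 1) / 2.0
    s = sum((r - mean) ** 2 for r in rank_sums)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

experts = [
    [1, 2, 3, 4, 5],   # expert A's ranking of five factors
    [2, 1, 3, 5, 4],   # expert B
    [1, 3, 2, 4, 5],   # expert C
]
print(f"W = {kendalls_w(experts):.2f}")  # close to 1: high agreement
```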
Hybrid approaches to selecting productivity factors seem to be the best alterna-
tive. In the reviewed literature, however, merely 6% of the studies directly propose
some kind of combined selection approach. Most of the published studies (45%)
select productivity factors based on experts’ opinion or already published factors
(with COCOMO factors [43] coming out on top). A successful attempt at combining
data- and expert-based approaches within an iterative framework has been made, for
example, in [111].

7.3 Factor Dependencies


In practice, productivity factors are not independent of one another. Identification of
reciprocal relationships between productivity factors is a crucial aspect of productivity
modeling. On the one hand, explicit consideration of factors’ dependencies provides
software practitioners with a more comprehensive basis for decision making. On
the other hand, the existence of relationships between productivity factors limits the
applicability of certain modeling methods (e.g., multiple regression models) and
thus must be known in advance, before applying certain modeling methods.
In practice, we may distinguish between several kinds of between-factor relationships. Factors may be in a causal relationship, which means that a change in one factor (the cause) leads to a change in a related factor (the effect).


A factor may also be correlated with another, which means that changes to one factor go in parallel with changes to the other. Although factors linked in a causal relationship should be correlated, correlation does not imply causal association.
Moreover, besides influencing another factor, a factor may also influence the strength of another factor's impact on productivity. For example (Fig. 13), a customer's low capabilities and experience contribute to higher requirements volatility, which in turn leads to higher development effort. The negative impact of high requirements volatility can, however, be moderated (decreased) by disciplined requirements management.

FIG. 13. Types of factor relationships.
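Such moderation effects can be given a simple quantitative form; the toy model below (our own illustrative numbers, not an established model) lets requirements volatility inflate effort while requirements management discipline dampens that inflation:

```python
# A toy quantification of the relationships sketched in Fig. 13 (illustrative
# numbers only): requirements volatility inflates effort, and disciplined
# requirements management moderates (dampens) that inflation.

def effort_multiplier(volatility_impact: float, req_mgmt_discipline: float) -> float:
    """volatility_impact: worst-case effort increase (e.g., 0.40 = +40%);
    req_mgmt_discipline in [0, 1]: 1 fully moderates the impact."""
    return 1.0 + volatility_impact * (1.0 - req_mgmt_discipline)

base_effort = 100.0  # person-days
for discipline in (0.0, 0.5, 1.0):
    effort = base_effort * effort_multiplier(0.40, discipline)
    print(f"requirements management discipline {discipline:.1f}: "
          f"effort {effort:.0f} person-days")
```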
There are several practical issues to be considered regarding factor dependen-
cies. The first one is how to identify the various kinds of dependencies. Data-based (analytical) methods are able at most to identify correlations. Cause–effect relationships can be identified by experts, who, however, tend to disagree in their subjective assessments. The next issue is what to do with the identified dependencies: should we explicitly model them, or rather eliminate them? What alterna-
tive techniques exist to model or eliminate the identified relationships? Existing
publications on productivity measurement/modeling do not explicitly pay much
attention to those issues.

7.4 Model Quantification


Quantitative productivity management requires systematic collection of related
project data. This entails quantifying selected productivity factors and their impact
on productivity.
Proper definition of measures for each identified factor later on contributes to the
cost and reliability of collecting the data upon which quantitative productivity
management will take place. Noncontinuous factors (nominal, ordinal), especially,
require a clear and unambiguous definition of scales, reflecting distinct (orthogonal) classes. Flawed measurement processes and improper scale definitions usually lead
to unreliable, messy data, which significantly limits the applicability of certain
analysis methods and the reliability of the obtained analysis results (see, e.g., [111]).
Finally, the impact each identified factor has on productivity and, if explicitly
modeled, on other factors should be quantified. Some statistical approaches, for
example, regression, provide certain weights that reflect each factor’s impact on
productivity. Data mining provides a set of techniques dedicated to weighting a factor's impact (e.g., [114]). One major disadvantage of those methods is that the weights they provide are rather hard for experts to interpret. There are also several approaches based on expert assessments to quantify a factor's impact on productivity. The COBRA method [111], for instance, uses the percentage change of productivity caused by the worst-case factor value, which is very intuitive and easy for experts to assess. A less intuitive measure is used within Bayesian Belief Nets [116], where
the conditional probability of a certain productivity value given the values of the
influencing factors is assessed.
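To give the COBRA-style quantification a concrete shape, the sketch below (a simplification of the idea in [111]; factor names and percentages are invented) combines expert-elicited worst-case effort overheads with a project's actual factor values into a single effort multiplier:

```python
# A COBRA-flavored sketch (simplified from [111]): each factor's impact is
# elicited from experts as the percentage effort overhead its worst-case value
# causes; a project's multiplier combines the overheads of the factor values
# actually present. Factor names and numbers are invented.

worst_case_overhead = {          # expert-elicited, in percent
    "requirements volatility": 40.0,
    "low key-team capabilities": 35.0,
    "distributed development": 20.0,
}

# Actual factor values for one project, scaled 0 (best case) to 1 (worst case).
project = {
    "requirements volatility": 0.5,
    "low key-team capabilities": 0.2,
    "distributed development": 1.0,
}

overhead = sum(worst_case_overhead[f] / 100.0 * v for f, v in project.items())
print(f"estimated effort multiplier: {1.0 + overhead:.2f}x nominal")
```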

8. Summary and Conclusions

This chapter has presented a comprehensive overview of the literature and of experiences made within Fraunhofer IESE regarding the most common factors influencing software development productivity.
The major outcome of the study is that the success of software projects still relies
upon humans.
The second most commonly considered factors are tools and methods. However, even the best tool or method alone is not a silver bullet and cannot be a substitute for highly skilled people and effective work coordination. Investing in people is still considered as bringing more benefit than investing in tools and methods only [57]. Tools and methods should therefore be regarded as aids that amplify the positive impact of highly skilled and well-coordinated teams on development productivity [60].
Some productivity factors refer to using or not using certain activities. Yet, a certain activity may be applied consistently across development projects and therefore, at first glance, not be considered as having an impact on development productivity. However, when the effectiveness of that activity is considered, it may turn out that it still has a significant impact on productivity. This calls for considering software development methods, processes, and tools in terms of their effectiveness rather than simply in terms of using or not using them.

Moreover, any software development effort, even if staffed with skilled indivi-
duals, is likely to be unsuccessful if it does not explicitly account for how people
work together [60]. A software development environment is a complex social
system that may squander the positive impact of skillful individuals as well as
software tools and methods if team communication and coordination fail [1].
Factors facilitating team communication and work coordination are particularly
important in the context of software outsourcing. The geographical and, often, mental distance between the involved parties (e.g., outsourcing company, software provider) requires dedicated managerial activities and communication facilities to maintain a satisfactory level of productivity.
The most commonly selected factors support the thesis that schedule is not a
simple derivative of project effort. The negative impact of project schedule on
productivity, however, is considered only in terms of schedule constraints (compression). Parkinson’s law (‘‘cost of the project will expand to consume all available resources’’) seems not to be considered in daily practice.
Several ‘‘top’’ factors support the common intuition regarding the requirements
specification as the key development phase. First of all, requirements quality and
volatility are considered to be essential drivers of development productivity. Several further factors are considered as either contributing to the quality and stability of requirements or moderating the impact of already unstable requirements on productivity. Distributing project effort (manpower) with a focus on the requirements phase, as well as involving the customer significantly in the early phases of the development process, are the factors most commonly believed to increase requirements quality and stability. The impact of already unstable requirements may, on the other hand, be moderated by disciplined requirements management as well as by early reviews and inspections.
Finally, the results obtained here do not support the traditional belief that software
reuse is the key to productivity improvements. It seems that the first years of
enthusiasm also brought much disappointment. A plethora of factors that should
be considered to gain the expected benefits from reuse might explain this situation.
Ad hoc reuse, without any reasonable cost-benefit analysis and proper investments
to create a reuse environment (e.g., creation and maintenance of high-quality
reusable assets, integration support, appropriate team motivation, and training)
usually contributes to a loss in productivity.
The factors presented in this chapter result from a specific aggregation approach
that reflects current industrial trends. However, it must be considered that the
analyzed studies usually differ widely with respect to the identified factors, their
interdependencies, and their impact on productivity. Therefore, each organization
should consider potential productivity factors in its own environment (‘‘what is good for them does not necessarily have to be good for me’’), instead of uncritically adopting factors used in other contexts (e.g., the COCOMO factors) [51]. Moreover, since software development is a very rapidly changing environment, the selected factors should be reviewed and updated regularly.
Selecting the right factors is just a first step toward quantitative productivity
management. The respective project data must be collected, analyzed, and inter-
preted from the perspective of the stated productivity objectives [117, 118]. Incon-
sistent measurements and/or inadequate analysis methods may, and usually do, lead
to deceptive conclusions about productivity and its influencing factors [102]. In that
sense, one may say that rigorous measurement processes and adequate analysis
methods also have a significant impact on productivity, although not directly [52].
Therefore, such aspects as the clear definition and quantification of selected factors, the identification of factor interdependencies, and the quantification of their impact on productivity have to be considered.

Acknowledgments

We would like to thank Sonnhild Namingha from the Fraunhofer Institute for Experimental Software
Engineering (IESE) for reviewing the first version of this chapter.

References

[1] T.K. Abdel-Hamid, The slippery path to productivity improvement, IEEE Software 13 (4) (1996)
43–52.
[2] Gartner, Inc. press releases, Gartner Says Worldwide IT Services Revenue Grew 6.7 Percent in
2004, 8 February 2005 (http://www.gartner.com/press_releases/pr2005.html).
[3] M.C. Paulk, M.B. Chrissis, The 2001 High Maturity Workshop, Special Report, CMU/SEI-2001-SR-014, Carnegie Mellon Software Engineering Institute, Pittsburgh, PA, 2002.
[4] National Bureau of Economic Research, Inc., Output, Input, and Productivity Measurement.
Studies in Income and Wealth, vol. 25 by the Conference on Research in Income and Wealth,
Technical Report, Princeton University Press, Princeton, NJ, 1961.
[5] IEEE Std 1045–1992, IEEE Standard for Software Productivity Metrics, IEEE Computer Society
Press, Los Alamitos, CA, 1992.
[6] K.G. van der Poel, S.R. Schach, A software metric for cost estimation and efficiency measurement in data processing system development, J. Syst. Software 3 (1983) 187–191.
[7] N. Angkasaputra, F. Bella, J. Berger, S. Hartkopf, A. Schlichting, Zusammenfassung des 2. Work-
shops ‘‘Software-Produktivitätsmessungen’’ zum Thema Produktivitätsmessung und Wiederver-
wendung von Software [Summary of the 2nd Workshop ‘‘Software Productivity Measurement’’ on
Productivity Measurement and Reuse of Software], IESE-Report Nr. 107.05/D, Fraunhofer Insti-
tute for Experimental Software Engineering, Kaiserslautern, Germany, 2005 (in German).
[8] L.C. Briand, I. Wieczorek, Software resource estimation, in: Encyclopedia of Software Engineering,
(J.J. Marciniak, Ed.), vol. 2. John Wiley & Sons, New York, NY, 2002, pp. 1160–1196.
[9] T. Menzies, Z. Chen, D. Port, J. Hihn, Simple software cost analysis: Safe or unsafe? in: Proc.
International Workshop on Predictor Models in Software Engineering, St. Louis, MO, 15 May 2005.

[10] M. Jørgensen, M. Shepperd, A systematic review of software development cost estimation studies,
IEEE Trans. Software Eng. 33 (1) (2007) 33–53.
[11] T. Noth, M. Kretzschmar, Estimation of Software Development Projects, Springer-Verlag, Berlin,
1984 (in German).
[12] F.J. Heemstra, M.J.I.M. van Genuchten, R.J. Kusters, Selection of Cost Estimation Packages,
Research report EUT/BDK/36, Eindhoven University of Technology, Eindhoven, Netherlands, 1989.
[13] C. Jones, Software Assessments, Benchmarks, and Best Practices, Addison-Wesley Longman, Inc.,
New York, NY, 2000.
[14] B.W. Boehm, Software Engineering Economics, Prentice Hall PTR, Upper Saddle River, NJ, 1981.
[15] B.A. Kitchenham, N.R. Taylor, Software project development cost estimation, J. Syst. Software 5
(1985) 267–278.
[16] T.C. Jones, Estimating Software Cost, McGraw-Hill, New York, NY, 1998.
[17] D. Diaz, J. King, How CMM impacts quality, productivity, rework, and the bottom line, CrossTalk:
J. Defense Software Eng. 15 (3) (2002) 9–14.
[18] D. Greves, B. Schreiber, K. Maxwell, L. Van Wassenhove, S. Dutta, The ESA initiative for
software productivity benchmarking and effort estimation, Eur. Space Agency Bull. 87 (1996).
[19] ISBSG Data Repository. Release 9, International Software Benchmarking Group, Australia, 2005.
[20] Software Technology Transfer Finland (STTF). (http://www.sttf.fi/index.html).
[21] L.C. Briand, K. El Emam, F. Bomarius, COBRA: A hybrid method for software cost estimation,
benchmarking and risk assessment, in: Proc. 20th International Conference on Software Engineering,
April 1998, pp. 390–399.
[22] M. Ruhe, R. Jeffery, I. Wieczorek, Cost estimation for Web applications, in: Proc. 25th Inter-
national Conference on Software Engineering, Portland, OR, 3–10 May 2003, pp. 285–294.
[23] C. Andersson, L. Karlsson, J. Nedstam, M. Höst, B. Nilsson, Understanding software processes
through system dynamics simulation: A case study, in: Proc. 9th Annual IEEE International Confer-
ence and Workshop on the Engineering of Computer-Based Systems, 8–11 April 2002, pp. 41–50.
[24] J. Heidrich, A. Trendowicz, J. Münch, A. Wickenkamp, Zusammenfassung des 1st International
Workshop on Efficient Software Cost Estimation Approaches, WESoC’2006, IESE Report
053/06E, Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany,
April 2006 (in German).
[25] B. Kitchenham, Procedures for Performing Systematic Reviews, Technical Report TR/SE-0401,
Keele University, Keele, UK, 2004.
[26] K.D. Maxwell, L. Van Wassenhove, S. Dutta, Software development productivity of European
space, military, and industrial applications, IEEE Trans. Software Eng. 22 (10) (1996) 706–718.
[27] J.M. Desharnais, Analyse statistique de la productivité des projets de développement en informatique à partir de la technique des points de fonction [Statistical Analysis of the Productivity of Software Development Projects Based on the Function Point Technique], Master’s Thesis, University of Montreal, Canada, 1989 (in French).
[28] C.F. Kemerer, An empirical validation of software cost estimation models, Commun. ACM 30
(1987) 416–429.
[29] M. Lattanzi, S. Henry, Software reuse using C++ classes. The question of inheritance, J. Syst.
Software 41 (1998) 127–132.
[30] N.E. Fenton, S.L. Pfleeger, Software Metrics. A Rigorous and Practical Approach, second ed.,
International Thomson Computer Press, London, 1997.
[31] C. Jones, Software cost estimating methods for large projects, CrossTalk: J. Defense Software Eng.
18 (4) (2005) 8–12.
[32] C. Mair, M. Shepperd, M. Jorgensen, An analysis of data sets used to train and validate cost
prediction systems, in: Proc. International Workshop on Predictor Models in Software Engineering,
St. Louis, MO, 15 May 2005, pp. 1–6.

[33] K. Kennedy, C. Koelbel, R. Schreiber, Defining and measuring productivity of programming languages, Int. J. High Performance Comput. Appl. 11 (4) (2004) 441–448.
[34] E. Mendes, C. Lokan, R. Harrison, C. Triggs, A replicated comparison of cross-company and
within-company effort estimation models using the ISBSG database, in: Proc. International Metrics
Symposium, Como, Italy, 2005, pp. 36–46.
[35] S. Vijayakumar, Use of historical data in software cost estimation, Comput. Control Eng. J. 8 (3)
(1997) 113–119.
[36] L.H. Putnam, W. Myers, Measures for Excellence: Reliable Software on Time, Within Budget,
Yourdon Press, Upper Saddle River, NJ, 1992.
[37] The QSM Project Database, Quantitative Software Management, Inc., McLean, VA (http://www.
qsm.com/database.html).
[38] T.L. Graves, A. Mockus, Inferring change effort from configuration management databases, in:
Proc. 5th International Software Metrics Symposium, Bethesda, MD, 1998, pp. 267–273.
[39] V. Basili, L. Briand, S. Condon, Y.M. Kim, W.L. Melo, J.D. Valen, Understanding and predicting
the process of software maintenance releases, in: Proc. 18th International Conference on Software
Engineering, Berlin, Germany, 1996, pp. 464–474.
[40] F. Rico, Using Cost Benefit Analyses to Develop Software Process Improvement (SPI) Strategies,
A DACS State-of-the-Art Report, ITT Industries Advanced Engineering & Sciences Division,
New York, NY, 2000.
[41] CMMI Project Team, CMMISM for Software Engineering, Version 1.1, Staged Representation,
Technical Report CMU/SEI-2002-TR-029, Carnegie Mellon Software Engineering Institute, Pittsburgh, PA, 2002.
[42] L.H. Putnam, Linking the QSM Productivity Index with the SEI Maturity Level. Version 6,
Quantitative Software Management, Inc., McLean, VA, 2000 (http://www.qsma.com/pdfs/LINKING6.pdf).
[43] B.W. Boehm, C. Abts, A.W. Brown, S. Chulani, B.K. Clark, E. Horowitz, R. Madachy, D. Reifer,
B. Steece, Software Cost Estimation with COCOMO II, Prentice-Hall PTR, Upper Saddle River,
NJ, 2000.
[44] D.E. Harter, M.S. Krishnan, S.A. Slaughter, Effect of process maturity on quality, cycle time, and
effort in software product development, Manage. Sci. 46 (4) (2000) 451–466.
[45] B.K. Clark, Quantifying the effects of process improvement on effort, IEEE Software 17 (6) (2000)
65–70.
[46] J.D. Herbsleb, A. Mockus, An empirical study of speed and communication in globally distributed
software development, IEEE Trans. Software Eng. 29 (6) (2003) 481–494.
[47] B.W. Boehm, K.J. Sullivan, Software economics: A roadmap, in: Proc. International Conference on
Software Engineering, Limerick, Ireland, 2000, pp. 319–343.
[48] J. Griffyth, Human factors in high integrity software development: A field study, in: Proc. 15th
International Conference on Computer Safety, Reliability and Security, Vienna, Austria, 23–25
October 1997, Springer-Verlag, London, 1997.
[49] P. Tomaszewski, L. Lundberg, Software development productivity on a new platform: An indus-
trial case study, Inform. Software Technol. 47 (4) (2005) 257–269.
[50] W.K. Vaneman, K. Trianfis, Planning for technology implementation: An SD(DEA) approach, in:
Proc. Portland International Conference on Management of Engineering and Technology,
Technology Management in the Knowledge Era, PICMET-Portland State University, Portland,
OR, 2001.
[51] S.M. Ambler, Doomed from the start: What everyone but senior management seems to know,
Cutter IT J. 17 (3) (2004) 29–33.

[52] S.B. Hai, K.S. Raman, Software engineering productivity measurement using function points:
A case study, J. Inf. Technol. Cases Appl. 15 (1) (2000) 79–90.
[53] F. Niessink, H. van Vliet, Two case studies in measuring software maintenance effort, in: Proc.
International Conference on Software Maintenance, Bethesda, MD, 16–20 November 1998, IEEE
Computer Society Press, Los Alamitos, CA, 1998, pp. 76–85.
[54] G.H. Subramanian, G.E. Zarnich, An examination of some software development effort and
productivity determinants in ICASE tool projects, J. Manage. Inform. Syst. 12 (4) (1996) 143–160.
[55] E. Carmel, B.J. Bird, Small is beautiful: A study of packaged software development teams, J. High
Technol. Manage. Res. 8 (1) (1997) 129–148.
[56] F.P. Brooks, The Mythical Man-Month: Essays on Software Engineering, 20th Anniversary ed.,
Addison-Wesley, Reading, MA, 1995.
[57] J.D. Blackburn, G.D. Scudder, L. Van Wassenhove, Concurrent software development, Commun.
ACM 43 (4) (2000) 200–214.
[58] P.V. Norden, Curve fitting for a model of applied research and development scheduling, IBM J.
Res. Dev. 3 (2) (1958) 232–248.
[59] M.S. Krishnan, The role of team factors in software cost and quality: An empirical analysis, Inform.
Technol. People 11 (1) (1998) 20–35.
[60] S. Sawyer, P. Guinan, Software development: Processes and performance, IBM Syst. J. 34 (7)
(1998) 552–569.
[61] R.K. Smith, J.E. Hale, A.S. Parrish, An empirical study using task assignment patterns to improve
the accuracy of software effort estimation, IEEE Trans. Software Eng. 27 (3) (2001) 264–271.
[62] M. Cusumano, R. Selby, How Microsoft builds software, Commun. ACM 40 (6) (1997) 53–61.
[63] A. Parrish, R. Smith, D. Hale, J. Hale, A field study of developer pairs: Productivity impacts and
implications, IEEE Software 21 (5) (2004) 76–79.
[64] B.G. Cain, J.O. Coplien, N.B. Harrison, Social patterns in productive software development
organizations, Ann. Software Eng. 2 (1) (1996) 259–286.
[65] J.M. Carey, C.J. Kacmar, The impact of communication mode and task complexity on small group
performance and member satisfaction, Comput. Hum. Behav. 13 (1) (1997) 23–49.
[66] A. Cockburn, Agile Software Development, Addison-Wesley Professional, Boston, MA, 2001.
[67] S.D. Teasley, L.A. Covi, M.S. Krishnan, J.S. Olson, Rapid software development through team
collocation, IEEE Trans. Software Eng. 28 (7) (2002) 671–683.
[68] J. Collofello, D. Houston, I. Rus, A. Chauhan, D.M. Sycamore, D. Smith-Daniels, A system
dynamics software process simulator for staffing policies decision support, in: Proc. 31st Annual
Hawaii International Conference on System Sciences, vol. 6, Kohala Coast, HI, 6–9 January 1998,
pp. 103–111.
[69] J. Hale, A. Parrish, B. Dixon, R.K. Smith, Enhancing the COCOMO estimation models, IEEE
Software 17 (6) (2000) 45–49.
[70] I.R. Chiang, V.S. Mookerjee, Improving software team productivity, Commun. ACM 47 (5) (2004)
89–93.
[71] R. Bechtold, Reducing software project productivity risk, CrossTalk: J. Defense Software Eng.
13 (5) (2000) 19–22.
[72] E. Carmel, S. Sawyer, Packaged software teams: What makes them so special? Inform. Technol.
People 11 (1) (1998) 6–17.
[73] Anonymous, Above average(s): Measuring application development performance, Intranet Net-
working Strategies Rep. 8 (3) (2000) 1–4.
[74] T.E. Potok, M.A. Vouk, The effects of the business model on object-oriented software development
productivity, IBM Syst. J. 36 (1) (1997) 140–161.

[75] Y. Yang, Z. Chen, R. Valerdi, B.W. Boehm, Effect of schedule compression on project effort, in:
Proc. 5th Joint International Conference & Educational Workshop, the 15th Annual Conference for
the Society of Cost Estimating and Analysis and the 27th Annual Conference of the International
Society of Parametric Analysts, Denver, CO, 14–17 June 2005.
[76] R. Park, The central equations of the PRICE software cost model, in: Proc. 4th COCOMO Users’
Group Meeting, Software Engineering Institute, Pittsburgh, PA, November 1988.
[77] R.W. Jensen, An improved macrolevel software development resource estimation model, in: Proc. 5th International Society of Parametric Analysts Conference, St. Louis, MO, 26–28 April 1983, pp. 88–92.
[78] C.N. Parkinson, Parkinson’s Law and Other Studies in Administration, Houghton Mifflin
Company, Boston, MA, 1957.
[79] M.A. Mahmood, K.J. Pettingell, A.I. Shaskevich, Measuring productivity of software projects:
A data envelopment analysis approach, Decision Sci. 27 (1) (1996) 57–80.
[80] L.O. Damm, L. Lundberg, C. Wohlin, Faults-slip-through—A concept for measuring the efficiency
of the test process, Software Process Improv. Practice 11 (1) (2006) 47–59.
[81] C.F. Kemerer, Software Project Management Readings and Cases, McGraw-Hill, Chicago, IL, 1997.
[82] R. Bazelmans, Productivity—The role of the tools group, ACM SIGSOFT Eng. Notes 10 (2) (1985)
63–75.
[83] P. Guinan, J. Cooprider, S. Sawyer, The effective use of automated application development tools,
IBM Syst. J. 36 (1) (1997) 124–139.
[84] J. Baik, B.W. Boehm, B.M. Steece, Disaggregating and calibrating the CASE tool variable in
COCOMO II, IEEE Trans. Software Eng. 28 (11) (2002) 1009–1022.
[85] C.D. Cruz, A proposal of an object oriented development cost model, in: Proc. European Software
Measurement Conference, Technologisch Instituut VZW, Antwerp, Belgium, 1998, pp. 581–587.
[86] T. Bruckhaus, N.H. Madhavji, I. Janssen, J. Henshaw, The impact of tools on software productivity,
IEEE Software 13 (5) (1996) 29–38.
[87] D.L. Atkins, T. Ball, T.L. Graves, A. Mockus, Using version control data to evaluate the impact of
software tools: A case study of the Version Editor, IEEE Trans. Software Eng. 28 (7) (2002) 625–637.
[88] V. Basili, H.D. Rombach, The TAME project: Towards improvement-oriented software environ-
ments, IEEE Trans. Software Eng. 14 (6) (1988) 758–773.
[89] V. Basili, Viewing maintenance as reuse-oriented software development, IEEE Software 7 (1)
(1990) 19–25.
[90] R.W. Selby, Enabling reuse-based software development of large-scale systems, IEEE Trans.
Software Eng. 31 (6) (2005) 495–510.
[91] D.L. Nazareth, R.A. Rothenberger, Assessing the cost-effectiveness of software reuse: A model for
planned reuse, J. Syst. Software 73 (2004) 245–255.
[92] D.C. Rine, R.M. Sonnemann, Investments in reusable software. A Study of software reuse
investment success factors, J. Syst. Software 41 (1) (1998) 17–32.
[93] D.L. Atkins, A. Mockus, H.P. Siy, Measuring technology effects on software change cost, Bell
Labs Tech. J. 5 (2) (2000) 7–18.
[94] W.B. Frakes, G. Succi, An industrial study of reuse, quality, and productivity, J. Syst. Software 57
(2001) 99–106.
[95] M. Morisio, D. Romano, C. Moiso, Framework based software development: Investigating the
learning effect, in: Proc. 6th IEEE International Software Metrics Symposium, Boca Raton, FL,
4–6 November 1999, pp. 260–268.
[96] K.S. Soliman, Critical success factors in implementing software reuse: A managerial prospective, in: Proc. Information Resources Management Association International Conference, Anchorage, AK, 21–24 May 2000, pp. 1174–1175.

[97] V.R. Basili, L.C. Briand, W.L. Melo, How reuse influences productivity in object-oriented systems,
Commun. ACM 39 (10) (1996) 104–116.
[98] J.A. Lewis, S.M. Henry, D.G. Kafura, An empirical study of the object-oriented paradigm and
software reuse, in: Proc. Conference on Object-Oriented Programming Systems, Languages and
Applications, 1991, pp. 184–196.
[99] C.M. Abts, B.W. Boehm, COTS Software Integration Cost Modeling Study, University of Southern
California Center for Software Engineering, Los Angeles, CA, 1997.
[100] A. Mockus, D.M. Weiss, P. Zhang, Understanding and predicting effort in software projects, in:
Proc. 25th International Conference on Software Engineering, Portland, OR, 3–10 May 2003, IEEE
Computer Society Press, Los Alamitos, CA, 2003, pp. 274–284.
[101] H. Siy, A. Mockus, Measuring domain engineering effects on software change cost, in: Proc. 6th
International Symposium on Software Metrics, Boca Raton, FL, IEEE Computer Society Press, Los
Alamitos, CA, 1999, pp. 304–311.
[102] P. Devanbu, S. Karstu, W. Melo, W. Thomas, Analytical and empirical evaluation of software reuse
metrics, in: Proc. 18th International Conference on Software Engineering, 1996, p. 189.
[103] R. Carbonneau, Outsourced Software Development Productivity, Report MSCA 693T.
John Molson School of Business, Concordia University, Montreal, Canada, 2004.
[104] M.J. Earl, The risks of outsourcing IT, Sloan Manage. Rev. 37 (3) (1996) 26–32.
[105] E.T.G. Wang, T. Barron, A. Seidmann, Contracting structures for custom software development:
The impacts of informational rents and uncertainty on internal development and outsourcing,
Manage. Sci. 43 (12) (1997) 1726–1744.
[106] J.D. Herbsleb, A. Mockus, T.A. Finholt, R.E. Grinter, An empirical study of global software
development: Distance and speed, in: Proc. 23rd International Conference on Software Engineering,
Toronto, Canada, 2001.
[107] M. Amberg, M. Wiener, Wirtschaftliche Aspekte des IT Offshoring [Economic Aspects of IT
Offshoring], Arbeitspapier. 6, Universität Erlangen-Nürnberg, Germany, 2004 (in German).
[108] R. Moczadlo, Chancen und Risiken des Offshore Development. Empirische Analyse der Erfahrun-
gen deutscher Unternehmen [Opportunities and Risks of Offshore Development. Empirical Analy-
sis of Experiences Made by German Companies], FH Pforzheim, Pforzheim, Germany, 2002.
[109] P. de Neve, C. Ebert, Surviving global software development, IEEE Software 18 (2) (2001) 62–69.
[110] J. Herbsleb, D. Paulish, M. Bass, Global software development at Siemens: Experience from nine projects, in: Proc. 27th International Conference on Software Engineering, St. Louis, MO, 2005.
[111] A. Trendowicz, J. Heidrich, J. Münch, Y. Ishigai, K. Yokoyama, N. Kikuchi, Development of a
hybrid cost estimation model in an iterative manner, in: Proc. 28th International Conference on
Software Engineering, Shanghai, China, 2006, pp. 331–340.
[112] Z. Chen, T. Menzies, D. Port, B. Boehm, Finding the right data for software cost modeling, IEEE
Software 22 (6) (2005) 38–46.
[113] C. Kirsopp, M.J. Shepperd, J. Hart, Search heuristics, case-based reasoning and software project
effort prediction, in: Proc. Genetic and Evolutionary Computation Conference, Morgan Kaufmann
Publishers, Inc., San Francisco, CA, 2002, pp. 1367–1374.
[114] M. Auer, A. Trendowicz, B. Graser, E. Haunschmid, S. Biffl, Optimal project feature weights in
analogy-based cost estimation: Improvement and limitations, IEEE Trans. Software Eng. 32 (2)
(2006) 83–92.
[115] D. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, third ed., Chapman
& Hall/CRC, Boca Raton, FL, 2003.
[116] N. Fenton, W. Marsh, M. Neil, P. Cates, S. Forey, M. Tailor, Making resource decisions for
software projects, in: Proc. 26th International Conference on Software Engineering, May 2004,
pp. 397–406.

[117] V.R. Basili, Software Modeling and Measurement: The Goal Question Metric Paradigm, Computer
Science Technical Report Series, CS-TR-2956 (UMIACS-TR-92-96), University of Maryland,
College Park, MD, 1992.
[118] V.R. Basili, D.M. Weiss, A methodology for collecting valid software engineering data, IEEE Trans. Software Eng. SE-10 (6) (1984) 728–737.
[119] L.H. Putnam, W. Myers, Executive Briefing: Managing Software Development, IEEE Computer
Society Press, Los Alamitos, CA, 1996.
[120] CHAOS Chronicles, The Standish Group International, Inc., West Yarmouth, MA, 2007.
[121] A.J. Albrecht, Measuring application development productivity, in: Proc. IBM Applications
Development Symposium, Monterey, CA, 14–17 October 1979, pp. 83–92.
[122] Gartner, Inc., press releases, Gartner Survey of 1,300 CIOs Shows IT Budgets to Increase by 2.5
Percent in 2005, 14 January 2005 (http://www.gartner.com/press_releases/pr2005.html).
[123] D. Herron, D. Garmus, Identifying your organization’s best practices, CrossTalk: J. Defense
Software Eng. 18 (6) (2005) 22–25.
[124] N. Angkasaputra, F. Bella, S. Hartkopf, Software Productivity Measurement—Shared Experience
from Software-Intensive System Engineering Organizations, IESE-Report No. 039.05/E, Fraunho-
fer Institute for Experimental Software Engineering, Kaiserslautern, Germany, 2005.
[125] B.A. Kitchenham, E. Mendes, Software productivity measurement using multiple size measures,
IEEE Trans. Software Eng. 30 (12) (2004) 1023–1035.
[126] K. Maxwell, L. Van Wassenhove, S. Dutta, Performance evaluation of general and company
specific models in software development effort estimation, Manage. Sci. 45 (6) (1999) 787–803.
[127] J.P. McIver, E.G. Carmines, J.L. Sullivan, Unidimensional Scaling, Sage Publications, Beverly Hills, CA, 2004.
