Factors Influencing Software Development Productivity—State-of-the-Art and Industrial Experiences
ADAM TRENDOWICZ
Fraunhofer Institute for Experimental
Software Engineering, Fraunhofer-Platz 1,
67663 Kaiserslautern, Germany
JÜRGEN MÜNCH
Fraunhofer Institute for Experimental
Software Engineering, Fraunhofer-Platz 1,
67663 Kaiserslautern, Germany
Abstract
Managing software development productivity is a key issue in software organi-
zations. Business demands for shorter time-to-market while maintaining high
product quality force software organizations to look for new strategies to
increase development productivity.
Traditional, simple delivery rates employed to control hardware production
processes have turned out not to work when transferred directly to the software
domain. The productivity of software production processes may vary across
development contexts, depending on numerous influencing factors. Effective
productivity management requires considering these factors. Yet, there are
thousands of possible factors, and considering all of them would make no
sense from an economic point of view. Therefore, productivity modeling
should focus on a limited number of factors with the most significant impact
on productivity.
In this chapter, we present a comprehensive overview of productivity factors
recently considered by software practitioners. The study results are based on
the review of 126 publications as well as international experiences of the
Fraunhofer Institute, including 13 recent industrial projects, four workshops,
and eight surveys on software productivity. The aggregated results show
that the productivity of software development processes still depends signifi-
cantly on the capabilities of developers as well as on the tools and methods
they use.
1. Introduction
2. Design of the Study
   2.1. Review of Industrial Experiences
   2.2. Review of Related Literature
   2.3. Aggregation of the Review Results
3. Related Terminology
   3.1. Context Versus Influence Factors
   3.2. Classification of Influence Factors
4. Overview of Factors Presented in Literature
   4.1. Cross-Context Factors
   4.2. Context-Specific Factors
   4.3. Reuse-Specific Factors
   4.4. Summary of Literature Review
5. Overview of Factors Indicated by Industrial Experiences
   5.1. Demographics
   5.2. Cross-Context Factors
   5.3. Context-Specific Factors
   5.4. Summary of Industrial Experiences
6. Detailed Comments on Selected Productivity Factors
   6.1. Comments on Selected Context Factors
   6.2. Comments on Selected Influence Factors
7. Considering Productivity Factors in Practice
   7.1. Factor Definition and Interpretation
   7.2. Factor Selection
   7.3. Factor Dependencies
   7.4. Model Quantification
8. Summary and Conclusions
Acknowledgments
References
1. Introduction
Rapid growth in the demand for high-quality software and increased investment
in software projects show that software development is one of the key markets
worldwide [2, 122]. Together with the increased distribution of software, its variety
and complexity are growing constantly. A fast changing market demands software
products with ever more functionality, higher reliability, and higher performance.
Software project teams must strive to achieve these objectives by exploiting the
impressive advances in processes, development methods, and tools. Moreover, to
stay competitive and gain customer satisfaction, software providers must ensure that
software products with a certain functionality are delivered on time, within budget,
and to an agreed level of quality, or even with reduced development costs and time.
This illustrates the necessity for reliable methods to manage software development
productivity, which has traditionally been the basis of successful software manage-
ment. Numerous companies have already measured software productivity [3] or
planned to measure it for the purpose of improving their process efficiency, reducing
costs, improving estimation accuracy, or making decisions about outsourcing their
development.
Traditionally, the productivity of industrial production processes has been
measured as the ratio of units of output divided by units of input [4]. This
perspective was transferred into the software development context and is usually
defined as productivity [5] or efficiency [6]. As observed during an international
survey performed in 2006 by the Fraunhofer Institute for Experimental Software
Engineering, 80% of software organizations adapt this industrial perspective on
productivity to the context of software development, where inputs consist of the
effort expended to produce software deliverables (outputs). The assumption those
organizations make is that measuring software productivity is similar to measuring
any other form of productivity. Yet, software production processes seem
to be significantly more difficult than production processes in other industries
[7, 8, 120]. This is mainly because software organizations typically develop new
products as opposed to fabricating the same product over and over again. More-
over, software development is a human-based (‘‘soft’’) activity with extreme
uncertainties from the outset. This leads to many difficulties in the reliable
definition of software productivity. Some of the most critical practical conse-
quences are that software development productivity measures based simply on
size and effort are hardly comparable [7, 125], and that size-based software
estimates are not adequate [9].
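The ratio-based view of productivity discussed above can be illustrated with a minimal sketch; the function name and all figures are our own hypothetical examples, not data from any of the cited studies:

```python
def productivity(size, effort):
    """Industrial-style productivity: units of output per unit of input,
    here software size (e.g., function points) per person-month."""
    return size / effort

# Two hypothetical projects with the same nominal productivity...
small_project = productivity(400, 20)    # 20.0 FP per person-month
large_project = productivity(1200, 60)   # 20.0 FP per person-month

# ...may still not be comparable: differing domains, quality targets, and
# team contexts change what a "unit of output" actually required.
```

The sketch makes the comparability problem concrete: the ratio alone hides every influence factor discussed in the remainder of this chapter.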
(Software sizing (size measurement) as a separate large topic is beyond the scope of
this chapter. For more details on the issue of size measurement in the context of
productivity measurement, see, for example, [10].)
The review of productivity factors presented in this chapter consists of two parts.
First, we present a review of the authors’ individual experiences gained during a
number of industrial initiatives. Here, we include factors identified for the
purpose of cost and productivity modeling as well as factors acquired during surveys
and workshops performed by Fraunhofer IESE in the years 1998–2006 (also referred
to as IESE studies). The second part presents an overview of publications regarding
software project characteristics that have an influence on the cost and productivity of
software development.
The available papers were selected for inclusion (full review) based on the title
and abstract; unfortunately, some publications could not be obtained through the
library service of Fraunhofer IESE. The review was done by one researcher. The
criteria used to decide whether to include or exclude papers were as follows:
- Availability and age. The papers had to be published no earlier than 1995. Since
the type and impact of influence factors may change over time [26], we limit
our review to the past decade. We made an exception for papers presenting
software project data sets: we did include data sets that were collected and first
published before 1995 but were used to validate cost/productivity models
published after 1995.
- Relevancy. The papers had to report factors with a potential impact on software
development cost and/or productivity. Implicit and explicit factors were con-
sidered. Explicit factors are those considered in the context of productivity
modeling/measurement. Implicit factors include those already included in
public-domain cost models and software project data repositories. Algorithmic
cost estimation models, for example, include so-called cost drivers to adjust the
gross effort estimated based only on software size (e.g., [121]). Analogy-based
methods, on the other hand, use various project characteristics found in the
distance measure, which is used to find the best analogues to base the final
estimate on. Common software project data repositories indirectly suggest a
certain set of attributes that should be measured to assure quality estimates.
- Novelty. We did not consider studies that adopt a complete set of factors from
another reference and do not provide any novel findings (e.g., a transparent
model) on a factor's impact on productivity. We do not, for instance, consider
models (e.g., those based on neural networks) that use the whole set of
COCOMO I factors as their input and do not provide any insight into the
relative impact of each factor on software cost.
- Redundancy. We did not consider publications presenting the results of the
same study (usually presented by the same authors). In such cases, only one
publication was included in the review.
- Perspective. As already mentioned in the introduction, there are several possi-
ble perspectives on productivity in the context of software development. In this
review, we will focus on project productivity, that is, factors that make devel-
opment productivity differ across various projects. Therefore, we will, for
example, not consider the productivity of individual development processes
such as inspections, coding, or testing. This perspective is the one most com-
monly considered in the reviewed literature (although not stated directly) and is
the usual perspective used by software practitioners [7].
Based on the aforementioned criteria, 136 references were included in the full
review. After the full review, 122 references in total were considered in the results
presented in this chapter. Several references (mostly books) contained well-
separated sections where productivity factors were considered in separate contexts.
In such cases, we treated these parts as separate references, which resulted in a
final count of 142 references. For each included reference, the following information
was extracted:
- Bibliographic information: title, authors, source and year of publication, etc.;
- Factor selection context: implicit, explicit, cost model, productivity measure, project data repository;
- Factor selection method: expert assessment, literature survey, data analysis;
- Domain context: embedded software (Emb), management information systems (MIS), and Web applications (Web);
- Size of factor set: initial, finally selected;
- Factor characteristics: name, definition, weighting (if provided).
FACTORS INFLUENCING SOFTWARE DEVELOPMENT PRODUCTIVITY 193
Most of the analyzed studies started with some initial set of potential productivity
factors and applied various selection methods to identify only the most significant
ones. The analyzed references started with between 1 and 200 initially identified
factors, which were later reduced to the 1–31 most significant factors.
In total, the reviewed references mentioned 246 different factors, with 64 repre-
senting abstract phenomena (e.g., team experience and skills) and the remaining 178
concrete characteristics (e.g., analyst capability or domain experience).
We aggregated the results for each factor using two measures: (1) the number of
studies in which a given factor was considered (Frequency) and (2) the median of
the factor's ranks over all the studies in which it was considered (Median rank).
We assume ranks to be on an equidistant ordinal scale, which allows applying the
median operation [29].
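As a sketch of this aggregation scheme (the per-study rankings below are invented for illustration; 1 means the most important factor in that study):

```python
from statistics import median

# Invented example data: each study maps the factors it considered
# to the rank it assigned them (1 = most important).
studies = [
    {"team capabilities": 1, "tool usage": 3, "software complexity": 2},
    {"team capabilities": 2, "software complexity": 1},
    {"team capabilities": 1, "tool usage": 1},
]

def aggregate(studies):
    """Compute Frequency and Median rank for every factor."""
    factors = {f for study in studies for f in study}
    return {
        f: {
            "frequency": sum(1 for study in studies if f in study),
            "median_rank": median(study[f] for study in studies if f in study),
        }
        for f in factors
    }

result = aggregate(studies)
# "team capabilities" appears in all three studies with ranks 1, 2, 1,
# i.e., Frequency = 3 and Median rank = 1.
```

Ranking factors by Frequency first and Median rank second reproduces the ordering used in the tables of this chapter.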
3. Related Terminology
4. Overview of Factors Presented in Literature

In total, 246 different factors were identified in the analyzed literature. This
section presents the most commonly used productivity factors we found in the
reviewed literature. First, we present factors that are commonly selected across all
development contexts. Then, we present the most popular factors selected in the
context of a specific model, development type, and domain. Additionally, factors
specific for software reuse are analyzed.
Regarding the studies where project data repositories were presented, we ana-
lyzed publicly available data sets that contain more than only size and effort data
(see, e.g., [32] for an overview).
Table I
TOP CROSS-CONTEXT PRODUCTIVITY FACTORS

[Table I is not reproduced here.] In the table, the most significant factors for a
certain context are marked in bold, and cells containing factors that were classified
as the most significant ones in two or more contexts are gray-filled. An empty cell
means that a factor did not appear in a certain context at all. For each considered
context, the number of relevant references is given in the table header (in brackets).
Table II
TOP MODEL-SPECIFIC PRODUCTIVITY FACTORS

Table III
TOP DEVELOPMENT-TYPE-SPECIFIC PRODUCTIVITY FACTORS

Table IV
TOP PRODUCTIVITY FACTORS USED TO MODEL SOFTWARE COST

Table V
TOP REUSE-SPECIFIC PRODUCTIVITY FACTORS

[Tables II–V are not reproduced here.]
As might have been expected, the internal (architecture, data) and external
(interfaces) complexity of software is a significant determinant of development
productivity.
Yet, software complexity and programming language are clearly factors preferred in
the context of project data repositories. This most probably reflects a common intuition
of repository designers that those factors have a significant impact on development
productivity and numerous software qualities (e.g., reliability, maintainability).
As already mentioned in the introduction, the importance of a certain productivity
factor varies depending on the project context. The skills of software programmers
and analysts, for instance, seem to play a more important role in enhancement/
maintenance projects. Similarly, tool/method usage seems to be less important in
new development projects. On the other hand, software development that is not a
continuation of a previous product/release seems to significantly rely on the quality
of project management and team motivation.
The results presented in the literature support the claim made by Fenton and
Pfleeger [30] who suggest that software complexity might have a positive impact on
productivity in the context of new development (and thus is not perceived as a factor
worth considering) and a negative impact in case of maintenance and enhancement.
What might be quite surprising is that the domain is not considered in new
development projects at all (Fig. 5).
Regarding software domain-specific factors, there are several significant differ-
ences between factors playing an important role in the embedded and MIS domains.
The productivity of embedded software development depends more on tools and
methods used, whereas that of MIS depends more on the product complexity (Fig. 6).
Finally, reuse is not as significant a productivity factor as commonly believed.
Less than 17% of publications mention reuse as having a significant influence on
development productivity. This should not be a surprise, however, if we consider the
complex nature of the reuse process and the numerous factors determining its
success (Fig. 7).
[Figures 5–7 (bar charts of the percentage of studies in which each factor is used,
compared across new development vs. enhancement/maintenance projects and across
the embedded vs. MIS domains) are not reproduced here.]
According to the reviewed studies, the complexity and quality of reusable assets
are key success factors for software reuse. Furthermore, the support of the asset's
supplier and the capabilities of the development team (for integrating the asset into
the developed product) significantly influence the potential benefits/losses of reuse
(see Section 6.2.4 for a comprehensive overview of key reuse success factors).
5. Overview of Factors Indicated by Industrial Experiences

This section summarizes the most commonly used productivity factors indicated
by our industrial experiences gained in recent years at Fraunhofer IESE. The studies
summarized here are subject to nondisclosure agreements; thus, only aggregated
results without any details on specific companies are presented.
5.1 Demographics
The IESE studies considered in this section include:
- Thirteen industrial projects on cost and productivity modeling performed for
international software organizations in Europe (mostly), Japan, India, and
Australia.
- Four international workshops on software cost and productivity modeling,
which took place in the years 2005–2006 in Germany and Japan.

(Development type of the studied projects: multiple 48.0%, outsourcing 24.0%,
reuse 8.0%.)
Table VI
INDUSTRIAL EXPERIENCES: CROSS-CONTEXT PRODUCTIVITY FACTORS

Factor                                    Frequency  Median rank
Requirements quality                          23         1
  Requirements volatility                     20         1
  Requirements novelty                        11         1
Team capabilities and experience              23         3
  Project manager experience and skills       10         4.5
  Programming language experience             10        17
  Teamwork and communication skills            9         3
  Domain experience and knowledge              9         5
Project constraints                           20         4.6
  Schedule pressure                           13         2
  Distributed/multisite development           11         6
Customer involvement                          18         2
Method usage and quality                      18         4.3
  Requirements management                     10         2.5
  Reviews and inspections                      7         5
Required software quality                     18         8.5
  Required software reliability               16         4
  Required software maintainability           10        15
Context factors
  Life cycle model                             8         3
  Development type                             3         5
  Domain                                       2         1.5
(Factors most frequently selected across the IESE studies: team capability and
experience 92%, customer/user participation 72%, required product quality 72%;
domain 8%.)
Each cell of Tables VII and IX contains information in the form X/Y, where X is
the number of studies in which a certain factor was selected (Frequency) and Y is
the median rank given to the factor over those studies (Median rank). Moreover, the
most significant factors for a certain context are marked in bold, and cells
containing factors that were classified as the most significant ones in two or more
contexts are gray-filled. An empty cell means that a factor did not appear in a certain
context at all. For each context considered, the number of relevant references is
given in the table header (in brackets).
Table VII
INDUSTRIAL EXPERIENCES: STUDY-SPECIFIC PRODUCTIVITY FACTORS
Table VIII
INDUSTRIAL EXPERIENCES: PRODUCTIVITY FACTORS IN THE OUTSOURCING CONTEXT
Different methods play an important role in the embedded and MIS domains. Use of
early quality assurance methods (reviews and inspections) is regarded as having a
significant impact on productivity in the MIS domain, whereas use of late methods,
such as testing, counts more in embedded software development.
Requirements management as well as configuration and change management
activities are significant productivity factors only in the MIS domain. Software
practitioners do not relate them to productivity variance in the embedded domain
because they are usually consistently applied across development projects. According
to our observation, however, the effectiveness of those activities varies widely and
therefore should be considered as a significant influence factor (Table IX).
Table IX
INDUSTRIAL EXPERIENCES: DOMAIN-SPECIFIC PRODUCTIVITY FACTORS
productivity. They also found that the analyzed projects tend to cluster around a
certain discrete value of productivity, and that except for a limited number of
projects (outliers), each cluster actually represents a certain application domain.
Therefore, although an exact productivity measurement would require considering
other influence factors within a certain context (cluster), general conclusions about
productivity within a certain context can already be drawn. The authors provided
evidence that projects in a real-time domain tend to be less productive, in general,
than in the business systems domain. Jones combines productivity variance across
domains with different levels of project documentation generally required in each
domain. He observed [31] that the level of documenting varies widely across various
software domains. The productivity of military projects, for instance, suffers due to
the extremely high level of documentation required (at least three times more
documentation per function point is required than in MIS and Software Systems
domains) [31].
(The standard deviation of process productivity within clusters ranged from 1 to 4.)
(Putnam [42] analyzed the QSM database and showed that there seems to be a strong
correlation between an organization's CMM level and its productivity index.)
worldwide [3]. Harter et al. [44] investigate the relationships between process
maturity measured on the CMM scale, development cycle time, and effort for
30 software products created by a major IT company over a period of 12 years.
They found that, at the average values for process maturity and software quality, a
1% improvement in process maturity leads to a 0.32% net reduction in cycle time
and a 0.17% net reduction in development effort (taking into account the positive
direct effects and the negative indirect effects through quality).
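Read as elasticities, these findings allow a back-of-the-envelope calculation. The sketch below is our own illustration, not part of the original study: it linearly extrapolates the reported elasticities around the sample averages, which is only valid for small changes.

```python
# Elasticities reported by Harter et al. [44] at average maturity/quality.
CYCLE_TIME_ELASTICITY = 0.32  # % net cycle-time reduction per 1% maturity gain
EFFORT_ELASTICITY = 0.17      # % net effort reduction per 1% maturity gain

def approx_net_reductions(maturity_gain_pct):
    """Linearly extrapolate the reported elasticities (small changes only)."""
    return (CYCLE_TIME_ELASTICITY * maturity_gain_pct,
            EFFORT_ELASTICITY * maturity_gain_pct)

# A hypothetical 5% maturity improvement:
cycle, effort = approx_net_reductions(5.0)  # about 1.6% cycle time, 0.85% effort
```

Such a linearization says nothing about large maturity jumps (e.g., a whole CMM level), for which the indirect effects discussed below dominate.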
Yet, according to other studies (e.g., [45, 46]), overall organization maturity can
probably not be considered as a significant factor influencing productivity. One may
say that high maturity entails a certain level of project characteristics (e.g., CMMI
key practices) positively influencing productivity; however, which characteristics
influence productivity, and to what extent, most probably varies between different
maturity levels. In that sense, process maturity should be considered a context
factor rather than an influence factor. Influence factors should then refer to single
key practices rather than to whole maturity levels.
Overall maturity improvement should be selected if the main objective is a long-
term one, for example, high-quality products delivered on time and within budget. In
that case, increased productivity is not the most important benefit obtained from
improved maturity. The more important effects of increased maturity are stable
processes, which may, in turn, facilitate the achievement of short-term objectives,
such as effective productivity improvement (Fig. 11). One must be aware that
although increasing the maturity of a process will not hurt productivity in the
long-term perspective, it may hurt it during the transition to higher maturity levels.
It has been commonly observed (e.g., [48]) that the introduction of procedures, new
tools, and methods is, in the short-term perspective, detrimental to productivity
(so-called learning effect). Boehm and Sullivan [47], for instance, illustrate
productivity behavior when introducing new technologies (Fig. 11).
This learning effect can be moderated by introducing a so-called delta team
consisting of very skilled personnel who are able to alleviate the short-term,
negative effect on productivity of implementing certain key process areas (KPAs).
This is, however, nothing else than preventing the productivity decrease caused by
investments in implementing certain KPAs by improving factors that have proved
to have a significant positive impact on productivity (in this case, team capabilities).
Benefits from process improvement and from introducing new technologies can also
be gained faster by sharing experiences and offering appropriate training and
management support [49, 50].
An alternative way of moderating the short-term, negative impact of process
improvement on productivity would be to first implement KPAs that have a
short-term, positive impact on productivity (e.g., team capabilities, personnel
media. For instance, even in large-scale projects, minimizing the level of work
concurrency by keeping several small, parallel teams (with some level of indepen-
dence) instead of one large team working together seems to be a good strategy for
preventing a drop in productivity [62]. That is, for example, why agile methods
promoting two-person teams working independently (pair programming) may be
(though only under certain conditions) more productive than large teams working
concurrently [63]. Yet, as further reported in [63], well-defined development
processes are not without significance in agile development. A large industrial case
study proved, for example, that even pairs of experienced developers working
together need a collaborative, role-based protocol to achieve productivity [63].
Observing social factors in hyperproductive organizations, Cain et al. [64] found
the developer close to the core of the organization. Organizations with average or
low productivity exhibit much higher prestige values for management functions than
for those who directly add value to the product (the developers). Collaborations in
these organizations almost always flow from the manager in the center to other roles
in the organization. Even though a star structure (''chief surgical team'') is mostly the
favored communication structure, it seems that its impact on productivity depends
on who plays the central role: managerial (outward control flow) or technical
(inward control flow) staff. The authors point to the architect as an especially
prestigious role in a team within highly productive organizations.
The use of proper communication media may also significantly improve team
communication and, in consequence, development productivity. Face-to-face
communication still seems to be the most efficient way of communicating. Carey
and Kacmar [65], for instance, observed that although simple tasks can be
accomplished successfully using electronic communication media (e.g., telephone,
email), complex tasks may result in lower productivity and greater dissatisfaction
with the electronic medium used. Agile software development considers one of the most
effective ways for software developers to communicate to be standing around a
whiteboard, talking, and sketching [66]. Ambler [51] goes further and proposes to
define a specific period during the day (5 h) during which team members must be
present together in the workroom. Outside that period, they may go back to their
regular offices and work individually. A more radical option would be to gather the
development team in a common workroom for the whole duration of the project.
Teasley et al. [67] conclude their empirical investigation by saying that having
development teams reside in their own large room (an arrangement called radical
collocation) positively affected system development. The collocated projects had
significantly higher productivity and shorter schedules than both the performance
of similar past projects considered in the study and industry benchmarks. Yet, face-
to-face communication does not guarantee higher productivity, which may still vary
widely depending on the team communication structure (especially for larger teams).
Finally, it was observed that, besides a core team structure, close coupling to the
quality assurance staff and the customer turned out to be a pattern within highly
productive software organizations [64].
Staff turnover is another team-related project characteristic having a major impact
on development productivity. Collofello et al. [68] present an interesting study
where, based on a process simulation experiment, they investigated the impact of
various project strategies on team attrition. ''No action'' appeared to be the most
effective strategy when schedule pressure is high and cost containment is a priority.
Replacing a team member who left alleviates the team exhaustion rate (mitigates
increased attrition). Even though overstaffing is an expensive option, replacing a
team member who left with several new members minimizes the duration of the
project. Thus, this strategy should be considered for projects in which the
completion date has been identified as the highest priority. It is, however, not clear
how those results (especially regarding overstaffing) relate to Brooks' real-life
observation that adding staff late in a project makes it even later [56].
Task assignment is the next reported human-related factor differentiating software
development productivity. Boehm [43], for instance, introduces in his COCOMO II
model the task assignment factor to reflect the observation that proper assignment
of people to corresponding tasks has a great impact on development productivity.
Hale et al. [61, 69] go further and investigate related subfactors such as intensity,
concurrency, and fragmentation. They observed that the more time is spent on the
task by the same developer (intensity), the higher the productivity. Further, the more
team members work on the same task (concurrency), the lower the productivity
(communication overhead disproportionately larger than task size). Finally, the
more fragmented the developer’s time over various tasks (fragmentation), the
lower the productivity (due to overhead to switch work context).
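These three subfactors can be operationalized from simple effort logs. The following sketch uses our own illustrative definitions (share of the dominant developer for intensity, developer count for concurrency, share of time outside the main task for fragmentation); they are rough approximations, not Hale et al.'s exact measures, and the log data is invented:

```python
from collections import defaultdict

# Invented effort log: (developer, task, hours) entries.
log = [
    ("ana", "parser", 30), ("ana", "parser", 10),
    ("ben", "parser", 8), ("ana", "ui", 2),
]

def task_metrics(log, task):
    """Intensity and concurrency of a single task."""
    hours = defaultdict(float)
    for dev, t, h in log:
        if t == task:
            hours[dev] += h
    total = sum(hours.values())
    return {
        "intensity": max(hours.values()) / total,  # dominant developer's share
        "concurrency": len(hours),                 # developers sharing the task
    }

def fragmentation(log, dev):
    """Share of a developer's time spent outside their main task."""
    hours = defaultdict(float)
    for d, t, h in log:
        if d == dev:
            hours[t] += h
    total = sum(hours.values())
    return 1.0 - max(hours.values()) / total
```

Under these definitions, Hale et al.'s observations translate to: higher intensity and lower concurrency and fragmentation should correlate with higher productivity.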
Finally, team capabilities and skills is traditionally acknowledged as the most
significant personnel characteristic that influences development productivity. Team
capabilities also have an indirect impact on productivity by influencing other factors
with a potential impact on development productivity. The impact of the team
structure on productivity, for instance, usually assumes highly skilled technical
people. Moreover, the full benefits of tool support and technological improvements
cannot be achieved without capable personnel [70]. On the other hand, the impact
of team capabilities on productivity may be amplified or moderated by other
factors. An appropriate management and communication structure can, for instance,
moderate the negative impact of a low-experienced team on productivity; it can,
however, never compensate for the lack of experience [57].
Researchers and practitioners agree about the high importance of team capabil-
ities regarding development productivity. Yet, it is not clear exactly which skills
Others show that schedule expansion also seems to have a negative impact on
development productivity [26, 57, 75]. One possible explanation of this
phenomenon is so-called Parkinson's law, which says that ''the cost of the
project will expand to consume all available resources'' [78].
Yet, the right total schedule is only one part of success. The other part is its
right distribution across individual development phases. An investigation on
productivity presented in [57] concludes that some parts of the process simply need
to be ''unproductive,'' that is, should take more time. This empirical insight confirms
the common software engineering wisdom that more effort (and time) spent on early
development stages bears fruit, for example, through less rework in later stages and
finally increases overall project productivity. Apparently, the initial higher analysis
costs of highly productive projects are more than compensated for by the overall
shorter time spent on the total project [23, 79]. If the analysis phase, for example, is
not done properly in the first place, it takes programmers longer to add, modify, or delete code. Norden's Rayleigh curve [58] may be taken here as a reference shape of the most effective distribution of effort and time over the software development life cycle. Yet, the simple rule of ‘‘the more the better’’ does not apply here, and
after exceeding a certain limit of time spent on requirements specification, the
overall development productivity (and product quality) may drop [16]. The results
of a case study at Ericsson [80] showed, for example, that the implementation phase had the largest improvement potential in the two studied projects, since it caused a large faults-slip-through to later phases, accounting for 85 and 86% of the total improvement potential of the respective projects.
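As an illustration of the reference shape discussed above, the following sketch evaluates a Rayleigh staffing curve, parameterized by total effort and the time of peak staffing; the function name and example numbers are our own, not taken from [58].

```python
import math

def rayleigh_effort(t, total_effort, t_peak):
    """Instantaneous staffing at time t for a Rayleigh curve whose area
    (total effort) is total_effort and whose staffing peaks at t_peak."""
    a = 1.0 / (2.0 * t_peak ** 2)  # shape parameter derived from the peak location
    return total_effort * 2.0 * a * t * math.exp(-a * t * t)

# Example: a 100 person-month project whose staffing peaks at month 6.
curve = [rayleigh_effort(t, 100.0, 6.0) for t in range(25)]
peak_month = max(range(len(curve)), key=curve.__getitem__)
```

Effort builds up, peaks, and tails off; front-loading a larger share of this area corresponds to the seemingly ‘‘unproductive’’ but necessary analysis work discussed above.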
Factors affecting the relationship between tool usage and development productivity
are the degree of tool integration, tool maturity, tool training and experience,
appropriate support when selecting and introducing corresponding tools (task-
technology fit), team coordination/interaction, structured methods use, documenta-
tion quality, and project size [54, 82–85]. For instance, when teams receive both
tool-specific operational training and more general software development training,
higher productivity is observed [83]. The same study reports a 50% increase in productivity when tools and formal structured methods are used together.
Abdel-Hamid [1] observed, moreover, that studies conducted in laboratory settings using relatively small pilot projects tend to report productivity improvements of 100–600%, whereas assessments of CASE tools in real organizational settings find much more modest productivity gains (15–30%) or no gains at all. The author aptly concludes that project managers are the main source of failed CASE tool adoption, because most often they fail to institute the rudimentary management practices needed before and during the introduction of new tools; what they usually do instead is look for a solution they can buy. Indeed, in a
simulation presented by Abdel-Hamid, almost half the productivity gains from new
software development tools were squandered by bad management. Moreover,
Ambler [51] underlines being flexible regarding tool usage. He observed that each
development team works on different things, and each individual has different ways
of working. Forcing inappropriate tools on people will not only hamper progress, it
can destroy team morale.
The impact of project size and process maturity on the benefits gained from
introducing tools is shown in [86]. The authors observed at IBM Software Solutions that in the context of advanced processes, productivity gains through requirements
planning tools may vary between 107.9% for a small project (five features) and
23.5% for a large project (80 features). Yet, in the context of a small project
(10 features) such a gain may vary between 26.1 and 56.4% when regular and
advanced processes are applied, respectively. A replicated study confirmed this
trend—productivity gains decrease with growing project size and higher process complexity. The productivity loss in the larger project was due to additional overhead for
processing and maintaining a larger amount of data produced by a newly introduced
tool. Higher process complexity, on the other hand, brings more overhead related to
newly introduced activities (processes). Yet, measuring at the macrolevel makes it
difficult to separate the impact of the tool from other confounding variables (such as
team experience and the size/complexity of a single feature). Therefore, the results of Bruckhaus et al. [86] should be interpreted cautiously [87].
The use of software tools to improve project productivity is usually interpreted in
terms of automated tools that assist with project planning, tracking, and manage-
ment. Yet, nonautomated tools such as checklists, templates, or guidelines that
help software engineers interpret and comply with development processes can be
considered as supporting the positive impact of high-maturity processes on
improved project productivity [71].
The success of reuse depends on numerous factors. Rinie and Sonnemann [92]
used an industrial survey to identify several leading reuse success factors (so-called
reuse capability indicators):
- Use of product-line approach
- Architecture that standardizes interfaces and data formats
- Common software architecture across the product line
- Design for manufacturing approach
- Domain engineering
- Management that understands reuse issues
- Software reuse advocate(s) in senior management
- Use of state-of-the-art tools and methods
- Precedence of reusing high-level software artifacts such as requirements and design versus just code reuse
- Tracing end-user requirements to the components that support them
Atkins et al. [87, 93] confirm part of these results in an empirical study where the
change effort of large telecommunication software was reduced by about 40% through the use of a version-sensitive code editor, and by about a factor of four when domain engineering technologies were applied.
The impact of reuse on development productivity, like most other influence
factors, is strongly coupled with other project characteristics. It should thus not be oversimplified by taking the positive impact of reuse on productivity for granted. Frakes
and Succi [94] observed, for instance, a certain inconsistency regarding the relation-
ship between reuse and productivity across various industrial data sets, with some
relationships being positive and others negative.
One factor closely coupled with reuse is the characteristics of the personnel involved in it (the development of reusable assets and their subsequent reuse). Morisio et al.
[95] observed that the more familiar developers are with reused, generic software
(framework), the more benefit is gained when reusing it. The authors report that
although developing a reusable asset may cost 10 times as much as ‘‘traditional’’
development, the observed productivity gain of each development where the asset is
reused reached 280%. Thus, the benefit from creating and maintaining reusable (generic) assets increases with the number of times they are reused.
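This amortization argument can be sketched with a back-of-the-envelope model. The code below is a hypothetical cost model, not the authors' calculation: the reusable asset is assumed to cost a multiple of a one-off implementation, and each reuse is assumed to run at baseline productivity plus the reported gain.

```python
def break_even_reuses(asset_cost_factor=10.0, productivity_gain=2.8):
    """Smallest number of reuses after which building a reusable asset
    pays off. asset_cost_factor: cost of the asset relative to a one-off
    implementation; productivity_gain: relative gain per reuse (2.8 = 280%)."""
    # Each reuse costs 1 / (1 + gain) of a one-off build; the rest is saved.
    saving_per_reuse = 1.0 - 1.0 / (1.0 + productivity_gain)
    extra_cost = asset_cost_factor - 1.0  # premium over building once without reuse
    reuses = 0
    while reuses * saving_per_reuse < extra_cost:
        reuses += 1
    return reuses

# With the figures reported by Morisio et al. (10x asset cost, 280% gain),
# this simple model amortizes the investment after 13 reuses.
```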
Another factor influencing the impact of reuse on development productivity is the
existence of defined processes. Soliman [96], for example, identifies the lack of a strategic plan for implementing software reuse as a major factor limiting the extent of productivity benefits gained from reuse. Major issues for
managers to consider include commitments from top management, training for
[Figure: Percentage of consideration of communication media in the analyzed studies: telephone conferences, inter- and intranet applications, discussion forums and chats, workflow systems, video conferences, and virtual meeting rooms (x-axis: percentage of consideration).]
There are several essential issues that must be considered when selecting factors for the purpose of modeling software development productivity, effort/cost, schedule, etc. This section presents a brief overview of the most important of these aspects as they arise in practice.
6 A minimal number of factors that would meet specified cost- and benefit-related criteria, for example, effective productivity control at minimal modeling cost.
easily automated and thus does not cost much (in terms of manpower⁷). One
significant limitation is that data-based selection simply reduces the set of factors
given a priori as input. This means in practice that if input data does not cover certain
relevant productivity factors, a data-based approach will not identify them. It may at
most exclude irrelevant factors. Maximizing the probability of covering all relevant factors would require collecting a significant amount of data, which would most probably disqualify such an approach due to its high data collection costs.
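This limitation of data-based selection can be illustrated with a simple correlation filter: it can only rerank the measured candidate factors it receives, so an unmeasured factor can never surface. The sketch below is hypothetical (the names and data are invented, not drawn from the cited studies).

```python
def select_factors(data, productivity, top_k=3):
    """Rank candidate factors by absolute Pearson correlation with
    productivity and keep the top_k. Factors absent from `data` can
    never be selected — the limitation discussed in the text."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0

    scores = {name: abs(pearson(values, productivity))
              for name, values in data.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical project data: only the two measured factors can ever be ranked;
# an unmeasured factor (say, requirements volatility) is invisible to the filter.
projects = {
    "team_experience": [1, 2, 3, 4, 5],
    "tool_usage":      [2, 1, 2, 1, 2],
}
top = select_factors(projects, [10, 14, 19, 25, 29], top_k=1)
```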
On the other hand, expert-based factor selection techniques seem to be more
robust, since experts are able to identify (based on their experience) factors unmea-
sured so far. However, experts tend to be very inconsistent in their assessments,
depending, for example, on personal knowledge and expertise. Across 17 IESE studies in which we asked experts to rank identified factors with respect to their impact on productivity and in which we measured Kendall's coefficient of concordance W ∈ (0, 1) [115] to quantify the experts' agreement, in almost half of the cases (46%) the experts disagreed significantly (W < 0.3 at a significance level p = 0.05).
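For reference, Kendall's W for complete rankings without ties can be computed in a few lines; the formula is the standard one [115], while the function name and sample rankings below are illustrative.

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W in [0, 1] for m complete
    rankings (no ties) of the same n factors; 1 = perfect agreement."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]  # rank sum per factor
    mean_total = m * (n + 1) / 2.0  # expected rank sum under no agreement
    s = sum((t - mean_total) ** 2 for t in totals)  # spread of the rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three experts in perfect agreement on four factors:
assert kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]) == 1.0
# Two experts ranking in exactly opposite order:
assert kendalls_w([[1, 2, 3, 4], [4, 3, 2, 1]]) == 0.0
```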
Hybrid approaches to selecting productivity factors seem to be the best alterna-
tive. In the reviewed literature, however, merely 6% of the studies directly propose
some kind of combined selection approach. Most of the published studies (45%)
select productivity factors based on experts’ opinion or already published factors
(with COCOMO factors [43] coming out on top). A successful attempt at combining
data- and expert-based approaches within an iterative framework has been made, for
example, in [111].
7 Factor selection techniques are easy to automate; however, many of them contain NP-hard algorithms (e.g., [114]), which actually limits their practical applicability.
[Figure: Disciplined requirements management increases productivity and moderates the impact of unstable requirements on productivity.]
Moreover, any software development effort, even if staffed with skilled indivi-
duals, is likely to be unsuccessful if it does not explicitly account for how people
work together [60]. A software development environment is a complex social
system that may squander the positive impact of skillful individuals as well as
software tools and methods if team communication and coordination fail [1].
Factors facilitating team communication and work coordination are particularly
important in the context of software outsourcing. The geographical and, often, mental distance between the involved parties (e.g., outsourcing company, software provider, etc.) requires dedicated managerial activities and communication facilities to maintain a satisfactory level of productivity.
The most commonly selected factors support the thesis that schedule is not a
simple derivative of project effort. The negative impact of project schedule on
productivity, however, is considered only in terms of schedule constraints (com-
pression). Parkinson's law (‘‘cost of the project will expand to consume all available
resources’’) seems not to be considered in daily practice.
Several ‘‘top’’ factors support the common intuition regarding the requirements
specification as the key development phase. First of all, requirements quality and
volatility are considered to be essential drivers of development productivity. Several
further factors are considered as either contributing to the quality and volatility of requirements or moderating the impact of already unstable requirements on productivity. Distribution of the project effort (manpower) focusing on the requirements
phase as well as significant customer involvement in the early phases of the
development process are the factors most commonly believed to increase require-
ments quality and stability. The impact of already unstable requirements may, on the
other hand, be moderated by disciplined requirements management as well as early
reviews and inspections.
Finally, the results obtained here do not support the traditional belief that software
reuse is the key to productivity improvements. It seems that the first years of
enthusiasm also brought much disappointment. A plethora of factors that should
be considered to gain the expected benefits from reuse might explain this situation.
Ad hoc reuse, without any reasonable cost-benefit analysis and proper investments
to create a reuse environment (e.g., creation and maintenance of high-quality
reusable assets, integration support, appropriate team motivation, and training)
usually contributes to a loss in productivity.
The factors presented in this chapter result from a specific aggregation approach
that reflects current industrial trends. However, it must be considered that the
analyzed studies usually differ widely with respect to the identified factors, their
interdependencies, and their impact on productivity. Therefore, each organization
should consider potential productivity factors in its own environment (‘‘what is good for them does not necessarily have to be good for me’’), instead of uncritically
adopting factors used in other contexts (e.g., COCOMO factors) [51]. Moreover,
since software development is known as a very rapidly changing environment, selected factors
should be reviewed and updated regularly.
Selecting the right factors is just a first step toward quantitative productivity
management. The respective project data must be collected, analyzed, and inter-
preted from the perspective of the stated productivity objectives [117, 118]. Incon-
sistent measurements and/or inadequate analysis methods may, and usually do, lead
to deceptive conclusions about productivity and its influencing factors [102]. In that
sense, one may say that rigorous measurement processes and adequate analysis
methods also have a significant impact on productivity, although not directly [52].
Therefore, such aspects as clear definition and quantification of selected factors,
identification of factor interdependencies, as well as quantification of their impact
on productivity have to be considered.
Acknowledgments
We would like to thank Sonnhild Namingha from the Fraunhofer Institute for Experimental Software
Engineering (IESE) for reviewing the first version of this chapter.
References
[1] T.K. Abdel-Hamid, The slippery path to productivity improvement, IEEE Software 13 (4) (1996)
43–52.
[2] Gartner, Inc. press releases, Gartner Says Worldwide IT Services Revenue Grew 6.7 Percent in
2004, 8 February 2005 (http://www.gartner.com/press_releases/pr2005.html).
[3] M.C. Paulk, M.B. Chrissis, The 2001 High Maturity Workshop, Special Report, CMU/SEI-2001-
SR-014, Carnegie Mellon Software Engineering Institute, Pittsburgh, PA, 2002.
[4] National Bureau of Economic Research, Inc., Output, Input, and Productivity Measurement.
Studies in Income and Wealth, vol. 25 by the Conference on Research in Income and Wealth,
Technical Report, Princeton University Press, Princeton, NJ, 1961.
[5] IEEE Std 1045–1992, IEEE Standard for Software Productivity Metrics, IEEE Computer Society
Press, Los Alamitos, CA, 1992.
[6] K.G. van der Pohl, S.R. Schach, A software metric for cost estimation and efficiency measurement
in data processing system development, J. Syst. Software 3 (1983) 187–191.
[7] N. Angkasaputra, F. Bella, J. Berger, S. Hartkopf, A. Schlichting, Zusammenfassung des 2. Work-
shops ‘‘Software-Produktivitätsmessungen’’ zum Thema Produktivitätsmessung und Wiederver-
wendung von Software [Summary of the 2nd Workshop ‘‘Software Productivity Measurement’’ on
Productivity Measurement and Reuse of Software], IESE-Report Nr, 107.05/D, Fraunhofer Insti-
tute for Experimental Software Engineering, Kaiserslautern, Germany, 2005 (in German).
[8] L.C. Briand, I. Wieczorek, Software resource estimation, in: Encyclopedia of Software Engineering,
(J.J. Marciniak, Ed.), vol. 2. John Wiley & Sons, New York, NY, 2002, pp. 1160–1196.
[9] T. Menzies, Z. Chen, D. Port, J. Hihn, Simple software cost analysis: Safe or unsafe? in: Proc.
International Workshop on Predictor Models in Software Engineering, St. Louis, MO, 15 May 2005.
[10] M. Jørgensen, M. Shepperd, A systematic review of software development cost estimation studies,
IEEE Trans. Software Eng. 33 (1) (2007) 33–53.
[11] T. Noth, M. Kretzschmar, Estimation of Software Development Projects, Springer-Verlag, Berlin,
1984 (in German).
[12] F.J. Heemstra, M.J.I.M. van Genuchten, R.J. Kusters, Selection of Cost Estimation Packages,
Research report EUT/BDK/36, Eindhoven University of Technology, Eindhoven, Netherlands, 1989.
[13] C. Jones, Software Assessments, Benchmarks, and Best Practices, Addison-Wesley Longman, Inc.,
New York, NY, 2000.
[14] B.W. Boehm, Software Engineering Economics, Prentice Hall PTR, Upper Saddle River, NJ, 1981.
[15] B.A. Kitchenham, N.R. Taylor, Software project development cost estimation, J. Syst. Software 5
(1985) 267–278.
[16] T.C. Jones, Estimating Software Cost, McGraw-Hill, New York, NY, 1998.
[17] D. Diaz, J. King, How CMM impacts quality, productivity, rework, and the bottom line, CrossTalk:
J. Defense Software Eng. 15 (3) (2002) 9–14.
[18] D. Greves, B. Schreiber, K. Maxwell, L. Van Wassenhove, S. Dutta, The ESA initiative for
software productivity benchmarking and effort estimation, Eur. Space Agency Bull. 87 (1996).
[19] ISBSG Data Repository. Release 9, International Software Benchmarking Group, Australia, 2005.
[20] Software Technology Transfer Finland (STTF). (http://www.sttf.fi/index.html).
[21] L.C. Briand, K. El Emam, F. Bomarius, COBRA: A hybrid method for software cost estimation,
benchmarking and risk assessment, in: Proc. 20th International Conference on Software Engineering,
April 1998, pp. 390–399.
[22] M. Ruhe, R. Jeffery, I. Wieczorek, Cost estimation for Web applications, in: Proc. 25th Inter-
national Conference on Software Engineering, Portland, OR, 3–10 May 2003, pp. 285–294.
[23] C. Andersson, L. Karlsson, J. Nedstam, M. Höst, B. Nilsson, Understanding software processes
through system dynamics simulation: A case study, in: Proc. 9th Annual IEEE International Confer-
ence and Workshop on the Engineering of Computer-Based Systems, 8–11 April 2002, pp. 41–50.
[24] J. Heidrich, A. Trendowicz, J. Münch, A. Wickenkamp, Zusammenfassung des 1st International
Workshop on Efficient Software Cost Estimation Approaches, WESoC’2006, IESE Report
053/06E, Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany,
April 2006 (in German).
[25] B. Kitchenham, Procedures for Performing Systematic Reviews, Technical Report TR/SE-0401,
Keele University, Keele, UK, 2004.
[26] K.D. Maxwell, L. Van Wassenhove, S. Dutta, Software development productivity of European
space, military, and industrial applications, IEEE Trans. Software Eng. 22 (10) (1996) 706–718.
[27] J.M. Desharnais, Analyse statistique de la productivite des projects de development en informa-
tique apartir de la technique des points des function, Master’s Thesis, University of Montreal,
Canada, 1989 (in French).
[28] C.F. Kemerer, An empirical validation of software cost estimation models, Commun. ACM 30
(1987) 416–429.
[29] M. Lattanzi, S. Henry, Software reuse using C++ classes: The question of inheritance, J. Syst.
Software 41 (1998) 127–132.
[30] N.E. Fenton, S.L. Pfleeger, Software Metrics. A Rigorous and Practical Approach, second ed.,
International Thomson Computer Press, London, 1997.
[31] C. Jones, Software cost estimating methods for large projects, CrossTalk: J. Defense Software Eng.
18 (4) (2005) 8–12.
[32] C. Mair, M. Shepperd, M. Jorgensen, An analysis of data sets used to train and validate cost
prediction systems, in: Proc. International Workshop on Predictor Models in Software Engineering,
St. Louis, MO, 15 May 2005, pp. 1–6.
[52] S.B. Hai, K.S. Raman, Software engineering productivity measurement using function points:
A case study, J. Inf. Technol. Cases Appl. 15 (1) (2000) 79–90.
[53] F. Niessink, H. van Vliet, Two case studies in measuring software maintenance effort, in: Proc.
International Conference on Software Maintenance, Bethesda, MD, 16–20 November 1998, IEEE
Computer Society Press, Los Alamitos, CA, 1998, pp. 76–85.
[54] G.H. Subramanian, G.E. Zarnich, An examination of some software development effort and
productivity determinants in ICASE tool projects, J. Manage. Inform. Syst. 12 (4) (1996) 143–160.
[55] E. Carmel, B.J. Bird, Small is beautiful: A study of packaged software development teams, J. High
Technol. Manage. Res. 8 (1) (1997) 129–148.
[56] F.P. Brooks, The Mythical Man-Month: Essays on Software Engineering, 20th Anniversary ed.,
Addison-Wesley, Reading, MA, 1995.
[57] J.D. Blackburn, G.D. Scudder, L. Van Wassenhove, Concurrent software development, Commun.
ACM 43 (4) (2000) 200–214.
[58] P.V. Norden, Curve fitting for a model of applied research and development scheduling, IBM J.
Res. Dev. 3 (2) (1958) 232–248.
[59] M.S. Krishnan, The role of team factors in software cost and quality: An empirical analysis, Inform.
Technol. People 11 (1) (1998) 20–35.
[60] S. Sawyer, P. Guinan, Software development: Processes and performance, IBM Syst. J. 34 (7)
(1998) 552–569.
[61] R.K. Smith, J.E. Hale, A.S. Parrish, An empirical study using task assignment patterns to improve
the accuracy of software effort estimation, IEEE Trans. Software Eng. 27 (3) (2001) 264–271.
[62] M. Cusumano, R. Selby, How Microsoft builds software, Commun. ACM 40 (6) (1997) 53–61.
[63] A. Parrish, R. Smith, D. Hale, J. Hale, A field study of developer pairs: Productivity impacts and
implications, IEEE Software 21 (5) (2004) 76–79.
[64] B.G. Cain, J.O. Coplien, N.B. Harrison, Social patterns in productive software development
organizations, Ann. Software Eng. 2 (1) (1996) 259–286.
[65] J.M. Carey, C.J. Kacmar, The impact of communication mode and task complexity on small group
performance and member satisfaction, Comput. Hum. Behav. 13 (1) (1997) 23–49.
[66] A. Cockburn, Agile Software Development, Addison-Wesley Professional, Boston, MA, 2001.
[67] S.D. Teasley, L.A. Covi, M.S. Krishnan, J.S. Olson, Rapid software development through team
collocation, IEEE Trans. Software Eng. 28 (7) (2002) 671–683.
[68] J. Collofello, D. Houston, I. Rus, A. Chauhan, D.M. Sycamore, D. Smith-Daniels, A system
dynamics software process simulator for staffing policies decision support, in: Proc. 31st Annual
Hawaii International Conference on System Sciences, vol. 6, Kohala Coast, HI, 6–9 January 1998,
pp. 103–111.
[69] J. Hale, A. Parrish, B. Dixon, R.K. Smith, Enhancing the COCOMO estimation models, IEEE
Software 17 (6) (2000) 45–49.
[70] I.R. Chiang, V.S. Mookerjee, Improving software team productivity, Commun. ACM 47 (5) (2004)
89–93.
[71] R. Bechtold, Reducing software project productivity risk, CrossTalk: J. Defense Software Eng.
13 (5) (2000) 19–22.
[72] E. Carmel, S. Sawyer, Packaged software teams: What makes them so special? Inform. Technol.
People 11 (1) (1998) 6–17.
[73] Anonymous, Above average(s): Measuring application development performance, Intranet Net-
working Strategies Rep. 8 (3) (2000) 1–4.
[74] T.E. Potok, M.A. Vouk, The effects of the business model on object-oriented software development
productivity, IBM Syst. J. 36 (1) (1997) 140–161.
[75] Y. Yang, Z. Chen, R. Valerdi, B.W. Boehm, Effect of schedule compression on project effort, in:
Proc. 5th Joint International Conference & Educational Workshop, the 15th Annual Conference for
the Society of Cost Estimating and Analysis and the 27th Annual Conference of the International
Society of Parametric Analysts, Denver, CO, 14–17 June 2005.
[76] R. Park, The central equations of the PRICE software cost model, in: Proc. 4th COCOMO Users’
Group Meeting, Software Engineering Institute, Pittsburgh, PA, November 1988.
[77] R.W. Jensen, An improved macrolevel software development resource estimation model, in: Proc. 5th International
Society of Parametric Analysts Conference, St. Louis, MO, 26–28 April 1983, pp. 88–92.
[78] C.N. Parkinson, Parkinson’s Law and Other Studies in Administration, Houghton Mifflin
Company, Boston, MA, 1957.
[79] M.A. Mahmood, K.J. Pettingell, A.I. Shaskevich, Measuring productivity of software projects:
A data envelopment analysis approach, Decision Sci. 27 (1) (1996) 57–80.
[80] L.O. Damm, L. Lundberg, C. Wohlin, Faults-slip-through—A concept for measuring the efficiency
of the test process, Software Process Improv. Practice 11 (1) (2006) 47–59.
[81] C.F. Kemerer, Software Project Management Readings and Cases, McGraw-Hill, Chicago, IL, 1997.
[82] R. Bazelmans, Productivity—The role of the tools group, ACM SIGSOFT Eng. Notes 10 (2) (1985)
63–75.
[83] P. Guinan, J. Cooprider, S. Sawyer, The effective use of automated application development tools,
IBM Syst. J. 36 (1) (1997) 124–139.
[84] J. Baik, B.W. Boehm, B.M. Steece, Disaggregating and calibrating the CASE tool variable in
COCOMO II, IEEE Trans. Software Eng. 28 (11) (2002) 1009–1022.
[85] C.D. Cruz, A proposal of an object oriented development cost model, in: Proc. European Software
Measurement Conference, Technologisch Instituut VZW, Antwerp, Belgium, 1998, pp. 581–587.
[86] T. Bruckhaus, N.H. Madhavii, I. Janssen, J. Henshaw, The impact of tools on software productivity,
IEEE Software 13 (5) (1996) 29–38.
[87] D.L. Atkins, T. Ball, T.L. Graves, A. Mockus, Using version control data to evaluate the impact of
software tools: A case study of the Version Editor, IEEE Trans. Software Eng. 28 (7) (2002) 625–637.
[88] V. Basili, H.D. Rombach, The TAME project: Towards improvement-oriented software environ-
ments, IEEE Trans. Software Eng. 14 (6) (1988) 758–773.
[89] V. Basili, Viewing maintenance as reuse-oriented software development, IEEE Software 7 (1)
(1990) 19–25.
[90] R.W. Selby, Enabling reuse-based software development of large-scale systems, IEEE Trans.
Software Eng. 31 (6) (2005) 495–510.
[91] D.L. Nazareth, R.A. Rothenberger, Assessing the cost-effectiveness of software reuse: A model for
planned reuse, J. Syst. Software 73 (2004) 245–255.
[92] D.C. Rine, R.M. Sonnemann, Investments in reusable software. A Study of software reuse
investment success factors, J. Syst. Software 41 (1) (1998) 17–32.
[93] D.L. Atkins, A. Mockus, H.P. Siy, Measuring technology effects on software change cost, Bell
Labs Tech. J. 5 (2) (2000) 7–18.
[94] W.B. Frakes, G. Succi, An industrial study of reuse, quality, and productivity, J. Syst. Software 57
(2001) 99–106.
[95] M. Morisio, D. Romano, C. Moiso, Framework based software development: Investigating the
learning effect, in: Proc. 6th IEEE International Software Metrics Symposium, Boca Raton, FL,
4–6 November 1999, pp. 260–268.
[96] K.S. Soliman, Critical success factors in implementing software reuse: A managerial prospective,
in: Proc. International on Information Resources Management Association Conference, Anchorage,
AK, 21–24 May 2000, pp. 1174–1175.
[97] V.R. Basili, L.C. Briand, W.L. Melo, How reuse influences productivity in object-oriented systems,
Commun. ACM 39 (10) (1996) 104–116.
[98] J.A. Lewis, S.M. Henry, D.G. Kafura, An empirical study of the object-oriented paradigm and
software reuse, in: Proc. Conference on Object-Oriented Programming Systems, Languages and
Applications, 1991, pp. 184–196.
[99] C.M. Abts, B.W. Boehm, COTS Software Integration Cost Modeling Study, University of Southern
California Center for Software Engineering, Los Angeles, CA, 1997.
[100] A. Mockus, D.M. Weiss, P. Zhang, Understanding and predicting effort in software projects, in:
Proc. 25th International Conference on Software Engineering, Portland, OR, 3–10 May 2003, IEEE
Computer Society Press, Los Alamitos, CA, 2003, pp. 274–284.
[101] H. Siy, A. Mockus, Measuring domain engineering effects on software change cost, in: Proc. 6th
International Symposium on Software Metrics, Boca Raton, FL, IEEE Computer Society Press, Los
Alamitos, CA, 1999, pp. 304–311.
[102] P. Devanbu, S. Karstu, W. Melo, W. Thomas, Analytical and empirical evaluation of software reuse
metrics, in: Proc. 18th International Conference on Software Engineering, 1996, p. 189.
[103] R. Carbonneau, Outsourced Software Development Productivity, Report MSCA 693T.
John Molson School of Business, Concordia University, Montreal, Canada, 2004.
[104] M.J. Earl, The risks of outsourcing IT, Sloan Manage. Rev. 37 (3) (1996) 26–32.
[105] E.T.G. Wang, T. Barron, A. Seidmann, Contracting structures for custom software development:
The impacts of informational rents and uncertainty on internal development and outsourcing,
Manage. Sci. 43 (12) (1997) 1726–1744.
[106] J.D. Herbsleb, A. Mockus, T.A. Finholt, R.E. Grinter, An empirical study of global software
development: Distance and speed, in: Proc. 23rd International Conference on Software Engineering,
Toronto, Canada, 2001.
[107] M. Amberg, M. Wiener, Wirtschaftliche Aspekte des IT Offshoring [Economic Aspects of IT
Offshoring], Arbeitspapier. 6, Universität Erlangen-Nürnberg, Germany, 2004 (in German).
[108] R. Moczadlo, Chancen und Risiken des Offshore Development. Empirische Analyse der Erfahrun-
gen deutscher Unternehmen [Opportunities and Risks of Offshore Development. Empirical Analy-
sis of Experiences Made by German Companies], FH Pforzheim, Pforzheim, Germany, 2002.
[109] P. de Neve, C. Ebert, Surviving global software development, IEEE Software 18 (2) (2001) 62–69.
[110] J. Herbsleb, D. Paulish, M. Bass, Global software development at Siemens: Experience from nine
projects, in: Proc. 27th International Conference on Software Engineering, St. Louis, MO, 2005.
[111] A. Trendowicz, J. Heidrich, J. Münch, Y. Ishigai, K. Yokoyama, N. Kikuchi, Development of a
hybrid cost estimation model in an iterative manner, in: Proc. 28th International Conference on
Software Engineering, Shanghai, China, 2006, pp. 331–340.
[112] Z. Chen, T. Menzies, D. Port, B. Boehm, Finding the right data for software cost modeling, IEEE
Software 22 (6) (2005) 38–46.
[113] C. Kirsopp, M.J. Shepperd, J. Hart, Search heuristics, case-based reasoning and software project
effort prediction, in: Proc. Genetic and Evolutionary Computation Conference, Morgan Kaufmann
Publishers, Inc., San Francisco, CA, 2002, pp. 1367–1374.
[114] M. Auer, A. Trendowicz, B. Graser, E. Haunschmid, S. Biffl, Optimal project feature weights in
analogy-based cost estimation: Improvement and limitations, IEEE Trans. Software Eng. 32 (2)
(2006) 83–92.
[115] D. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, third ed., Chapman
& Hall/CRC, Boca Raton, FL, 2003.
[116] N. Fenton, W. Marsh, M. Neil, P. Cates, S. Forey, M. Tailor, Making resource decisions for
software projects, in: Proc. 26th International Conference on Software Engineering, May 2004,
pp. 397–406.
[117] V.R. Basili, Software Modeling and Measurement: The Goal Question Metric Paradigm, Computer
Science Technical Report Series, CS-TR-2956 (UMIACS-TR-92-96), University of Maryland,
College Park, MD, 1992.
[118] R. Basili, D.M. Weiss, A methodology for collecting valid software engineering data, IEEE Trans.
Software Eng. SE-10 (6) (1984) 728–737.
[119] L.H. Putnam, W. Myers, Executive Briefing: Managing Software Development, IEEE Computer
Society Press, Los Alamitos, CA, 1996.
[120] CHAOS Chronicles, The Standish Group International, Inc., West Yarmouth, MA, 2007.
[121] A.J. Albrecht, Measuring application development productivity, in: Proc. IBM Applications
Development Symposium, Monterey, CA, 14–17 October 1979, pp. 83–92.
[122] Gartner, Inc., press releases, Gartner Survey of 1,300 CIOs Shows IT Budgets to Increase by 2.5
Percent in 2005, 14 January 2005 (http://www.gartner.com/press_releases/pr2005.html).
[123] D. Herron, D. Garmus, Identifying your organization’s best practices, CrossTalk: J. Defense
Software Eng. 18 (6) (2005) 22–25.
[124] N. Angkasaputra, F. Bella, S. Hartkopf, Software Productivity Measurement—Shared Experience
from Software-Intensive System Engineering Organizations, IESE-Report No. 039.05/E Fraunho-
fer Institute for Experimental Software Engineering, Kaiserslautern, Germany, 2005.
[125] B.A. Kitchenham, E. Mendes, Software productivity measurement using multiple size measures,
IEEE Trans. Software Eng. 30 (12) (2004) 1023–1035.
[126] K. Maxwell, L. Van Wassenhove, S. Dutta, Performance evaluation of general and company
specific models in software development effort estimation, Manage. Sci. 45 (6) (1999) 787–803.
[127] J.P. McIver, E.G. Carmines, J.L. Sullivan, Unidimensional Scaling, Sage Publications, Beverly
Hills, CA, 2004.