DLMCSSESP01 Course Book
Masthead
Publisher:
IU Internationale Hochschule GmbH
IU International University of Applied Sciences
Juri-Gagarin-Ring 152
D-99084 Erfurt
Mailing address:
Albert-Proeller-Straße 15-19
D-86675 Buchdorf
[email protected]
www.iu.de
DLMCSSESP01
Version No.: 001-2022-0826
Module Director
Prof. Dr. Damir Ismailović
Table of Contents
Software Engineering: Software Processes
Module Director
Introduction
Software Engineering: Software Processes
Signposts Throughout the Course Book
Learning Objectives
Unit 1
Foundations of Software Processes
1.1 The Role of Software Processes
Unit 2
Software Process Definition and Modeling
2.1 Modeling Notations and Meta-Models
Unit 3
Basic Software Product Life Cycle Models
3.1 Waterfall Models
Unit 4
Agile and Lean Development Processes
4.1 The Agile Manifesto
4.2 Scrum
Unit 5
The Software Product Life Cycle
5.1 Detailed-Level Process Models: Unified Process and V-Modell XT
Unit 6
Governance and Management of Software Processes
6.1 Process Governance
Appendix 1
List of References
Appendix 2
List of Tables and Figures
Introduction
Software Engineering: Software Processes
Welcome
This course book contains the core content for this course. Additional learning materials can
be found on the learning platform, but this course book should form the basis for your
learning.
The content of this course book is divided into units, which are divided further into sections.
Each section contains only one new key concept to allow you to quickly and efficiently add
new learning material to your existing knowledge.
At the end of each section of the digital course book, you will find self-check questions.
These questions are designed to help you check whether you have understood the concepts
in each section.
For all modules with a final exam, you must complete the knowledge tests on the learning
platform. You will pass the knowledge test for each unit when you answer at least 80% of the
questions correctly.
When you have passed the knowledge tests for all the units, the course is considered fin-
ished and you will be able to register for the final assessment. Please ensure that you com-
plete the evaluation prior to registering for the assessment.
Good luck!
Learning Objectives
Software processes and life cycle models are applied in software engineering to facilitate
methodical development of software and software-intensive systems. This course will intro-
duce the foundations of software processes. Moreover, it will demonstrate software process
definition and modeling.
In Software Engineering: Software Processes, you will discover the role of software pro-
cesses and life cycle models in software engineering, from initialization to withdrawal of a
software system. You will learn about project management in software engineering, and
explore the origin, establishment, and evolution of software processes and models. Upon
completion, you will have an understanding of modeling notations and meta-models. Fur-
thermore, you will be able to apply applicable notations to define and model software pro-
cesses, as well as interactions between processes.
Basic software life cycle models, Agile, and lean development processes are explained. You
will gain an understanding of the advantages and shortcomings of plan-driven and Agile
development, and the value of applying a hybrid approach. This course will introduce Scrum
development, common Agile practices, and scaling of Agile development.
Furthermore, customizable process models will be explored. Information technology (IT) serv-
ice management and operations will be introduced as a practice that ensures implementa-
tion and delivery of quality IT services for customers. You will learn about the culture of
DevOps, and the management of safety, security, and privacy as it relates to software, data,
and information.
This course will teach you about the management and governance of IT processes and serv-
ices. You will learn about process design and deployment, as well as process tailoring. Fur-
thermore, it will introduce you to means whereby to assess, measure, and improve processes, as well as tools that support process modeling, process management, and process enactment.
Unit 1
Foundations of Software Processes
STUDY GOALS
… the role of software processes and life cycle models in software engineering.
Introduction
Software engineering has been an established field for quite some time. Many busi-
nesses and organizations are increasingly dependent on software and software-inten-
sive systems to perform their core functions efficiently. They must therefore have
access to suitable software that continues to meet their business requirements.
Private individuals are also increasingly reliant on information technology (IT) and its
associated software to ease, improve, and enrich their daily lives. Software malfunc-
tions and failures can cause enormous economic damage or even physical harm. Thus,
it is important to develop and implement quality software that is robust and fit for its
purpose. Software must also be developed and maintained in such a manner that it
adheres to acceptable levels of reliability, consistency, safety, security, usability, and
privacy. Boehm (2006) defines software engineering as “the application of science and
mathematics by which the properties of software are made useful to people” (p. 13).
Several different kinds of software engineering exist, and a number of viewpoints can be applied to explore it. As an example, Boehm (2006)
states that software engineering can be classified as “large or small; commodity or cus-
tom; embedded or user-intensive; greenfield or legacy/COTS/reuse-driven; homebrew,
outsourced, or both; casual-use or mission-critical” (p. 12). The relevant viewpoint of
the involved and affected stakeholders, the type of software engineering applied, the
purpose of the envisaged software, and the nature of the software project dictate how
software and software systems will be planned, designed, developed, implemented,
and maintained. Regardless, this will still occur by way of one or more pre-defined software processes and life cycle models that facilitate methodical and structured design, development, implementation, and maintenance of the software throughout its life cycle.
Accordingly, based on a variety of factors, such as the available technology at the time,
the range of application, prevalent worldviews, prevailing research on future trends,
and possible underlying causes of project failures in the past, a range of software engi-
neering trends have emerged and evolved over the years. These trends include focus-
ing on the engineering of computer hardware and algorithmic (once-off) programming
in the 1950s, crafting (code-and-fix) as a programming approach of the 1960s, and
attempts at formal and structured development methods in the 1970s (Booch, 2018;
Boehm, 2006). Booch (2018) and Boehm (2006) note that increasing productivity and
concurrent, risk-driven processing were the focus areas in the 1980s and 1990s respec-
tively, agility and rapid development became essential in the 2000s, and the 2010s were
characterized by global connectivity and integration. Currently, a multitude of different
software engineering methodologies, software processes, and life cycle models exist
(Kneuper, 2018). Increased awareness of underlying issues and attempts to manage the
challenges and risks associated with software development, as well as technological
advancements, bring about continuing research by academia and practitioners in the
field to improve and refine software engineering processes and models. As a result,
new trends continue to emerge, and processes and models continually evolve.
The various software processes and life cycle models that manage software develop-
ment projects and are applied in software engineering include, e.g., waterfall models,
the V-model, component or matrix-based models, iterative, incremental, and evolution-
ary development models, and agile and lean development. Regardless of the specific
model applied, a life cycle model typically includes a sequential, iterative, or evolution-
ary arrangement of the following generic phases:
• a feasibility study
• requirements elicitation
• an investigation of the existing software and associated infrastructure
• analysis and object design
• design, including the software and system design
• implementation
• verification and maintenance
The feasibility study involves investigating the requirements that are not being met by
the current software in use (if applicable), and the requirements that need to be met
by the new software. Requirements elicitation entails collecting and analyzing business
requirements and translating them into functional and technical requirements. The
investigation of the existing software and associated infrastructure entails detailed
scrutiny of the flaws of the current software and infrastructure (if applicable). It also
aims to identify additional functional and technical requirements, any possible con-
straints, data types, volumes, etc., of the software to be developed. During analysis and
design, the aim is to understand the improvement that the new software should bring
about and the design that can best achieve it. Factors, such as relevant objects, input
and output, processes converting input to output, security, privacy, backup provisions
to be made, the definition of testing, and implementation plans, are also considered.
Implementation refers to the actual implementation, testing, and use of the new software in the business environment; it includes the changeover from the old software to the new (if applicable), drafting documentation, and conducting end-user training according to defined privacy and security protocols. Finally, verification and maintenance consist of evaluating the software and ensuring its continued optimal use by the users until withdrawal.
According to Münch et al. (2012), processes and models that are suitable and appropri-
ately applied aim to reduce the complexity of large software development projects,
ensure that all members of a development project work together in a coordinated
manner, and facilitate the development of software that is consistent with quality crite-
ria, as well as within time and budget constraints. Successfully executing such projects
requires the rigorous application of a suitable approach and appropriately managing
the associated risks and challenges. However, despite rigorous planning and execution, IT projects often fail. Challenges and issues that are cited as causes of IT project failure are examined throughout this unit.
It is unfortunate that IT project failures, especially in the software domain, are often
rationalized or ignored, resulting in the repetition of mistakes by others or even by the
organization involved and affected.
To improve the rate of (or, at the least, achieve) success, we must first define it. In
project management guides and standards, such as the PMBOK Guide (Project Manage-
ment Institute, 2013) and PRINCE2 (Axinte et al., 2017), project success is generally meas-
ured by the project triangle. This triangle refers to three constraints that must be bal-
anced when planning and executing a project, i.e., the schedule (timeline of activities),
the scope (the boundaries of what is included versus what is to be excluded), and the
cost (the allocated budget for the project). Balancing these aspects means that the
extension of one of these elements will inevitably impact the others. For example, an
extension in the schedule will cost more, or a scope increase will inflate both the
schedule and the cost. More recently, a fourth element has been added: quality. Quality
can be put in the middle of the triangle, indicating that quality is dependent upon
schedule, scope, and cost, and vice versa.
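To make the coupling of the triangle concrete, consider a toy calculation. The proportional-scaling rule and the numbers below are illustrative assumptions, not part of any project management standard:

```python
# Toy model of the project triangle: scope, schedule, and cost are coupled,
# so extending one side forces the others to grow if quality is to be held.
# The proportional-scaling rule below is an illustrative assumption only.

def rebalance(scope: float, schedule_months: float, cost: float,
              scope_increase: float) -> tuple[float, float, float]:
    """Return (scope, schedule, cost) after a relative scope increase,
    assuming schedule and cost must grow in proportion to preserve quality."""
    factor = 1.0 + scope_increase
    return scope * factor, schedule_months * factor, cost * factor

# A 50 % scope increase inflates both the schedule and the cost by 50 %.
new_scope, new_schedule, new_cost = rebalance(100.0, 12.0, 500_000.0, 0.5)
```

If schedule and cost are instead held fixed while scope grows, the imbalance is absorbed by the element in the middle of the triangle: quality.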
Taking the aforementioned challenges and issues into account, it is clear that IT project
success can potentially be very difficult to concisely determine and define. For example,
when a project serves a supporting role to the core organizational functions, added fac-
tors, such as business, political, cultural, and social undercurrents, must be considered
in addition to the technical complexity of the project. This is typical for software
projects and, as a result, the boundaries and scope definitions of software projects
tend to be fluid, sometimes to the detriment of the project. Also, constant changes and
rapid technological advancements, as well as the need to be up-to-date with the latest
technological developments, often lead to rapidly changing expectations and require-
ments. Additionally, issues, such as the need to accommodate both the old and the
new simultaneously and having to replace legacy systems without interrupting core
business functions, pose more real threats of project failure. Project risks refer to
uncertainties in conditions that can influence a project outcome positively or nega-
tively. Risks are managed using
• risk identification.
• risk analysis in terms of probability and size.
• potential risk mitigation, risk avoidance, or risk transference actions.
• implementation of defined mitigation, avoidance, and transference actions (as needed).
• monitoring to determine if these were sufficient, or whether they should be adjusted.
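The steps above can be sketched as a minimal risk register. The risk entries, the probability and size values, and the exposure formula (probability × size) are all illustrative assumptions:

```python
# Minimal risk-register sketch: identify risks, analyze probability and size,
# rank by exposure, and focus monitoring on the largest exposure.
# All entries and values below are illustrative.

def exposure(probability: float, size: float) -> float:
    """Risk exposure as probability (0..1) times size (impact, e.g., in EUR)."""
    return probability * size

risks = [
    {"name": "key developer leaves", "probability": 0.2, "size": 50_000},
    {"name": "interface to legacy system fails", "probability": 0.5, "size": 80_000},
    {"name": "requirements change late", "probability": 0.7, "size": 30_000},
]

# Rank risks by exposure, highest first, so mitigation effort can be prioritized.
ranked = sorted(risks, key=lambda r: exposure(r["probability"], r["size"]),
                reverse=True)
top_risk = ranked[0]["name"]
```

Ranking by exposure is one common way to decide which risks warrant mitigation, avoidance, or transference actions first.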
Specific and unique sources of risks for IT projects are technological, organizational,
and user-dependent (Taherdoost & Keshavarzsaleh, 2015). Technological risks include,
e.g., the technology of the product or service failing to integrate or interface sufficiently
• Validity refers to the inclusion of appropriate and relevant functions, where user
communities may have to compromise if these are too diverse.
• Consistency refers to requirements that do not conflict or have contradictory con-
straints or descriptions.
• Completeness ensures that all functions and constraints that the user intended are
clearly defined.
• Realism means that requirements can be implemented within budget, time, and
technological constraints.
• Verifiability refers to the ability to use assessment criteria to confirm whether the
software will meet specified requirements after implementation.
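Some of these attributes can be partially checked by tooling. The sketch below flags completeness and verifiability with a naive keyword heuristic; the field names and the unit list are illustrative assumptions, not an established validation technique:

```python
# Heuristic checks for two of the requirement attributes above:
# completeness (all mandatory fields present and non-empty) and
# verifiability (a measurable acceptance criterion is stated).
# The field names and unit keywords are illustrative assumptions.

MANDATORY_FIELDS = {"id", "text", "acceptance_criterion"}

def is_complete(req: dict) -> bool:
    """Completeness: every mandatory field is present and non-empty."""
    return all(req.get(field) for field in MANDATORY_FIELDS)

def is_verifiable(req: dict) -> bool:
    """Verifiability: the acceptance criterion mentions a measurable unit."""
    units = ("ms", "seconds", "%", "users", "transactions")
    criterion = req.get("acceptance_criterion", "").lower()
    return any(unit in criterion for unit in units)

req = {"id": "R-12", "text": "The system shall respond quickly.",
       "acceptance_criterion": "95 % of requests answered within 200 ms"}
```

A vague criterion such as "be fast" would fail the verifiability check, because no assessment criterion can confirm it after implementation.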
Development teams must have both a business and a technical understanding of the
requirements formulated by stakeholders in order to deliver a highly usable and pur-
poseful software system. As software systems are generally integrated into complex
application landscapes via a multitude of technical interfaces, development teams
must be able to identify both business and technical relationships across organiza-
tional and system boundaries. Thus, project teams with business understanding, as
well as technical knowledge of applications, can deal more effectively with the intricate
combination of interrelated and interdependent business, functional, and technical
requirements.
When software projects focus largely on the technology without addressing the benefits
for the end-user, this is referred to as “technology-centeredness.” Similar problems occur
when challenges to technology are only superficially explored, thereby leading to a
high risk of project failure. It is more convenient (and exciting) for developers to dis-
cuss these interesting topics and put new technologies into use than to confront the
business and technical problems of users. In some cases, this leads to systems being
delivered that reflect the latest technology trends but are not designed according to
the actual needs of the users.
Finally, the lack of communication and coordination between those involved in and
affected by a software system also causes project failures. Development projects for
software systems can involve various (diverse) departments (e.g., marketing, sales, IT
application development, external consultants, and the legal department), and each
organizational unit might have unique ideas and objectives that allow them to accom-
plish their tasks more effectively.
Thomas S. Kuhn (1962) introduced the concept of paradigms and paradigm shifts in his book The Structure of Scientific Revolutions. Paradigms are still applied today as a lens through which to explore and explain phenomena, such as the processes and models that are applied to plan and structure development tasks and projects. Therefore, even though the basic elements of software processes and life cycle models are fairly alike, the fundamental paradigm of the respective software process and life cycle model guides its choice and determines how it is to be implemented and executed.

Paradigm
A paradigm refers to an accepted model or pattern and represents the way people perceive, view, and explore their world.
Paradigm shifts occurred over the last few decades in the way that software was (and
is) perceived and used. Shifts also occurred in the way that software is designed, devel-
oped, implemented, and maintained, and software processes and life cycle models
evolved accordingly. The emergence of a new paradigm does not necessarily render the
previous one invalid; Kuhn (1962) argued that it merely reflects new and different ways
of understanding and doing. Therefore, various paradigms can co-exist, and one will be
chosen over another based on, for example, organizational culture, specific require-
ments, or particular circumstances.
About a century after the pioneering designs of Babbage and Lovelace, sufficient resources became readily available and technological advancements were at such a level that the first programmable computers could actually be built. These were for government and military use at first, but they were
also adapted for civil use after the Second World War (Zuse, 1980). During the Second
World War, different countries (i.e., Germany, Great Britain, and the United States) were
concurrently busy studying computer technology. Without knowing of each other’s
work, their respective designs were quite similar, as they were mostly inspired by the
work of Babbage and Lovelace (Aiken & Hopper, 1946; Eckert et al., 1951; Flowers, 1983;
Zuse, 1980).
After the war, computer technology advanced astonishingly quickly, from the “51 feet
long and 8 feet high” (Aiken & Hopper, 1946, p. 386) Mark I computer intended solely for
government use, to inexpensive and relatively small (for the time) single-chip micro-
processor architecture. This ultimately resulted in portable and personal computers
that are, nowadays, relatively cheap and therefore widely used in various institutions,
including government and military organizations, businesses, universities, schools, and
private households. Standardized software processes and life cycle models evolved
accordingly, albeit somewhat slower by comparison.
As programmable hardware was being built from the 1940s onward, supporting software became necessary soon thereafter. The initial programming languages, e.g.,
Konrad Zuse’s Plankalkül language, were largely algorithmic and based on Boolean
logic (Zuse, 1980; Boole, 1847). In his 1940 book Punched Card Methods in Scientific Computation, Wallace John Eckert published a pattern language that was essentially viewed as the first programming methodology (Eckert, 1940). Commercialization and the wider
use of computers soon led to acknowledgement that standardization was required in
the programming field. Thus, compiler programs were born, such as the one completed
by Grace Hopper in the early 1950s, which is considered to be the first compiler pro-
gram (Hopper & Mauchly, 1997).
John Pinkerton, an engineer at the Lyons Electronic Office (LEO), also realized that low-
level and repetitive programming tasks could be bundled into a library of common, re-
usable routines. Before long, programmers Grace Hopper, Robert Bemer, and Jean Sam-
met, influenced by the work of John Backus, created the powerful, business-oriented
programming language Cobol, and the predecessors to open-source software (the
SHARE organization) and the idea to establish and outsource software development as
a business emerged. This opportunity was seized by British programmer Dina St John-
son when she founded the first software development house (Booch, 2018).
It is evident from the preceding discussion that software engineering developed over
many years, starting with typical craft-based, trial-and-error approaches. These make-
and-fix approaches continued to produce expensive artefacts, quite often behind
schedule. Development approaches evolved over time into methodological (engineer-
ing-based) approaches that are more time- and resource-efficient, and structured.
Benington (1983) was the first to present a formal and structured description of
sequenced activities whereby to develop software; it was presented at a 1956 sympo-
sium dedicated to advanced programming methods for digital computers. However, it
was only in the late 1960s that the term “software engineering” was formally intro-
duced. The term is said to have been coined by Friedrich Bauer in 1968 at the first NATO
software engineering conference, where the organizers of this conference recognized
software as becoming an integral part of communities and organizations alike, and
structured software development approaches became more prominent (Naur & Ran-
dell, 1969). Even so, the term “software engineering” may have emerged earlier; a letter
authored by Anthony Oettinger was published in 1966 in the Communications of the
ACM, which used the term “software engineering” to distinguish between computer sci-
ence and the practice of building software (Oettinger, 1966). Earlier still, an adver-
tisement was published in June 1965 in an issue of Computers and Automation, seeking
a “systems software engineer.” However, unpublished oral history states that Margaret
Hamilton actually coined the term “software engineering” in the early 1960s to distin-
guish her work from that done in the hardware engineering field (Booch, 2018). From
the 1960s to the 1980s, the field of software development and engineering advanced
rapidly. The following advancements occurred in this time period (Booch, 2018):
The software development life cycle (SDLC) model, as introduced by Royce (1970), is still viewed by many as the first and traditional software development approach, as it greatly influenced software development practices (Avison & Fitzgerald, 2006). The SDLC model was, however, described in more general terms, similar to the set of sequential development activities presented by Benington in 1956 (Benington, 1983). It comprises activities such as the definition of system and software requirements, design and analysis, programming and coding, testing, and continued operations. Royce (1970) argued that these successive development phases should ideally be iterated before proceeding to the next phase, rather than executed strictly sequentially. The term “waterfall,” which is still widely associated with the traditional SDLC model, was introduced by Bell and Thayer (1976), who referred to it as a top-down approach in which software is developed through sequences of activities.
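Royce's feedback idea, reviewing and repeating a phase before the project advances, can be sketched as a simple control loop. The phase names and the fixed number of review passes are illustrative assumptions:

```python
# Sketch of Royce's feedback idea: each phase is reviewed and repeated a
# bounded number of times before the project advances to the next phase.
# The phase names and the fixed review count are illustrative assumptions.

PHASES = ["requirements", "analysis", "design", "coding", "testing", "operations"]

def run_life_cycle(max_reviews_per_phase: int = 2) -> list[str]:
    """Execute the phases in order, logging every attempt; a phase is
    iterated up to max_reviews_per_phase times before the project advances."""
    log = []
    for phase in PHASES:
        for attempt in range(1, max_reviews_per_phase + 1):
            log.append(f"{phase}:attempt-{attempt}")
    return log

log = run_life_cycle()
```

The strictly sequential waterfall reading corresponds to `max_reviews_per_phase=1`; Royce's argument is that at least one feedback pass per phase should be planned.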
While Agile development and the principles behind it were formally introduced in the
early 2000s (The Agile Manifesto was published in 2001), the Scrum method was pre-
sented earlier. “Scrum” was introduced by Takeuchi and Nonaka in 1986. Soon after, Kent Beck introduced Extreme Programming, and Johnson and Fowler developed refactoring (Booch, 2018). Agile development consists of techniques that are referred to by the founders of the Agile development principles as “light” and that support rapid development (Highsmith, 2001). It successfully facilitates iterative and incremental development, as well as continuous verification of requirements and the subsequent evolution of products (Hijazi et al., 2012). Agile, in the broadest sense, refers to a type of cultural
approach whereby to rapidly design and develop user-centric software. A high-level
historical overview is shown below.
Summary
Project management and engineering principles are applied to design, develop, and
maintain software, from initialization to withdrawal. They reduce complexity and risk and facilitate communication among team members. Still, the failure rate of software (and IT) projects remains high. Causes of failure continue to be identified, and man-
agement of these projects continues to be researched and refined.
Knowledge Check
You can check your understanding by completing the questions for this unit on the
learning platform.
Good luck!
Unit 2
Software Process Definition and Modeling
STUDY GOALS
… how to apply the Unified Modeling Language (UML) to model software-intensive systems.
… to use the detailed-level notation Business Process Model and Notation (BPMN).
Introduction
Software processes are dependent upon internal and external factors, people, and cir-
cumstances; they are complex, unpredictable, and challenging to rationally describe.
Activities, resources, and constraints associated with software processes are difficult to
manage (Bendraou et al., 2010). Software process definition and modeling enables opti-
mal design, development, and implementation of robust software; it also facilitates
optimal management of risk as well as verification and maintenance of implemented
software. Models help to understand, visualize, and communicate desired structure,
behavior, and architecture of software-intensive systems, and they also guide construc-
tion and documentation of decisions (Booch et al., 2005).
The Systems Modeling Language (SysML), for example, is defined as a modeling “language for representing systems that may include combinations of hardware, software, data, people, facilities, and natural objects” (p. 3). It was derived from, and extends a portion of, UML.
Kneuper (2018) differentiates between process interaction notation, notation for proc-
ess-internal structure, and combination notation. He classifies life cycle diagrams as
process interaction notations, and notations that are typically applied to model
requirements and business processes are classified under process-internal structures.
The Object Management Group (OMG) (n.d.) developed a meta-modeling architecture
that supports the description of widely used languages, such as UML and BPMN, refer-
red to as the Meta-Object Facility (MOF). MOF is essentially a meta-meta-model, as it
describes the notation used for meta-models (Kneuper, 2018). MOF supports the Soft-
ware Process Engineering Metamodel (SPEM) and provides a basis for tool support.
SPEM was introduced by the OMG and embodies a process engineering meta-model
and a conceptual framework to provide necessary concepts to model, document,
present, manage, interchange, and enact development methods and processes (Münch
et al., 2012).
• structural, behavioral, grouping, and annotational things, which are essentially abstractions of logical or physical entities that are representative of the problem or solution being modeled
• relationships that tie all the elements together
• diagrams, which are groupings of meaningful collections

Class
A class is an abstract representation of an object and describes a set of common objects.
Then, the rules must be applied to ensure harmonious models; they are semantic rules
to describe and define names, scope, visibility, integrity, and execution. Furthermore,
common mechanisms should be applied in terms of specifications, adornments, com-
mon divisions, and extensibility mechanisms. All of these are applied to draw diagrams
that represent views of the problem space and a proposed solution.
The first class (1), which has the name “Window,” indicates that a “Window” has the
attributes “origin” and “size” and can perform the operations “open,” “close,” “move,”
and “display.” The second class (2) has the name “Customer” to indicate that its objects
will contain information about customers; it specifies that a “Customer” has the attrib-
utes “name” and “address,” and it indicates that a “Customer” can perform the opera-
tions of “ordering,” “collecting,” and “paying.” In the example above, the third class (3)
shows how notations can also be used to indicate the status of the operations. For
example, in the class “Transaction,” the public (+), protected (#), and private (-) opera-
tions are specified. The name of the class “Transaction” is written in italics to indicate
that this is an abstract class. These examples are simplified for the purpose of illustrating how to model them, as classes typically contain more attributes. Objects are a spe-
cial form of a class; they represent concrete manifestations of classes. When objects
are modeled, the attributes are associated with actual values. An example is shown
below.
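The “Customer” class, and a concrete object of it, can also be written directly in code. The following Python sketch mirrors the UML example; the attribute values and method bodies are illustrative assumptions:

```python
# The UML class "Customer" (attributes: name, address; operations: ordering,
# collecting, paying) rendered as code, plus one concrete object of the class.

class Customer:
    def __init__(self, name: str, address: str):
        self.name = name          # attribute "name"
        self.address = address    # attribute "address"
        self.orders: list[str] = []

    def order(self, item: str) -> None:    # operation "ordering"
        self.orders.append(item)

    def collect(self) -> list[str]:        # operation "collecting"
        return list(self.orders)

    def pay(self, amount: float) -> str:   # operation "paying"
        return f"{self.name} paid {amount:.2f}"

# An object is a concrete manifestation of the class: its attributes get values.
c = Customer(name="A. Smith", address="12 Main Street")
c.order("course book")
```

The class defines the common shape; the object `c` binds the attributes to actual values, just as an object diagram does.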
A use case describes a set of sequenced actions performed by a system that yield a
tangible, valuable result for a specific actor. It is applied to structure behavioral things
and is realized by a collaboration. It is drawn as an ellipse with solid lines. It typically
includes its name in the ellipse; however, the name can also be shown underneath the
ellipse. A use case “Complete transaction” is indicated below. In this example, the use
case “Complete transaction” refers to the collective actions that must be performed to
complete a specific transaction as it is encapsulated in and required for a specific busi-
ness process and for an actor. Both notations are shown—the first includes the name in
the ellipse, the second shows the name below the ellipse.
A use case can also be indicated using a standard rectangle with an ellipse icon drawn
in the upper right-hand corner of the rectangle. Furthermore, it may also include sepa-
rate compartments to show additional features. In the example below, a use case “User
registration” includes additional features to register a user profile.
An active class is a class with objects that owns one or more processes or threads; it
can initiate control activity and its behavior is concurrent with that of other elements.
It is drawn as a class, but with a heavy outline, and may include its name, attributes,
and operations. The active class “EventManager” is shown below.
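An active class can be mirrored in code as an object that owns its own thread of control. The sketch below renders “EventManager” with Python's standard threading module; the event-queue behavior is an assumption made for illustration, not part of the UML example:

```python
# Active class sketch: "EventManager" owns a worker thread that drains an
# event queue concurrently with the rest of the program. The queue-based
# behavior is an illustrative assumption.
import queue
import threading

class EventManager:
    def __init__(self):
        self.events: "queue.Queue[str]" = queue.Queue()
        self.handled: list[str] = []
        # The owned thread of control is what makes this class *active*.
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self) -> None:
        while True:
            event = self.events.get()
            if event is None:          # sentinel value stops the thread
                break
            self.handled.append(event)

    def post(self, event: str) -> None:
        self.events.put(event)

    def shutdown(self) -> None:
        self.events.put(None)
        self._worker.join()

mgr = EventManager()
mgr.post("clicked")
mgr.post("closed")
mgr.shutdown()
```

Because the worker thread runs concurrently with the caller, the behavior of `EventManager` is concurrent with that of other elements, exactly the property the heavy outline denotes.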
Components use two types of interfaces: provided and required. A provided interface is one that the component realizes and offers to its environment. In the example above, the component “NewsServices” realizes the interface “News bulletin” when a news bulletin is being broadcast. A required interface is one that the component depends on in order to function. The example above shows that the component “NewsServices” requires the interface “NewsFilter” (e.g., to provide current news, news must be filtered so that only current news is aired).
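The provided/required split maps naturally onto abstract interfaces in code: the component realizes its provided interface and receives its required interface from outside. The class and method names below are illustrative assumptions:

```python
# Component "NewsServices": it *provides* the NewsBulletin interface
# (realizes it) and *requires* a NewsFilter to do its work. Names and
# method signatures are illustrative assumptions.
from abc import ABC, abstractmethod

class NewsBulletin(ABC):                 # provided interface
    @abstractmethod
    def broadcast(self) -> str: ...

class NewsFilter(ABC):                   # required interface
    @abstractmethod
    def current(self, items: list[str]) -> list[str]: ...

class RecencyFilter(NewsFilter):
    def current(self, items: list[str]) -> list[str]:
        return [item for item in items if item.startswith("today:")]

class NewsServices(NewsBulletin):
    def __init__(self, news_filter: NewsFilter):
        self.news_filter = news_filter   # required interface, injected

    def broadcast(self) -> str:          # provided interface, realized
        items = ["today: markets up", "last week: storm"]
        return " | ".join(self.news_filter.current(items))

bulletin = NewsServices(RecencyFilter()).broadcast()
```

Injecting the required interface keeps the component replaceable: any `NewsFilter` implementation can be wired in without changing `NewsServices`.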
Behavioral things and abstractions, representing behavior over space and time, include
interactions and state machines. A state machine specifies the sequences of states that
an object or interaction undergoes during its lifetime, including its responses to the
events. It is drawn as a rounded rectangle and typically includes its name and sub-
states (if any). State machines can represent behavioral states or protocol states.
Behavioral states specify discrete behavior, e.g., an electronic banking system waiting
for a customer to transact. A state may also include labels, such as activity labels, to
indicate behavior performed upon entering, while in, or upon exiting the state. The
behavioral state machine “Waiting,” with activity labels “Entry” and “Exit,” is shown in the example below.
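The idea of entry and exit activity labels can be sketched in code; the logged activities below are invented for illustration:

```python
class WaitingState:
    """Sketch of a behavioral state with entry and exit activities."""

    def __init__(self, log):
        self.log = log

    def enter(self):
        # "Entry" activity label: performed once, on entering the state
        self.log.append("entry: start inactivity timer")

    def exit(self):
        # "Exit" activity label: performed once, on leaving the state
        self.log.append("exit: cancel inactivity timer")

log = []
state = WaitingState(log)
state.enter()
# ... the object remains in "Waiting" until an event arrives ...
state.exit()
```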
Protocol states in state machines present the external view of a protocol. An interaction
comprises a set of messages that are exchanged among a set of objects within a specific
context and for a specific purpose. A message is drawn as a directed line and is labeled
with the name of the operation. A diagram containing protocol state machines (simple
states) with protocol transitions is shown below.
Boxes (referred to as packages) are used to group things and abstractions; models can
be grouped into packages. A package is a general-purpose, purely conceptual mechanism
that is used to organize and group elements. Variations of packages are frameworks,
models, and sub-systems. Packages are drawn as tabbed folders and typically
include only their names, but can also show their contents. A package “Transaction
rules” is shown below.
The following figure is a model diagram that includes a “container” model (“Layered
application”). It contains three other models, and their associated packages and
dependencies are indicated.
Annotational things and abstractions are comments that describe and illuminate ele-
ments in a model; in other words, they are notes. A note is a symbol used to render
constraints and comments attached to an element or collection of elements. It is
drawn as a rectangle with a dog-eared corner and includes a textual or graphical com-
ment as shown below.
The basic forms of relationships are dependency, association, generalization, and
realization. A dependency denotes a semantic relationship where a change to the
independent thing affects the semantics of the dependent thing. It is drawn as a dashed
line and may include a label. An association is a structural relationship that describes
a set of links (a link is a connection between objects). It is drawn as a solid line, and
may include a label and other association adornments, e.g., aggregation, role names,
and multiplicity. In the example below, the multiplicity indicates that zero or more
students (denoted with the asterisk *) can be associated with between zero and one
lecturers. A generalization shows a relationship where the specialized (child) element’s
objects are substitutable for those of the generalized (parent) element, i.e., the child
object shares the parent object’s structure and behavior. It is drawn as a solid line with
a hollow arrowhead pointing in the direction of the parent. A realization is a semantic
relationship between classifiers; one classifier specifies a contract that another
guarantees to carry out. Realizations occur between interfaces and the classes or
components that realize them, and also between use cases and the collaborations that
realize them. In addition to these basic relationships, the following variations can occur:
• refinement
• trace
• include
• extend (for dependencies)
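These relationships have rough counterparts in object-oriented code. The sketch below reuses the student/lecturer example from the text; the class bodies and the `Printable` contract are invented for illustration.

```python
from abc import ABC, abstractmethod

class Printable(ABC):
    """A realization contract: whoever realizes it must supply render()."""
    @abstractmethod
    def render(self) -> str: ...

class Person:
    def __init__(self, name: str):
        self.name = name

class Student(Person):              # generalization: substitutable for Person
    pass

class Lecturer(Person, Printable):  # generalization plus realization
    def __init__(self, name: str):
        super().__init__(name)
        self.students = []          # association end, multiplicity 0..*

    def render(self) -> str:        # fulfils the Printable contract
        return f"Lecturer {self.name}, {len(self.students)} student(s)"
```

Appending a `Student` to `students` creates a link (an instance of the association), and `isinstance` checks demonstrate the substitutability that generalization promises.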
Rules in UML
Rules ensure that models are well-formed, i.e., that they are semantically self-consistent
and harmonious with related models. Models follow semantic rules for names (what
things can be called), scope (the context that gives a name specific meaning), visibility
(how names can be seen and used by others), integrity (how things properly and
consistently relate to one another), and execution (what it means to run or simulate a
dynamic model).
Since models tend to evolve, and are typically viewed by many stakeholders in various
ways and at different times, models can also be built to be elided (certain elements
hidden for a simplified view), incomplete (certain elements missing), or inconsistent
(the integrity of the model not guaranteed). Hence, to satisfy diverse stakeholder
groups, compromises are made between the level of abstraction and the amount of
detail. These models may also naturally evolve and mature over time as more information
becomes available.
UML diagrams
A UML diagram graphically presents a set of elements. These diagrams are used to visu-
alize a system from various perspectives and are typically drawn as connected graphs
of vertices (things) and arcs (relationships). UML distinguishes between several types
of diagrams. The main categories are static structure diagrams and dynamic behavior
diagrams. They are illustrated below.
Dynamic behavior diagrams include the following:

• activity diagrams
• use case diagrams
• state machine diagrams
• interaction diagrams, which comprise:
  • communication diagrams
  • sequence diagrams
  • interaction overview diagrams
  • timing diagrams
In practice, diagrams are used selectively and on a case-by-case basis, as they are
applicable and useful for the context to be modeled, as well as for the stakeholders
involved. The table below indicates static structure and dynamic behavior diagrams
that are often used in UML.
The class diagram is one of the most frequently used diagrams; it models classes with
their associated attributes, methods, and relationships. The other types of UML dia-
grams are essentially based on the modeling concepts of the class diagram. The struc-
tures in which information is stored within the relevant IT systems are made visible by
class diagrams. Below is an example that shows relevant associations between these
classes. In this diagram, the multiplicity indicates that any number of customers (denoted
by the asterisk *) can visit one of up to 60 (denoted by 1..60) available branches of
a store. The superclass “Store” heads a single-inheritance hierarchy, meaning that the
two subclasses (the “Clothing department” and the “Cosmetics department”) inherit all
the features of only one superclass, “Store,” and may have additional features of their own.
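The single-inheritance hierarchy and the multiplicities can be approximated in code as follows; the branch-number check and the `season` attribute are invented for illustration.

```python
class Store:
    """Superclass; at most 60 branches exist (multiplicity 1..60)."""
    MAX_BRANCHES = 60

    def __init__(self, branch_no: int):
        if not 1 <= branch_no <= Store.MAX_BRANCHES:
            raise ValueError("branch number must lie in 1..60")
        self.branch_no = branch_no
        self.customers = []     # association end with multiplicity 0..*

# Single inheritance: each subclass has exactly one superclass, Store.
class ClothingDepartment(Store):
    def __init__(self, branch_no: int, season: str):
        super().__init__(branch_no)
        self.season = season    # a feature of its own

class CosmeticsDepartment(Store):
    pass
```

Each subclass inherits `branch_no` and `customers` from `Store`, while the unbounded `customers` list mirrors the `*` multiplicity on the customer end.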
A use case diagram describes a system’s functionality from the perspective of the users
(also referred to as actors in UML). They do not describe procedures, but are non-tech-
nical diagrams that show, e.g., an overview of a business process. They show associa-
tions or interactions (using lines) between named actors (indicated by stick figures or
any suitable symbol indicating applicable characteristics) and relevant use cases (indi-
cated by ellipses). These interactions are from the perspective of the user and there-
fore also entail a view of the user requirements. In terms of the software systems, use
case diagrams give an external view. An example of a use case diagram for a portion of
a business system that entails purchasing (i.e., “Purchasing of equipment”) is shown
below.
In this example, the involved actors are a “Purchaser,” a “Sales person,” and an “Opera-
tor.” They are involved in purchasing equipment (as the depicted business process), so
the focus of this diagram is on the process of purchasing equipment, hence, the sub-
ject is indicated in the diagram as “Purchasing of equipment.” It entails the following
use cases “Obtain quote,” “Check quote,” “Accept quote,” “Receive order,” “Ship order,”
and “Sign-off on purchase.” The “Include” relationship, indicated between the use cases
“Receive order” and “Ship order,” signifies that receiving an order also includes ship-
ping an order. To summarize, the purchaser will obtain a quote, check it, accept it, and
finally sign-off on the purchase made. The salesperson will also be involved in the
functionalities to check quotes, receive orders, and sign-off on purchases. The operator
will be involved to receive order information and ship orders. These are not necessarily
depicted in chronological order; the diagram merely shows the required functionalities
and the actors’ involvement. The actors in use case diagrams can be humans (as in
the example above), but they can also be non-human and subsequently be depicted by
class diagrams. For example, an actor can be a piece of hardware, software, or another
system.
Following on from use case diagrams, sequence diagrams show the sequenced interac-
tions between the objects and messages. They are used to expand the detail of a use
case, and show the lifelines of objects, i.e., they help to visualize and specify the flow of
control and they indicate, in a visual and simplified way, when objects should start,
iterate, and cease to exist. Sequence diagrams describe a chain of chronological inter-
actions; they indicate data and information that are exchanged between actors. They
are widely used for their simplicity and ease of use as they require few graphical ele-
ments and are easy to read. Similarly, communication diagrams show the relationships
between objects, but the focus is on their topology. Sequence diagrams and communi-
cation diagrams essentially show what happens in a system when a user accesses and
uses it. In practice, sequence diagrams are used more often than communication dia-
grams (Baumann et al., 2005).
UML allows for variations of diagrams. In the top diagram of the figure below, a basic
high-level sequence diagram shows an object “Customer” that orders, pays for, and
receives equipment from an object “Store,” which receives the payment and ships the
equipment. The diagram is annotated with comments, e.g., indicating that the
“Customer orders the equipment.” The messages are depicted in increasing chronologi-
cal order, from the top to the bottom, with the direction of the arrows indicating the
direction in which the message is being sent. In the second diagram of the figure below,
a basic high-level sequence diagram is illustrated. It shows what happens when the
object “Customer” obtains a quote and places an order. The diagrams below are simpli-
fied examples that do not indicate what happens if any errors are encountered.
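The first of these sequences can be traced as ordinary method calls, with a trace list standing in for the diagram's top-to-bottom message order. The method names below are invented for this sketch.

```python
class Store:
    """Receiver of the Customer's messages in the sequence diagram."""
    def __init__(self, trace: list):
        self.trace = trace

    def order(self, item: str):
        self.trace.append(f"Customer -> Store: order {item}")

    def pay(self, amount: int):
        self.trace.append(f"Customer -> Store: pay {amount}")
        self._ship()                       # Store reacts by shipping

    def _ship(self):
        self.trace.append("Store -> Customer: ship equipment")

trace = []
store = Store(trace)
store.order("drill press")   # message 1
store.pay(499)               # message 2, which triggers message 3
```

Reading the trace from top to bottom reproduces the increasing chronological order of messages in the diagram.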
Activity diagrams describe the procedures that make up a process; they are related to
flowcharts and illustrate the chronological and parallel order of activities, and relevant
alternatives. They facilitate functional thinking, rather than object-oriented thinking,
and are therefore particularly useful when exploring and illustrating business pro-
cesses (Baumann et al., 2005). Below is an example of an elementary high-level activity
diagram indicating the procedures involved to receive, accept, and reject a quote.
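The same control flow can be written as a small function; the decision condition (a budget limit) is invented for illustration.

```python
def process_quote(quote: float, budget: float) -> str:
    """Receive a quote, then take one branch of the decision node."""
    # initial node -> action: receive the quote (the parameter)
    if quote <= budget:        # decision node with a guard condition
        return "accepted"      # activity: accept the quote
    return "rejected"          # alternative edge: reject the quote
```

The `if`/`else` branches correspond to the two outgoing edges of the activity diagram's decision node.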
State machine diagrams illustrate the different legitimate states in which an object may
be; they also indicate the states that objects may transfer to when events occur and
when they receive messages. The “state” of these diagrams refers to a specific set of
values that the attributes of the object have at a specific time. The object’s state
changes when the values of the attributes change. Activity diagrams initially originated
as a variation of state machine diagrams that focuses on internal flows and activities of
an object, a set of objects, or an entire use case. Activity diagrams were re-formalized
in the latest version of UML and are now based on semantics similar to those of Petri
nets, which widens the range of situations that can be modeled. State machine
diagrams give either a behavioral view, meaning that they show everything that can
happen with or to an object, or a protocol (external) view. The following figure
shows an example of a simple behavior state machine diagram for the object “Order”
and a protocol state machine diagram for indicating whether a “Product line” is opera-
tional or inactive (deactivated).
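A behavioral state machine for an object like “Order” is commonly implemented as an explicit transition table. The states and events below are plausible but invented for this sketch.

```python
from enum import Enum, auto

class OrderState(Enum):
    PLACED = auto()
    SHIPPED = auto()
    DELIVERED = auto()
    CANCELLED = auto()

# Legal transitions: (event, current state) -> next state
TRANSITIONS = {
    ("ship",    OrderState.PLACED):  OrderState.SHIPPED,
    ("deliver", OrderState.SHIPPED): OrderState.DELIVERED,
    ("cancel",  OrderState.PLACED):  OrderState.CANCELLED,
}

class Order:
    def __init__(self):
        self.state = OrderState.PLACED   # initial state

    def handle(self, event: str):
        key = (event, self.state)
        if key not in TRANSITIONS:       # illegitimate transition
            raise ValueError(f"{event!r} is not legal in {self.state.name}")
        self.state = TRANSITIONS[key]
```

Only the transitions listed in the table are legitimate; any other event raises an error, mirroring the diagram's restriction to legal states and transitions.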
SysML is regarded as flexible and expressive, and is relatively easy to learn and to
apply. It is aligned with the standard IEEE-Std-1471-2000, i.e., the IEEE Recommended
Practice for Architectural Description of Software Intensive Systems (IEEE, 2000). Frie-
denthal et al. (2012) explain that SysML concepts involve the following three parts:
1. an abstract syntax or schema that defines the language concepts and is described
by means of a meta-model
2. a concrete syntax or notation that defines how language concepts are represented
and is described by means of notation tables
3. semantics or meaning that gives the meaning of the language concepts
The language concepts of SysML are organized into the following categories:

• model elements,
• requirements,
• blocks,
• activities,
• constraint blocks,
• ports and flows, and
• allocations.
Friedenthal et al. (2012) say that a block is “the principle structural construct of SysML”
(p. 119) and define it as “the modular unit of structure…used to define a type of system,
component, or item that flows through the system, as well as external entities, concep-
tual entities or other logical abstractions…describes a set of uniquely identifiable
instances that share the block’s definition…is defined by the features it owns, which
may be subdivided into structural features and behavioral features” (p. 119). A block is a
representation of a conceptual (logical) or physical entity or object. It can describe
components that are reusable across various systems. A block is similar to an object-
oriented class in that it describes a set of similar instances or objects that share com-
mon characteristics. The table below defines structural (block) diagrams.
The Multi-View Process Modeling Language (MVP-L) is used to formally describe pro-
cesses, in terms of external behavior, without considering internal structure. MVP-L ori-
ginated in the 1980s at the University of Maryland, USA, and continued to be refined at
the Universität Kaiserslautern, Germany. Rombach and Verlage (1993) explain that “MVP
focuses on process models, their representation, their modularization according to
views” and that “MVP-L was designed to help build descriptive process models, package
them for reuse, integrate them into prescriptive project plans, analyze project plans,
and use these project plans to guide future projects” (p. 154). MVP-L uses processes,
products, resources, and quality attributes, and applies instantiations of these project
plans. Münch et al. (2012) explain that processes refer to the activities performed dur-
ing a project that produce, consume, or modify products. Products are the resultant
software products of the development or maintenance processes, including the final
software, documentation, etc. Resources are the entities, such as personnel and tools,
that are required to perform these processes. Attributes refer to defined, measurable
properties of products, resources, and processes.
All of the diagrams presented in the first and third section of this unit can also be used
to model interactions between processes and connected segments of larger processes.
As an example, both activity diagrams and BPMN represent interactions between single
steps, activities, or tasks. Complex processes and processes addressing different and
diverse parts of a business and system can be modeled by dividing them into sub-pro-
cesses (or segments) and modeling these individually to visualize the detail, then com-
bining them again using one overview diagram.
The example below shows three diagrams. The first diagram illustrates the sub-process
(these are fragments or segments of a bigger business process) followed to obtain a
quote for an order to be placed. The second diagram illustrates the ensuing sub-proc-
ess that details the activities when an order is placed; this diagram also shows that an
activity diagram can include an object (i.e., “Invoice”). In the first activity diagram,
partitions are depicted using swim lane notation, indicating interactions between the
stakeholders—a customer and a supplier. The final diagram gives a high-level overview
where these are combined to show the overall purchasing process using BPMN nota-
tion.
Moreover, behaviors of use cases can be described using natural language text or dif-
ferent UML diagrams. The example below shows that the use case “User registration”
owns the behavior represented by the activity of “User registration”. This illustrates the
binding of a use case to an activity. The activity diagram that follows shows the behav-
ior of the activity to register a user with an online retailer.
Process patterns describe tasks, stages, and phases from the bottom-up, and with ref-
erence to relevant activities, actions, products, and behaviors to solve a problem. A
pattern can include, e.g., the problem, context, solution, typical roles, and artifacts, as
in the approach presented by Neatby-Smith (1999). In this approach, task patterns define
steps that must be executed to complete tasks. Stage process patterns include several
task process patterns that are required to be completed in order to move to a next
stage, and phase patterns are comprised of two or more stage patterns. Furthermore,
input and output artifacts, as well as roles, are assigned for each pattern. In contrast to
task process patterns and phase patterns, only stage process patterns follow an actual
pattern structure, which comprises initial context, solution, project tasks, resulting
context, secrets of success, and process checklist.
Another modeling notation that is particularly relevant, and widely used as part of soft-
ware engineering, is the Business Process Model and Notation (BPMN). It is based on
business process management and modeling principles (Mathias, 2019). It was origi-
nally developed by the Business Process Management Initiative (BPMI) and is now also
maintained by the Object Management Group (OMG) (these organizations merged in
2005). BPMN remains, regardless of some drawbacks and issues, one of the most widely
applied notations, and is used by both software architects and business analysts (Kos-
sak et al., 2014). The ISO standard for BPMN is ISO/IEC 19510:2013, which provides an
easily understandable and readable notation and is accordingly suitable for both tech-
nical and non-technical users and audiences (International Organization for Standardi-
zation, 2013). BPMN diagrams describe a similar type of activity as UML activity dia-
grams, but they use a different notation. It is process-oriented, making it useful in the
business process domain, and hence used to illuminate a business process. BPMN is an
expressive notation that describes the flow of activities within a process. Diagrams are
graphical and deliberately kept simple to ease understanding and readability. They
typically employ elements such as events, activities, gateways, connecting sequence
and message flows, and pools and lanes.
A BPMN diagram can also contain other illustrations, e.g., documents, different types of
catching and throwing messages, compensations, timers, errors, signals, and links. A
selection of BPMN elements is illustrated in the following figure.
An example of a BPMN diagram is shown below. The diagram illustrates the flow of
activities in the “Project governance” business process, which describes the process of
assessing a project’s front-end loading (FEL) status, involving stakeholders and repre-
sentatives from the Project Management Office (PMO), a review team, the project team
and project management, and document management.
The use of BPMN does not eliminate the need for other system- and software-centric
notations to visualize integration and detail. For example, notations that continue to be
applied in practice are Event-Driven Process Chains (EPC) and Petri nets. However, EPC
lacks the standardization typical of notations such as BPMN and UML. Modeling with
Petri nets is less expressive and can become quite complex relatively quickly; Petri nets
may therefore be regarded as less suited to collaborating and communicating with
non-technical users and business representatives (Kossak et al., 2014). BPMN diagrams
and UML activity diagrams are notably similar, and activity diagrams effectively model
business processes. UML activity diagrams have been redesigned in current versions of
UML (2.0 and later) in terms of syntax and semantics, to enhance the capability of these
diagrams to represent business processes (Geambasu, 2012). The symbols used in
BPMN and activity diagrams are similar, with the exception that UML activity diagrams
sometimes use groups of symbols to represent elements, whereas BPMN uses a single
symbol. This is because BPMN uses complex symbols to holistically describe informa-
tion (Geambasu, 2012).
An activity diagram can also be used to view detailed actions taken by, e.g., a customer
that uses an online shop to purchase books. In the following figure, partitions are not
used, and it is assumed that checkout includes both registration of a new user and
log-in. The circled letters indicate where additional information is provided, or where
additional information can be added later when more detail is included or the work flow
expands to include more activities. For now, using letters as indicators keeps the dia-
gram simple and uncluttered.
Furthermore, use case diagrams can also be used to model a business. Use case diagrams
describe the functionality of a system from the user’s perspective. They show an
overview of a business process in terms of requirements for a system, not in terms of
procedures. The example of a use case diagram seen previously indicates, e.g., that a
system is required to facilitate the purchasing of equipment. However, use case dia-
grams can also be used to focus on the business function, process, or activity. Rational
Unified Process (RUP) introduced business use cases. RUP is “an ‘architecture-centric’
process” (Avison & Fitzgerald, 2006, p. 462) that applies three key concepts: use cases,
architecture, and iteration. The UML elements of actor and use case are then extended
to business actors and business use cases, and the diagram describes the business
boundary rather than the subject (the system boundary). Business use cases are
indicated with an ellipse containing a skewed line, as shown in the figure below.
Summary
The interconnected nature of software processes means that various factors, peo-
ple, and circumstances affect them, making them difficult to describe and manage.
Software process definition and modeling aim to simplify and optimize the design,
development, and implementation of software-intensive systems. They also help to
optimally manage risk, software verification, and maintenance.
Knowledge Check
You can check your understanding by completing the questions for this unit on the
learning platform.
Good luck!
Unit 3
Basic Software Product Life Cycle Models
STUDY GOALS
… how models, such as the Rational Unified Process (RUP), support the entire software life
cycle process.
Introduction
Software is planned, designed, developed, and maintained using phased methodolo-
gies that encompass “procedures, techniques, tools, and documentation” in order to
effectively “plan, manage, control, and evaluate” these projects (Avison & Fitzgerald,
2006, p. 24). Different approaches are typically based on different philosophical views.
Accordingly, methodological approaches are applied according to a model and in the
context of a framework. Over time, various models and frameworks have emerged,
which are suitable for specific purposes and have specific areas of application. The
software or system development life cycle (SDLC) emerged in the early 1960s, and
current models and methodologies continue to incorporate its elements.
The waterfall model (a predictive, prescriptive, and plan-driven software engineering
methodology for the SDLC) is often referred to as the first and most traditional
development approach. According to Avison and Fitzgerald (2006), the waterfall model
emerged in the “early-methodology era,” which was “characterized by an approach […]
focused on the identification of phases and stages […] thought to help control and
enable better management […] and bring a discipline to bear” (p. 577). This model is
still widely used and has its advantages, but it also poses significant challenges and
risks. As a result,
The waterfall model with feedback, the sashimi model, and the V-model are examples
of variants of the traditional waterfall model. The waterfall model with feedback and
the sashimi model enable iterations back to previous phases, whereas the V-model
focuses explicitly on verification and validation during each phase in order to develop
high quality products.
According to this model, a software development project typically starts with the
requirements gathering phase, continues to the analysis and design phases, moves to
development and verification, and concludes with the deployment and maintenance
phase, where the solution is released to and used by the customer. A new phase can
only begin upon completion of the previous phase. The waterfall model’s phases and
sequential nature are illustrated below.
In terms of scheduling, roughly 20 to 40 percent of time is spent on the first three pha-
ses: eliciting requirements, analyzing, and designing the system according to these
requirements. 30 to 40 percent of time is spent coding and developing, and the remain-
ing time is used for testing and deployment activities. The benefits of predictive model-
ing can only be realized when developers thoroughly understand all requirements from
the outset. They are then able to analyze and design the software system accordingly
and prior to the development phase. Hence, it is recommended that sufficient time is
allocated for these initial phases.
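The rule of thumb above can be turned into a small scheduling helper. The 30/35/35 split chosen as a default here is one point inside the quoted ranges, not a prescription.

```python
def waterfall_schedule(total_weeks: float,
                       early: float = 0.30,    # requirements/analysis/design
                       coding: float = 0.35) -> dict:
    """Split a project duration by the rough waterfall rule of thumb:
    20-40 percent on the first three phases, 30-40 percent on coding,
    and the remainder on testing and deployment."""
    if not 0 < early + coding < 1:
        raise ValueError("fractions must leave time for testing/deployment")
    return {
        "requirements_analysis_design": round(total_weeks * early, 1),
        "coding": round(total_weeks * coding, 1),
        "testing_deployment": round(total_weeks * (1 - early - coding), 1),
    }
```

For a 40-week project this allocates 12 weeks to the early phases, 14 to coding, and 14 to testing and deployment.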
The waterfall model has a number of strengths. Standardized activities are defined in
detail, and detailed and high-quality documentation is generated concurrently with
other activities (Matković & Tumbas, 2010). The sequential application of these stages
facilitates the phased development of a stable solution and is ideal for small and
short-term projects where requirements are not likely to change during the project life
cycle and for projects where quality is of critical importance. If all requirements have
been properly considered in advance, the waterfall model enables improved planning
and structuring of phases. Use of standardized, documented practices enables the
drafting of complete specifications. It also enhances communication practices within
teams and with stakeholders. In addition, project progress can be easily monitored and
controlled, and is clearly visible to all stakeholders (Avison & Fitzgerald, 2006).
Variations of the waterfall model that attempt to overcome these challenges have
emerged. Variations of the traditional waterfall model have one feature in common:
they all add iterative elements. A waterfall model that allows feedback and iteration to
previous phases is an example of such a variation. Another variation is the sashimi
model, which allows phases to overlap. However, with these variations, iterations to
previous phases and overlapping of phases must be carefully planned in order to limit
the negative effect that they will have on cost and schedule. The further back an itera-
tion goes, and the deeper the levels of overlap, the bigger the impact on both cost and
schedule. The waterfall model with feedback is illustrated below.
The sashimi model is a variant of the waterfall model that allows phases to overlap. It
is also referred to as the sashimi waterfall or the waterfall with overlapping phases. The
name of this model derives from its resemblance to the Japanese dish sashimi, which
consists of overlapping thin slices of fish. For example, during the
first phase, a portion of the requirements will be defined, allowing team members to
proceed with analysis and design. Similarly, at a later phase, when a portion of the cod-
ing work has been completed, it will be tested while other sections of the system are
being developed or designed. Greater overlaps can also be considered to allow differ-
ent parts of the project to move forward at different paces, as long as it does not cause
conflict or is dependent upon an unfinished portion. For example, the solution can only
be deployed when all parts have been tested. In the sashimi model, the documentation
is one unified document, unlike the traditional waterfall model, which prescribes the
documenting of each phase separately. The volume of documentation is thus signifi-
cantly reduced (Matković & Tumbas, 2010). The sashimi model is shown below.
The sashimi model has the advantage of accelerating development, and the overlaps
enable optimal use of resources and expertise across various phases. For example, a
database designer may start to develop the database tables and indices as soon as the
primary requirements have been identified and before the finalization of user interfa-
ces. Similarly, a network designer may begin setting up hardware, such as routers and
switches, prior to finalization of the network topology, which also enables the explora-
tion of specific aspects of a solution, and facilitates increased understanding before
final decisions are made. Valuable insights are gained and risks can be reduced. In this
way, the sashimi model facilitates receiving feedback through an internal loop process
and shapes the direction of the project from later phases back to earlier ones. This
model enables, for example, the identification of errors made during design while
design is still ongoing; however, it also presents some challenges. For example, key
development milestones are unclear, monitoring individual activities can be difficult,
and communication can be hindered (Matković & Tumbas, 2010).
The V-model is often applied to projects that require high levels of safety and security.
It is a waterfall model, but one that forms a V shape. The left-hand side follows a
deductive approach, decomposing tasks into
more detailed ones, while the right-hand side follows an inductive approach towards
higher levels of abstraction where the various portions of the project are integrated.
The model is illustrated below.
The design process is centralized around architecture views. At first, the architecture is
defined at a high level; it then evolves as the requirements for the software system
evolve. This model is similar to the risk-based spiral model (Boehm’s spiral method) in
that it facilitates incremental and iterative evolution of requirements while simultane-
ously mitigating risks. RUP’s aims are to reduce the size and complexity of products
that are to be developed, to streamline the development process, to create teams that
are more adept and proficient, and to exploit automation through the use of integrated
tools (Boehm & Turner, 2004). There are two versions of RUP (RUP Classic and RUP for
Small Projects); however, RUP is typically applied to large projects, as it remains rela-
tively difficult to tailor to smaller projects.
Prototyping is not a new concept. A prototype is “the entire process done in miniature,
to a time scale that is relatively small with respect to the overall effort” (Royce, 1970,
p. 334). Royce (1970) argued that, even when following a typical waterfall approach, a
prototype should be used in the first iteration in order to derive apt specifications. A
subsequent iteration is then used to apply learnings and to provide a final product.
Prototyping can also be used evolutionarily to develop a solution, i.e., to accommodate
changing and evolving requirements during development. An example of this is when a
prototype is developed, evaluated, and reworked until all stakeholders are satisfied
(Gull et al., 2009).
Floyd (2011) suggests the following steps to effectively use prototyping as a learning
tool:
There are different kinds of prototypes. Floyd (2011) distinguishes between them as fol-
lows:
While evolutionary development has various advantages and often results in increased
user satisfaction, there are also some risks to consider. It can be successfully applied to
larger and more complex systems if an initial and low-functionality version of the final
product can be evaluated early on in the process (MacCormack, 2001). However, one of
the shortcomings of evolutionary prototyping is that expectations must be astutely
managed to ensure that customers do not confuse an incomplete prototype with a final
product. Constant changes to the product may also result in poorly structured software
that is difficult to maintain, the process followed may not be sufficiently visible, and
progress may be difficult to manage (Sommerville, 2011). It can also be difficult to accu-
rately assess resource requirements up front and plan for integration, and documenta-
tion tends to remain incomplete (Matković & Tumbas, 2010).
Boehm introduced the spiral method in 1988. It is a risk-driven (rather than document
or code-driven) approach that can incorporate any method or combination of methods
(Boehm, 1988). It endeavors to integrate evolutionary and specification-based development approaches (Sommerville, 1996). The spiral model contributed to the promotion
of iteration in software development. It “was devised as an ode to iteration” (Matković
& Tumbas, 2010, p. 168).
Boehm (1988) illustrates this model in the form of a spiral with loops. Each loop repre-
sents a non-prescribed phase of a software process. It is split into the following four
sectors: objective setting, risk assessment and reduction, development and validation,
and planning. This model does not iterate implementations; it revisits each phase, until
final implementation, to manage and reduce risks (Avison & Fitzgerald, 2006). As the
team continues to revisit phases, their understanding of the solution evolves, risks are
identified and mitigated, and a solution can evolve or be refined by means of incre-
mental development of the solution or prototyping, for example. Activities are initially
highly abstract, progressing gradually to become more detailed.
The spiral model shares some of its philosophical footings with Agile development. For
example, it has the following advantages:
However, it remains a systematic and plan-driven approach and, as such, retains the
advantages that come with following a well-planned method. One of this model’s main
drawbacks is that proper risk analysis requires specific expertise, which can be costly
and sometimes not readily available for smaller-scale projects (Matković & Tumbas,
2010).
Summary
Predictive and plan-driven models are useful for small projects and quality-driven
projects. They facilitate effective (strict) management of small software engineering
projects and are easy to plan, document, follow, and monitor. The strictly sequential
nature of the traditional waterfall model makes it rigid and unable to accommodate
new or changing requirements. To counteract this disadvantage, variations, such as
the waterfall model with feedback and the sashimi model, were introduced.
Another variation of the waterfall model, the V-Model, emphasizes continuous verification and validation. It is suited for projects driven by reliability, correctness, and quality, such as projects that demand high levels of safety and security, or projects in a financial environment.
Knowledge Check
You can check your understanding by completing the questions for this unit on the
learning platform.
Good luck!
Unit 4
Agile and Lean Development Processes
STUDY GOALS
… how to scale Agile methods using the Scrum of Scrums (SoS), Large-Scale Scrum (LeSS),
and Scaled Agile Framework (SAFe).
Introduction
Agile development methods emerged to counteract the challenges of waterfall devel-
opment approaches. The Agile Manifesto makes it clear that people, working software,
collaboration, and responsiveness must be prioritized; technical tools only support the
people that will develop and use the software. Agile development is iterative and incre-
mental; it applies short increments in order to develop solutions that meet customer
expectations.
The advantages and successes of Agile approaches, such as Scrum, made them attrac-
tive, and attempts to scale them for larger and more complex projects soon followed.
Scaled Agile approaches, such as the Scrum of Scrums (SoS), Large-Scale Scrum (LeSS),
and the Scaled Agile Framework (SAFe), are progressively refined and applied. These are
used as overarching organizational frameworks, rather than project management or
software development approaches, to manage and guide the planning, development,
and integration of software systems into organizational structures.
Waterfall models established strict rules for working, as well as for documentation within the development process. Proper and complete documentation of activities and of the software system is only useful if the software works well and meets the customer's requirements. However, it is also true that the continuous application of rigid processes and
the drafting of documentation ties up many resources that are then unable to react to
changes or continue with development work. In response to this challenge, seventeen
leaders in the software industry created and signed the Agile Manifesto in 2001 (Beck et
al., 2001a). The Manifesto supports customer-centric and streamlined ways of working;
it does not disregard processes and tools, but favors customer-centricity and responsiveness. The values of the Agile Manifesto are as follows (Beck et al., 2001a):

• individuals and interactions over processes and tools,
• working software over comprehensive documentation,
• customer collaboration over contract negotiation, and
• responding to change over following a plan.
Beck et al. (2001a) explicitly state that, with these values, customer-centric aspects, such as individuals and interactions, working software, customer collaboration, and response to change, are prioritized over technical aspects, such as processes and tools, comprehensive documentation, contract negotiation, and following a plan. The Manifesto is underpinned by twelve principles, which include the following (Beck et al., 2001a):
• satisfying customers with valuable software that is delivered early and continuously,
• harnessing change throughout the process,
• using shorter time scales,
• continuous collaboration between business users and developers,
• building projects around individuals that are supported and hence motivated,
• favoring face-to-face conversations to convey information,
• measuring progress based on whether software works,
• following a constant and sustainable development pace,
• focusing on technical excellence and good design,
• maintaining simplicity,
• self-organizing teams,
• regularly reflecting within the team about how to work more effectively, and
• adjusting behaviors as required.
nificant new features are major releases. Releases follow a strict versioning scheme in
the format “major.minor.build.revision,” e.g., version 4.2.5.9 indicates that it is the fourth
major release, the second minor release, the fifth build, and the ninth revision. Itera-
tions are relatively short, ranging from a week to a couple of months depending on the
project. Each iteration is, fundamentally, a waterfall structure in itself; it typically
includes all of the development phases, i.e., analysis, design, coding, testing, and verifi-
cation (Coram & Bohner, 2005). This is illustrated below.
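The "major.minor.build.revision" scheme described above can be sketched in code. The `Version` type and `parse_version` helper below are illustrative names, not part of any cited framework:

```python
from typing import NamedTuple

class Version(NamedTuple):
    """A version in the "major.minor.build.revision" format described above."""
    major: int
    minor: int
    build: int
    revision: int

def parse_version(text: str) -> Version:
    """Split a dotted version string into its four numeric components."""
    parts = text.split(".")
    if len(parts) != 4:
        raise ValueError(f"expected major.minor.build.revision, got {text!r}")
    return Version(*(int(part) for part in parts))

# Version 4.2.5.9: fourth major release, second minor release,
# fifth build, ninth revision.
print(parse_version("4.2.5.9"))  # Version(major=4, minor=2, build=5, revision=9)
```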
Agile projects are divided into short phases (sprints) to build working software (increments), enabling prompt feedback, which is used to adjust and improve as required. Daily meetings (Scrums) are 15-minute check-in sessions where team members share what they have worked on the previous day, what they will work on the present day, and whether they are experiencing or expect any difficulties. Sprints are time-boxed to manage scope, develop a cadence, and facilitate release planning. Teams should be able to deliver working software at the end of each sprint.

Sprints
A series of time-boxed incremental iterations. The duration of each sprint typically does not exceed one month.

Cadence
This is an Agile term for the number of days or weeks in a sprint or release.

Challenges involved with a pure Agile approach include the following:

• distributed teams. Agile is designed for face-to-face collaboration, for example, for Scrum meetings and requirements gatherings.
• resistance to change. If previous approaches have been followed, teams may not be willing to adapt to a new Agile approach.
4.2 Scrum
Scrum is an Agile approach where a team works together to advance development in
short time spans. The “Scrum” idea was initially conceived as a metaphor that descri-
bed the need to apply six interrelated characteristics in a holistic manner in order to
innovatively develop new products. According to Takeuchi and Nonaka (1986), to “move
the Scrum downfield,” the following characteristics were identified to be utilized for
effective product development:
• “built-in instability,”
• “self-organizing project teams,”
• “overlapping development phases,”
• “organizational transfer of learning,”
• “subtle control,” and
• “multilearning” (p. 138).
These characteristics, when applied holistically, result in a new set of dynamics. This
practice, named Scrum (short for Scrummage, the rugby term that describes how two
teams form a circle and use their feet to gain possession of the ball after it is thrown
into the middle of the circle), was later applied in the field of software engineering. It is
characterized by the consistent organization of activities in short cycles and self-organ-
izing teams.
Ken Schwaber and Jeff Sutherland co-presented Scrum in 1995. Scrum is documented
and defined in The Scrum Guide (Schwaber & Sutherland, 2020). It does not define any
specific software engineering-related roles or activities. Instead, it comprises a set of
values, a team, events, and artifacts. Guidelines and rules prescribe how these fit
together. Scrum applies fixed time and cost to control requirements rather than fixed
requirements that control cost and schedule, as is often the case with traditional
project management approaches. Fixed time and cost are achieved by means of time
boxes, collaborative ceremonies, a prioritized product backlog, and frequent feedback
cycles. Scrum is a framework used to plan and structure work; it is not a prescriptive
methodology. As an Agile approach, it provides structure for delivery while leaving the specific practices to be followed for the team to determine.
Scrum teams commit to embracing and following the values of commitment (to the
goals of the team); courage (to do the right thing and overcome problems); focus (on
the work and goals); openness (about the work and challenges); and respect (toward
other team members as capable and independent professionals). These values direct
the team’s work, actions, and behavior (Schwaber & Sutherland, 2020).
“The Scrum Team is responsible for all product-related activities from stakeholder col-
laboration, verification, maintenance, operation, experimentation, research and devel-
opment, and anything else that might be required” (Schwaber & Sutherland, 2020, p. 5).
A Scrum team consists of one Product Owner, one Scrum Master, and a number of
developers. The team is cross-functional and self-managing (i.e., self-organizing), and focuses on the product goal. A team typically consists of a maximum of ten people working on one product. Multiple cohesive teams can focus on one (large) product, provided
that the teams have a shared product goal, product backlog, and Product Owner. These
roles are described by Schwaber and Sutherland (2020) as follows:
Product Owner
The Product Owner creates release plans, prioritizes requirements, and approves results in each cycle. The Product Owner has contact with the team on a daily basis to answer questions and provide clarifications. They generally do not interfere with iterations when
they are in progress, but can make changes to be incorporated in future iterations or
even cancel future iterations.
Developers
Developers are responsible for developing the solution. They create a plan, perform
technical conceptualization, design, assure quality, and adapt as required. They manage
themselves, deciding among themselves how the workload is distributed, developing
relevant metrics and estimates, and reporting to each other during daily Scrum ses-
sions. The team members hold each other accountable in order to work professionally.
They form a part of an equal and egalitarian group, sharing duties and responsibilities.
Scrum events include: sprint, sprint planning, daily Scrum, sprint review, and sprint ret-
rospective. Each event provides a formal opportunity to inspect work and adapt as
required (Schwaber & Sutherland, 2020). A Scrum process begins with a customer who
provides a clear vision and a set of product features, listed in order of importance.
These features form the product backlog, which is maintained by the Product Owner as
the customer’s proxy. This initiates a series of time-boxed, incremental iterations, refer-
red to as “sprints.” Schwaber and Sutherland (2020) refer to sprints as “the heartbeat of
Scrum, where ideas are turned into value” (p. 7).
A time box is a common Agile concept that describes an approach where the amount of
time dedicated to an activity is fixed. If the planned work is not completed, the time-
frame will not be extended; rather, what is ready will be delivered. In this way, tasks are
concretely defined. A “sprint” is a container for a series of events where the durations
are fixed; durations cannot be modified after the sprint starts. Typically, the duration of
a sprint does not exceed one month, as it is presumed that longer durations may lead
to changes in requirements and definitions, resulting in increased complexity, risks,
and even costs.
Sprint planning initiates the sprint—it results in a plan that addresses the following
topics: “Why is this sprint valuable?” “What can be done during this sprint?” “How will
the chosen work get done?” (Schwaber & Sutherland, 2020, p. 8). The team selects a list
of items from the product backlog to be completed in the sprint and lists them in the
sprint backlog. When everyone agrees, the work commences. No interruptions are
allowed once work commences in order to ensure that the team can focus, meet the
set goals, and complete the selected items.
Daily Scrums are 15-minute events to inspect progress and adapt the sprint backlog if
needed. They lead to the sprint review and potentially to the sprint retrospective at the
end of the sprint cycle. In Scrums, the team coordinates work and discusses and
reviews progress. Additional tools, such as a task board and a burn down chart, are typ-
ically used in these sessions to aid the process. This is because many common tasks
that must be performed are not defined as part of Scrum.
A sprint ends with a sprint review. This is an event, held over a maximum of four hours,
where the team demonstrates a solution (an increment) to stakeholders and takes note
of their feedback. The sprint retrospective follows this review, unless the developed
increment has met all the specifications. It is time-boxed for a maximum of three
hours. The sprint retrospective allows the team to reflect on the status of the project
and consider improvements for the next sprint. They also consider ways to improve
product quality, streamline work processes, and clarify what it means for a project to
be “done.” The Scrum Master ensures that this meeting takes place and that it is con-
ducted in a positive and productive spirit.
The sprint process continues to iterate until there are no further items in the product
backlog. Then, a release iteration (release sprint) is planned in order to prepare for
deployment. At this point, all relevant documentation produced during individual
sprints is finalized, and any remaining defects are resolved. Further, physical items
such as installation media, manuals, and packaging are prepared to be shipped. If
required, system integration and testing take place. An acceptance review and the ship-
ping of items to the customer signals the end of the project. An overview of a Scrum
process is illustrated below.
There are three Scrum artifacts. These are the product backlog, sprint backlog, and increment. The product backlog commits to a product goal, the sprint backlog commits to a sprint goal, and an increment commits to the Definition of Done (DoD).

Scrum artifacts
The three Scrum artifacts are the product backlog, the sprint backlog, and the increment.

The product backlog details the requirements, features, and their priorities; it can range from a draft of an initial idea to detailing fully-specified functionality. Initially, it may contain several rough outlines of goals and requirements. As the project progresses, requirements are refined, supplemented, and prioritized by the Product Owner. The product backlog contains the user stories that the team will be working on. A user story is a high-level definition of a requirement and is also used to estimate the relative time required to complete it.
The sprint backlog is a subset of the product backlog. It contains a list of the require-
ments that will be implemented during the sprint. The requirements included deter-
mine the batch size of the sprint, which depends upon the velocity of the team. The
velocity of a team indicates how many (and which) of the requirements a team can
complete during a cycle, and therefore how many (and which) of the requirements a
team can load from the product backlog to the sprint backlog. It is derived from the
quantity and scope of the functions implemented in previous cycle(s). Velocity
typically stabilizes after the first five to seven cycles.
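Loading the sprint backlog from the product backlog, bounded by the team's velocity, can be sketched as a simple selection over prioritized items. The function and item names below are assumptions for illustration, not part of The Scrum Guide:

```python
def load_sprint_backlog(product_backlog, velocity):
    """Pull items from the product backlog (highest priority first) until
    adding another item would exceed the team's velocity in story points.
    Each item is a (name, story_points) pair."""
    sprint_backlog = []
    remaining = velocity
    for name, points in product_backlog:
        if points <= remaining:
            sprint_backlog.append(name)
            remaining -= points
    return sprint_backlog

# A team with a velocity of 25 story points loads what fits, in priority order.
backlog = [("login page", 8), ("search", 13), ("data export", 21), ("help text", 3)]
print(load_sprint_backlog(backlog, velocity=25))  # ['login page', 'search', 'help text']
```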
An increment is a concrete and usable stepping stone; the sum of all increments
equals the product goal. Work can only be considered an increment when it is comple-
ted and meets the DoD. An overview of the Scrum framework is illustrated below.
The frequent sprints and the involvement of the whole team in the decision-making
process are the core advantages of Scrum. Scrum success is a result of transparency
and visibility, as well as empowerment coupled with accountability. The egalitarian
nature of the team enables members to accommodate changes quickly and easily
when required, resolve issues, reduce cost, and improve performance. The main chal-
lenge with Scrum is the individual and collective understanding that the team mem-
bers have of the process. A more experienced team that is also familiar with the proc-
ess will establish a working rhythm relatively quickly; less experienced team members,
and specifically Scrum Masters, can ruin the development process. For this reason, a
Scrum team is ideally assigned to its next project as an established team, rather than as individuals joining new teams. Scrum teams gain experience as they progress with a
project; as they work through sprint backlogs, they gain knowledge from the partially
delivered results and apply it to subsequent sprints.
instantly and spontaneously. This calls for inspiring leadership that guides and sup-
ports the team and encourages team members to take initiative. It takes time for team
members to gain momentum and form collaborative and self-sustaining hierarchies of
responsibility and accountability within the team.
Communication is key for Agile development and Agile teams (Cockburn & Highsmith,
2001). Agile teams communicate frequently with each other and with customers (or
their proxies) to ensure that the project remains on track, and to address and manage
expectations. A solution must be inspected frequently to get feedback in order to iden-
tify and address any misalignment or issues and improve the solution. Frequent com-
munication also ensures that Agile teams remain focused on quality and, hence,
develop high-quality solutions.
During the sprint planning process, it is important that the Product Owner and team
select a suitable set of prioritized and appropriately detailed items from the product
backlog for the sprint backlog. As previously mentioned, the velocity of the team deter-
mines the number, as well as size, of elements in the sprint backlog. The sprint backlog
should also only include as many elements as the team agrees that it can reasonably
manage. During the cycle, the number of elements in the sprint backlog is fixed—fur-
ther elements cannot be added. Several increments can be created during one sprint,
and work can only be considered as part of an increment when it adheres to the Defini-
tion of Done.
Readiness to release relates to the DoD. Agile teams strive to develop high-quality sol-
utions. For this, they must write solid, high-quality code that will not require major
modifications later on. Accordingly, teams emphasize proper coding principles, dedicate
time to develop a common style, and ensure formal coding practices. A solution is only
formally delivered once it adheres to the DoD, as indicated by a checklist. The sprint
backlog elements that are worked on during a sprint are only marked as complete
(done) when they have been completed according to the checklist and requirements
have been met. Moreover, the sprint backlog entails a quality check for all the items on
the list. Consistent application of these is also intended to ensure high process quality
and must be enforced. Quality assurance and documentation are tedious tasks that are
easily left behind, especially when working under the pressures of delivery or in high-
stress situations. The DoD is continually updated and typically includes reference to
specific items. The checklist specifies that an element adheres to the DoD when
Requirements that are adequately defined for development and testing relate to the Definition of Ready (DoR). During sprint planning, the team negotiates the items to be selected for the next
sprint and can only commit when requirements are sufficiently detailed, i.e., ready. The
planning results in a checklist that indicates when items meet the DoR. The checklist
specifies a clear definition in terms of development requirements, resulting business
value post implementation, and pre-development enablers to be added. The checklist
also specifies the following criteria:
• Rough estimating (sizing) indicates that the item can fit within one sprint.
• There are no pending dependencies on external resources or elements that will be needed during the sprint.
• In case of unavoidable live external dependencies, adequate coordination has been arranged and will be closely tracked.
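Both the DoD and the DoR are, in effect, checklists in which every criterion must hold. A minimal sketch follows; the criteria shown are invented for illustration and only loosely mirror the DoR items above:

```python
def meets_definition(item, checklist):
    """An item satisfies a definition (DoD or DoR) only when every
    criterion in the checklist evaluates to True for it."""
    return all(criterion(item) for criterion in checklist.values())

# Hypothetical DoR criteria: sized to fit one sprint, no external blockers.
definition_of_ready = {
    "fits within one sprint": lambda item: item["story_points"] <= 13,
    "no pending external dependencies": lambda item: not item["blocked"],
}

story = {"story_points": 8, "blocked": False}
print(meets_definition(story, definition_of_ready))  # True
```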
A Task Board
Task boards are often used during Scrum sessions. A task board, which is typically dis-
played in the common meeting area and populated with sticky notes, is used to track
progress. A task board can indicate, for example, the work that is still pending (“to do”),
the work that is still ongoing (“doing”), the work that has been completed (“done”), and
any additional items, such as the user requirements and required features. An example
task board is shown below.
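The three columns of such a board can be modeled directly; moving a sticky note is then just moving an entry between lists. The class and column names below are illustrative:

```python
class TaskBoard:
    """Tracks sprint tasks in the three classic columns described above."""
    def __init__(self, tasks):
        self.columns = {"to do": list(tasks), "doing": [], "done": []}

    def move(self, task, source, target):
        """Move a task from one column to another, e.g., "to do" to "doing"."""
        self.columns[source].remove(task)
        self.columns[target].append(task)

board = TaskBoard(["write tests", "fix login bug"])
board.move("write tests", "to do", "doing")
board.move("write tests", "doing", "done")
print(board.columns)
# {'to do': ['fix login bug'], 'doing': [], 'done': ['write tests']}
```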
Velocity
Velocity refers to the speed at which something moves in a certain direction. In Scrum,
it indicates the amount of work that a team can successfully complete during a sprint.
It is typically measured using the metric “story points” or “person days.” Story points
are a relative and abstract measure used to put the size of different backlog elements
into perspective. Higher points imply that an element is more complex and that its implementation is estimated to be more costly. People are generally better at estimating in relative
rather than absolute terms (Key, 2016), hence the frequent use of statements, such as
“the element is approximately the same size” or “the element is significantly larger.” A
standard number of achievable story points will typically emerge for a team once they
have completed a number of sprints. Initial estimates will be imprecise, but they stabi-
lize over time. Story points can be used to estimate time for tasks within a team, but
cannot be transferred between teams. Story points do not indicate the actual scope of
results that a team delivers. For example, a velocity of 40 story points for Team A can
be very similar to a velocity of 35 story points for Team B, as each value only reflects that team's own relative definition (El Deen Hamouda, 2014). Story points translate to stories, which are
then implemented by the team. However, the number of person days is an absolute
estimate; it indicates the time (in working days) that the team will need to complete a
backlog element.
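Since velocity stabilizes over a number of sprints, one common approximation (an assumption for illustration, not a Scrum rule) is to average the story points completed in the most recent sprints:

```python
def estimated_velocity(completed_points, window=3):
    """Estimate velocity as the mean story points completed over the
    last `window` sprints; early, noisy sprints are thereby ignored."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

# Story points completed in sprints 1-6; the estimate settles around 40.
history = [22, 55, 31, 42, 38, 41]
print(round(estimated_velocity(history), 1))  # 40.3
```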
Burn down charts illustrate the progress of an Agile project. An Agile project’s critical
path can change daily, making traditional means to track progress of items, e.g., a Gantt
chart, inadequate. Instead, a burn down chart is used to show the current progress of
the entire project, a work package, or a sprint. It is based on the finished elements of a
product backlog or a sprint backlog, as the elements relate to time planned versus time
still available. It indicates the velocity of development.
A burn down chart shows an ideal course (usually indicated by a straight line, i.e., the
ideal line), the actual course (indicated by open tasks), and deviation from the ideal. If
the actual course deviates to be above the ideal line, it indicates slower progress, and
vice versa. At the end of the reporting period, the actual course should ideally reach
the zero line, indicating that all open tasks have been completed, i.e., “burned.” The
horizontal x-axis represents time; the first value is the starting point of a considered
time period and the last point represents the end of the period. The vertical y-axis rep-
resents the number of the unit that is illustrated in the chart, e.g., the number of open
tasks or the amount of resources (money or time) used. A burn down chart is illustra-
ted below. It shows six intermediate measurement points between the start of the
sprint, indicated by “1,” and the end of the sprint, indicated by “8,” as well as the initial value of 80 open tasks.
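The two courses in such a chart can be computed from sprint data. The sketch below uses the example figures above (80 open tasks, measurement points 1 through 8); the actual open-task counts are invented:

```python
def ideal_line(total_tasks, points):
    """Linear ideal course: from total_tasks down to zero in equal steps
    across the given number of measurement points."""
    step = total_tasks / (points - 1)
    return [round(total_tasks - step * i, 1) for i in range(points)]

# Invented actual open-task counts at measurement points 1..8.
actual_open = [80, 74, 66, 60, 49, 37, 20, 0]
for point, (ideal, actual) in enumerate(zip(ideal_line(80, 8), actual_open), start=1):
    status = "behind" if actual > ideal else "on track"
    print(f"point {point}: ideal {ideal:5.1f}, actual {actual} ({status})")
```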
non-prescriptive in the sense that it only introduces constraints for workflow visualiza-
tion and to limit work-in-progress (WIP) (Tanner & Dauane, 2017). In general, Kanban
involves the application of general principles that, when applied by software develop-
ment teams, assist them greatly in optimizing work in an Agile, iterative manner. These
principles include the
A task board is a way to visualize the tasks to be completed in a sprint. It indicates the
status and progress of tasks. A Kanban board is a variant of a task board that is used to
visualize a workflow. The use of Kanban cards originated at a Japanese Toyota assembly
plant; they were applied to exert control over the company’s production processes and
resulted in lead times being shortened by approximately one-third when compared to
other similar plants (Poppendieck & Cusumano, 2012). The type and quantities of inter-
mediate products to be produced, and those required for production processes, are
recorded on signal cards and visibly displayed on boards; Kanban literally translates to
the word “signboard” in Japanese (Tanner & Dauane, 2017, p. 181). Upstream stations
then use the signal cards to determine what will be required in the near future. The
wall or board where these cards are displayed is the Kanban board. This principle is
applied to visualize and organize tasks for and within Agile Development Teams. All
tasks of a cycle (or a sprint) are indicated on the Kanban Board, i.e., tasks still to be
done in the first column, WIP in the second column, and completed tasks in the third
column. It is also referred to as the pull system, as it “pulls” items from one column to
the next as tasks are completed.
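The pull principle with a WIP limit can be sketched as follows; the column names and the limit of two are illustrative assumptions:

```python
class KanbanBoard:
    """A three-column board whose "doing" column enforces a WIP limit."""
    def __init__(self, tasks, wip_limit=2):
        self.columns = {"to do": list(tasks), "doing": [], "done": []}
        self.wip_limit = wip_limit

    def pull(self):
        """Pull the next task into "doing" only if the WIP limit allows it."""
        if len(self.columns["doing"]) >= self.wip_limit or not self.columns["to do"]:
            return None
        task = self.columns["to do"].pop(0)
        self.columns["doing"].append(task)
        return task

    def finish(self, task):
        """Complete a task, freeing WIP capacity for the next pull."""
        self.columns["doing"].remove(task)
        self.columns["done"].append(task)

board = KanbanBoard(["spec review", "build pipeline", "write docs"])
board.pull(); board.pull()
print(board.pull())         # None: the WIP limit of 2 blocks a third pull
board.finish("spec review")
print(board.pull())         # 'write docs': capacity was freed by finishing a task
```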
Tanner and Dauane (2017) explain that Kanban development applies the following concepts, which should be defined as policies that outline the rules to be followed:
The lean development (LD) methodology was developed by Bob Charette (Anderson,
2012). LD is not exclusively used for software development; it is applied to achieve busi-
ness value by means of business strategies and projects. It combines principles and
concepts from risk management, as per Charette’s experience, and lean manufacturing,
as per the work of Womack, Jones, and Roos (Boehm & Turner, 2004). In an LD context,
agility encompasses the ability to tolerate change. Accordingly, LD entails a three-tiered
approach that is focused on change in order to achieve competitiveness. LD is a product-focused process comprising three phases, i.e., start-up, steady state, and transition renewal. Overall planning, including business cases and feasibility studies, is completed during start-up. The steady state phase involves iterations of designing and
building. The transition renewal phase involves development and delivery of documen-
tation, as well as the maintenance of the delivered product.
Dingsøyr and Moe (2013) further explain that scaling Agile principles and practices
poses challenges, e.g., longer planning horizons, levels of delegated authority, and syn-
chronization of deliverables. Larger projects typically span over more than a year,
requiring product-to-market roadmaps of 12 to 18 months, while the product backlogs for
the teams are refined in shorter periods, typically between two and three weeks. This
results in a disconnect regarding the level of refinement of the solution. The Develop-
ment Team may work on details and finalize code, only to receive change requests from
a higher level at a later stage, resulting in time-consuming and costly re-work or even
the discarding of finalized code. The lack of management frameworks at higher levels,
when compared to well understood and practiced Agile frameworks within the smaller
teams, also poses a risk. Furthermore, levels of authority in larger and more complex
projects interfere with practices such as Scrum, where the Product Owner is responsible for the full life cycle of the product, including return on investment (ROI) and market performance; in larger projects, by contrast, responsibilities are segregated, and project or portfolio managers may be jointly or individually responsible for different aspects. Disconnects such as these negatively affect the performance of small Agile
teams, impose upon their autonomy, hinder the self-organizing dimension of the team,
and are an obstacle to effective synchronization of delivery and integration of solu-
tions. The context of application and decisions regarding modification of an Agile
approach must be considered carefully when tailoring it to a large, complex project.
This has resulted in the emergence of Agile variants, e.g., the Scrum of Scrums (SoS), Large-Scale Scrum (LeSS), and the Scaled Agile Framework (SAFe).
Scrum of Scrums (SoS) involves dividing a project team into groups of Agile teams,
where each team selects an “ambassador” to participate in frequent meetings with
other selected ambassadors (Raps, 2017). The ambassador roles are typically under-
taken by the Scrum Masters, but it can also be another team member. These meetings
occur as frequently as necessary, and last approximately 15 minutes. The SoS discusses
the status of each team, and also other issues or challenges that they may face. They
typically discuss identified obstacles and ways to overcome them, issues regarding
interfaces between separately developed solutions and integration of the complete
solution, and responsibilities of and boundaries between individual teams. The SoS is
presented below.
Large-Scale Scrum
Large-Scale Scrum (LeSS) is the result of efforts to apply the purpose, elements, and
elegance of Scrum to large projects. LeSS aims to enable organizational simplicity and
purposefulness, but in a non-prescriptive manner (Larman & Vodde, 2017). Adapting to
such a large scale, as is required by LeSS, encompasses profound organizational
change. LeSS is to be viewed as an organizational design framework, rather than a
project management practice. LeSS includes many of the principles and ideas of Scrum.
For example, it comprises a single product backlog, a DoD, a Product Owner, a sprint, an
increment at the end of each sprint, and cross-functional teams. Additionally, it entails
two-part sprint planning, where part one involves a typical Scrum team, and part two is
similar to SoS. LeSS differs from SoS in that it observes the large-scale view as a single
Scrum, following the principles, rules, elements, and purpose of Scrum, rather than as
another management level on top of individual Scrums. According to The Less Company
B.V. (n.d.), LeSS has the following nine core principles:
1. “Transparency,” which is evident from tangible items that are completed (done),
entailing “short cycles, working together cooperatively, common definitions, and
driving out fear in the workplace” (para. 3)
2. “More with less,” which includes the concepts of “empirical process control: more
leaning with less defined processes”; “lean thinking: more value with less waste and
overhead”; and scaling through “more ownership, purpose, and joy with less roles,
artifacts, and special groups” (para. 4)
3. “Whole-product focus,” which means having one product backlog, Product Owner,
shippable product, and sprint, regardless of the number of teams involved. This is
based on the fact that customers want valuable functionality in a single cohesive
product, rather than technical components in separate parts.
4. “Customer-centric,” which involves learning about customers’ real problems and
solving them by involving customers in meaningful feedback loops.
5. “Continuous improvement towards perfection,” which means striving to delight cus-
tomers with perfect products, as well as improving the environment and lives, by
continuously doing “humble and radical improvement experiments” (para. 7).
6. “Lean thinking,” which involves “respect for people and continuous improvement”
(para. 9) by means of an organizational system aimed at eliminating waste, e.g., by
reducing partially done work and delivering results as quickly as possible; simultaneously, decisions are made as late as possible to reduce uncertainty while taking into account the continuously changing environment.
7. “Systems thinking,” which entails exploring and optimizing the system as a whole
(rather than only individual parts of it), and applying systems modeling techniques
to explore system dynamics.
8. “Queuing theory,” which involves an understanding of how systems with queues will
behave in the research and development (R&D) domain, and the application of the
insights to manage aspects such as “queue sizes, work-in-progress limits, multitask-
ing, work-packages, and variability” (para. 10).
Scaled Agile Framework
The latest version of the Scaled Agile Framework (SAFe), i.e., SAFe® 5.0, entails seven
core competencies that are applied in lean enterprises. The competencies relate to:
lean-Agile leadership, a continuous learning culture, team and technical agility, Agile
product delivery, enterprise solution delivery, lean portfolio management, and organi-
zational agility (Scaled Agile, Inc., 2019). The competencies (with the exception of lean-
Agile leadership) target different levels in the hierarchy, i.e., the Essential SAFe configu-
ration, the Large Solution SAFe configuration, and the Portfolio SAFe configuration.
Mastering all of these competencies is critical to achieving and sustaining business agility and a competitive advantage in a modern marketplace. SAFe provides guidance for both enterprises and government organizations.
Knaster and Leffingwell (2020) describe the competencies as follows: The lean-Agile
leadership competency describes driving and sustaining organizational change. These
leaders empower teams and lead by example to instill change. The continuous learning
culture competency entails values and practices to encourage all to continue to
increase knowledge, and also to constantly improve and innovate. Team and technical agility comprises the critical skills, principles, and practices applied to create high-quality solutions for customers. Teams should remain productive and continue to deliver
value. Agile product delivery refers to the approach that enables organizations to con-
tinuously define, build, and release products and services of value, in order to delight
customers and remain competitive. The enterprise solution delivery competency
describes the application of principles and practices to specify, develop, deploy, oper-
ate, and maintain large applications and systems. Lean portfolio management involves
the alignment of strategy and execution through the application of lean and systems
thinking approaches; it enables organizations to meet commitments while also con-
tinuing to innovate. Organizational agility refers to the ability to optimize business pro-
cesses, properly evolve strategy, and rapidly adapt as and when required.
Knaster and Leffingwell (2020) state that all SAFe configurations are built upon the
Essential SAFe configuration, which applies the principles and practices of the three core competencies: lean-Agile leadership, team and technical agility, and Agile product
delivery. The Agile Release Train (ART) anchors SAFe. ART refers to an organizational
structure that involves dedicating Agile teams, key stakeholders, and other resources to
an ongoing mission. The Large Solution SAFe configuration is built upon the Essential
SAFe configuration as it supports the development of large and complex solutions that
require multiple ARTs and suppliers, and adds additional artifacts, events, roles, and
coordination. It is implemented through a Solution Train, i.e., an organizational con-
struct to facilitate development of large, multi-disciplinary, and complex software and
systems. Large Solution SAFe adds the competency of enterprise solution delivery, over
and above the core competencies of the Essential SAFe. The Portfolio SAFe can also be
built on the Essential SAFe configuration; it signifies the minimum set of competencies
and practices required for complete business agility. It provides three competencies in
addition to the core competencies of the Essential SAFe: lean portfolio management,
organizational agility, and continuous learning culture.
When all three configurations are combined, they form the Full SAFe, the most comprehensive configuration, which includes all seven competencies. Multiple instances of vari-
ous SAFe configurations can also be used in an organization. The Spanning Palette is
always indicated as part of the SAFe. It entails the specific elements, roles, and artifacts
that an organization decides to include in its SAFe configuration. The Spanning Palette is a selection
of vision, roadmap, milestones, shared services, community of practice (CoP), system
team, lean user experience (UX), and metrics. Each SAFe configuration also includes a description of its foundation, which outlines the supporting elements that the organization requires to deliver value. These include lean-Agile leaders, core values, the lean-Agile mindset, SAFe principles, SAFe program consultants (SPCs), and an implementation roadmap (Knaster & Leffingwell, 2020).
This kind of hybrid pattern is often referred to as the Water-Scrum-Fall, which was introduced by Forrester Research, Inc. as a reality that organizations are faced with, rather than a methodology. However, it is now acknowledged as a useful hybrid approach that takes advantage of the best of both Agile and plan-driven development (Kneuper, 2018). It can apply the beginning and end phases of a waterfall model for requirements analysis, acceptance testing, and deployment, while implementing typical Scrum sprints in the middle phases, e.g., design, implementation, and testing.
Summary
Processes and tools are important, but cannot be valued over customer-centric prin-
ciples, such as collaboration and communication. Well-defined process models and
the use of tools can only be effective when applied collaboratively and when stake-
holders cooperate to achieve a mutual goal. Agile development approaches aim to
overcome the challenges of traditional development approaches; however, in prac-
tice, both still have value. Since it is crucial to find the optimum blend, individual
projects are often implemented using “hybrid” approaches.
Agile approaches have achieved success and have therefore become popular. Because they are best suited to smaller-scale projects, attempts have been made to scale them for larger and more complex projects. Examples of these include the SoS, LeSS, and
SAFe. Patterns in the application of hybrid approaches emerged in the form of the
Water-Scrum-Fall, combining the waterfall, Scrum, and Agile Unified Process, as well
as V-Modell XT and Scrum, with Kanban and XP practices.
Knowledge Check
You can check your understanding by completing the questions for this unit on the
learning platform.
Good luck!
Unit 5
The Software Product Life Cycle
STUDY GOALS
DL-E-DLMCSSESP01-U05
Introduction
Software projects are implemented by means of software processes and life cycles.
They facilitate systematic structuring and execution of applicable design, development,
implementation, and maintenance activities, throughout the life cycle of software and
software systems. The popularity of iterative and incremental development frameworks
is increasing; they are believed to be more responsive to risks and ever-changing cus-
tomer needs and market demands. However, many of these approaches are still relatively new, and evidence of their benefits is largely anecdotal, whereas large, integrated, and complex projects still require
the meticulousness of traditional plan-driven and architecture-centric methods.
Accordingly, hybrid approaches are often applied, and, in view of that, many organiza-
tions choose to adopt customizable iterative and incremental development frame-
works, such as the Unified Process (UP) (Scott, 2002) and German V-Modell XT (Deutsch-
land, 2004). They are modular and therefore adaptable; projects are tailored according
to the characteristics and relative complexity of the envisaged project without compro-
mising quality.
The IT-Grundschutz catalog is published by the German Federal Office for Information Security (BSI). It provides a framework for the management of IT security and describes aspects that can threaten security, as well as response measures (ENISA, n.d.).
When applying UP, a project is planned and executed according to four core phases:
inception, elaboration, construction, and transition. Phases are typically divided into
multiple iterations (Scott, 2002). Additional phases, such as production and disposal,
can be added if required. Most iterations will include general activities, such as require-
ments gathering, design, implementation, and testing; the relative effort and emphasis
of these will, however, differ as the project progresses.
During the inception phase, a business idea is considered and elaborated upon to pro-
duce a project. This phase is comparable to a feasibility study—goals are outlined, the
scope is defined, and an initial schedule and cost estimate are prepared. Furthermore,
a business case is formulated based on core requirements, key features, and con-
straints (Scott, 2002). This phase ends with the life cycle objective milestone.
Throughout the elaboration phase, project requirements are elicited and potential risks
are identified so that the most critical risks are addressed as early as possible in the
process. Risks are identified for mitigation in order to prevent project failure. The core
elements of the system to be developed are also identified; the level of detail must be
sufficient so that developers can conceptualize ideal solutions. In this phase, the sys-
tem architecture is established and validated. Tools, such as the Unified Modeling Language (UML), are used to explore and model the architecture views of solution components. These models include package diagrams, object-oriented class hierarchies (which are only implemented later, during development), and use cases (which identify the actors and the ways in which they will interact with the system) (Arlow & Neustadt, 2002).
The aim of this phase is to demonstrate a stable architecture that supports key func-
tionalities and behaves suitably in terms of, e.g., performance, scalability, and cost. It is
validated by implementing an executable architecture baseline, i.e., a partial implemen-
tation of a system that includes its most significant components (Scott, 2002). This
phase provides a detailed plan for the next phase.
The solution is deployed in the transition phase (Scott, 2002). In this phase, require-
ments for infrastructure (e.g., computers, servers, and networks) are fulfilled, documen-
tation is completed, and users are trained. The solution will be in production, and users
can begin to use it. The solution is handed over to a maintenance team for the man-
agement of fixes, upgrades, and customer requests. Based on initial feedback from the
customer, the project may temporarily revert (i.e., iterate) back to previous phases to
address serious issues.
When required, additional phases are considered in order to address other project
aspects and the remainder of the full software life cycle, e.g., a production phase may
be implemented after the transition phase to determine how users respond to the sol-
ution. Feedback from this phase may be useful for upgrades or for future projects. A
disposal phase may also be considered, in which users transition from the old system to its replacement in a way that ensures minimal disruption to the users and their organizations.
Unified Process (UP) phases typically iterate several times to address phase-specific
issues, challenges, and obstacles. The exception here may be the inception phase, as it
will typically not iterate for smaller projects (it is relatively abstract by nature); however,
in large complex projects, the inception phase can also be divided into iterations. Risks
are often identified during elaboration and then refined for mitigation during the sub-
sequent iterations of the elaboration phase. Similarly, during the first construction
phase, the focus will be on developing critical and important features, and the remain-
ing features will be added during subsequent iterations. UP iterations are shown in the
figure below.
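The phase-and-iteration structure described above can also be expressed as a simple data model. The following is a minimal Python sketch; the iteration counts and the milestone names of the later phases are illustrative assumptions (only the inception milestone, the life cycle objective, is named in the text above):

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    iterations: int  # assumed count; chosen per project during planning
    milestone: str   # milestone reached at the end of the phase

def iteration_plan(phases):
    """Flatten the phases into an ordered list of iterations and milestones."""
    plan = []
    for phase in phases:
        for i in range(1, phase.iterations + 1):
            plan.append(f"{phase.name} I{i}")
        plan.append(f"Milestone: {phase.milestone}")
    return plan

up_phases = [
    Phase("Inception", 1, "Life cycle objective"),
    Phase("Elaboration", 2, "Life cycle architecture"),
    Phase("Construction", 3, "Initial operational capability"),
    Phase("Transition", 2, "Product release"),
]

for entry in iteration_plan(up_phases):
    print(entry)
```

As in the figure, the inception phase runs a single iteration here, while the other phases iterate several times before reaching their milestones.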
A hypothetical case that incorporates the Rational Unified Process (RUP) is given in the figure below, which shows
how it supports resource planning and workload balancing. It illustrates that business
modeling will require more effort (as indicated by the bright orange color) at the incep-
tion phase, but this effort will gradually reduce in the remaining phases. The deploy-
ment phase requires little effort in the beginning, and more towards the end of the
project. This figure also indicates the multiple iterations within each phase, with the
exact number being defined as part of the project planning.
V-Modell XT
The V-Modell XT enables flexibility and tailoring (the XT stands for extreme tailoring). It
is designed to guide tailored planning and execution of system development projects
by defining project results, i.e., products. The model is divided into a number of process
modules describing the roles, responsibilities, procedures, and activities that must be
performed to create these results (Microtool, n.d.). The user selects only the applicable
modules, according to the characteristics of the project, and its tailoring guide facili-
tates the creation of a set of adapted procedures whereby to execute the project based
on the applicable project characteristics (Kneuper, 2018). It is a state-of-the-art software process and life cycle model initially developed for the German government for use with large projects, but it is adaptable to projects of any size. It includes a meta-model, a model, and tool and documentation (template) support. The V-Modell XT supersedes and replaces its previous version, V-Modell 97.

V-Modell XT
A tailorable model (divided into process modules), the V-Modell XT describes roles and activities to be performed in order to create work products.

On the meta level, the model is organized into different packages. The different packages and their purposes are as follows (Kneuper, 2018):

• The base package describes the structure and manner in which the model is documented, e.g., its chapters and tool references, as well as its method.
• The statics package explains all of the process modules, including the basic compo-
nents of each module and the relationships and dependencies of the components.
• The dynamics package states the ordered sequence of all the activities, as defined
in the statics package.
• The adoption package provides standardized criteria that define the tailoring of the
model.
• The mapping to standards package enables references to compatible current standards and regulations (e.g., ISO 9001:2000, ISO/IEC 15288, and the Capability Maturity Model Integration (CMMI®)).
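Tailoring in the sense described above can be sketched as a selection of process modules based on project characteristics. The module names and characteristics below are invented for illustration and do not reproduce the actual V-Modell XT module catalog:

```python
# Hypothetical sketch of V-Modell XT-style tailoring: each process module
# carries an applicability rule, and only the modules matching the
# project's characteristics are selected.
MODULES = {
    "project_management": lambda p: True,  # always applicable
    "requirements_definition": lambda p: True,
    "hw_development": lambda p: p["includes_hardware"],
    "supplier_management": lambda p: p["role"] == "customer",
    "sw_development": lambda p: p["role"] == "supplier",
}

def tailor(project):
    """Return the sorted list of process modules selected for this project."""
    return sorted(name for name, applies in MODULES.items() if applies(project))

project = {"role": "supplier", "includes_hardware": False}
print(tailor(project))
```

A supplier-side, software-only project would thus select only the management, requirements, and software development modules, mirroring how the tailoring guide reduces the model to the applicable procedures.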
The V-Modell XT is based on the V-Model for effectively guiding the planning and exe-
cution of a project throughout its entire life cycle and aims to describe “in detail ‘who’
has to do ‘what’ and ‘when’ within a project” (Deutschland, 2004, p. 5). In addition, it
offers traceability and delivers high-quality and reliable results. The model aims to
minimize project risks, improve and guarantee the quality of solutions, reduce the cost
of a project, and improve communication between stakeholders, customers, and con-
tractors. It implicitly demands structured management and control over requirements and changes, but does not explicitly prescribe how to do so (Microtool, n.d.).
According to the V-Modell XT, projects are classified according to two characteristics:
the subject of the project and the project role. The subject is based on the purpose of
the project, namely, whether the project will be to develop a system or an organization-
specific process model. The project role differentiates between the role of a customer
and a supplier. Furthermore, it also distinguishes between different types of projects,
and provides suitable project execution strategies for the different types. The types of
projects include the following (Deutschland, 2004):
Project execution strategies include sequences of activities and tasks, work products to
be developed, and the relevant roles required to create the products—these are
defined in process modules. A project execution strategy also comprises a series of
decision gates, indicating the stages where project progress should be evaluated.
In addition to providing tailoring options, this model also supports both V-shaped and
iterative and incremental development. It is an Agile, yet plan-driven, approach that
provides visibility with regard to project progress and status, and results in high-quality
solutions. It enables explicit reflection on and incorporation of all applicable processes
of both the customer and the supplier.
5.2 IT Service Management

Nowadays, IT services are an intrinsic and key component embedded in modern business; they are no longer merely supplemental services to enable and support business processes (Esposito & Rogers, 2013). Therefore, IT services must be implemented and managed aptly. IT service management (ITSM) ensures that applicable and high-quality IT services are provided
and managed appropriately and IT assets continue to provide business value. ITSM
aims to provide customers with a range of suitable and valuable IT services that are
standardized, yet scaled and tailored to their requirements (Thejendra, 2014). It involves
understanding the customer’s needs and managing expectations; standardization of IT
services as and where applicable; the establishment and regulation of IT processes,
tools, and roles; the measurement and evaluation of services; and the optimization of
services. Traditionally, ITSM efforts seem to call for structure and long-term planning;
they involve continuous and longstanding change management and improvement
projects. However, ITSM can also be effectively implemented in an Agile manner when
improvements are prioritized in the short-term and made iteratively by responsive and
collaborative business and IT teams (Ferris, 2017).
Service level agreements (SLAs) define the required quality criteria for functions and services (Esposito & Rogers,
2013). They are written in non-technical (business) terms and define, e.g., when IT serv-
ices must be available; the number of supported users; guaranteed simultaneous
access to a service; guaranteed reaction, response, and resolution times in case of
minor and major issues or failures; and backup strategies. The significance and critical
nature of the IT services, typically measured in terms of potential harm (economic or
otherwise) caused by the non-availability thereof, determines the levels of quality that
are agreed upon in SLAs. SLAs must be outcome-based and facilitate collaboration
between the teams working together to achieve collective goals (Macdermid, 2019). The
criteria must be measurable, and they can be differentiated. For example, an email
service can be made available to staff during specified working hours only, but it must
also be available after hours and over weekends for executive management. Higher
service levels typically result in higher overheads and costs, meaning that optimum
service levels must be determined for the business. Free-market IT services (e.g., online
and cloud storage or free email) may be sufficient for private users, but the lower and
unguaranteed levels of these services make them inadequate for corporate users.
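The differentiated availability criterion from the email example above can be made concrete in a short sketch. The hours and group names are illustrative assumptions, not part of any real SLA:

```python
# Hypothetical sketch of a differentiated SLA criterion: the email
# service must be available to regular staff during working hours only,
# but around the clock for executive management.
SLA_WINDOWS = {
    "staff": range(8, 18),      # available 08:00-17:59
    "executive": range(0, 24),  # available at all times
}

def within_sla(user_group, hour):
    """Check whether availability is required for this group at this hour."""
    return hour in SLA_WINDOWS[user_group]

print(within_sla("executive", 2))  # True: required at 02:00 for executives
print(within_sla("staff", 2))      # False: not required for regular staff
```

Making the criterion measurable in this way is what allows compliance with the SLA to be checked automatically against monitoring data.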
Operational level agreements (OLAs) are internal and technical service provider agreements that support SLAs and
underpin contracts (Thejendra, 2014). OLAs describe the technical functionality and
actions that will be required and employed to deliver services at required levels of
quality, and are accordingly written in technical terms. They outline roles and responsi-
bilities of stakeholders and IT teams, as well as resources (tools and infrastructure)
required to provide and support the services. OLAs define, e.g., what a server team will
do to patch servers, required server memory, number of servers, bandwidth, CPU speed,
and reliability and redundancy of hardware and resources.
Standardization of IT Services
Standardizing specific assets and products (e.g., providing customers with a limited selection and range of printers, laptops, and desktop computers) makes it possible to leverage economies of scale when purchasing assets. In addition, it
becomes possible to establish and foster favorable long-term relationships with sup-
pliers when purchasing larger quantities of certain products, or purchasing more fre-
quently from them. It may also be possible to negotiate better deals and favorable sup-
port agreements with preferred suppliers. In addition, having a limited range of
standard products to maintain will reduce costs.
Process model frameworks for ITSM include globally established best practices to manage the delivery and support of IT services and assets. The ITIL framework is an example of such a practice that is widely accepted and applied. It guides the management of IT
services and is used by organizations “to run their business from strategy to daily real-
ity” (Joret, 2019, p. 2).
ITIL structures the service life cycle into five stages:

• Service strategy includes policies and objectives to ensure business value; it is realized through practical decisions and proper planning.
• Service design ensures that services are built, evolved, operated, or withdrawn, as
per the strategy, and taking all stakeholders into consideration.
• Service transition entails planning and management to implement services, ensur-
ing quality and the satisfaction of stakeholders.
• Service operation involves repeating activities for ongoing support.
• Continual service improvement applies methods from the practice of quality man-
agement to continually improve.
The latest version, ITIL 4, is a holistic approach that is still evolving. It continues to con-
sider established practices in a much wider context and embraces new ways of working
(Joret, 2019). It now uses modern practices, such as Agile, lean, and DevOps. ITIL 4 supplements and complements ITIL V3, which provided detailed process descriptions that
are still useful. However, ITIL 4 can also be used without applying the somewhat rigid
and complex process descriptions of ITIL V3. In line with agility, organizations can define their processes in as simple (or complex) a manner as they require. Hence,
ITIL 4 describes principles, concepts, and practices, rather than comprehensive and
prescriptive process specifications. Furthermore, ITIL 4 refers to key activities, as well as
essential inputs and outputs.
Joret (2019) explains that ITIL 4 integrates four key dimensions to effectively manage services: organizations and people, information and technology, partners and suppliers, and value streams and processes, together with components from the service value system (SVS).
The SVS consists of generally applicable components that aim to facilitate creation of
value using services that are enabled by IT. This includes guiding principles, continual
improvement, and governance and practices guiding the central service value chain
that is also linked to both value and opportunity or demand. ITIL 4 comprises a total of
34 practices that encompass three categories: general management, service manage-
ment, and technical management (Joret, 2019). ITIL has been adopted by various and
diverse small companies, as well as large industries, such as IBM, Microsoft, Sony,
Toyota, HP, and various financial and banking institutions; it is adaptable and can be
used on its own, or in conjunction with other practices for governance, quality, and
architecture management, e.g., Agile, lean, COBIT® (formerly known as the Control
Objectives for Information and Related Technologies), Six Sigma, and The Open Group
Architecture Framework (TOGAF) (Arraj, 2013).
Measurement and Evaluation of IT Services

IT services must be measured and evaluated to ensure that the business knows how
well its services are functioning, and where problems may arise (Esposito & Rogers,
2013). Service providers should be able to constantly assure their customers that they
provide services at agreed upon levels, maintain services and assets, and resolve inci-
dents and problems within agreed upon resolution times. Failure to keep to SLAs and
OLAs will typically result in penalties for non-compliance. In addition to actual non-
compliance, perceived quality of service, according to business users, should also be
measured. If the measured service quality differs significantly from perceived service
quality, it indicates a perception gap that must be discussed and bridged.
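The comparison of measured and perceived quality described above can be sketched as a simple check. The scores and the gap threshold are illustrative assumptions:

```python
# Hypothetical sketch: compare measured service quality against the
# quality perceived by business users; a large difference flags a
# perception gap that must be discussed and bridged.
def perception_gap(measured, perceived, threshold=1.0):
    """Both scores on the same scale, e.g., 1 (poor) to 5 (excellent)."""
    return abs(measured - perceived) > threshold

print(perception_gap(measured=4.6, perceived=3.2))  # gap: investigate
print(perception_gap(measured=4.6, perceived=4.3))  # no significant gap
```

In practice, the measured score would come from monitoring data and the perceived score from user surveys; the point is simply that both must be captured on a comparable scale before the gap can be identified.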
Optimization of Services
5.3 DevOps
Various IT teams and departments within an organization often prefer to work in silos, rather than collaboratively; this results in diverse and even conflicting leadership approaches and performance metrics and, unfortunately, a lack of awareness regarding important changes that have an impact downstream and across various teams. Patrick Debois
mentioned at the Agile 2008 Conference that development and operations teams
should improve interactions, and also referred to the need for Agile infrastructure
(Debois, 2008). Following that, the DevOps (combining the terms development and
operations) model aims to improve quality while simultaneously increasing deploy-
ment speed, frequency, and reliability (Mishra & Otaiwi, 2020; Perera et al., 2017).
DevOps relies on strong collaboration between teams; it aims to resolve human chal-
lenges, rather than technical challenges, and focuses on rapid delivery by adopting
Agile and lean practices and utilizing appropriate technology. DevOps includes the
entire continuous integration, delivery, and deployment pipeline, i.e., from committing a code change through to post-deployment testing.
DevOps focuses on the human challenges, i.e., what should be accomplished, while
continuous integration, continuous delivery, and continuous deployment are used to
implement automation, i.e., the technical “how” of DevOps. Automation solutions are
applied to technical challenges. The goal is to make deployment predictable, i.e., a rou-
tine process that can be performed on demand. It aims to produce higher quality solu-
tions and, since errors can be identified and resolved early in the process, also reduce
risks when releasing software.
The continuous delivery concept originated as part of Agile software development and
commits to the first priority referenced in the Agile Manifesto, namely, to satisfy cus-
tomers with valuable software that is delivered early and continuously (Beck et al., 2001b). Continuous integration is also not a new concept; it was applied in the 1990s in Extreme Programming. It refers to integration of development work continuously and
Extreme Programming. It refers to integration of development work continuously and
frequently (at least daily, but preferably more often) (Red Hat, Inc., n.d.). Each small
change that is completed is integrated into a new build and tested. Developers must
check in their changes at least daily. Continuous delivery and continuous deployment
include building, configuring, packaging, and uploading completed software to a reposi-
tory in a way that supports frequent releases, as well as the automatic release of
changes from the repository to production (Red Hat, Inc., n.d.). Ultimately, all concepts,
methods, and tools are needed to facilitate the continuous integration and the contin-
uous delivery and deployment of software to a customer. A continuous delivery pipeline (sequenced, and mostly automated, activities) commences with “commits,” i.e., changes made to the source code, follows through with a series of largely automated integration and test phases, and ends with the installation and configuration of the solution (illustrated below).
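The stage sequence of such a pipeline can be sketched as follows. The stage names follow the description above; the runner interface is an illustrative assumption, not a real CI tool API:

```python
# Hypothetical sketch of a continuous delivery pipeline: a commit
# triggers a fixed sequence of mostly automated stages, and any failing
# stage stops the pipeline before the change reaches production.
PIPELINE = [
    "build",
    "integration tests",
    "automated acceptance tests",
    "package and upload to repository",
    "install and configure",
]

def run_pipeline(commit, run_stage):
    """run_stage(commit, stage) -> bool; returns the pipeline outcome."""
    for stage in PIPELINE:
        if not run_stage(commit, stage):
            return f"failed at: {stage}"
    return "released"

# Toy run in which every stage succeeds:
print(run_pipeline("abc123", lambda commit, stage: True))  # released
```

Note that, in line with the text below, the automated acceptance tests run directly after integration rather than immediately before going live.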
A continuous delivery pipeline differs from a traditional software process because the
batch sizes of Agile projects are considerably smaller; developers only bring in a new
function realized within a sprint, rather than a whole component or application
(Mukherjee, n.d.). Also, the order in which testing takes place may be confusing for a traditional developer or a developer without Agile experience. In a continuous delivery pipeline, automated acceptance testing does not occur just prior to the solution going live, as the International Software Testing Qualifications Board (ISTQB) standard would suggest; instead, automated acceptance tests are run immediately after integration.
DevOps Principles
DevOps is based on three principles: flow, feedback, and continual learning and experi-
mentation (Kim et al., 2016). Flow addresses fast delivery of work. It is achieved, for
example, by making current and future work visible on task or Kanban boards, limiting
the amount of work that is in progress at one time and reducing batch sizes. Feedback
is implemented by continuously informing Development Teams about problems experi-
enced by operations. Continual learning and experimentation can be enabled by fos-
tering an organization-wide learning culture.
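Limiting work in progress, one of the flow practices named above, can be illustrated with a minimal Kanban-style sketch. The class and the WIP limit of two are illustrative assumptions:

```python
# Hypothetical sketch of the "flow" principle: a Kanban-style board that
# makes current work visible and rejects new work once the
# work-in-progress (WIP) limit is reached.
class KanbanBoard:
    def __init__(self, wip_limit=3):
        self.wip_limit = wip_limit
        self.in_progress = []

    def start(self, task):
        """Start a task only if the WIP limit allows it."""
        if len(self.in_progress) >= self.wip_limit:
            return False  # finish something before starting more work
        self.in_progress.append(task)
        return True

board = KanbanBoard(wip_limit=2)
print(board.start("feature A"))  # True
print(board.start("feature B"))  # True
print(board.start("feature C"))  # False: WIP limit reached
```

Enforcing the limit mechanically, rather than by convention, is what keeps batch sizes small and work flowing through the pipeline.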
Challenges of DevOps
DevOps provides a multitude of benefits, but also poses some challenges. For example,
it is not yet sufficiently standardized; the meaning and fundamentals of DevOps are not
yet widely understood and accepted. Lack of knowledge about required supporting pro-
cesses, infrastructure, methods, and tools can result in choices that do not support the
organizational vision. It is also very challenging and expensive to set up, integrate, and
maintain tools. It remains difficult to automate testing and analysis sufficiently, so that
no manual intervention is required. Teams may remain isolated, regardless of efforts to
unite them, due to the very different nature of their work and goals in the organization.
Deep-rooted cultural changes, such as those required to effectively implement DevOps, are difficult to achieve and maintain in large and established organizations.
These challenges should not deter organizations from implementing DevOps; these are
merely aspects that should be considered in depth during the planning and implemen-
tation processes. Integration of a DevOps culture with governance and enterprise archi-
tecture frameworks should be considered carefully.
• Risk identification supports RA as per the list and classifications of typical threats.
• Risk analysis supports RA as per the detailed description of each threat.
• Risk evaluation uses damage scenarios to assess exposure within an assessment of
protection requirements.
• Risk treatment supports RM as per recommended safeguards that are cataloged and
detailed.
• Risk acceptance is based on the description of how to handle threats in IT-Grund-
schutz.
• “Risk communication is part of the module ‘IT security management’ and especially
handled within the safeguards S 2.191 ‘Drawing up of an Information Security Policy’
and S 2.200 ‘Preparation of management reports on IT security’” (ENISA, n.d., Identifi-
cation section).
Software processes should be enriched to ensure safety, security, and privacy, and cer-
tain steps can be taken in this regard (Kneuper, 2018). These include performing a risk
analysis to identify, classify, and mitigate potential risks; determining the relevant
requirements, based on identified risks and any additional requirements the customer
may have; defining processes to implement those requirements; and verifying the
implementation thereof. These steps should be included in the planning phases and
progressively expanded upon so they become an inherent part of the configuration of the
final version of the software—the properties related to safety, security, and privacy can-
not be added later.
Summary
Software solutions must ensure the safety, security, and privacy of customers. Soft-
ware processes must therefore ensure that software adheres to applicable stand-
ards and regulations. Also, IT assets and information services must be protected
and remain secure. The IT-Grundschutz catalog provides a detailed framework for
the management of IT security, describing aspects that can be a threat, as well as
measures to counteract such threats.
Knowledge Check
You can check your understanding by completing the questions for this unit on the
learning platform.
Good luck!
Unit 6
Governance and Management of
Software Processes
STUDY GOALS
… about applicable measures and tools used to assess and improve processes.
Introduction
Data and information are the currency of the twenty-first century. Accurate and appro-
priate information, when reliably provided, is indispensable in the new economy and
for modern businesses (IT Governance Institute, 2005). Integrated and interrelated
information technology (IT) and software systems enable this. Technological platforms
drive businesses, and vice versa; they must therefore be managed and governed appro-
priately, collectively, and cooperatively through both business and IT leadership.
The management and governance of processes extends beyond software processes and
models. It involves ensuring that IT assets and services add value and that IT processes
are embedded and used as intended. It also entails highly interdependent IT and busi-
ness systems and processes. Therefore, suitable and unified corporate, enterprise, and
IT frameworks are applied. Frameworks, such as The Open Group Architecture Frame-
work (TOGAF), guide the development of suitably tailored enterprise frameworks (Josey,
2018). IT processes, services, and assets must also be managed and measured in the
context of organizational strategic goals. For this, COBIT® (previously the Control Objec-
tives for Information and Related Technology) is widely used to facilitate IT governance
and management (Dziak, 2019). Agile governance is still a new concept, but it is on the
rise; it relates to the way management and governance frameworks are applied. In
practice, business agility requires the effective employment of both Agile and gover-
nance capabilities (Luna et al., 2014).
Processes are designed and deployed according to process models. They should be
continuously assessed in terms of quality, performance, and effectiveness. Improve-
ment actions must also be identified, implemented, and monitored. To this end, Soft-
ware Process Improvement (SPI) is applied; the SPI Manifesto defines values and princi-
ples to achieve SPI (EuroAsiaSPI2, n.d.). The Capability Maturity Model Integration
(CMMI) Institute provides guidance in terms of maturity models to evaluate the quality
of processes and suggest areas to improve. CMMI models provide a roadmap to achieve
process capability and maturity (ISACA, n.d.-c). Support tools are applied to simplify the
complex tasks of process modeling, management, and enactment.
The enterprise architecture concerns the strategic, tactical, and operational manage-
ment of assets, resources, activities, and results. It guides organizational sustainability
and growth, and strives to continually improve IT-business alignment (Gellweiler, 2020).
In the enterprise architecture, the design (architecture) descriptions are divided into
different layers (views); these frameworks are typically designed to fit the specific
organization in which they are being implemented. However, such frameworks still con-
tain a format or structure of generically applicable domains. For example, at the high-
est level, it starts with the business or enterprise domain that drives the (logical)
design of the IT services required to enable the business. IT services are then enabled
by applications, utilizing data and information, which, in turn, directs the design of the
technological platforms (i.e., the physical IT assets and infrastructure). Each of these
dictates the design and implementation of the one that follows; each layer also gives
feedback to the previous layer in order to enable it. This high-level view is illustrated
below.
Using a similar structure as that illustrated above, the TOGAF standard is an open-
source enterprise architecture methodology and framework commonly applied to
improve business efficiency (Josey, 2018). It guides organizations to effectively derive
detailed processes and interactions of these layers based on business requirements.
TOGAF
TOGAF is developed and maintained by members of The Open Group, working within the
Architecture Forum (The Open Group, n.d.). The initial version was developed in 1995 and was
based on the US Department of Defense Technical Architecture Framework for Informa-
tion Management (TAFIM) (Josey, 2018). It is constantly evolving and being adapted to
align with current market demands and new developments. The latest version, TOGAF
9.2, was published in 2018 (Josey, 2018).
TOGAF consists of an extensive reference library that includes various best practice
“guidelines, templates, patterns, and other forms of reference material to accelerate
the creation of new architectures for the enterprise” (The Open Group, n.d., Section 1.2,
para. 1). It supports business architecture; data architecture; application architecture;
and technology architecture (Josey, 2018, p. 5). They are defined as follows:
• Business architecture refers to the overall business strategy and defines the
detailed structure and governance, as well as key business processes.
• Data architecture describes the organization of logical and physical data assets, as
well as resources required to manage data and information.
• Application architecture provides a detailed plan (blueprint) used to deploy individ-
ual applications and describes how they interact with each other; it also explains
how each application relates to the core business processes.
• Technology architecture relates to logical software and hardware capabilities, as
required to deploy business, data, and application services.
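The layered relationship between these four domains can be sketched as a simple data structure; the representation below is an illustrative aid, not an artifact defined by the TOGAF standard itself.

```python
# Illustrative sketch of the four TOGAF architecture domains and the
# order in which each layer drives the design of the next one
# (business -> data -> application -> technology).
TOGAF_DOMAINS = [
    ("business", "strategy, governance, and key business processes"),
    ("data", "logical and physical data assets and their management"),
    ("application", "blueprints for applications and their interactions"),
    ("technology", "software and hardware capabilities for deployment"),
]

def driving_layer(domain):
    """Return the domain that drives the design of the given one, or None
    for the top-level business domain."""
    names = [name for name, _ in TOGAF_DOMAINS]
    i = names.index(domain)
    return names[i - 1] if i > 0 else None
```

For example, `driving_layer("data")` returns `"business"`, mirroring the statement above that the business domain drives the design of the layers beneath it.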
IT Governance
COBIT
COBIT was created in 1996 by ISACA for IT management and governance (ISACA, n.d.-a).
The initial aim of COBIT was to normalize standards and uses of IT across different
fields and industries and facilitate proper assessment (and audits) of IT systems (Dziak,
2019). It has continued to evolve, and the latest version, COBIT 2019, is broad and all-
inclusive. It is comprehensive and complicated in its entirety, but can be tailored to
individual organizations, as per their requirements, to implement effective internal
control measures (Rafeq, 2019). With the latest version, ISACA (n.d.-a) aims to facilitate
the development of flexible and collaborative governance structures.
Organizations have one primary, and possibly one additional (secondary), strategic
driver, outlining the strategic focus areas. These are typically as follows (Cooke, 2020):
• “Growth/acquisition” (p. 8), i.e., the organization focuses on growing its revenue.
• “Innovation/differentiation” (p. 8), i.e., the organization wants to offer different and
innovative products and services.
• “Cost leadership” (p. 8), i.e., the organization focuses on minimizing costs in the short term.
• “Client service/stability” (p. 8), i.e., the organization aims to provide a stable, client-
oriented service.
Following this strategy, relevant enterprise goals must be achieved. Enterprise goals
include the processes and activities that aim to operationalize strategic focus areas. For
example, they detail business processes to optimize products and services; processes
used to innovate and grow; management of business risks; compliance with laws and
regulations (as well as internal policies); quality information; management of costs and
finances; and development and motivation of staff to ensure optimal productivity
(Cooke, 2020). Furthermore, alignment goals emphasize the alignment of Information
and Technology (I&T) with business. These are as follows (Cooke, 2020):
• “I&T compliance and support for business compliance with external laws and regu-
lations”
• “Managed I&T-related risk”
• “Realized benefits from I&T-enabled investments and services portfolios”
• “Quality of technology-related financial information”
• “Delivery of I&T services in line with business requirements”
• “Agility to turn business requirements into operational solutions”
• “Security of information, processing infrastructure and applications, and privacy”
• “Enabling and supporting business processes by integrating applications and tech-
nology”
• “Delivery of programs on time, on budget, and meeting requirements and quality
standards”
• “Quality of I&T management information”
• “I&T compliance with internal policies”
• “Competent and motivated staff with mutual understanding of technology and busi-
ness”
• “Knowledge, expertise, and initiatives for business innovation” (p. 7)
Lastly, governance and management objectives are defined and measured. Criteria are
explicitly defined in order to determine whether the performance of IT services and
functions are on track or whether adjustments are needed. The core of COBIT 2019 con-
tains governance objectives and management objectives. Each of these is described in
terms of its purpose, how it ties in with the larger organization, and how it aligns with
goals. They are to be applied as a baseline for tailored IT strategies and categorized
according to relevant domains. For example, the COBIT governance objectives are posi-
tioned in the Evaluate, Direct, and Monitor (EDM) domain, which includes the following
objectives (Edmead, 2020):

• ensured governance framework setting and maintenance
• ensured benefits delivery
• ensured risk optimization
• ensured resource optimization
• ensured stakeholder engagement

The COBIT management objectives are categorized in the following domains (Edmead,
2020):

• Align, Plan, and Organize (APO)
• Build, Acquire, and Implement (BAI)
• Deliver, Service, and Support (DSS)
• Monitor, Evaluate, and Assess (MEA)
COBIT 2019 also effectively integrates with other frameworks. For example, it is compati-
ble with ISO/IEC, TOGAF, and ITIL. It also aligns well with the CMMI capability and
maturity framework, as it proposes a COBIT Performance Management (CPM) model to
apply to score processes on a quantitative scale of 0—5 (ISACA, n.d.-a). With COBIT 2019,
ISACA (n.d.-a) moved to an open-source model and welcomes feedback from practition-
ers to continuously improve it.
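As a simplified illustration of scoring on this 0 to 5 scale, the sketch below rates a process by its weakest activity; the function and the minimum-based rule are illustrative simplifications, not the actual CPM appraisal method.

```python
def rate_process(scores, target):
    """Given per-activity capability scores on COBIT's quantitative
    0-5 scale, report whether a process meets a target capability level.
    (Illustrative simplification: a real CPM appraisal is far richer.)"""
    if not scores or any(not 0 <= s <= 5 for s in scores):
        raise ValueError("scores must be on the 0-5 scale")
    achieved = min(scores)   # the weakest activity bounds the process
    return {"achieved": achieved, "meets_target": achieved >= target}

print(rate_process([3, 4, 2], target=3))
```

Here a single weak activity caps the achieved level, which conveys the intuition that a process is only as capable as its least capable part.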
Agile (or lean) governance is a relatively new term that is being used increasingly often.
It is currently being explored by both academics and practitioners. It involves investi-
gating Agile and lean practices to govern, for example, software engineering (Luna et
al., 2014). However, there are still some concerns and ongoing debate on this topic (Juiz
& Colomo-Palacios, 2020). For now, Software Development Governance (SDG) remains
an emerging subject area that aims to empower software teams in reaching project
goals. Broadly speaking, it entails establishing and defining relevant roles, responsibili-
ties, and associated decision-making authorities, as well as engaging in regular reflection
to assess processes and products. The following high-level steps, to be performed
iteratively, have been proposed (Dubinsky & Kruchten, 2009):
• Set goals and assign roles and decision rights so that the responsible individuals can
reach the set goals
• Determine measurements and policies, which facilitates understanding of and con-
trol over behavior and organizational performance
• Implement the above mechanisms practically
• Assess implementation of the above and refine and evolve goals
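These four steps can be sketched as a single governance iteration; all callables in the example are hypothetical placeholders that an organization would supply.

```python
def sdg_iteration(goals, assign_roles, measure, implement, assess):
    """One Software Development Governance iteration, sketched as a
    pipeline of the four high-level steps proposed by Dubinsky and
    Kruchten (2009). All callables are hypothetical placeholders."""
    roles = assign_roles(goals)        # step 1: goals, roles, decision rights
    policies = measure(goals, roles)   # step 2: measurements and policies
    state = implement(policies)        # step 3: put mechanisms into practice
    return assess(state, goals)        # step 4: assess, refine, evolve goals
```

Because the last step returns refined goals, the function's output can feed the next iteration, capturing the iterative nature of the proposal.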
Process design factors that relate to the organizational context refer to external influ-
ences, e.g., the organization’s geopolitical situation, country-specific economic policies,
or regulations. Strategic factors include, for example, the business strategy. It includes
the role that IT plays within the organization (whether it is perceived as a business dif-
ferentiator, or is merely supportive of and enables the business), and its appetite for
(or aversion to) risk. Tactical implementation choices include the practical decisions
regarding resourcing models (e.g., outsourcing versus insourcing versus a blended
approach, the use of cloud computing, purchasing or leasing of IT assets, or
approaches to standardization); IT and software models and processes (e.g., Agile ver-
sus traditional versus blended development approaches, and the adoption and use of
lean and DevOps practices); and choices regarding technology adoption (does the
organization choose to be a first adopter of leading, innovative technology, or part of
the late majority?) (Rafeq, 2019). As previously discussed, IT processes, tools, and roles
are most effectively established within a framework, such as ITIL.
Process Deployment

Deploying a defined process in a specific project typically involves tailoring it to the
project's context. The following steps can be distinguished:

• Evaluate project goals and environment. This involves evaluation of the product and
process goals and consideration of the specific characteristics of the project, team,
stakeholders, and organization in order to ensure consistency.
• Assess challenges. Challenges typically arise related to resources, communication,
requirements management, political issues, and technical challenges.
• Determine tailoring strategies for the various process elements. This refers to deci-
sions regarding inclusion or exclusion, altering of tasks and artifacts, roles, and
sequencing, as well as iterating work.
• Tailor software processes. This involves using process validation to ensure consis-
tency with goals, the environment, the execution, and its subsequent evaluation.
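These tailoring decisions can be sketched as a simple filter over process elements; the element names and decision values below are illustrative.

```python
def tailor_process(elements, decisions):
    """Apply tailoring decisions (include, exclude, or alter) to the
    elements of a process model. Elements without an explicit decision
    are included unchanged."""
    tailored = []
    for element in elements:
        decision = decisions.get(element, "include")
        if decision == "exclude":
            continue
        if callable(decision):        # an 'alter' decision rewrites the element
            element = decision(element)
        tailored.append(element)
    return tailored

base = ["requirements", "design", "formal review", "coding", "testing"]
choices = {"formal review": "exclude",
           "testing": lambda e: e + " (automated)"}
print(tailor_process(base, choices))
# -> ['requirements', 'design', 'coding', 'testing (automated)']
```

The sketch shows only inclusion, exclusion, and alteration; decisions about sequencing and iteration, also named above, would require a richer process representation.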
Software Process Improvement (SPI) aims to improve software processes, applying
various standards and methods to assess the quality and maturity of processes. The SPI
Manifesto originated at the EuroSPI Conference in Spain in 2009. It defines values and
principles to achieve SPI (EuroAsiaSPI2, n.d., Values section, Principles section):
• In terms of people, SPI “must involve people actively and affect their daily activities”
(Values section), namely
◦ “know the culture and focus on needs,”
◦ “motivate all people involved,”
◦ “base improvement on experience and measurements,” and
◦ “create a learning organization” (Principles section).
• In terms of the business, SPI “is what you do to make business successful” (Values
section), namely
◦ “support the organization’s vision and objectives,”
◦ “use dynamic and adaptable models as needed,” and
◦ “apply risk management” (Principles section).
• In terms of change, SPI “is inherently linked with change” (Values section), and
should
◦ “manage the organizational change in your improvement effort,”
◦ “ensure all parties understand and agree on process,” and
◦ “do not lose focus” (Principles section).
The relative capability and maturity of the processes of an organization can be effectively
appraised using capability maturity models, such as those provided by the CMMI
Institute. These maturity models were initially created so that the US Department of
Defense could evaluate the quality and capability of software contractors (ISACA, n.d.-c).
At present, they are applied in the software engineering domain and beyond. They
serve several purposes, including demonstrating organizational capability to external
stakeholders by illustrating how processes compare to best practices and identifying
areas of improvement, assisting organizations to meet their contractual obligations,
and supporting corporate and IT governance.

Capability maturity models
These are models used to assess the relative capability and maturity of organizational
processes.
The defined maturity levels refer to the maturity of processes within an organization.
They offer a phased means of appraising and improving processes, subsequently
improving performance. Process maturity is, according to the CMMI suite of models
(ISACA, n.d.-b), described using a quantitative scale that ranges from a maturity level of zero
to a maturity level of five. Maturity levels are evolutionary, meaning that one level must
be fulfilled before moving to the next (ISACA, n.d.-b). These levels define whether a
process is incomplete, initial, managed, defined, quantitatively managed, or optimizing.
The maturity levels are defined as follows (ISACA, n.d.-b):
• Incomplete. Work is done ad-hoc, in a way that is undefined and unclear, and the
completion of work cannot be guaranteed.
• Initial. Work is completed in an unpredictable, reactive way, and budgets and sched-
ules are often exceeded.
• Managed. Project management principles are applied to pro-actively plan, organize,
and finish work in a controlled manner.
• Defined. Work is planned and executed using projects positioned within organiza-
tionally-driven programs and portfolios. These are guided by organization-wide
standards.
• Quantitatively managed. Work is effectively measured and controlled by means of
quantitative objectives that are predictable and data-driven in order to meet all the
stakeholders’ needs.
• Optimizing. The organization is stable, but flexible: it is responsive, Agile, innovative,
and strives to improve continuously.
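The six levels and their evolutionary ordering can be captured in a short sketch; the function below is an illustrative aid, not part of the CMMI models themselves.

```python
# The six CMMI maturity levels on their 0-5 scale, as listed above.
MATURITY_LEVELS = {
    0: "incomplete",
    1: "initial",
    2: "managed",
    3: "defined",
    4: "quantitatively managed",
    5: "optimizing",
}

def next_improvement_target(current_level):
    """Maturity levels are evolutionary: an organization must fulfill one
    level before moving to the next, so the only valid improvement target
    is the level directly above the current one."""
    if current_level not in MATURITY_LEVELS:
        raise ValueError("maturity level must be between 0 and 5")
    if current_level == 5:
        return None                       # already optimizing
    return current_level + 1, MATURITY_LEVELS[current_level + 1]
```

For instance, an organization appraised at level 2 ("managed") would target level 3 ("defined") next, rather than skipping ahead, which is exactly the phased improvement path described above.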
Process editors are used to document processes using relevant modeling notations.
Standard text and XML editors can be used to document processes. However, the use of
complex meta-models, e.g., the Software Process Engineering Metamodel (SPEM) and the
V-Modell XT, can be simplified with specially tailored editors. Custom-made editors are
used to ease process tailoring. For example, the Eclipse Process Framework (EPF)
Composer can be used to define, adapt, and enact SPEM-based software processes,
whereas two different tools, the V-Modell XT Editor and the V-Modell XT Assistant, can
be used to edit and tailor processes that apply the V-Modell XT.
Process Enactment
Process enactment tools are applicable to all stages of a software product life cycle.
Computer-based tools that support software development increase the development
efficiency, quality, and maintainability of the software. This supports the documenta-
tion of processes, simplifies management of projects, and improves collaboration
within and among teams (Kneuper, 2018). These tools can also be useful to facilitate
coordination and consistent version control in large and complex projects (Avison &
Fitzgerald, 2006).
Though projects are executed by different teams with different individual aims, they
still work towards a shared goal. Integrated tools, e.g., in Integrated Development Envi-
ronments (IDE), support tasks such as code editing, design, compiling and debugging,
source code control, and build management (Kneuper, 2018). However, IDE services
must still be used thoughtfully. For example, developers are often unaware of all the
methods that can be invoked on given variables and therefore appreciate the concept
of intelligent code completion. However, some variables can result in hundreds of
method proposals, and choosing the correct one is overwhelming (Bruch et al., 2010).
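The core idea behind the collective-intelligence completion studied by Bruch et al. (2010), ranking candidate methods by how often other code bases actually call them, can be sketched as follows; the usage statistics in the example are invented for illustration.

```python
def rank_proposals(candidates, usage_counts, top_n=3):
    """Order completion candidates by observed usage frequency so that
    the few most likely methods appear first, instead of hundreds of
    alphabetically sorted proposals."""
    return sorted(candidates,
                  key=lambda m: usage_counts.get(m, 0),
                  reverse=True)[:top_n]

# Hypothetical usage statistics mined from existing code bases:
usage = {"append": 950, "extend": 120, "insert": 60, "clear": 10}
print(rank_proposals(["clear", "insert", "append", "extend"], usage))
# -> ['append', 'extend', 'insert']
```

Cutting the proposal list down to a few frequently used methods is what keeps the completion feature helpful rather than overwhelming.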
Summary
Processes, within all practice areas, can be effectively appraised using the levels
shown in capability and maturity models. This can provide an organization with a
high-level roadmap for improvement, i.e., improvement actions to be implemented
in order to systematically improve operations. Additionally, supporting tools go a
long way to simplify complex models and processes. However, the selection, imple-
mentation, and application of supporting tools should still be performed appropri-
ately and in the context of the business and organizational culture.
Knowledge Check
You can check your understanding by completing the questions for this unit on the
learning platform.
Good luck!
Congratulations!
You have now completed the course. After you have completed the knowledge tests on
the learning platform, please carry out the evaluation for this course. You will then be
eligible to complete your final assessment. Good luck!
Appendix 1
List of References
Aiken, H. H., & Hopper, G. M. (1946). The automatic sequence controlled calculator. Elec-
trical Engineering, 65(8—9), 384—391. https://doi.org/10.1109/EE.1946.6434251
Alreemy, Z., Chang, V., Walters, R., & Wills, G. (2016). Critical success factors (CSFs) for
information technology governance (ITG). International Journal of Information Manage-
ment, 36(6), 907—916. https://doi.org/10.1016/j.ijinfomgt.2016.05.017
Arlow, J., & Neustadt, I. (2002). UML and the unified process: Practical object-oriented
analysis and design. Addison-Wesley.
Arraj, V. (2013). ITIL®: The basics [White Paper]. Compliance Process Partners.
Axinte, S.-D., Petrică, G., & Barbu, I.-D. (2017). Managing a software development project
complying with PRINCE2 standard. Proceedings of the 9th international conference on
electronics, computers and artificial intelligence (pp. 262—268). IEEE.
Babbage, C. (1864). Passages from the life of a philosopher. Longman, Green, Longman,
Roberts, & Green.
Baumann, H., Grassle, P., & Baumann, P. (2005). UML 2.0 in action: A project-based tuto-
rial. Packt Publishing.
Beck, K., Beedle, M., Van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Gren-
ning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R. C., Mellor, S.,
Schwaber, K., Sutherland, J., & Thomas, D. (2001a). Manifesto for Agile software develop-
ment. http://agilemanifesto.org
Beck, K., Beedle, M., Van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Gren-
ning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R. C., Mellor, S.,
Schwaber, K., Sutherland, J., & Thomas, D. (2001b). Principles behind the Agile manifesto.
http://agilemanifesto.org/principles.html
Bell, T. E., & Thayer, T. A. (1976). Software requirements: Are they really a problem? Pro-
ceedings of the 2nd international conference on software engineering (pp. 61—68).
Association for Computing Machinery.
Bendraou, R., Jézéquel, J.-M., Gervais, M.-P., & Blanc, X. (2010). A comparison of six UML-
based languages for software process modeling. IEEE Transactions on Software Engi-
neering, 36(5), 662—675. https://doi.org/10.1109/TSE.2009.85
Boehm, B. W. (2006). A view of 20th century software engineering. In ICSE ‘06 proceed-
ings of the 28th international conference on software engineering (pp. 12—29). Associa-
tion for Computing Machinery. https://doi.org/10.1145/1134285.1134288
Boehm, B. W., & Turner, R. (2004). Balancing agility and discipline: A guide for the per-
plexed. Addison-Wesley Professional.
Booch, G. (2018). The history of software engineering. IEEE Software, 35(5), 108—114.
https://doi.org/10.1109/MS.2018.3571234
Booch, G., Rumbaugh, J., & Jacobson, I. (2005). The unified modeling language user
guide (2nd ed.). Addison-Wesley.
Boole, G. (1847). The mathematical analysis of logic: Being an essay towards a calculus
of deductive reasoning. George Bell.
Bruch, M., Bodden, E., Monperrus, M., & Mezini, M. (2010). IDE 2.0: Collective intelligence
in software development. Proceedings of the FSE/SDP workshop on Future of software
engineering research (pp. 53—58). Association for Computing Machinery.
Clegg, B., & Shaw, D. (2008). Using process-oriented holonic (PrOH) modeling to increase
understanding of information systems. Information Systems Journal, 18(5), 447—477.
https://doi.org/10.1111/j.1365-2575.2008.00308.x
Cockburn, A., & Highsmith, J. (2001). Agile software development: The people factor.
Computer, 34(11), 131—133.
Cooke, I. (2020). Enhancing the IT audit report using COBIT 2019. ISACA Journal, 4, 6—10.
Coram, M., & Bohner, S. (2005). The impact of Agile methods on software project man-
agement. In J. Rosenblit, T. O’Neill, & J. Peng (Eds.), 12th IEEE international conference
and workshops on the engineering of computer-based systems (pp. 363—370). IEEE.
Debois, P. (2008). Agile infrastructure and operations: How infra-gile are you? Agile
development conference, AGILE 2008 (pp. 202—207). IEEE.
Dingsøyr, T., & Moe, N. B. (2013). Research challenges in large-scale Agile software devel-
opment. ACM SIGSOFT Software Engineering Notes, 38(5), 38—39. https://doi.org/
10.1145/2507288.2507322
Dubinsky, Y., & Kruchten, P. (2009). Software development governance (SDG): Report on
2nd workshop. ACM SIGSOFT Software Engineering Notes, 34(5), 455—456. https://
dl.acm.org/doi/10.1145/1598732.1598760
Dziak, M. (2019). COBIT (Control objectives for information and related technologies).
Salem Press.
Eckert, J. P., Weiner, J. R., Welsh, H. F., & Mitchell, H. F. (1951). The UNIVAC system. AIEE-
IRE ’51: Papers and discussions presented at the Dec. 10—12, 1951, joint AIEE-IRE com-
puter conference: Review of electronic digital computers. Association for Computing
Machinery.
Edmead, M. T. (2020). Using COBIT 2019 to plan and execute an organization's transfor-
mation strategy. ISACA. https://www.isaca.org/resources/news-and-trends/industry-
news/2020/using-cobit-2019-to-plan-and-execute-an-organization-transformation-
strategy
Esposito, A., & Rogers, T. (2013). Ten steps to ITSM success: A practitioner's guide to
enterprise IT transformation. IT Governance Publishing.
Flowers, T. H. (1983). The design of colossus. Annals of the History of Computing, 5(3),
239—252. https://doi.org/10.1109/MAHC.1983.10079
Friedenthal, S., Moore, A., & Steiner, R. (2012). A practical guide to SysML: The systems
modeling language (2nd ed.). Morgan Kaufmann.
Geambasu, C. V. (2012). BPMN vs. UML activity diagram for business process modeling.
The Bucharest University of Economic Studies, 11(4), 637—651.
Gellweiler, C. (2020). Types of IT architects: A content analysis on tasks and skills. Jour-
nal of Theoretical & Applied Electronic Commerce Research, 15(2), 15—37.
German Federal Office for Information Security. (n.d.). IT-Grundschutz [IT basic protec-
tion]. https://www.bsi.bund.de/EN/Topics/ITGrundschutz/itgrundschutz_node.html
Green, R., Mazzuchi, T., & Sarkani, S. (2010). Communication and quality in distributed
agile development: An empirical case study. World Academy of Science, Engineering
and Technology, 61, 322—328.
Gull, H., Azam, F., Haider, W. B., & Iqbal, S. Z. (2009). A new divide & conquer software
process model. International Journal of Computer and Information Engineering, 3(12),
2795—2800.
Hijazi, H., Khdour, T. J., & Alarabeyyat, A. (2012). A review of risk management in different
software development methodologies. International Journal of Computer Applications,
45(7), 8—12.
IEEE. (2000). 1471-2000 - IEEE Recommended Practice for Architectural Description for
Software-Intensive Systems. IEEE Std 1471-2000 (pp. 1—30). https://doi.org/10.1109/
IEEESTD.2000.91944
Iivari, J., Hirschheim, R., & Klein, H. K. (2000). A dynamic framework for classifying infor-
mation systems development methodologies and approaches. Journal of Management
Information Systems, 17(3), 179—218. https://doi.org/10.1080/07421222.2000.11045656
Jacobson, I., Booch, G., & Rumbaugh, J. (1999). The unified software development proc-
ess. Addison-Wesley.
Joret, S. (2019). Everything you wanted to know about ITIL® in one thousand words!
[White Paper]. AXELOS. https://www.axelos.com/case-studies-and-white-papers/every-
thing-you-wanted-know-about-itil-1000-words?u=2821ea07-1c81-40bf-8a6b-
e2fbc9a13513
Josey, A. (2018). An introduction to the TOGAF® standard, version 9.2 [White Paper]. The
Open Group. https://publications.opengroup.org/c182?
_ga=2.235274242.2088482123.1610273961-1410346690.1610273961
Kaur, R., & Sengupta, J. (2011). Software process models and analysis on failure of soft-
ware development projects. International Journal of Scientific & Engineering Research,
2(2), 1—4.
Kim, G., Humble, J., Debois, P., & Willis, J. (2016). The DevOps handbook: How to create
world-class agility, reliability, and security in technology organizations. IT Revolution
Press.
Knaster, R., & Leffingwell, D. (2020). SAFe® 5.0 distilled: Achieving business agility with
the Scaled Agile Framework. Addison-Wesley.
Kneuper, R. (2018). Software processes and life cycle models: An introduction to modeling,
using and managing agile, plan-driven and hybrid processes. Springer.
Kossak, F., Illibauer, C., Geist, V., Kubovy, J., Natschläger, C., Ziebermayer, T., Kopetsky, T.,
Freudenthaler, B., & Schewe, K.-D. (2014). A rigorous semantics for BPMN 2.0 process dia-
grams. Springer.
Larman, C., & Vodde, B. (2017). Large-scale Scrum: More with less. Addison-Wesley.
Luna, A. J. H. de O., Kruchten, P., Pedrosa, M. L. G. do E., de Almeida Neto, H. R., & de
Moura, H. P. (2014). State of the art of Agile governance: A systematic review. Interna-
tional Journal of Computer Science & Information Technology, 6(5), 121—141.
Macdermid, K. (2019). SLAs of the future: Measuring outcomes, not IT availability: ITIL 4—
The evolution of ITSM part 5. AXELOS. https://www.axelos.com/news/blogs/febru-
ary-2019/slas-of-future-measuring-outcomes-not-it-availity
Matković, P., & Tumbas, P. (2010). A comparative overview of the evolution of software
development models. International Journal of Industrial Engineering and Management,
1(4), 163—172.
Microtool. (n.d.). Das V-Modell XT. Ein Standard für die Entwicklung von Systemen [The
V-Model XT. A standard for the development of systems]. https://www.microtool.de/
wissen-online/wie-funktioniert-das-v-modell-xt/
Mishra, A., & Otaiwi, Z. (2020). DevOps and software quality: A systematic mapping. Com-
puter Science Review, 38. https://doi.org/10.1016/j.cosrev.2020.100308
Münch, J., Armbrust, O., Kowalczyk, M., & Soto, M. (2012). Software process definition and
management. Springer.
Naur, P., & Randell, B. (Eds.). (1969). Software engineering: Report on a conference spon-
sored by the NATO science committee, Garmisch, Germany, 7th to 11th October. Science
Affairs Division, NATO.
Perera, P., Silva, R., & Perera, I. (2017). Improve software quality through practicing DevOps. 2017 seventeenth international conference on advances in ICT for emerging regions (ICTer) (pp. 1–6). IEEE.
Poppendieck, M., & Cusumano, M. A. (2012). Lean software development: A tutorial. IEEE Software, 29(5), 26–32.
Porter, M. E., & Millar, V. A. (1985). How information gives you competitive advantage. Harvard Business Review, 63(4), 149–160.
Prakash, V., Senthil Anand, N., & Bhavani, R. (2012). Agile-fall process flow model: A right candidate for implementation in software development and testing processes for software organizations. International Journal of Computer Science Issues, 9(3), 457–461.
Randell, B., & Zurcher, F. W. (1968). Iterative multi-level modeling: A methodology for computer system design. In A. J. H. Morell (Ed.), IFIP Congress (Vol. 2, pp. 867–871).
Raps, S. J. (2017). Scrum of Scrums: Scaling up Agile to create efficiencies, reduce redundancies. Defense AT&L, 46(5), 34–37.
Renard, L. (2016). Essential frameworks and methodologies to maximize the value of IT. ISACA Journal, 2, 1.
Rombach, H. D., & Verlage, M. (1993). How to assess a software process modeling formalism from a project member's point of view. 1993 Proceedings of the second international conference on the software process: Continuous software process improvement (pp. 147–158).
Ruiz, P. H., Agredo-Delgado, V., Camacho, M. C., & Hurtado, J. A. (2018). A canonical software process family based on the Unified Process. Scientia et Technica, 23(3), 369–380.
Sandobalin, J., Insfran, E., & Abrahao, S. (2020). On the effectiveness of tools to support infrastructure as code: Model-driven versus code-centric. IEEE Access, 8, 17734–17761.
Sawyer, P., Sommerville, I., & Viller, S. (1997). Requirements process improvement through the phased introduction of good practice. Software Process: Improvement & Practice, 3(1), 19–34.
Scaled Agile, Inc. (2019). Achieving business agility with SAFe® 5.0 [White paper]. https://www.scaledagile.com/resources/safe-whitepaper/
Schwaber, K., & Sutherland, J. (2020). The Scrum guide. The definitive guide to Scrum: The rules of the game. Ken Schwaber and Jeff Sutherland.
Shukla, A. K., & Saxena, A. (2013). Which model is best for the software project? "A comparative analysis of software engineering models." International Journal of Computer Applications, 76(11), 18–22.
Sommerville, I. (1996). Software process models. ACM Computing Surveys, 28(1), 263–271.
Sutherland, J. (2001). Agile can scale: Inventing and reinventing SCRUM in five companies. Cutter IT Journal, 14(12), 5–11.
Takeuchi, H., & Nonaka, I. (1986). The new new product development game. Harvard Business Review, 64(1), 137–146.
Tanner, M., & Dauane, M. (2017). The use of Kanban to alleviate collaboration and communication challenges of global software development. Issues in Informing Science & Information Technology, 14, 177–197. https://doi.org/10.28945/3716
The Open Group. (n.d.). About the TOGAF standard, version 9.2. https://www.opengroup.org/togaf
Thejendra, B. S. (2014). Practical IT service management: A concise guide for busy executives (2nd ed.). IT Governance Publishing.
Theocharis, G., Kuhrmann, M., Münch, J., & Diebold, P. (2015). Is Water-Scrum-Fall reality? On the use of Agile and traditional development practices. In P. Abrahamsson, L. Corral, M. Oivo, & B. Russo (Eds.), Product-Focused Software Process Improvement: 16th International Conference, PROFES 2015 Proceedings (pp. 149). Springer.
Xu, P., & Ramesh, B. (2008). Using process tailoring to manage software development challenges. IT Professional, 10(4), 39–45.
Zuse, K. (1980). Installation of the German computer Z4 in Zurich in 1950. IEEE Annals of the History of Computing, 2(3), 239–241. https://doi.org/10.1109/MAHC.1980.10028
Appendix 2
List of Tables and Figures
Example of Classes
Source: Venter, 2021.
Example of Collaborations
Source: Venter, 2021.
Example of an Interface
Source: Venter, 2021.
Example of a Package
Source: Venter, 2021.
Example of a Note
Source: Venter, 2021.
A Value Chain
Source: Venter, 2021.
A Process Landscape
Source: Venter, 2021.
BPMN Elements
Source: Venter, 2021.
The V-Model
Source: Jamsa & Harkiolakis, 2019.
A Task Board
Source: Jamsa & Harkiolakis, 2019.