Quasar 3.0
A Situational Approach to Software Engineering
Version: 1.1 – Copyright ©2012 Capgemini.
This Patternbook is protected by copyright. The online version may be printed and distributed in hardcopy free of charge. Any other duplication and distribution, as well as modifications, translations and reproductions, require the prior written approval of the copyright owner.
Foreword
Quasar has for years symbolized and embodied the know-how of our top software
engineers. The sd&m school of software engineering thought has for years
developed the best engineers and run the most challenging projects.
Quasar 3.0 embodies and symbolizes the pursuit, renewal and development of this
glorious tradition. Now the Custom Software Development tradition matches
Capgemini’s own development heritage, and enriches it substantially – opening
new horizons for this know-how.
Quasar 3.0 and its situational approach illustrate the two ways we have to build on
our software engineering tradition.
The first one is the way of continuity. Engineering principles and engineering
discipline are always needed for good software development – from a complex
IBM 360 MVS / IMS application to the slickest app to be found in the store.
The second way is the way of change, to stay in tune with the new world of
technology. This requires radical moves – to cope with applications ranging from
well known trains and buses, to adventurous cars and scooters.
Allow me to congratulate and thank the authors and editors of this book: they
show the courage to innovate, and the stamina to build on years of applications
and systems intimacy.
Paris, December 2011
Pierre Hessler
Pierre Hessler joined Capgemini in 1993. In his various roles he has had a long and close relationship with the former sd&m; notably, he chaired its supervisory board as Capgemini's representative. He was appointed a member of Capgemini's Board of Directors in May 2000; after leaving his general management functions in 2002, he became a non-voting director. He maintains relations with some of Capgemini's major clients and is one of the fathers of TechnoVision (see [8]).
Foreword
It is well accepted that, over the last decade, IT has become a key technology in all business areas. This phenomenon has broadened the market opportunities for our business as an IT service provider, even though parts of IT have meanwhile become a commodity. The global market situation therefore forces our potential and current customers to ask for efficient and timely solutions to the business requirements that impact their IT. These requirements span a wide range of IT solutions, from smart web-based applications to huge integrated application landscapes, and from migrating long-standing legacy applications to building applications based on novel technologies.
Thus, in order to stay competitive and successful in this highly dynamic IT market, we as an IT solution provider had to evolve to meet our customers' requirements. The driving force in the past was our software engineering and architecture approach, which mainly deployed a standardized model for designing and implementing software solutions. This general-purpose process model is still our foundation, but it has to be enhanced to cope with the diversity of our customers, their business domains, and the required IT solutions aligned to the latest technology. This book contains an appropriate answer to the changing situation and to the different diversity dimensions for developing customised IT solutions in an efficient and timely manner. It is Quasar 3.0, a customisable engineering model, which enables our software engineers to tailor the methodical approach for developing suitable IT solutions for today's business requirements. Quasar 3.0 is an important step forward in the continuation of our engineering-based approach to software development, built on more than 20 years of software engineering experience within our company. It has been developed by our CSD Research team with support from many experienced software engineers. It not only reflects the maturity of our software engineers and architects, but is also in line with our tradition of sharing our experiences with our customers, the software engineering community, and the academic world. I am absolutely convinced that this book will be of great benefit to all interested stakeholders.
I would like to thank all the authors for their efforts in writing this book. Special
thanks go to Dr. Marion Kremer, our head of CSD Research, as well as to Prof.
Dr. Gregor Engels, our scientific director of CSD Research.
Munich, December 2011
Dr. Uwe Dumslaff
Chief Technology Officer and Member of the Management
Committee Application Services Capgemini Germany
This book
Industrial software development has to cope with constantly increasing cost, time,
and quality requirements, which require an effective, efficient, and economic use
of available resources for developing novel IT-solutions. CSD Research as a
division of Capgemini Germany provides the software development methodology
Quasar as an answer to this challenge. Quasar stands for Quality Software
Architecture (see [32]). The goal of Quasar is to provide all methodical means for
software engineers and enterprise architects to produce their results in a very
professional way.
Quasar 3.0 adds situational aspects to Quasar. It comprises a tailoring method that allows the customisation of a software development method according to the needs of a specific project situation. It supports composing a set of methodical building blocks into a full-fledged software development method. Thus, instead of deploying a company-wide general-purpose software development method, Quasar 3.0 comes with a framework for the definition and use of situation- and project-specific development methods. We therefore also talk about eee@Quasar (efficiency, effectiveness and efficiency of price), accentuating the fact that efficiency and its siblings mean different things in different contexts.
The objective of this book is twofold. First, it provides an overview of the basic framework of Quasar 3.0, i.e. its customization method as well as its constituents comprising domains, disciplines, and activities. Second, to show its distinctiveness compared with other development methodologies, it illustrates some of the most notable features of Quasar 3.0 using examples of particular engineering patterns for the development of IT landscapes and custom software solutions. An extensive description of each of Quasar 3.0's disciplines can be found in separate documents that are referenced at the end of this book.
The aim is not to substitute good books on software engineering, but to help the reader through the rising complexity in this area by providing concrete structuring means based on concrete project experience.
Even though Quasar was originally developed for internal use at Capgemini, this book will hopefully provide new insights to software engineers, both business architects and technical architects, undertaking software development inside and outside Capgemini.
This book describes the current version of our ongoing work. So if you are
interested in further information, please contact us.
We hope you will enjoy reading our book!
The team of authors
Authors

Prof. Dr. Gregor Engels – Overall concept, eee@Quasar, Domains Design and Development
Prof. Dr. Gregor Engels is the Scientific Director of CSD Research at Capgemini and holds a chair of computer science at the University of Paderborn. His research interests are software architectures, model-based software development, and quality assurance techniques.

Dr. Marion Kremer – Overall concept, eee@Quasar
Dr. Marion Kremer, a computer scientist with a PhD in Computer Science, has been working for 15 years in customer projects in different roles with Capgemini (formerly sd&m). She has been heading CSD Research since the end of 2009. Her major research interest is software engineering and how it can help to effectively fulfil customer needs.
Thomas Nötzold – Domains Requirements Engineering and Analysis
Thomas Nötzold has been working for 25 years as a team and project lead, software architect and consultant, mainly for banking and insurance companies. His focus points are requirements engineering, specification and model-driven development.

Thomas Wolf – Domains Design and Development
Thomas Wolf has been working for Capgemini for 11 years as a developer, consultant and senior technical architect, mainly for the automotive industry. He has designed frameworks that help in building component-oriented software for large-scale projects. He leads a group of top Capgemini architects who distill their experience of building flexible, long-lived software architectures into Capgemini's architecture guidelines for application design.
Dr. Karl Prott – Domain Software Configuration Management, Evolutionary Quasar Methodology
Dr. Karl Prott has been working with Capgemini since 1998 and has been very successful in designing and handling the architecture of big software projects. Furthermore, he is an expert in the service-oriented realization of cross-application processes and the design of complex application landscapes.

Jörg Hohwiller – Domain Software Configuration Management
Jörg Hohwiller has been working for Capgemini for almost 10 years. He has gained in-depth knowledge of all phases of software engineering. His passion is designing software architectures. Currently he is working on the business process management approach to increase the flexibility and efficiency of IT solution development.
Alexander Hofmann – Domain Analytics
Alexander Hofmann is working as a Business Area Head with Capgemini. His areas of focus are technical design and project management for big software development projects. In addition, he leads the Competence Centre Architecture. From 2006 to 2009 he headed the CSD Research department of Capgemini, developing, among other things, Quasar Analytics methods and tools.

Dr. Andreas Seidl – Domain Analytics
Dr. Andreas Seidl worked for Capgemini as a test manager and business analyst before joining CSD Research in 2010 to promote analytical quality assurance.
Dr. Diethelm Schlegel – Evolutionary Quasar Methodology
Dr. Diethelm Schlegel is part of Capgemini's CSD Research team. In his professional career of more than 20 years, he has acquired expert know-how in many different fields such as software architecture, computer networks, IT security, and project management. At the moment, he explores the development of methods and best practices in the area of service-oriented architectures and business process management.

Oliver F. Nandico – Domain Enterprise
Oliver F. Nandico is a Certified Enterprise Architect and works as Principal Enterprise Architect at Business Technology of Capgemini Germany, where he leads a team of architects. His 20 years of professional experience comprise the shaping of architectures of entire IT landscapes, the design of large information systems and the introduction of enterprise architecture management, covering all aspects of architecture. He also advises on the subjects of enterprise integration as well as architecture management and governance.
This book is based on Quasar 2.0 – Software Engineering Patterns [13]. Most parts of the new version have been intensively reworked and reorganized, and some chapters are entirely new: the domain structure is now more closely aligned with RUP (especially in the context of software configuration management), the offshore topic has been integrated into the other domains, and both the situational aspects and the BPM topic have been introduced. Even though this book is definitely a rework of [13], it cannot and does not want to hide its roots. We therefore explicitly name the authors of the predecessor version in this chapter as well.
Authors of [13]

Prof. Dr. Gregor Engels, Alex Hofmann – Quasar 2.0 Overall Concept
Bettina Duwe – Artifact Model, Measurement Discipline
Ralf S. Engelschall – Development Subdomain, Assembly, Configuration
Oliver Flöricke – Requirements Subdomain, Analysis Discipline
Dr. Frank Salger – Offshore CSD (OCSD) Domain, Analytics Subdomain
Melanie Wohnert (geb. Späth) – Analytics Subdomain
Johannes Willkomm – Enterprise Domain
Thomas Wolf – Design Discipline
Contents

1  Motivation and Introduction
   1.1  Demanding market requirements
   1.2  Quasar 2.0: The History and Basic Concepts
   1.3  Quasar 3.0: An Extension of Quasar 2.0
   1.4  Future prospects
   1.5  Document Structure

2  The Quasar 3.0 Framework
   2.1  Domains and Disciplines: the Backbone of Quasar
   2.2  Enterprise Architecture Management and System Development
   2.3  The Situational Approach
        2.3.1  Relevant Characteristics and Their Clusters
        2.3.2  Definition of Types
        2.3.3  The Integration
   2.4  Usage of Quasar 3.0 in a concrete project

3  The Requirements Domain
   3.1  Basic Methodology
   3.2  The Industrialised and the Manufactural Approach
   3.3  Offshore Methodology

4  The Analysis & Design Domain
   4.1  The Analysis Discipline
        4.1.1  Basic Methodology
        4.1.2  Industrialized Methodology
        4.1.3  Manufactural Methodology
        4.1.4  Offshore Methodology
   4.2  The Design discipline
        4.2.1  Basic Methodology
        4.2.2  Industrialized and Manufactural Approach
        4.2.3  Offshore Methodology

5  The Development Domain
   5.1  Basic Methodology
        5.1.1  The Implementation discipline
        5.1.2  The Debugging discipline
   5.2  Industrialized and Manufactural Approach
   5.3  Offshore Methodology

6  The SCM Domain
   6.1  The Configuration Discipline
        6.1.1  Version Control
        6.1.2  Issue Tracking
        6.1.3  Artefact Management
        6.1.4  Version Identification
   6.2  The Build & Deployment Discipline
        6.2.1  Build Management
        6.2.2  Deployment Management
   6.3  The Release & Change Discipline
        6.3.1  Change Request Management
        6.3.2  Release Management

7  The Analytics Domain
   7.1  The Quality Inspections discipline
        7.1.1  Basic Methodology
        7.1.2  Offshore Methodology
   7.2  The Measurement discipline
        7.2.1  Basic Methodology
        7.2.2  The Offshore Methodology
   7.3  The Test discipline
        7.3.1  Basic Methodology
        7.3.2  Manufactural Methodology

8  The Evolutionary Approach
   8.1  Impact on Requirements Engineering
   8.2  Impact on Analysis
   8.3  Impact on Design
   8.4  Impact on Development
   8.5  Impact on Software Configuration Management
   8.6  Impact on Analytics
   8.7  Impact on Operations and Monitoring
   8.8  Impact on Project Management

9  The Enterprise Domain
   9.1  The Term Enterprise Architecture
   9.2  The Quasar Enterprise® Method
   9.3  Architecture Context
   9.4  The Business Architecture Discipline
   9.5  The Information Architecture Discipline
   9.6  The Information System Architecture Discipline
   9.7  The Technology Infrastructure Discipline

A  Appendix: Situational Software Engineering
   A.1  Introduction
   A.2  The framework
        A.2.1  Development of the framework
        A.2.2  Usage of the Framework

Bibliography
1 Motivation and Introduction

1.1 Demanding market requirements
Capgemini’s customers employ modern and dynamic business models that
function in large IT landscapes. In order to stay competitive, they must be able to
quickly deliver innovative new business products to the dynamic market. The
ability to change business processes at frequent intervals is essential to their
success. Therefore, they need high-quality IT solutions which put them ahead of
the competition in the mission critical areas of their businesses.
All in all, information technology has become a key success factor for the fast
evolution in the business world.
Specifically, IT solutions must nowadays be able to respond to the following
requirements:
Alignment of business and IT: The IT landscape and its systems have to be
flexible enough so that they can support the rapid integration of new
business requirements. The software development approaches and
architectural design methodologies deployed must provide for the
transformation of business requirements into IT solutions.
Low cost of ownership: Build for change is the design principle that must
govern applications and IT landscapes, so that the cost of maintenance and
operation is minimized. Solutions have to be structured in accordance with
well-defined architectures that comprise replaceable standard and custom
components having clearly defined interfaces.
Decreased release time: Today, the release milestones for software production are directly linked to our customers' market milestones. Therefore, effective software development processes and appropriate tools and methods are needed.
Increased scope: Custom software solutions are being integrated into
large-scale, heterogeneous IT landscapes. Any software development
approach must be based on a global overview of an IT landscape that takes
into account both business requirements and IT requirements.
Increased diversity: Software solutions can be differentiated in an increasing
number of dimensions. They might be, e.g., web-based or embedded, used
by single or multiple users, used by experienced or occasional users,
deployed in local environments or in a network environment, stand-alone or
as part of an integrated solution. Thus, an effective and efficient software
development has to take into account this diversity of the software solution
to be developed.
Increased complexity: The integration of standard package and custom software
solutions is becoming increasingly important. Software developers must be
familiar with both open-source and vendor product portfolios, as well as
effective techniques for enabling robust service-based integration with
custom software solutions.
Increased demand for standard compliance: In order to allow for legal
compliance, high interoperability, reusability as well as understandability of
components, the importance of using technology and language standards
for the realization of software solutions is constantly increasing.
Distributed software development: Today, software is being developed in
distributed teams. Our strategic decision with regard to production depth
permits attractive pricing models and addresses today’s increased
competition by offering flexible concepts for the distributed development of
software solutions. Additionally, our customers are demanding
collaborative delivery models that include more than one development
partner.
This list makes clear that quality is not an absolute notion, and that not every case calls for software with a perfect algorithm. In our sense (and in that of EN ISO 9000:2005), quality is therefore a measure of how well the concrete requirements are met.
1.2 Quasar 2.0: The History and Basic Concepts
These changing and ever more competitive market requirements have a substantial impact on the way software is being developed. An industrialized, engineering-like software development process is needed to meet all these requirements. In particular, a high quality of the developed software products has to be guaranteed, while resources such as budget, time and personnel are deployed in an efficient, effective and cost-conscious way.
Capgemini has striven to develop high-quality custom software solutions since its
inception decades ago. Subsequently, an important asset has been the
company-wide agreement on the common architectural style for structuring
software systems that is known as Quasar (Quality Software Architecture, see
[32]).
As any industrial software development methodology needs to be based on
existing standards, e.g. process standards like RUP™ (s. [22] and [24]), quality
standards (ISO 9126), modelling standards (e.g. UML), architectural frameworks
like TOGAF or IAF, and de facto standards like ISTQB and ISAQB, Quasar 2.0 is
based on these open standards, too. This standards-driven approach is a central
issue for our customers.
To enable it to cope with today’s demanding market requirements, Quasar has
been enhanced by transforming it from a design centric framework towards a fully
fledged software development methodology for custom software development
termed Quasar 2.0 (s. [13]).
In accordance with RUP™, Quasar 2.0 is structured into domains such as Requirements and Analysis & Design (see figure 1.1). It offers concrete methodological guidelines
and tools for the entire software development process. Quasar 2.0 applies
particular construction techniques (light blue in figure 1.1) for designing,
developing, building and deploying both IT landscapes and custom applications,
as well as analytical techniques (dark blue in figure 1.1) for the parallel assurance
and control of quality.
Figure 1.1: Quasar 2.0 (reduced to the core)
©2011 Capgemini. All rights reserved.
The activities are the backbone of the disciplines. Each activity is characterized by its interface, i.e. its input and output artefacts, as well as by several methodical building blocks serving as accelerators: a method description, a set of best-practice engineering patterns, tailored tools and utilities, and pre-fabricated software components or frameworks.
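To make this interface notion concrete, the following is a minimal sketch (the activity and artefact names are hypothetical illustrations, not taken from the Quasar catalogue) of activities as transformations from input artefacts to output artefacts, with accelerators attached:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """An activity's interface: named input and output artefacts,
    plus the methodical building blocks that accelerate it."""
    name: str
    inputs: list[str]       # input artefact types
    outputs: list[str]      # output artefact types
    accelerators: list[str] = field(default_factory=list)

def chainable(producer: "Activity", consumer: "Activity") -> bool:
    """True if some output artefact of `producer` is an input of `consumer`,
    i.e. the two transformation steps can be linked in a process."""
    return bool(set(producer.outputs) & set(consumer.inputs))

# Hypothetical activities, for illustration only
specify = Activity("Specify system",
                   inputs=["requirements document"],
                   outputs=["system specification"],
                   accelerators=["specification template", "review checklist"])
design = Activity("Design architecture",
                  inputs=["system specification"],
                  outputs=["architecture description"])
```

With this shape, `chainable(specify, design)` holds because the specification produced by the first activity is consumed by the second, which is exactly the artefact-driven linkage the text describes.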
The engineering patterns are the result of the intensive investigation and
identification of commonalities in development projects previously undertaken by
Capgemini, combined with the latest insights from software engineering research.
They make the method easily accessible and immediately usable, and differentiate
it from comparable generic approaches.
Quasar 2.0 separates the process-related aspects from the particular software
development methodology. The discipline-oriented structure permits a clear
separation of concerns with respect to tasks and responsibilities within the
development process. Accordingly, resources such as personnel, budget and time
are all expended in an optimized, cost efficient manner. We are able to offer
flexible process models that ensure a well-balanced, distributed delivery of
software components. We achieve a continuous alignment of requirements and
solutions that supports (if needed) traceable links between all development
artefacts, thereby preventing the possibility of an overlooked or misunderstood
functionality arising within a solution. Right from the beginning, our Build for
change approach takes future evolution into account, resulting in a low cost of
ownership. Accordingly, Quasar 2.0 does not represent a monolithic approach.
Instead, concrete methodical building blocks can be exchanged where there is a need, for example when a customer's IT department insists on using a specific tool in a customer project.
For further enhancement of efficiency (i.e. cost reduction) Capgemini added the
Rightshore® approach to our delivery model. This means we find the right shoring
model with resources onsite, from near-shore locations and from far-shore
locations. Going truly global adds a new class of problems: different languages,
cultures and time zones can have a major impact on a project. In order to cope
with requirements resulting from distributed and off-shore developments, Quasar 2.0 has already been enhanced with methodical elements that address the increased risks associated with offshore projects. System specifications might not
be detailed enough for offshore developers because they lack contextual
information. Development is based on diverging understandings of what code
quality consists of. Project management is complicated due to differing estimation
methods on the part of the onshore and offshore teams. The handover of
deliverables and responsibilities between the distributed teams happens in an
unsynchronized and uncontrolled manner. Knowledge transfer becomes a major
issue. All this makes the development process more difficult to manage, and could
put the success of the project at risk. Thus, adapted and novel methods and tools
have been defined and added to Quasar 2.0 under the name Offshore Custom
Software Development Method (OCSD Method). This method enables us to effectively control production depth and to assure overall quality during the entire project across all sites.
1.3 Quasar 3.0: An Extension of Quasar 2.0
As we found that one methodology does not fit all software engineering projects, Quasar 3.0 – described in this book – adds the situational approach to Quasar 2.0. It supports an optimal choice between efficiency, effectiveness, and cost-effectiveness, resulting in economic development processes. The overall goal is to provide, for each project, the software development method that best fits the needs of that project while retaining as much standardization as possible.
To achieve this situational dependency, each of the methodical building blocks of Quasar 2.0 mentioned above is tagged with the situations it is appropriate for (multi-tagging is possible). A step-wise selection process supports the architect or project leader in composing a concrete method bundle out of these methodical building blocks. In parallel, analogous criteria are used to identify the right software development process model.
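The tagging-and-selection idea can be sketched as follows. This is a minimal illustration; the catalogue entries and situation tags are hypothetical, not Quasar's actual building blocks:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BuildingBlock:
    """A methodical building block, tagged with the situations it suits.
    Multi-tagging is possible, hence a set of tags."""
    name: str
    discipline: str
    tags: frozenset

def select_bundle(blocks, situation):
    """Compose a method bundle: keep every block whose tags match at
    least one aspect of the project situation; blocks tagged 'basic'
    are assumed to apply in every situation."""
    return [b for b in blocks if "basic" in b.tags or b.tags & situation]

# Hypothetical catalogue entries, for illustration only
catalogue = [
    BuildingBlock("use-case specification", "Requirements", frozenset({"basic"})),
    BuildingBlock("spec handover checklist", "Requirements", frozenset({"offshore"})),
    BuildingBlock("model-driven templates", "Design", frozenset({"industrialized"})),
]

# An offshore project picks up the basic blocks plus the offshore-tagged ones
bundle = select_bundle(catalogue, {"offshore"})
```

The step-wise selection process described above would refine such a filter iteratively, characteristic by characteristic, rather than in a single pass; the sketch only shows the core matching of situation tags against block tags.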
The offshore extension OCSD of Quasar 2.0 can be viewed as an approach to
react to the requirements of a special project’s situation, in this case of handling
offshore development. But, offshore projects are only an example of possible
diversity of a development project. Thus, the next step in enhancing and
improving Capgemini’s software development method is to generalize this idea
and to develop an extended methodology, termed Quasar 3.0, which consists of
methodical building blocks that can be composed into a concrete software
development process model and method according to the current project’s
situation.
Augmenting specifications to the point where they can be interpreted automatically somewhat breaks the concept of artefacts being used as input to produce something different as output of a concrete method: augmenting artefacts results in input artefacts that are the same as the output artefacts for a given domain. In addition, at least part of the development is done implicitly within the analysis, so the semantics of analysis and development change. To reflect this appropriately, we introduced an additional evolutionary approach into Quasar 3.0.
1.4 Future prospects
Quasar 3.0 is not an academic exercise, but the result of a joint effort on the part
of Capgemini as a whole. It has been defined and described by members of
Capgemini’s CSD Research team, which consists of experienced Capgemini
software engineers and architects. It has been reviewed and commented on by
numerous software engineers and IT architects working at Capgemini.
We are sure that Quasar 3.0 is not the end of the methodology's evolution. As the underlying standards develop further, Quasar will change, too. This book will help to continue the joint effort, to improve our software engineering methodology, and to cope with future requirements of the business.
1.5 Document Structure
The following chapter describes in detail the structure of Quasar 3.0 and introduces our situational approach for its engineering part. On this basis, we describe each of the engineering domains together with their artefacts and methodical building blocks, in a structure reflecting the situational approach. The focus of this book is not on all building blocks, but on those patterns that have to be considered for efficient, effective and economical software development with Quasar 3.0. The patterns provided are described in the following way:
Pattern 1: Example Pattern
Short description of the Pattern
After the pattern itself, we typically provide further information on the pattern, its usability etc.
The evolutionary approach is dealt with in a separate chapter because of its integrative structure across the domains. Lastly, we describe the Enterprise domain, which does not have the same situational structuring as the other domains because, up to now, we have not encountered the need for it.
Capgemini follows an open book approach, sharing its knowledge and experience
of development methods with its customers. This is supported by an open
ontology in which all the terms used, plus the artefacts and responsibilities, are
defined. In conjunction with our customer’s input, this enables implementation of
well defined cooperation models, with a high level of transparency during the
development process; in other words, it helps everyone to speak the same
language.
In the appendix, you will find more about the underlying conceptual and theoretical framework.
2 The Quasar 3.0 Framework
In this chapter we present our approach to situational software engineering,
Quasar 3.0.
2.1 Domains and Disciplines: the Backbone of Quasar
Traditional software engineering approaches structure the tasks to be fulfilled in an enterprise or a custom software development (CSD) project into domains. In accordance with this, we follow the IBM™ Rational Unified Process™ (RUP™), both a Capgemini and an industry standard (s. [24] p. 45 and [22]). The Quasar backbone is formed by its constituent disciplines. All Quasar disciplines are grouped into subdomains and domains. Figure 2.1 details the Quasar 3.0 domain structure. It consists of four domains: Enterprise Architecture Management (EAM), System Development (SD), Project Management (PM), and Operations.
Figure 2.1: The Quasar 3.0 domain model
©2011 Capgemini. All rights reserved.
This structure covers other types of projects as well. In particular, Quasar may be
applied to projects where the solution is based on packages or on the integration of
customised solutions with modernized legacy systems. This does not only apply
to new or different areas of software engineering, but also to the different methods
and tools provided for each discipline. Therefore, "System Development" refers
to the respective domain and is not limited to custom solution development.
The focus is on shaping complete IT landscapes and on the custom development
of individual systems. Currently, maintenance and operations are not covered, but
they are taken into consideration while defining our methodology and approach.
This results from the strong impact of these Quasar domains on the requirements
of the earlier phases.
Each of the Quasar 3.0 domains is structured into disciplines that consist of
activities. They operate on input artefacts which the project team uses to construct
the output artefacts. These transformation steps are supported by
- discipline-specific methods,
- sets of best-practice engineering patterns,
- tailored tools and utilities, and
- pre-fabricated software components or frameworks.
These supporting elements are the building blocks of our methodology. For
special purposes we cluster them into sets, which then form the methodology for
the particular purpose.
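The structure just described — domains containing disciplines, and disciplines containing activities that transform input artefacts into output artefacts — could be modelled as in the following sketch. The class layout and the example instances are illustrative assumptions for this book's terminology, not an actual Quasar 3.0 artefact.

```python
# Illustrative sketch of the Quasar 3.0 structure described above:
# domains contain disciplines, disciplines contain activities, and
# activities transform input artefacts into output artefacts.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    inputs: list = field(default_factory=list)    # input artefacts
    outputs: list = field(default_factory=list)   # output artefacts

@dataclass
class Discipline:
    name: str
    activities: list = field(default_factory=list)

@dataclass
class Domain:
    name: str
    disciplines: list = field(default_factory=list)

# Example instance using names from the text (content illustrative).
sd = Domain("System Development", [
    Discipline("Requirements Engineering", [
        Activity("Elicit requirements",
                 inputs=["business services", "business vision"],
                 outputs=["requirements specification"]),
    ]),
])
```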
The disciplines of the Enterprise Architecture Management (EAM) domain focus
on defining the requirements and architecture of an IT landscape while the custom
solution development disciplines support the development and deployment of an
individual software system.
Understanding the challenges and requirements of our customers is the purpose of
the Requirements Engineering domain, which incorporates the activities
necessary for professional requirements management.
Based on the resulting requirements artefacts, the disciplines of the Analysis &
Design domain include activities for modelling and structuring the solution.
The disciplines of the Development domain define activities for implementing and
debugging information systems efficiently.
Following this, applications are built and deployed by the activities of the
disciplines within the Software Configuration Management domain. Additionally,
product deployment, release and change management are supported. The
resulting artefacts have to be constantly checked to ensure that they meet quality
requirements. This is a mission critical task especially in a distributed
development scenario.
The disciplines of the Analytics domain comprise the associated quality
assurance activities.
This structure of Quasar 3.0 domains and disciplines is quite similar to RUP™.
The differences between RUP™ and Quasar 3.0 are the following:
- The Enterprise domain includes the business and information architecture and exceeds the respective scope of RUP™; for details see section 2.2.
- Development corresponds to Implementation in RUP™, extended by Debugging.
- Analytics includes Test, but also additional activities.
- Deployment, Configuration and Change Management in RUP™ are reorganized due to the System Development focus (see figure 2.2):
  - In Quasar we recognise Build & Deployment Management instead of just Deployment, because we consider the build to be critical and non-trivial.
  - Configuration & Change Management is separated into Configuration Management and Release & Change Management because of their different nature.
  - We skipped Environment, because these activities are mapped to other domains.
- We have added Operations, because the operations requirements have to be taken into account during analysis, design and analytics. To be compliant with an industry standard, we chose ITIL [1] for structuring the Operations domain. We have refrained from describing it in detail, as it is not part of the core software engineering.
- Project Management (PM) is structured according to PMI [28], considering that we talk about relevant process groups instead of domains. The selected software development process model guides the combination of all activities within a discipline. The appendix shows further details; [7] provides more information about our understanding of PM.
We have added an explicit super-domain for software development (SD) to make
the cooperation between the domains within SD, the PM process groups and
Operations more apparent.
RUP™                                Quasar 3.0
Deployment                          Build & Deployment
Configuration & Change Management   Configuration
                                    Release & Change
Figure 2.2: Comparison of RUP™ and Quasar
2.2
Enterprise Architecture Management and System Development
To make the right architectural decisions it is necessary to have a close look at
the entire IT landscape. But the design and evolution of an IT landscape cannot
be achieved without knowledge of the individual information systems.
Therefore, we see Enterprise Architecture Management (EAM) and System
Development as closely related.
One major task of EAM is the description of the business and information
architecture. In the context of the IT landscape we define architecture principles
which we later apply within the system development model, i.e. in Analysis &
Design. An example of such a principle: ensure as little coupling as possible
between the domains while at the same time keeping each domain self-contained.
The level of detail for the EAM depends on its purpose. When the required level
of detail is achieved, the scope changes to individual solutions. The use cases
derived from information system services become the requirements for individual
information systems, regardless of whether they are implemented as
custom-developed or package-based solutions (see figure 2.3).
Figure 2.3: The constructive flow illustrates the transformation of content between the
disciplines
©2011 Capgemini. All rights reserved.
This process is similar to the one used for the design of a concrete system, as in
both cases we have stepwise refinement (see figure 2.4).
Figure 2.4: The Way from outside a system to the inside
©2011 Capgemini. All rights reserved.
The top-down process for the development of an IT landscape is incomplete on
its own: changes to the requirements discovered during the design of information
systems have to be fed back to the enterprise level.
This two-way approach becomes crucial for the implementation of a
service-oriented architecture, where common integration patterns are defined at
the enterprise level. Changing the functional scope of a service may affect the
whole IT landscape, e.g. when considering BPM.
2.3
The Situational Approach
Our situational approach focuses on building individual information systems. We
do not differentiate EAM projects, because up to now we have used the same
methods and tools for all of them.
Our situational software engineering approach consists of two kinds of elements:
- the software development process models and the software engineering methods, tools etc. that are used to build the software in the given environment/situation; these are methodical building blocks clustered along the Quasar 3.0 domain structure
- the software projects/systems themselves
We categorize both to govern the development process and to achieve efficiency,
effectiveness and price economics (eee@Quasar). Our goal is to help identify the
right building block out of two or more given ones for a given situation. So we
differentiate the production means on the one hand and the systems on the other,
so that waste is avoided.
Figure 2.5 shows this in a more allegorical way: we will not use a hammer for
every project, but only when appropriate. An example within software
engineering: it may be helpful to have more than one methodology for the
analysis domain to deal with different situations, such as the function-identical
migration of a software system or the design of innovative functionality together
with the customer.
Figure 2.5: Building Blocks versus Systems (allegorical)
©2011 Capgemini. All rights reserved.
Based on our experience, this results in a differentiation between a more
industrialized and a more manufactural approach:
- Industrialized approach: a high degree of division of labour and optimization of single steps, and therefore efficiency, reproducibility, traceability and standardization, combined with a low in-house production depth
- Manufactural approach: a focus on flexibility and creativity, with only little standardization
In the following chapters we show which attributes are used to differentiate the
methodical building blocks and the systems and bring these elements into relation.
This chapter follows the approach presented in a more abstract way in the
appendix.
2.3.1
Relevant Characteristics and Their Clusters
We categorise our methodical building blocks along the following parameters:
- Way of Requirements Engineering and Quality Assurance: In the case of a greenfield project, where there is no blueprint (i.e. no old system) and where business requirements are still to be defined with the customer, requirements engineering is methodically completely different from a migration project where a 1:1 replacement of an existing system is needed. The same is valid for configuration projects, where existing (standard) software is to be adapted, or for integration projects, where a multiplicity of existing services is composed into a new application. The equivalent is valid for the other domains.
- Necessary sustainability of architecture: Usually a project team with low skill sets requires a much stricter packaging of extremely complex facts, which a highly skilled team could do without. In the first case, methodical building blocks are required that guarantee, for example via encapsulation, technical prototypes etc., that a stable architecture with the appropriate characteristics is developed. The same is valid if high demands are made on non-functional requirements like performance. Here it makes sense to plan an explicit design phase in which the architecture of the future system is developed and examined against the requirements. The superordinate design principles have to be specified in order to retain their consistency over the entire engineering process.
- Necessary sustainability of technology: If we have to ensure that the technology used for the system will have a long lifetime, then the usage of proven and widespread technology with slow upgrade cycles is sensible. The underlying technical basis also has to be taken into account, as it is not helpful to upgrade the entire operating system for a minor business change. One major challenge in this area is forecasting the lifetime of the products used. The usage of a framework that determines the entire architecture of a system has to be thought through well if the system has to be used for 20 years and the source code of the framework is not accessible. On the other hand, it is not necessary to restrict oneself to "old-fashioned" technology if only a short lifetime is required. This requirement is often difficult to realize, especially for the user interface, where the desired complexity and appearance can very often only be achieved efficiently with modern components and frameworks.
- Kind and amount of necessary documentation: The sourcing in a project and in the maintenance phase heavily determines how much and what kind of documentation is necessary. In purely onsite projects, where all members have at least a common cultural and linguistic understanding, the requirements for documentation are lower than in projects where these prerequisites do not hold or where team members are staffed by phase rather than by component (see [30]).
The same holds for the documentation of decisions: depending on the context, it is sufficient to document only the decisions themselves, or the complete decision process has to be written down.
- Complexity of the project setting: This complexity results from the complexity of the methodical building blocks, the maximum team size and the shoring concept. In a long-term project it can be economically meaningful to use a more complex development methodology with appropriate tools if this reduces the pure development and ramp-up cost for new colleagues.
Similarly, a team of 100 co-workers needs a different methodology than a team of two colleagues who sit at one table.
We differentiate several software development process model types (see [17]).
These models are based on industrial standards (see figure 2.6).
Figure 2.6: Different Software Development Process Models
©2011 Capgemini. All rights reserved.
- Agile: an evolutionary approach with time boxes and iterations, following the agile manifesto and agile principles
- Iterative: a sequence of non-overlapping development cycles (iterations) of less than 3 months, prioritized by risk and business value
- Incremental: piecewise (increment by increment) realization of functionality
From our experience it is clear that we should look not at the characterisation
of projects but at the systems and their environment, because we have to
take into account not only the initial project but also the maintenance phase.
To find out which systems fit the industrialized approach and which the
manufactural approach, we differentiate our systems along the characteristics in
figure 2.7. The sourcing model hereby may also include internal sourcing, for
example if testers come from a different division.
Figure 2.7: Characteristics of systems

Business or problem characteristics (each rated from low to high):
- degree of substitution of an existing system (low = green field)
- requirements dynamics
- number of differentiating business services
- problem size or number of services to be provided
- user interaction complexity
- neighbouring systems complexity
- dependency between components or features
- safety requirements
- lifetime

Environment / context:
- client culture: agile - plan-driven
- expected customer participation: low - high
- possible degree of off-shore (low = all on-shore): low - high
- possible degree of outsourcing (low = inhouse): low - high
- possible acceptance of package-based solutions / open source: low - high
- priority of time vs. scope: time - scope

Sourcing:
- sourcing concept: onsite - offshore
- skill-level: low - high
- team changes over life time: low - high
- amount of necessary traceability of decisions: low - high

Operations:
- operations (internal vs. cloud (multi-tenant) vs. external): internal - external
- necessary NFAs (performance, reliability, etc.): low - high
- way of operations: centralized - distributed

©2011 Capgemini. All rights reserved.
2.3.2
Definition of Types
We cluster the systems characterised above to get a manageable number of
system types. Following [8], we use means of transport to describe our system
types. We identify two extremes:
The Scooter: In these projects, the business requirements are to a considerable
degree developed dynamically together with the customer and/or changed
repeatedly over the further life cycle. The speed of the solution development
and the innovative character of the solution are of crucial importance, while
a long maintenance phase is not in focus.
Typically we are not talking about small, standard Scooters but about
high-tech Scooters. A Scooter does not need to follow a fixed path but
can go in any direction. On the other hand, as the focus of the Scooter does
not lie on efficiency, only a few people or goods can be transported (to stay
within the metaphor). Here it is possible to trade some efficiency for
effectiveness and speed: if required, a development path is pursued that
does not go into production but helps the business to decide on a solution.
Comprehensive documentation is of little help, due to the rapid changes.
Typically this type of system comes along with a complex user interface,
which preferentially requires "state of the art" technology for its realization
and development environment. Long-term architecture is not the criterion
here, as the architecture would continuously change with the current
business requirements. It may be beneficial to use innovative components
and frameworks for building the system, as their sustainability is of minor
importance. At the same time, fundamental architecture principles may not
be neglected, because such applications may also have high non-functional
requirements. Furthermore, they are often embedded in a more complex
application landscape, which must not be affected by all the application's
changes.
It is often counterproductive to aim for small unit cost prices (i.e. usually
low labour costs), as the high degree of innovation and the necessary
conversion speed require the co-workers to work locally. At the same time
it can be very meaningful to invest in tools in order to achieve the desired
speed.
In the extreme version of this type, the assumed lifetime is relatively short.
If this changes and the implemented business functionality becomes more
stable, a re-implementation might be a good idea to get into the world of the
Train.
The Train: Frequently we have the situation that a new system with a substantial
life expectancy has to be built to replace an already long-running core
system. As a result, maintenance carries a lot of weight, and by extension so
do all matters pertaining to sustainability. Likewise, team changes have to
be expected over the long maintenance phase, so clearly more effort is
invested in appropriate documentation. At the same time it is important to
ensure that the technology, methodology and tools, as well as the
architecture, are suitable for sustainability and for team changes.
For this type of system, effectiveness and price economics are of central
importance, because a large number of goods or people are to be transported
(i.e. transactions to be executed, data volumes to be managed and interfaces
to be served). Therefore, changes take much more time than in the case of
the Scooter. This is reflected in the metaphor by the Train's need for a
railway network.
In reality, neither of these two extremes typically occurs in its pure form. Instead,
we often have Trains with fast-changing requirements for the GUI (graphical user
interface), and sometimes even Scooters have parts (for example interfaces)
where stability is important. Therefore, we introduce two more types that are
mixtures. These help to differentiate in size and in degree of requirements
volatility.
The Overland Bus: This is the big mixture. It has a Train basis with long-lasting
business components, as well as components that are realized as Scooters,
because it is quite clear from the beginning that their business requirements
will change a lot over time. Metaphorically, these changes relate to the WC,
the bar and the coffee machine. Overall, the Bus does not need the same
kind of stable infrastructure as a Train, but typically comes along with a
more or less stable timetable. For a Bus it is vital to define at the beginning
of the project which components are of the Train type and which ones are
more of the Scooter type, and to choose the appropriate methodology (and
perhaps the software development process model) accordingly. At the same
time it has to be taken into account that homogeneity can save effort and
gain speed and maintainability.
The Car: The Car is the little sister of the Bus. Here it is usually not helpful to
differentiate the methodology to be used, because the possible synergy
effects are too small. It is better to decide at the beginning which type is the
more relevant one and to choose the methodology accordingly.
These means of transport and their relationships are shown in figure 2.8. The
graphical representations of the different means of transport used here are taken
from [8].
Again following [8], we introduce a fifth type of system, the Hub. The Hub does
not really provide business functionality of its own, but builds the platform to
which all the other system types can be connected.
Figure 2.8: System types and their interdependencies in an overview
©2011 Capgemini. All rights reserved.
Looking at it from the enterprise perspective, the Hub implements the integration
pattern for the entire landscape. From an engineering point of view the Hub is a
separate kind of system, but the development methodology needed has in most
cases the same characteristics as for the Train, because it must provide a lot of
stability. Therefore, we do not look at it in detail in the following discussion.
The situational approach follows the ideas described in the appendix (chapter A).
It contains both the industrialized and the manufactural methodology box. As
both stem from a more sequential (classical) approach, we additionally identified
a more evolutionary approach.
For a long time there has been the idea of producing a specification that can be
used directly as an executable (or interpretable) artefact. Up to now this goal has
not been fully achieved, but there are steps in this direction: model-driven
development (MDD) is very promising, generating specification, additional
documentation and implementation from a single source. UML-based
approaches, however, may cause problems with respect to round-trip engineering
whenever the generated artefacts need to be modified manually later on. Model
changes could automatically be incorporated into the generated artefacts as long
as the modifications are kept apart from the generation results. But virtually no
UML-based MDD tool kit supports this separation sufficiently, leading to a lot of
unnecessary effort for adapting the generated artefacts. To prevent this, the
source has to be expressive enough to generate the full result without the need for
changing it manually afterwards. This is addressed by domain-specific languages
(DSLs), which are specific to particular domains and are therefore very
expressive in their area. Such DSLs also include standardized notations for
business processes or business rules.
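As a toy illustration of this DSL idea (the language, rules and generator below are invented for this sketch and are not part of Quasar 3.0): a few lines of a small discount-rule DSL are expressive enough to generate the complete executable implementation, with no manual changes to the generated result.

```python
# Toy sketch: a minimal domain-specific language for discount rules.
# The DSL source alone is expressive enough to produce the full
# executable implementation, so no manual post-editing is needed.
import re

RULES_DSL = """
rule silver: discount 5 if total >= 100
rule gold: discount 10 if total >= 500
"""

def compile_rules(dsl: str):
    """Translate each DSL line into an executable rule function."""
    pattern = re.compile(r"rule (\w+): discount (\d+) if total >= (\d+)")
    rules = []
    for line in dsl.strip().splitlines():
        name, pct, threshold = pattern.match(line.strip()).groups()
        # Bind the per-rule values via default arguments.
        def rule(total, _pct=int(pct), _thr=int(threshold)):
            return total * _pct / 100 if total >= _thr else 0.0
        rules.append((name, rule))
    return rules

def best_discount(total):
    """Apply all compiled rules and return the best discount."""
    return max(rule(total) for _, rule in compile_rules(RULES_DSL))
```

For example, `best_discount(600)` evaluates both generated rules and returns the larger discount; changing a business rule means editing only the DSL source.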
Powerful BPM suites (BPMS) make this vision come true: business processes are
modelled and made fully executable based on the same model. This leads to
an evolutionary development approach in which all sub-domains work on the
same artefacts, which are continuously enriched and refined. This leverages a
large gain in efficiency as well as more flexibility for changes, so we get the
relationships shown in figure 2.9.
Figure 2.9: Comparison of the different development approaches
©2011 Capgemini. All rights reserved.
However, the evolutionary approach does not fit in all cases. Especially in
situations where only a few business processes are to be materialised and these
change slowly, the evolutionary approach is perhaps not the right one, because
the BPMS costs more than the effort that can be saved over time. Another
example are real-time applications: using a BPMS implies that the solution
cannot be optimised for very hard timing constraints. So again, we need a
selection method for finding the right model.
For our structuring purpose we find that the different methodology boxes are
connected to each other in the way described in figure 2.10.
Figure 2.10: The different methodologies are related to each other
©2011 Capgemini. All rights reserved.
Looking now at our methodology portfolio, we get a characterization of all
elements (methods, tools, patterns) along these methodology types. This
characterization is certainly not mutually exclusive. Therefore, we put all the
elements that can be used with either methodology into the basic one. The basic
methodology
contains the ontology. It consists of all the terms used, plus the various concepts,
artefacts and their interrelationships. Thus, it supports a common understanding
within and across different groups of stakeholders, i.e. project members both
onsite and offshore, plus our customers. When working in distributed teams, this
is a clear precondition for success. The basic methodology also contains further
elements, for example for configuration management: using different
methodologies and tools for the industrialized and the manufactural approach
would make it quite difficult to build a Bus consistently. Together with the
characterization of the methodology elements and the domain structure, we get a
multi-dimensional approach. To cope with this complexity we introduced a kind
of tagging that allows us to assign more than one methodology type to a concrete
element of the methodology. Furthermore, our methodologies are divided into
methods, tools, patterns etc. (see appendix A). This results in an additional tag.
Example: The Licence Management Method is tagged with 'Software
Configuration Management', basic, industrialized, manufactural, offshore
and methodology. So a search in nearly any dimension (besides BPM)
will return the same asset.
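A minimal sketch of how such a tag-based asset search could work. The catalog layout is an illustrative assumption; apart from the Licence Management Method example above, the asset names and tags are invented.

```python
# Sketch of the tagging idea: each methodology element carries a set
# of tags, and a query along any dimension returns matching assets.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    tags: set = field(default_factory=set)

catalog = [
    Asset("Licence Management Method",
          {"Software Configuration Management", "basic",
           "industrialized", "manufactural", "offshore", "methodology"}),
    Asset("BPMS Process Modelling Guide",       # invented example
          {"Analysis & Design", "BPM", "methodology"}),
]

def find(*wanted):
    """Return all assets carrying every requested tag."""
    return [a.name for a in catalog if set(wanted) <= a.tags]
```

A search like `find("offshore")` or `find("manufactural")` then returns the Licence Management Method, mirroring the example: the same asset is reachable along nearly every dimension.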
Thus, it is quite easy to find everything that can be used for offshore delivery, for
the manufactural approach etc. Last but not least, we can have more than one
asset (typically tools) for the same purpose simultaneously if this helps to fulfil
customers' needs. On the other hand, we can use our methods even if the
customer insists on having his own tool landscape.
Figure 2.10 also makes it clear that the offshore methodology does not stand
apart, but is seen as an addition to the sequential methodology.
We do not further differentiate the software development process models: they
are self-contained packages, so we take them as they are.
2.3.3
The Integration
On the basis of the above remarks we arrive at the correlation between system
types, methodology and software development process model. Figure 2.11 shows
this schematically.
Figure 2.11: Correlation between the different methodologies and system types
©2011 Capgemini. All rights reserved.
Looking at the criteria for systems, software development process models and
methodical building blocks, it becomes quite clear that they match each other
(otherwise something would be wrong).
2.4
Usage of Quasar 3.0 in a concrete project
On the basis of the approach described above we get one single pattern for the
usage of Quasar 3.0 in a concrete project.
Pattern 2: Categorize your project along the system types
Be aware of the type of system you are building, and actively choose the methodical building
blocks you plan to use along the above criteria.
Determine the type of system you are going to develop as early as the proposal
phase. Look very carefully at all the system characteristics described above. On
the basis of the system type, build your methodology out of the software
development process model and the methodical building blocks.
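The steps of this pattern can be sketched as a small helper. The selected characteristics, the thresholds and the mapping to system types below are illustrative assumptions for this sketch, not official Quasar 3.0 rules; a real categorization would weigh all characteristics of figure 2.7.

```python
# Hypothetical sketch of Pattern 2: rate a few system characteristics
# (figure 2.7) on a 0 (low) .. 10 (high) scale and derive a coarse
# system-type suggestion. Thresholds are illustrative assumptions.
def suggest_type(lifetime, requirements_dynamics, problem_size):
    """Return a rough system-type suggestion for the given profile."""
    stable = lifetime >= 5 and requirements_dynamics <= 5
    volatile = requirements_dynamics > 5 and lifetime < 5
    if volatile:
        return "Scooter"       # speed and innovation over sustainability
    if stable:
        return "Train"         # sustainability, documentation, stability
    # Mixed profile: split by size into the two mixture types.
    return "Overland Bus" if problem_size > 5 else "Car"
```

For example, a long-lived replacement system with slowly changing requirements (`suggest_type(9, 2, 8)`) comes out as a Train, while a short-lived, highly dynamic system (`suggest_type(2, 9, 3)`) comes out as a Scooter.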
In the following chapters, we describe the activities with their input/output
structures within each discipline, and provide a lot of helpful patterns for the
enterprise and system development domains in Quasar 3.0. As we cannot present
this information in the multi-dimensional structure described above, we structure
the following description along the domains and disciplines and note when a
special aspect only holds for a certain kind of situation. This particularly means
that the OCSD methodology is an integral part of all domains (wherever there is a
relevant aspect). The only exception we make to this concept is the
"Evolutionary Approach" chapter (see page 121), because there so many things
are different that it seemed more reasonable to describe this approach in a
separate chapter.
3
The Requirements Domain
"I have a problem!"
These words, so often expressed by our customers, are the starting point of
virtually all our projects. The initial task in these projects is to identify the
problems our customers are struggling with. So the primary goals of this domain
are to elicit these problems, define the problem space and formulate concrete
requirements with respect to the problems. Requirements engineering is not an
easy task. It takes time, a lot of effort and does not come for free. So why is it so
important? Why not just start directly with the creation of the system
specification?
We need this domain and its results for three key reasons:
First, to get all stakeholders on the same page. As projects become larger, our
customers often do not have a consistent single view of the problem they are
trying to solve. A number of business divisions may be involved, as well as
several IT functions, often with conflicting views which need to be aligned.
Second, to structure the problem space. The larger the project gets, the more
difficult it becomes to retain an overview. Requirements Engineering serves
to divide project scope into manageable parts, each aligned to a specific
business problem.
Third, to assure that the problem is fully understood. Before we begin to
define any solution – and software engineers often tend to proceed to the
solution too fast – we must ensure that we have correctly understood the
customer’s problem.
3.1
Basic Methodology
The Requirements Engineering discipline describes the problem space in terms of
what the customer expects from the system and its development, and what lies
outside its scope. To do this, the process is based on the artefacts that are
produced in the Quasar Enterprise domain such as business services, business
vision, business drivers and domains, plus the artefacts provided by the customer.
On the basis of this input, it is possible to describe the corresponding
requirements with respect to the application in a requirements specification.
Figure 3.1 shows the high-level artefacts of the Requirements discipline.
The requirements specification embodies a vision of the system that orients all the
relevant stakeholders in a common direction. A detailed use case model describes
how the problem space can be structured. Functional requirements define the
measurable functional demands on the application. Quality requirements define
what fundamental properties and characteristics the application must have, e.g.
Figure 3.1: High level artefacts in the Requirements discipline
©2011 Capgemini. All rights reserved.
usability-related or performance-related properties. Finally, the architectural
requirements describe prerequisites that have to be met when defining an
appropriate solution, to ensure that the application fits into its future system
environment. These requirements are grouped together in figure 3.2 under
Application Requirements, while migration and integration requirements are
grouped together under Integrational Requirements.
We use the Quasar Requirements Engineering Methodology to define a
requirements specification of this type. This methodology provides a uniform
recommendation for requirements engineering that applies across all project
types. It defines the basic principles of requirements engineering, plus the tasks
necessary to achieve the goals of the discipline.
Pattern 3: Big picture first
Create a system vision to set the scope of the project.
Many development projects encounter scope creep right from the beginning. This
is especially true of complex development projects in which many stakeholders
have numerous different interests regarding the application. The system vision
(see figure 3.2) is therefore the initial component of the requirements
specification. It is a means of defining the scope and boundaries of the application
and of aligning all the stakeholders in a common direction.
Figure 3.2: Structure of the system vision
The system vision describes in essence what the application will become, steers
the development project in the desired direction, and enables a subsequent
successful and structured requirements engineering process.
Pattern 4: Structure the problem area with use cases and scenarios
Employ use cases and scenarios to facilitate communication and to enhance the
documentation of functional and quality needs.
One of the biggest problems when creating a requirements specification is making
it comprehensible to all the project participants. Here, use cases and their
associated scenarios (see figure 3.3) are a good way to describe a user's needs
with regard to the application, because they are written in a mostly
non-technical manner that allows all stakeholders to understand them easily.
Use cases describe how future users will use the application and offer a
non-technical route to understanding exactly how an application will support them
as they perform their daily business processes.
Additionally, we employ use cases and scenarios not only in their "classic" sense
of describing the functionality of an application, but also to describe its
quality requirements (e.g. maintainability-related requirements).
Furthermore, use cases effectively support project management in making initial
estimations of complexity and effort by means of the Use Case Point (UCP)
method, e.g. during initial pricing activities as part of a bidding process.
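As a rough illustration, the core arithmetic of the UCP method can be sketched as follows. The weights are the classic Karner values; the productivity factor of 20 hours per UCP and all counts are illustrative assumptions, not Capgemini figures.

```python
# Minimal sketch of the Use Case Point (UCP) estimation arithmetic.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tcf_sum, ecf_sum):
    """actors / use_cases: mapping of complexity class -> count.
    tcf_sum: weighted sum of the 13 technical complexity factors.
    ecf_sum: weighted sum of the 8 environmental factors."""
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())
    tcf = 0.6 + 0.01 * tcf_sum   # technical complexity factor
    ecf = 1.4 - 0.03 * ecf_sum   # environmental complexity factor
    return (uaw + uucw) * tcf * ecf

ucp = use_case_points(
    actors={"simple": 2, "average": 2, "complex": 1},
    use_cases={"simple": 4, "average": 6, "complex": 2},
    tcf_sum=40, ecf_sum=20,
)
effort_hours = ucp * 20   # assumed productivity: 20 person-hours per UCP
print(ucp, effort_hours)  # 95.2 1904.0
```

In a bid situation, only the classified counts of actors and use cases plus the factor ratings are needed, which is why the method works on an early, coarse use case model.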
Figure 3.3: Use cases and scenarios
Pattern 5: Employ early prototyping to elicit Use Cases
Build a user interface prototype using simple tools early in the requirements engineering
process and iterate quickly to obtain a common understanding.
Suppose we start a project from a blank slate: how do we arrive at use cases
and scenarios? Capgemini has developed a method and a set of tools to efficiently
support this task, called Rapid Design and Visualisation (RDV), see figure 3.4.
RDV is an innovative and structured way to specify software by visualising
requirements interactively and collaboratively early in the process, in order to
reduce risk, avoid rework, and build a system that users identify with. It employs
Scenario Definitions, Whiteboarding, Simulation and Stakeholder Reviews to
quickly arrive at a visual representation of the project vision shared by all
stakeholders.
3.2 The Industrialised and the Manufactural Approach
In the requirements engineering discipline there is no elementary difference
between the two approaches. Yet there are minor deviations resulting from two
aspects: the level of detail and the degree of standardisation.
Figure 3.4: Rapid Design and Visualization Process
Pattern 6: Determine early which level of detail is required
Consider all aspects such as planned lifetime, volatility of requirements, maintenance team etc.
(see the list in chapter 2) to clarify the necessary artefacts and level of detail during the bid phase.
The shorter the lifetime of the final product or the newer the business problem to
the client, the less detail will be needed from requirements engineering. It will be
more helpful to let the customer see results as early as possible. On the other hand,
important non-functional requirements must be known early and completely.
In general, the industrialised approach tends to need a higher level of detail due to
the longer lifetime of the product. At the same time, the customer's existing
knowledge of the problem allows a more precise definition.
Pattern 7: Use business domain maps
There are many business contexts where some kind of standard approach for a business
problem is already established. Take this into account when gathering requirements and inform
yourself before getting too involved in the details.
Especially in the manufactural approach, there is a higher proportion of standard
aspects, because it makes no sense to develop everything from scratch. But
standardisation plays an increasingly important role for the industrialised
approach too. In many cases, it is common practice or even a customer
requirement to use a particular standard component.
We must be able to decide at the requirements level whether a package-based
solution, a framework providing business functionality, or a prefabricated
component will be part of the solution. If we cannot, we will get into trouble
for two reasons:
The decision to incorporate prefabricated components covering certain
business functions has an impact on the requirements. Customers will need
to decide whether they want to stick to the full set of their original
requirements or whether they are perhaps willing to change their processes
to fit the features provided by the selected standard component.
In any case, it is not a good idea to start Requirements Engineering without
bearing in mind the environment of the future solution. Instead, we must
consider in which business areas standardisation is advanced enough to pose
a viable alternative to custom development. Business domain maps and
sound understanding of standard business topics help to prevent pointlessly
capturing requirements for aspects where standard components are usable.
3.3 Offshore Methodology
At the requirements level there is so far no need for an extension with offshore
aspects. The idea is to carry out all requirements engineering onshore. Ambiguous
requirements will be identified during analysis and will then force a step back to
requirements engineering.
4 The Analysis & Design Domain
The Analysis & Design sub-domain aims at finding and precisely defining an
unambiguous solution for a set of requirements that are laid down for an
application system.
Firstly, a system specification is created based on the requirements that have been
identified and described in the previous Requirements sub-domain. The system
specification contains a full description of the solution from the business point of
view.
Secondly, the application’s technical architecture is laid down. Using the
requirements and the system specification, the technical framework is defined in
order to create an initial solution for the requested functionality.
Once the work of the Analysis & Design sub-domain is done, the subsequent
Development sub-domain can start on the completion of the solution.
Pattern 8: Enable Round trip
The methodologies and tools for analysis and design must be aligned and must allow round trip
engineering.
Often, new requirements are discovered while designing a solution. And typically
these new requirements arise when the analysis is partially complete (independent
of the project model).
Pattern 9: Align structure
The fewer explicit explanations are needed for the connection between analysis and design, the
better. The easiest way to reach this goal is to define design as a refinement of analysis, only
adding information, not changing the structure.
The original idea of writing the specification before building the design (first
what, second how) does not necessarily hold if an iterative project model is used.
At least the rough underlying design will be fixed in early iterations and has to be
taken into account in the Analysis during later iterations. This iterative process
becomes much easier if there is a structural alignment.
Defining a solution is very often no longer a pure custom development job. An
increasing challenge arises from the fact that more and more functionality is
provided by frameworks and prefabricated components. The architects (both the
lead business analyst and the technical architect) have to arrive at a common
understanding of where to use frameworks and prefabricated parts. These may not
only dominate the design but will also have consequences for the functionality
provided and its representation during the analysis. The more functionality these
components provide, the less it makes sense to discuss functionality with the
customer as if we were building a solution from scratch. Instead, we must be able
to explain to the customer what they will get from the chosen components and
illuminate the consequences. Delta specifications become more important.
We do not focus on this topic any more in this part of the book but show some of
the relevant aspects in the BPM chapter (8), because using a BPM suite is a good
example of the resulting changes.
4.1 The Analysis Discipline
4.1.1 Basic Methodology
The main focus of the Analysis discipline is the transformation of the problem
description into a system specification. The problem scope has already been
defined under the Requirements sub-domain. Now we transform the requirements
that were elicited into a system specification that is binding, complete, consistent
and comprehensible.
4.1.2 Industrialized Methodology
In order to create a system specification that includes all the required information
in a well-structured form that is comprehensible to all the project participants, we
use the Quasar Specification Methodology. The system specification describes
what the future system will do. It includes (among other artefacts):
a logical data model that defines the objects the system will act upon
use cases that define the possible interactions of the user with the system
dialog descriptions that allow the customer to see what the system will look
like
It is most relevant for projects with many participants, potentially spread over the
globe and working in different units and where the project is expected to have a
long life span with a large number of future releases.
The Quasar Specification Methodology comprises everything a system analyst
needs to know to create the system specification: information about basic
concepts like the meta model on which the methodology is built, the processes
that take place in the specification phase, and how the artefacts of a specification
are created based on the ideas of Concept and Form. In addition, a basic example
of the structure of a specification is given. Then, at a more practical level, the
methodology explains how artefacts such as use cases or the data model must be
created. It defines exactly which information is needed for each artefact, provides
templates, and describes best practices.
In addition, the methodology provides tools for creating the specification
documents from models using Enterprise Architect, as well as background
reading and teaching material.
Figure 4.1 shows the high level artefacts in the Analysis discipline following this
approach.
Figure 4.1: High level artefacts in the Analysis discipline
Pattern 10: Master complexity
Use conceptual components to keep the complexity of the application under control.
Although the well-known basic principle of divide and conquer may be applied
during the implementation of a system, it is often wrongly disregarded during the
early phases. Even at this early stage it offers significant benefits:
Complexity becomes manageable.
The division of work is eased.
Conceptual components can be used for organizing the project into sub-tasks and
sub-teams (i.e. the team structure).
The systematic transition to system design is facilitated.
The entire development process is systematically oriented to the
architecture.
Knowing this, we strictly apply the divide and conquer concept in the
specification phase. The following figure 4.2 gives an impression of what the
conceptual components look like:
Figure 4.2: Example of conceptual components
Pattern 11: Strive for completeness
Create a functional overview to summarize the logical functionality of the application.
While the basic (and detailed) elements like use cases and dialog specifications
are important, it is also necessary to see a complete picture of what will be
developed. This picture typically lives in the head of the chief designer, but
should also be available to all the project participants. More importantly,
customers need to know at the end of the specification phase whether the system
being designed is the one they actually want. However, this is hard to assess if
only the details are visible. Therefore, a big-picture perspective is needed. By
making the functional overview a required part of the system specification, the
Specification Methodology makes this perspective prominent.
The functional overview comprises the following aspects:
Expert solution idea: The fundamental solution approach being pursued in
the project.
Core concepts: Definition of key concepts and terms.
Solution sketch: Ideas about a possible solution, e.g. with the help of work
flows.
System overview (external/subsystem): Overview of the conceptual
components and interfaces to external applications.
Use case model overview: Process description of the product with the most
significant use cases.
Figure 4.3: Structural composition of the functional overview
With the help of the various elements of the functional overview, top-down
discussions are possible at any time.
Pattern 12: Drill down
Use a high-to-low approach to avoid getting bogged down in details.
This principle embodies the notion that it is a big mistake to view the specification
phase as a large and monolithic block in which everything is specified in detail
right from the start. On the contrary, it is necessary to start first at the topmost
level, elicit the primary elements, create the first big-picture perspectives, and
only then get down to the details. And – most importantly – it is necessary to
pause occasionally and reflect on the progress made up to that point. To
encourage this kind of process, the Specification Methodology splits the
specification phase into two sub-phases:
Coarse-grained analysis
Detailed analysis
The aim of the coarse-grained analysis phase is first to produce the overviews, i.e.
the functional overview, system overview etc. To provide a basis for this phase,
work on aspects like the logical data model, use cases etc. will already be started
but will not be completed. In fact, work will often still be taking place on the
issues arising from the previous phase rather than on the components.
The important thing about the coarse-grained phase is that it ends with a
milestone. We use this milestone to designate a specific moment that enables an
evaluation to be made as to whether the project is on the right track. The work
completed so far is reviewed and presented to the customer. Moreover, this
milestone will be important if the customer commissions new work after this
phase: on the basis of the results reached so far, it will be possible to generate a
new estimate regarding the rest of the phase, or, if desired, the rest of the project.
Following the functional overview of the coarse-grained phase, the subsequent
detailed analysis phase drills down into the details.
In the detailed analysis phase, the remaining artefacts are created and everything
is finally put together; the dialog specifications are written, the use cases are
completed etc. Finally at this point, there is a complete switch to a
component-oriented view.
The phase ends with a milestone that includes the customer’s official acceptance
of the entire system specification. Only with this acceptance can the next project
phase start. See figure 4.4 for a summary of the tasks carried out during this phase.
Pattern 13: Use standards
Standardisations for different kinds of functionality and information exchange already exist on
the business level for interfaces, data models and reference architectures.
Figure 4.4: Tasks and individual stages of the specification phase
Find out the aspects for which your client or the whole domain already uses
standard interfaces, data structures or reference architectures. Use them where
possible; this will help focus effort on those aspects where real business
differentiation for the customer is possible.
4.1.3 Manufactural Methodology
The manufactural approach follows the industrialized approach in many patterns
(such as divide and conquer). The main difference results from the level of detail
required from the specification before beginning the implementation. For
example, it may be very helpful to use a toolset for developing user interfaces that
provides output which can be integrated directly in the final system. Similar to the
Requirements subdomain, it may not be helpful to describe use cases in much
detail if the project is small, if users need to see the system before they can define
what they really want, or if no extensive need for maintenance is foreseeable.
4.1.4 Offshore Methodology
Dealing with ambiguity and incompleteness is always a challenging task while
writing a specification. But geographic distance reduces team communication and
awareness, and therefore allows for more misinterpretations of specifications than
in co-located development settings. In other words, any ambiguity is magnified
when a specification is realized offshore. Also, the system specification is a
crucial communication hub between the onshore and the offshore team. Remote
teams do not have the same opportunity to ask for clarification from business
analysts or domain experts as those in co-located projects. Specification defects
will not easily be uncovered by the remote development team. And it is more
difficult to rectify these defects without the ability to use highly interactive means
of communication. The truth of the saying, "A software engineer’s most valuable
tool is a whiteboard", becomes more evident in offshore projects.
Pattern 14: Make specifications fit for offshore
Use a clear structure and be as explicit as possible.
Everybody in the team must understand the overall structure and its constituents:
What is a business rule? How do we use a use case?
Giving precise answers to these questions is critical to the mission. Although
offshore developers will be closely integrated into the team, they will sometimes
have different methodical backgrounds. So take into account the fact that they
may have different needs concerning the structure of a specification. Making a
specification fit for offshore involves clearly defining a single common
terminology and structuring it in a way that takes the offshore developers’
perspective into account. The Quasar Ontology is the foundation on which we
build a common terminology.
The onshore team not only has a lot of tacit knowledge about the content of a
specification, but a lot of implicit knowledge about meta information like the
mapping between parts of the specification and the high-level requirements. In
offshore projects, making this kind of structural knowledge explicit is paramount:
in large-scale offshore projects, the traceability of software artefacts becomes
even more important than in co-located settings, because many other disciplines
like testing depend on tracing information. Determining the impact of changes
requires elaborate tracing, especially if the changes affect some components that
are developed by offshore teams.
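As a minimal illustration of such explicit tracing information, one can record which specification artefacts realize which high-level requirements and derive the impact of a change from it. All identifiers below are hypothetical examples, not part of any Quasar tooling.

```python
# Sketch of explicit traceability: artefact -> requirements it realizes,
# plus a simple impact analysis for a changed requirement.
trace = {
    "UC-Login": {"REQ-001"},
    "UC-Transfer": {"REQ-002", "REQ-003"},
    "Dialog-Transfer": {"REQ-002"},
}

def impacted_artefacts(changed_requirement):
    """Return every artefact that traces back to the changed requirement."""
    return sorted(a for a, reqs in trace.items() if changed_requirement in reqs)

print(impacted_artefacts("REQ-002"))  # ['Dialog-Transfer', 'UC-Transfer']
```

In a real project this mapping would live in the modelling tool rather than in code; the point is that it must be explicit, so that remote teams and downstream disciplines such as testing can query it.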
Offshore projects can pose genuinely new challenges. An example is the potential
language mismatch. Writing a specification in a foreign language is very difficult,
and translations can introduce inconsistencies.
Pattern 15: Watch the language of your specification
Write consciously, and check translations for consistency.
If a specification document refers to the client on page 7, the customer on page 66
and the user on page 768, with all these terms meaning the same thing, it may
create problems. A comprehensive bilingual glossary started right from the
beginning of the project will save a lot of headaches here.
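A consistency check of this kind can even be partly automated against the project glossary. The synonym group below is a hypothetical glossary entry used only for illustration; this is a sketch, not a prescribed tool.

```python
# Sketch of a terminology check: flag occurrences of forbidden synonyms
# that should be replaced by the canonical glossary term.
import re

SYNONYMS = {"customer": {"client", "user"}}  # canonical -> forbidden variants

def find_term_conflicts(text):
    conflicts = []
    for canonical, variants in SYNONYMS.items():
        for variant in variants:
            for m in re.finditer(rf"\b{variant}\b", text, re.IGNORECASE):
                conflicts.append((variant, canonical, m.start()))
    return conflicts

spec = "The client logs in. The customer sees the balance."
for variant, canonical, pos in find_term_conflicts(spec):
    print(f"'{variant}' at offset {pos}: use '{canonical}' instead")
```

Such a check does not replace careful writing, but it catches exactly the client/customer/user drift described above before the document is handed to the offshore team.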
4.2 The Design discipline
4.2.1 Basic Methodology
On the basis of the requirements and system specifications, the Design discipline
tackles the construction of the solution. It comprises all the preliminary activities
needed to guarantee the efficient implementation of the system.
Design is the discipline under which artefacts such as architectural views and
models, programming guidelines and instructions, utilization strategies etc. are
developed. Three architectural perspectives have been established within
Capgemini:
Application architecture: This describes the structure of the software and its
function on the basis of a precise understanding of the logical sequences involved,
but as independently of the technology as possible and reasonable.
Technical architecture: This describes the runtime environment for the software
described in the application architecture. This means:
the technical solutions used in connection with the technical infrastructure
architecture contained in the system
the structure of the technical services
all other software components that are independent of the application.
Technical infrastructure architecture: The system’s technical infrastructure
(TI) comprises the physical devices (computers, network connections, printers
etc.), the installed system software (operating system, transaction monitor,
application server, interface software), including versions and patch levels, as well
as the interaction between the hardware and the system software. The TI
architecture also defines the programming languages employed.
Figure 4.5 shows the high level artefacts of the Design discipline.
Pattern 16: Apply a strictly component-oriented focus
Master complexity by structuring the problem area as individual components.
A component-oriented approach allows you to handle the business complexity of
the application, and to simultaneously implement the application.
Classes of object-oriented languages like Java or C# are not a suitable basic design
unit. They are too small. If such a class is used as the basic design unit, the result
is that large systems will be subdivided into countless classes and the
comprehensibility of the architectural principles employed will be completely
lost. Instead, the component has become established as a suitable design unit.
Components are considerably coarser than classes and are therefore suitable for
Figure 4.5: High level artefacts in the Design discipline
designing large systems. Figure 4.6 shows these relations.
The Capgemini architecture guide [6] defines a construction model that shows
how component-oriented applications should be structured. We distinguish the
technical components of the technical infrastructure from the application
components that structure the application logic. Application components can be
grouped into subsystems in order to enable large projects to be handled. We
define various types of application components and specify how they can carry
out their respective tasks. In this way, the construction model becomes a blueprint
for the practical component-oriented design of an application.
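As a loose sketch of this construction idea (not taken from the architecture guide itself), an application component can be modelled as an explicit interface plus a hidden implementation that keeps data sovereignty over its entities. All names below are illustrative.

```python
# Sketch: an application component exposes an explicit interface and
# hides its internal classes and data.
from abc import ABC, abstractmethod

class AccountManagementInterface(ABC):
    """The only way other components may talk to this component."""
    @abstractmethod
    def open_account(self, owner: str) -> str: ...

class AccountManagement(AccountManagementInterface):
    """Application component; its internal structure stays hidden."""
    def __init__(self):
        # Data sovereignty: only this component touches account entities.
        self._accounts = {}

    def open_account(self, owner: str) -> str:
        account_id = f"ACC-{len(self._accounts) + 1}"
        self._accounts[account_id] = owner
        return account_id

component: AccountManagementInterface = AccountManagement()
print(component.open_account("Alice"))  # ACC-1
```

The design unit here is the component and its published interface, not the individual classes behind it; clients depend only on the interface, which is what keeps large systems comprehensible.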
Pattern 17: Make it simple
Define a clear, easy and uniform programming mode.
One of the important design objectives is embodied by the KISS principle. In its
polite form, KISS stands for "Keep it small and simple", and is thought to have
originated in NASA’s Apollo project. In software, it means reducing the
complexity of the systems to a minimum and ensuring that the structures
employed are clear, simple and traceable.
Figure 4.6: The structure of a component-oriented application
Pattern 18: Focus on the application architecture
This contains the application component model and is the basis for the implementation of the
business functionality.
This stage involves designing the application component model that contains all
the application components and the component interfaces which the application
components provide and use. It also indicates the data sovereignty the
components have over the entity types of the application. This model – the
application component model – is the building plan used by developers to
implement the application.
Pattern 19: Build for change
Pay attention to the 10 basic quality principles of architecture in order to build a system that
permits changes over the long term.
The Capgemini architecture guide defines 10 principles of software architecture
(separation of concerns, minimization of dependencies, information hiding,
homogeneity, zero redundancy, differentiation of software categories, layering,
design-by-contract, data sovereignty, re-use) that must be followed while
designing applications.
Pattern 20: Use standards
Technical standardisations for different kinds of functionality and information exchange already
exist for interfaces, data models and reference architectures.
Find out the aspects for which your client or the whole technical domain already
uses standard interfaces, data structures or reference architectures. Use them
where possible; this will help to focus effort on those aspects where real
business differentiation for the customer is possible.
One example of such a technical standard is our in-house component "Quasar
Client", which provides an elaborate framework for developing complex client /
server applications. The focus here lies on the client side and its interaction
with the server. Using such components saves the project from developing
standard technical functionality and concepts again, and provides proven solutions.
In addition, the community available to answer queries is much bigger for a
proven solution than for a project-specific solution.
Pattern 21: Subdivide the application components into layers
The layers structure the component parts of an application component.
To subdivide the application components into layers, we assign each of its
component parts to a single layer. Consequently, an application component can
extend across multiple layers (see figure 4.7).
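The layer assignment can be illustrated with a small sketch. The five layer names below are placeholders only; the actual layers and communication rules are the ones shown in figure 4.7.

```python
# Sketch: assign component parts to layers and check a simple
# "calls go sideways or downwards only" communication rule.
LAYERS = ["client", "dialog", "use case", "business logic", "persistence"]

def may_call(source_layer, target_layer):
    """A part may call parts in its own layer or in lower layers."""
    return LAYERS.index(source_layer) <= LAYERS.index(target_layer)

assert may_call("dialog", "business logic")      # downward call: allowed
assert not may_call("persistence", "dialog")     # upward call: forbidden
```

Checks of this kind can be automated in the build, so that violations of the layering rules are caught as soon as a component part calls upwards.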
Figure 4.7: The five layers of an application and the communication rules
4.2.2 Industrialized and Manufactural Approach
These two approaches are differentiated based on the duration for which the
resulting system has to live and the frequency at which the requirements will
change during this lifespan.
Pattern 22: Design your system's architecture according to lifetime aspects
Divide your future system according to lifetime aspects and clarify the non-functional needs of
each component. Invest effort in good architecture along with this differentiation, and ensure that
components with different lifetime requirements and requirement volatility have clear interfaces.
No doubt, a good architecture following the above patterns helps avoid effort
spent on hunting errors, stability problems, performance problems etc. But a good
architecture in the strong sense of the above pattern itself costs time and effort
to develop and follow. Therefore, this effort should be invested carefully.
Typically, the systems we are designing have both types of components, Scooter
and Train.
Looking at a scooter-like system, where time to delivery is much more important
than lifetime, it is not advisable to spend too much time on reducing redundancies
etc. as described in Patterns 18 and 19. The same holds if a component has a long
required lifetime but will often change its functionality in an unforeseeable way.
A second aspect resulting from the lifetime perspective applies to the usage of
prefabricated components, both frameworks and packages. The longer an application
lasts and the less its future changes are known beforehand, the better it is to
avoid relying too much on fast-changing prefabricates. This is for two reasons:
First, if our customer is forced to invest in technically induced changes, he
will not be satisfied in the long term. The same holds for us if we have to
provide a team over a longer period: the more complex the know-how, the more
ramp-up time is required for new colleagues to become productive. Therefore,
each change within the team becomes expensive if not anticipated in the design
through encapsulation etc. On the other hand, using prefabricates will be
helpful if an application has to be built very fast.
Pattern 23: Follow the design of your major prefabricates
If your system is developed based on an existing framework or package, do not try to overrule
the architectural structure.
Depending on the kind of prefabricate, the above-mentioned pattern may or may
not be useful. If you are working with a framework that produces the entire
application for you out of your data definition, then trying to force this solution
into a three-tier application may not be helpful.
4.2.3 Offshore Methodology
Similar to the Analysis discipline, there are some issues with designing software
that are always important but have an even higher impact in offshore projects.
One example is component-based development: this is always important, but in
offshore projects it quickly becomes obvious that cohesive components or
subsystems are critical for achieving efficient communication between distributed
teams. Therefore, for offshore topics, the aforementioned basic discussion holds
but so do the aspects particularly mentioned for the industrialized approach.
From a business point of view, cohesiveness is only one dimension to be
considered while crafting a component model. It is equally important to take the
management aspects into account from the outset. If the components are too
small, they will easily lead to management overhead and become an integration
nightmare. If they are too big, effective risk management is no longer possible.
Features or use cases that are subject to strict quality requirements should not be
spread across several components. And there are many more aspects to consider
for achieving a well-balanced component model, as shown in figure 4.8.
Figure 4.8: Architecting communication
Pattern 24: Design loosely coupled component based architectures
Build the team along the lines of these components.
The bottom line is to minimize communication for "component internals" but
support communication for the "big picture". Every type of instability is
magnified in offshore projects, as changes need more time to be replicated to the
offshore team. Stability is a crucial aspect in providing the foundations on which
the whole team organizes its work.
Pattern 25: Use a stable technical basis
No experiments with central frameworks and tools.
Learning about the newest cutting-edge frameworks and tools is always exciting,
especially for software engineers. However, in offshore projects stability and
maturity are important. For example, using a hot open-source OR-mapper which is
updated every few months (and patched every few weeks) can seriously disrupt an
offshore project.
Pattern 26: Team design
Jointly discuss and craft the design in order to distribute knowledge.
Depending on the experience of the offshore team and the complexity of the
component to be developed, the offshore or onshore team is responsible for the
component’s internal design. But in any case, the design deliverables are not
simply handed over with no discussion. Crafting them jointly, or carrying out
joint walkthroughs, is one of the most effective forms of knowledge transfer.
Cooperating during design is an important basis for the effective transfer of
responsibility for components or even entire subsystems. It helps the offshore
team to understand the architecture and to put their work into the overall context
of the system.
Chapter 4. The Analysis & Design Domain
5
The Development Domain
There is no software without code. Hence two basic disciplines are fundamental
to software development: Implementation, the constructive task of writing code,
configuration mechanisms and data structures; and Debugging, the analytical task
of making the software functionally error-free and simultaneously ensuring that
non-functional requirements such as performance and security are met.
5.1
5.1.1
Basic Methodology
The Implementation discipline
Implementation is the software development discipline in which a software
product is actually developed using the output of the preceding design disciplines.
In implementation, there is no more abstraction from the resulting software
product. Instead, it is entirely concerned with specific low-level configuration,
code and data, and the processes of programming, documenting, patching,
refactoring and re-engineering programs. The main objective of this discipline is
to develop software products. We call the output of this discipline application
parts. With the build discipline, we develop software solutions that are based on
these application parts. Figure 5.1 shows the high level artefacts of the
Implementation discipline.
Pattern 27: People first, computers second
Write concise, uniform, well-structured and easily comprehensible programs, placing people
first, computers second.
Clean code means code of good quality. Good quality code helps to reduce the
total amount of effort required. To achieve this, it is essential to follow a concise
programming style; to use a uniform syntactical code style through the entire code
base; to clearly structure the program units; and to make the program control flow
easily comprehensible to the programmers. Always keep in mind that programs
have to be written so that people can easily read and change them.
For instance, use domain-specific data structures (e.g. Map<User, Profile>)
instead of generic ones (e.g. HashMap with Object that needs to be downcast).
Keep your solutions as simple, straightforward and appropriate to the problem as
possible, and do not complicate them further. Additionally, structure your code
primarily in accordance with what is appropriate for the application domain, not
as per technology preferences; use as little technology as possible, and only as
much technology as necessary.
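As an illustration of the data-structure advice, consider this minimal Java sketch; the User and Profile types and the ProfileRegistry class are hypothetical names invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical domain types, used only for illustration.
record User(String name) {}
record Profile(String role) {}

class ProfileRegistry {
    // Domain-specific: the compiler knows keys are Users and values are
    // Profiles, unlike a raw HashMap holding Objects that need downcasts.
    private final Map<User, Profile> profiles = new HashMap<>();

    void register(User user, Profile profile) {
        profiles.put(user, profile);
    }

    Profile profileOf(User user) {
        // No downcast needed; a wrong key or value type fails at compile time.
        return profiles.get(user);
    }
}
```

The typed map documents the domain relation directly in the code, so readers do not have to reconstruct it from casts.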
Figure 5.1: High level artefacts in the Implementation discipline
©2011 Capgemini. All rights reserved.
Pattern 28: Don’t repeat yourself
Always try to avoid redundancy and separate out reusable functionalities into self-contained
program units in order to minimize change effort and to avoid code duplication that will lead to
maintenance problems in the long term.
Except for specific performance (speeding-up) and safety (fall-back) solutions,
redundancy is highly undesirable, as over the long term it can result in
maintenance problems. Hence you should always try either to avoid any
accidental redundancy in configuration, code or data, or at least be prepared to
justify your intentional redundancy.
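The principle can be illustrated with a small Java sketch; the VAT computation and its rate are hypothetical examples, not taken from the book:

```java
// A minimal sketch of separating a reusable computation into a
// self-contained unit instead of repeating it at every call site.
class Vat {
    static final double RATE = 0.19; // illustrative rate, an assumption

    // Single authoritative implementation: change the rule once here,
    // not in every place where a gross price is needed.
    static double gross(double net) {
        return net * (1 + RATE);
    }
}
```

If the rate or rounding rule changes, exactly one program unit has to be touched, which is the change-effort argument of the pattern.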
Pattern 29: Defensive programming
Let your programs be input forgiving, operation defensive and output precise, and focus your
analysis on the possible error cases.
The often-cited, but unfortunately rather bad cliché, garbage in, garbage out
(GIGO), is a feeble excuse for unexpected runtime behaviour and is acceptable for
single applications only. However, for multiple applications working closely
together, it is essential to be forgiving with regard to inputs, defensive during
operation, and precise with output. Only in this way can stable and secure
operation be achieved.
Additionally, when implementing the operations of your programs, focus on all
possible error cases first and on the valid cases second. Keep in mind that you should
not blindly catch errors that you do not know how to handle! For instance, in
Java avoid including empty catch clauses just to make the compiler happy.
Instead, you should at least pass the caught exception back up the call chain.
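A minimal Java sketch of these three aspects; the ConfigException type and the port-parsing scenario are hypothetical, invented purely for illustration:

```java
// Hypothetical application-specific exception, not part of any library.
class ConfigException extends RuntimeException {
    ConfigException(String message, Throwable cause) {
        super(message, cause);
    }
}

class ConfigReader {
    int readPort(String raw) {
        // Input forgiving: tolerate null and surrounding whitespace.
        String trimmed = raw == null ? "" : raw.trim();
        try {
            int port = Integer.parseInt(trimmed);
            // Operation defensive: validate the range before using the value.
            if (port < 1 || port > 65535) {
                throw new ConfigException("Port out of range: " + port, null);
            }
            return port;
        } catch (NumberFormatException e) {
            // Do not swallow the error with an empty catch clause:
            // pass it up the call chain, enriched with context.
            throw new ConfigException("Not a valid port: '" + raw + "'", e);
        }
    }
}
```

The catch clause neither ignores the failure nor merely logs it; it converts the low-level exception into one that carries meaning for the caller.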
Pattern 30: Balance code, configuration and data
Use the most suitable approach for implementing your tasks, but avoid highly configurable
solutions.
Every craftsman knows that if all you have is a hammer, every problem looks like
a nail. Programmers should remember that besides their hammer (imperative
code), their toolbox also contains two other, rather useful tools: configurations for
declaratively expressing scenarios, and data for holding factual information. Good
programmers always balance the use of these three tools when they develop their
solutions.
For instance, instead of directly hard-coding authorisation rules in your
application code, define a Domain Specific Language (DSL) for expressing your
authorisation rules, and use it to place all your rules into a configuration file. Or
even parse the configuration file and persist the rules as data inside your
application database.
But in any case, do not try to look too far into the future. Make those aspects
configurable where you know the need for configuration exists for today and
perhaps tomorrow, but do not try to configure everything possible.
Pattern 31: Step-by-step refinement and continuous refactoring
Grow your code base and implement it in distinct and continuous refactoring steps so that there
is no accidental increase in complexity, and ensure that refactoring is fully completed!
The art of programming is not about the ability to perform a straightforward
all-in-one top-down encoding of entire functionalities in a particular programming
language. This only works for very small and simple solutions.
Instead, good programming centres on the process of refining stepwise and
continuously refactoring pieces of code until the intended functionality is encoded
in a sufficiently complete manner.
Only with this incremental approach will you actually get through the
implementation of large and/or complex functionalities in practice. For instance,
you should first define the external API of your library, then implement the
corresponding method bodies, then fill in stubs and mock-ups to get your first test
cases working. Then implement the first-order functionalities, followed by the
second-order functionalities etc. Finally, add additional data assertions,
error-checking code and code documentation.
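The API-first part of this incremental approach might be sketched in Java as follows; the OrderService interface and its stub are hypothetical names invented for the example:

```java
// Step 1: define the external API of the library first.
interface OrderService {
    double totalPrice(String orderId);
}

// Step 2: a stub implementation that lets the first test cases run
// before the real logic exists; the fixed value is a placeholder to be
// replaced when the first-order functionality is implemented.
class StubOrderService implements OrderService {
    @Override
    public double totalPrice(String orderId) {
        return 0.0;
    }
}
```

Callers and tests can be written against the interface immediately, and each refinement step replaces stubbed behaviour without changing the API.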
Pattern 32: Abstraction and generation
If possible, instead of manually developing, generate lower level code, configurations and data
out of higher level models.
Every application consists of a large amount of code, configurations and data. But
not all of this should be manually developed. If possible, model it in a more
abstract way at a higher level and then generate the derived lower-level artefacts.
This is especially the case if the required lower-level artefacts have to contain a lot
of boilerplate code and redundancies.
Pattern 33: Use existing algorithms
Try to abstract from your concrete problem and see whether there is a standard algorithm that
can be used.
For many known problems within computer science, well-proven algorithms with
known characteristics already exist. Demanding non-functional requirements in
particular should lead you to use existing algorithms. This helps with readability
on the one hand and with reliability on the other.
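For instance, instead of hand-coding a search, abstract the problem to "search in a sorted array" and reuse the standard library's binary search; a minimal Java sketch with a hypothetical helper name:

```java
import java.util.Arrays;

class Lookup {
    // Delegates to the library's well-proven O(log n) binary search
    // instead of a hand-rolled loop; the input array must be sorted.
    static boolean contains(int[] sortedValues, int key) {
        return Arrays.binarySearch(sortedValues, key) >= 0;
    }
}
```

The library call makes both the intent and the complexity guarantees obvious to the reader.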
Pattern 34: Analyse the problem before programming
Try to find out before coding whether the problem you have to solve in software is one with
reasonable complexity.
Sometimes even for apparently simple problems, every algorithm for solving the
problem results in bad performance at least in some cases. Make sure before you
start programming that you understand the problem quite well and that you can
determine the complexity (in a computer science sense) of the problem. If the
problem comes along with high complexity then try to find simplifications for the
relevant cases. In the worst case this means that you should go back to the
analysis and clarify the relevant aspects with the business.
5.1.2
The Debugging discipline
Debugging is the analytical software-development discipline of determining and
locating the root cause of problems in a software implementation. Debugging is
concerned with functional faults, defects and side-effects, all non-functional
issues and the processes of code instrumentation, operation debugging, operation
logging, execution tracing and execution profiling. It is the primary discipline that
has to resolve the issues that the Test discipline has initially pinpointed. The main
objective of this discipline is to ensure that the software developed runs without
problems.
Figure 5.2 shows the high level artefacts of the Debugging discipline.
Pattern 35: Check the plug first
Do not be the victim of mistaken assumptions
Before directly jumping into the code base of an application, please always
question your assumptions first. It is important to know and validate the
assumptions that underlie a problem situation. For instance, do not blindly assume
that certain external services in the environment of an application are always
available.
Figure 5.2: High level artefacts in the Debugging discipline
©2011 Capgemini. All rights reserved.
Pattern 36: Use reproducible test cases
You can’t fix what you can’t reproduce, so always try to find a deterministic test case first.
You have to make the program fail for you, too. The best outcome is a
deterministic failure situation, but even nondeterministic failures are better than
not being able to reproduce the problem at all. Always keep in mind that usually
you cannot fix something effectively if you are unable to reproduce the original
problem, because blindly fixing a problem usually introduces a couple of new
ones.
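As an example of making a failure deterministic, nondeterminism caused by randomness can often be removed by injecting a fixed seed; this Java sketch uses hypothetical names:

```java
import java.util.Random;

class Sampler {
    private final Random random;

    // Injecting the seed makes every run reproduce the same sequence,
    // turning a flaky failure into a deterministic test case.
    Sampler(long seed) {
        this.random = new Random(seed);
    }

    int nextSample(int bound) {
        return random.nextInt(bound);
    }
}
```

Two samplers constructed with the same seed produce identical sequences, so a failing input can be replayed exactly.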
Pattern 37: Change one thing at a time
Do not expose yourself to the conflicting side effects of changes.
You always have to isolate the key factor in a failure. For this, change one thing at
a time only and compare the changed application behaviour first. Never change
more than one thing during a particular investigation step, or your investigation
space will become larger than necessary and you will too easily fall prey to the
conflicting side effects of changes.
Pattern 38: Be aware of instrumentation side effects
Instrumentation and debugging change the problem situation, so be prepared for this and do
not make incorrect comparisons.
It is necessary to understand that an instrumented and debugged program will
perform differently from the original one. Instrumentation (the addition of
symbols, source code references and extra run-time hooks) usually at least
implicitly changes run-time timings, and often also explicitly requires the
disabling of certain code optimizations. Accordingly, you should always perform
a single instrumentation and debugging operation at a time, and perform multiple
instrumentations and debugging operations in sequence.
Pattern 39: Use problem-solving heuristics
Leverage proven heuristics of problem solving for the debugging process.
You cannot apply a simplistic approach to problem-solving. Hence during
software debugging, it is useful to know and apply classic problem-solving
heuristics:
Analogy: does it help if I think in terms of a similar problem?
Generalization: does it help if I think more abstractly?
Specialization: does it help if I solve a special case first?
Variation: does it help if I change the problem situation or express the problem
differently?
Backward search: does it help if I look at the expected results and determine
which operations could bring me to them?
Divide and conquer: can I split the problem into more easily solvable partial
problems?
Reduction: can I transform the problem into another one for which a solution
already exists?
Backtracking: can I remember my path to the solution, and on failure track back
and attempt a new solution path? and
Trial-and-Error: as a last resort, can I just brute-force test all possibilities?
Pattern 40: Perform a layer-based profiling exercise
Architecture layers represent reasonable measurement points for profiling.
The task of performance profiling an application in order to find hot spots where
too much run-time is spent can be a complex one. Often the reason is that the
profiling tools used have no information concerning the domain specific
components contained in the architecture of the application. Therefore, try to
focus first on profiling the measurement points at the architectural boundaries of
your application. For instance, first profile each subsystem independently, then
profile the entire application.
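A minimal Java sketch of measuring at an architectural boundary yourself, independent of any profiling tool; the helper name is hypothetical:

```java
// Measures the wall-clock time of a call across a subsystem boundary,
// giving domain-aware measurement points that generic profilers lack.
class LayerProfiler {
    static long timeNanos(Runnable subsystemCall) {
        long start = System.nanoTime();
        subsystemCall.run();
        return System.nanoTime() - start;
    }
}
```

Wrapping each subsystem entry point like this attributes run-time to the architectural units of the application rather than to anonymous hot spots.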
5.2
Industrialized and manufactural Approach
The aspects we mentioned earlier for the design discipline also hold for the
development domain. The lifetime and volatility of requirements are also important
for good development.
5.3
Offshore Methodology
In the development domain one major issue for offshore development is a
common understanding of what good software quality means. The following
patterns may be useful for all types of projects, but they are of special
importance for offshore development.
Pattern 41: Define quality measures for software in advance
For many software languages, rules already exist for good coding.
Define right from the beginning of your design activities the coding standards and
make clear how you are going to check compliance. To build a common
understanding, it may be quite helpful to perform pair analysis of selected portions
of software. Actually this is an analytics activity as well (constructive analytics),
therefore we have duplicated this pattern because of its importance in the context
of offshore development.
6
The Software Configuration
Management Domain
Software Configuration Management (SCM) identifies and tracks configurations,
builds, deployments, releases and changes of all artefacts required for building a
proper software system. The main purpose of SCM is to maintain integrity and
traceability throughout the whole software life cycle. Furthermore, it is a
supporting discipline for efficiency and quality in development, project
management, change management and operations. Figure 6.1 explains this model.
Figure 6.1: Software Configuration Management Discipline Model
©2011 Capgemini. All rights reserved.
As shown in chapter 2 and figure 6.1, Software Configuration Management
essentially consists of three main disciplines:
Configuration
Build & Deployment
Release & Change
Pattern 42: Start with a Software Configuration Management Plan
Especially in complex projects a plan for software configuration management helps you to save
budget and time by following the right procedures.
After reading the following sections about SCM, you will realise that SCM
requires various concepts and processes depending on the distinct tasks. A plan
can help you to coordinate all activities and follow the right procedures within the
stipulated deadline. Refer to the IEEE 828 Standard for Software Configuration
Management Plans ([31]) to develop a proper plan.
In the following sections, the software configuration management disciplines are
discussed in further detail.
6.1
The Configuration Discipline
The configuration discipline is identifying and recording all assets of a software
product, maintaining their history of changes over time. Its objective is to
gain control and traceability throughout the entire lifecycle of the software
product. Configuration is composed of the following sub disciplines, which are
discussed in the upcoming sections:
Version Control
Issue Tracking
Artefact Management
Version Identification
It is recommended to think about and produce the four artefacts shown in figure
6.2. In sub-discipline Version Control, you will find a pattern for a recommended
branching and merging strategy. Hints for a proper repository structure can be
derived from several patterns, especially in Version Control and Artefact
Management. Issue Tracking includes patterns for an issue tracking concept. A
versioning concept helps to identify the proper versions of delivered products.
Figure 6.2: High level artefacts in the Configuration discipline
©2011 Capgemini. All rights reserved.
6.1.1
Version Control
Version Control is identifying and persistently recording the source artefacts of a
software product. It has the purpose of systematically controlling changes to the
source artefacts in order to maintain the integrity and traceability throughout
the life-cycle of the software product.
Pattern 43: Clear strategy for branching and merging
Follow a well-defined strategy for branching and merging your source artefacts. Balance
flexibility for development with stability during maintenance. However, keep the number of
parallel branches at a minimum.
Simply using a version control system (VCS) for managing your source artefacts
is not sufficient. You also have to follow a well-defined strategy for creating
branches and merging them. A typical VCS is generic and will neither guide you
nor prevent wrong merging directions. The overall goal of a version-control strategy is
to support the creation of high quality releases. Therefore, we distinguish between
release branches and feature branches and order them in a Change Flow Map (see
figure 6.3 and [19]) according to their maturity, i.e. their proximity to
production.
Fixes for production problems and similar stabilising changes take place on
the branch closest to the next production release. Such changes are merged
continuously with less mature branches so they reach the mainline and potential
feature branches. After the release is shipped, the merging procedure is conducted
to ensure that all fixes reach the mainline.
New release branches are created from the mainline as late as possible but as soon
as required by changes for the succeeding release. On the other hand, feature
branches are used to decouple destabilising changes (experimental changes) and
should only be merged upwards if they are successfully completed. This is shown
in figure 6.3.
Figure 6.3: Change Flow Map
©2011 Capgemini. All rights reserved.
Further, the branching and merging strategy should be aligned with the release
engineering phases. This is illustrated by the blueprint in figure 6.4. During the
initial development of a release, developers can create personal feature branches.
These are merged back into the original branch before the testing phase. The
integration phase aims for stability and quality. Then the release is branched off to
build release candidates and finally the productive release while the mainline is
open for development of the succeeding release.
Pattern 44: Strict tagging
Create a tag for every release. Treat tags as strictly immutable.
Do not build releases from the trunk or release branch except for testing the build
process. Instead create a tag named after the current product version and build the
release out of that tag. If there are changes for the release after tagging, never
recreate or even modify tags. Instead, create a new version on the trunk and tag it
again. This implies that your version number schema contains a build-number
segment for such a purpose. So you do not have to change fix versions assigned to
issues or communicated to the customer (see also section 6.1.4).
Figure 6.4: Version control blueprint
©2011 Capgemini. All rights reserved.
6.1.2
Issue Tracking
Issue Tracking is identifying and persistently storing issues (problem tickets,
change-requests, etc.) with their metadata and thread of communications for the
purpose of tracking the possible long-term evolution, state transition and
assignment of the issues.
Pattern 45: Appropriate classification of issues
Define adequate metadata for your issue-tracking model and ensure that issues are properly
classified. However, if there is low acceptance for providing the data, then less can be more.
Ensure that the meta-model of your issue-tracking matches your project
requirements.
In particular, verify that you have the right options for the following
classifications:
Issue-type (bug, change-request, task [specification, design,
implementation, test])
Products and components (including build, deployment, your
business-components, technical products, etc.)
Affected release and target release (maintain drop-down choices vs. free
text)
Issue-states and according lifecycle (customized for your project and
depending on issue-type)
Pattern 46: Association of commits with issues
Define a simple but strict rule on how to link commit messages in your version-control with
the associated issues in your issue-tracking.
The history of version-control and the history of issue-tracking contain important
information. Maximum value comes from connecting the two. E.g. for
reviewing commits, it typically saves a lot of time if the related issue is known.
This is done by mapping commits to issues through their unique ID.
Some higher-level version control systems (VCS) already offer a dedicated field
for issue-ID in commits. Otherwise just follow this template for commit
messages:
#<issue-id>: <issue-title>
<actual message what has been done (e.g. fixed)>
Next, your issue-tracker can be integrated with your version-control system in
order to display the related commit history for an issue. Depending on your tool,
you may also think of adding the revision number containing the change or bug
fix. This helps to identify the build or version containing that fix.
Furthermore, your IDE should be integrated with your version-control and
issue-tracker to support you with these associated commit messages. In an ideal
situation, you have no change without an issue and no commit without an
issue-association. However, you have to consider what is efficient and good
enough for your project.
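Extracting the issue ID from a commit message that follows the template above could be sketched in Java as below; the assumption of purely numeric issue IDs is made only for illustration:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class CommitMessageParser {
    // Matches the first line of the "#<issue-id>: <issue-title>" template;
    // numeric IDs are an assumption of this sketch.
    private static final Pattern HEADER = Pattern.compile("^#(\\d+): .*");

    // Returns the issue ID, or null if the message does not follow the rule.
    static String issueId(String commitMessage) {
        Matcher m = HEADER.matcher(commitMessage.split("\n")[0]);
        return m.matches() ? m.group(1) : null;
    }
}
```

A check like this can run as a commit hook, enforcing the "no commit without issue-association" rule automatically.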
6.1.3
Artefact Management
Artefact Management is managing release artefacts of your own and of third-party
software products. This includes the identification, persistent storage, dependency tracking,
and the license management of all pristine release artefacts from the low-level
artefact to the deliverable products for the customer.
Pattern 47: Strictly versioned artefacts
Ensure that all artefacts carry the proper version in their filename and metadata.
Strictly apply the version number of each of your artefacts (from intermediate to
deliverable) to the filename and metadata. Ensure the same for third-party
artefacts that are integrated in your application. See also pattern 51.
Pattern 48: Do not put intermediate artefacts under version control
Place the input artefacts of your build process under version control, strictly avoid
version-control for any intermediate build-time artefacts, and consistently persist and archive all
final output artefacts.
From our experience, it makes no sense to have version control for temporary
intermediate files. Instead, it is important to control the versions of all source
input for the build process, e.g. source-code, scripts, compilers, and configuration
files. This is done by using a version control system. Only if you can guarantee
the same version of the complete environment, and not only of the input artefacts,
can you be sure of reproducing the same final output artefacts. The final output
artefacts are deliverables of the project and need to be archived in a persistent
store.
Pattern 49: License compatibility check
Properly record the licenses of the artefacts built into the software product. Ensure that no
license incompatibilities exist in the final product.
Nowadays software products integrate a lot of third party components. Open
source software in particular has become popular and a natural part of many
products. However, the massive utilisation of third party artefacts requires
extreme care with regard to their different license terms. Otherwise, you easily
end up integrating two components with incompatible licenses. Tool support
helps to figure out which artefacts are included in which version (see pattern 56)
as well as to find obvious license incompatibilities (e.g. GPL component built into
a commercial closed-source product). However, only lawyers can do a positive
verification of complex situations. To prevent problems, check licenses from the
start when you are looking for a particular component from an external source.
Pattern 50: Domain-specific slicing
Align your configuration boundaries and granularity with those of the units in your
domain-specific architecture.
The configuration boundaries of your software product for edit-time, build-time,
and run-time should be aligned as much as possible with the domain-specific units
in your architecture. In particular, do not let your configuration be dictated by
your preference for using particular tools.
6.1.4
Version Identification
The sub discipline Version Identification ensures precise product identification
throughout its entire life-cycle. It defines the structure, syntax and semantics of
version identifiers.
Pattern 51: Strong version identification
Use a syntactically and semantically strong scheme for identifying product versions
consistently, precisely and explicitly.
Precise product version identification is a prerequisite for effective management
of software configuration. It is essential to use an identification scheme that is
both syntactically and semantically robust. The use of rigorous syntax is
necessary for visual consistency and to enable automated processing. Clear
semantics is necessary for unambiguously expressing the nature and scope of a
particular product version.
The following template defines a common syntax for a version identifier.
<major>.<minor>.<micro>.<phase>.<build>.<date>
The segment <major> identifies the main version of your product. Often it is
identical to the main release number under which you are going to market it; a
well-known example is "2010" for MS Office 2010. Major releases can be
incompatible with older releases and should have significant changes compared to
the previous major release.
The segments <minor> and <micro> are used for smaller functional changes of
a major release and for bug fixes. If you introduce both <minor> and <micro>,
then you can use them to distinguish the level of changes. Incrementing
<minor>, for example, can indicate that there have been changes influencing the API of the
product. So, it indicates that everyone using your product has to expect
adaptations before upgrading the product. Under this aspect, <micro> can be
incremented to indicate internal changes or fixes that do not require regression
tests for systems using the product. This distinction may be difficult in complex
projects, so weigh up the cost-benefit ratio before you introduce this distinction.
The <phase> indicates the release engineering phase and the expected quality of
a release. Examples are test, release-candidate (RC), release (also GA, REL,
RTM, etc.), or update (U, SP, SR).
The previously discussed segments identify a planned functional feature set of a
release (e.g. for assignment of issues to be fixed for a release and for
communication of milestones to a customer). The version identification schema
should clearly define how to determine the next planned release version.
However, to build and deliver a planned release you may need multiple iterative
builds with unique identifiers. Therefore, additional technical segments are added.
The <build> segment is a number incremented for each build, independent of
whether the release will be delivered. It may be directly derived from the version
control system or from a counter of the build system. This segment is very
important for a clear distinction of your build results.
Finally, the <date> segment records the build date, potentially including the
time. It is not required for identification but can be very helpful for human
interpretation. As an example the version identifier "3.5.3.RC.5678.20110817"
identifies the release-candidate "3.5.3" with the 5678th build that has been done
on Aug 17, 2011.
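Parsing and re-emitting this six-segment scheme could be sketched in Java as below; the parsing assumptions (dot-separated segments, numeric build number) follow the example above:

```java
// A minimal sketch of the <major>.<minor>.<micro>.<phase>.<build>.<date>
// scheme described above.
record Version(int major, int minor, int micro,
               String phase, int build, String date) {

    static Version parse(String id) {
        String[] s = id.split("\\.");
        return new Version(Integer.parseInt(s[0]), Integer.parseInt(s[1]),
                Integer.parseInt(s[2]), s[3], Integer.parseInt(s[4]), s[5]);
    }

    @Override
    public String toString() {
        return major + "." + minor + "." + micro + "."
                + phase + "." + build + "." + date;
    }
}
```

Having one canonical parser keeps build scripts, issue trackers and deliverable filenames in agreement on what a version identifier means.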
6.2
The Build & Deployment Discipline
The Build & Deployment discipline is about building a complete software product
out of its source artefacts and deploying the product on the corresponding target
environments. It consists of the following sub disciplines, which are explained in the
next sections:
Build Management
Deployment Management
6.2.1
Build Management
Build Management is assembling a software product out of its source artefacts. To
achieve this, the processes of generation, transformation, dependency resolution,
compatibility checking and packaging are performed. The main objective of this
discipline is to build a software product step by step and bottom-up from its many
input artefacts, and thereby to generate a complete software product as deliverable
units for the Deployment Management sub discipline. Figure 6.5 shows the
artefacts of this discipline.
Figure 6.5: High-level artefacts in the Build Management sub discipline
©2011 Capgemini. All rights reserved.
Pattern 52: Automated build-process
Automate your build-process up to the generation of deliverable units.
Each manual step has to be documented and can be performed inaccurately.
Furthermore, a general requirement for the build-process is that it is reproducible
and potentially even audit-proof. Be lazy and automate!
The build-process is not just the compilation and assembly of some library. For
building a full release, it starts with checking out the code base from the corresponding
tag and goes up to the generation of the deliverables that are released and finally
provided to the customer. Design this process from the start and automate as
much as possible. However, you should be effective and do not try to automate
things that are intractable. For example, deploying the software and running
integration tests would be nice to automate but are hard to do and do not
belong in the regular build-process. Although some integration tests may be
inappropriate for automation, follow the next pattern.
Pattern 53: Continuous integration
Integrate unit-tests into the build and do continuous integration.
Create unit tests for your code and have them integrated into the build-process.
Then use a tool for continuous integration that watches your source code
repository and performs a build after changes have been committed. If
compilation of the code or tests got broken, the continuous integration sends an
email to according persons in the team. To avoid that everybody in the team starts
investigation of the problem in parallel, establish a small set of persons for this
purpose. Ensure that exactly one person is responsible for one certain problem at
a time. This person shall have the freedom to resolve or to delegate the problem
quickly.
Continuous integration should run permanently. The earlier you get feedback on a
failed build, the less time you spend tracking down the reason. This is
especially important for offshore projects: because of the time difference, the
team that committed incompatible code might be asleep when the other team recognizes
the failed build. It can then take a lot of effort to determine the cause of the problem.
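The single-responsible-person rule can be sketched like this (a simplified model; the role name and the notify callback are invented for illustration):

```python
def handle_build_result(build, notify, responsible="build-manager"):
    """Run the continuous-integration build; on failure, notify exactly
    one responsible person instead of the whole team, so nobody starts
    investigating the same problem in parallel."""
    try:
        build()
        return None                      # build is green, nobody to notify
    except Exception as failure:
        notify(responsible, str(failure))
        return responsible               # this person resolves or delegates
```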
Pattern 54: Establish short system integration cycles
Short system integration cycles ensure the permanent fit for all system parts.
Despite the continuous integration, complex systems often can’t be built
completely every time. Reasons may be that build time would be too long or
integrating with other systems is too complex. In this case, ensure regular system
integration cycles. Weekly cycles are a good target. This pattern is especially
important in offshore projects, where insight into the status and awareness of
colleagues’ workspaces is impeded. Integration cycles have proved their worth for
monitoring the progress and quality of the system.
Pattern 55: Ensure hierarchical builds
Build software systems hierarchically step-by-step and bottom-up from smaller lower-level input
artefacts to larger higher-level output artefacts. This includes introducing only hierarchical
dependencies between artefacts. Furthermore, your repository structure should reflect this
hierarchy.
A software system consists of one or more domain-specific subsystems. A
subsystem consists of one or more components, and a component consists of one
or more modules and finally classes of code. Even if you do not distinguish all
these structuring types, it is best practice to organize the system in a strict
hierarchy. This is a precondition for a hierarchical build.
This hierarchy is the basic physical structure of a software system, and is created
systematically in a bottom-up manner by building its larger and higher-level
artefacts out of its smaller and lower-level artefacts. This build approach even
allows the efficient building of very large software systems. If an artefact is
updated or rebuilt, only its parent artefacts must be rebuilt as a consequence of
modifications in this substructure. Of course, this presumes that your
dependencies between artefacts (compare pattern 56) follow this hierarchy.
This is especially important for offshore projects or big projects with distributed
teams. A hierarchical system allows independent development of artefacts that are
in parallel sub-trees.
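The rebuild rule described above can be sketched as a walk up the artefact hierarchy: a change in one leaf forces a rebuild of its ancestors only, never of parallel sub-trees. The artefact names below are invented for illustration:

```python
def artefacts_to_rebuild(parent_of, changed):
    """Return the changed artefact plus all its ancestors, bottom-up.
    In a strict hierarchy these are the only artefacts a change
    forces us to rebuild."""
    chain, node = [], changed
    while node is not None:
        chain.append(node)
        node = parent_of.get(node)   # None once we reach the system root
    return chain

# A tiny hierarchy: modules -> components -> subsystem -> system.
PARENT = {
    "module-a": "component-x", "module-b": "component-y",
    "component-x": "subsystem-1", "component-y": "subsystem-1",
    "subsystem-1": "system",
}
```

A change to module-a then triggers rebuilding only module-a, component-x, subsystem-1 and the system; module-b and component-y in the parallel sub-tree stay untouched.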
Pattern 56: Check dependencies
Explicitly track and resolve the transitive build-time dependencies between artefacts to ensure a
transparent and traceable build result.
Even average-sized software systems have extensive build-time dependencies
between their contained artefacts, especially those introduced by frequently-used
third-party libraries and frameworks. Hence it is imperative to explicitly track and
transitively resolve all those dependencies during the hierarchical artefact building
process. This becomes more important when you are heading towards a release.
Tools like Maven and Ant/Ivy can even help to fully automate this process. You
get the most power if your tool supports transitive resolution of dependencies.
However, tiny mistakes in the dependencies of your own or third-party artefacts
can cause unexpected results. For example, if you update the version of a third-party
artefact, the new version might pull in new, undesired dependencies that would
be built into your product. To reduce such problems, restrict modifications of the
dependency management in your project to a few appropriately skilled team
members.
With this approach, the build is completely transparent and therefore traceable.
Additionally, only in this manner can an automated build procedure safely resume
a build at any step and (re)build only the necessary artefacts. Finally, this also
allows the precise checking of complete license compatibilities between your own
and all third party units (see pattern 49).
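The transitive resolution that tools like Maven or Ivy automate can be sketched as a simple graph walk over the declared direct dependencies (artefact names invented for illustration):

```python
def resolve_transitive(direct_deps, root):
    """Collect the full transitive dependency set of `root` by walking
    the declared direct dependencies; knowing this complete set is what
    makes the build transparent and traceable."""
    seen, stack = set(), [root]
    while stack:
        for dep in direct_deps.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return sorted(seen)
```

Updating one third-party artefact may silently enlarge this set, which is exactly why the pattern restricts who may edit the dependency declarations.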
Pattern 57: Automated transformation and generation
Avoid handcrafting intermediate or output artefacts. Instead, try to generate such artefacts
automatically.
Leaf artefacts in the artefact build hierarchy are the ones you create yourself, and
comprise the actual input artefacts for building. By contrast, all intermediate and
final output artefacts should be built fully automatically. This can be done either
by generating them from scratch or by subjecting existing artefacts to format
transformation.
Pattern 58: Build-time beats run-time
Try to perform as many sanity checks and transformations as possible during build-time. Do not
inadvertently delay them until run-time.
Whatever you can do during build-time should not be delayed until run-time.
Hence, you should use the build-time period to conduct as many sanity checks
and transformations as you can.
For instance, during build-time, try to pre-validate your configuration (e.g. XML)
and script files (e.g. Groovy), and to format-transform and generate artefacts (e.g.
image, localization and internationalization data).
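Pre-validating an XML configuration at build-time can be as simple as parsing it and checking for required elements, so a broken file fails the build rather than the production start-up. A sketch using Python's standard library; the element names are invented:

```python
import xml.etree.ElementTree as ET

def validate_config(xml_text, required_elements):
    """Return a list of problems found in the configuration; an empty
    list means the file passes the build-time sanity check."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as error:
        return [f"not well-formed: {error}"]
    return [f"missing element: <{tag}>"
            for tag in required_elements
            if root.find(tag) is None]
```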
Pattern 59: Domain-specific bundling
Bundle your domain artefacts in accordance with domain specifics in the architecture and the
expected change frequency.
In order to optimally support the Test and Build & Deployment disciplines always
try to bundle your build-time artefacts in accordance both with your architecture’s
domain specifics and the expected change frequency of the artefacts.
For instance, if your architecture contains a domain-specific subsystem, ensure
that it is available as a separate build artefact in order to allow it to be easily tested
separately. And if two artefacts have completely different change frequencies
during Implementation or even Operations, it is advisable to keep them separate.
All platform-specific sections should also be bundled separately.
Pattern 60: Separate environment configuration
Keep the deliverables of your software product free of any environment specific configuration.
Separate environment-specific configurations into their own configuration packages and
keep your software packages portable. This allows building software packages
only once and then deploying them to different environments for testing. After
successful tests, the exact same binary package can be deployed for production.
This will guarantee that the software going live has been properly tested.
The environment specific configurations can also be bundled as packages per
target environment. However, it is important that responsibility and maintenance
of these objects belong to the system operators instead of the development team.
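The separation can be pictured as combining one immutable, already-tested package with a small per-environment configuration at deployment time. All names and values below are purely illustrative:

```python
# One configuration bundle per target environment, owned by operations.
ENVIRONMENT_CONFIG = {
    "test":       {"db_url": "jdbc:h2:mem:testdb", "log_level": "DEBUG"},
    "production": {"db_url": "jdbc:h2:mem:prod",   "log_level": "WARN"},
}

def deploy(package, environment):
    """Deploy the very same binary package everywhere; only the
    environment-specific configuration differs per target."""
    return {"package": package, **ENVIRONMENT_CONFIG[environment]}
```

Because the package itself never changes between environments, the binary that passed the tests is bit-for-bit the one that goes live.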
6.2.2
Deployment Management
Deployment Management is provisioning, installing, updating, upgrading,
migrating and uninstalling software packages in defined environments. It has the
purpose of building and maintaining structured and reproducible software stacks
constrained by package dependencies and individual product life-cycles. The
main objective of this discipline is to ensure that a software product is available
for operation. The deployment units as output of the Build Management sub
discipline form the input to this discipline. Figure 6.6 shows the artefacts of this
discipline.
Figure 6.6: High-level artefacts in the Deployment Management sub discipline
©2011 Capgemini. All rights reserved.
Pattern 61: Take care of the entire software lifecycle
Focus on the entire lifecycle of a software product by covering the installation, update, upgrade,
migration, and removal tasks.
Deployment is concerned with more than the initial installation of software
products or the brute-force overwriting of an installation during updates and
upgrades. Instead, this discipline covers initial installation, any recurrent
automated updating with compatible bug- or security-fixed versions,
semi-automated regular upgrading with possibly incompatible new versions, the
migration of the entire installation to a different target environment, and the
residue-free removal of the entire software product at the end of its operational
life.
When planning the first installation in a production environment, you should
already think about the succeeding installations.
Some tasks are easy during the initial installation stage, but may become difficult
later and vice versa. For the initial installation, you need not think about database
schema migration. This can be a very difficult and time-consuming task for the
installation of update releases. If you plan this in advance, you can introduce
guidelines that will simplify the schema migration for later releases.
Additionally, it is necessary to ensure that the product installation, update,
upgrade, migration and removal procedures are deterministically repeatable. This
is independent of whether the process is executed manually or automatically. In
case of manual installation, a detailed documentation of the process is required for
repetitions. Shutting down servers or starting schema migrations are typical
manual tasks.
Pattern 62: Master the artefact dependencies
Explicitly track and resolve the transitive run-time dependencies between deployment artefacts
to ensure a transparent and traceable installation.
First, clearly distinguish between build-time, test, and run-time dependencies, as
they typically have to be resolved at different times (build versus deployment),
and plain build-time or test dependencies shall not be included in the release
package. Refer to pattern 56 for further details on this issue.
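The scope distinction can be made mechanical: every dependency is tagged with its scope, and only the run-time scope ends up in the release package. A sketch with invented artefact names:

```python
# Each dependency carries a scope; the release package must contain
# run-time dependencies only. Artefact names are invented examples.
DEPENDENCIES = [
    ("junit",        "test"),
    ("compiler-api", "build-time"),
    ("orm-lib",      "run-time"),
    ("logging-lib",  "run-time"),
]

def release_dependencies(dependencies):
    """Keep only the run-time dependencies for the release package;
    build-time and test dependencies are resolved earlier and must
    not be shipped."""
    return [name for name, scope in dependencies if scope == "run-time"]
```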
Pattern 63: Separate environments
Separate environments for development, test, acceptance, pre-production and production.
Three types of environments are fundamentally required: development, test and
production.
All software is executed in a particular environment. For Custom Solution
Development (CSD) we distinguish the following environments:
The development environment is used for editing source code and building
the software product in the context of a single developer’s work. It usually
hosts the Integrated Development Environment (IDE) of developers and is
located on the developers’ desktops or on a dedicated development server.
The test environment is used for regularly integrating the software product,
executing test cases and centrally reporting the results to the developer
team. It may also host the Continuous Integration (CI) service.
The acceptance environment is used for final acceptance tests. It should be
a production like environment and is usually provided by the customer.
The pre-production environment is used for preparing the going-live
process. The environment should be identical to the production
environment. It can be used for realistic load tests, or to reduce the
downtimes due to the installation of the new system or the new release.
The production environment is used for operating the software product in
its intended production context. It is owned and controlled by the customer.
Depending on the actual requirements, you may have even more environments
(especially in large projects), or you may omit the pre-production or even the
acceptance environment. In projects without an acceptance environment, the
acceptance test is done in the test environment, which therefore needs to be
production-like. In the case of the very first release, the acceptance test
can also be done in the production environment.
With or without a pre-production environment, you should always plan a
smoke-test before you release the new system. The smoke-test should ensure that
the most important functionality is in a good working condition.
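A smoke-test can be modelled as a short list of named checks on the most important functionality; any failure blocks the go-live. The check names below are invented examples:

```python
def smoke_test(checks):
    """Run each quick check once and return the names of those that
    failed; an empty result means the release may go live."""
    return [name for name, check in checks if not check()]
```

In practice each check would call the deployed system (log in, run a key search, open the start page); here they are stubbed as callables so the shape of the gate is visible.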
Pattern 64: Deploy the complete software stack
During deployment, focus on complete software stacks instead of single software packages to
ensure that the external requirements of a software package are exactly fulfilled.
Normally, deployment is not only about the installation of a single product.
Nowadays, almost all software products have numerous external run-time
dependencies (application libraries, system services, run-time configurations,
shared databases, etc.). It is the responsibility of architects and developers to
ensure that all these external run-time requirements are precisely known in
advance and fulfilled during deployment. Therefore, it is a good practice to focus
on the deployment of complete software stacks instead of single software
products.
Pattern 65: Involve operations
Perform production environment deployments hand-in-hand with the operations team and
domain experts to ensure a smooth going live process.
Deployment into the production environment must involve operations and your
customer’s domain experts, because the service windows of operations and
the domain-related business processes have to be taken into account for a
glitch-free deployment. All operations departments have regulations on how to
deploy software systems and on the preconditions for deploying them into production.
This includes, for example, that the new software has to support specialised
monitoring or logging features, or that it must offer certain mandated services
for shutting down or starting the server process.
Hence, for every smooth going live process, you must always ensure that the
deployment is truly performed hand-in-hand with these stakeholders. This
involves clearly communicating what version is being deployed, when it is
deployed, why it is deployed, what user-visible changes will be apparent
subsequently, etc.
Pattern 66: Make and follow a detailed deployment plan
A detailed plan is required for all subtasks of the deployment including all tasks that are
prerequisites for the deployment.
The deployment of complex software systems with potentially many interfaces to
neighbouring or external systems is a task that should already start in the analysis
or design phase. Here you should think about questions such as "In which order
shall old interfaces be replaced with new ones?" or "How do we migrate the
business-critical data?". Therefore, make a plan covering topics such as data
migration, database schema migration, software deployment, environment-specific
configurations, fallback scenario, point of no return, smoke test, down times etc.
Important questions to be answered for each topic are:
How long does it take?
Who executes the steps and when?
What prerequisites have to be met?
What can be automated and what is better done manually?
From such a plan, you should systematically develop a detailed documented
deployment process. This deployment process should be used and improved
during each deployment in the test environment. That is your chance to be as sure
as possible that everything will work fine for the go-live.
6.3
The Release & Change Discipline
Release & Change is the steering discipline through which the stable delivery of a
software product is ensured. It combines both the management of complete
releases and the management of single change requests assigned to releases.
It is composed of the sub disciplines Change Request Management and
Release Management. Figure 6.7 shows the high-level artefacts of this
discipline.
Figure 6.7: High-level artefacts in the Release & Change discipline
©2011 Capgemini. All rights reserved.
6.3.1
Change Request Management
Change Request Management is managing the process of submission, recording,
analysing, assessing, judging, prioritising, scheduling, implementing, testing and
verifying changes to a software product in a controlled and coordinated manner
and within defined scope, quality, time and cost constraints.
Pattern 67: Establish a well-defined change request process
Clarify how the process from submitting to closing a change request works.
All software projects have to deal with a greater or lesser – usually greater –
number of unplanned requests for changes, so-called CRs (change requests).
Without a well-defined process in which all effects on the project are properly
analysed and evaluated, a project is in danger of missing its time and budget
targets. The process should define the responsibilities for submitting CRs,
updating CRs, evaluating the business benefit of a CR, estimating its cost and
other effects on the project, and also for rejecting CRs. If a CR is accepted, it
must be assigned to a release, designed, developed and tested. The typical states
involved in the process are submitted, accepted, duplicate, rejected, assigned,
resolved and closed.
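The states and transitions named above can be captured in a small state machine, which also makes the process checkable by the tracking tool. The transition table below is one plausible reading of the text, not a prescribed standard:

```python
# Which follow-up states each CR state may legally move to.
CR_TRANSITIONS = {
    "submitted": {"accepted", "rejected", "duplicate"},
    "accepted":  {"assigned"},
    "assigned":  {"resolved"},
    "resolved":  {"closed", "assigned"},   # reopened if verification fails
}

def advance(state, new_state):
    """Move a change request to a new state, but only along the
    well-defined process; anything else is rejected."""
    if new_state not in CR_TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```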
Pattern 68: Aligned change management
Align the change request management of your project with the customer's change management.
Your project and product change with every release. At the same time, the
customer has higher-level plans for changes to the entire enterprise that have an
impact on your project. Only if these two go together, with your change request
management in sync with the customer's change management, will you add high
value for your customer. Therefore, exchange information with your customer about
ongoing changes on both sides. Be careful to distinguish ideas from plans at the
political level.
6.3.2
Release Management
Release Management is planning and controlling the development, stabilisation,
release and maintenance process of a particular software product version in order
to maximise the quality of the product version within the defined scope, time and
cost constraints.
In contrast with change request management, release management deals with
planned changes and the implementation of all requirements across several
releases. As soon as a change request has been approved, it becomes part of
release management, which decides on the proper release to contain the
change.
Pattern 69: Distinguish several release types
A release should have a type indicating the scope of the change compared to the previous
release. This will help to determine the impact of an upgrade. Best practice is to distinguish
between major, minor and micro releases.
Differentiation between at least the following types of releases is recommended:
A micro release contains only bug fixes, no new features and has no
syntactic or semantic effect on interfaces. The result is that minimal
testing is required to upgrade to such a release.
Minor releases may have some new features and/or changes in the API. In
an ideal case, the interface changes are compatible with the previous
version. Therefore, you need more testing and more effort when deploying
a minor release than deploying a micro release.
Major releases have several completely new features and may be
incompatible with older releases. Therefore, you should not only test them,
but sometimes you will also have to adapt neighbouring systems. The
effort to introduce a major release is thus considerably greater than for a
minor release. Align the types of releases with your product manager and
reflect them in your version identification (see also pattern 51).
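If the version identification encodes the release type as pattern 51 suggests, the type of an upgrade can be derived mechanically from two dotted version numbers. A sketch, assuming a plain major.minor.micro scheme:

```python
def release_type(old_version, new_version):
    """Classify the upgrade from old_version to new_version as a major,
    minor or micro release, based on which version component changed
    first."""
    old_parts = old_version.split(".")
    new_parts = new_version.split(".")
    if old_parts[0] != new_parts[0]:
        return "major"
    if old_parts[1] != new_parts[1]:
        return "minor"
    return "micro"
```

The result directly determines the expected testing and deployment effort described above.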
Pattern 70: Plan at least these engineering phases
Each release has to pass through all the typical software development phases: specification,
design, development, integration test, acceptance test, and production. This also includes
maintenance releases.
We consider requirements engineering to be done. Now, for every release, we start
with a specification phase. Even for micro releases consisting only of bug fixes,
you have to specify which bugs are to be fixed. During the specification phase
the software is not modified. Especially for bug fixes and smaller changes, design
and development can go hand in hand. It is good if the same person who
designed the solution also implements it. During design and development, the
product is initially implemented or modified. It may be unstable and incomplete.
This phase should not end before all requirements, change requests and assigned
bug fixes have been developed. It also includes the development tests and the
resulting bug fixes. For major releases this phase typically takes months, whereas
micro releases may only need days.
The integration test phase can start if all changes and new features are
implemented. A software release is built and installed in the suitable environment
where the tests take place. The target is to get a stable version. Identified bugs are
tracked and fixed in a new development cycle. If the tests pass, you can go on to
the next phase.
Start the acceptance test phase with a suitable version. The environment used for
acceptance should be as close to the production environment as possible. The final
release candidate is the last version of the acceptance test phase that is built before
deploying it to the production environment. No changes at all are allowed
between the final release candidate version and the release version. If you find
bugs in a release candidate that must be fixed, you have to build a new release
candidate and repeat the acceptance tests. Therefore, automation of
integrative tests can save a lot of effort (see chapter 7). Normally you will only
find a few minor bugs in your release candidate. As explained, bug fixing mainly
requires a new test cycle. But at this point, the next step depends on the criticality
of your system and your estimation of costs versus risks. Think about the
following alternatives:
Fix some or all known minor bugs and schedule a new complete user
acceptance test cycle.
Fix some or all known minor bugs and schedule an incomplete user
acceptance test concentrating on the most important use cases and on tests
verifying that the bugs are fixed. Such a decision depends also on your
estimation of the quality of the system, because you will estimate the
probability of introducing a critical bug, which your incomplete test will not
show.
A third alternative may be not to fix minor bugs at all. Although the highest
possible quality should always be your goal, in some special cases it might be
better to live with some known minor bugs than to risk critical ones when there
is no time for a new complete acceptance test.
The last phase is the production phase. It starts with the go-live of the release.
This go-live may include critical and complex steps such as data migration, which
also need to be planned and tested beforehand. If you have done a good job, the
release will be stable and no bug-fix releases will be required.
The above comments also apply to agile methods, although there the phases are
not as strictly separated.
Pattern 71: Customer interaction is the key
For custom solution development, a focus on close customer relationships is the key to
succeeding.
In contrast to standard software development, in custom solution development the
customer is well known to the IT solution provider. A strict focus on close
customer relationships in the release activities and change request management is
therefore the key to success. This means that, at the very least, release
management and product management activities must be performed hand-in-hand
with the customer and his domain experts.
7
The Analytics Domain
Distributed development, variable production depth and the trend towards
complex and integrated software systems increase the complexity and size of our
projects. As the scale of the projects grows, it becomes increasingly difficult to
achieve transparency in the quality of the software of the implemented application
components or in the overall degree of completion.
However, achieving this transparency is essential. Only if we control time, budget
and quality equally can we undertake projects successfully.
Quasar Analytics® derives its name from analytical quality assurance. The
Analytics subdomain replaces the RUP discipline test with three disciplines, viz.
Quality Inspection, Measurement and Test. In doing this, we emphasise that
dynamic testing has to be complemented by static quality assurance, i.e. manual
or automated reviews of code and documents, for effective quality control.
Another advantage of having three separate disciplines is that each discipline can
be aimed at different project phases and at a different audience. The following are
the three disciplines in detail:
Quality inspection for ensuring the sustainability of artefacts developed early in
the project life cycle, such as the requirements specification or the system
architecture. This discipline is aimed at the chief architects.
Measurement to ensure the ongoing control of quality. In this discipline, special
emphasis is given to the maintainability and reliability aspects of the
application being developed. This discipline is aimed at the chief technical
architects.
Tests for reliably assessing the quality of the application as early as possible in
the development process. Through testing we focus on functionality as well
as certain non-functional quality characteristics such as performance,
usability, security, robustness and portability. This discipline is aimed at the
test managers and to some extent at other testing roles.
With Quasar Analytics®, we provide tools and methods for actively controlling
quality. We receive early feedback about the stage of completeness and the quality
of the application being developed. As a result, we are able to recognize risks to
quality at an early stage, and we are therefore able to take remedial action in a
timely manner.
The methods, utilities and tools of the Quasar Analytics® disciplines bring
transparency to quality and thus make it controllable during the entire life
cycle of the software project. One key success factor here is concentrating on
artefacts instead of merely assessing processes. Processes may be effective or not,
but the crux lies in the artefacts developed. We concentrate on making their
internal and external quality visible at all times to everyone involved in the
project.
7.1
The Quality Inspections discipline
The correct functionality of code is just one side of the quality coin. It can be
ensured by intelligent and structured testing. But as valuable as testing is, it is less
powerful for assessing the inherent qualities of code, such as maintainability.
This is where the automatic measurement of metrics proves its worth, for
example in checking the compliance of the code with the high-level architecture. However,
such measurements can only be performed on real code. But an inferior
architecture can impede maintainability from the very beginning, and this
becomes very expensive if the architecture has to be changed after thousands
of lines of code have already been written. Error correction costs increase exponentially
over time as shown in figure 7.1. Fortunately, quality attributes like performance,
reliability or security can be assessed a lot earlier in the project life cycle. We
achieve this with our quality gates.
Figure 7.1: Why do we need quality gates?
©2011 Capgemini. All rights reserved.
At Capgemini, quality gates are comprehensive evaluations executed at specific
points in time in order to assess the maturity and sustainability of the artefacts
produced and the processes that have been followed in order to produce them.
The quality gates are carried out by external assessors. They are complemented
by management assessments and internal code reviews. Figure 7.2 shows the high
level artefacts of this discipline.
Figure 7.2: High level Artefacts in the Quality Inspection discipline
©2011 Capgemini. All rights reserved.
7.1.1
Basic Methodology
Specifications for large-scale business information systems usually consist of
hundreds of use cases and business rules, dozens of dialogs and batch jobs, and
large concepts for multi-tenancy or access authorization. All in all, a complete
system specification can easily consist of a couple of thousand pages. If we
discover that a specification contains systematic errors only after it has been
completed in its entirety, the effort required to correct these errors can easily
mushroom. The same applies to software architectures. Although software
architecture descriptions are less voluminous than system specifications,
consistency and completeness are still a major issue. If quality requirements are
not taken into account by the architecture, the resulting system will not meet the
customer requirements. Such issues have effects that cut across various
boundaries, and usually large portions of the code base have to be corrected
afterwards. Consequently, we have to find errors before they propagate
systematically through the entire product. Otherwise, the resulting negative
ramifications can be disastrous.
Pattern 72: Conduct quality gates after the first third of a phase
At the latest, do so in the middle of the phase.
An important reason for not waiting until a specification or architecture has been
completed in detail before conducting the quality gates is to prevent problems
from being fed back into the preceding discipline. For example, if it becomes clear
that the quality requirements in a system specification are not detailed enough to
construct the architecture properly, there must be immediate feedback to people
who specify the system (business analysts). If this happens too late, there is the
risk that the domain experts are no longer accessible and it is difficult or no longer
possible to retrieve the necessary information. Figure 7.3 shows the methods that
can be used here.
Pattern 73: Use active evaluation methods
Walkthroughs are complemented by tough test questions that truly pinpoint the weak spots.
As soon as a reviewer is able to lean back while the project members explain the
product of their work, his attention will slacken off. This is due to what we call
the cinema effect, or looking on and mentally drifting off instead of carefully
scrutinizing what is going on. If reviewers have to navigate through the work
product themselves, they will necessarily stumble across under-specifications
because the team members cannot "auto-complete" missing information that
resides only in the heads of the authors.
Figure 7.3: Evaluation methods of the specification quality gate
©2011 Capgemini. All rights reserved.
Quality gates are not guided sightseeing tours, but tough reviewer-driven
assessments. To judge the viability of a work deliverable, the reviewers must be
able to understand the work deliverable on their own. Ultimately, every
programmer must be able to do this. We use different evaluation techniques, such
as specific checklists, walkthroughs or change scenarios, in order to obtain a
complete picture of the maturity of a work product.
Pattern 74: Use project-external reviewers for quality gates
Project-external senior architects are not "routine-blinded" or biased and have a broader
background.
Others will find an error more easily than the individual who produced it. The
basic idea of a walkthrough is shown in figure 7.4.
If someone invests a lot of effort into creating a work deliverable, he will have a
corresponding emotional investment in his product. But this makes it increasingly
less likely that he will be able to step back and assess his work with an unbiased
eye. This is why it is very important that work deliverables comprising the basis of
numerous downstream activities be evaluated by project-external senior architects.
This also provides valuable fresh insights and leverages the time-served
experience of such senior employees. We have often encountered situations in
which an assessor has exclaimed, "Hey, in my last project, I also used distributed
caching. But after a couple of weeks I learned that this decision regarding the
architecture conflicted with our failover requirements. And you too have very
strict requirements of this sort".
Figure 7.4: Basic idea of a walkthrough
©2011 Capgemini. All rights reserved.
Pattern 75: Concentrate on the quality 4 + 1
Efficiency, maintainability, reliability, usability and their tradeoffs.
Efficiency issues like performance almost always prove to be a challenge when
devising large and complex multi-tier systems. They often compete directly with
maintainability and reliability. The most difficult thing is to get these tradeoffs
right from the customer's perspective. Concentrating the evaluation on these
attributes and their mutual tradeoffs results in a very good cost/benefit yield from
the quality inspection.
Beware of concentrating your inspection on issues that are easily assessable but
less critical. Taking system specifications as an example, it is much easier to
evaluate the structure and form of the specification: "Do they really use use cases?
Do they really comply with the specification method?" Of course, such checks are
important in their own right. But the really critical issues can most often be found
at the level of content. This is also where the active methods come into play.
Scrutinizing a use case with a walkthrough can reveal critical performance issues.
ATAM (architecture tradeoff analysis method) can reveal that a software
architecture might not support the business drivers. Change scenarios might shed
light on insufficient tracing between work products. Poor traceability can
dramatically diminish the capability for conducting impact analysis, which in turn
can make effective change management impossible. Figure 7.5 shows the
advantages gained by using quality gates.
Figure 7.5: The advantages gained by using quality gates
7.1.2
Offshore Methodology
Inspecting system specifications is one of the most effective and efficient quality
assurance techniques, as systematic defects can thus be detected early in the
project life cycle. As the clarification and resolution of specification issues at a
later stage is communication-intensive and is therefore heavily impeded by the
distributed nature of the team, such inspections become especially important in
offshore projects.
Pattern 76: Check your specification early and often
Concentrate on the offshore-specific challenges connected with specifications.
Rigorous inspections performed by senior architects external to the project are
important. We complement these with continuous peer-to-peer reviews. In
offshore projects, such reviews are carried out jointly by members of the onshore
and offshore teams. This increases the logistical effort required to prepare and
schedule the reviews, but pays off handsomely in terms of knowledge transfer and
team building.
Pattern 77: Use handover checkpoints to effectively delegate work
Bundle deliverables into work packages and check them from the recipients’ point of view.
Clearly defined handover checkpoints help us control the development process,
because they allow the transfer of responsibility in a defined manner. The
checkpoints ensure that the recipient team can safely start its work.
We use a clear definition of work packages, derived from the Quasar Ontology,
that helps bundle cohesive and self-contained sets of work units. This regulates
the communication between distributed teams, as it reduces the need to
repeatedly chase up missing information. In the so-called Work Package
Handover Checkpoint we also assess whether the team is employing precisely
defined acceptance criteria and quality objectives. This makes it clear when a
single work unit or a complete work package is finished, and forms the basis for
effective earned value analysis.
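The earned-value idea behind this can be sketched in a few lines. The work units, names and person-day budgets below are invented for illustration and are not part of the Quasar method itself: a unit only counts as "earned" once its acceptance criteria are met.

```python
def earned_value(work_units):
    """Sum the budgeted effort of all work units whose acceptance criteria are met."""
    return sum(u["budget"] for u in work_units if u["done"])

# Hypothetical work package; names and person-day budgets are invented.
package = [
    {"name": "Order entry dialog", "budget": 10, "done": True},
    {"name": "Invoice batch",      "budget": 15, "done": False},
    {"name": "Customer search",    "budget": 5,  "done": True},
]

planned = sum(u["budget"] for u in package)  # 30 person-days planned
print(f"Earned value: {earned_value(package)} of {planned} person-days")
```

Because "done" is defined by precise acceptance criteria rather than a percentage estimate, the earned figure cannot be inflated by half-finished work.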
In the Result Handover Checkpoint, compliance of the work package results with
the defined criteria and objectives is checked. We also assess whether the work
package results were constructed on time and within budget. We evaluate how
potential deviations from the plan are being considered by the project
management, and whether their impact on the overall schedule is being estimated.
This continuously realigns the engineering disciplines with the management
disciplines.
Taken together, the notion of handover checkpoints and work packages supports
the clear handover of responsibilities based on precisely defined work packages,
allows earned value analysis, aligns the engineering disciplines and ensures
effective knowledge transfer in offshore projects. The handover checkpoints are
also the prerequisite for enabling the transfer of larger work packages and for
avoiding failure resulting from the micromanagement of the coding done by the
distributed teams.
7.2
The Measurement discipline
Unlike time or budget control, the systematic measuring and judging of software
quality is less straightforward. There are hardly any conventional methods and
tools that help assess the overall quality of the application. The Quamoco project
(see [33]), to which Capgemini contributed as one of the partners, aimed at
closing this gap. In software engineering, quality control is a challenging skill.
Software measurement is our answer to the requirement for ongoing quality
control. Quality control requires genuine transparency based on facts put into
context. We want to make software quality as tangible, transparent and provable
as possible at any point in time. Figure 7.6 shows the high level artefacts of the
Measurement discipline.
Figure 7.6: High level artefacts in the Measurement discipline
On the basis of the quality requirements and a definition of what quality means for
your project or scope, you define indicators that are capable of assessing the
quality of your system in this defined context. You also select analysis patterns
that help to identify quality risks and provide interpretation aids.
The outcome is the assistance provided to the project management team for
assessing and evaluating the quality and the completion status of the software on
the basis of facts instead of mere gut feeling. It will help to provide an early
warning system for recognizing quality risks in the battle against software erosion.
7.2.1
Basic Methodology
Pattern 78: Specify your quality model
Take our standard quality model and tailor it to your needs. Use GQM to derive requirements
that are specific to your project.
Measurement is merely measurement and nothing more. It cannot improve your
software, the completion status or the software’s quality. So quality control
requires appropriate responses to be effective.
But how do you know when to respond? It is crucial to define what quality means
in relation to your project and which indicators are relevant for assessing this
quality. In other words, you have to define your quality model. A good way to
derive indicators is the goal-question-metric approach. You define the goal,
identify the concrete indicators that will ensure the goal is reached, and finally
define the metric for measuring these parameters. Some of these indicators may
originate from customer requirements. Most of these indicators do not need to be
reinvented; we provide a standard quality model that contains best-practice
recommendations.
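As a rough sketch of the goal-question-metric chain: a goal is refined into questions, and each question is answered by concrete metrics. The goal, questions and metrics below are invented examples, not our standard quality model.

```python
# One GQM tree: a goal is refined into questions, each answered by metrics.
quality_model = {
    "goal": "Keep the billing subsystem maintainable",
    "questions": {
        "How tangled is the code?":
            ["cyclomatic complexity", "dependency cycles between components"],
        "How well is it protected against regressions?":
            ["unit test coverage", "test runs per week"],
    },
}

for question, metrics in quality_model["questions"].items():
    print(f"{question} -> measured by: {', '.join(metrics)}")
```

Working top-down like this ensures every metric you collect traces back to a question someone actually needs answered.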
The defined quality model should become an integral part of the development,
risk management and test guidelines. So team members have concrete rules that
proactively help to ensure quality.
If quality requirements are not met or do not exist, the project management team
needs to take action. Once they do, these actions in turn change the quality model.
So the quality model is not rigid and unchangeable, but needs to be adapted in
response to project developments and trends, changed awareness and improved
knowledge.
Pattern 79: Don’t measure everything
Use a small set of quality indicators with practical relevance for our software blood count.
What is important for software quality? Not everything can be measured: some
aspects cannot be measured easily, and some should not be measured at all.
Focus on metrics that affect software quality in practice.
Choose the right scope. Some parameters require a risk-based approach.
Performance, for example, does not need to be measured across all functions, but
only in relation to performance-critical functionality. This functionality has to be
defined. Some parameters need to be measured thoroughly. The number of errors
or completion statuses can be interpreted correctly only if they are measured
across the entire system.
You do not have to reinvent these metrics. The software blood count is
accompanied by a set of proven metrics and quality indicators which cover a wide
range of the quality spectrum. You can adapt them or use them for your project as
they are. The most important indicators are surely code analysis, test and
architecture management indicators, project status figures and change rates. Be
aware that software measurement is not based on static code analysis alone.
Quality measurement requires a holistic perspective that considers all
quality-relevant sources of data. Figure 7.7 explains what we understand by
Capgemini Software Blood Count V2.0.
Take care to ensure that your metrics stay simple and understandable. Not
everybody is a metrics expert or is able to comprehend a sophisticated metric.
You need provable and simple figures that anyone with good software engineering
expertise can understand. Ultimately, the outcome will affect your project
Figure 7.7: Capgemini Software Blood Count V2.0
processes, as it may require policies, process guidelines and process changes.
That will only be accepted by the team when you achieve practical and substantial
added value. Nobody will understand process changes just for the sake of
improving statistical values. Less is more in this case.
Speaking of the team, there is one more point to mention. When defining quality
indicators, be aware that measuring a value changes its perceived significance;
people may be tempted to optimize the number itself. If you measure the change
rate of lines of code, that number will decrease as soon as the people in your
project recognize its significance. But that will not necessarily improve your
software’s quality.
Pattern 80: Watch the trend
Continuously measure and understand trends – not just the absolute numbers, but their
direction.
Trends are more important than individual snapshots, especially for the early
recognition of quality deficiencies. The conclusions drawn from observing trends
are less vulnerable to the effect of temporary outliers, and are therefore more
reliable. Be aware that every metric interpretation has to take the context into
account. Some trends may be subject to multiple interpretations. Did the number
of defects which were found during system testing decrease because the latest
round of testing was ineffective, or did fewer defects slip past development into
the testing process? Make sure you understand what the data is telling you, but
don’t rationalize your observations away.
Experience and expertise can help you interpret the figures and trends correctly.
Detection patterns in tools like the software cockpit can be used to recognize
typical and recurring quality problems associated with certain metric
constellations. They help to identify quality risks. You will also find proven
recommendations for threshold values in the form of best practices and
preconfigured reports.
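The emphasis on direction over absolute values can be made concrete with a least-squares slope over a metric series. The weekly defect counts below are invented for illustration; only the sign of the slope matters for the early warning.

```python
def trend(values):
    """Least-squares slope of equally spaced measurements; the sign gives the direction."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

weekly_open_defects = [42, 45, 44, 51, 55, 60]
direction = "rising" if trend(weekly_open_defects) > 0 else "falling"
print(f"Open-defect trend: {direction}")  # a rising trend warrants a closer look
```

Fitting a line over several weeks makes the conclusion robust against the single-week outliers mentioned above.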
Pattern 81: Humans do it better
Quality is too complex to be measured automatically – the evaluation should be left to people
who know the context.
Some quality characteristics can only be vaguely measured. You will not be able
to fully quantify those kinds of quality parameters, so you should not even try.
Certainly, such characteristics should be considered, but they should be treated
as what they are: impressions of quality, not exact quantitative figures. Only
humans are able to place metric values into the context in which they were
measured.
Some tools build on automatically computed quality indices to give the software
quality a number. We don’t encourage this, because quality is inherently complex
and complicated, and cannot be presented as a single number. There is no simple
measure for overall software quality. Quality is the result of careful, creative
work, not a consequence of simply tuning software metrics.
Don’t forget: the measurement of quality indicators is just the beginning. It does
not in itself improve quality. That is achieved via the definition of the quality
model, the interpretation of the measurement results, the risk classification of
indicators and the implementation of appropriate measures to increase quality in
the event that action is needed.
7.2.2
The Offshore Methodology
What is code quality? Even within a single team in the same company, code
quality can mean something completely different to two different developers.
Again, this problem is further intensified in offshore projects where different
process models and engineering philosophies are applied. Code quality must be
defined early and explicitly.
Pattern 82: Nail down code quality
The whole team must commit to a common code quality model.
Distributed working also makes it difficult to apply proven software engineering
practices like pair programming. Techniques like code reviews suffer from the fact
that their participants cannot sit next to each other. Thus, planning for code quality
becomes even more critical in distributed project settings, and must start early.
Pattern 83: Continuously review the code for comprehensibility
Concentrate on structuring it from the business logic point of view in order to enhance
maintainability.
In offshore projects, we especially review whether the code can be read and
structured from a business-logic perspective. This greatly increases the
maintainability of the code base.
A particularly important issue for offshore projects is to provide continuous
onshore support for the offshore colleagues. And bear in mind that some
colleagues might need some training before they start coding.
Pattern 84: Plan code quality assurance early
And continuously adapt it to the experience level of the developer.
Continuously adapting the code quality assurance plan is necessary to cope with
the increasing experience level as well as the potential attrition of the team. This
ensures an optimal cost/benefit ratio for code quality assurance. Figure 7.8 shows
the code quality life cycle.
Pattern 85: Exploit the Software Cockpit to achieve transparency
Continuously monitor code quality and evaluate quality trends.
The Software Cockpit is not a tool for controlling or supervision. It is a powerful
tool for enabling the team to achieve a common understanding of where it actually
stands. It shows the overall status and uncovers hidden quality problems. All
these things are especially critical in offshore projects.
Figure 7.8: Code quality life cycle
7.3
The Test discipline
Testing yields transparency regarding software quality in a very different way than
other quality assurance measures do. Testing is the first hands-on experience of
the actual implemented system. For the first time in the life cycle of the software,
one knows for sure what the system really feels like. One can experience the
different system parts, dialogs and functions that were hitherto only described on
paper. And now the customer can judge much more easily how well the system
actually meets his expectations. Figure 7.9 shows the high level artefacts of the
Test discipline.
However, experiencing the system is not enough. Testing must estimate the
quality of the software, reliably and comprehensibly. The main goal of testing is to
lower the product risks. But you will not achieve certainty by just "trying out" the
software. That is why you need a structured and well thought-out testing process.
At Capgemini we structure this process in accordance with ISTQB [1] in both test
levels and test activities. With regard to test levels, we distinguish between
component tests (also known as developer’s tests), subsystem tests, system tests
and acceptance tests. For the single test activities like test planning, the
Capgemini Test Methodology provides step-by-step descriptions. It also provides
checklists, templates and tool support for a quicker start in actual projects. In the
following, we describe the underlying principles.
[1] ISTQB stands for "International Software Testing Qualifications Board" and is the international de-facto standard for software testing.
Figure 7.9: High level artefacts of the Test discipline
7.3.1
Basic Methodology
Pattern 86: Test early and continually
Have a separate test team systematically test the subsystems already during implementation
while involving the customer. Test the integration step by step.
Intensive testing is often performed only a few weeks before production. You may
sometimes hear the maxim "First implement, then test", as early testing is often
associated only with the developer's xUnit tests. However, if this approach is
followed you will end up with really late testing, which brings with it enormous
risks, as shown in figure 7.10.
The intensity of the component (or xUnit) tests depends mostly on the time
pressure placed on the corresponding development team. The more pressure a
team is under, the more errors they will produce and, at the same time, the less
well thought-out their unit tests will be. This usually results in intensive xUnit
tests of cleanly programmed components and definitely not enough testing of the
error-prone ones. Consequently, most of the problems will come to light in the
system tests, just a few weeks before productive operation. At this point time is
short. Now you have almost no chance of resolving fundamental errors,
refactoring error-prone components or integrating additional and important
customer needs.
Figure 7.10: Late testing carries enormous risks
Therefore, we promote:
- Goal setting and identification of critical test aspects for component testing too, plus monitoring of the achieved outcomes.
- Testing every subsystem as soon as it has been completed, by a separate test team, using realistic test data, in parallel with the implementation.
- Integrating subsystems and testing their integration step by step.
- Performing non-functional tests as early as possible.
- Involving your customer in the functional subsystem tests.
If you adhere to these fundamental rules, most of the functionality will be tested
concurrently with implementation and will thus be validated early and reliably.
Errors contained in single subsystems or their interfaces will be revealed early in
the process. Once implementation is complete, "only" the system test remains.
During this test, the test team can concentrate on testing the overarching
processes. Figure 7.11 makes clear that early and continual testing lowers risks.
All this is true not only for the functional tests but also for the non-functional
ones. As test manager, you have to ensure that important requirements regarding
performance or load testing are validated early. You should know these
requirements and adjust the project planning accordingly.
These early tests are the first indication of what the system really feels like. We
strongly recommend sharing this knowledge with the customer. You can do this in
different ways: via previews in the form of sub-system presentations, in hands-on
Figure 7.11: Early and continual testing lowers risks
workshops, by writing and executing tests together, or even through simultaneous
operations that include the new application components alongside the system to
be replaced. This results in early benefit and valuable feedback about the system
being built. Only the customer can tell you how well the system actually meets his
expectations. And the earlier you find this out, the better.
Pattern 87: Perform risk-based testing
Set priorities for testing according to the product risks. Where there is high priority, automate
the derived test cases and test them as early as possible.
Complete testing is impossible for all non-trivial IT systems. Therefore, you have
to set priorities in order to focus your testing efforts as shown in figure 7.12.
Our test methodology helps you define a suitable number of test cases by focusing
on the product risks. Accordingly, the first process step is setting up a
prioritization matrix. In this matrix, the test manager evaluates the importance of
each project-relevant quality characteristic for the objects being tested. For
example, performance can be critical for one subsystem but insignificant for
another.
The prioritization matrix suggests appropriate test types for each quality
characteristic. Thus, you can use the matrix for the initial planning of what to test,
how intensively, and at what test level. A quality characteristic with a high
Figure 7.12: Priorities in testing help to focus on the critical parts of the system
priority should result in systematically derived test cases and a high degree of
test automation. You can also reduce product risks by executing the
high-priority test cases as early as possible. Therefore, the test manager should
interact closely with the project manager, who should align the implementation
planning with the risk-based prioritization.
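A prioritization matrix of this kind can be pictured as a simple table from test object and quality characteristic to priority. The objects, characteristics and ratings below are illustrative assumptions, not a prescribed scheme.

```python
# (test object, quality characteristic) -> priority, as rated by the test manager.
matrix = {
    ("Search subsystem",  "performance"): "high",
    ("Search subsystem",  "usability"):   "medium",
    ("Archive subsystem", "performance"): "low",
    ("Archive subsystem", "reliability"): "high",
}

def high_priority(matrix):
    """Pairs that call for structured test cases, automation and early execution."""
    return [pair for pair, prio in matrix.items() if prio == "high"]

for test_object, characteristic in high_priority(matrix):
    print(f"{test_object}: derive and automate {characteristic} tests early")
```

Note that the same characteristic receives different ratings per test object, which is exactly the point made above about performance.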
Time after time, we encounter objections such as, "In our project, everything is
important. Setting priorities doesn't work for us." However, keeping the product
risks under control is only possible when you consistently set your priorities.
Every project faces a risk of not completing on time or on budget. If you do not
set priorities, things are left to chance when time and budget pressures increase.
Ultimately, it will always make an important difference whether all the elements
of the system are 90% complete and are still full of defects, or whether certain
critical, central system elements are operating reliably while other, less critical
elements have not yet been intensively tested. Don’t delay priority decisions until
the end of the project, because by then you will have very little room for
manoeuvre left.
Pattern 88: Do preliminary tests at a subsystem level
Run preliminary tests first in order to estimate the quality of the software. If it is good enough,
carry out thorough testing. Otherwise, think about how to improve its quality.
Testing is not an appropriate method for substantially enhancing software quality.
Often, testing exhausts both time and budget because of inadequate software
quality. Finding, reporting and administrating numerous defects is
time-consuming, and retesting them is even more so. The "Testing quality into
software" approach is often tried, but it is inefficient. If there is still high error
density in a subsystem or a system, other measures for enhancing software
quality, such as code reviews, are much better.
This is why we recommend assessing the quality of the (sub)system before
embarking on intensive testing. You can do this by a preliminary test. In this test,
you take a random sample of your test cases and count the errors found. From this
error count and the time invested you extrapolate the total testing time, thereby
obtaining a rough estimate of the (sub)system’s total error count. If the estimated
quality achieved is good enough, you proceed to test the (sub)system. Otherwise,
measures for improving the software quality will have to be taken.
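The extrapolation itself is simple proportional arithmetic. All figures below are invented to show the calculation, not taken from a real project.

```python
sample_cases  = 20    # random sample drawn from the planned test cases
total_cases   = 200   # test cases planned for the (sub)system
sample_errors = 6     # errors found while running the sample
sample_hours  = 16    # effort spent on the sample

scale = total_cases / sample_cases
estimated_errors = sample_errors * scale  # rough estimate of the total error count
estimated_hours  = sample_hours * scale   # rough estimate of the total testing time

print(f"Estimated errors: {estimated_errors:.0f}, estimated effort: {estimated_hours:.0f} h")
```

If the estimated error count exceeds what the test budget can absorb, that is the signal to invest in code reviews or rework before testing further.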
Pattern 89: Draw up a two-stage test specification
First test aspects, then test cases.
A structured and well thought-out test design is a prerequisite of sufficient and
auditable test quality. A fundamental issue here is to split the test case
specification into two steps:
Step 1: Methodically develop test aspects from the specification. Test aspects
describe briefly what must be tested for a special use case, dialog, etc.
These aspects are captured in a list. Together with the analysts and the
customer, the testers prioritize the test aspects and assign them to the
planned test levels. Plausibility checks, for example, can already be
performed during the component test; single use cases can be tested in
subsystem tests, while complex processes must wait until the system test.
Step 2: For every high-priority test aspect, describe at least one test case as
shown in figure 7.13. If appropriate, automate it immediately.
With this two-stage test specification you can effectively avoid redundancies
between test cases at different test levels. Test aspects make comprehensible what
is tested, as well as when, why, and how intensively. The scope of testing and the
prioritization will be assured through early collaboration with the analysts and the
customer.
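The two steps can be pictured with a small data sketch. The aspects, priorities and levels below are invented examples of the kind of list produced in step 1 and the cases derived in step 2.

```python
# Step 1: prioritized test aspects, assigned to test levels together with
# the analysts and the customer.
test_aspects = [
    {"aspect": "Plausibility check on order amount",  "level": "component", "priority": "high"},
    {"aspect": "Use case: place a standard order",    "level": "subsystem", "priority": "high"},
    {"aspect": "End-to-end order-to-invoice process", "level": "system",    "priority": "medium"},
]

# Step 2: at least one concrete test case per high-priority aspect.
test_cases = [
    {"level": a["level"], "case": f"TC-{i}: {a['aspect']}"}
    for i, a in enumerate(test_aspects, start=1)
    if a["priority"] == "high"
]

for tc in test_cases:
    print(f"{tc['level']} test: {tc['case']}")
```

Because every test case traces back to exactly one aspect on one level, redundant coverage across test levels is visible immediately.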
Figure 7.13: Derive test cases via test aspects
Pattern 90: Use test automation
This is especially important for high-level functional tests.
Many people shy away from automating high-level functional tests. The reasons
for this are mostly inadequate tool expertise in the project, the fear of substantial
maintenance effort, and a lack of time. Many testers are still haunted by memories
of the early automation tools. Using these, it was necessary to adapt the
automated test cases or even re-record them after every tiny change to the GUI. At
that time, the maintenance effort expended during the execution of the tests
greatly exceeded the cost savings. Test automation tools have improved greatly
since those days. The recording of test cases is no longer restricted to GUI
coordinates, and recorded test cases can usually be read and modified, while many
tools now support the testing of different GUI types as well as databases and
direct function calls. Test automation has become an investment which – if used
correctly – is rewarding. Figure 7.14 motivates this.
It provides the following benefits:
- Reduced time pressure and risks during the late project phases: with automated test cases you can assure software quality much faster. Regression testing of bug fixes implemented late in the project is therefore less critical.
- More efficient protection against side effects.
- Cost savings: while test automation initially costs more than writing manual test cases, it pays off in the long run, because the execution of automated test cases is less time-consuming. Normally, the break-even point lies between 2 and 5 test runs. This number is reached in every project, usually even with a single subsystem test.
- Less monotonous manual testing.
- Faster delivery for both releases and hot fixes.
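The break-even claim is plain arithmetic: automation trades a higher creation cost for a much cheaper execution cost per run. The effort figures below are illustrative assumptions, not measured values.

```python
manual_create,    manual_run    = 1.0, 2.0   # hours to write / to execute one manual test
automated_create, automated_run = 5.0, 0.1   # hours to automate / to execute it automatically

def total_cost(create, run, runs):
    """Cumulative effort for one test case after a number of test runs."""
    return create + run * runs

runs = 1
while total_cost(automated_create, automated_run, runs) >= total_cost(manual_create, manual_run, runs):
    runs += 1
print(f"Automation breaks even after {runs} test runs")  # 3 runs with these figures
```

With these assumed figures the break-even falls inside the 2-to-5-run range cited above; a regression suite executed once per release reaches it quickly.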
In particular, you should not underestimate the flexibility achieved when
delivering follow-up releases and hot fixes. Without test automation, you must
either devote a large amount of test effort to hot fixes or accept a considerable
potential risk when releasing the hot fix for production. The bigger the system is,
the worse this dilemma becomes. As a system grows, the amount of old
functionality that has to be safeguarded increases with every release. If it is not
regression-tested there is a growing risk of unknown side effects and serious
errors occurring directly in production.
We therefore recommend performing test automation at an early stage and using it
throughout the life cycle of the project. To this end, it is indispensable to have a
clear test automation strategy that offers guidelines as to which test cases should
be automated, as well as when and how. The aim is not to achieve 100% test
Figure 7.14: Test automation pays off
automation. Instead, in most projects an automation level of 50-80% can be
effectively reached with intelligent test automation.
7.3.2
Manufactural Methodology
Agile projects, or projects with agile elements such as test-driven development
(TDD), have become increasingly relevant for us over the last couple of years.
We have conducted interviews with test managers involved in such projects [11].
Besides this, we have spoken with professionals from the “Web Solutions
competence centre”; especially in the web engineering domain, agile practices
are popular. In addition, we contributed talks like [25] to conferences, where we
discussed typical testing problems in agile projects with other companies from
industry. On the strength of our experience, the test method we developed for
traditional projects can also be used in agile projects. Structuring the tests by the
known test phases, test levels and test management topics is viable in an agile
environment as well. While the superstructure of the method prevails, some
elements have to be adapted to cater for the demands of the agile approach. We
describe some of the identified patterns pertaining to the test phases
test planning and steering,
test design and
test execution.
These patterns are primarily based on the agile approach of cross-functional teams,
where developers do both coding and testing, and on the techniques “short
iterations” (two to four weeks), “continuous integration” and “collective
ownership”. The following patterns combine these agile elements with our test method.
Pattern 91: Scale testing with cross-functional integration team(s)
Additional integration team(s) keep(s) track of the high-level view of the system and its
requirements.
Agile methods emphasize the importance of individuals and their interactions.
The communication and collaboration of developer teams with more than nine
members become too complex. So in large projects we need to scale up teams
through parallelisation and integration at a higher level. In Scrum, the most common agile
methodology, this kind of scaling is called Scrum of Scrums. Team members of
lower level teams participate in higher level teams.
A proven practice is to have dedicated agile testers who assure quality with the
focus on testing and the progress towards “Done”. These testers should be
members of the higher integration teams. The higher integration teams do not only
test; they also code integration functionality, fix bugs and refactor in order to
form the complete system.
Pattern 92: Realize integration tests in additional iteration(s)
Consolidate high-level requirements in additional iteration(s) with a short offset.
High-level requirements cannot be realized by only one team. Integration teams
consolidate lower-level artefacts into higher-level artefacts. Corresponding
lower-level requirements are implemented by associated lower-level teams. All
teams write continuously unit, acceptance and non-functional tests according to
their definition of “Done”. They consider simplicity and separation of concerns.
Integration teams start their iteration with a short offset compared to the
lower-level teams. All associated lower–level teams start their iteration at the
same time. The iteration length is equal for all teams.
Pattern 93: Use the structured test design pattern for critical user stories
Test aspects are the key to synchronize collaborating teams.
One of our key findings is that the use of test aspects, as a means to synchronize
test levels, is not only applicable to but of paramount importance for testing in
agile projects. Therefore, dedicated agile testers design a test strategy by making a
trade-off between risks and effort and use structured test design to ensure that test
coverage meets prioritization of artefacts. Agile testers choose the right test
design method and derive the required test aspects for critical features. These test
aspects can then be implemented by lower-level teams. Such an approach
provides a framework which determines the appropriate granularity for the test
aspects and reduces the risk of implementing too many and wrong tests.
8 The Evolutionary Approach
Business needs to adapt quickly to fast changing market demands. Large IT
systems tend to slow down these changes, thus becoming a drag on the business.
Flexibility concerning such changes will be one of the major success factors in
the future, eventually turning into a critical factor for staying competitive in
the market. As mentioned in chapter 2 of this book, the sequential methodology
is too rigid for these purposes. Trends such as service oriented architecture
(SOA) and business process management (BPM) combined with business rule
management (BRM) allow evolutionary approaches and promise more
efficiency and flexibility. However, it is not sufficient to just follow hypes; a
strategy is needed for combining new trends into an overall solution. Many
patterns of the sequential methodology are valid for the evolutionary approach,
too. Nevertheless, evolutionary software development requires a number of
additional or variant methods. Here, we go into more detail with respect to
these differences compared to the sequential methodology by describing the
individual domains and disciplines and defining new patterns where they are
useful and needed.
8.1 Impact on Requirements Engineering
As a matter of principle, the statements and patterns concerning requirements
engineering are relevant for evolutionary methodologies, too. Before modelling
business processes and business rules, the goals of our customers must be well
understood in order to fulfil their expectations to the maximum extent. It is
necessary to develop system visions formalizing this information in an
appropriate way. One possible solution in this context is the business motivation
model (BMM) of the Object Management Group (OMG).
Pattern 94: Capture process goals
Describe process goals first, e.g. using the business motivation model (BMM), and associate
them with assets (processes, rules, etc.).
The goals and requirements shape the high-level view that is broken down into
use cases and main business processes. Main business processes are detailed by
sub-processes, business rules, and services. By linking these artefacts with
use cases and requirements, projects become structured and transparent. This is
very helpful for estimating, planning and monitoring.
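Such goal-to-artefact links can be sketched as a simple traceability structure. The following is a minimal, language-neutral illustration (all goal, process and requirement names are hypothetical, not taken from any particular BPM suite), showing how explicit links make coverage checkable for planning and monitoring:

```python
# Minimal traceability sketch with hypothetical names: goals are linked
# to the main business processes that support them, so gaps become visible.

goals = {
    "G1": "Reduce order processing time",
    "G2": "Improve data quality",
}

# Each main business process records the goals it supports and the
# requirements it realizes.
processes = {
    "Order Handling": {"goals": ["G1"], "requirements": ["R1", "R2"]},
    "Master Data Maintenance": {"goals": ["G2"], "requirements": ["R3"]},
}

def uncovered_goals(goals, processes):
    """Return goal ids that no process is linked to."""
    covered = {g for p in processes.values() for g in p["goals"]}
    return sorted(set(goals) - covered)

print(uncovered_goals(goals, processes))  # → []
```

A non-empty result immediately shows which goals have no supporting process yet, which is exactly the kind of transparency the pattern aims at.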
Pattern 95: Manage goals in BPM-Suites
Goals, requirements, and use cases should be captured and mapped to processes and rules in
the BPM-Suite itself, if supported by the product. By doing so, future changes and maintenance
are improved because dependencies remain visible in the system itself.
If you are very familiar and efficient with the BPMS modeller, you can even
capture the objectives (DCO) directly in workshops with the business department.
Otherwise, use a whiteboard and transfer the results to the BPMS afterwards.
8.2 Impact on Analysis
Several BPMS products are based on rather IT-centric notations like the business
process execution language (BPEL). As a result, business models have to be
converted before they can be executed, and the generated IT models then have
to be extended and modified. This leads to trouble in round-trip engineering when
the original process models have to be adapted to new business needs.
Pattern 96: One model approach
Create business-oriented process models and make the same models executable. Use a single
notation for modelling and execution. Our current suggestion is to use the standard BPMN 2.0.
The main idea of evolutionary software development is the existence of one
central model that is subsequently enriched with business and technical details
until it eventually becomes executable. As opposed to model driven development
(MDD), it does not need to be translated into another representation by generating
documentation or code. This implies the use of a single notation that is able to
express all these details and, on the other hand, can easily be executed by BPMS
products. Besides vendor-specific languages fulfilling this requirement, the
business process model and notation (BPMN) is a prominent common notation
that is supported by a growing number of systems.
Pattern 97: Simple notation
Reduce the set of modelling elements to the required minimum to make the models more
understandable by business and IT.
Even with a widespread notation, individual BPMS products often lack full
support for all elements defined by the standard. In particular, extensive
notations like BPMN 2.0 contain a lot of constructs for special cases that can be
omitted without large impact on the expressiveness of the resulting diagrams.
Unfortunately, there is no commonly agreed subset. Moreover, many BPMS
vendors modify the syntax and semantics, leading to incompatible
implementations. This hinders the exchange or even the migration of business
process models from one system to another. To minimize these effects, it is
recommended to use a well-defined subset of relevant modelling elements. As an
additional benefit, the models become more comprehensible.
Pattern 98: Use process layers
Structure process models on at least four layers: high-level, happy path, real path, and fully
executable. This helps to make the models more manageable.
The one model approach is very helpful for facilitating future adaptation of the
business processes. On the other hand, it runs the risk of overloading business
models with a lot of detailed information. This is a serious drawback because
such models are difficult to understand and hard to maintain. It can be compared
with a Java application consisting of only one main method with thousands of
lines of code. The solution in Java would be splitting the code into packages,
classes and methods. In the context of business process models, the complexity
can be reduced by structuring the models on multiple layers using
sub-processes. There are four important abstraction layers:
High-Level: In the first step, main business processes are collected and described
at a very high level of abstraction. This includes a suitable name for the
process, the process goal and summary, and the responsible owner(s) and
users. Further, a rough flow with the main steps of the process and the
relations between the main business processes is modelled, including
inputs and outputs.
Happy Path: The individual main business processes are detailed, ignoring
special cases and exceptions in their execution. Only "best case" or "normal
case" behaviour is covered by this kind of model.
Real Path: The happy path model is enriched with exceptional business behaviour
(e.g. if an important piece of information is not available).
Fully Executable: Unlike the other models, which contain only business details,
this layer introduces technical information needed for execution like process
variables, service calls, or user interfaces. This model depends on the
selected BPMS product and is vendor-specific.
The four layers can be subdivided further by structuring the happy path into
sub-processes or separating the fully executable model into several technical
aspects. Again, the realization of model layering depends on the elements and
artefacts of the individual modelling notation (for example, BPMN 2.0 offers
collapsible sub-processes).
Pattern 99: Separate processes and rules
As a rule, BPM should be combined with business rule management (BRM). Use
business processes for the flow and business rules for finer-grained decisions.
Business processes need to be business-oriented and readable by business
departments. Therefore, implementation details should not be visible in the
process model. Here, BRM helps as an additional layer to reduce complexity.
BRM deals with capturing, defining, and implementing business rules.
For example, the process model should only show decisions in the language of the
business. The actual implementation of a decision is realized by a business rule
that is connected to the decision element (gateway) in the process model (see
figure 8.1). BRM offers powerful concepts such as decision tables to realize such
decisions in a way that is readable by business departments.
Figure 8.1: Interaction of BPM and BRM
©2011 Capgemini. All rights reserved.
A very important aspect is to find the right balance between processes and rules
(see [21]). A lot of experience is necessary to achieve good interfaces. The right
combination increases the flexibility and maintainability of the solution.
Most BPMS products support BRM as an integrated solution. Otherwise, it is
possible to expose the rules as services that are invoked by the business process.
In both cases, this helps to make the process models more understandable.
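The idea of a decision table can be sketched in a few lines of language-neutral code. The rule below (a hypothetical discount decision, not taken from any real rule engine) is what the gateway in the process model would delegate to:

```python
# A business decision implemented as a decision table (hypothetical example
# rule): the process model only shows the gateway, while the table below
# encodes the actual decision logic in a form business users can review.

DISCOUNT_TABLE = [
    # (customer segment, minimum order value, discount in percent)
    ("GOLD",     1000, 10),
    ("GOLD",        0,  5),
    ("STANDARD", 1000,  3),
    ("STANDARD",    0,  0),
]

def discount(segment, order_value):
    """First matching row wins, as in typical decision table semantics."""
    for seg, min_value, percent in DISCOUNT_TABLE:
        if segment == seg and order_value >= min_value:
            return percent
    raise ValueError("no matching rule")

print(discount("GOLD", 1200))  # → 10
```

Changing the table changes the decision without touching the process model, which is exactly the flexibility the separation of processes and rules is meant to provide.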
Pattern 100: Classify rule sets
Tag each of your high-level rule sets with one of four categories: integrity, derivation, reaction,
or deontic. This classification helps to find and understand business rules.
BRM typically leads to a rich set of rules. To reduce complexity, they should be
well structured and organized. This is done by grouping finer grained rules into
higher level rule sets. We further recommend classifying these rule sets into the
following categories (see [IntBpmBrm]):
Integrity rule sets: They are used to ensure data consistency by defining
assertions that must always be satisfied and are permanently enforced.
Derivation rule sets: They derive new knowledge from existing knowledge,
typically by mathematical calculations whose results are assigned to
variables in the data model.
Reaction rule sets: They describe decisions that control the order of the process
steps.
Deontic rule sets: They connect the organization model to activities in a process
(e.g. authorization rules or the allocation of human tasks).
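The tagging itself is straightforward; a minimal sketch with hypothetical rule-set names (the enum and its members are illustrative, not part of any BRM product) shows how the classification makes rule sets findable by category:

```python
from enum import Enum

class RuleSetCategory(Enum):
    INTEGRITY = "integrity"    # assertions that must always hold
    DERIVATION = "derivation"  # derive new knowledge from existing data
    REACTION = "reaction"      # decisions controlling the process flow
    DEONTIC = "deontic"        # link the organization model to activities

# Hypothetical rule sets, each tagged with exactly one category.
rule_sets = {
    "customer data must be complete": RuleSetCategory.INTEGRITY,
    "compute order total":            RuleSetCategory.DERIVATION,
    "route claims above threshold":   RuleSetCategory.REACTION,
    "only managers approve refunds":  RuleSetCategory.DEONTIC,
}

def by_category(category):
    """Find all rule sets tagged with the given category."""
    return sorted(name for name, cat in rule_sets.items() if cat is category)

print(by_category(RuleSetCategory.REACTION))  # → ['route claims above threshold']
```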
8.3 Impact on Design
The evolutionary development of applications has a major influence on the IT
architecture, as it is based on service oriented architecture (SOA) and a BPMS
product. The services are accessible through some kind of integration mechanism
like an enterprise service bus (ESB) or enterprise application integration (EAI).
Mostly, the BPMS is integrated into an enterprise portal or brings along its own
front end. Figure 8.2 shows the relations.
Pattern 101: SOA Alignment
Ensure that the services in the underlying SOA are tailored in accordance with the business
activities needed by the BPM models and vice versa.
Because the automation of BPM models is based on invocations of functions
implemented in the SOA, it is important that activities and services correspond
to each other. Therefore, it makes sense to keep an eye on the business
processes when defining domains and service interfaces in the underlying SOA
layer (see chapter 9).
Figure 8.2: BPM Reference Architecture
©2011 Capgemini. All rights reserved.
The BPMS itself can be divided into several parts:
Business modelling client: This component is mainly used by the business
architects to edit the process models and rules according to the business
needs.
Business process engine (BPE): This is the core component, as it eventually
executes the process models, invokes the underlying rules and services, and
realizes human task management combined with user interface forms.
Business rule engine (BRE): The business rules are made available by the BRE,
which implements the forward and/or backward chaining algorithms and
exposes their interfaces.
Business activity monitoring (BAM): Controlling uses BAM for monitoring
and reporting on running or finished process instances.
Large BPMS products contain all these parts, whereas lightweight solutions –
especially open source frameworks – focus on a smaller subset. Nevertheless, it is
possible to combine multiple products into a holistic BPMS solution.
Pattern 102: BPMS Selection
Choose the appropriate BPMS product based on functional, technical, and economic aspects.
Perform selection early and start modelling based on the selected tool.
BPMS products cover a wide scope including modelling, integration, monitoring,
reporting, and human workflow management. Therefore, the features of the
BPMS have a large impact on the realization of the actual project and the whole
enterprise. In particular, processes and rules should be modelled in the targeted
BPMS product to avoid wasted effort. Even though BPMN 2.0 is well
standardised, BPMS vendors provide more or less differing dialects, and the
translation between notations is time-consuming and difficult. Furthermore,
using the BPMS product for modelling allows seamless integration of all
relevant aspects like BRM or BAM from the start. This yields additional
efficiency.
8.4 Impact on Development
In the evolutionary approach, development is mainly limited to the creation and
subsequent enrichment of models representing business processes and business
rules. However, this presumes several conditions about the technical
environment of the BPMS:
Service oriented architecture (SOA): The services needed for the
implementation of automated processing are readily available. This
includes the connection to legacy applications by integration patterns like
EAI or individual adapters.
User interfaces (UI): There are UI components for the input and output of
information needed by human-centric activities. Ideally, they can be
accessed as UI services in the underlying SOA and integrated into the
enterprise portal. Often, the BPMS product brings along its own proprietary
mechanism, e.g. UI forms.
Master data management (MDM): The different domain-specific data are well
defined and may be integrated into a holistic master data model. This is very
important whenever two self-reliant systems have to be integrated by the
BPMS solution. With growing complexity, the introduction of an MDM
system becomes necessary.
Transaction management: There are mechanisms for transaction handling like
distributed transaction processing (DTP) or two-phase commit (2PC). More
often than not, existing systems lack transaction management support, which
means that alternative solutions or workarounds must be found. In large
service oriented environments, compensation has proved to be a practical
solution.
If some of these prerequisites are not given, the project must realise them on its
own. This can range from designing UI forms in the BPMS itself, through
configuring service adapters, to initiating sub-projects for the implementation of
new technical or business system components. Once the effort is invested,
functional changes can be made quickly and cheaply.
Pattern 103: Draft run
Individual draft runs of processes and rules after creation or update help to find logical
and technical faults in early phases.
A wide range of BPMS products allow the execution of processes or rules before
the environment is completely available and before the enrichment of the models
has been finished. This is very helpful in many ways. First, the business units can
approve the correctness of the model very early, so change requests arise early,
reducing the effort to implement them. Second, additional components can be
developed more independently. And last but not least, errors are detected as
soon as possible. Moreover, many BPMS tools even support the periodic
execution of unit tests.
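The underlying idea of a draft run can be sketched without any BPMS at all: the real service invocations are replaced by stubs so that the flow logic can be exercised early. The process, service and data names below are hypothetical; real BPMS products offer comparable test facilities:

```python
# Draft-running a process before its environment is complete: the real
# service invocation is replaced by a stub, so the business logic of the
# flow can already be exercised and approved by the business units.

def credit_check_stub(order):
    """Stubbed service call: answers without any backend being available."""
    return order["amount"] <= 5000

def approve_order(order, credit_check):
    """Simplified process flow: check credit, then decide."""
    if not credit_check(order):
        return "rejected"
    return "approved"

# Unit-test style draft run against the stub:
assert approve_order({"amount": 100}, credit_check_stub) == "approved"
assert approve_order({"amount": 9999}, credit_check_stub) == "rejected"
print("draft run passed")
```

Because the service is injected, the stub can later be swapped for the real invocation without changing the flow logic under test.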
8.5 Impact on Software Configuration Management
In the context of the evolutionary methodology and business process management,
sound software configuration management is just as crucial for successful
projects as it is in the area of the sequential methodology. Moreover, the particular
patterns described in chapter 6 with respect to the basic and sequential approaches
are equally valid for evolutionary software development. However, some new
challenges arise in the context of BPM (see [20]) that are addressed by the
following patterns.
Pattern 104: Versioning strategy
Establish a holistic strategy for versioning all relevant artefacts including a reliable mechanism
to keep them in sync. This also includes the whole relevant system environment.
Products like the BPMS have central roles in the resulting system architecture,
and process models and business rules are important artefacts for development
and execution. As a result, functional changes are performed directly inside the
BPMS, which brings its own versioning, rather than in externally managed
source code. Hence, it becomes difficult to maintain unique versioning across the
complete system and to keep everything in sync. If branches are needed or
seamless collaboration must be supported, the situation becomes even more
complicated. In any case, there must be a clear strategy for establishing
all-embracing version management.
Pattern 105: Use staging
Establish a staging process for developing, testing, and deploying new features. Thereby, the
environments must cover the complete architecture stack including processes, rules, services,
and even external components (e.g. cloud services).
Although it seems very easy to incorporate modifications directly into the running
system, changes must be tested before they are deployed for productive use. This
is valid for the evolutionary methodology as well. As described in pattern 63, there
are different environments for development, testing, approval, and production
(DTAP) (see figure 8.3). Releases or versions of the product need to be staged
from one environment to the next after successful tests. This staging process has
to be implemented technically as well as organisationally, considering governance
aspects.
Figure 8.3: DTAP Environments in BPM Projects
©2011 Capgemini. All rights reserved.
Typically, the BPMS is embedded in a complex system landscape consisting of
services, user interfaces, enterprise portals, and underlying data. So, each
environment should contain all components that are part of the complete
solution. This even involves cloud services.
Pattern 106: Service versioning
Be aware of software updates in the face of long-lasting processes. Include versioning
information in service interfaces as a precaution.
Typically, business processes involve human interactions and therefore run for a
longer period of time (especially if the human interaction includes external
communication). Processes can easily take several weeks or months until
completion. Now assume some change requires the modification of service
interfaces. In this case, it is not sufficient to change these service interfaces
together with the service invocations in the process models, because there may be
process instances that were started beforehand. They are still running
according to the previous model, using the old service definitions and expecting
the former behaviour.
In general, a service interface should be designed in a very stable way. However,
it is impossible to prevent changes altogether. To remedy these problems, version
information should be added to the endpoint addresses of all services. This allows
current and previous versions to be provided simultaneously.
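A minimal sketch of such version-qualified endpoint addresses (the base URL and service names are hypothetical) shows how an old process instance can keep invoking the version it was started with while new instances use the current one:

```python
# Versioned service endpoints (hypothetical addresses): long-running
# process instances keep invoking the version recorded at start time,
# while newly started instances bind to the current version.

BASE = "https://example.org/services"

def endpoint(service, version):
    """Build a version-qualified endpoint address."""
    return f"{BASE}/{service}/v{version}"

# A process instance records the interface version at start time:
old_instance = {"service": "billing", "version": 1}   # started weeks ago
new_instance = {"service": "billing", "version": 2}   # started today

print(endpoint(old_instance["service"], old_instance["version"]))
# → https://example.org/services/billing/v1
```

Both versions remain deployed until the last instance bound to the old version has completed; only then can v1 be retired.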
8.6 Impact on Analytics
Customers expect software solutions to be delivered at the best quality.
Therefore, quality must be kept in mind throughout development. This also holds
true for evolutionary methodologies.
Pattern 107: Comprehensive unit testing
Unit tests should be executed for business rules, processes, and services. Additionally,
integration tests must be carried out periodically.
As mentioned above, many BPMS products allow draft runs of business
processes and rules. This feature can easily be used to introduce unit testing
for the individual models, which helps to detect errors in good time. Draft runs
can also be used to determine performance information like throughput or waiting
times. Some BPMS products even support the simulation of several functional or
technical aspects. This can be very helpful to ensure compliance with critical
customer demands.
Pattern 108: Static model analysis
Use static analysis to measure your processes (e.g. with BPMN-Q), rules, and services. At
the least, perform regular model review sessions.
Additionally, it makes sense to establish measurements to control and improve the
quality of the upcoming system. For programming languages like Java, established
metrics and tools (Checkstyle, FindBugs, Sonar, etc.) are available. Measuring the
quality of process models or business rules is currently rather challenging.
BPMN-Q (see for example [2]) is a first step in this direction.
Nevertheless, hardly any mechanism is supported by BPMS vendors. What
remains is the possibility of performing manual model reviews for quality
estimation.
It has to be mentioned that testing as well as measurement must take place on a
regular basis. Observing the trend of indicators is crucial for reacting quickly to
problems and eventually achieving high quality results.
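Even without tool support, simple structural metrics over a process model can be computed by hand. The sketch below is not BPMN-Q itself, just an illustration on a hypothetical process graph: counting nodes with more than one outgoing path gives a rough complexity indicator comparable to cyclomatic complexity in code:

```python
# A simple structural metric over a process model, as a stand-in for
# tool-supported static analysis: count decision gateways, i.e. nodes
# with more than one outgoing edge, as a rough complexity indicator.

# Hypothetical process graph: node -> list of successor nodes.
process = {
    "start":   ["check"],
    "check":   ["approve", "reject"],   # gateway: two outgoing paths
    "approve": ["end"],
    "reject":  ["end"],
    "end":     [],
}

def gateway_count(graph):
    """Nodes with more than one outgoing edge act as decision gateways."""
    return sum(1 for succ in graph.values() if len(succ) > 1)

print(gateway_count(process))  # → 1
```

Tracking such an indicator over time, as the text recommends, makes growing model complexity visible before it becomes a maintenance problem.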
8.7 Impact on Operations and Monitoring
The main difference to the sequential methodology is that most BPMS products
bring along some kind of monitoring and reporting. It can simply be used, and it
is not necessary to develop it separately as part of the solution. Sometimes the
indicators are hard-coded and predefined, but commercial BPMS tools can
normally be extended and adapted to the actual needs in an easy way.
Pattern 109: Operational prior to analytical
Pay attention to the effects of monitoring and reporting on the operative system performance.
In the case of very complex formulas, the indicator computation may lead to
significant load on either the server or the database. In that case, it is advisable to
use concepts like mirror databases and redundant server instances to prevent the
operative system from being slowed down or locked by concurrent monitoring or
reporting requests. The most comprehensive solution is to transfer all analytical
tasks into some kind of data warehouse system.
Monitoring processes using these mechanisms is very valuable, particularly since
it is available as part of the tool. It helps to detect processes that are overdue and
problems during the execution of the services involved. Further, it gives an
economic basis for planning where to invest in further automation and where not.
Based on this information, it is possible to relieve critical situations as early as
possible.
Pattern 110: Define KPIs
KPIs are important for both technical and business aspects. They must be regarded and
developed as a part of the solution.
There are a number of measured values that are important for the operative
course of business.
Time: Such indicators should cover all aspects with respect to the throughput of
running processes, like process duration, waiting times, number of parallel
process instances, and so on.
Quality: It is no advantage to run a lot of processes with high performance if the
results of the processes are poor. Therefore, quality indicators like success
rate or explicit user feedback are of importance for BAM.
Model: These indicators deal with aspects concerning the models themselves.
For example, the invocation percentage of the individual activities gives
information on how to further optimize the process models.
Business: All business aspects like costs or efforts can be captured and reported
as KPIs, too.
The main indicators for the performance of an enterprise are called key
performance indicators (KPIs). These are typically unique to a company.
Advanced tools providing BAM functionality allow defining these KPIs in
addition to the predefined ones. Further, it is important to distinguish two types
of KPIs:
Reporting KPIs give a good view of the overall performance (e.g. gross
operating profit (GOP)). However, poor values only indicate that something
is wrong, not what action is required. These KPIs are reported per month,
quarter, or (half) year.
Operational KPIs are less aggregated and indicate what action is required in
case of a poor value (e.g. "product delivered overdue"). Such KPIs should
be reported frequently (e.g. on a daily basis).
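An operational KPI of this kind can be sketched directly. The delivery records below are hypothetical illustration data; the point is that the daily count immediately tells the operator which instances need action:

```python
from datetime import date

# Operational KPI sketch with hypothetical data: count "product delivered
# overdue" per day. Unlike an aggregated reporting KPI, a poor value here
# points directly at the instances that need action.

deliveries = [
    {"due": date(2012, 1, 10), "delivered": date(2012, 1, 9)},
    {"due": date(2012, 1, 10), "delivered": date(2012, 1, 12)},
    {"due": date(2012, 1, 11), "delivered": None},  # still open
]

def overdue_count(deliveries, today):
    """Delivered too late, or still open past the due date."""
    return sum(
        1 for d in deliveries
        if (d["delivered"] is not None and d["delivered"] > d["due"])
        or (d["delivered"] is None and today > d["due"])
    )

print(overdue_count(deliveries, date(2012, 1, 12)))  # → 2
```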
Pattern 111: Continuous improvement
Establish continuous improvement of your processes. As a rule, some common methodology
like the continuous improvement process (CIP), LEAN, or Six Sigma should be used.
Whenever an indicator reveals that the process should be improved by some kind
of measure, this change should be analysed, implemented in the test environment,
approved, and finally deployed for production use. By monitoring and reporting
the KPI after deployment, it is possible to rate the implications of the change.
This leads to a continuous loop called the BPM cycle (see figure 8.4).
8.8 Impact on Project Management
With respect to the project management approach, there is no single common
strategy in combination with the evolutionary methodology. Nevertheless, there
are some general aspects that arise in the context of business process management.
Pattern 112: BPM is agile
Manage BPM projects in an iterative or agile way. This allows the project to implement the BPM
cycle for continuous process optimization.
Figure 8.4: BPM Cycle
©2011 Capgemini. All rights reserved.
Because of the BPM cycle and the evolutionary implementation of the models,
the management of this kind of project should be oriented towards iterative or
even agile approaches. This means taking smaller steps within shorter
time-frames. Such an approach becomes possible due to the flexibility gained with
BPM and BRM. The benefit is early feedback and quicker delivery to the market.
Driven by the BPM cycle using BAM, experiences from preceding iterations
directly influence succeeding implementation scopes.
9 The Enterprise Domain
9.1 The Term Enterprise Architecture
In IT, according to ISO/IEC 42010:2007, architecture is "the fundamental
organization of a system, embodied in its components, their relationship to each
other, the environment, and the principles governing its design and evolution"
(cited from [27]).
Referring to this comprehensive definition, people use different terms for different
types of architecture. Enterprise architecture, solution architecture, and even
security or governance architecture, as well as the more usual technical,
application or business architecture, are used in a similar manner.
In relation to the Quasar "Enterprise Domain", the system referred to here is the
"enterprise". Following TOGAF 9, we consider an enterprise to be a set of
organizational units with common goals. Thus, an enterprise may be a public
agency, a subsidiary of an international corporation, or even a network of entirely
independent companies working together in a supply chain.
Figure 9.1 illustrates how Capgemini relates these types of architecture to one
another, demonstrating the inclusion of Business Architecture within a full
Enterprise Architecture, as well as the need for Solution Architecture to span from
business to technology.
Figure 9.1: Types of Architecture
©2011 Capgemini. All rights reserved.
Enterprise architecture – like the IT of a company and all its operating functions
– is subject to the economic principle. By this principle, enterprise architecture
has to achieve provable benefits with respect to the goals of the enterprise that
exceed the cost of these functions. Therefore, the organization of IT structures in
an enterprise follows a request triggered by some business cause. Below are some
examples of such causes triggering a change of the enterprise architecture:
Changes in the business strategy
Functional expansion or change of business areas
Mergers, acquisitions, divestitures or buy-outs
Changes in collaboration with other companies
Utilizing new technology trends
Consolidating, simplifying and standardizing the application landscape
IT cost reduction
Enterprise architecture is never an end in itself, nor is it generally accepted as
self-evident. Rather, it has to continuously demonstrate its benefits for the
company. This does not prevent continuous improvement of the enterprise
architecture based on given business objectives.
Generally, it is the task of enterprise architecture to organise the IT structure
of the company in such a way that IT optimally supports the strategy and the
business goals of the enterprise.
9.2 The Quasar Enterprise® Method
Quasar Enterprise® is a comprehensive method, based on the experience of many
years and projects, for structuring the application landscape within the scope of
enterprise architecture. It is an approach and a roadmap based on the Integrated
Architecture Framework (IAF), Capgemini's content framework for architecture.
Quasar Enterprise comprises proven procedures, methods and rules to create the
artefacts and deliverables that describe the architecture of an enterprise as implied
by IAF as a content framework.
IAF distinguishes four architecture aspects (see [9]):
Business Architecture refers to business goals, business activities, and roles &
resources. Its outcomes are models of processes, organization, people and
resources.
Information Architecture deals with information structures and relationships. It
describes and defines what information is used, how it is used, and how it
flows through the business.
Information System Architecture describes packaged or bespoke information
systems that will automate and support the processing of the information
used by the business.
Technology Infrastructure Architecture describes the technology
infrastructure required to support the information systems architecture.
Additionally, IAF sees different abstraction levels for each architecture aspect (see
[9]):
The Contextual Level is about understanding the question "WHY" there is a
need for a new architecture and the context of the architecture. In the IAF,
the Contextual Level is about understanding why the architecture is
required, why the boundaries are set for the new architecture, the current
business and technology contexts (including the eco-system that the
organization relates to), who the key stakeholders are, the business
objectives and the success criteria: in short, the set of information that
provides the context for the architecture development.
The Conceptual Level is characterized by the "WHAT" statement. It is about
answering the question "WHAT is needed to realize the business
objectives?" At the Conceptual Level, the requirements and objectives are
decomposed, ensuring that all aspects of the scope are explored, issues are
identified, and these issues are resolved without concern over how the
architecture will be realized.
The Logical Level is characterized by the "HOW" statement. It answers the
question "HOW can the architecture be structured and organised in an
optimal fashion to achieve the stated objectives?" The Logical Level is
about setting the desired architecture state and structure in an
implementation-independent manner.
The Physical Level is characterized by the "WITH WHAT" statement. It is about
determining the real world structure and is concerned with transforming the
logical level outcomes into an implementation-specific structure.
Figure 9.2 shows the relations between the different levels.
Within Quasar Enterprise®, we see the different architecture aspects resulting
in different disciplines with their respective approaches for procedure, modelling
and evaluation. Here, we focus on the most important of them; see [9], [34] and
[12] for complete representations of the methodical approach.
The basic modelling principle of Quasar Enterprise® is its orientation towards
services as the elements of modelling. In Quasar Enterprise, a service is a complete
and self-contained unit of work that is independent of the user – be it a human
being or an IT system – and provides a benefit to the business.
Analogous to the phases of TOGAF 9 and the aspects defined in IAF, Quasar
Enterprise defines different phases for the development plan of an enterprise
architecture. They are as follows:
Chapter 9. The Enterprise Domain
Figure 9.2: The Integrated Architecture Framework IAF
©2011 Capgemini. All rights reserved.
Compiling the architecture context The architecture context includes the business and
IT strategy as well as the architecture principles derived from them.
The business goals that justify the change and are stated in the request
for architectural change are essential for the concrete architecture
development plan. The architecture context is always comprehensive,
i.e. it compiles artefacts that form the basis for all the architecture aspects
– business, information and information system architecture as well as
technology infrastructure. In TOGAF 9, this phase corresponds more or
less to the Preliminary Phase and the phase "A. Architecture Vision".
Reviewing the business & information architecture Given the business
and information architecture, Quasar Enterprise focuses on designing an
application landscape that is optimally aligned to them. Thus, this phase
deals with capturing the business and information architecture in a formal,
consistent manner and documenting them in the respective artefacts of the
development plan. The phase "B. Business Architecture" of TOGAF 9
serves this purpose.
Modelling an ideal information system architecture Modelling the ideal
information system architecture is the core and the foundation of Quasar
Enterprise. The goal is to define the information system architecture in a way
in which it can optimally support the guidelines from the architecture
context. The result consists of the logical information system components
and the information system services inside them.
Information systems architecture is likewise a phase in TOGAF. TOGAF
differentiates here between application and data architecture, a distinction
that IAF and Quasar Enterprise, with their service-oriented approach, do not make.
TOGAF aims only at architectures that are to be implemented and thus
forgoes designing a consistent ideal architecture as a model.
Assessment of the current information system architecture In order to
formulate the changes in the architecture necessary to fulfil the given
request and the set business goals, the enterprise architect must gather
the necessary information about the existing, current information
system architecture and assess it.
Definition of the technology infrastructure The technology infrastructure
consists of all services that are not directly functionally determined but
necessary in IT to implement the information system architecture. This
applies to hardware and operating systems, and low-level software such as
databases, middleware, BPM systems, dialog controls, and much more.
Typically, "platforms" combine several components and their services, e.g.
in the form of an integration platform. Another form is the combination into
technology stacks that are oriented towards specific operating systems and
vendors. The integration architecture is a part of the technology
infrastructure. It defines the technology
infrastructure services and their applications, which are necessary for the
coordination of different information system components. For this purpose, TOGAF
defines the phase "D. Technology Architecture".
Evolution – Planning and monitoring the architecture changes For planning
and monitoring the architecture development, Quasar Enterprise describes
the procedure for defining the to-be architecture that is derived from the
ideal and current architecture which is then implemented at a given point in
time. A roadmap describes the implementation of the to-be architecture
according to its dependencies and a defined schedule.
TOGAF describes these aspects, in addition to the modelling of to-be
architectures and migration plans, in phase "E. Opportunities and
Solutions", phase "F. Migration Planning", phase "G. Implementation
Governance" and phase "H. Architecture Change Management", with
significant emphasis on architecture governance.
Figure 9.3 shows these relations.
Figure 9.3: The Quasar Enterprise Method and Roadmap (source: [12])
©2011 Capgemini. All rights reserved.
9.3
Architecture Context
The cause, justification and approach for the actions of the architect shall be
defined across all disciplines of enterprise architecture and aligned with all
stakeholders. These are based on the business strategy, IT strategy and operating
model, reflecting market and industry trends as well as emerging
technologies and innovation in the company. This generally results in an
architecture vision and architecture principles, which guide the architecture
project and are clearly formulated within the request for the architecture project (see
figure 9.4).
Pattern 113: A statement of architecture work based on the business objectives always
precedes architecture work
Clarify the cause and the objective of the intended architectural work with respect to the
business objectives of the enterprise with all stakeholders; define the success criteria of the
architecture work and create alignment on them.
First understand the "Why", i.e. the business objectives to be supported, before
defining the approach to be taken or starting any architecture work.
Business objectives communicate what the enterprise wants to achieve within a
specified timeframe and set parameters and goals for all initiatives within the
organization. They identify the planned responses to business drivers, based on
taking advantage of opportunities and mitigating threats. Business objectives are
to be aligned with the overall business mission to ensure the basis for continued
business and measurement of planned change.
Figure 9.4: Architecture Context
©2011 Capgemini. All rights reserved.
Business objectives ideally have clear goals and are measurable and achievable. They
are not necessarily purely financial but may include organizational aspects,
changes to the company's image or market position, etc.
The business objectives communicate the issues that the architecture will address
and the way the architecture will address them. The scope of the architecture
engagement has to be closely aligned to the business objectives and demonstrate
how the architecture supports their goals and measures.
Only if architects fully understand the business objectives they
are to support will they be able to construct solutions that are effective and worth
the effort.
A statement of architectural work not only provides direction, but identifies
stakeholders and sponsors for architecture work as well. Thus it not only gives the
rationale to the architect for his work, but also shows the corporate consensus on this
work.
Forgoing the statement of architecture work bears a high risk of the designed
architecture being irrelevant, which usually gives architecture a bad name and
turns out to be fatal for the architecture function.
Pattern 114: Derive and align a set of architecture principles to govern the architectural
work (see figure 9.5)
Based on business and IT strategy, business drivers and IT trends, common beliefs within the
enterprise and corporate culture, compile a set of architecture principles and detail them until
they become clear, consistent, complete and operational in the context of the intended
architectural work.
A principle is an approach, a statement or a belief, which provides guidance. It
has to be stated unambiguously and comprehensibly. The set of architecture
principles has to be mutually exclusive and collectively exhaustive. For each
principle, there is a rationale, and a prioritization with respect to other principles.
Architecture principles never state obvious or self-evident facts with respect to the
enterprise, its corporate culture and the point of view of stakeholders.
Architecture principles are derived from business requirements. Business
requirements state how a company needs to modify the actual situation to achieve
its business objectives.
Figure 9.5: Deriving Architecture Principles
©2011 Capgemini. All rights reserved.
9.4
The Business Architecture Discipline
Architects must understand the ways and means of how the enterprise does its
business and how this will change in future. Within the Quasar
Enterprise® approach, shaping IT so that it is aligned with and optimally supports
the business requires a formal, consistent compilation of the business architecture
of this enterprise.
The Business Architecture discipline takes contextual information referring to the
business as its input. This information includes the business drivers, the business
vision and mission, the business strategy and the operating model. In addition, the
business architecture requires the business objectives as set for the architecture
engagement and the derived architecture principles referring to the business.
The outputs of business architecture activities are business dimensions, services and
objects. Business processes are implementations of business services, and logical
components are formed out of business services of a lower level. Figure 9.6 shows
these relations.
Figure 9.6: Business Architecture
©2011 Capgemini. All rights reserved.
Pattern 115: Identify business dimensions
Identify business features that distinguish main business functions of the enterprise that
otherwise serve the same purpose.
Business dimensions structure the business by specifying the distinctive features
of a business. Which characteristics are regarded as relevant reflects the business
objectives of the company. Different organizational units covering the same
business function imply business dimensions.
Business dimensions are always rooted in the business strategy, especially in the
marketing strategy.
Examples of business dimensions:
Customers/brands: The different customer segments a company addresses with
different brands
Products: The main product categories a company offers
Customer channels: The channels by which the enterprise addresses its customers
and sells its products
Part of the value-add chain: The differentiation between self-produced and procured
goods
Pattern 116: Decompose business services
Decompose top level business functions recursively using a top-down approach until the
appropriate level of detail is achieved.
Architects need to focus on an external view of the business functions,
abstracting from internal details. Thinking in terms of services is the appropriate
paradigm for this situation. Services are managed business functions, which are
defined independently of a specific user, i.e. actor, and provide an output relevant
to the business.
Following Capgemini’s IAF service concept, a service supports a clearly defined
goal, is executed by a role and consists of one or more activities. Therefore, we
get the relationship as shown in figure 9.7.
Figure 9.7: Business Service
©2011 Capgemini. All rights reserved.
The architect takes a top-down approach to business service decomposition
not in order to understand the business, but in order to limit the analysis to the required
extent. The starting point is the set of overall top-level business services of the enterprise as
given by the business mission or reflected in the overall value chain. At each level,
the architect details those services relevant to the business objectives and business
requirements of her or his engagement, i.e. the scope of the architecture
engagement (see figure 9.8).
There are two ways of refinement: On the one hand, refinement can be done along
the business dimensions. On the other hand, the architect may consider a business
service as a logical business process component consisting of business services of
a lower level.
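Conceptually, this recursive decomposition can be sketched as a small service tree. The following Python sketch is purely illustrative; the service names and the `decompose`/`leaves` helpers are hypothetical and not part of the Quasar Enterprise methodology.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessService:
    """A business service, recursively decomposed into lower-level services."""
    name: str
    children: list["BusinessService"] = field(default_factory=list)

    def decompose(self, *names: str) -> list["BusinessService"]:
        """Refine this service into lower-level services (or along business dimensions)."""
        self.children = [BusinessService(n) for n in names]
        return self.children

    def leaves(self) -> list[str]:
        """The services at the current finest level of detail."""
        if not self.children:
            return [self.name]
        return [leaf for c in self.children for leaf in c.leaves()]

# Hypothetical example: decompose one top-level service of a retailer.
sales = BusinessService("Sales")
order, billing = sales.decompose("Order Management", "Billing")
# Refine only the services relevant to the engagement's scope.
order.decompose("Order Capture", "Order Tracking")

print(sales.leaves())
# ['Order Capture', 'Order Tracking', 'Billing']
```

Note that, as the pattern demands, refinement stops wherever the engagement's scope does not require further detail – here "Billing" is left unrefined.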
Figure 9.8: Decomposing Business Services
©2011 Capgemini. All rights reserved.
Pattern 117: Identify major business objects
In reflecting the input and output of business services, identify major business objects as the
essential entities a business service is transforming.
Business objects are the real-world things. Business services use and consume, i.e. create,
transform and eliminate, business objects. Different business services will use and
consume business objects in different ways. Generally, business objects are
identified as inputs and outputs of business services.
Business objects may be, or give rise to, information objects. It may be architecturally
significant to understand these differences when structuring the architecture
components. Thus a business object is a non-human resource used by the business
that is significant to the architecture.
9.5
The Information Architecture Discipline
For shaping the application landscape, the architect needs to know about
information flows and about the responsibilities for consistency, completeness and
correctness of information. The information architecture describes the information
the business uses, the information structure and relationships (see figure 9.9).
The contextual information, in the form of information architecture principles,
determines the information architecture. There is a close relationship between
information architecture and business architecture: For example, business services
not just handle business objects, but they may add information as they transform
business objects into a certain state. On the other hand, information objects reflect
changes in the respective business objects (see figure 9.10).
Figure 9.9: Information Architecture
©2011 Capgemini. All rights reserved.
Beware: Information relates to the business; it does not refer to data, which is
information handled by information systems.
Figure 9.10: Business Service and Information Services
©2011 Capgemini. All rights reserved.
The outcome of the information architecture is typically a series of business
information components that describe what information is used and how, how it flows
around the business, and the stewardship of certain information.
Pattern 118: Determine information flows
Identify flows of information going from source to final destination
Information flows describe the use and communication of information. They are
about the communication of information through the business, irrespective of how that
is achieved. They usually run parallel to business services.
The analysis of information flows is useful for identifying issues concerning automation.
For example, this analysis may reveal areas with significant changes between
automated and manual handling, and areas with potential for miscommunication. This will
provide a basis for analysis and optimization of the information systems support.
There is often confusion about the need for information flows when they match
business processes very closely. This is very common when the business
objectives include the derivation of information system aspects of the architecture.
When a process is the starting point for the analysis, the process gets analysed for the
information aspects of its activities. This results in entities which
are in effect parts of the information flow. Remember that information flows
focus on information and not on activities.
Pattern 119: Identify information objects and assign stewardship
Identify information objects which contain the information on major business objects according
to their state. Assign the stewardship, i.e. the responsibility for consistency, correctness and
completeness, of those information objects.
An information object is a source of information and subject of communication
for business services, independent of the media used. It describes the information
that is used or communicated within information flows.
Information objects are characterized by statements of the form "a blah is a blur
that bleeps", e.g. "an order is the request of a customer to supply an article".
Defining information objects in this way creates an ontology that is the basis for
communication between business and IT.
It is crucial for the architecture to analyse and define the responsibility for the
correctness, consistency and completeness of information as reflected within
information objects.
9.6
The Information System Architecture Discipline
The Information System Architecture discipline is about analyzing and modelling
IT application landscapes (see figure 9.11).
Figure 9.11: Information System Architecture
©2011 Capgemini. All rights reserved.
The Information System Architecture discipline is based on the architecture
context and the outcomes of the business architecture and information architecture
analysis. In addition, the design of an IT landscape requires information on the
current IT landscape and the current project portfolio.
The Information System Architecture discipline defines two methods:
IT landscape assessment Based on the set business objectives and the ensuing
requirements, the architect defines key indicators for IT applications and for
the IT application landscape as a whole. This way he can identify areas
with a need for action and evaluate the effect of architectural changes and
measures. By applying this method to the current landscape as well
as to future target landscapes and even to ideal landscapes, the architect
shows how he fulfils the business objectives and justifies the modelled
information system architecture.
IT landscape modelling Based on the architecture context, the outcome of
business and information architecture, and the current information system
architecture, the architect designs future information system architectures.
The outcome is information system services and components formed out of
these services, i.e. the model of an IT application landscape. To show a way
to implement a certain IT application landscape, the architect defines
architectural measures and transitional architectures which result in a
transformational roadmap.
Pattern 120: Define the ideal information system architecture as a strategic target
The ideal landscape fulfils all present requirements and is aligned with business and IT strategy. It
serves as a beacon for the architecture implementation governance, planning and work.
Architecture work needs endurance, persistence and tenacity. To avoid getting lost
in detail, urgent fire fighting and short-term change, the architect needs a beacon
which guides his work. This is the purpose of designing an ideal information
system architecture, which fulfils the business requirements in an exemplary way
and is aligned with both business and IT strategy.
Though this ideal information system architecture could be realized in principle, it
will probably never become reality. The ideal information system architecture
takes neither the current information system architecture nor limitations in time, effort
and finances into account. There is a reason to design an ideal
information system architecture instead of just going for a target architecture: In this
way, the architect can more easily identify and prioritize areas of change
according to their contribution to the business objectives by comparing the ideal
with the "as-is" information system architecture. The ideal information system
architecture provides the whole picture.
Pattern 121: Define the "To-Be" Information System Architecture as a state of the
information system architecture planned for and achievable at a certain point in time
Compile measures and a roadmap to achieve a state of the information system architecture
according to the "managed evolution" approach.
Based on the ideal and the current information system architecture, and
subsequently identified and prioritised areas of need for action, the architect
defines the "to-be" information system architecture to be achieved measure by
measure at a certain point in time. These measures form the architecture roadmap.
It is crucial that this roadmap provides an approach balanced between realizing
short-term business opportunities and following long-term IT strategy goals. We
call this approach "managed evolution"; it is shown in figure 9.12.
Pattern 122: Assess current, ideal and future Information System Architecture according
to the alignment with and the contribution to the business objectives
Assess an information system architecture by criteria which are derived from the business
objectives as they were stated in the architecture statement of work
Due to the complexity of an IT application landscape, an architect can collect a lot
of data which may lead to unnecessary efforts and bears the risk of getting lost in
detail. Therefore, the collection of data has to be focused.
Figure 9.12: The managed evolution of an IT landscape
©2011 Capgemini. All rights reserved.
The data to be collected refers to key indicators about the state of the application
landscape. As business objectives and requirements are defined in a measurable way, the
key indicators should be defined up front and reflect these business objectives and
requirements.
For example, if the enterprise states the cost reduction of certain business transactions
as a business objective, the architect has to compile the cost of a (partial)
transaction within each involved IT application. The starting point for this will be the
overall run costs of an application and the number of transactions processed in a
certain time frame.
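As a minimal sketch of this starting point (all figures and application names are invented for illustration), the key indicator is simply run costs divided by transaction volume:

```python
# Hypothetical yearly run costs per application and the number of
# transactions of the relevant type processed in the same year.
applications = {
    "CRM":     {"run_costs": 400_000, "transactions": 2_000_000},
    "Billing": {"run_costs": 250_000, "transactions": 500_000},
}

for name, app in applications.items():
    cost_per_tx = app["run_costs"] / app["transactions"]
    print(f"{name}: {cost_per_tx:.2f} per transaction")
# CRM: 0.20 per transaction
# Billing: 0.50 per transaction
```

Comparing such indicators across the current, to-be and ideal landscapes makes the contribution of an architectural measure to the cost-reduction objective visible.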
Usually, there are four main aspects for the assessment of an IT application landscape.
Degree of standardization: Diversity will lead to high operating costs.
Therefore, the IT landscape should be as standardized as possible, i.e. use
as few technology stacks, platforms, types of software packages etc. as possible.
Degree of complexity: Complexity will lead to increased project costs and an
increase in the number of errors. Complexity is decreased by reducing the
number of IT applications and the number of interfaces between them.
Degree of dependency: Tight coupling between systems leads to high
dependence between them. This results in extra cost when changing,
replacing or outsourcing systems. In addition, having loosely coupled
applications will result in higher availability.
Total Cost: Every architecture change needs a business case; even "strategic"
measures have to provide financial value in the long run. The architect has
to consider run costs as well as costs for future change, and has to take the net
present value of IT applications as assets into account.
Pattern 123: Define a domain model to bridge business and IT
Use a domain model as a structuring concept to link business architecture and information
system architecture artefacts (see figure 9.13)
Many companies still lack a link between business and IT. A domain model fills
this gap. It describes both the business services within the domains and the IT applications
that automate these services, resulting in a common model that both business and
IT understand.
Domains in the context of the Quasar Enterprise methodology are abstract
structural elements which describe areas of the information system architecture,
where certain common approaches, principles, rules and points of view apply. In
this way, domains structure the IT application landscape by business aspects.
Figure 9.13: Create Domain Model
©2011 Capgemini. All rights reserved.
To design a domain model, identify appropriate domain candidates. Top level core
and supporting business services as well as major information objects are
candidates. Domain candidates derived from core business services may have to
be detailed by business dimensions. These candidates have to be further detailed
or merged according to the business point of view. This way the domain
candidates are finalized. The value of the domain model results from mapping
existing IT applications and information system services to the domains.
There are many domain-specific reference models available today which are
derived from project experience. Use them to define the ideal and the "to-be"
architecture. For industry domains such as Financial Service Provider, Logistic
Service Provider, Insurance and Automotive Production, these reference models
help to structure the business area and reduce the risk involved in modelling the
architectures.
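The value-adding mapping step can be sketched as a simple coverage check. All domain and application names below are invented examples, not taken from any Capgemini reference model.

```python
# Hypothetical domains of an insurance company and the existing IT
# applications mapped to them.
domains = ["Policy Management", "Claims", "Partner Management"]

application_to_domains = {
    "LegacyPolicySystem": ["Policy Management"],
    "ClaimsSuite":        ["Claims", "Partner Management"],
    "CRM":                ["Partner Management"],
}

# Applications spanning several domains are candidates for closer inspection;
# domains covered by no application reveal gaps in IT support.
spanning = [a for a, ds in application_to_domains.items() if len(ds) > 1]
covered = {d for ds in application_to_domains.values() for d in ds}
gaps = [d for d in domains if d not in covered]

print(spanning)  # ['ClaimsSuite']
print(gaps)      # []
```

Even this trivial analysis illustrates how the domain model exposes structural findings – multi-domain applications and uncovered domains – in terms both business and IT understand.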
Pattern 124: Derive information system services from business services of the
appropriate granularity
An information system service is the automated part of a business service, actually
implemented by means of IT.
The architect categorizes business and information services from the business and
information architecture according to their need for automation and the
enterprise’s ability to automate these services.
Figure 9.14: Business Services and Information System Services
©2011 Capgemini. All rights reserved.
Within the scope of the architecture engagement, i.e. within identified areas with
need for action and according to the business requirements, the architect decides
on the appropriate part of business services at the right degree of granularity to
ideally become information system services. These information system services
are the elements for modelling the information system architecture.
Beware that with this approach there is no hierarchy of information
system services, though one information system service may call another.
To start modelling, these services are mapped to the right domain and are
categorized into:
Interaction services: Services which provide interaction on information, mostly
with a human user, but also with other systems outside of the enterprise
Process services: These services provide consecutive calls to other services
Functional services: These services implement some kind of a more or less
complicated algorithm
Data portfolio services: Create, read, update and delete services on certain data
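These four categories can be captured in a small enumeration; the sample services, domains and names below are hypothetical.

```python
from enum import Enum

class Category(Enum):
    INTERACTION = "interaction"        # interaction on information, mostly with humans
    PROCESS = "process"                # consecutive calls to other services
    FUNCTIONAL = "functional"          # implements some kind of algorithm
    DATA_PORTFOLIO = "data_portfolio"  # create/read/update/delete on certain data

# Hypothetical information system services, each mapped to a domain
# and one of the four categories.
services = [
    ("Capture Order", "Sales", Category.INTERACTION),
    ("Process Order", "Sales", Category.PROCESS),
    ("Calculate Price", "Sales", Category.FUNCTIONAL),
    ("Manage Customer Data", "Customer", Category.DATA_PORTFOLIO),
]
```

This (domain, category) assignment is the input for forming logical components in the next pattern.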
Pattern 125: Model logical information system components by domain and service
category
A logical information system component is created by grouping one or more information
system services based on the mapping of those services to domains and categories.
The logical information system components represent the "application"
components for a given solution alternative. They describe the
implementation-independent solution for information systems in terms of the ideal state.
Modelling starts with all information system services in scope being
assigned to a domain and a category. One component candidate is created for
the information system services of each domain and category. The component
candidates are formed according to functional criteria and refined in accordance
with the rules for component design. Components should be designed such
that they have tight cohesion internally and a low degree of coupling with
each other.
The physical information system components of the actual application landscape
are used to check the component candidates for completeness. The candidate
components are adapted where necessary. Then, they are given names that are
understood and accepted by those involved. They form the final components.
Logical information system components of different categories should only have
couplings in accordance with the reference architecture of the categorised
application landscape. The couplings between all components should form a
directed acyclic graph. Logical information system components with services of
the category "data portfolio" represent the data stewardship over the information
objects.
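A minimal sketch of the grouping rule and the acyclicity requirement, assuming hypothetical services and couplings:

```python
from collections import defaultdict

# Hypothetical services, each assigned to a (domain, category) pair.
services = [
    ("Capture Order", "Sales", "interaction"),
    ("Process Order", "Sales", "process"),
    ("Calculate Price", "Sales", "functional"),
    ("Manage Customer Data", "Customer", "data_portfolio"),
]

# One component candidate per (domain, category) pair.
components = defaultdict(list)
for name, domain, category in services:
    components[(domain, category)].append(name)

# Couplings between components must form a directed acyclic graph.
couplings = {
    ("Sales", "interaction"): [("Sales", "process")],
    ("Sales", "process"): [("Sales", "functional"), ("Customer", "data_portfolio")],
}

def is_acyclic(graph):
    """Depth-first search for back edges over the coupling graph."""
    visiting, done = set(), set()
    def visit(node):
        if node in done:
            return True
        if node in visiting:
            return False  # back edge: cycle found
        visiting.add(node)
        ok = all(visit(n) for n in graph.get(node, []))
        visiting.discard(node)
        done.add(node)
        return ok
    return all(visit(n) for n in list(graph))

print(is_acyclic(couplings))  # True
```

The same check run against a cyclic coupling structure would flag a violation of the reference architecture's layering rule.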
Figure 9.15: Logical Information System Components
©2011 Capgemini. All rights reserved.
9.7
The Technology Infrastructure Discipline
When managing the evolution of IT landscapes, architects need to decide how to
physically connect applications and where and how to use integration
infrastructures.
The Technology Infrastructure discipline focuses on the integration infrastructure
of an IT landscape. It describes best practices for how to physically connect
applications. It also offers a method for deciding on an integration infrastructure.
The Technology Infrastructure discipline uses the clients’ information about the
IT strategy and the existing technology infrastructure as one of its inputs.
Additionally, the "to-be" IT landscape is needed as input information for
modelling the IT infrastructure. The discipline will produce logical and physical
information on the technical infrastructure for integration purposes.
Pattern 126: Standardize the technical integration services
Use reference models for integration to structure and analyse the integration tasks.
If applications are physically connected there will be a set of technical integration
tasks that recur in many contexts. Technical transformation, functional
transformation and addressing the correct application are only some examples. In
many IT landscapes, these recurring tasks are implemented redundantly in almost
every application.
Figure 9.16: Technology Infrastructure Architecture
©2011 Capgemini. All rights reserved.
When managing the evolution of an IT landscape, an architect needs to reduce the
redundancies by standardizing and even physically centralizing these technical
integration services.
Technology Infrastructure provides a reference model for integration and maps
this reference model onto the capabilities of different integration platforms. Using
this, an architect can easily structure the integration services needed by the
applications, and can easily map these services to existing integration
infrastructures.
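Such a mapping can be sketched as a coverage check of required integration services against platform capabilities; the service and platform names below are invented and do not represent Capgemini's actual reference model.

```python
# Hypothetical technical integration services required by the applications.
required = {"routing", "technical_transformation", "functional_transformation"}

# Hypothetical capabilities of candidate integration platforms.
platforms = {
    "ESB-A": {"routing", "technical_transformation"},
    "ESB-B": {"routing", "technical_transformation", "functional_transformation"},
}

for name, capabilities in platforms.items():
    missing = required - capabilities
    print(name, "covers all" if not missing else f"missing: {sorted(missing)}")
# ESB-A missing: ['functional_transformation']
# ESB-B covers all
```

Gaps identified this way show which recurring integration tasks would remain redundantly implemented in the applications themselves.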
Pattern 127: Take the two-step integration approach
Disentangle the interfaces first, take advantage of the implementation of integration
infrastructures second.
Integration infrastructures help to centralize technology infrastructure services.
However, introducing an integration infrastructure is usually expensive, and
business cases are rare. Therefore, architects prepare as many applications as
possible beforehand for the introduction of a standardized integration
infrastructure. In this way they can reduce investment risks and make the benefits
that the Technology Infrastructure offers more visible.
Pattern 128: Patterns for integration make for robust physical integration
Use tried and tested patterns for integration to arrive at the best integration scenario in a
given context.
When architects design the physical integration of applications, they have to
balance many non-functional requirements that are often poorly compatible. For
example, an interface may be required to have high performance and
simultaneously be subjected to detailed monitoring. Addressing such
requirements is often a time-consuming and error-prone task.
Technology Infrastructure will provide recurring sets of non-functional
requirements and their corresponding solutions in what we call "patterns for
integration". Patterns for integration provide architects with tested
problem/solution pairs, which reduce the time required when seeking the solution
to a given problem and also lower the error rate. Figure 9.17 illustrates a solution
for combining custom software with software packages in a service-oriented way.
Figure 9.17: Solution pattern for the service oriented integration of custom developed
software with standard software packages
©2011 Capgemini. All rights reserved.
Appendix A. Situational Software Engineering: a Systematic Approach

A.1 Introduction
To become a successful IT supplier in the market, a situation-specific
combination of software development process model and methodical elements
has to be found that guarantees highly economical software development in each
individual software development project. Here, the following three factors
affecting economy are in conflict (see figure A.1):

Effectiveness: the correct activities are performed (i.e. no unnecessary
activities are carried out)

Efficiency: the activities are carried out in the proper way, without wasting
any resources

Economics of price: resources are used in such a way that the product of unit
cost prices and expenditure is as low as possible (i.e. it may be worth
using a more expensive tool if the expenditure for human activities can
thereby be decreased)
Since it is not possible to optimise all three factors at the same time, each
project has to ensure that a purposeful balance between them is achieved.
Architectural development is an example: a good architecture is mandatory, since
it reduces the implementation effort, but on the other hand it requires additional
effort for its development. At the same time, the last minor optimisations
typically do not bring large savings. Thus, a balance must be found here between
effectiveness and efficiency.
No software development process model and no methodology is automatically
suitable for every project situation. This means that the ideal combination of
process model and methodical building blocks has to be selected for the
respective project situation. This observation has already led to the research
theme of Situational Method Engineering (see [35]) within the area of method
engineering. In the V-Modell XT (eXtreme Tailoring), too, such a situational
adaptation is already embodied in the name: here, tailoring means selecting one
of three pre-defined project types to determine the methodical building blocks
for the current project situation.
Figure A.1: Economics: achieving the balance in Effectiveness, Efficiency and Economics
of Price

Recently, agile software development approaches have become more popular in
industrial software development. B. Boehm and R. Turner ([3]) investigated the
differences between and advantages of classical plan-driven and modern agile
approaches. They identified a set of key factors that are helpful in deciding for
one of the two approaches. These are:
1. size of the software product and/or the team,
2. criticality of the software product in the employment context,
3. quality and experience of the developers, as well as
4. dynamics of requirements.
On this basis, Capgemini developed a tool for the selection of the right software
development process model (see [17]). Here, in particular, the aspect of
architectural dependencies between features and system components as well as
timing aspects were added (see figure A.2). The selection tool supports the project
manager in allocating a project to a suitable software development process
model in accordance with the factors indicated in figure A.2 (agile = dark red,
plan-driven = dark blue). If the project cannot be assigned clearly to a
plan-driven or agile process model (i.e. the project characteristics are in the light
blue and/or light red range), the tool recommends a combination, whereby the
possible risks are indicated to the project manager (for details see [17]).
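To illustrate, such a factor-based selection can be sketched as a small scoring rule. This is a hypothetical illustration only: the factor names follow Boehm and Turner, but the ratings, scoring and thresholds are invented here and are not those of the Capgemini selection tool.

```python
# Hypothetical sketch of a factor-based process-model recommendation.
# Each factor is rated from 0 (favours agile) to 1 (favours plan-driven);
# the thresholds below are invented for illustration.

def recommend_process_model(factors: dict) -> str:
    keys = ("size", "criticality", "team_experience_gap", "requirements_stability")
    score = sum(factors[k] for k in keys) / len(keys)
    if score < 0.35:
        return "agile"
    if score > 0.65:
        return "plan-driven"
    # Mixed characteristics: recommend a combination and flag the risks.
    return "combination (review risks with the project manager)"

print(recommend_process_model(
    {"size": 0.2, "criticality": 0.3,
     "team_experience_gap": 0.1, "requirements_stability": 0.2}))  # agile
```

The middle band corresponds to the light blue/light red range mentioned above, where the tool recommends a combined approach rather than forcing a clear-cut decision.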
Figure A.2: Key success factors in the selection method within Capgemini

Experience with the selection tool shows that project managers now make a more
conscious decision when selecting a software development process model. At the
same time, this experience shows that project managers also need support in
deciding which methodical building blocks are to be used in which situation.
Thus, an extension of the selection tool is required. In this chapter we present
the structure of the framework required for this enhanced selection tool.
This interaction between efficiency, effectiveness and economics of price is
reflected in the new title eee@Quasar for Capgemini's situational development
method (Quasar 3.0). The concept has already been discussed within the scientific
community (see [14]) and with our customers at the “Kundenforum Architecture” in
2010. Here we present the updated version of this concept, into which all the
valuable hints have been integrated. In parallel, this concept of situational
method engineering has been integrated into the TechnoVision ([8]).
A.2 The framework
Our framework consists of two different streams: the first is the development
of the actual framework, the second a methodology for identifying the
right method bundle and process model for a concrete project. Figure A.3 gives an
overview of the framework.
Figure A.3: Our methodology to find the right correlation between a project and a methodology and process model
A.2.1 Development of the framework
The first step is to identify the methodical building blocks to be taken into account,
as well as the types of development process models. The second is to find a
characterisation of the methodical elements, process models and system types that
allows us to look at the resulting activities and related artefacts not only from a
development point of view but also from that of later operations and maintenance.
One characteristic in this sense is the lifetime, with two possible extreme
instantiations:

no maintenance phase with new business requirements (example: an online
game produced especially for a soccer championship)

very long maintenance phase with few changing business requirements
(example: an accounting system with standardized transactions)

Some methodical building blocks will support maintenance later and some will
not. In parallel, we identify the different characteristics of the systems to be
considered. Clearly, there has to be a close interaction between these two steps.
On this basis we build groups both on the methodology side and on the system
side. The last step in building the methodical framework is to find the
correlations between methodical building blocks, software development process
models and system types. At this point, at the latest, the Quasar 3.0 methodology
and the system characterisations in chapter 2 are to be interpreted as one concrete
example: for other environments, there are other possible methodologies as well
as other system characterisations.
On this basis it can be decided, for each project, into which class of projects it
falls and, therefore, which combination of process model and methodical elements
is suitable.

In the following, we describe the steps in more detail. The concrete elaboration
within Quasar 3.0 is described in chapter 2.
Step 1: Identification of the relevant elements
The first thing is to identify the types of projects taken into account. At first
glance this step seems quite superfluous, but it helps to get the right focus
and prevents ending up with the most generalised solution, for which the effort
of methodology support would be unacceptable.
Software development methodology is not developed in a green-field approach:
in this area there are already many existing methodical elements, tools etc.
Therefore, the next step is to identify which activities are to be taken into
account at all, which methodical elements can be used, and what the appropriate
process models are. In this step it is helpful to use a (standardised) approach
for structuring the elements both of the methodology and of the process model,
and to define a way to document the interaction between these elements.

These steps are closely related, because we get a different set of methodical
building blocks depending on whether we look at maintenance or at initial build
projects. Independent of this, it is quite easy to add further methodical elements
or process models without having to restructure the framework completely. Adding
new types of projects is of a different quality and may have much more impact, as
it strongly determines the next steps.
In our approach, each activity takes input artefacts and processes them, using
methods, tools, utilities, components and patterns, into output artefacts
(see figure A.4).

A method here is an established practice for fulfilling a given activity. Tools are
instruments for executing activities; they are themselves built of software but
typically do not become part of the solution. The latter also holds for utilities,
but they are more auxiliary elements and need not be built in software; typical
utilities in this sense are checklists, templates etc. Components are
prefabricated parts (not necessarily software, but typically so) that do become
part of the solution. Here we do not distinguish between components and
frameworks, because this differentiation does not help in this context, so we
subsume both under the term component.
Figure A.4: Artefacts are input and output to activities
Example: The activity Write Specification takes the requirements and
uses our specification method and Enterprise Architect as a tool to
finally produce the finalised specification.

For each activity there may be more than one method or tool and, in distinction
from Quasar 2.0, there can be different input and output artefacts depending on
the concrete method.
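The activity model described above can be sketched as a small data model. The class names mirror the text (activities, methods, tools, artefacts), while the concrete fields are an illustrative assumption, not the actual Quasar 3.0 schema.

```python
# Minimal sketch of the activity/artefact metamodel described above.
# Input and output artefacts are attached to the concrete method, since
# they may differ per method (in distinction from Quasar 2.0).
from dataclasses import dataclass, field

@dataclass
class Method:
    name: str
    inputs: list   # input artefact names for this concrete method
    outputs: list  # output artefact names for this concrete method

@dataclass
class Activity:
    name: str
    methods: list = field(default_factory=list)  # more than one method is possible
    tools: list = field(default_factory=list)    # tools do not become part of the solution

write_spec = Activity(
    name="Write Specification",
    methods=[Method("specification method", ["requirements"], ["specification"])],
    tools=["Enterprise Architect"],
)
print(write_spec.methods[0].outputs)  # ['specification']
```

The example instance corresponds to the Write Specification activity from the text.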
The activities are part of disciplines, which themselves are part of domains.
Both domains and disciplines are structuring elements. If we use the terms
super-domain or sub-domain in this book, this indicates relations between
domains; in every case we are still talking about domains.
The correlation between the engineering model and software development process
models is reflected by the activities also being part of the software development
process model concept. The software development process model controls the
flow of activities. These two models are completed by the organisation and
foundation model (see figure A.6).
To better understand the dependencies and interactions between the various
disciplines, see figure A.5, which presents the major output artefacts that
comprise the links between the disciplines for the basic approach.
Figure A.5: Artefact flow
The artefact flow shows different characteristics:
Constructive flow means that output artefacts from one discipline are used
in a constructive way by the next discipline and are further developed.
Analytical flow means that output artefacts from one discipline are used for
analytical purposes in the following discipline. They are required to derive
information about the software quality or its status.
Document flow means that output artefacts from one discipline are mainly
used for documentation purposes in the following discipline. The artefacts
are not processed further, but are needed as information input for the system
documentation.
Figure A.6 makes it quite clear that the artefact flow does not need to
be the same in every software development process model.
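As a sketch, the three flow characteristics can be captured in a small enumeration and used to classify artefact flows between disciplines. The discipline and artefact names below are invented examples, not taken from figure A.5.

```python
# Sketch: the three artefact-flow characteristics as an enumeration.
from enum import Enum

class FlowKind(Enum):
    CONSTRUCTIVE = "further developed by the next discipline"
    ANALYTICAL = "used to derive quality or status information"
    DOCUMENT = "kept as information input for the system documentation"

# Hypothetical flows: (from-discipline, to-discipline, artefact, kind)
flows = [
    ("Specification", "Design", "specification", FlowKind.CONSTRUCTIVE),
    ("Implementation", "Quality Assurance", "source code", FlowKind.ANALYTICAL),
    ("Design", "Documentation", "architecture overview", FlowKind.DOCUMENT),
]

constructive = [artefact for (_, _, artefact, kind) in flows
                if kind is FlowKind.CONSTRUCTIVE]
print(constructive)  # ['specification']
```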
Step 2: Identification of the relevant characteristics
This step is vital for the whole framework: it determines the whole structure of
the result, and it may reveal the need for additional methodical building blocks
or process models.
For the definition of the characteristics it is helpful to look first at the systems
and at the question of where they differ. This could be, for example,

life cycle aspects of the products to be developed (software systems etc.),
because these strongly determine the needs for documentation etc., or

domain-specific aspects, because, for example, automotive applications may
need a different handling than those for aviation.
Each methodical building block can be associated with one or more situational
aspects for which it is suited; situational aspects may be offshore, industrialised
etc. From these methodical building blocks we create methodologies under different
aspects. These methodologies can themselves be brought into a structure in which
one can inherit from another. So it is advisable to have a basic methodology that
contains all elements that can be used in any situation, as well as methodologies
for very special situations. For example, we have a methodology that is well suited
for offshore development projects (our OCSD methodology). Figure A.6 shows
the complete interrelationships.
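The inheritance between methodologies can be sketched as follows. The building-block names and the OCSD content shown are illustrative assumptions only.

```python
# Sketch of methodologies inheriting building blocks from a basic methodology.
# A derived methodology (e.g. for offshore projects) adds situation-specific
# building blocks on top of everything the basic methodology provides.

class Methodology:
    def __init__(self, name, blocks, parent=None):
        self.name = name
        self.own_blocks = set(blocks)
        self.parent = parent

    def blocks(self):
        # A methodology inherits all building blocks of its parent.
        inherited = self.parent.blocks() if self.parent else set()
        return inherited | self.own_blocks

basic = Methodology("Basic", {"specification method", "review checklist"})
ocsd = Methodology("OCSD (offshore)", {"distributed handover protocol"},
                   parent=basic)
print(sorted(ocsd.blocks()))
# ['distributed handover protocol', 'review checklist', 'specification method']
```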
Step 3: Definition of clusters
Figure A.6: The Metamodel of Quasar 3.0: The Ontology

On the basis of the above characteristics we build clusters on the methodology
side as well as on the system and process model side. Again, the sub-steps are to
be taken in close interaction, because the results must produce a common
understanding; otherwise the next step will not be possible. In any case it is
helpful not to have too many different types, because otherwise the maintenance
cost of the framework will explode and the communication to the colleagues
involved will probably not work. Therefore, simple pictures help more than a
very elaborate concept.
Step 4: Matching system types to methodical building blocks and process models
In this step, a matrix is built up that correlates system types, methodologies
and process models. Typically the mapping will not be mutually exclusive, as one
method may help for more than one system type.
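Such a matrix can be sketched as a simple lookup. All entries below are invented examples, chosen to show that the mapping need not be mutually exclusive.

```python
# Sketch of the step-4 matrix correlating system types and process models
# with methodical building blocks. All entries are hypothetical examples.

matrix = {
    ("short-lived system", "agile"): {"lightweight specification", "test-first"},
    ("long-lived system", "plan-driven"): {"full specification", "test-first"},
}

def building_blocks(system_type: str, process_model: str) -> set:
    return matrix.get((system_type, process_model), set())

# The mapping is not mutually exclusive: "test-first" helps for
# more than one system type.
shared = (building_blocks("short-lived system", "agile")
          & building_blocks("long-lived system", "plan-driven"))
print(shared)  # {'test-first'}
```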
It is quite clear that the definition of the framework is not static: it has
to be revised carefully if major changes in the assumptions (about methodology,
process models and system criteria) occur. On the other hand, this makes clear
that the framework must not be too elaborate, in order to guarantee a reasonable
degree of sustainability.
A.2.2 Usage of the Framework
After defining the framework, we can define a selection method that helps to find
the right elements for each project. Analogous to the first version of the
process model selection tool described above, it is taken into account here, too,
that a concrete system may not fulfil all characteristics of the pre-defined
system types.
Step 5: Choice of the project type
Based on the characteristics defined for determining the system type, it becomes
directly apparent in which aspects the concrete project does not fit. Therefore, it
seems helpful to define one major system type for the system and to make the
deviations explicit, so that the whole project team can stick to this understanding.
Step 6: Choice of the methodology and process model
In the last step, the complete set of methodical building blocks and the software
development process model are defined. Here it has to be ensured that the
necessary risk-reduction measures are taken in case the system to be built
differs from the standard.
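Steps 5 and 6 can be sketched together as follows: the project is assigned the best-fitting pre-defined system type, and the deviations are made explicit so that risk-reduction measures can be attached to them. The system types and characteristics below are invented examples.

```python
# Sketch of steps 5 and 6: choose the major system type by best overlap
# of characteristics, and make the deviations explicit. All system types
# and characteristics are hypothetical examples.

system_types = {
    "online campaign system": {"short lifetime", "low criticality"},
    "accounting system": {"long lifetime", "high criticality"},
}

def choose_type(project_characteristics: set):
    # Major system type = largest overlap with the project's characteristics;
    # whatever does not match is recorded as an explicit deviation.
    name, chars = max(system_types.items(),
                      key=lambda item: len(item[1] & project_characteristics))
    deviations = project_characteristics - chars
    return name, deviations

name, deviations = choose_type({"long lifetime", "medium criticality"})
print(name)        # accounting system
print(deviations)  # {'medium criticality'}
```

The explicit deviations are exactly the points where the project differs from the standard and where risk-reduction measures are needed.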
Bibliography
[1] APM Group Ltd. ITIL homepage. URL: http://www.itil-officialsite.com/
[cited December 2011].
[2] Ahmed Awad, Gero Decker, and Mathias Weske. Efficient compliance
checking using BPMN-Q and temporal logic. In Proceedings of the 6th
International Conference on Business Process Management, BPM ’08,
pages 326–341, Berlin, Heidelberg, 2008. Springer-Verlag.
[3] B. Boehm and R. Turner. Balancing Agility and Discipline: A Guide for the
Perplexed. Addison-Wesley Longman Publishing Co., Inc. Boston, MA,
USA, 2003.
[4] S. Brinkkemper. Method engineering: engineering of information systems
development methods and tools. In Information and Software Technology 38
(4), pages 275–280, 1996.
[5] Capgemini. Architecture and the Integrated Architecture Framework.
Capgemini, 2006. URL: http://www.capgemini.com/insights-and-resources/by-publication/enterprise_business_and_it_architecture_and_the_integrated_architecture_framework/
[cited December 2011].
[6] Capgemini. Architecture Guide. Capgemini, internal, 2010.
[7] Capgemini. The Engagement Manager’s Handbook. Capgemini, internal,
2011.
[8] Capgemini. From Train to Scouter. Capgemini, 2011. URL:
http://www.capgemini.com/technovision [cited December 2011].
[9] Capgemini Global Architects Community. Integrated Architecture
Framework Version 4.5 Reference Manual. Capgemini, 2009. URL:
http://www.capgemini.com/insights-and-resources/by-publication/enterprise_business_and_it_architecture_and_the_integrated_architecture_framework/
[cited December 2011].
[10] Philip B. Crosby. Quality is free. McGraw-Hill Book Company, New York,
1979.
[11] Tobias Demandt. Evaluierung von Test First und Test Driven Development
für den Einsatz in Capgemini sd&m Projekten (Bachelor Thesis).
Technische Universität München, München, August 2008.
[12] G. Engels, A. Hess, B. Humm, O. Juwig, M. Lohmann, J.-P. Richter,
M. Voß, and J. Willkomm. Quasar Enterprise. dpunkt. verlag, Heidelberg,
Germany, 2008.
[13] G. Engels, A. Hofmann, and et al. Quasar 2.0: Software Engineering
Patterns. Capgemini sd&m AG, internal, Munich, Germany, 2009.
[14] G. Engels and M. Kremer. Situational Software Engineering: Ein
Rahmenwerk für eine situationsgerechte Auswahl von
Entwicklungsmethoden und Vorgehensmodellen. In H.-U. Heiß, P. Pepper,
H. Schlinghoff, and J. Schneider, editors, Informatik 2011 – Informatik
schafft Communities, Berlin; Lecture Notes in Informatics (LNI) Proceedings Series of the Gesellschaft für Informatik, Volume P-192, page
469, Bonn, 2011. Gesellschaft für Informatik.
[15] G. Engels and St. Sauer. A meta-method for defining software engineering
methods. In G. Engels, C. Lewerentz, W. Schäfer, A. Schürr, and
B. Westfechtel, editors, Graph Transformations and Model-Driven
Engineering, LNCS 5765, pages 411–440, 2010.
[16] G. Engels, St. Sauer, and Chr. Soltenborn. Unternehmensweit verstehen -
unternehmensweit entwickeln: Von der Modellierungssprache zur
Softwareentwicklungsmethode. Informatik-Spektrum, 31(5), pages
451–459, 2008.
[17] M. Heinemann and G. Engels. Auswahl projektspezifischer
Vorgehensstrategien. In O. Linssen, T. Greb, M. Kuhrmann, D. Lange, and
R. Höhn, editors, Integration von Vorgehensmodellen und
Projektmanagement, pages 132–142. Shaker Verlag, 2010.
[18] Brian Henderson-Sellers and Jolita Ralyté. Situational method engineering:
State-of-the-art review. J. UCS, 16(3):424–478, 2010. URL:
http://www.jucs.org/jucs_16_3/situational_method_engineering_state.
[19] M. Hohenegger. A pattern-based approach to constructive and analytical
quality assurance of software configuration management (Master Thesis).
Technische Universität München, München, June 2011.
[20] J. Hohwiller and D. Schlegel. Software configuration management in the
context of bpm and soa. In H.-U. Heiß, P. Pepper, H. Schlinghoff, and
J. Schneider, editors, Informatik 2011 – Informatik schafft Communities;
Lecture Notes in Informatics (LNI) - Proceedings Series of the Gesellschaft
für Informatik, Volume P-192, Bonn, 2011. Gesellschaft für Informatik.
[21] J. Hohwiller, D. Schlegel, G. Grieser, and Y. Hoekstra. Integration of bpm
and brm. In Proc. 3rd International Workshop and Practitioner Day on
BPMN, Berlin, Heidelberg, 2011. Springer.
[22] IBM Corporation – Software Group. IBM™ Rational Unified Process™ ,
2007. URL: http://www-01.ibm.com/software/awdtools/rup/ [cited
December 2011].
[23] I. Jacobson, G. Booch, and J. Rumbaugh. The Unified Software
Development Process. Addison-Wesley Professional, 1999.
[24] P. Kruchten. The rational unified process: an introduction, Third Edition.
Addison-Wesley – Pearson Education, 2004.
[25] Michael Mlynarski and Andreas Seidl. A collaborative test design approach
for unit tests in large scale agile projects (talk). In imbus Software-QS-Tag,
Nürnberg, November 2010.
[26] OMG. Software and Systems Process Engineering Metamodel Specification
(SPEM), Version 2.0, 2008. URL: http://www.omg.org/spec/SPEM/2.0/
[cited December 2011].
[27] The Open Group. TOGAF Version 9, 2009. URL:
http://www.opengroup.org/togaf/ [cited December 2011].
[28] Project Management Institute. PMI homepage. URL: http://www.pmi.org/
[cited December 2011].
[29] A. Rausch and M. Broy. Das V-Modell XT – Grundlagen, Erfahrungen und
Werkzeuge. dpunkt. verlag, Heidelberg, Germany, 2007.
[30] F. Salger, G. Engels, and A. Hofmann. Assessments in global software
development: A tailorable framework for industrial projects. In W. Visser
and I. Krüger, editors, Proc. ACM/IEEE 32nd Intern. Conf. on Software
Engineering, SE in Practice Track Cape Town, South Africa (ICSE’10), Vol.
2, pages 29–38. ACM, NY, USA, 2010.
[31] J. Scott, J. Brannan, G. Giesler, M. Ingle, S. Jois, et al. 828-2005 - IEEE
Standard for Software Configuration Management Plans. IEEE, 2005. URL:
http://standards.ieee.org/findstds/standard/828-2005.html [cited December
2011].
[32] J. Siedersleben. Moderne Softwarearchitektur: Umsichtig planen, robust
bauen mit Quasar. dpunkt. verlag, Heidelberg, Germany, 2004.
[33] Technische Universität München. QUAMOCO homepage. URL:
http://www.quamoco.de/ [cited December 2011].
[34] J. van’t Wout, M. Waage, H. Hartman, M. Stahlecker, and A. Hofman. The
Integrated Architecture Framework Explained. Springer, 2010.
[35] R.J. Welke and K. Kumar. Method engineering: a proposal for
situation-specific methodology construction. In Cotterman and Senn,
editors, Systems Analysis and Design: A Research Agenda, pages 257–268.
Wiley, Chichester, 1992.
Contact:
Dr. Marion Kremer
Capgemini
CSD Research
Berliner Straße 76
63065 Offenbach
marion.kremer@capgemini.com
www.de.capgemini.com
CSD Research is part of the Capgemini Group’s Technology Services unit
About Capgemini
With around 120,000 people in 40 countries, Capgemini is one of the world’s
foremost providers of consulting, technology and outsourcing services. The Group
reported 2011 global revenues of EUR 9.7 billion. Together with its clients,
Capgemini creates and delivers business and technology solutions that fit their
needs and drive the results they want. A deeply multicultural organization,
Capgemini has developed its own way of working, the Collaborative Business
Experience™, and draws on Rightshore®, its worldwide delivery model.

Learn more about us at www.capgemini.com
Rightshore® is a trademark belonging to Capgemini.
Capgemini Deutschland Holding GmbH, Kurfürstendamm 21, D-10719 Berlin.
The information contained in this document is proprietary. Copyright © 2012 Capgemini. All rights reserved.