Modellbasierte Entwicklung eingebetteter Systeme IV
Tagungsband
Dagstuhl-Workshop MBEES:
Modellbasierte Entwicklung
eingebetteter Systeme IV
Model-Based Development of Embedded Systems
7. – 9.4.2008
Informatik-Bericht
2008-02
TU Braunschweig
Institut für Programmierung und Reaktive Systeme
Technische Universität Braunschweig
Mühlenpfordtstraße 23
D-38106 Braunschweig
Organisationskomitee
Holger Giese, Universität Potsdam
Michaela Huhn, TU Braunschweig
Ulrich Nickel, Hella KGaA Hueck&Co
Bernhard Schätz, TU München
Programmkomitee
Ulrich Freund, ETAS GmbH
Heiko Dörr, Carmeq GmbH
Hardi Hungar, OFFIS e.V.
Torsten Klein, Carmeq GmbH
Stefan Kowalewski, RWTH Aachen
Tiziana Margaria, Univ. Potsdam
Oliver Niggemann, dSPACE GmbH
Ralf Pinger, Siemens AG
Bernhard Rumpe, TU Braunschweig
Holger Schlingloff, Fraunhofer FIRST
Andy Schürr, Univ. Darmstadt
Albert Zündorf, Univ. Kassel
Inhaltsverzeichnis
Automatisiertes Testen mit Message Sequence Charts (MSCs)
Andrea Osterloh, Oscar Slotosch.............................................................1
View-Centric Modeling of Automotive Logical Architectures
Hans Grönniger, Jochen Hartmann, Holger Krahn, Stefan Kriebel, Lutz
Rothhardt, Bernhard Rumpe....................................................................3
Testverfahren für Elektronik und Embedded Software in der
Automobilentwicklung – eine Übersicht
Gerd Baumann, Michael Brost...............................................................13
Durchgehende Systemverifikation im Automotiven Entwicklungsprozess
Oliver Niggemann, Rainer Otterbach.....................................................20
Modeling Guidelines and Model Analysis Tools in Embedded Automotive
Software Development
Ingo Stürmer, Christian Dziobek, Hartmut Pohlheim..............................28
Integrating Timing Aspects in Model- and Component-Based Embedded
Control System Development for Automotive Applications
Patrick Frey, Ulrich Freund.....................................................................40
Clone Detection in Automotive Model-Based Development
Florian Deissenboeck, Benjamin Hummel, Elmar Juergens, Bernhard
Schätz, Stefan Wagner, Jean-Francois Girard, Stefan Teuchert...........57
Modellbasierte Softwareentwicklung mit SCADE in der
Eisenbahnautomatisierung
Stefan Milius, Uwe Steinke.....................................................................68
Tool Support for Developing Advanced Mechatronic Systems: Integrating the
Fujaba Real-Time Tool Suite with CAMeL-View
Stefan Henkler, Martin Hirsch.................................................................78
Composition of Model-based Test Coverage Criteria
Mario Friske, Bernd-Holger Schlingloff, Stephan Weißleder..................87
Testbasierte Entwicklung von Steuergeräte-Funktionsmodellen
Frank Tränkle, Robert Bechtold, Stefan Harms......................................95
A Service-Oriented Approach to Failure Management
Vina Ermagan, Claudiu Farcas, Emilia Farcas, Ingolf H. Krüger,
Massimiliano Menarini........................................................................102
Dagstuhl-Workshop MBEES:
Modellbasierte Entwicklung eingebetteter Systeme IV
(Model-Based Development of Embedded Systems)
Die grundsätzlichen Vorteile einer intensiven Nutzung von Modellen in der Softwareentwicklung sind unter den Schlagworten „Model-Driven Architecture“ (MDA) und „Model Driven Engineering“ (MDE) ausführlich dargestellt worden. Auch in der industriellen Anwendung werden die Vorzüge inzwischen sehr wohl gesehen. Gerade in der Entwicklung hochwertiger, eingebetteter Systeme werden domänenspezifische Modellierungswerkzeuge zunehmend nicht nur in einzelnen Phasen der Prototypentwicklung, sondern auch zur Seriencode-Entwicklung eingesetzt (z.B. im Automotive oder Avionic Software Engineering). Dennoch sind viele Ansätze in der Praxis noch isoliert: Modelle werden in einer oder wenigen Entwicklungsphasen für eingeschränkte Systemklassen eingesetzt.

Diese Einschränkungen werden in den Beiträgen des diesjährigen Workshops von den unterschiedlichsten Seiten adressiert: Die Durchgängigkeit von der Modellierung zur Codeerzeugung, die Erweiterung von Architekturmodellen um Echtzeitaspekte und weitere für eingebettete Systeme relevante extrafunktionale Eigenschaften sowie die automatische Modellanalyse zur Qualitätssicherung oder als Unterstützung bei der Evolution werden thematisiert. Auf dem Weg zu einer durchgängigen Verwendung von Modellen von den Anforderungen zur Implementierung und darüber hinaus zur Validierung und Verifikation jedes Entwurfsschritts wird damit das nächste Etappenziel in Angriff genommen.

Ausgangspunkt der MBEES-Workshopreihe war und ist die Feststellung, dass sich die Modelle, die in einem modellbasierten Entwicklungsprozess eingesetzt werden, an der Problem- anstatt der Lösungsdomäne orientieren müssen. Dies bedingt einerseits die Bereitstellung anwendungsorientierter Modelle (z.B. MATLAB/Simulink-artige für regelungstechnische Problemstellungen, Statechart-artige für reaktive Anteile) und ihrer zugehörigen konzeptuellen (z.B. Komponenten, Signale, Nachrichten, Zustände) und semantischen Aspekte (z.B. synchroner Datenfluss, ereignisgesteuerte Kommunikation). Andererseits bedeutet dies auch die Abstimmung auf die jeweilige Entwicklungsphase, mit Modellen von der Anwendungsanalyse (z.B. Beispielszenarien, Schnittstellenmodelle) bis hin zur Implementierung (z.B. Bus- oder Task-Schedules, Implementierungstypen). Für eine durchgängige modellbasierte Entwicklung ist daher im Allgemeinen die Verwendung eines Modells nicht ausreichend, sondern der Einsatz einer Reihe von abgestimmten Modellen für Sichten und Abstraktionen des zu entwickelnden Systems (z.B. funktionale Architektur, logische Architektur, technische Architektur, Hardware-Architektur) nötig. Durch den Einsatz problem- statt lösungszentrierter Modelle kann in jedem Entwicklungsabschnitt von unnötigen Festlegungen abstrahiert werden.

Dafür geeignete Modell-Arten sind weiterhin in Entwicklung und werden in Zukunft immer öfter eingesetzt. Dennoch ist noch vieles zu tun, speziell beim Bau effizienter Werkzeuge, bei der Optimierung der im Einsatz befindlichen Sprachen und bei der Schulung der Softwareentwickler in diesem neuen Entwicklungsparadigma. Diese neuen Modell-Arten und ihre Werkzeuge werden die Anwendung analytischer und generativer Verfahren ermöglichen und damit bereits in naher Zukunft eine effiziente Entwicklung hochqualitativer Software erlauben. Weiterhin sind im Kontext der modellbasierten Entwicklung viele, auch grundlegende Fragen offen, insbesondere im Zusammenhang mit der Durchgängigkeit im Entwicklungsprozess und der Evolution langlebiger Modelle.

Die in diesem Tagungsband zusammengefassten Papiere stellen zum Teil gesicherte Ergebnisse, Work-in-Progress, industrielle Erfahrungen und innovative Ideen aus diesem Bereich zusammen und erreichen damit eine interessante Mischung theoretischer Grundlagen und praxisbezogener Anwendung. Genau wie bei den ersten drei, im Januar 2005, 2006 und 2007 erfolgreich durchgeführten Workshops sind damit wesentliche Ziele dieses Workshops erreicht:

- Austausch über Probleme und existierende Ansätze zwischen den unterschiedlichen Disziplinen (insbesondere Elektro- und Informationstechnik, Maschinenwesen/Mechatronik und Informatik)
- Austausch über relevante Probleme in der Anwendung/Industrie und existierende Ansätze in der Forschung
- Verbindung zu nationalen und internationalen Aktivitäten (z.B. Initiative des IEEE zum Thema Model-Based Systems Engineering, GI-AK Modellbasierte Entwicklung eingebetteter Systeme, GI-FG Echtzeitprogrammierung, MDA-Initiative der OMG)

Die Themengebiete, für die dieser Workshop gedacht ist, sind fachlich sehr gut abgedeckt, auch wenn sie sich auch dieses Jahr (mit Ausnahmen) auf den automotiven Bereich konzentrieren. Sie fokussieren auf Teilaspekte modellbasierter Entwicklung eingebetteter Softwaresysteme. Darin enthalten sind unter anderem:

- Domänenspezifische Ansätze zur Modellierung von Systemen
- Durchgängiger Einsatz von Modellen
- Modellierung und Analyse spezifischer Eigenschaften eingebetteter Systeme (z.B. Echtzeit- und Sicherheitseigenschaften)
- Konstruktiver Einsatz von Modellen (Generierung)
- Modellbasierte Validierung und Verifikation
- Evolution von Modellen

Das Organisationskomitee ist der Meinung, dass mit den Teilnehmern aus Industrie, Werkzeugherstellern und der Wissenschaft die bereits seit 2005 erfolgte Community-Bildung erfolgreich weitergeführt wurde, und damit demonstriert, dass eine solide Basis zur Weiterentwicklung des sich langsam entwickelnden Felds modellbasierter Entwicklung eingebetteter Systeme existiert.

Die Durchführung eines erfolgreichen Workshops ist ohne vielfache Unterstützung nicht möglich. Wir danken daher den Mitarbeitern von Schloss Dagstuhl und natürlich unseren Sponsoren.

Schloss Dagstuhl im April 2008,
Das Organisationskomitee:
Holger Giese, Univ. Potsdam
Michaela Huhn, TU Braunschweig
Ulrich Nickel, Hella KGaA Hueck&Co
Bernhard Schätz, TU München

Mit Unterstützung von Matthias Hagner, TU Braunschweig
Das Arbeitsgebiet des Instituts für Programmierung
und Reaktive Systeme der TU Braunschweig sind
Methoden für die Spezifikation, den Entwurf, die Implementierung und die Validierung von Softwaresystemen, insbesondere für eingebettete Systeme. Der
gesamte Bereich des Softwareentwurfs für eingebettete Systeme von der eigenschaftsorientierten Beschreibung des gewünschten Verhaltens über die
modellbasierte Entwicklung und den Architekturentwurf bis zur Implementierungsebene und die Validierung wird durch exemplarische Forschungsarbeiten
abgedeckt. Dabei werden sowohl Grundlagenarbeiten als auch Anwendungsprojekte (zum Beispiel im
Bereich Automotive) durchgeführt.
Der Lehrstuhl für Software Systems Engineering der
TU München entwickelt in enger Kooperation mit industriellen Partnern modellbasierte Ansätze zur Entwicklung eingebetteter Software. Schwerpunkte sind
dabei die Integration ereignisgetriebener und zeitgetriebener Systemanteile, die Berücksichtigung sicherheitskritischer Aspekte, modellbasierte Testfallgenerierung und modellbasierte Anforderungsanalyse,
sowie der werkzeuggestützte Entwurf.
Ein besonderer Fokus der Arbeit des Fachgebiets "Systemanalyse und Modellierung" des Hasso-Plattner-Instituts liegt im Bereich der modellgetriebenen Softwareentwicklung für software-intensive Systeme.
Dies umfasst die UML-basierte Spezifikation von flexiblen Systemen mit Mustern und Komponenten, Ansätze zur formalen Verifikation dieser Modelle und
Ansätze zur Synthese von Modellen. Darüber hinaus
werden Transformationen von Modellen, Konzepte
zur Codegenerierung für Struktur und Verhalten für
Modelle und allgemein die Problematik der Integration von Modellen bei der modellgetriebenen Softwareentwicklung betrachtet.
Als international tätiger Automobilzulieferer begründet
die Hella KGaA Hueck & Co. ihre globale Wettbewerbsfähigkeit mit einer klaren Qualitätsstrategie, einem weltweiten Netzwerk und der Innovationskraft
der rd. 25.000 Mitarbeiterinnen und Mitarbeiter. Licht
und Elektronik für die Automobilindustrie und automobile Produkte für Handel und Werkstätten sind
unsere Kerngeschäftsfelder. Mit einem Umsatz von
3,7 Milliarden Euro im Geschäftsjahr 2006/2007 ist
unser Unternehmen ein weltweit anerkannter Partner
der Automobilindustrie und des Handels.
Die Validas AG ist ein Beratungsunternehmen im Bereich Software-Engineering für eingebettete Systeme.
Die Validas AG bietet Unterstützung in allen Entwicklungsphasen, vom Requirements-Engineering bis
zum Abnahmetest. Die Auswahl und Einführung qualitätssteigernder Maßnahmen folgt dabei den Leitlinien
modellbasierter Entwicklung, durchgängiger Automatisierung und wissenschaftlicher Grundlagen.
Innerhalb der Gesellschaft für Informatik e.V. (GI)
befasst sich eine große Anzahl von Fachgruppen explizit mit der Modellierung von Software- bzw. Informationssystemen. Der erst neu gegründete Querschnittsfachausschuss Modellierung der GI bietet den
Mitgliedern dieser Fachgruppen der GI - wie auch
nicht organisierten Wissenschaftlern und Praktikern - ein Forum, um gemeinsam aktuelle und zukünftige
Themen der Modellierungsforschung zu erörtern und
den gegenseitigen Erfahrungsaustausch zu stimulieren.
Schloss Dagstuhl wurde 1760 von dem damals regierenden Fürsten Graf Anton von Öttingen-Soetern-Hohenbaldern erbaut. Nach der französischen Revolution und der Besetzung durch die Franzosen 1794 war
Dagstuhl vorübergehend im Besitz eines Hüttenwerkes in Lothringen. 1806 wurde das Schloss mit den
zugehörigen Ländereien von dem französischen Baron Wilhelm de Lasalle von Louisenthal erworben.
1959 starb der Familienstamm der Lasalle von Louisenthal in Dagstuhl mit dem Tod des letzten Barons
Theodor aus. Das Schloss wurde anschließend von
den Franziskus-Schwestern übernommen, die dort
ein Altenheim errichteten. 1989 erwarb das Saarland
das Schloss zur Errichtung des Internationalen Begegnungs- und Forschungszentrums für Informatik.
Das erste Seminar fand im August 1990 statt. Jährlich kommen ca. 2600 Wissenschaftler aus aller Welt
zu 40-45 Seminaren und vielen sonstigen Veranstaltungen.
Automatisiertes Testen mit Message Sequence Charts
(MSCs)
Andrea Osterloh, Oscar Slotosch
Validas AG
Arnulfstr. 27
80335 München
www.validas.de
andrea.osterloh@validas.de
oscar.slotosch@validas.de
Wir präsentieren einen modellbasierten Ansatz zum automatisierten Testen anhand von
Testspezifikationen in Message Sequence Charts (MSCs). Die im ITU-T Standard Z.120
definierte Sprache MSC eignet sich zur Beschreibung von nachrichtenbasierten Systemen
und zeichnet sich vor allem durch die für den Leser sehr intuitive Darstellung von
Abläufen und Verhalten aus. Auch die Möglichkeit, den Testablauf durch Highlevel MSCs (HMSC) zu strukturieren und damit die Tests aus einer Art Zustandsautomaten heraus auszuführen, macht diese Sprache für die Testspezifikation interessant. Mit den
Elementen dieses ITU-Standards lassen sich Tests unabhängig von einer
Programmiersprache oder einer Schnittstellenbeschreibungssprache definieren. Auf
einem solchen Abstraktionsgrad definierte Testspezifikationen eignen sich deshalb für
den Einsatz auf mehreren Testebenen.
In einem Entwicklungsprozess, in dem mehrere Testebenen durchlaufen werden, wie es im automotiven Bereich der Fall ist, besteht Bedarf an der Wiederverwendbarkeit von standardisierten Spezifikationen. Es ist daher erstrebenswert, dass ein und dieselbe Testspezifikation sowohl zum Test der Spezifikation, beim Modul-Test der Software, als auch beim Integrationstest z.B. auf einem Steuergerät verwendet werden kann. In dem Vortrag wird ein solcher Ansatz vorgestellt.
Mit ihrem MSC2C Codegenerator hat die Validas AG ein Tool entwickelt, das MSC-Testspezifikationen in C-Code übersetzt. Aus den MSCs wird der eigentliche Test
generiert, während aus den HMSCs eine Testablaufsteuerung für die Tests erzeugt wird.
Diese enthält eine intelligente Kombination aus ausführbaren Tests, Test-Coverage und
Benutzerprioritäten.
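Die folgende Skizze deutet an, wie eine solche Testablaufsteuerung prinzipiell aussehen könnte; sie ist bewusst stark vereinfacht, gibt nicht den tatsächlich von MSC2C generierten Code wieder, und alle Bezeichner (z.B. test_case_t, run_test_suite) sind frei gewählt.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetischer, stark vereinfachter Testfall-Deskriptor. */
typedef struct {
    const char *name;       /* Name des aus einem MSC generierten Tests  */
    int         priority;   /* vom Benutzer vergebene Prioritaet         */
    bool        executed;   /* wurde der Test bereits ausgefuehrt?       */
    bool      (*run)(void); /* eigentlicher, generierter Testcode        */
} test_case_t;

/* Waehlt den noch nicht ausgefuehrten Test mit der hoechsten Prioritaet. */
static test_case_t *select_next(test_case_t *tests, int n)
{
    test_case_t *best = NULL;
    for (int i = 0; i < n; i++) {
        if (!tests[i].executed &&
            (best == NULL || tests[i].priority > best->priority)) {
            best = &tests[i];
        }
    }
    return best;
}

/* Einfache Ablaufsteuerung: fuehrt Tests aus, bis der gewuenschte Anteil
 * ausgefuehrter Tests (als einfaches Coverage-Mass) erreicht ist.        */
void run_test_suite(test_case_t *tests, int n, double target_coverage)
{
    int executed = 0;
    test_case_t *next;
    while ((double)executed / n < target_coverage &&
           (next = select_next(tests, n)) != NULL) {
        bool ok = next->run();
        next->executed = true;
        executed++;
        printf("%s: %s\n", next->name, ok ? "PASS" : "FAIL");
    }
}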
Von der Vielzahl der MSC-Elemente stellen Nachrichten die zentrale Einheit der
Testspezifikation dar, denn sie beschreiben die Kommunikation zwischen Testtreiber und
System under Test (SuT). Der Schlüssel zu der Verwendbarkeit einer MSC-Testspezifikation auf unterschiedlichen Testebenen liegt daher in der Umsetzung dieser
MSC-Nachrichten gemäß der Zielumgebung. Zur näheren Spezifikation der Nachrichten
kann ein sogenannter Nachrichtenkatalog verwendet werden. In diesem XML-basierten
Spezifikationszusatz können für alle Nachrichten z.B. Datentypen der Parameter und
Rückgabewerte definiert werden. Dies sind essentielle Informationen für die Umsetzung
der Schnittstelle zwischen Testtreiber und SuT. Der Nachrichtenkatalog kann auch in den
MSC-Editor eingebunden und von dort direkt zur Spezifikation der Nachrichten
verwendet werden. Dies erleichtert die Erstellung valider MSC-Dokumente.
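Die folgende Skizze zeigt, wie ein Eintrag eines solchen Nachrichtenkatalogs auf der C-Seite des Testtreibers repräsentiert werden könnte; Struktur und Bezeichner sind frei angenommen und nicht dem Werkzeug entnommen.

#include <stddef.h>

/* Angenommene Datentypen, wie sie ein Nachrichtenkatalog definieren koennte. */
typedef enum { TYPE_UINT8, TYPE_UINT16, TYPE_INT32, TYPE_FLOAT, TYPE_VOID } param_type_t;

/* Beschreibung einer MSC-Nachricht: Name, Parametertypen, Rueckgabetyp. */
typedef struct {
    const char   *name;          /* Nachrichtenname aus dem MSC   */
    param_type_t  params[4];     /* Datentypen der Parameter      */
    size_t        param_count;
    param_type_t  return_type;   /* Datentyp des Rueckgabewerts   */
} msc_message_desc_t;

/* Beispielhafte Katalogeintraege (frei erfunden). */
static const msc_message_desc_t catalog[] = {
    { "KomfortSchliessen1", { TYPE_UINT8 }, 1, TYPE_VOID },
    { "VehicleSpeed",       { TYPE_FLOAT }, 1, TYPE_VOID },
};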
Die Umsetzungen der Testtreiber für die einzelnen Zielumgebungen beinhalten folgende
Charakteristika:
1. Test einer Matlab/Simulink Schnittstellenspezifikation
Beim Test einer Matlab/Simulink Spezifikation wird der gesamte Testtreiber in eine
S-Function eingebunden. Die S-Function wird mit einem Modul des MSC2C
Generators erzeugt, wobei der Nachrichtenkatalog dazu verwendet wird, alle beim
SuT eingehenden Nachrichten als Outports der S-Function umzusetzen und alle vom
SuT zum Testtreiber laufenden Nachrichten als Inports der S-Function zu verwenden.
Das Modul MSC2Matlab beinhaltet Funktionen sowohl zum Generieren der S-Function als auch zum Verbinden der Ein- und Ausgänge der S-Function mit dem zu
testenden Simulink Modell.
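Die folgende, stark verkürzte Skizze zeigt den typischen Rahmen einer in C geschriebenen Level-2-S-Function, wie ihn ein Generator erzeugen könnte; die konkrete Port-Belegung und die Testlogik sind hier nur beispielhaft angenommen.

#define S_FUNCTION_NAME  msc_testdriver   /* frei gewaehlter Name */
#define S_FUNCTION_LEVEL 2
#include "simstruc.h"

/* Ein Inport (Nachrichten vom SuT), ein Outport (Nachrichten an das SuT). */
static void mdlInitializeSizes(SimStruct *S)
{
    ssSetNumSFcnParams(S, 0);
    if (!ssSetNumInputPorts(S, 1)) return;
    ssSetInputPortWidth(S, 0, 1);
    ssSetInputPortDirectFeedThrough(S, 0, 1);
    if (!ssSetNumOutputPorts(S, 1)) return;
    ssSetOutputPortWidth(S, 0, 1);
    ssSetNumSampleTimes(S, 1);
}

static void mdlInitializeSampleTimes(SimStruct *S)
{
    ssSetSampleTime(S, 0, INHERITED_SAMPLE_TIME);
    ssSetOffsetTime(S, 0, 0.0);
}

/* Pro Simulationsschritt: Eingaenge lesen, Testtreiber-Schritt rechnen,
 * Stimuli auf den Outport legen (hier nur als Platzhalter angedeutet).   */
static void mdlOutputs(SimStruct *S, int_T tid)
{
    InputRealPtrsType u = ssGetInputPortRealSignalPtrs(S, 0);
    real_T           *y = ssGetOutputPortRealSignal(S, 0);
    y[0] = *u[0];  /* Platzhalter fuer die eigentliche Testlogik */
}

static void mdlTerminate(SimStruct *S) {}

#ifdef MATLAB_MEX_FILE
#include "simulink.c"
#else
#include "cg_sfun.h"
#endif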
2. Software Modul-Test
Beim Software-Modultest werden die beim SuT eingehenden MSC-Nachrichten
direkt als Funktionsaufrufe des zu testenden Codes umgesetzt. Für vom SuT
ausgehende Nachrichten werden Stubs generiert, welche mitprotokollieren, wann
und mit welchen Parametern die Funktionen aufgerufen werden.
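Eine denkbare Form eines solchen generierten Stubs zeigt die folgende C-Skizze; Funktions- und Typnamen sind frei gewählt und dienen nur der Veranschaulichung.

#include <stdint.h>
#include <stdio.h>

/* Protokolleintrag fuer einen Stub-Aufruf: Zeitpunkt und Parameter. */
typedef struct {
    uint32_t timestamp_ms;
    int      param;
} stub_call_t;

static stub_call_t set_door_state_log[64];   /* Beispielfunktion, frei erfunden */
static int         set_door_state_calls = 0;

extern uint32_t test_time_ms(void);          /* Zeitquelle des Testtreibers */

/* Generierter Stub fuer eine vom SuT ausgehende Nachricht/Funktion:
 * protokolliert, wann und mit welchem Parameter aufgerufen wurde.      */
void SetDoorState(int state)
{
    if (set_door_state_calls < 64) {
        set_door_state_log[set_door_state_calls].timestamp_ms = test_time_ms();
        set_door_state_log[set_door_state_calls].param        = state;
    }
    set_door_state_calls++;
    printf("SetDoorState(%d) um t=%lu ms\n", state, (unsigned long)test_time_ms());
}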
3. Integrationstest auf einem Steuergerät
Beim Test einer Funktionalität oder Schnittstelle auf einem Steuergerät werden die
beim SuT eingehenden MSC-Nachrichten als CAN-Nachrichten umgesetzt, welche
zum Steuergerät gesendet werden. Das erwartete Verhalten wird ebenfalls anhand
von vom SuT gesendeten CAN-Nachrichten abgeprüft. Manifestiert sich das
Verhalten allerdings in einer Variablenänderung auf dem Steuergerät, so muss die
Nachricht als eine CAN-Nachricht, welche eine CCP-Abfrage (CAN Calibration Protocol) der Variablen enthält, umgesetzt werden.
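Die folgende Skizze deutet die Umsetzung auf dieser Testebene an; can_send() und ccp_read_u8() stehen hier stellvertretend für die tatsächliche Bus- bzw. CCP-Anbindung, Identifier und Werte sind frei angenommen.

#include <stdbool.h>
#include <stdint.h>

/* Angenommene Schnittstelle zur CAN-Hardware bzw. zum CCP-Stack. */
bool can_send(uint32_t id, const uint8_t *data, uint8_t len);
bool ccp_read_u8(uint32_t address, uint8_t *value);

/* MSC-Nachricht "KomfortSchliessen1" als CAN-Botschaft an das Steuergeraet. */
bool send_komfort_schliessen1(void)
{
    uint8_t data[1] = { 0x01 };              /* Beispielinhalt, frei gewaehlt */
    return can_send(0x2A0u, data, 1u);       /* Beispiel-Identifier           */
}

/* Erwartetes Verhalten, das sich nur in einer Variablen manifestiert,
 * wird ueber eine CCP-Abfrage der Variablenadresse geprueft.            */
bool check_window_closed(uint32_t var_address)
{
    uint8_t value = 0u;
    if (!ccp_read_u8(var_address, &value)) {
        return false;                        /* Abfrage fehlgeschlagen            */
    }
    return value == 1u;                      /* 1 = Fenster geschlossen (Annahme) */
}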
Die Ergebnisse der Testausführung werden in einem Logfile mitprotokolliert. Mit Hilfe
des MSC2C Code-Generators kann aus diesem Logfile ein Protokoll-MSC erstellt
werden. Dieses Ergebnis-MSC protokolliert den Testablauf und stellt die Abweichungen
der Ausführung zur Testspezifikation heraus. Auf dieser Basis kann die Analyse der
Tests leicht durchgeführt werden.
View-Centric Modeling of Automotive Logical Architectures
Hans Grönniger1 , Jochen Hartmann2 , Holger Krahn1 ,
Stefan Kriebel2 , Lutz Rothhardt2 , and Bernhard Rumpe1
1 Institut für Software Systems Engineering, TU Braunschweig, Germany
2 BMW Group, München, Germany
Abstract: Modeling the logical architecture is an often underestimated development
step to gain an early insight into the fundamental functional properties of an automotive system. An architectural description supports developers in making design decisions for further development steps like the refinement towards a software architecture
or the partitioning of logical functions onto ECUs and buses. However, due to the large size and complexity of the system and hence of the logical architecture, a good notation, method, and tooling are necessary. In this paper, we show how the logical architecture
can be modeled succinctly as function nets using a SysML-based notation. The usefulness for developers is increased by comprehensible views on the complete model to
describe automotive features in a self-contained way including their variants, modes,
and related scenarios.
1 Introduction
Developing automotive embedded systems is becoming a more and more complex task,
as the involved essential complexity steadily increases. This increase and the demand for
shorter development cycles enforce the reuse of artifacts from former development cycles
paired with the integration of new or changed features.
The AUTOSAR methodology [AUT] aims to achieve reuse by a loosely coupled component-based architecture with well-defined interfaces and standardized forms of interaction. The approach targets the software architecture level where the interface descriptions are already detailed and rich enough to derive a skeleton implementation (semi-)automatically.
Despite these advantages, software architectures are less useful in early design phases where
a good overview and understanding of the (logical) functions is essential for an efficient
adaptation of the system to new requirements. Detailed interface descriptions and technical aspects would overload such a description. In line with [Gie08], we argue that
the component-based approach has its difficulties when applied to automotive features that
highly depend on the cooperation of different functions. To comprehend the functionality
of features, it is essential to understand how functions interact to realize the feature. Complete component-like descriptions of single functions are less useful as they only describe
a part of a feature while a comprehensive understanding is necessary.
We identified the following problems with notations and tools proposed or used today:
• Tools that only provide full views of the system do not scale to a large number of
functions.
• Notations that have their roots in computer science are not likely to be accepted by
users with a different professional background.
• Under-specification and the abstraction from technical details are often excluded if
tools aim at full code generation.
Our contribution to the problem of modeling complex embedded automotive systems is
thus guided by the following goals:
• A comprehensible description of functions, their structure, behavior, and interactions with other functions should be supported.
• Interesting functional interrelations should be presentable in a way such that they
are comprehensible for all developers involved.
• The system shall be modeled from different viewpoints in a way that a developer
can concentrate on one aspect at a time and the conformance to the complete system can be checked automatically. In addition, the developer shall be supported in
understanding how a certain viewpoint is realized in the complete system.
In accordance with [vdB04, vdB06], we argue that function nets are a suitable notation for
describing the logical architecture of an automotive system. We further explain how views
for features including their modes and variants can help to ease the transition from requirements to a logical architecture. In addition to the aspects already explained in our
previous work [GHK+ 07, GHK+ 08], we extend the view-based approach to model scenarios with UML communication diagrams [OMG05] that are also consistent with the logical
architecture or other views.
The rest of the paper is structured as follows. In Section 2 function net architectures
and views are explained. Section 3 describes how this notation can be used within an
automotive development process. Section 4 focuses on scenarios which complement the
feature views to simplify the understanding and enhance the testability. Section 5 presents
related work and Section 6 concludes the paper.
2 Function Nets Architecture and Views
For modeling the logical architecture as function nets an appropriate notation has to be
found. We evaluated UML 2.0 [OMG05] and other UML derivatives like UML-RT [SGW94] and SysML [OMG06], which in general were found suitable for architecture and function net modeling [RS01, vdB04]. We favored SysML over other notations because it uses Systems Engineering terminology, which is more intuitive for people with different professional backgrounds than notations that have their roots in computer science. SysML internal block diagrams allow us, in contrast to UML composite structure diagrams, to model cross-hierarchy communication without using port delegation, which was helpful for modeling static architectures. A more detailed discussion can be found in [GHK+ 07].
We use a subset of SysML internal block diagrams to enable compact definitions and
decrease the learning effort for the notation. An internal block diagram (ibd) used as a
function net may only contain directed connectors to indicate the signal flow direction.
Multiplicities of blocks and connectors always have the value one and are therefore omitted. An example of a function net diagram can be found in Figure 1. It shows a simplified
complete function net ”CarComfort” which contains a fictitious model of a central locking
functionality. It evaluates the driver’s request and opens or closes the doors accordingly.
In addition, the doors close if the vehicle exceeds a certain speed limit (auto lock).
Figure 1: Part of an automotive function net
Syntactically, the function net is a valid SysML internal block diagram. In that example, we used three layers of hierarchy (top-level, block ”CLRequestProc” and, e.g., ”ButtonOn”). With the signal ”DriverRequestCL”, it also shows an example of cross-hierarchy
communication. Instantiation is also supported. In the example, there are two doors which
are instantiated by giving each block a name, in this case ”left” and ”right”. These two
blocks share their behavior but not their internal state. Instantiation is an important feature that allows a block to be reused multiple times, which also avoids redundant block definitions that are poorly maintainable [GHK+ 07].
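As a loose programming analogy (not part of the SysML notation itself), instantiation corresponds to one shared behavior operating on separate state records, e.g. in C:

#include <stdbool.h>

/* Each door instance has its own state ... */
typedef struct {
    bool locked;
} DoorState;

/* ... but all instances share the same behavior. */
void door_step(DoorState *self, bool cmd_close)
{
    self->locked = cmd_close;
}

int main(void)
{
    DoorState left = { false }, right = { false };  /* two instances            */
    door_step(&left, true);    /* closing the left door ...                     */
    door_step(&right, false);  /* ... does not affect the right one             */
    return left.locked && !right.locked ? 0 : 1;
}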
Views are also modeled as internal block diagrams indicated by the specific diagram use «view». In general, views focus on a certain aspect of the function net. By adding
a few specific properties, views can also model the environment and context of the considered aspect. Compared to the complete function net, a view may leave out blocks,
signals, or hierarchy information which is present in the related complete function net.
”Env(ironmental)” blocks and non-signal communication can be added to increase the understandability. The environmental elements refer to non-E/E elements that have a physical
counterpart. Non-signal communication is modeled by connectors that are connected to
special ports in which ”M” represents mechanical influence, ”H” hydraulics, and ”E” electrical interactions. Blocks marked with the stereotype ”ext(ernal)” are not central to the
view but are necessary to form an understandable model.
The example in Figure 2 shows that whole blocks from the complete function net have
been left out (block ”CLRequestProc”) and that the blocks ”CentralSettingsUnit” and ”VehicleState” have been included in the view to clarify where the signals ”AutoLockStatus”
and ”VehicleSpeed” originate. The physical door lock ”LockActuator” is shown in the figure as an environmental element (marked with the corresponding stereotype «env»).
Figure 2: View of the autolock functionality including external blocks and environment
As explained in [GHK+ 07, GHK+ 08], a view and a complete function net of the system are consistent if the following consistency conditions hold (a small sketch of how such a check can be automated follows the list).
1. Each block in a view not marked with the stereotype «env» must be part of the
logical architecture.
2. Views must respect whole-part relationships. If there is such a relationship between
functions in the view, it must also be present in the complete function net. Intermediate layers may be left out, though.
3. The other way round, if two functions are in a (possibly transitive) whole-part relationship in the complete function net and both are present in a view, they must have this relationship in the view, too.
4. Normal communication relationships (except those originating from M, E, or H
ports) shown in a view must be present in the logical architecture. If the view indicates that certain signals are involved in a communication they must be stated in the
architecture. If no signal is attached to a communication link in a view at least one
signal must be present in the architecture.
5. A communication relationship need not be drawn to the exact source or target; any superblock is sufficient if the exact source or target is omitted in the view.
One view on a function net is a specialization of another view if both are consistent to the
same complete function net and the following context condition holds.
6. The blocks and connectors in the specialization function net are a subset of the blocks and connectors shown in the referred function net.
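The following C sketch illustrates how, e.g., conditions 1 and 4 could be checked automatically. It is deliberately simplistic (the superblock relaxation of condition 5 is ignored) and all data structures and names are assumptions for illustration, not part of any tooling described here.

#include <stdbool.h>
#include <string.h>

#define MAX_NAME 32

typedef struct { char name[MAX_NAME]; bool env; } Block;
typedef struct { char src[MAX_NAME]; char dst[MAX_NAME]; } Connector;

typedef struct {
    Block      blocks[64];     int nblocks;
    Connector  connectors[64]; int nconnectors;
} FunctionNet;

static bool has_block(const FunctionNet *net, const char *name)
{
    for (int i = 0; i < net->nblocks; i++)
        if (strcmp(net->blocks[i].name, name) == 0) return true;
    return false;
}

static bool has_connector(const FunctionNet *net, const Connector *c)
{
    for (int i = 0; i < net->nconnectors; i++)
        if (strcmp(net->connectors[i].src, c->src) == 0 &&
            strcmp(net->connectors[i].dst, c->dst) == 0) return true;
    return false;
}

/* Condition 1: every non-env block of the view exists in the architecture.
 * Condition 4 (simplified): every connector of the view exists there, too. */
bool view_consistent(const FunctionNet *view, const FunctionNet *architecture)
{
    for (int i = 0; i < view->nblocks; i++)
        if (!view->blocks[i].env && !has_block(architecture, view->blocks[i].name))
            return false;
    for (int i = 0; i < view->nconnectors; i++)
        if (!has_connector(architecture, &view->connectors[i]))
            return false;
    return true;
}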
3 Function Nets and Views in the Automotive Development Process
The above described mechanisms to model the logical architecture as function nets and
to create views on the complete model can be used for a variety of purposes. Mainly the
views can be used to model an automotive feature in a self-contained way. The term feature denotes a functionality that is perceptible by customers (e.g., a braking system) or an
automotive engineer (e.g., system functions that affect the whole vehicle like diagnostics).
Further views which are usually specializations of a feature view are used to model the
variants and modes of that feature. The variants provide the same principle functionality
but are distinguishable by having different individual properties. The complete function
net includes all variants and through parameterization the intended variants are chosen.
Modes are used to show that features change their observable behavior in a distinct way
when certain conditions are fulfilled. Modes are described by a statechart where the state
names correspond to the different modes of operation, whereas the transitions are used to
describe the conditions which force a change of the mode of operation. As error degradation is a common example for using modes, views are then used to show the active
subsystems and leave the inactive subsystems out. Further details about the use of views
for modeling features, variants, and modes can be found in [GHK+ 08, GKPR08].
From a technical position, a view is consistent with a function net or view if the given
constraints hold. Especially, each developer decides on his/her own which external or environmental elements are necessary to form a self-contained description. In a company
where many developers concurrently work on a solution, this raises the question for modeling guidelines which standardize the use of additional elements and therefore simplify
cooperative development.
Figure 3 gives an overview of the development process. The requirements are captured textually in a distributed manner and form the basis for modeling features in a self-contained
way by using feature and other corresponding views. These models ease the transition to the
logical architecture mainly because the same notation is used and automatic checks for
conformance can be applied on the basis of the above explained consistency conditions.
The logical architecture can then be further refined towards an AUTOSAR software and
hardware architecture. The realization is then derived within the AUTOSAR process.
Please note that the figure concentrates on the structural view point only. This view point
forms the backbone of automotive software development, but has to be complemented by
behavioral descriptions on the one hand, and exemplary scenarios on the other hand. For
each of the view points, different diagrams exist that are related to the structural diagram
on each layer of abstraction.
The feature models themselves are a unit of reuse: When moving from one product line
to another the still relevant features of the old product line are determined and the necessary changes on requirements are made. The according changes are reflected in the
features views. Finally, the traceable connection between the feature views and the logical
architecture helps to define the relevant changes in the complete function net. During the
transition from one development step to another different design decisions are made. For
example, going from the logical architecture to the technical architecture the developer
has to determine the physical location of a logical function on a certain ECU. The reuse
of existing software from a former product line is often only possible if the function is not
mapped to another ECU with completely different properties. Therefore, annotations can
be added to the logical architecture or a feature view to pre-determine the result of the
placement decision. This restriction on the design space maximizes reuse.
Figure 3: Automotive development process
4 Using Views to Describe Scenarios
As described above, views can be used to model features in a self-contained way. Further
specializations on this view can be used to explain the according modes and variants. This
form of specification is supplemented by behavioral descriptions of the non-composed
blocks using an arbitrary form of description like statecharts, Matlab/Simulink models or
plain text depending on the chosen abstraction layer. These models aim at a complete description of the considered system or feature, or of their modes and variants. Nevertheless,
sometimes (e.g., for safety assessments) it is important to document how the system reacts
to certain external events or failures of subsystems in an exemplary fashion. In addition,
the scenarios help developers to understand the system better by showing representative
situations. For this kind of exemplary behavior we adopted the UML communication
diagram [OMG05]. In this kind of diagram the subsystems exchange signals in the enumerated order. The basic form uses another view on the corresponding complete, feature,
mode, or variant function net which includes only the active subsystems in the scenario.
Therefore, communication diagrams can be used on both abstraction layers, on the feature
level as scenarios on feature views and on the complete function net where scenarios for a
composed automotive system are modeled.
The modeled system may communicate by discrete events or by continuous signal flow.
Therefore, we extended the form of communication normally used in communication diagrams by allowing the following conditions, which help to describe continuous forms of interaction (a small evaluation sketch follows the list). The notation allows specifying multiple ranges for a certain signal, e.g., to specify that a signal is either invalid or greater than a certain value.
• The value of a signal s is larger than v: s > v
• The value of a signal s becomes larger than v: s >> v
• The value of a signal s has a certain value v : s == v
• The value of a signal s changes to a certain value v : s = v
• The value of a signal s is smaller than v: s < v
• The value of a signal s becomes smaller than v: s << v
For discrete signals also the following notation is useful:
• The value of a signal s changes from v to w: s : v −> w
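As a reading aid for how the "becomes" conditions could be evaluated against sampled signal traces, the following C sketch compares the previous and the current sample; it is only an illustration and not part of the notation definition.

#include <stdbool.h>

/* s >> v: the value of signal s becomes larger than v, i.e. it crosses
 * the threshold between the previous and the current sample.            */
bool becomes_larger(double prev, double curr, double v)
{
    return prev <= v && curr > v;
}

/* s > v: the value of s is (currently) larger than v. */
bool is_larger(double curr, double v)
{
    return curr > v;
}

/* s : v -> w: a discrete signal changes from value v to value w. */
bool changes_from_to(int prev, int curr, int v, int w)
{
    return prev == v && curr == w;
}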
An example for a scenario that is modeled by a communication diagram can be found in
Figure 4 with a corresponding variant view of the CentralLocking function. More details
on the variant mechanism used can be found in [GHK+ 08, GKPR08]. In the example
the signal “VehicleSpeed” exceeds the value of 10 km/h. Therefore, the block “EvalSpeed” changes its output value from “Open” to “Close”. After that, the output value
of the block “Arbiter” must be “Close” and therefore the doors of the car will close (not
modeled here). This interaction is modeled as CmdOpenClose == Close and not as
CmdOpenClose : Open -> Close because the Arbiter may already have closed
the doors by sending the output signal value “Close”, e.g., because of input values from
the not modeled input signal “AutoLockStatus”.
Figure 4: Variant view and a corresponding scenario
Scenarios as described above could also be modeled using sequence diagrams. Discussions with developers revealed that the similarity between the complete function net and the scenarios is helpful for understanding the considered system. However, sequence diagrams might be helpful to model complex interactions between few communication partners, whereas simple interactions between many communication partners can be modeled
more concisely in a communication diagram. Therefore, we allow using both, sequence
and communication diagrams, for modeling scenarios.
Nevertheless, this form of communication diagrams can easily be transformed to a sequence diagram with the same meaning. Therefore, extensions of sequence diagrams as explained in [Rum04] could also be used in communication diagrams. The main new
concepts are the following:
• Communications can be marked with the stereotype «trigger» to indicate that they trigger the scenario.
• The matching policy of the communication partners can be marked as either
– complete to indicate that all occurring communication is already shown in the
diagram. All additional communication that is observed in a system run is
interpreted as wrong behavior.
– visible to indicate that all communication between blocks shown in a diagram
is actually modeled. However, there may exist other blocks not shown in the
scenario that also communicate in a system run.
– free to indicate that arbitrary interactions may occur in addition to the explicitly modeled communications.
As described in [Rum04], these extensions can be used to automatically derive a test case
from the communication diagram (with the sequence diagram as an intermediate step).
The interactions in the sequence diagram are assumed to be ordered as they appear in the
diagram. Additional invariants could be used to model timing constraints in the scenarios.
The test cases use the triggers to invoke the actual implementation and an automaton that
checks if the system shows the correct behavior. This method can be applied only if there
is a traceable connection from the function nets to the actual realization. The connection is
used to translate the triggers, which are logical signals, to, e.g., messages in the realization. Vice versa, the observed messages in the realization are transformed back to the corresponding logical signal flow that is checked against the described interactions. If such an approach is technically too difficult, the scenarios can still be used for validating and testing the logical architecture in a simulation.
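To make the idea of a derived checking automaton more concrete, the following C sketch shows a minimal observer that verifies whether the interactions of a scenario occur in the modeled order; the matching policies are reduced to a single flag, and all identifiers are illustrative assumptions rather than the actual mechanism described in [Rum04].

#include <stdbool.h>
#include <string.h>

typedef enum { POLICY_COMPLETE, POLICY_FREE } MatchPolicy;

typedef struct {
    const char **expected;   /* interaction names in the modeled order */
    int          count;
    int          next;       /* index of the next expected interaction */
    MatchPolicy  policy;
    bool         failed;
} ScenarioObserver;

/* Feed one observed interaction (e.g. a logical signal) into the observer. */
void observe(ScenarioObserver *obs, const char *interaction)
{
    if (obs->failed || obs->next >= obs->count) return;
    if (strcmp(interaction, obs->expected[obs->next]) == 0) {
        obs->next++;                      /* matches the expected order      */
    } else if (obs->policy == POLICY_COMPLETE) {
        obs->failed = true;               /* unexpected communication: fail  */
    }                                     /* POLICY_FREE: ignore extras      */
}

/* The scenario passed if all expected interactions were seen in order. */
bool scenario_passed(const ScenarioObserver *obs)
{
    return !obs->failed && obs->next == obs->count;
}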
5 Related Work
Function net modeling with UML-RT is described in [vdB04]. We extended this approach by using SysML for modeling function nets and explained its advantages. We
supplement the approach by views that simplify the transition from requirements to logical
architectures in early design phases as they are able to model features and their variants,
modes, and also scenarios.
In [RFH+ 05, WFH+ 06] service oriented modeling of automotive systems is explained.
The service layer is similar to the modeling of features. Another example of an architecture description language that aims at supporting a development process from vehicle requirements to realization is being defined in the ATESST project [ATE] based on
EAST-ADL [EAS]. Both works do not explicitly provide support for modeling exemplary
scenarios. In addition, we explored how feature descriptions can benefit from modeling
the environment together with the feature. AML [vdBBRS02, vdBBFR03] considers abstraction levels from scenarios to implementation. Unlike our use of scenarios, scenarios
in [vdBBRS02] are employed in a top-down development process on a higher level of
abstraction out of which the logical architecture will later be developed.
In [DVM+ 05] the use of rich components is explained that employ a complex interface
description including non-functional characteristics. In that approach, scenarios could be
modeled as viewpoints of a component. In contrast to our approach rich components
focus less on the seamless transition from requirements to function nets but assume an
established predefined partitioning in components.
The AUTOSAR consortium [AUT] standardizes the software architecture of automotive systems and allows the development of interchangeable software components. One main problem of this approach is that software architectures are too detailed in early development phases where function nets are commonly accepted by developers. In addition, as also argued in [Gie08], the component-based approach has its difficulties when applied to
automotive features that highly depend on the cooperation of different functions.
6 Conclusion
In this paper, we summarized our approach for modeling the logical architecture of automotive systems using views. We regard views as suitable for describing automotive
features, but also their variants and modes. More information can be found in [GHK+ 07,
GHK+ 08]. In this paper we additionally introduced a scenario notation that is similar
to UML communication diagrams but is a view on a function net at the same time. The
syntactic proximity to the logical architecture simplifies the understanding of scenarios.
Scenarios can be used to document important use-cases of features. These use-cases contain information that help developers to understand and correctly implement the intended
functionality.
Apart from the scenarios, which show exemplary behavior, the approach described in this paper focuses on structural aspects only. This is mostly sufficient for the modeling of a
logical architecture because architectures mainly focus on structural properties and decomposition of functionality. For the self-contained modeling of features this approach
has to be extended with full behavioral specifications to gain its full usefulness.
When the proposed model types (complete function nets, views for features, modes, variants, and the described scenarios) are used to model an automotive system, a greater number of models have to be created and references between models have to be established. We plan to investigate existing model management strategies and adapt them to the needs of a model-based automotive development process in order to handle the large number of models.
References

[ATE] ATESST Project website http://www.atesst.org.

[AUT] AUTOSAR website http://www.autosar.org.

[DVM+ 05] Werner Damm, Angelika Votintseva, Alexander Metzner, Bernhard Josko, Thomas Peikenkamp, and Eckard Böde. Boosting Re-use of Embedded Automotive Applications Through Rich Components. In Proceedings of Foundations of Interface Technologies 2005, 2005.

[EAS] EAST-EEA Architecture Description Language, available from http://www.east-eea.net.

[GHK+ 07] Hans Grönniger, Jochen Hartmann, Holger Krahn, Stefan Kriebel, and Bernhard Rumpe. View-Based Modeling of Function Nets. In Proceedings of the Object-oriented Modelling of Embedded Real-Time Systems (OMER4) Workshop, Paderborn, October 2007.

[GHK+ 08] Hans Grönniger, Jochen Hartmann, Holger Krahn, Stefan Kriebel, Lutz Rothhardt, and Bernhard Rumpe. Modelling Automotive Function Nets with Views for Features, Variants, and Modes. In Proceedings of ERTS 2008, 2008.

[Gie08] Holger Giese. Reuse of Innovative Functions in Automotive Software: Are Components enough or do we need Services? In Tagungsband Modellierungs-Workshop MBEFF: Modellbasierte Entwicklung von eingebetteten Fahrzeugfunktionen, 2008.

[GKPR08] Hans Grönniger, Holger Krahn, Claas Pinkernell, and Bernhard Rumpe. Modeling Variants of Automotive Systems using Views. In Tagungsband Modellierungs-Workshop MBEFF: Modellbasierte Entwicklung von eingebetteten Fahrzeugfunktionen, 2008.

[OMG05] Object Management Group. Unified Modeling Language: Superstructure Version 2.0 (05-07-04), August 2005. http://www.omg.org/docs/formal/05-07-04.pdf.

[OMG06] Object Management Group. SysML Specification Version 1.0 (2006-05-03), August 2006. http://www.omg.org/docs/ptc/06-05-04.pdf.

[RFH+ 05] S. Rittmann, A. Fleischmann, J. Hartmann, C. Pfaller, M. Rappl, and D. Wild. Integrating Service Specifications at Different Levels of Abstraction. In SOSE '05: Proceedings of the IEEE International Workshop, pages 71–78, Washington, DC, USA, 2005. IEEE Computer Society.

[RS01] Bernhard Rumpe and Robert Sandner. UML - Unified Modeling Language im Einsatz. Teil 3. UML-RT für echtzeitkritische und eingebettete Systeme. at - Automatisierungstechnik, Reihe Theorie für den Anwender, 11/2001, (11), 2001.

[Rum04] Bernhard Rumpe. Modellierung mit UML. Springer, Berlin, May 2004.

[SGW94] Bran Selic, Garth Gullekson, and Paul T. Ward. Real-Time Object-Oriented Modeling. John Wiley & Sons, April 1994.

[vdB04] Michael von der Beeck. Function Net Modeling with UML-RT: Experiences from an Automotive Project at BMW Group. In UML Satellite Activities, pages 94–104, 2004.

[vdB06] Michael von der Beeck. Eignung der UML 2.0 zur Entwicklung von Bordnetzarchitekturen. In Tagungsband des Dagstuhl-Workshops Modellbasierte Entwicklung eingebetteter Systeme, 2006.

[vdBBFR03] Michael von der Beeck, Peter Braun, Ulrich Freund, and Martin Rappl. Architecture Centric Modeling of Automotive Control Software. In SAE Technical Paper Series 2003-01-0856, 2003.

[vdBBRS02] Michael von der Beeck, Peter Braun, Martin Rappl, and Christian Schröder. Automotive Software Development: A Model Based Approach. In World Congress of Automotive Engineers, SAE Technical Paper Series 2002-01-0875, 2002.

[WFH+ 06] Doris Wild, Andreas Fleischmann, Judith Hartmann, Christian Pfaller, Martin Rappl, and Sabine Rittmann. An Architecture-Centric Approach towards the Construction of Dependable Automotive Software. In Proceedings of the SAE 2006 World Congress, 2006.
Testverfahren für Elektronik und Embedded Software in
der Automobilentwicklung
Gerd Baumann, Michael Brost
Kfz-Mechatronik / Software
Forschungsinstitut für Kraftfahrwesen und Fahrzeugmotoren Stuttgart (FKFS)
Pfaffenwaldring 12
70569 Stuttgart
gerd.baumann@fkfs.de
michael.brost@fkfs.de
Abstract: Beim Test von elektronischen Steuergeräten und Embedded Software
für Kraftfahrzeuge wird eine Vielzahl von Methoden eingesetzt, die sich in den
einzelnen Entwicklungsphasen der Elektronik für ein neues Kraftfahrzeug
signifikant unterscheiden. Hinzu kommt, dass die Kraftfahrzeugelektronik ein
multidisziplinäres Arbeitsgebiet ist und dass sich die Sichtweisen und
Begriffswelten der beteiligten Fachrichtungen (Maschinenbau, Fahrzeugtechnik,
Elektrotechnik und Informationstechnik) fundamental unterscheiden. Ziel des
Beitrags ist es, eine Übersicht zur aktuellen Situation beim Test von
Automobilelektronik und -Software zu geben. Anschließend wird ein konkretes
Beispiel für automatisierte Tests der Kfz-Innenraum-Elektronik vorgestellt.
1 Einführung
1.1 Motivation
Aus der Informationstechnik ist bekannt, dass fast die Hälfte der Entwicklungskosten
von Software auf Tests entfällt [Vig04]. Da die Embedded Software in der Kfz-Elektronik eine dominierende Rolle spielt, liegt der Testanteil bei den Entwicklungskosten von ECUs in einer ähnlichen Größenordnung. Das Testen von Kfz-Elektronik und -Software ist deshalb keine Nebenaktivität, sondern ein zentraler
Bestandteil des Entwicklungsprozesses, der einen wesentlichen Beitrag zur Sicherung
und Verbesserung der Qualität von mechatronischen Kfz-Systemen darstellt.
1.2 Ein Beispiel
Ein heutiger PKW der gehobenen Klasse enthält ca. 100 MByte ausführbare Embedded
Software, die sich auf bis zu 100 Mikrocontroller verteilt. In der Softwaretechnik wird
davon ausgegangen, dass bei der Implementierung ca. 25 bis 50 Fehler pro 1000 LOC
auftreten. Nach Abschluss aller Tests verbleiben nach Vigenschow ca. 3% Restfehler
[Vig04]. Für eine fiktive Motorsteuerung mit angenommenen 10^6 LOC bedeutet dies,
dass nach der Serieneinführung noch mehrere hundert Software-Fehler unentdeckt
bleiben [Het07]. Hinzu kommen Fehler bei der Integration der Einzelsteuergeräte in das
Fahrzeug-Netzwerk. Diese Zahlen verdeutlichen die Dimension der Problematik für die
Automobilindustrie.
1.3 Zielsetzung von Tests
Es gibt zwei grundlegende Thesen zu der Frage, was Tests leisten sollen:
A) Tests sollen nachweisen, dass die Eigenschaften eines Produkts unter allen
Bedingungen mit den spezifizierten Eigenschaften übereinstimmen.
B) Tests dienen dem Auffinden möglichst vieler Fehler vor der Produkteinführung.
Im Bereich der Softwaretechnik gilt es als theoretisch gesichert, dass ein Nachweis der
vollständigen Korrektheit der Implementierung eines gegebenen Algorithmus gemäß
These A) aus verschiedenen Gründen nicht möglich ist, z.B. [How87].
Diese Aussage gilt auch für elektronische Steuergeräte und deren Software, weil niemals
eine Spezifikation existiert, die den Prüfling vollständig beschreibt, und weil die für das
Testen verfügbare Zeit bis zum Produktionsstart stets zu kurz ist. In der Realität der
Automobilentwicklung wird daher stets gemäß der „pragmatischen“ These B) getestet.
2 Die Test-Situation in der Automobilindustrie
2.1 Testebenen
Für einen effektiven Testprozess muss frühzeitig festgelegt werden, welche Arten von
Tests zu welcher Zeit vorgenommen werden und wer dafür verantwortlich ist. Dabei gilt
es einerseits, eine angemessene Testabdeckung zu erzielen, andererseits jedoch
Mehrfach-Tests zu vermeiden. Diese Aufgabe ist alles andere als trivial, vor allem im
Hinblick auf den hohen Vernetzungsgrad der Automobilelektronik und die große Zahl
der Zulieferer, deren Produkte in einem modernen Kraftfahrzeug optimal
zusammenwirken müssen.
In der Automobilindustrie hat sich das in Tabelle 1 dargestellte Vorgehensmodell
durchgesetzt, welches in dieser oder ähnlicher Form von den OEMs und den großen
Zulieferern praktiziert und weiterentwickelt wird, wobei im Einzelfall weitere Ebenen
eingeführt oder mehrere Ebenen zusammengefasst werden.
Tabelle 1: Testebenen und -Verfahren in der Kraftfahrzeugelektronik-Entwicklung

Testebene | Testobjekt (SuT) | Bevorzugte Testverfahren | Zuständigkeit
Fahrzeugtest | Fahrzeug mit vollständiger Elektronik | Fahrversuch, Subjektive Bewertung, „EMV-Halle“ | OEM, Zentralabteilung
Gesamtverbund-Test im Labor | Alle ECUs eines Fahrzeugs | „CAN-Mobil“, Kommunikationstests, Bordnetz-Tests | OEM, Zentralabteilung
Teilverbundtest | Alle ECUs eines Bussegments, z.B. Antriebsstrang | Black-Box-Tests: HIL-Verbundtest, „Brettaufbau“ | OEM, Fachabteilung
Einzel-ECU-Test | ECU mit Software, evtl. mit Peripherie | Black-Box-Tests: Fail-Safe-Tester, HIL-Test | OEM, Zulieferer
Software-Integrationstest | Alle Softwareteile einer ECU | White-Box-Tests: Aufruf-Reihenfolge, Timing-Analyse, SIL/MIL-Verfahren | Zulieferer (Tier 1)
Software-Modultest | Einzelne Funktion oder Code-Modul | White-Box-Tests: Code Reviews, Coverage-Analyse, Metriken, SIL/MIL-Verfahren | Zulieferer (Tier 1..n)
2.2 White-Box-Test vs. Black-Box-Test
White-Box-Tests erfordern die vollständige Kenntnis der internen Struktur des Prüflings.
Für den Test von Software bedeutet dies, dass der Quellcode verfügbar sein muss.
Eingesetzt werden White-Box-Tests vor allem für Software-Modul- und
Integrationstests. White-Box-Tests sind in hohem Grade automatisierbar. Es existiert
eine große Bandbreite von Testverfahren und Werkzeugen, die im Bereich der
Informatik entwickelt wurden. Einzelheiten zu den einzelnen Methoden finden sich z.B.
in [Dus00]. In der Kraftfahrzeugelektronik werden White-Box-Tests typischerweise
beim Zulieferer durchgeführt, da der OEM in der Regel keinen Zugriff auf den
Quellcode von Steuergeräte-Software hat (es sei denn, er entwickelt diesen selbst). Im
Rahmen des modellbasierten Entwicklungsprozesses werden auch Software-in-the-Loop-Testverfahren (SIL) eingesetzt, wobei diese häufig als Model-in-the-Loop (MIL)
bezeichnet werden, falls grafische Modellierer wie SIMULINK eingesetzt werden.
Beim Black-Box-Test ist die interne Struktur des Prüflings, z.B. der elektrische
Schaltplan einer ECU oder der Quellcode der eingebetteten Software, nur in groben
Zügen bekannt. Deshalb wird der Prüfling an seinen Eingangs-Schnittstellen mit einer
Auswahl an Testsignal-Kombinationen (auch als Testvektoren bezeichnet) beaufschlagt.
Anschließend wird festgestellt, ob die Ausgangssignale plausibel sind. Black-Box-Verfahren sind somit stets funktionsorientiert. Da die Menge der möglichen
Eingangssignalkombinationen bei umfangreichen Systemen sehr hoch ist, werden Black-Box-Tests typischerweise stichprobenartig durchgeführt.
In der Kraftfahrzeugelektronik werden Black-Box-Tests auf mehreren Testebenen
angewendet, insbesondere für Einzel-Steuergeräte, mehrere Steuergeräte im Verbund bis
zum vollständigen Fahrzeug-Netzwerk.
2.3 Open-Loop- und Closed-Loop-Verfahren
Während der vergangenen zehn Jahre hat sich das Hardware-in-the-Loop-Verfahren
(HIL) als Standard-Methode für Black-Box-Tests für Kfz-Elektronik etabliert. Dieses
Verfahren ist dadurch charakterisiert, dass eine oder mehrere real vorhandene ECUs im
geschlossenen Regelkreis mit einem Simulationsmodell des „Restfahrzeugs“ betrieben
werden. Die Simulation wird in Echtzeit auf einem Rechner ausgeführt, der über die
notwendigen Hardware-Schnittstellen zur Ein- und Ausgabe der elektrischen ECU-Signale einschließlich der Datenkommunikation (CAN etc.) verfügt.
[Abbildung 1: Open-Loop- und Closed-Loop-Testverfahren. Oben: Open-Loop-Test – die Stimuli-Erzeugung speist die ECU (System under Test), die Ausgabe des Reglers (ECU) wird gegen die Testreferenz verglichen. Unten: Closed-Loop-Test – die ECU ist über eine Rückführung mit einem Modell der Regelstrecke gekoppelt, die Ausgabe des Regelkreises wird gegen die Testreferenz verglichen.]
HIL zählt zu den Closed-Loop-Verfahren. Es ermöglicht die Simulation nahezu
beliebiger dynamischer Betriebszustände elektronisch geregelter Kfz-Systeme. Daher ist
es immer dann zu empfehlen, wenn schnelle Regelungen geprüft werden sollen, z.B. bei
Motor- und Getriebesteuerungen sowie bei Schlupf- und Fahrdynamikregelungen (ABS,
ESP). Es ist jedoch zu beachten, dass das Testergebnis nicht nur vom Verhalten des
Prüflings (ECU) beeinflusst wird, sondern auch vom verwendeten Streckenmodell,
welches im Sinne der Test-Theorie streng genommen einen Bestandteil des Prüflings
bildet (Abbildung 1 unten).
Open-Loop-Funktionstests (Abbildung 1 oben) sind vor allem für reaktive Systeme
geeignet, also für ECUs, deren Verhalten sich im Wesentlichen durch
Zustandsautomaten beschreiben lässt und die nicht Bestandteil von Regelkreisen mit
kurzen Zeitkonstanten sind. Dies trifft auf viele Systeme im Bereich der Innenraum- und
Karosserie-Elektronik zu. Hierbei lässt sich mit vergleichsweise einfachen Mitteln eine
relativ hohe Testabdeckung erzielen.
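Das Grundprinzip eines solchen Open-Loop-Funktionstests lässt sich in wenigen Zeilen C skizzieren; die folgende Skizze ist eine freie Veranschaulichung mit angenommenen Schnittstellenfunktionen und kein Auszug aus einem realen Testsystem.

#include <stdbool.h>
#include <stdio.h>

#define NUM_INPUTS  4
#define NUM_OUTPUTS 2

/* Ein Testvektor: Stimuli fuer die Eingaenge und erwartete Ausgaben. */
typedef struct {
    double stimuli[NUM_INPUTS];
    double expected[NUM_OUTPUTS];
    double tolerance;
} test_vector_t;

/* Angenommene Anbindung an den Pruefling (ECU bzw. Modell). */
extern void sut_apply_inputs(const double inputs[NUM_INPUTS]);
extern void sut_read_outputs(double outputs[NUM_OUTPUTS]);

/* Open-Loop-Schritt: Stimuli anlegen, Ausgaenge lesen, gegen Referenz pruefen. */
bool run_open_loop_step(const test_vector_t *tv)
{
    double outputs[NUM_OUTPUTS];
    sut_apply_inputs(tv->stimuli);
    sut_read_outputs(outputs);
    for (int i = 0; i < NUM_OUTPUTS; i++) {
        double diff = outputs[i] - tv->expected[i];
        if (diff > tv->tolerance || diff < -tv->tolerance) {
            printf("Ausgang %d weicht von der Referenz ab\n", i);
            return false;
        }
    }
    return true;
}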
Im folgenden Kapitel wird ein systematisches Verfahren zur automatisierten
Generierung von Black-Box-Testabläufen für zustandsorientierte Systeme in der
Automobilelektronik vorgestellt, das auf dem Open-Loop-Ansatz basiert.
3 Automatisierte Testfall-Generierung aus einem UML-Testmodell
3.1 Grundlagen
Am FKFS wird in enger Zusammenarbeit mit einem OEM ein neues Verfahren
entwickelt, das auf Basis einer zustandsbasierten Spezifikation mit verschiedenen
Elementen der Unified Modeling Language (UML) automatisiert Testfälle herleiten
kann. Eine wichtige Maßgabe dabei ist die praktische Umsetzbarkeit in der
Serienentwicklung.
Die funktionsorientierte Spezifikation erfolgt mittels Use Cases und Protokollautomaten.
Daraus werden in einem dreistufigen Prozess Testfälle erstellt. In der ersten Stufe
werden alle Protokollautomaten strukturell umgeformt und UML-spezifische Elemente,
Nebenläufigkeiten und Zustandshierarchien eliminiert. Die zweite Stufe des Prozesses
basiert auf graphentheoretischen Algorithmen und sucht nach gegebenen
Abdeckungskriterien Wege durch die resultierenden flachen Automaten. Im Anschluss
erfolgt in der dritten Stufe durch Auswertung der mit allen Zuständen und Übergängen
verbundenen Bedingungen die Ermittlung von Stimuli und Reaktionen des Systems.
Die daraus folgenden abstrakten Testfälle werden dann mittels eines Konverters für ein
spezifisches Testsystem ausgegeben.
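Die Wegsuche der zweiten Stufe lässt sich im Kern als Pfadaufzählung über einem flachen Automaten auffassen. Die folgende Skizze zeigt eine stark vereinfachte Tiefensuche in C; sie dient nur der Veranschaulichung des Prinzips und entspricht nicht den in [Bro08] beschriebenen Algorithmen.

#include <stdio.h>

#define MAX_TRANS  32
#define MAX_DEPTH  16

/* Flacher Automat als Liste von Uebergaengen (Quelle -> Ziel). */
typedef struct { int from; int to; } transition_t;

typedef struct {
    transition_t trans[MAX_TRANS];
    int          num_trans;
    int          final_state;
} automaton_t;

/* Tiefensuche: zaehlt alle (tiefenbegrenzten) Pfade vom aktuellen Zustand zum
 * Endzustand auf; jeder gefundene Pfad entspraeche einem abstrakten Testfall. */
static void enumerate_paths(const automaton_t *a, int state,
                            int path[], int depth)
{
    if (state == a->final_state) {
        printf("Pfad:");
        for (int i = 0; i < depth; i++) printf(" %d", path[i]);
        printf(" %d\n", state);
        return;
    }
    if (depth >= MAX_DEPTH) return;          /* einfache Zyklen-/Tiefenbegrenzung */
    path[depth] = state;
    for (int i = 0; i < a->num_trans; i++) {
        if (a->trans[i].from == state) {
            enumerate_paths(a, a->trans[i].to, path, depth + 1);
        }
    }
}

void print_all_paths(const automaton_t *a, int start_state)
{
    int path[MAX_DEPTH];
    enumerate_paths(a, start_state, path, 0);
}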
3.2 Beispiel: Teilfunktion eines Türsteuergeräts
In Abbildung 2 ist der Protokollautomat zur Spezifikation der Fensterheber-Funktion
„Komfortschließen“ dargestellt. Nachdem die Zündung (Klemme 15) initial gesetzt
wird, erfolgt die Rücknahme der Klemme 15 und die Freigabe der Komfortfunktion des
Fensterhebers über CAN. Das Schließen des Fensters wird nur bei deaktivierter
Kindersicherung (KiSi) durch zwei unterschiedliche CAN-Signale (KomfortSchliessen1
und KomfortSchliessen2) ausgelöst. Ein nachträgliches Aktivieren der Kindersicherung
hat keinen Einfluss auf den Betrieb des Fensterhebers, der nach einer maximalen
Betriebszeit dT_Betrieb wieder deaktiviert wird. Vor einer erneuten Aktivierung des Fensterhebers tritt eine Verzögerung von dT_VzNeustart in Kraft. Die Freigabe der Funktion wird durch das Öffnen einer Tür oder das Freigabesignal selbst wieder beendet. Der Ablauf der Nachlaufzeit dT_Nachlauf beendet ebenfalls den Betrieb des Fensterhebers.
Jeder Zustand definiert, ob der Motor des Fensterhebers in Betrieb ist oder nicht. Aus
jedem Zustand wird daher eine Operation zur Überwachung des entsprechenden
Ausganges und eine Sollvorgabe erzeugt.
Abbildung 2: Protokollautomat für die Funktion „Komfortschließen der Fenster“
Abbildung 3: Mögliche Pfade durch den Zustandsautomaten
Aus der Automatenstruktur des Zustandsautomaten ergeben sich bis zu 16 Pfade,
die in Abbildung 3 dargestellt sind. Die Angaben an den einzelnen Übergängen zeigen
die Anzahl der Permutationen an, für die die mit diesem Übergang verknüpfte
Bedingung wahr ist. Der erste Wert zeigt dies für das Verfahren der Logikminimierung
an und der zweite Wert für die Mehrfachbedingungsabdeckung. Nach Auswertung der
Guard-Bedingungen gemäß Mehrfachbedingungsabdeckung folgen daraus bis zu 1914
Testfälle. Durch Anwendung der Optimierung mittels Logikminimierung kann dies auf
166 Testfälle reduziert werden. Die theoretischen Grundlagen und Algorithmen des
Verfahrens werden in [Bro08] beschrieben.
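Was die Angaben an den Übergängen bedeuten, lässt sich an einem frei erfundenen Beispiel verdeutlichen: Bei der Mehrfachbedingungsabdeckung werden für jeden Guard alle Belegungen seiner atomaren Bedingungen aufgezählt und die erfüllenden Permutationen als Testfälle übernommen; die Logikminimierung reduziert diese Menge anschließend auf wenige repräsentative Belegungen. Die folgende C-Skizze zeigt nur das Aufzählen, der Beispiel-Guard ist eine Annahme.

```c
/* Skizze (hypothetischer Guard): Aufzaehlung aller Belegungen der atomaren
 * Bedingungen a, b, c fuer die Mehrfachbedingungsabdeckung. */
#include <stdio.h>

static int guard(int a, int b, int c) {        /* frei gewaehlter Beispiel-Guard */
    return (a && !b) || c;
}

int main(void) {
    int n_true = 0;
    for (int m = 0; m < 8; m++) {              /* alle 2^3 Belegungen von (a, b, c) */
        int a = (m >> 2) & 1, b = (m >> 1) & 1, c = m & 1;
        if (guard(a, b, c))
            printf("Permutation %d: a=%d b=%d c=%d\n", ++n_true, a, b, c);
    }
    printf("%d von 8 Belegungen erfuellen den Guard\n", n_true);
    return 0;
}
```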
Diese relativ hohe Anzahl von 166 Testfällen wird auf einer Open-Loop-Testmaschine
automatisch durchgeführt und die Ergebnisse werden automatisch ausgewertet. Dadurch
erfordert der Test der beschriebenen Beispiel-Funktion nur wenig manuelle Arbeit von
Test-Ingenieuren und führt dennoch zu einer nachvollziehbaren, hohen Testabdeckung.
Zurzeit kommt in einem Serienentwicklungsprojekt bei einem OEM die am FKFS
entwickelte Testmaschine PATE zum Einsatz [Bau05]. Die generierten Testabläufe lassen
sich jedoch auch auf andere Testmaschinen übertragen.
Literaturverzeichnis
[Vig04] Vigenschow, U.: Objektorientiertes Testen und Testautomatisierung in der Praxis. dpunkt-Verlag, 2004. ISBN 3-89864-305-0.
[Het07] Hettich, G.: Qualität automobiler Elektronik. Skriptum zur Vorlesung, Universität Stuttgart, 2007.
[How87] Howden, W. E.: Functional Program Testing & Analysis. McGraw-Hill, 1987.
[Dus00] Dustin, E.; Rashka, J.; Paul, J.: Software automatisch testen. Verfahren, Handhabung und Leistung. Berlin: Springer, 2000.
[Bro08] Brost, M.; Baumann, G.; Reuss, H.-C.: UML-basierte Testfallerzeugung für Karosseriesteuergeräte. Tagung „Simulation und Test in der Funktions- und Softwareentwicklung für die Automobilelektronik“, Berlin, erscheint in 5/2008.
[Bau05] Baumann, G.; Brost, M.; Reuss, H.-C.: P.A.T.E. - Eine neue Integrations- und Automatisierungsplattform für den Test von elektronischen Steuergeräten im Automobil. 6. Internationales Stuttgarter Symposium Kraftfahrwesen und Verbrennungsmotoren. expert-Verlag, 2005.
Durchgehende Systemverifikation im
Automotiven Entwicklungsprozess
Oliver Niggemann, Rainer Otterbach
dSPACE GmbH
Technologiepark 25
33100 Paderborn
oniggemann@dspace.de
rotterbach@dspace.de
Abstract: System- und vor allem Software-Architekturmodelle sind im Bereich
der automotiven Software-Entwicklung gerade dabei, die Phase von
Konzeptprojekten und Vorentwicklungsaktivitäten zu verlassen und in ersten
Serienprojekten Verwendung zu finden. Entscheidend für die Akzeptanz dieser
neuen Ansätze wird in Zukunft u.a. die Simulierbarkeit dieser Modelle auf dem PC
sein. Dies würde die frühere Verifikation der verteilten Steuergeräte- und
Software-Systeme erlauben und spätere, etablierte Testphasen wie z.B. den
Hardware-in-the-Loop Test entlasten.
Dieser Beitrag versucht, anhand eines Prozessmodells Einsatzmöglichkeiten von
Systemmodellen für die PC-basierte Simulation und damit für eine frühe
Systemverifikation aufzuzeigen. Die Autoren möchten dieses Paper vor allem auch
dazu nutzen, einige Fragestellungen aufzuwerfen, die aus ihrer Sicht aktuell
ungelöst und daher für eine Diskussion während des Workshops besonders
interessant sind.
Für diese Arbeit wurde auf Teile von bereits veröffentlichten Arbeiten
zurückgegriffen, und das dSPACE Modellierungs- und Simulationswerkzeug
SystemDesk wurde zur Illustrierung der Arbeitsschritte verwendet.
1 Ein Prozessmodell für Systemmodelle
Die modellbasierte Entwicklung der Steuergeräte-Software und die automatische
Generierung des Seriencodes haben sich weithin etabliert, zumindest auf
Einzelfunktionsebene. Die Einbeziehung der Software-Architektur steckt dagegen noch
in den Kinderschuhen, obwohl gerade hier der Schlüssel zur Bewältigung der
zunehmenden Komplexität zu finden ist. Ein Grund liegt darin, dass Methoden und
Werkzeuge für die Modellierung und erst recht für die Simulation von automotiven
Software-Architekturen fehlen. Sie können auch nicht einfach aus anderen
Anwendungsgebieten, zum Beispiel der Telekommunikation, übernommen werden, wie
zahlreiche Versuche etwa auf der Basis von UML gezeigt haben. Ähnliches gilt für die
heute bekannten, modellbasierten Entwurfswerkzeuge, die sehr gut die Sicht des
Funktionsentwicklers widerspiegeln, aber nicht geeignet sind, Software-Architekturen
im großen Stil darzustellen.
Aus diesem Grund haben sich in den letzten Jahren automotiv ausgerichtete Software- und System-Architekturmodelle als Ergänzung zu den reinen Funktionsmodellen im
Entwicklungsprozess etabliert (siehe auch [SNO07]). Erste Systementwurfswerkzeuge
wie SystemDesk von dSPACE erlauben dabei die Modellierung von Funktionsnetzen
und Software-Architekturen, die Abbildung von Software auf ECU-Netzwerke und die
Generierung von Integrationssoftwareschichten wie z.B. die AUTOSAR Runtime-Environment (RTE, siehe auch [AUR]).
In Zukunft werden solche Architekturmodelle darüber hinaus ausführbar sein müssen
und eine frühzeitige Verifikation des Systems durch Simulation in Nicht-Echtzeit auf
dem PC erlauben. Unter Einbeziehung und Erweiterung der heute vorhandenen
Simulationswerkzeuge für regelungs- und steuerungstechnische Funktionen muss dies
auch für vernetzte Funktionen und seriencodierte Software-Komponenten eines
gesamten Steuergerätes und eines Steuergeräteverbunds möglich sein. Bei Closed-Loop-Simulationen müssen Streckenmodelle mit dem Software-System verbunden werden.
[Abbildung 1 zeigt über den Ebenen Requirement Level, Logical Level, System Level und Implementation Level die Verfeinerung von den Top-Level Requirements über Vehicle Functional Architecture, System Architecture und Vehicle Implementation Architecture bzw. von Requirements for Functionalities und Requirements for Single Functions über Function Design, Optimized Function Design und Function Implementation Design bis zu RTE Code, Bus Configurations und Production Code; Modularization in vertikaler, Refinement in horizontaler Richtung, Generation als abschließender Schritt.]
Abbildung 1: Prozessrahmen zur Erläuterung der Verwendung von System- und Software-Architekturmodellen
Diese zu entwerfenden und zu simulierenden Modelle werden hier aus
Übersichtlichkeitsgründen in drei Kategorien (grob durch Prozessschritte in der E/E-Entwicklung motiviert) eingeordnet, die sich durch Verfeinerung aus den Anforderungen
ergeben (horizontale Richtung in Abbildung 1). Anforderungen liegen als informelle
Textdokumente oder frühzeitig entwickelte Modelle vor, zum Beispiel kann eine
Lichtsteuerung relativ einfach mit Hilfe eines endlichen Automaten definiert werden.
Die Modularisierung (vertikale Richtung) ist für den Umgang mit komplexen Systemen
unerlässlich und dient der Arbeitsteilung zwischen Unternehmen und einzelnen
Entwicklern.
Der Prozessrahmen in Abbildung 1 wird hier verwendet, um zu erläutern, wie sich
System- und Software-Architekturmodelle in bestehende E/E-Entwicklungsprozesse
eingliedern. Da je nach Modellkategorie (horizontale Richtung) unterschiedliche Effekte
simuliert werden können, ist diese Prozessdarstellung zum anderen auch geeignet, durch
eine PC-Simulation identifizierbare Systemfehler (z.B. CAN-Signalausfälle) den
Phasen des Entwicklungsprozesses zuzuordnen (siehe auch [ONS07]). Hieran ist dann
erkennbar, wie solch eine Systemsimulation in Zukunft dazu beitragen kann, einige
Fehler und Risiken schon früher im Test- und Absicherungsprozess zu erkennen.
Diskussionspunkt: Sind Modularisierung und Modellverfeinerung verschiedene
Aspekte des Modellierungsprozesses? Fehlen Modelle oder Prozessschritte in
dieser Darstellung?
2 Logische Ebene
Auf logischer Ebene (Modellkategorie 1) wird das System als hierarchisches Modell
vernetzter Funktionalitäten oder Software-Komponenten spezifiziert. Solche logischen
Software-Architekturen werden häufig von Fahrzeugherstellern in frühen Phasen
verwendet, um (i) die Modularisierung und Verteilung von Fahrzeugfunktionen zu
entwerfen, (ii) wieder verwendbare Software-Komponenten zu spezifizieren und (iii)
Software-Schnittstellen z.B. mit Zulieferern zu vereinbaren. Auswirkungen der später zu
verwendenden Hardware-Plattform werden dabei nicht berücksichtigt.
Die Simulation auf logischer Ebene findet in erster Linie Fehler im logischen
Zusammenspiel vernetzter Komponenten, die in der gegenseitigen Abhängigkeit der
Funktionen begründet sind. Je mehr das gewünschte Systemverhalten durch verteilte
Funktionen realisiert wird, desto mehr lohnt sich die frühe Simulation. Beispiele dafür
sind der gesamte Bereich der Komfortelektronik, aber auch Fahrerassistenzsysteme wie
adaptive Fahrgeschwindigkeitsregelung, Aktivlenkung oder Spurhalteunterstützung.
AUTOSAR (siehe [AU]) bezeichnet eine Simulation auf logischer Ebene auch als
Virtual-Functional-Bus (VFB) Simulation.
Abbildung 2 zeigt ein Beispiel für die logische Architektur einer Blinkerregelung (hier
mit dem Systemmodellierungs-Werkzeug SystemDesk von dSPACE erstellt). Software-Komponenten sind, unabhängig von ihrer späteren Verteilung auf Steuergeräte,
miteinander verbunden, um eine Gesamtfunktion zu realisieren.
Diskussionspunkt: Heute findet man zunehmend zwei Arten von Modellen auf
dieser Ebene:
1. Software-Architekturen: Dies sind klassische Software-Modelle mit
formalen Interfaces, strenger Typisierung aller Daten und Referenzen zu
Implementierungsdateien. AUTOSAR ist ein Beispiel für einen Standard
in diesem Bereich. Zweck dieser Modelle ist eine
implementierungsgeeignete und daher formale Modellierung der Software.
Die Durchgängigkeit zur Serien-Software-Entwicklung (z.B. The
MathWorks Simulink® als Modellierungsumgebung mit dem
Seriencodegenerator TargetLink von dSPACE) ist hierbei ein
entscheidendes Merkmal.
2. Funktionsnetze: Diese Netze stellen eine hierarchische Dekomposition der
Funktionen eines Fahrzeugs dar. Zusätzlich wird zumeist die Abbildung
von Funktionen auf Steuergeräte modelliert. Solche Modelle werden oft
von Fahrzeugherstellern verwendet, um sehr früh im Prozess alle
benötigten Funktionen zu erfassen und verschiedene Aufteilungen von
Funktionen auf Steuergeräte durchzudenken und zu bewerten.
Werden sich in Zukunft beide Modelle nebeneinander etablieren? Wie wird der
Übergang zwischen Anforderungen, Funktionsnetzen und Software Architekturen
aussehen? Oder sind Funktionsnetze eher Bestandteil der Anforderungsebene?
Wie kann diese Software-Architektur simuliert werden?
Voraussetzung ist zunächst einmal, dass das Verhalten aller Komponenten implementiert
wurde. In solchen frühen Phasen der Entwicklung heißt dies, dass die Komponenten
entweder als ausführbare Spezifikationsmodelle vorliegen oder existierende
Implementierungen aus frühen Phasen verwendet werden. Spezifikationsmodelle
stammen oft von Herstellern und definieren Funktionen wie z.B. obige Blinkerregelung
für die späteren Software-Entwickler bzw. für Code-Generatoren. Anders als Modelle,
die später für die Software-Generierung verwendet werden, enthalten solche Modelle oft
keine implementierungsnahen Angaben wie z.B. Fixpoint-Definitionen oder
Startup/Shutdown-Funktionen.
Abbildung 2: Verschaltung von Software-Komponenten auf logischer Ebene
Im Folgenden wird von Software-Komponenten nach dem AUTOSAR-Standard
ausgegangen. AUTOSAR hat den Vorteil, wiederverwendbare Software-Komponenten
definiert zu haben, d.h. solche Komponenten sind plattformunabhängig implementiert
und können daher im Normalfall auch auf dem PC ausgeführt werden. Darüber hinaus
weisen AUTOSAR Software-Komponenten standardisierte Schnittstellen für die
Kommunikation mit ihrer Umgebung auf. Im Fall von nicht-AUTOSAR Komponenten
müssen entweder die Komponenten für das Simulationswerkzeug passend „verpackt“
werden, oder es erfolgen Anpassungen im Simulationswerkzeug (bei SystemDesk z.B.
sehr einfach möglich).
Aus Simulationssicht müssen zwei Hauptprobleme vor einer Simulation auf logischem
Niveau gelöst werden:
1. Wann und in welcher Reihenfolge werden die Funktionen in den Software-Komponenten (d.h. die so genannten Runnables) aufgerufen?
2. Wie wird, aufbauend auf der Reihenfolge aus Frage 1, die Kommunikation zwischen
den C-Modulen von der Simulationsumgebung implementiert? Hier sind ja separat
entwickelte C-Module zu verbinden, und dafür ist es notwendig, Daten in die C-Module
hineinzusenden bzw. herauszuholen. Es sei darauf hingewiesen, dass dies ohne weitere
Kenntnisse über den Aufbau der C-Module (wie z.B. von AUTOSAR definiert) nur sehr
schwer möglich ist.
Die erste Frage ist durch eine Analyse der Abhängigkeiten zwischen den Software-Komponenten
zu klären. Alle dafür notwendigen Informationen sind in der
(AUTOSAR) Software Architektur vorhanden. Nachdem Software-Komponenten (bzw.
deren Runnables) im Allgemeinen für die Ausführung mit einem automotiven
Echtzeitbetriebssystem (z.B. AUTOSAR OS oder OSEK) implementiert wurden, ist es
sinnvoll, für die Simulation einen OSEK Schedule zu erstellen und die Simulation auf
dem PC mit einem OSEK Emulator durchzuführen.
Für die Simulation muss nun noch der Verbindungscode zwischen den Software-Komponenten generiert werden. Dies entspricht dem Runtime-Environment (RTE)
Generierungsschritt des AUTOSAR Standards. SystemDesk verwendet hierzu die
gleiche RTE-Generierung, die auch für ECU Serienprojekte zum Einsatz kommt.
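Wie solcher Verbindungscode prinzipiell aussehen kann, deutet die folgende C-Skizze an: Zwei separat entwickelte Runnables kommunizieren über einen vom Verbindungscode bereitgestellten Signalpuffer, und der Simulator ruft sie in der zuvor ermittelten Reihenfolge auf. Es handelt sich um eine stark vereinfachte Annahme mit frei erfundenen Namen; der tatsächlich von der RTE-Generierung erzeugte Code und die standardisierten AUTOSAR-RTE-Signaturen sind deutlich umfangreicher.

```c
/* Skizze (vereinfachte Annahme, kein generierter AUTOSAR-RTE-Code):
 * Verbindungscode, der zwei separat entwickelte C-Module ueber einen
 * Signalpuffer koppelt. Runnable- und Portnamen sind frei erfunden. */
#include <stdint.h>
#include <stdio.h>

static int16_t buf_BlinkAnforderung;              /* Puffer fuer das verschaltete Signal */

/* vereinfachte, vom Verbindungscode bereitgestellte Zugriffsfunktionen */
static void    Rte_Write_BlinkAnforderung(int16_t v) { buf_BlinkAnforderung = v; }
static int16_t Rte_Read_BlinkAnforderung(void)       { return buf_BlinkAnforderung; }

/* Runnable der sendenden Software-Komponente (hypothetisch) */
static void Runnable_Blinkhebel_10ms(void) {
    Rte_Write_BlinkAnforderung(1);                 /* Hebel betaetigt */
}

/* Runnable der empfangenden Software-Komponente (hypothetisch) */
static void Runnable_Blinksteuerung_10ms(void) {
    if (Rte_Read_BlinkAnforderung())
        printf("Blinker aktiv\n");
}

/* Der Simulator ruft die Runnables gemaess ermitteltem Schedule in fester Reihenfolge auf */
int main(void) {
    for (int cycle = 0; cycle < 3; cycle++) {
        Runnable_Blinkhebel_10ms();
        Runnable_Blinksteuerung_10ms();
    }
    return 0;
}
```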
Diskussionspunkt: Woher kommen in solchen Szenarien die Implementierungen der
Komponenten? Werden Fahrzeughersteller in Zukunft für solch eine Simulation
formale Spezifikationen der Funktionen verwenden? Oder wird es ein
Geschäftsmodell geben, bei dem Zulieferer simulationsfähige Versionen von
Software-Komponenten an Hersteller liefern?
Welche Möglichkeiten gibt es, vereinfachte Funktionsspezifikationen zu modellieren?
Wie kann ein Anwender also in frühen Phasen eine ausführbare Spezifikation
erstellen, ohne den kompletten Implementierungsprozess durchlaufen zu müssen?
3 Systemebene
Modelle auf Systemebene definieren die Verteilung von Funktionen auf Steuergeräte
und die Software-Struktur der Anwendungsebene auf einem einzelnen Steuergerät.
Damit wird unter anderem die Planung und Erstellung der Kommunikationsmatrizen aus
der Gesamtsicht der E/E-Architektur des Fahrzeugs ermöglicht und die Arbeitsteilung
zwischen Fahrzeughersteller und Zulieferern unterstützt.
In der Simulation können nun zusätzliche Effekte dargestellt werden, insbesondere
solche, die von den Kommunikationsbussen verursacht werden. SystemDesk simuliert
nun z.B. für CAN-Busse Auswirkungen der Arbitrierung oder der in der Realität immer
begrenzten Buskapazität.
Die Simulation erlaubt eine grobe Abschätzung der Busauslastung und die
Berücksichtigung von Kommunikationsverzögerungen bei der Analyse des zeitlichen
Verhaltens der Anwendung.
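Das Prinzip einer solchen groben Abschätzung lässt sich mit einer einfachen statischen Rechnung illustrieren: Die Buslast ergibt sich näherungsweise aus der Summe der Übertragungszeiten aller zyklischen Botschaften, jeweils bezogen auf ihre Zykluszeit. Die folgende C-Skizze verwendet frei gewählte Beispielbotschaften und pauschale Annahmen (11-Bit-Identifier, 20 % Aufschlag für Stopfbits) und stellt keine SystemDesk-Funktionalität dar.

```c
/* Skizze (grobe Abschaetzung unter vereinfachten Annahmen):
 * statische Buslastabschaetzung fuer zyklische CAN-Botschaften.
 * Botschaften, Zykluszeiten und Bitrate sind frei gewaehlte Beispielwerte. */
#include <stdio.h>

typedef struct { const char *name; int dlc; double period_ms; } CanFrame;

int main(void) {
    const double bitrate = 500000.0;               /* 500 kBit/s */
    const CanFrame frames[] = {
        {"Tuermodul_Status",     8,  20.0},
        {"Komfort_Anforderung",  4,  10.0},
        {"Licht_Status",         2, 100.0},
    };
    double load = 0.0;
    for (unsigned i = 0; i < sizeof frames / sizeof frames[0]; i++) {
        /* ca. 47 Rahmenbits plus 8 Bit je Datenbyte, pauschal 20 % fuer Stopfbits */
        double bits = (47.0 + 8.0 * frames[i].dlc) * 1.2;
        double tx_time_ms = bits / bitrate * 1000.0;
        load += tx_time_ms / frames[i].period_ms;  /* Anteil der Sendezeit an der Periode */
    }
    printf("geschaetzte Buslast: %.1f %%\n", load * 100.0);
    return 0;
}
```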
Diskussionspunkt: Busabschätzungen werden heute auch mit statistischen
Methoden durchgeführt. Dies geschieht oft in frühen Phasen, in denen noch keine
Komponentenimplementierungen vorhanden sind, d.h. eine Simulation noch nicht
möglich ist.
Wird es in Zukunft Bedarf für beide Formen der Busabschätzung geben? Welche
Methode ist für welche Fragestellungen sinnvoller?
4 Implementierungsebene
Die Implementierungsebene dient als Basis für die Erstellung des gesamten
Steuergeräte-Seriencodes. Das Systemmodell muss jetzt alle Informationen enthalten,
die für die Implementierung der Funktionalitäten notwendig sind. Beispielsweise werden
Informationen zur Festkomma-Arithmetik oder Abhängigkeiten zum Mode-Management
des Steuergeräts ergänzt.
Die Implementierungsebene bietet naturgemäß die weitreichendsten Simulationsmöglichkeiten, da hier wichtige Parameter der Seriensteuergeräte bekannt
sind und der Seriencode verfügbar ist.
Das Anwendungs-Timing beruht hauptsächlich auf dem zugrundeliegenden
Betriebssystem-Scheduling des Steuergeräts, zum Beispiel, ob Runnables periodisch
oder beim Eintreten bestimmter Ereignisse aufgerufen werden. Aus diesem Grund
verwendet SystemDesk eine OSEK-Betriebssystemsimulation einschließlich des Betriebssystem-Schedulings auf dem PC, um das Scheduling-Verhalten der Anwendung zu simulieren.
Die Fragen dabei sind, wie präzise diese Simulation ist, welche Effekte simuliert werden
können und welche Effekte bei einem solchen Ansatz nicht simuliert werden. Es
ist dafür wichtig, zwischen der simulierten Zeit (gesteuert von SystemDesk) und der
realen Ausführungszeit auf dem PC (definiert durch den PC-Takt) zu unterscheiden. Die
zur Ausführung von Tasks auf dem PC benötigte Zeit unterscheidet sich natürlich völlig von der
benötigten Zeit auf dem Seriensteuergerät.
Eine Möglichkeit in SystemDesk ist der Einsatz einer sogenannten Nulllaufzeit-Annahme für die Simulation auf dem PC. Damit werden Tasks dann unverzüglich, d.h.
ohne Zeitverbrauch, in der virtuellen Simulationszeit ausgeführt. Das ist notwendig, da
dem Simulator die auf dem Steuergerät notwendige Ausführungszeit nicht bekannt ist.
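Das Zusammenspiel von virtueller Simulationszeit und Nulllaufzeit-Annahme lässt sich mit einer kleinen, frei erfundenen Simulationsschleife verdeutlichen: Die Simulationszeit springt von Aktivierungsereignis zu Aktivierungsereignis, während die Ausführung der Tasks selbst keine Simulationszeit verbraucht. Die folgende C-Skizze ist eine vereinfachte Annahme und bildet nicht die tatsächliche SystemDesk-Implementierung ab.

```c
/* Skizze (vereinfachte Annahme): Simulationsschleife mit Nulllaufzeit-Annahme.
 * Die virtuelle Simulationszeit wird nur vom Simulator fortgeschaltet;
 * Task-Namen und Perioden sind frei erfunden. */
#include <stdio.h>

typedef struct { const char *name; unsigned period_ms; unsigned next_ms; } Task;

int main(void) {
    Task tasks[] = { {"Task_10ms", 10, 0}, {"Task_100ms", 100, 0} };
    const unsigned n = sizeof tasks / sizeof tasks[0];
    unsigned t_sim = 0;                            /* virtuelle Simulationszeit in ms */

    while (t_sim <= 300) {
        for (unsigned i = 0; i < n; i++) {
            if (tasks[i].next_ms == t_sim) {
                /* Task laeuft "unverzueglich": t_sim bleibt unveraendert */
                printf("t=%3u ms: %s\n", t_sim, tasks[i].name);
                tasks[i].next_ms += tasks[i].period_ms;
            }
        }
        /* Zeit auf das naechste anstehende Aktivierungsereignis fortschalten */
        unsigned next = tasks[0].next_ms;
        for (unsigned i = 1; i < n; i++)
            if (tasks[i].next_ms < next) next = tasks[i].next_ms;
        t_sim = next;
    }
    return 0;
}
```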
Bei der Simulation von Steuergeräte-Anwendungen ist es ebenfalls notwendig, Teile der
Basis-Software zu emulieren. SystemDesk verwendet dafür vereinfachte (PC-)Versionen
der AUTOSAR Basis-Software Module. Einer der wichtigsten Aspekte ist die
Betrachtung des zeitlichen Verhaltens des Systems, das maßgeblich durch das
Betriebssystem-Scheduling bestimmt wird. Da SystemDesk wie bereits geschildert eine
OSEK-Betriebssystemsimulation auf dem PC verwendet, kann also das Scheduling-Verhalten der auf Systemebene modellierten ECUs weitgehend simuliert werden.
Darüber hinaus emuliert SystemDesk zurzeit die folgenden Basis-Software-Module:
Modus-Management, Fehler-Management, NV-RAM-Manager und AUTOSAR-COM-Schicht.
Diskussionspunkt: Es stellt sich hier die Frage, für welche Szenarien die Basic-Software mitsimuliert wird:
1. Wird die Basic-Software nur deshalb emuliert, da Applikationen diese zum
Betrieb benötigen (z.B. einen NVRAM Manager)?
2. Oder sollen allgemeine Betriebszustände der Basic Software erfasst werden? In
diesem Fall würde die Emulation der Basic Software ausreichen. Ein Beispiel wäre
die Erfassung der aufgetretenen Fehlercodes im Fehlerspeicher eines Steuergeräts.
3. Soll die Basic-Software selbst getestet werden? In diesem Fall muss natürlich
die Serien-Software selbst verwendet werden können.
5 Fazit
Schon heute ist es möglich, Systemmodelle z.B. nach AUTOSAR auf einem PC zu
simulieren. Entsprechende Ansätze, wie sie z.B. aktuell für SystemDesk erprobt werden,
erlauben dabei sowohl die Simulation von Modellen der logischen Ebene als auch die
Simulation kompletter Systemmodelle, die u.a. Hardware-Topologie, Basis-Software
und Busse umfassen.
In Zukunft werden sich der etablierte Hardware-in-the-Loop-Test und PC-basierte Simulationen ergänzen und gemeinsam zur weiteren Qualitätssicherung von E/E-Architekturen und der automotiven Software beitragen.
Literaturverzeichnis
[SNO07] Dirk Stichling, Oliver Niggemann, Rainer Otterbach. Effective Cooperation of System
Level and ECU Centric Tools within the AUTOSAR Tool Chain. SAE World Congress
& Exhibition, April 2007, Detroit, USA
[ONS07] Rainer Otterbach; Oliver Niggemann; Joachim Stroop; Axel Thümmler; Ulrich
Kiffmeier. System Verification throughout the Development Cycle. ATZ
Automobiltechnische Zeitschrift (Edition 004/2007), Vieweg Verlag / GWV Fachverlage
GmbH
[AU] AUTOSAR Partnership: http://www.autosar.org/
[AUR] AUTOSAR Runtime-Environment Spezifikation Version 2.1, http://www.autosar.org/
Modeling Guidelines and Model Analysis Tools in Embedded Automotive Software Development
Ingo Stürmer1, Christian Dziobek2, Hartmut Pohlheim3
1 Model Engineering Solutions, stuermer@model-engineers.com
2 Daimler AG, christian.dziobek@daimler.com
3 Daimler AG, hartmut.pohlheim@daimler.com
Abstract: Model-based development and automatic code generation have become
an established approach in embedded software development. Numerous modeling
guidelines have been defined with the purpose of enabling or improving code generation, increasing code efficiency and safety, and easing model comprehension. In
this paper, we describe our experience with modeling guidelines and model analysis tools used in typical embedded automotive software development projects. Furthermore, we will describe new approaches for supporting developers in verifying
and repairing modeling guideline violations.
1 Introduction
The way in which automotive embedded software is developed has undergone a change.
Today, executable models are used at all stages of development: from the initial design
through to implementation (model-based development). Such models are designed using
popular graphic modeling languages, such as Simulink and Stateflow from
The MathWorks [TMW07]. Code generators that are capable of translating MATLAB
Simulink and Stateflow models into efficient production code include TargetLink [TL08]* and the Real-Time Workshop/Embedded Coder [TMW07]. The software
quality and efficiency are strongly dependent upon the quality of the model used for
code generation. The increasing complexity of embedded controller functionality also
leads to a higher complexity of the model used for automatic controller code generation.
Modeling tools that are typically used for controller model design cannot provide sufficient support to the developer in dealing with increasing model complexity. One and the
same issue can, for example, be modeled in different ways, thus intensifying problems
such as coping with functional design, the limited resources available, and model/code
maintenance. For that reason, numerous modeling guidelines have been defined with the
purpose of enabling or improving code generation, increasing code efficiency and safety,
and easing model comprehension.
In this paper, we focus on the adoption of modeling guidelines in the model-based development of embedded software. Examples of modeling guidelines will be presented
and their objectives specified. We will introduce the e-Guidelines Server: a tool for preparing and administering guidelines for different projects. We will also present a case study of a large-scale software development project. In the next step, we will present methods of static analysis to verify whether modeling guidelines are being properly followed. In addition to discussing the tools that are available, we will present current implementation scenarios and provide a forecast of probable implementation scenarios in the years to come. Both methods (modeling guidelines and static model analysis) share the goal of increasing the quality of the controller model used for production code generation.
* TargetLink uses its own graphical notation for code generation (TargetLink blockset), which is based on the Simulink modeling language.
2 Modeling Guidelines for Controller Model Design
An agreement to follow certain modeling guidelines is important in order to increase
comprehensibility (readability) of a model, facilitate maintenance, ease testing, reuse,
and expandability, and simplify the exchange of models between OEMs and suppliers.
This is the aim of the MathWorks Automotive Advisory Board (MAAB) guidelines and
patterns [MAAB07], for example. Publicly available guidelines such as these are often
supplemented by in-house sets of modeling guidelines, for example the Daimler Model-based Development Guidelines (DCX:2007) [Daimler07]. These provide a sound basis for
checking the models used for production code generation. The adoption of such guidelines can significantly improve the efficiency of generated code.
2.1 Examples of Modeling Guidelines
In the following section, we will discuss different categories of modeling guideline (see
table 1). These refer to (1) model layout, (2) arithmetical problems, (3) exception handling, and (4) tool and project-specific constraints. Examples of modeling guidelines that
we are using here are guidelines for model layout and arithmetical problems. As a running example, we will use a snapshot (function or module) of a TargetLink controller
model (see figure 1). This model violates several modeling guidelines. The same model
is depicted complying with the DCX:2007 modeling guidelines in figure 2.
A large proportion of modeling guidelines focuses on the readability and proper documentation of a model. The adoption of such ‘layout guidelines’ can significantly increase
the comprehensibility of a model (cf. figure 1 and figure 2). Here are some simplified
examples of guidelines that can be applied to achieve a better model layout:
• Rule 1: Inports should be located on the left-hand side of a model; outports should be located on the right-hand side of a model.
• Rule 2: Signal lines should not cross.
• Rule 3: Names or blocks should not be obscured by other blocks.
• Rule 4: Signal flow should be described from left to right.
• Rule 5: No use of ‘magic numbers’.
• Rule 6: The functionality of a model should be described on each graphic model layer.
Arithmetical operations, such as a multiplication with two or more operands, are carried
out in Simulink using product blocks. A product block with three inputs is shown in figure 1 (top right). This way of modeling may create problems when the model is translated into fixed-point† code by a code generator. Intermediate results must be calculated,
as they cannot be determined automatically by the code generator. For this reason, the
following guideline must be followed in order to avoid arithmetical problems with fixed-point calculation:
• Rule 7: Product blocks with more than two inputs are prohibited in models used for production code generation. A cascade of product blocks with two inputs should be used instead.
An arithmetical exception with a product block can occur if the block is used for calculating the quotient of two numbers. If the divisor is zero, a division by zero occurs (exception). For this reason, a guideline states that the product block option “omit division
by zero in production code” is selected (not shown in figure 1). If not, a division by zero
might occur in the code, because no additional code for exception handling will be generated.
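The following C sketch illustrates why rule 7 matters for fixed-point code: a three-operand multiplication is broken down into a cascade of two-operand multiplications with an explicitly chosen scaling for the intermediate result, and a division is guarded against a zero divisor. All scalings, names, and the substitute value are illustrative assumptions only, not code emitted by TargetLink.

```c
/* Sketch (hypothetical example, not generated TargetLink code): cascaded
 * fixed-point multiplication with explicit intermediate scaling and a
 * simple guard against division by zero. All Q15 scalings are assumptions. */
#include <stdint.h>
#include <stdio.h>

/* Q15 multiply: both operands scaled by 2^-15, result rescaled to Q15 */
static int16_t mul_q15(int16_t a, int16_t b) {
    return (int16_t)(((int32_t)a * (int32_t)b) >> 15);
}

/* cascade for a * b * c: the intermediate result a*b gets its own Q15 scaling */
static int16_t mul3_q15(int16_t a, int16_t b, int16_t c) {
    int16_t tmp = mul_q15(a, b);       /* intermediate result with explicit scaling */
    return mul_q15(tmp, c);
}

/* division with exception handling: a zero divisor is intercepted */
static int16_t div_q15_protected(int16_t num, int16_t den) {
    if (den == 0)
        return 0;                      /* substitute value instead of dividing by zero */
    return (int16_t)(((int32_t)num << 15) / den);
}

int main(void) {
    printf("%d\n", mul3_q15(16384, 16384, 16384));   /* 0.5 * 0.5 * 0.5 in Q15 -> 4096 */
    printf("%d\n", div_q15_protected(16384, 0));     /* guarded division -> 0 */
    return 0;
}
```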
In model-based development, different tools are used for e.g. testing the model and comparing the test results with the production code. Tools such as MTest [MES08b] or
TPT [TPT07] are usually used for this purpose. Both tools require that the interface of a
function or module to be tested must be ‘well-defined’. For a Simulink model, for example, this means that only single-lined names are used for ports and the names may not
contain any special characters (e.g. carriage return) except underscore. For this reason,
the name of the outport “est. air flow.” in figure 1 (top right) is replaced in figure 2
by the outport name “est_air_flow”. Other guidelines may be specified for specific
development projects. In most cases, naming conventions are part of such projectspecific conventions. Other guidelines may take into account specific hardware types or
platforms that are being used in the respective project.
Table 1. Categories of modeling guidelines (Category: Aim/goal)
• Model layout: Increase readability, maintainability, portability
• Arithmetical problems: Prevent typical arithmetical problems (e.g. division by zero, fixed-point limitations)
• Exception handling: Increase robustness of the model and the generated code
• Tool-specific considerations: Guidelines that address tool-specific considerations, e.g. ensure that the model can be tested with model testing tools such as MTest [MES08b] and TPT [TPT07]
• Project-specific guidelines: Naming conventions; guidelines which ensure that model and code can deal with project-specific limitations (e.g. a specific hardware type)
† Fixed-point arithmetic is the preferred calculation method in embedded software development for reasons of limited resources and code efficiency.
In summary: Different categories of modeling guidelines are specified in order to address specific topics in the model-based development process. As we can see, the application of a mere seven modeling guidelines can significantly improve the quality of a
model used for code generation (cf. figure 1 and figure 2).
Figure 1. TargetLink model violating several modeling guidelines
Figure 2. TargetLink model in compliance with Daimler modeling guidelines
2.2 Why do we need Modeling Guidelines?
This question has been posed repeatedly ever since the early days of model-based software development. Other common questions include: How should parameters and options be set up? How should blocks be ordered? How can this task be modeled? What
should not be used? Why is this construct creating problems? I do not understand this
model or subsystem, etc. These questions and problems have all been encountered by
developers at some point in time, and in most cases, have been solved very successfully.
The purpose of modeling guidelines is to collate this experience, then have a group of
modeling experts inspect, verify, and process the resulting guidelines, and finally making
them available to a wider group of developers. Without the knowledge contained in
modeling guidelines, every modeler would have to rely on the extent of his own experience and start from the beginning in each new project, thus leading to regularly occurring problems in the modeling process. Here are just a few of the results of faulty modeling:
• A faulty model (bitwise AND/OR or logical AND/OR, missing brackets in complex logical expressions)
• A model that is difficult to read and therefore difficult to understand, thus leading to e.g. an unnecessary delay in the model review
• A model that can only be migrated to a limited extent (critical in the reutilization of libraries, for example)
• A model that is difficult to maintain
• Unsafe code (especially when using a code generator)
• Greater difficulty of model testing (model and environment are not clearly separated, no clear interfaces for the model test)
2.3 Daimler Modeling Guidelines
The Daimler modeling guidelines are a collection of best-practices and in-house expertise focusing on (1) the design of MATLAB Simulink, Stateflow, and TargetLink models (e.g. for cars and trucks), (2) model testing, and the (3) review of production code,
which has been generated by a code generator. At present, there are over 300 guidelines
and patterns for MATLAB/Simulink/Stateflow and TargetLink models alone. This number has developed over many years, and the guidelines themselves have been developed
and maintained by a few experts in text document form. In the past, an update occurred
approximately every 9 to 12 months. Project-specific guidelines were derived from a
particular status and maintained separately. This document-heavy way of operating, with
no central administration and no easy way of distributing guidelines, became unmanageable at an early stage. In response to this problem, concepts for an electronic administration of these guidelines were developed, and their feasibility proven in principle. In particular, the challenge of enabling simple and secure access for all Daimler employees necessitated the setting up of a central administration and publication via the public e-Guidelines Server. This will be described in more detail in section 3.1.
3 Tool Support
The purpose of this section is threefold. Firstly, to describe tool support for the management and publication of modeling guidelines. Secondly, to provide an overview of static
model analysis tools that are capable of analyzing a model with regard to modeling guideline compliancy. Thirdly, to outline a case study where we used a static model analysis
tool in order to (a) evaluate the maturity of a large-scale model with regard to production
code generation, and (b) check the model in regard to guideline compliancy. Note that
the term check does not mean model checking, i.e. a formal verification of the model.
Instead we use the term model conformance checking, or check in short, to describe a
program that verifies whether a model complies with a modeling guideline.
3.1 e-Guidelines Server
The e-Guidelines Server [MES08a] is a web-based infrastructure for publishing and centrally administering different sets of guidelines. The tool was originally developed in the
Research and Technology department at Daimler. Since 2007, the e-Guidelines Server
has been a commercial tool distributed by Model Engineering Solutions. In contrast to
the internal company and document-specific publication of modeling guidelines, the
e-Guidelines Server enables the joint administration and consistent updating of different
sets of guidelines, as well as the project-specific variants which are derived from them.
Moreover, different filters and views can be created for the available guidelines. Filters
can be understood as selectors for guidelines that relate to specific tool versions, or for
guidelines that focus on specific model design aspects, model quality, or process-related
aspects such as readability, workflow, simulation, verification and validation, and code
generation (termed rationale in the following).
Figure 3. GUI of the e-Guidelines Server
The GUI of the e-Guidelines Server is shown in figure 3. The image illustrates a typical
modeling guideline, which focuses on naming conventions, e.g. ANSI C conformance of
identifiers. The structure of a guideline or rule is as follows: each rule has a unique ID, a
Scope in which the rule is valid (e.g. a department or a project), a Title, a Description
(text and graphics), and a History field for tracking revisions (not shown in figure 3).
Each rule is represented by an XML document that is transformed via XSLT transformations into an HTML document as shown in figure 3. For this reason, a rule can contain
configurable meta-tags, such as Priority, MATLAB Version, TargetLink Version, and
Rationale. Such meta-tags are usually used for specific filter options, e.g. you can filter
for all rules that belong to the latest MATLAB version. Most guidelines available on the
e-Guidelines Server are company-specific and for internal use only, such as the Daimler
Model-based Development Guidelines (DCX:2007) and project-specific variants of these
guidelines. Access to these guidelines is only granted to Daimler employees and development department suppliers. Some of the guidelines on the server, however, are publicly available guidelines such as the MAAB guidelines or the dSPACE guidelines for
modeling and code generation with TargetLink. All Daimler development department
guidelines are now hosted and centrally published on the e-Guidelines Server. As a result, no department-specific variants of specific guidelines exist, and developers can be
sure that they are always accessing the latest guideline version.
Centralized administration of guidelines enables project-specific guideline libraries to be
compiled simply and efficiently. These are based on the main guidelines and are supplemented by project-specific modifications to individual guidelines and a few additional
guidelines. Changes and updates to the main guidelines are mirrored in all project-specific guideline libraries. The gradual drifting apart of guidelines belonging to different projects is thus avoided, and all projects use the latest guideline version. On the one hand, this results in considerably reduced effort and expenditure for maintaining modeling guidelines at Daimler; on the other hand, the spread and acceptance of the guidelines in different projects is increased.
3.2 Model Analysis Tools
Static model analysis tools exist for the automatic verification of models with respect to
modeling guideline compliancy. The purpose of these tools is to establish a framework
in which to perform checks. The check itself is a program that verifies a particular guideline. The authors are aware that prototypes stemming from different research projects are
available. These tools are also capable of performing model analysis of potential violations of modeling guidelines. Here, however, we wish to restrict ourselves to a discussion of commercial tools, which can be applied in a serial industrial project.
Commercial static model analysis tools that are available for MATLAB Simulink, Stateflow, and TargetLink models are the Simulink Model Advisor [TMW07] or
MINT [Ric07]. With both tools, model analysis is carried out by executing a selection of
MATLAB M-scripts. The checks either come with the tool or can be supplemented as
user-written checks. The checks that are distributed with the tools, for example the
Model Advisor checks, focus on considerations with regard to the modeling language
(Simulink, Stateflow), model settings, the MAAB style guidelines, or to safety process
standards such as DO-178B. The results of model verification using the checks are compiled into reports. These show all detected guideline violations linked to the model. In a
final step, the faulty model patterns must be investigated and, where necessary, corrected
by the developer: a highly time-intensive and error-prone task.
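To give an impression of what such a check does conceptually, the following sketch verifies a list of port names against an ANSI C identifier convention. It is written in C purely for illustration and uses made-up port names; the checks actually distributed with the Model Advisor or MINT are MATLAB M-scripts that operate on the model itself.

```c
/* Sketch (illustration only; real checks are MATLAB M-scripts operating on the
 * model): a minimal naming-convention check for port names. Names are made up. */
#include <ctype.h>
#include <stdio.h>

static int is_ansi_c_identifier(const char *name) {
    if (!name[0] || (!isalpha((unsigned char)name[0]) && name[0] != '_'))
        return 0;
    for (const char *p = name + 1; *p; p++)
        if (!isalnum((unsigned char)*p) && *p != '_')
            return 0;                  /* special character or whitespace found */
    return 1;
}

int main(void) {
    const char *port_names[] = { "est_air_flow", "est. air flow.", "throttle_pos" };
    int violations = 0;
    for (unsigned i = 0; i < sizeof port_names / sizeof port_names[0]; i++) {
        if (!is_ansi_c_identifier(port_names[i])) {
            printf("violation: port name \"%s\" is not ANSI C conformant\n",
                   port_names[i]);
            violations++;
        }
    }
    printf("%d violation(s) found\n", violations);
    return 0;
}
```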
3.3 Experience with Model Analysis Tools
A problem shared by all of the above model analysis tools is their limited flexibility and
adaptability. Checks that already exist in development departments, for example, cannot
be directly integrated into the check management environment, or only with considerable
difficulty. Moreover, existing checks must be rewritten or even reimplemented for each
analysis tool. User-written checks are an integral part of the Model Advisor and MINT
and cannot be executed without one of these tools. Static corrections are currently not
available with the Model Advisor and MINT (though potentially possible).
In a recent in-house case study at Daimler, we used the MINT framework to check a
typical large-scale controller model from the automotive domain. The emphasis of this
case study was on (a) evaluating the maturity of the model with regard to production
code generation, and (b) checking the model with regard to guideline compliancy. It
turned out that the model contained over 2,000 guideline violations. Upon closer inspection, we estimated that up to 45 percent (900) of these 2,000 violations might have been
fixed automatically, if tools with automatic repair capabilities had been used. Furthermore, we estimated that approximately 43 percent could have been repaired with user
feedback (interactive repair operations). Only 8 percent actually necessitated a manual
correction. A final 4 percent remained undefined, which means that the modeler has to
determine whether the reported violation actually constitutes a modeling guideline infringement.
The manual input necessary for checking guideline compliancy is too high for the guidelines to be sensibly implemented without tool support. Adherence to guidelines can only
be integrated into the development process if tools are available that efficiently support
the developer in both checking guideline compliancy, and also in repairing guideline
violations. In response to this challenge, static model analysis tools with automatic repair
capabilities have been developed. These will be discussed in the next section.
4 How to Support the Developer?
In this section, we will discuss two tools, which we believe provide efficient support for
the model developer in checking and repairing guideline violations. We are currently
pursuing two separate approaches in developing tools for model analysis and model repair: (1) The Model Examiner: a tool for static analysis and repair of guideline violations; (2) MATE: conducts data flow analysis and can perform complex repairs via
model transformation.
4.1 The Model Examiner (MXAM)
The Model Examiner (Model eXAMiner, MXAM [MES08a]) is a static model analysis
and repair tool. MXAM was originally developed as a pilot project between Daimler and
Model Engineering Solutions. The purpose of MXAM is (1) to provide model conformance checks, which are not restricted to a specific framework such as the Model Advisor, MINT or the Model Examiner itself. Rather, the generic checks can be executed (a)
in any of the mentioned frameworks, (b) in stand-alone mode (MATLAB command line,
see figure 4) or (c) in batch-processing, e.g. as part of a user-written script; (2) to provide
automatic model repair functionality (see below). Tool independency of the model
checks is given in that each check can be executed on the MATLAB command line and
within the MXAM framework per se. For other tools, such as the Model Advisor, a
small piece of wrapper software is required, that allows the checks to be loaded into the
Model Advisor framework directly when MATLAB is started.
Figure 4. A Model Examiner check executed from the MATLAB command line.
All guidelines that can be statically checked are implemented in the Model Examiner. If
automatic repair is possible (without a structural modification of the model), this is made
available. In other words, the Model Examiner not only analyzes guideline violations, it
also corrects guideline violations that can be repaired statically (figure 4, bottom). Furthermore, the Model Examiner displays much greater flexibility and adaptability. This
allows it to be integrated into existing or planned development environments much more
easily. The reports not only list guideline violations, they also document all repair operations that have been carried out (not shown in figure 4). This last point is particularly
important for documentation purposes when developing safety-relevant software. The
Model Examiner also has a uni-directional connection/link to the guidelines on the e-Guidelines Server. This enables the developer to access the source or the original guideline library, on the basis of which the model was checked, if a violation is detected.
The Model Examiner has already been piloted within the development departments of Daimler (truck division). MXAM was used before a model review of the controller models was carried out. As an outcome, the use of MXAM could significantly reduce the workload of a model review, since nearly all formal aspects of guideline violations (naming conventions, coloring, etc.) were detected and many of them could be corrected automatically. Apart from such formal aspects, guideline violations were reported that most developers are not aware of, since such guidelines focus on very specific limitations of code generators (e.g. unsupported rounding operations).
4.2 MATE
The MATLAB Simulink and Stateflow Analysis and Transformation Environment (MATE) [ALSS07] provides support in detecting previously specified guideline violations,
and in addition–in contrast to the Model Advisor or MINT–provides the possibility of
automatically applying predefined model transformations, e.g. repair operations, layout
improvements, and modeling pattern instantiations. MATE offers four different categories of model transformation and improvement functions: (1) Automatic repair functions: These can potentially eliminate up to 60 percent of MAAB guideline violations by
means of automated model transformations. A typical straightforward repair function,
for example, would be to set the actual view of every subsystem view to 100 percent;
(2) Interactive repair functions: These are high-level analysis and repair functions that
require user feedback, since the required information cannot be determined automatically. One example would be autoscaling support functions, which check and calculate
scaling errors by means of data flow analysis, and generate proposals for how to
eliminate these errors. Such functions often need additional fixed-point scaling information or input about handling specific data types. The transformation of a product block
with more than two operands (as shown in figure 1) is also a typical repair function with
user feedback, as the product block can be automatically transformed into a cascade of
product blocks with two operands. However, the modeler must provide additional scaling information for the intermediate results (see [ST07] for details); (3) Design pattern
instantiation: The MAAB guidelines provide a rich set of modeling patterns, such as
the Simulink if-then-else-if pattern and flowchart switch-case constructs. These predefined, MAAB guideline-conformant pattern instances can be automatically generated
from a pattern library. For example, MATE supports instantiation of a parameterized
flowchart switch-case pattern, the nesting depth of which can be configured by the modeler; and (4) Model ‘beautifying’ operations: Providing a suitable model layout is important and often requisite. For example, it is common practice to locate all Simulink
inports on the left side of a subsystem. MATE provides a set of beautifying operations
that, for example, align a selected set of blocks to the left, right, or center them in relation to a reference block. Furthermore, signals leading into ports of a subsystem can be
easily interchanged when signal lines are overlapping.
Compared to the Model Advisor, MINT, and the Model Examiner, MATE is a tool that
provides high-level analysis and repair functions that can hardly be handled by M-script
implementation. MATE uses a meta-model of a MATLAB/Simulink/Stateflow model
that can also be accessed by other tools. The analysis and transformation of the model is
carried out on a model repository using the Java API of MATLAB. MATE has not yet
reached the required stage of maturity for an industrial application and therefore MATE
will be further developed in the context of a BMBF project (also termed MATE). However, it is already evident that MATE integrates high-level functions that will play an
ever greater role in future, as models become increasingly complex. The continued development of MATE will take place in the context of a research and development project.
5 Conclusions and Future Work
An increasing number of serial software projects at Daimler are being implemented using models created in Simulink, Stateflow, and TargetLink. To support developers in
their work, the experience collected in various project contexts has been collated and
processed into guideline libraries. These provide developers with a comprehensive body
of knowledge and templates/patterns on which they can rely. In the past, however, strict
adherence to these guidelines always required a certain measure of discipline and a considerable amount of hard work–particularly in the initial phase of a project or when a
new developer joined the team. This provided the impetus for attempting to find new
ways of automating a part of such routine operations. With the help of static model verification tools such as Model Advisor, MINT, and the Model Examiner, it is now possible
to check approximately 80-90% of guidelines automatically. Developers receive detailed
information about which parts of their model do not comply with the guidelines and
why, or which problems might arise due to the type of modeling–all at the ‘touch of a
button’. This development has freed developers from a considerable part of routine operations, and at the same time, has increased general readiness to observe and implement
the guidelines. Static model analysis tools can be integrated into the development process without great difficulty. The next step was to tackle the automatic repair of guideline
violations. After all, 45% of guideline violations can be automatically repaired in a typical serial project, and a further 40 to 45% with the help of user feedback. Tremendous
time savings can be made here through the use of tools such as the Model Examiner. In
this way, the automatic verification of guideline compliancy and the repair of violations
can be easily integrated in the development process of serial projects. Moreover, model
quality can be increased considerably. Data flow analysis, pattern instantiation, and
model transformation with tools such as MATE take this process one step further. They
allow complex problems to be checked and processed, which is not possible using static
model analysis tools. Admittedly, a lot more work must be invested in MATE before the
tool reaches maturity. It is important to remember that all these tool-based automations are designed to unburden the developer from routine operations. The tools support
the developer and take away ‘boring’ recurrent tasks, thus giving him time to concentrate
on areas that no tool can master–the creation of models and the critical review of model
parts. Manual reviews can now concentrate on the algorithm and no longer have to take a
whole set of simpler guidelines into account, which have in the past often obscured
the view for what is more important. Almost as a byproduct, the model becomes clearer
and easier to read in review thanks to the automated observance of many guidelines.
This also creates a greater readiness on the part of the developer to perform a manual
review, the review itself is easier to carry out, and the developer is (hopefully) able to
detect the few hidden problems that hinder the testing process. Taken as a whole, guidelines, the e-Guidelines Server, the Model Examiner, and MATE are all tools that effectively support us in our aim of creating efficient, safe, and error-free models for serial
code development, and keeping the quality of models at a consistently high level.
References
[ALSS07] Ameluxen, C., Legros, E., Schürr, A., Stürmer, I.: Checking and Enforcement of Modeling Guidelines with Graph Transformations, to appear in Proc. of Application of Graph Transformations with Industrial Relevance (AGTIVE), 2007.
[BOJ04] Beine, M., Otterbach, R., Jungmann, M.: Development of Safety-Critical Software Using Automatic Code Generation. SAE Paper 2004-01-0708, Society of Automotive Engineers, 2004.
[Daimler07] Daimler Model-based Development Guidelines (DCX:2007). http://www.eguidelines.de, internal, 2007.
[MAAB07] MathWorks Automotive Advisory Board: Control Algorithm Modeling Guidelines Using MATLAB, Simulink, and Stateflow, Version 2.0, 2007.
[MES08a] Model Engineering Solutions Berlin, http://www.model-engineers.com/products, 2008.
[MES08b] Model Engineering Solutions, MTest-classic, http://mtest-classic.com, 2008.
[Ric07] Ricardo, Inc., MINT, http://www.ricardo.com/mint, 2007.
[SCFD06] Stürmer, I., Conrad, M., Fey, I., Dörr, H.: Experiences with Model and Autocode Reviews in Model-based Software Development. Proc. of 3rd Intl. ICSE Workshop on Software Engineering for Automotive Systems (SEAS 2006), Shanghai, 2006.
[ST07] Stürmer, I., Travkin, D.: Automated Transformation of MATLAB Simulink and Stateflow Models, to appear in Proc. of 4th Workshop on Object-oriented Modeling of Embedded Real-time Systems, 2007.
[TL08] dSPACE: TargetLink 2.2.1: Production Code Generator. http://www.dspace.com, 2008.
[TMW07] The MathWorks Inc. (product information), www.mathworks.com/products, 2007.
[TPT07] Piketec GmbH Berlin: Time Partition Testing (TPT), http://www.piketec.com/, 2007.
Integrating Timing Aspects in Model- and
Component-Based Embedded Control System Development
for Automotive Applications ∗
Patrick Frey and Ulrich Freund
ETAS GmbH
Borsigstr. 14
70469 Stuttgart
{patrick.frey, ulrich.freund}@etas.de
Abstract: Today, model-based software development for embedded control systems
is well established in the automotive domain. Control algorithms are specified using
high-level block diagram languages and implemented through automatic code generation. Modular software development and integration techniques such as those supported proprietarily by specific tools (e.g. ASCET [ETA06a]) have been around for
more than 10 years, and the concepts are now heading towards industry-wide application through the results of the AUTOSAR initiative [AUT06b].
Still, OEMs and their suppliers are facing challenges, especially when integrating software as modules or components from different parties into the vehicle's electric/electronic (E/E) system.
This paper discusses how the consideration of resource consumption (especially
execution time) and system level timing within a development approach which employs model- and component-based software development can lead to predictable integration.
1 Introduction
Today, automotive control applications are developed using high-level block diagram languages from which target-specific implementations are automatically derived through code
generation. Prominent tools which support this approach are ASCET [ETA06a] or Matlab/Simulink. Furthermore, functional validation and verification of the control algorithm behavior is performed through prototyping. For instance, the prototyping platform
INTECRIO [ETA07] supports both virtual prototyping (i.e. prototyping against a simulated plant) and rapid prototyping (i.e. prototyping against the real plant) of control
applications modularly developed using high-level block diagram models in ASCET or
Matlab/Simulink.
In the last decades, modular software development techniques have evolved as an orthogonal concept to model-based specification, design and implementation of embedded software. However, OEMs and their suppliers are facing challenges, especially when integrating software as modules or components1 from different parties into the vehicle's electric/electronic (E/E) system. The concepts were only supported proprietarily by specific tools (e.g. ASCET [ETA06a]). The industrial initiative AUTOSAR [AUT06b] adapts the concept of modular software development for the automotive domain, enabling an industry-wide application.
∗ This work has partially been supported by the ATESST and INTEREST projects. ATESST and INTEREST are Specific Targeted Research Projects (STRePs) in the 6th Framework Programme (FP6) of the European Union.
Failures at integration time happen when multiple software components are integrated
into a system in order to collaboratively fulfill a specific function or functions, and the
integrated system does not operate as expected. Such failures can be classified as either
due to interface problems, communication problems or timing problems. Interface problems occur when the implemented communication interfaces of two software modules are
not compatible. Incompatibility can be caused by wrong signal resolutions (e.g. int8 instead of int32), wrong quantization (different mappings of value ranges to communicated
signals) or wrong interpretation of the physical value (e.g. mph instead of kmh). Communication problems occur when data sinks are not properly supplied by data sources, e.g.
when two software components communicate over a network (“dangling ends”). While
interface and communication problems are rather static problems of the structural entities of a control application’s software architecture - and these problems can already be
identified at pre-compile time, timing problems are problems of the dynamic behavior of
the control application which occur at run-time and thus are more complicated. Timing
problems constitute themselves, for example, as
• timing variations in data transformation and data communication
• temporally unsynchronized correlated actions
• bursty processing of data transformations and data communication (temporal overload)
These problems are induced into the control application or automotive software application
in general by the underlying microcontroller and networking backbone.
1.1 Outline
The rest of this paper is structured as follows: Section 2 gives a brief introduction into
the main results of the AUTOSAR initiative. Section 3 presents an integrated approach
for automotive embedded systems development which goes beyond the AUTOSAR approach. Section 4 details how timing aspects can be explicitly considered at various stages
in model- & component-based embedded control system development. Section 5 explains
our current efforts of implementing the consideration of timing aspects within the ETAS
tool chain. Finally, Section 6 closes the paper with statements on the relevance of the
presented work, draws conclusions and gives a future outlook on further work.
1 The terms software component and software module both denote the same in this paper.
2 AUTOSAR
In the automotive industry, the results of the AUTOSAR initiative are currently considered
as a major step towards the future development of automotive electric/electronic (E/E)
systems. AUTOSAR proposes a
• standardized, layered electronic control unit (ECU) software architecture with an
application software layer, a middleware layer and a basic software layer. To increase interoperability and ease exchangeability of basic software modules from
different vendors, AUTOSAR defines standardized application programming interfaces (APIs).
• a software component (SWC) technology for the application software, which - together with the middleware layer which is termed Runtime Environment (RTE) enables a deployment-independent development (more precisely specification) of
software components with the possibility of reallocation within the vehicle network
and reuse across multiple platforms.
• a methodology which describes how an AUTOSAR-based E/E system - tentatively
composed of multiple ECUs interconnected through various bus systems - or part
of it is actually built. In this context, AUTOSAR defines two architecture views,
the virtual functional bus (VFB) view and the Mapping view2 of an automotive E/E
system or part of it.
Fig. 1 depicts the basic approach as proposed by AUTOSAR [AUT06b].
The central step is the mapping or deployment of software components which have been
integrated into a virtual software architecture (VFB view) to single ECU nodes of an E/E
system. Note that the mapping is not necessarily arbitrary but can be constrained. Additional information for the configuration of the standardized ECU basic software is provided
as ECU descriptions and used in the ECU software build process.
Open issues in AUTOSAR
While AUTOSAR provides adequate solutions for the interface and communication problems, e.g. through strongly typed interfaces for software components, timing
problems are not considered well. Furthermore, AUTOSAR only considers partial
aspects of embedded systems development, namely those which are rather close to
implementation. This, for instance, has become obvious from the work of the AUTOSAR
Timing Team which tried to develop an appropriate timing model for AUTOSAR: a lack
of higher levels of abstraction in AUTOSAR has been identified as a main reason why specific timing-related engineering information could not be integrated.
2 Note that the term “Mapping view” has not yet been defined by AUTOSAR in any specification document. We thus introduce the term in this paper in order to refer to a system where all required mappings (SWCs to ECU nodes, system signals to network frames, RunnableEntities to OS tasks) and basic software module configurations have been performed.
Figure 1: AUTOSAR basic approach (figure taken from [AUT06b])
In our development approach, we consider the AUTOSAR results as a basis. We provide extensions which go beyond the current concepts of AUTOSAR. Furthermore, we integrate
timing aspects such as timing requirements gathering, early and late stage timing property determination using alternative methods as well as validation and verification through
system-level timing analysis methods.
3 Model-Based Embedded Control Systems Development
Fig. 2 depicts a development approach for automotive embedded systems which integrates
the AUTOSAR approach. The figure is structured in several abstraction levels in two
dimensions:
• Architecture abstraction levels (or views) reflecting the different perspectives on an
E/E system at different development stages and with different engineering focus.
The architecture abstraction levels are defined according to those of the architecture
description language EAST-ADL2 defined in the ATESST project (www.atesst.org).
• Artifact abstraction levels reflecting the artifacts of an embedded system development under the perspective of a specific architecture abstraction view.
Figure 2: Function architecture and software architecture in an extended V development approach with timing aspects
Architecture abstraction levels (views)
The architecture abstraction levels are classified into a) logical architecture views
and b) technical architecture views as well as c) function architecture views and d)
software architecture views. The architecture classes a) and b) are distinct from each other; the same holds for c) and d). A logical architecture view abstracts from mapping- as well as target- and implementation-specific information, while a technical architecture view includes exactly this information. A function architecture view describes a function or set of functions from the perspective of the problem domain, e.g. control engineering. A software architecture
view describes a software architecture of a set of considered functions in terms of a
software component technology. For the latter, we consider the software component
technology as specified by AUTOSAR [AUT06a].
In Fig. 2, three architecture abstraction levels are distinguished:
• The function architecture abstraction level is a logical view of a considered set of
vehicle functions. Within the function architecture view, top-level functions are
analyzed, hierarchically decomposed and refined into elementary functions.
• The virtual software architecture abstraction level is a logical view on the software
architecture of a set of considered vehicle functions. The previously hierarchically
decomposed top-level vehicle functions are clustered to software components which
are then aggregated to compositions. At this point, interface compliance between the
software components can be verified which corresponds to the elimination of the
above-mentioned interface and communication problems.
• The technical software architecture abstraction level is a technical view on the
software architecture of a completely mapped, integrated set of functions - as software components - into the E/E architecture of a vehicle. All ECU configurations
(runnable-to-OS-task mappings, basic software, etc.) and bus system configurations
(frame types, IDs, signal-to-frame mapping, etc.) are included.
Artifact abstraction levels
In Fig. 2, four artifact abstraction levels are distinguished.
• The system level which - under each architecture view - contains top-level artifacts.
In the function architecture view, for instance, this includes the top-level functions
and their interactions. With respect to timing, top level timing requirements such
as end-to-end timing requirements or sensing or actuating synchronization requirements are present. In the software architecture view, software components and their
interactions are present. In the virtual software architecture, these interactions are
reduced to purely logical data communication while in the technical software architecture, remote communication is reflected through system signals which are sent
over the vehicle network as payload of network frames.
• The node level is only used under the technical architecture view, since only there are the nodes of the distributed embedded system - as which the E/E system of a vehicle can be regarded - actually present.3
• The component level which considers single software components, in our case
AUTOSAR entities such as AtomicSoftwareComponents, their InternalBehavior in
the form of RunnableEntities and their access specifications to data on the structural interface of the software component. For details on the AUTOSAR software
component technology see [AUT06a].
• The elementary functional building block level which contains a refinement of the
top-level functions in its most detailed form, i.e. a set of interconnected, elementary
functions. Elementary functions have a precise semantical specification (read inputs; perform data transformations; write outputs; terminate in deterministic time).
Note that for single ECU systems the system-level and the node-level are the same.
Relation between architecture views
The relation between adjacent architecture abstraction levels is as follows:
The link from the artifacts under the function architecture view, i.e. the elementary functions, to the artifacts of the software architecture view, i.e. software components, is via the componentization of these elementary functions. The function architecture view can answer the question of where software components actually come from and which degrees of freedom developers have in creating them. The possible granularity of software components in a software component architecture and the possible degrees of freedom in componentization are not further discussed in this paper.
The relation from the artifacts under the virtual software architecture view to the technical
architecture view is via the mappings and configurations as defined by the AUTOSAR
approach.
Validation and verification
Verification provides evidence that the technical solution as it is implemented
conforms to the specification (Are we building the system right?) while validation
provides evidence that the technical solution satisfies the requirements from which the
specification is derived (Are we building the right system?).
The validation and verification (V & V) activities are performed between artifacts on the
same artifact abstraction level of adjacent architecture abstraction levels. In this paper,
we focus on the validation and verification of timing aspects. Functional validation and
verification through prototyping activities at different stages of the development approach
can be integrated in a similar way; however, this is not discussed in this paper.
3 Note that when introducing another architectural view, e.g. a virtual technical architecture view, the node
level might also be present there.
With respect to timing, ultimately, the goal is to provide evidence that the timing properties
of a technical solution satisfy the timing requirements of the functional specification or
design (verification). For system-level timing analysis, for instance, the goal is to provide
evidence that the end-to-end latencies which are present in a technical solution satisfy
the end-to-end latency requirements specified for the top-level functions of a functional
specification. Whether the technical solution also satisfies the requirements from which
the specification has been derived in first place is subject to validation. The latter requires
that the technical solution (i.e. the implemented E/E system) is tested against the real
plant.
The techniques and technologies which we consider to conduct a system-level timing analysis for verification purposes are integrated into different stages of the sketched development approach depicted in Fig. 2 and further explained in Section 4.
Other considerations
The relationship between the virtual software architecture and the technical software architecture can be considered as the fulfillment of the basic AUTOSAR approach,
i.e. the step from the Virtual Functional Bus view to the Mapping view of an AUTOSAR
E/E system. Note that Fig. 2 abstracts from technical details such as consideration of
information in the System Constraints Template or ECU configuration information.
Furthermore, note that Fig. 2 is exhaustive neither in the number of possible architecture views nor in the level of detail of the artifacts under the perspective of such a view. The figure can easily be extended by integrating a virtual mapped E/E system architecture view as well as a further function abstraction level, e.g. to distinguish between functional
analysis and design (refer to EAST-ADL2 system model).
4 Integrating Timing Aspects in Model- and Component-Based Embedded Control Systems Development
The rationale for integrating timing considerations into a model- and component-based
embedded control systems development approach is to be able to verify the actual timing
properties of a technical solution against the original timing requirements. In the following, we explain the origin of timing requirements and timing properties and how the link
between both can be established.
In embedded control systems development, timing aspects need to be considered from the
perspectives of the different involved engineering disciplines and their theoretical foundations [Tö98]:
• control engineering and control theory, mainly for sampled feedback control systems; this engineering discipline dictates the timing requirements
• real-time computing and real-time computer science for distributed, embedded systems; this engineering discipline dictates the actual timing properties
In the proposed development approach, the distinction of the related engineering disciplines is reflected through the different architecture abstraction levels. The function architecture abstraction level is concerned with control engineering and thus problem specification and analysis while the software architecture abstraction levels are concerned with
implementation in terms of a software component technology and technical E/E system
architecture.
Timing requirements:
The most important timing requirements from a control engineering perspective are [TES97]:
• minimal and preferably constant end-to-end latencies (with tolerances) which must be maintained in accordance with the dynamics of the physical system or process under control
• synchronization requirements on correlated actions, i.e. sensing and actuating instants
Within the sketched development approach in Fig. 2, timing requirements, especially end-to-end latencies, of the top-level functions need to be captured, analyzed, decomposed
and refined along the track of functional analysis and decomposition. Ultimately, for each
top-level function, a set of interconnected elementary functions is derived for which delay requirements on data transformation and data communication are specified. As the
elementary functions are then grouped to RunnableEntities and clustered in AtomicSoftwareComponents of a software component architecture, at this point, software component
specific timing requirements or constraints can be derived from the clustered elementary
functional building blocks.
Timing properties:
Timing properties are highly implementation dependent and can be obtained for an implementation on the technical architecture abstraction level, where they are physically measurable or mathematically computable. On the system level, timing properties essentially manifest as response times, i.e. actually observable end-to-end delays which relate a stimulus or single sampling instant to a response or actuation instant. These response times, however, are difficult to obtain because resource sharing and scheduling strategies (such as priority-driven scheduling) interact with variable computation and communication times.
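To illustrate what such an end-to-end property looks like once per-stage bounds are available, the following sketch (Python; all delay values and the requirement are invented) simply sums best- and worst-case delays along a sensing-to-actuation chain and compares them against an end-to-end latency requirement. Real system-level analyses, such as those discussed in Section 4.3, additionally account for scheduling interference and jitter, which plain summation ignores.

```python
# Compose best-/worst-case delays along a cause-effect chain (sensor task,
# communication, control task, actuator task) and compare the resulting
# end-to-end latency bounds against a requirement.  All numbers are invented.

stages = [                 # (best-case delay, worst-case delay) in milliseconds
    ("sensor task",   (0.2, 0.8)),
    ("CAN frame",     (0.5, 2.0)),
    ("control task",  (1.0, 4.5)),
    ("actuator task", (0.3, 1.2)),
]

best = sum(b for _, (b, _) in stages)
worst = sum(w for _, (_, w) in stages)

requirement_ms = 10.0      # end-to-end latency requirement of the top-level function
print(f"end-to-end latency: [{best:.1f}, {worst:.1f}] ms")
print("requirement satisfied" if worst <= requirement_ms else "requirement violated")
```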
4.1 Validation and verification of timing through best- and worst-case corner case construction
Figure 3: Formal analysis vs. simulation/measurement [HPST07]
The validation of the timing properties of an implemented technical solution is achieved by
conducting a system-level timing analysis for the best- and worst-case corner cases. Corner case construction for system-level timing analysis requires best-case and worst-case
timing properties for artifacts on lower abstraction levels such as best-case and worst-case
execution times (BCETs and WCETs, resp.) for computations and worst-case transmission times (WCTTs) for data communication. The latter can be derived from the payload
of the actual network frames which are sent over the vehicle bus systems.
The validation and verification of timing thus includes the consideration of issues which
concern several artifact abstraction levels in Fig. 2:
• Translating the original timing requirements from the function architecture via the
intermediate virtual architecture abstraction levels to the technical software architecture. This path has already been sketched above.
• Gathering timing properties on best-case and worst-case behavior of an implementation. This applies to the component artifact abstraction level. Challenges lie in the data-dependent determination of best-case and worst-case behavior in order to obtain
timing properties of high precision and with high context-specific expressiveness.
• Analyzing system-level timing. This applies to the system and node artifact abstraction levels (recall that for single ECU systems the node level equals the system level).
In general, methods to obtain specific timing properties can be distinguished into formal
analysis methods and non-formal analysis methods. Non-formal analysis methods such as measurement-based methods are intrusive, i.e. they potentially change the actual execution time by introducing additional code for obtaining measurements; moreover, they can only be applied very late in the development process, when the full technical solution is
implemented. Formal analysis-based methods use models of processor architectures and
memory to compute the execution time or number of cycles for a sequence of instructions.
Formal analysis-based methods can be applied at earlier stages of the development process
as no target system of the complete implementation setup is actually required.
Communication latencies can be quite well determined based on the payload of the actual
network frames which are sent over the vehicle network. In this paper, we focus on
single ECU systems such that communication delays induced by vehicle networks are not
considered.
Another approach for the validation of the temporal behavior of real-time systems, which is not considered further in this paper, utilizes evolutionary algorithms and targets test generation [Weg01].
Fig. 2 also depicts at which stage in the development approach the different methods
for resource consumption determination and system-level timing analysis can be applied,
which information is required as input and which information is obtained as output.
4.2 Resource consumption determination
For the determination of actual timing properties such as worst-case execution times we
consider one representative technique for a formal analysis method, namely abstract interpretation as implemented in the aiT tool from AbsInt [Gmb07a], and a non-formal analysis method, namely measurement as implemented in the RTA-Trace tool from ETAS
[ETA06b].
WCET determination through abstract interpretation with aiT (AbsInt)
The industry-strength tool aiT from AbsInt applies abstract interpretation in order to determine a safe upper bound for the maximum execution time (WCET). A
sequence of code instructions is analyzed using a very precise processor and memory
model (i.e. including cache, pipeline and branch prediction considerations).
For analysis, aiT needs a compiled binary file as well as additional annotations given in a
proprietary format (.ais). The annotations which are kept separate from the actual code
file provide information which cannot be derived automatically by aiT (such as upper bounds on loop iterations) or information to improve the analysis result. The tool performs a reconstruction of the control flow graph (CFG), followed by a value analysis to determine feasible value ranges and a subsequent cache and pipeline analysis. A final path analysis then yields the worst-case execution time by formulating and solving an integer linear program (ILP).
WCET determination through measurements with RTA-Trace
The other timing property determination method which we consider is a measurementbased technique. RTA-Trace [ETA06b] is a tool which directly integrates with the
operating system (OS) that runs an ECU system and which provides, amongst other statistical data, measurements of execution times for instrumented OS tasks and interrupt service routine (ISR) handlers. However, a running system is required. Thus, the method can only be applied at a late stage of the development approach (technical
software architecture abstraction level under the system or node-level artifact abstraction
perspectives).
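The fundamental limitation of such measurement-based methods can be seen in a toy sketch (Python; the trace samples and the margin are invented): the maximum over observed execution times is only a lower bound on the true WCET, since the worst-case path may never have been exercised during the measurement run.

```python
# Measurement-based execution-time estimation: take the maximum over observed
# task execution times.  The observed maximum can underestimate the true WCET
# if the worst-case path was never triggered during the measurement run.

observed_us = [412, 398, 455, 430, 441, 529, 437]   # invented trace samples (microseconds)

observed_max = max(observed_us)
safety_margin = 1.2                                  # ad-hoc margin, no formal guarantee
estimate = observed_max * safety_margin

print(f"observed maximum: {observed_max} us")
print(f"estimate with margin: {estimate:.0f} us (not a safe upper bound)")
```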
Integrating WCET determination in model-based embedded control system development
Automotive control applications are nowadays developed using high-level block
diagram languages from which target-specific implementations are automatically derived
through code generation. Integrating WCET determination with the abstract interpretation
method as implemented in the aiT tool with model-based development and automatic
code generation has the benefit that the required annotations can be generated either
directly during target code generation or as a secondary step after code generation through
the analysis of the well-structured generated target code. When the generated code can be
considered as self-contained, i.e. it can be considered as a module or software component
with a well defined interface specification for structural integration, then even multi-party
development scenarios can be supported.
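A minimal sketch of the idea of generating annotations alongside code generation (Python; the metadata structure and the emitted annotation format are purely hypothetical and do not reflect the actual .ais syntax used by aiT):

```python
# Hypothetical sketch: a code generator knows the static loop bounds of the
# blocks it translates and can emit them as analysis annotations.  The
# annotation format below is invented and NOT the actual aiT .ais syntax.

generated_functions = [          # metadata a code generator could provide
    {"c_function": "Swc_Runnable_10ms",  "loops": [("filter_loop", 16)]},
    {"c_function": "Swc_Runnable_100ms", "loops": [("table_interp", 8), ("smoothing", 4)]},
]

def emit_annotations(functions):
    lines = []
    for f in functions:
        for loop_name, bound in f["loops"]:
            # one annotation line per loop: function, loop label, maximum iterations
            lines.append(f'loopbound "{f["c_function"]}" "{loop_name}" max {bound};')
    return "\n".join(lines)

print(emit_annotations(generated_functions))
```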
4.3 System-level timing analysis
System- and node-level timing analysis has been extensively researched for single resources such as ECUs or bus systems. Recently, several system-level timing analysis
approaches have been proposed which consider both ECU and bus systems and their interactions. The symbolic timing analysis for systems (SymTA/S) approach uses standard
event models for event streams to couple the resource consuming entities on ECUs and bus
systems and thus enables a system-level timing analysis ([RE02], [Ric04]). The SymTA/S
approach is implemented in the SymTA/S tool suite [Gmb07b].
Another system-level timing analysis approach which we do not consider further in this
paper is the modular performance analysis with real-time calculus (MPA-RTC, [CKT03])
approach.
Integrating system-level timing analysis in component-based embedded control
system development
System-level timing analysis is integrated into the considered component-based
embedded control system development approach by making relevant information directly
available for an adequate system-level timing analysis tool. In the case of the SymTA/S
tool suite, this information comprises
• information on scheduling policies of ECU nodes and bus systems: these can be either rather general resource sharing strategies such as the static priority preemptive, round robin, or time division multiple access (TDMA) paradigms, or specific implementations of those which usually occur in automotive implementations, e.g. in the ERCOS ek and RTA-OSEK operating systems, the CAN bus, or the FlexRay bus.
• information on OS tasks and their contents (processes or runnables): the order of the processes or runnables within the OS task, priority, type (preemptive, non-preemptive, cooperative, specific kind of ISR). Furthermore, information on the triggering and potential chaining of OS tasks is required.
• information on best-case and worst-case execution times
• information on minimum and maximum packet sizes for network frames

Figure 4: Implementation of proposed timing considerations: extending the ETAS tool chain (proprietary concepts)
The static part of the information (i.e. without the execution times which are of dynamic
nature) can directly be obtained from tools which have knowledge of such information.
For instance, ECU integration environments or integrated prototyping platforms such as
INTECRIO [ETA07] can provide such information as it is part of the ECU configuration.
The best-case and worst-case execution times which are required for a system-level timing analysis need to be provided separately.
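To give a flavour of the kind of computation such a per-resource analysis performs, the sketch below (Python; the task set is invented) implements the classical response-time analysis for independent periodic tasks under static priority preemptive scheduling. It ignores blocking, jitter, task chaining and bus communication, all of which approaches such as SymTA/S additionally take into account.

```python
import math

# Classical response-time analysis for independent periodic tasks under
# static priority preemptive scheduling.  The task set is invented.
# Each task: (name, period_ms, wcet_ms); list ordered by decreasing priority.
tasks = [
    ("task_1ms",  1.0,  0.2),
    ("task_5ms",  5.0,  1.1),
    ("task_10ms", 10.0, 2.5),
]

def response_time(index: int) -> float:
    period, wcet = tasks[index][1], tasks[index][2]
    r = wcet
    while True:
        # interference from all higher-priority tasks during the window r
        interference = sum(math.ceil(r / p) * c for _, p, c in tasks[:index])
        r_next = wcet + interference
        if r_next == r:
            return r
        if r_next > period:          # deadline (= period) missed, not schedulable
            return float("inf")
        r = r_next

for i, (name, period, _) in enumerate(tasks):
    print(f"{name}: worst-case response time {response_time(i):.2f} ms (deadline {period} ms)")
```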
5 Implementation of proposed timing considerations: extending the ETAS tool chain
Fig. 4 depicts how the ETAS tool chain is extended to support the determination of worst-case execution times and conduct a system-level timing analysis through tool couplings to
aiT (AbsInt) and SymTA/S (Symtavision).
The tool chain enhanced in this way covers a major excerpt of the proposed development approach,
basically starting on the lowest artifact abstraction level of the function architecture and
considering the final technical implementation of a single ECU system.
5.1 Support for resource consumption determination: ASCET - aiT
We have integrated automatic resource consumption determination with model-based development of control applications by integrating the aiT tool into the behavioral modeling tool and integrated development environment ASCET [FHWR06]. ASCET generates suitable aiT annotation files (.aiT) and aiT project files (.apf) and starts WCET analysis requests. The results of the analysis are both presented directly to the user and stored in the file system for further processing.
The current integration allows an automatic WCET determination for single ECU systems completely developed in ASCET. Target-code from ASCET is generated for the
ERCOS ek operating system and Infineon TriCore-based microcontrollers.
As single ASCET projects are considered as modules in INTECRIO, ASCET and
INTECRIO implement a component-based approach with ETAS proprietary means and
file exchange formats. Information on the modules are passed via the proprietary SCOOPIX4 (.six) format from ASCET to INTECRIO where modules from different sources can
be integrated, and a functional validation through virtual or rapid prototyping can be performed. Thus, the WCET analysis results from the ASCET - aiT integration can be interpreted in the context of a module. This is also depicted in Fig. 4.
The separation of control algorithm specification and implementation in ASCET enables
the determination of WCETs for different target platforms from the same model. In this
sense, mapping-specific safe upper bounds for the execution time can be determined directly from the modeling tool without the need to set up an actual target microcontroller
implementation to provide measurements.
5.2 Support for system-level timing analysis: INTECRIO - SymTA/S
In order to conduct a system-level timing analysis, we use the SymTA/S tool suite. A prototypical extension to INTECRIO exports the operating system configuration information
4 SCOOP-IX = source code, objects and physics interchange format
Figure 5: Implementation of proposed timing considerations: extending the ETAS tool chain
(AUTOSAR concepts)
from an INTECRIO project to a format which can be imported by SymTA/S (.csv).
SymTA/S allows information to be loaded incrementally until all information required for a complete analysis model is available. We use this feature to first load the operating system configuration from INTECRIO and then incrementally load the target-specific
WCETs which have been derived with the ASCET - aiT tool coupling into a SymTA/S
system model.
To perform the timing verification, timing requirements are added manually in SymTA/S.
Currently, timing requirements are specified as end-to-end path latencies within SymTA/S.
In our approach, these end-to-end path latencies are the end-to-end timing requirements
which have been derived from the top-level functions in the functional architecture and
translated to the technical software architecture.
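A minimal sketch of this incremental model construction (Python; the CSV column layout, file contents and task names are assumptions for illustration and not the actual INTECRIO export or SymTA/S import format):

```python
import csv
import io

# Step 1: operating-system configuration as exported from the integration
# environment (column layout is an assumption for illustration only).
os_config_csv = io.StringIO(
    "task,priority,preemptive,trigger_period_ms\n"
    "task_1ms,10,yes,1\n"
    "task_10ms,5,yes,10\n"
)

# Step 2: target-specific WCETs as determined by a WCET analysis such as the
# ASCET/aiT coupling (values invented).
wcet_us = {"task_1ms": 200, "task_10ms": 2500}

# Step 3: merge both sources into one analysis model.
model = {}
for row in csv.DictReader(os_config_csv):
    model[row["task"]] = {
        "priority": int(row["priority"]),
        "preemptive": row["preemptive"] == "yes",
        "period_ms": float(row["trigger_period_ms"]),
        "wcet_us": wcet_us.get(row["task"]),   # None until a WCET has been loaded
    }

print(model)
```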
6 Relevance of the work, conclusions and future outlook
6.1 Relevance of the work
The work presented in this paper addresses several issues which have lately been identified
as general research gaps or open issues in AUTOSAR. In [Kop07], Kopetz has identified
that additional levels of abstraction on top of the traditionally used implementation level
are required to adequately account for the needs of the different engineering disciplines involved in overall system development. In [RER06], Richter et al. argue why AUTOSAR requires a timing model now. In the publicly funded project TIMMO, several European industrial and university partners jointly work on a timing model for the AUTOSAR
ECU software architecture and the software component technology.
6.2 Conclusions
The presented approach enables a model- and component-based development of embedded control systems while considering aspects from different engineering disciplines. By integrating timing aspects such as resource consumption determination as well as system-level timing analysis, a reduction of overall development time can be achieved through an
early detection of possible problems and a reduced number of iterative development cycles. Furthermore, the reduction of overall costs through increased utilization of available
resources can be achieved.
6.3 Outlook on future work
Currently we are working on how this development approach and the developed tool
couplings can be applied to AUTOSAR. Fig. 5 depicts how the tool chain presented in
Fig. 4 translates to AUTOSAR. The ASCET-aiT tool coupling is extended to also cover
WCET analysis for AUTOSAR AtomicSoftwareComponents, more precisely for the single RunnableEntities which constitute the InternalBehavior. We use INTECRIO as a vehicle for
target-platform integration of an AUTOSAR ECU system. A prototypical extension to
INTECRIO generates an ECU extract of the system description (AUTOSAR XML) for
an AUTOSAR E/E system. The output XML file is used to drive the generator tools for
required basic software modules for a minimal system, i.e. runtime environment (RTE),
operating system (OS) and communication stack (COM-stack), the latter including the
layers COM and CAN. A complete description can be found in earlier work in [FF08].
7 Acknowledgements
The authors would like to thank the anonymous reviewers for their comments.
References
[AUT06a] AUTOSAR. Software Component Template, 2006.
[AUT06b] AUTOSAR. Technical Overview, 2006.
[CKT03] Samarjit Chakraborty, Simon Künzli, and Lothar Thiele. A General Framework for Analysing System Properties in Platform-Based Embedded System Designs. In Design, Automation and Test in Europe (DATE), Munich, Germany, March 2003.
[ETA06a] ETAS GmbH. ASCET Manual, Version 5.2, 2006.
[ETA06b] ETAS GmbH. RTA-Trace Manual, 2006.
[ETA07] ETAS GmbH. INTECRIO Manual, Version 3.0, 2007.
[FF08] Patrick Frey and Ulrich Freund. Model-Based AUTOSAR Integration of an Engine Management System. 8th Stuttgart International Symposium Automotive and Engine Technology, March 2008.
[FHWR06] Christian Ferdinand, Reinhold Heckmann, Hans-Jörg Wolff, and Christian Renz. Towards Model-Driven Development of Hard Real-Time Systems - Integrating ASCET and aiT/StackAnalyzer. Automotive Software Workshop San Diego (ASWSD 2006): Advanced Automotive Software and Systems Development: Model-Driven Development of Reliable Automotive Services. San Diego, CA (USA), March 15-17, 2006.
[Gmb07a] AbsInt GmbH. aiT Worst-Case Execution Time Analyzers. http://www.absint.com/ait/, 2007.
[Gmb07b] Symtavision GmbH. SymTA/S Scheduling Analysis Tool Suite v1.3.1. http://www.symtavision.com/, 2007.
[HPST07] Wolfgang Haid, Simon Perathoner, Nikolay Stoimenov, and Lothar Thiele. Modular Performance Analysis with Real-Time Calculus. ARTIST2 PhD Course on Automated Formal Methods for Embedded Systems, DTU, Lyngby, Denmark, June 11, 2007.
[Kop07] Hermann Kopetz. The Complexity Challenge in Embedded System Design. Research report, Technische Universität Wien, Institut für Technische Informatik, Treitlstr. 1-3/1821, 1040 Vienna, Austria, 2007.
[RE02] Kai Richter and Rolf Ernst. Event model interfaces for heterogeneous system analysis, 2002.
[RER06] Razvan Racu, Rolf Ernst, and Kai Richter. The Need of a Timing Model for the AUTOSAR Software Standard. Workshop on Models and Analysis Methods for Automotive Systems (RTSS Conference), Rio de Janeiro, Brazil, December 2006.
[Ric04] Kai Richter. Compositional Scheduling Analysis Using Event Models. PhD thesis, Institut für Datentechnik und Kommunikationsnetze, Technische Universität Braunschweig, 2004.
[Tö98] Martin Törngren. Fundamentals of Implementing Real-Time Control Applications in Distributed Computer Systems. Real-Time Systems, 14(3):219-250, 1998.
[TES97] Martin Törngren, Christer Eriksson, and Kristian Sandström. Deriving Timing Requirements and Constraints for Implementation of Multirate Control Applications. Internal report, Department of Machine Design, The Royal Institute of Technology (KTH), Stockholm, Sweden, 1997.
[Weg01] Joachim Wegener. Evolutionärer Test des Zeitverhaltens von Realzeit-Systemen. Shaker Verlag, August 2001.
Clone Detection in Automotive Model-Based Development
Florian Deissenboeck, Benjamin Hummel, Elmar Juergens,
Bernhard Schätz, Stefan Wagner
Institut für Informatik, Technische Universität München, Garching b. München
Jean-François Girard, Stefan Teuchert
MAN Nutzfahrzeuge AG, Elektronik Regelungs- und Steuerungssysteme, München
Abstract: Model-based development is becoming an increasingly common development methodology in embedded systems engineering. Such models are nowadays an
integral part of the software development and maintenance process and therefore have
a major economic and strategic value for the software-developing organisations. Nevertheless almost no work has been done on a quality defect that is known to seriously
hamper maintenance productivity in classic code-based development: Cloning. This
paper presents an approach for the automatic detection of clones in large models as
they are used in model-based development of control systems.
1 Introduction
Software in the embedded domain, and especially in the automotive sector, has reached
considerable size: The current BMW 7 series, for instance, implements about 270 user
functions distributed over up to 67 embedded control units, amounting to about 65 megabytes
of binary code. The up-coming generation of high-end vehicles will incorporate one gigabyte of on-board software [PBKS07]. Due to the large number of variants in product-lines,
high cost pressure, and decreasing length of innovation cycles, the development process in
this domain demands a high rate of (software) reuse. This is typically achieved by the use
of general-purpose domain-specific libraries with elements like PID-controllers as well as
the identification and use of application-specific elements like sensor-data plausibilisation.
As a consequence of this highly reuse-oriented approach, the identification of common elements in different parts of the software provides an important asset for the automotive
model-based development process.
A proposed solution for the increasing size and complexity as well as for managing product
lines is to rely on model-based development methods, i. e., software is not developed on
the classical code level but with more abstract models specific to a particular domain.
These models are then used to automatically generate production code. Especially in the
automotive domain today already up to 80% of the production code deployed on embedded
control units can be generated from models specified using domain-specific formalisms
like Matlab/Simulink [JOB04]. Hence, it is not surprising that such models exhibit a number of
quality defects that are well known from classic programming languages. One such defect
is the presence of redundant program elements or clones.
Cloning is known to hamper productivity of software maintenance in classical code-based
development environments, e.g., [Kos07]. This is due to the fact that changes to cloned
code are error-prone as they need to be carried out multiple times for all (potentially unknown) instances of a clone, e.g., [LPM+ 97]. Hence, the software engineering community
developed a multitude of approaches and powerful tools for the detection of code clones,
e.g., [JMSG07]. However, there has been very little work on cloning in the context of
model-based development. Taking into account the economic importance of software developed in a model-based fashion and the well-known negative consequences of cloning
for software maintenance, we consider this a precarious situation.
1.1 Contribution
This paper presents an approach for the automatic detection of clones in models. The approach is based on graph theory and hence applies to all models using data-flow graphs
as their fundamental basis. It consists of three steps: preprocessing and normalisation of
models, extraction of clone pairs (i. e., parts of the models that are equivalent) and clustering of those pairs to also find substructures used more than twice in the models. Through
the application of a suitable heuristic the approach overcomes the limits of algorithmic
complexity and can be applied to large models (> 10,000 model elements) as they are typically found in the embedded domain. We demonstrate the applicability of our approach
in a case study undertaken with MAN Nutzfahrzeuge, a German OEM of commercial
vehicles and transport systems. Here we implemented our approach for the automatic detection of clones in Simulink/TargetLink models as they are widely used in the automotive
domain.
1.2 Results and Consequences
Our approach showed in the case study that there are clones in typical models used for
code generation. In the analysed models with over 20,000 elements, 139 clone classes
were found which affect over a third of the total model elements. By manual inspection
a significant share of them were classified as relevant. Moreover, the case study shows
that it is feasible to analyse models for clones. Our approach proved to be applicable to
industry-relevant model sizes. Hence, it can be used to prevent the introduction of clones
in models and to identify possible model parts that can be extracted into a domain-specific intellectual-property library to support product line-like development.
2 Cloning
In general, code clones are code fragments that are similar with respect to some definition of similarity [Kos07]. The employed notions of similarity are heavily influenced by the program representation on which clone detection is performed and the task for which it is used.
The central observation motivating clone detection research is that code clones normally
implement a common concept. A change to this concept hence typically requires modification of all code fragments that implement it, and therefore modifications of all clones.
In a software system with cloning, a single conceptual change (e. g., a bug fix) can thus
potentially require modification in multiple places, if the affected source code (or model)
parts have been cloned. Since the localisation and consistent modification of all duplicates
of a code (or model) fragment in a large software system can be very costly, cloning potentially increases the maintenance effort. Additionally, clones increase program volume and
thus further increase maintenance efforts, since several maintenance-related activities are
influenced by program size. [MNK+02] analyses the change history of a large COBOL
legacy software system. They report that modules containing clones have suffered significantly more modifications than modules without cloning, giving empirical indication of
the negative impact of cloning on maintainability. Furthermore, bugs can be introduced,
if not all impacted clones are changed consistently. In [LJ07], Jiang et al. report the discovery of numerous bugs uncovered by analysing inconsistencies between code clones
in open source projects. Despite the mentioned negative consequences of cloning, the
analysis of industrial and open-source software projects shows that developers frequently
copy-and-paste code [LPM+ 97]. Different factors can influence a programmer’s choice to
copy-and-paste existing code instead of using less problematic reuse mechanisms: Lack of
reuse mechanisms in a language is often the source of duplication [KBLN04]. [Kos07]
lists time pressure, insufficient knowledge of consequences of cloning, badly organised
reuse processes or questionable productivity metrics (lines of code per day) as possible
process-related issues. [KG06] describes situations (e. g., experimental validation of design variations) in which cloning of source code, despite its known drawbacks, can be
argued to have some advantages over alternative solutions. But even if cloning is applied
on purpose, however rarely that seems to be the case, the ability to identify and track clones in
evolving software is crucial during maintenance.
Since neither the reasons nor the consequences for cloning are rooted in the use of textual
programming languages as opposed to model-based approaches for software development,
we expect cloning to also impact model-based development.
3 Models For Control Systems
The models used in the development of embedded systems are taken from control engineering. Block diagrams – similar to data-flow diagrams – consisting of blocks and lines
are used in this domain as structured description of these systems. Recently, tools for this
domain – with Matlab/Simulink or ASCET-SD as prominent examples – are used for the
generation of embedded software from models of systems under development. To that end,
these block diagrams are interpreted as descriptions of time- (and value-)discrete control
algorithms. By using tools like TargetLink [dG], these descriptions are translated into the
computational part of a task description; by adding scheduling information, these descriptions are then combined – often using a real-time operating system – to implement an
embedded application.

Figure 1: Examples: Discrete saturated PI-controller and PID-controller

Figure 1 shows two examples of simple data-flow systems using
the Simulink notation. Both models transform a time- and value-discrete input signal In
into an output signal Out, using different types of basic function blocks: gains (indicated
by triangles, e. g., P and I), adders (indicated by circles, with + and − signs stating the
addition or subtraction of the corresponding signal value), one-unit delays (indicated by
boxes with z1 , e. g., I-Delay), constants (indicated by boxes with numerical values, e. g.,
Max), comparisons (indicated by boxes with relations, e. g., Compare), and switches (indicated by boxes with forks, e. g., Set).
Systems are constructed by using instances of these types of basic blocks. When instantiating basic blocks, depending on the block type different attributes are defined; e. g.,
constants get assigned a value, or comparisons are assigned a relation. For some blocks,
even the possible input signals are declared. For example, for an adder, the number of
added signals is defined, as well as the corresponding signs. By connecting them via
signal lines, (basic) blocks can be combined to form more complex blocks, allowing the
hierarchic decomposition of large systems into smaller subsystems. Because of this simple
mechanism of composition, block diagrams are ideally suited for a modular development
process, supporting the reuse of general-purpose control functions as well as applicationdomain specific IP-blocks. However, it also eases a copy-and-paste approach which –
combined with the evolution of product lines typically found with embedded systems and
large block libraries – potentially leads to a substantial number of clones.
4 Approach
In this section we formalise the problem of clone detection in graph-based models and
describe the algorithmic approach used for solving it. Basically our approach consists of
three steps. First we preprocess and normalise the Simulink model, then we extract clone
pairs (i. e., parts of the model that are equivalent), and finally we cluster those pairs to also
find substructures used more than twice in the model.
Figure 2: The model graph for our simple example model
4.1 Preprocessing and Normalisation
The preprocessing phase consists of flattening the models (i. e., inlining all subsystems
by abstracting from the subsystem hierarchy), and removing unconnected lines. This is
followed by the normalisation which assigns to each block and line a label consisting of
those attributes we consider relevant for differentiating them. Later two blocks or lines are
considered equivalent, if they have the same label.
Which information to include in the normalisation labels depends on which kind of clones
should be found. For blocks usually at least the type of the block is included, while semantically irrelevant information, such as the name, the colour, or the layout position, is excluded. Additionally, some of the block attributes are taken into account, e. g., for the
RelationalOperator block the value of the Operator attribute is included, as this decides
whether the block performs a greater or less than comparison. For the lines we store the
indices of the source and destination ports in the label, with some exceptions as, e. g., for
a product block the input ports do not have to be differentiated.
The result of these steps is a labelled model graph with the set of vertices (or nodes)
corresponding to the blocks, the directed edges corresponding to the lines, and a labelling
function mapping nodes and edges to normalisation labels. As a Simulink block can have
multiple ports, each of which can be connected to a line, the graph is a multi-graph. The
ports are not modelled here but implicitly included in the normalisation labels of the lines.
For the simple model shown in Figure 1 the model graph would be the one in Figure 2.
The nodes are labelled according to our normalisation function and the grey portions of
the graph mark the part we would consider a clone.
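The normalisation step can be pictured as computing one label per block and per line; the following sketch (Python; the attribute tables are invented and only roughly follow the examples in the text) illustrates the idea:

```python
# Normalisation: map each block and each line to a label that keeps only the
# attributes relevant for clone comparison.  Attribute names are invented.

RELEVANT_ATTRIBUTES = {            # per block type: attributes that enter the label
    "RelationalOperator": ["Operator"],
    "Trigonometry": ["Function"],
}
UNORDERED_INPUTS = {"Product"}     # input ports need not be distinguished here

def block_label(block_type: str, attributes: dict) -> tuple:
    relevant = RELEVANT_ATTRIBUTES.get(block_type, [])
    return (block_type,) + tuple(attributes.get(a) for a in relevant)

def line_label(dst_type: str, src_port: int, dst_port: int) -> tuple:
    # For blocks with interchangeable inputs the destination port is irrelevant.
    if dst_type in UNORDERED_INPUTS:
        dst_port = 0
    return (src_port, dst_port)

print(block_label("RelationalOperator", {"Operator": "<", "Name": "Compare"}))
print(line_label("Product", src_port=1, dst_port=2))
```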
4.2 Problem Definition
Having the normalised model graph from the previous section, we can now define what a
clone pair is in our context. A clone pair is a pair of subgraphs such that the subgraphs are isomorphic with respect to the labels, do not overlap, and do not consist of unconnected blocks distributed arbitrarily through the model. Note
that we do not require them to be complete subgraphs (i. e., contain all induced lines). We
denote by the size of the clone pair the number of nodes in a sub-graph. Then the goal is to
find all maximal clone pairs, i. e., all such pairs which are not contained in any other pair
of greater size. While this problem is similar to the well-known NP-complete Maximum
Common Subgraph (MCS) problem, it differs in some aspects; e.g., we do not only want
to find the largest subgraph, but all maximal ones.
4.3 Detecting Clone Pairs
As already discussed, the problem of finding the largest clone pair is NP-complete, so we cannot expect to find an efficient (i. e., polynomial-time) algorithm for enumerating all maximal ones which we could use for models containing thousands of blocks. We therefore developed a heuristic approach for finding clone pairs, which is presented next. It basically consists of iterating over all possible pairings of nodes and proceeding in a breadth-first-search (BFS) manner from there. For optimization purposes, a similarity function is used.
The idea of this function is to have a measure for the structural similarity of two nodes
which not only looks at the normalisation labels, but also the neighbourhood of the nodes.
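A much-simplified sketch of this pairing heuristic (Python; the graph is a plain adjacency structure rather than the labelled multi-graph described above, and candidate neighbours are picked greedily instead of by the similarity function):

```python
from collections import deque

# Grow a clone pair from a seed pair of equally labelled nodes by a parallel
# breadth-first search over equally labelled neighbours.  Heavily simplified.

def grow_clone_pair(graph, labels, seed_a, seed_b):
    """graph: node -> set of neighbour nodes, labels: node -> label."""
    paired = {seed_a: seed_b}
    queue = deque([(seed_a, seed_b)])
    while queue:
        a, b = queue.popleft()
        for na in graph[a]:
            if na in paired:
                continue
            # pick an unmatched neighbour of b with the same label, if any
            candidates = [nb for nb in graph[b]
                          if labels[nb] == labels[na] and nb not in paired.values()]
            if candidates:
                paired[na] = candidates[0]
                queue.append((na, candidates[0]))
    return paired            # mapping of one subgraph onto the other

# tiny example: two identical chains Gain -> Sum -> UnitDelay
graph = {1: {2}, 2: {1, 3}, 3: {2}, 4: {5}, 5: {4, 6}, 6: {5}}
labels = {1: "Gain", 2: "Sum", 3: "UnitDelay", 4: "Gain", 5: "Sum", 6: "UnitDelay"}
print(grow_clone_pair(graph, labels, 1, 4))   # {1: 4, 2: 5, 3: 6}
```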
4.4 Clustering Clones
So far we only find clone pairs, thus a subgraph which is repeated n times will result in
n(n−1)/2 clone pairs being reported. The clustering phase has the purpose of aggregating
those pairs into a single clone class. Instead of clustering clones by exact identity (including edges) which would miss many interesting cases differing only in one or two edges,
we perform clustering only on the sets of nodes. Obviously this is an overapproximation
which can lead to clusters containing clones that are only weakly related. However, as we
consider manual inspection of clones to be important for deciding how to deal with them,
those cases (which are rare in practice) can be dealt with there.
What this boils down to is a graph whose vertices are the node sets of the clone pairs and whose edges are induced by the cloning relationship between them. The clone
classes are then the connected components, which are easily found using standard graph
traversal algorithms, or a union-find structure (see, e. g., [CLRS01]) which allows the connected components to be built online, i. e., while clone pairs are being reported, without
building an explicit graph representation.
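The clustering via a union-find structure can be sketched as follows (Python; clone pairs are represented simply as pairs of frozensets of node ids, which is an illustrative simplification):

```python
# Cluster reported clone pairs into clone classes using a union-find structure.
# A clone pair is represented as a pair of node sets (frozensets of node ids).

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]      # path halving
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

clone_pairs = [
    (frozenset({1, 2, 3}), frozenset({4, 5, 6})),
    (frozenset({4, 5, 6}), frozenset({7, 8, 9})),
    (frozenset({20, 21}), frozenset({30, 31})),
]
for a, b in clone_pairs:
    union(a, b)

# group all node sets by their union-find representative
classes = {}
for node_set in {s for pair in clone_pairs for s in pair}:
    classes.setdefault(find(node_set), []).append(node_set)

print(list(classes.values()))   # two clone classes: one with 3 members, one with 2
```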
5 Case Study
This section describes the case study that was carried out with a German truck and bus manufacturer to evaluate the applicability and usefulness of our approach. We performed our experiments on a model provided by MAN Nutzfahrzeuge Group, which is a German-based international supplier of commercial vehicles and transport systems, mainly trucks
and busses. It has over 34,000 employees world-wide of which 150 work on electronics
and software development. Hence, the focus is on embedded systems in the automotive
domain.

Figure 3: An example for the visualisation of clones within Matlab

The analysed model implements the major part of the functionality of the power
train management system, deployed to one ECU. It is heavily parametrised to allow its
adaption to different variants of trucks and busses. The model consists of more than 20,000
TargetLink blocks, which are distributed over 71 Simulink files. Such a file is the typical
development/modelling unit for Simulink/TargetLink models.
5.1 Implementation
For performing practical evaluations of the detection algorithm, we implemented it as a
part of the quality analysis framework ConQAT [DPS05] which is publicly available as
open source software1 . This includes a Java-based parser for the Simulink model file
format which makes our tool independent of the Simulink application. Additionally, we
developed preprocessing facilities for the TargetLink-specific information and for flattening the Simulink models by removing subsystems that induce the models’ hierarchy. To
review the detection results we extended ConQAT with functionality to lay out the detected clones and to visualise them within the Simulink environment that developers are familiar with. The latter is done by generating a Matlab script which assigns different colours to the
blocks of each clone. An example of this is shown in Figure 3.
1 http://conqat.cs.tum.edu/
Figure 4: Size distribution of connected components in the model (logarithmically scaled in x-axis)
Number of models           1    2    3    4
Number of clone classes    43   81   12   3

Table 1: Number of Files/Modelling Units the Clone Classes were Affecting
5.2 Application
To be applicable to real-world models, the general approach described in Section 4 had
to be slightly adapted and extended. For the normalisation labels we basically used the
type, and for some of the blocks which implement several similar functions added the
value of the attribute which distinguishes these functions (e. g., for the Trigonometry block
this would be an attribute deciding between sine, cosine, and tangent). Overall we rather included less information in the normalisation labels so as not to lose potentially interesting
clones. From the clones found, we discarded all those consisting of less than 5 blocks, as
this is the smallest amount we still consider to be relevant at least in some cases. As this
still yielded many clones consisting solely of “infrastructure blocks”, such as terminators
and multiplexers, we implemented a weighting scheme which assigns each block type a weight. Clones with a total weight of less than 8 were also discarded, which ensures that small clones are only considered if their functional portion is large enough.
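The resulting filter can be summarised in a small sketch (Python; the per-block-type weights are invented, only the thresholds of 5 blocks and a total weight of 8 are taken from the text):

```python
# Discard clone candidates that are too small or consist mainly of
# "infrastructure" blocks.  Block weights are invented for illustration;
# the thresholds (>= 5 blocks, total weight >= 8) follow the case study.

BLOCK_WEIGHTS = {"Mux": 0, "Demux": 0, "Terminator": 0,
                 "Gain": 1, "Sum": 1, "UnitDelay": 1, "Switch": 2}

def is_relevant(clone_blocks, min_size=5, min_weight=8):
    weight = sum(BLOCK_WEIGHTS.get(block_type, 1) for block_type in clone_blocks)
    return len(clone_blocks) >= min_size and weight >= min_weight

print(is_relevant(["Mux", "Demux", "Mux", "Demux", "Terminator"]))       # False
print(is_relevant(["Gain", "Sum", "UnitDelay", "Gain", "Sum",
                   "Switch", "Sum", "Gain"]))                             # True
```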
5.3 Results
From the 20454 blocks, nearly 9000 were abstracted away when flattening the model (4314 Inports, 3199 Outports, 1412 Subsystems). The model then consisted of
3538 connected components of which 3403 could be skipped as they consisted of less than
5 blocks. Finally the clone detection was run on 4762 blocks contained in 135 connected
components. The distribution of component size is shown in Figure 4, with the largest
component having less than 300 blocks. We found 166 clone pairs in the models which
resulted in 139 clone classes after clustering and resolving inclusion structures.

Cardinality of clone class     2     3     4     5
Number of clone classes        108   20    10    1

Table 2: Number of Clone Classes for Clone Class Cardinality

Clone size            5 – 10   11 – 15   16 – 20   > 20
Number of clones      76       35        17        11

Table 3: Number of Clone Classes for Clone Size

Of the
4762 blocks used for the clone detection 1780 were included in at least one clone (coverage
of about 37%). As shown in Table 1, only about 25% of the clones were within one
modelling unit (i. e., a single Simulink file), which was to be expected as such clones
are more likely to be found in a manual review process as opposed to clones between
modelling units, which would require both units to be reviewed by the same person within
a small time frame. Table 2 gives an overview of the cardinality of the clone classes found. As mostly pairs were found, this indicates that the clustering phase is not (yet) so
important. However, from our experience with source code clones and legacy systems,
we would expect these numbers to slightly shift when the model grows larger and older.
Table 3 shows how many clones have been found for some size ranges. The largest clone
had a size of 101 and a weight of 70. Smaller clones are obviously more frequent, which
is because smaller parts of the model with a more limited functionality are more likely to
be useful at other places.
5.4 Discussion
Our results clearly indicate that our approach is capable of detecting clones and a manual
inspection of the clones showed that many of the clones are actually relevant for practical
purposes. Besides the “normal” clones, which at least should be documented to make sure
that bugs are always fixed in both places, we also found two models which were nearly
entirely identical. Additionally some of the clones are candidates for the project’s library,
as they included functionality that is likely to be useful elsewhere. We even found clones
in the library (which was included for the analysis), indicating that developers rebuilt
functionality contained in the library they were not aware of. Another source of clones is
the limitation of TargetLink that scaling (i. e., the mapping to concrete data types) cannot
be parameterised, which leaves duplication as the only way for obtaining different scalings.
The main problem we encountered is the large number of false positives as more than
half of the clones found are obviously clones according to our definition but would not be
considered relevant by a developer (e. g., large Mux/Demux constructs). While weighting
the clones was a major step in improving this ratio (without weighting there were about
five times as many clones, but mostly consisting of irrelevant constructs), this still is a
major area of potential improvement for the usability of our approach.
6 Future Work
As the application of clone detection on model-based development has not been studied
before, there is a wide field of possible further research questions. One major direction
consists of improving the algorithm and the general techniques and ideas involved. The
other area complementing this is to have larger case studies or to apply the algorithm to
related problems to get a better understanding of its strengths and weaknesses and its general usefulness. Besides improving modeling quality, another interesting direction is the
application of clone detection with a specific goal in mind, such as finding candidates for a
library or finding clones in a library, where a developer rebuilt existing functionality. One
could also use it the other way round and build a library of anti-patterns which includes
common but discouraged model constructs (such as cascading switch blocks). Clones of these patterns could then indicate potential defects in the model. A different application
would be to aid in building product lines. Both product lines and model-based development are commonly used or introduced in the industry. Using clone detection on models
of different products could help in deciding whether making a product line out of them is
a good idea, and in identifying the common parts of these models. Finally we would like
to apply clone detection not only to Simulink models, but other kinds of models, too. As
the algorithm itself is only based on graph theory, most of the adjustments for adaptation
to other models are in the parsing and preprocessing phase, including normalisation.
7 Conclusions
Model-based development is more and more becoming a routinely used technique in the
engineering of software in embedded systems. Such models, especially when employed
to generate production code, grow large and complex just like classical code. Therefore,
classical quality issues can also appear in model-based development. A highly problematic and well-recognised issue is that of clones, i.e., redundant code elements. So far, no approach or tool for clone analysis of models has been developed. Existing approaches like [TBWK07] that analyse similarities between models target the computation of model differences and generally focus on change management. Considering the massive impact
of clones on quality and maintenance productivity, this is an unsatisfying situation. Moreover, we are in a unique position w.r.t. model-based development. We have the opportunity
to introduce clone detection early in the development of large systems and product lines.
In model-based development, these large systems – and especially product-lines – are now
emerging. This shows also the two main uses of a clone detection approach for models:
(1) redundant parts can be identified that might have to be changed accordingly when one
of them is changed and (2) common parts can be identified in order to place them in a
library and for product-line development.
References
[CLRS01] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. The MIT Press and McGraw-Hill Book Company, 2nd edition, 2001.
[dG] dSpace GmbH. TargetLink Production Code Generation. www.dspace.de.
[DPS05] Florian Deissenboeck, Markus Pizka, and Tilman Seifert. Tool Support for Continuous Quality Assessment. In Proc. 13th IEEE Int. Workshop on Software Technology and Engineering Practice. IEEE Computer Society, 2005.
[JMSG07] L. Jiang, G. Misherghi, Z. Su, and S. Glondu. DECKARD: Scalable and Accurate Tree-Based Detection of Code Clones. In ICSE '07: Proceedings of the 29th International Conference on Software Engineering, 2007.
[JOB04] Michael Jungmann, Rainer Otterbach, and Michael Beine. Development of Safety-Critical Software Using Automatic Code Generation. In Proceedings of SAE World Congress, 2004.
[KBLN04] Miryung Kim, Lawrence Bergman, Tessa Lau, and David Notkin. An Ethnographic Study of Copy and Paste Programming Practices in OOPL. In ISESE '04: Proceedings of the 2004 International Symposium on Empirical Software Engineering, pages 83-92. IEEE Computer Society, 2004.
[KG06] Cory Kapser and Michael W. Godfrey. "Cloning Considered Harmful" Considered Harmful. In WCRE '06: Proceedings of the 13th Working Conference on Reverse Engineering, pages 19-28. IEEE Computer Society, 2006.
[Kos07] Rainer Koschke. Survey of Research on Software Clones. In Duplication, Redundancy, and Similarity in Software, Dagstuhl Seminar Proceedings, 2007.
[LJ07] Lingxiao Jiang, Zhendong Su, and Edwin Chiu. Context-Based Detection of Clone-Related Bugs. In ESEC/FSE 2007, 2007.
[LPM+97] B. Lague, D. Proulx, J. Mayrand, E. M. Merlo, and J. Hudepohl. Assessing the benefits of incorporating function clone detection in a development process. In Proceedings of the International Conference on Software Maintenance, 1997.
[MNK+02] Akito Monden, Daikai Nakae, Toshihiro Kamiya, Shinichi Sato, and Kenichi Matsumoto. Software Quality Analysis by Code Clones in Industrial Legacy Software. In METRICS '02: Proceedings of the 8th International Symposium on Software Metrics, page 87. IEEE Computer Society, 2002.
[PBKS07] Alexander Pretschner, Manfred Broy, Ingolf H. Krüger, and Thomas Stauner. Software Engineering for Automotive Systems: A Roadmap. In L. Briand and A. Wolf, editors, Future of Software Engineering 2007, pages 55-71. IEEE Computer Society, 2007.
[TBWK07] Christoph Treude, Stefan Berlik, Sven Wenzel, and Udo Kelter. Difference Computation of Large Models. In ACM SIGSOFT Symposium on the Foundations of Software Engineering, pages 295-304, 2007.
Model-Based Software Development with SCADE in Railway Automation
Stefan Milius, Uwe Steinke
Siemens AG
Industry Sector, Mobility Division, TS RA RD I SWI
Ackerstraße 22
38126 Braunschweig
stefan.milius@siemens.com
uwe.steinke@siemens.com
Abstract: In this contribution we report on a currently running pilot project on the model-based development of safety-relevant software at Siemens Mobility in the area of railway automation. In the pilot we use SCADE version 6, a tool for the model-based development of safety-relevant software from Esterel Technologies. We briefly present the essential features of SCADE and report on the insights we gained during our pilot phases.
1 Introduction
The development of products for railway automation brings various challenges with it. First of all, the products have to satisfy very high safety requirements. To meet these requirements, elaborate tests, assessments, and certifications have to be carried out. Moreover, railway automation products typically have a life cycle of more than 25 years. The shift from hardware- to software-oriented solutions brings difficulties: with growing system requirements, the complexity of the software increases steadily, and the underlying hardware will change over the course of the product life cycle. Model-based software development is expected to contribute to mastering these problems. The idea is to describe the essential logical parts of a system's software on a level of abstraction that is largely independent of the concrete hardware platform in use. This can go as far as a domain-specific description. It is indispensable, however, that there is no break between model and implementation; that is, it must be avoided that models and implementation become inconsistent with each other. This requires the use of generators for the automatic production of executable code from models. But for such generators to be usable at all for the development of safety-related software, the quality of the translation must meet very high standards and must be justifiable towards certification authorities.
One advantage of model-based development over classical development is the possibility of using modern analysis methods and tools at the model level (e. g., static model analysis, model simulators, or model checking). This enables the early detection of logical errors in a piece of software before testing on the target hardware with specialized and usually expensive tools becomes necessary. The use of model-based development also shortens the time between detecting and fixing an error.
For the model-based development of safety-relevant software, Siemens Mobility Rail Automation uses the tool SCADE (Certified Software Factory) from Esterel Technologies. SCADE allows the logical functionality of embedded systems software to be modeled on a level of abstraction that abstracts from the underlying hardware platform. The modeling language is based on the so-called "synchronous hypothesis": SCADE models are cyclically, synchronously clocked data-flow and state machines. The SCADE modeling suite supports simulation and testing at the model level. An essential part of the tool is a code generator that will in the near future be qualified for software development according to the common standards of the aviation and railway automation domains.
In Section 2 we give an overview of the essential elements of the SCADE tool suite, version 6. In Section 3 we then report on the use of SCADE in a pilot project at Siemens Mobility. In Section 4 we discuss our findings from the pilot project. Finally, in Section 5 we give an outlook on future work concerning model-based development with SCADE at Siemens.
2 Overview of SCADE 6
In this section we give a brief overview of the most important language constructs of SCADE 6 and explain the main elements of the SCADE tool suite. Modeling in SCADE fundamentally differs from modeling in well-known standard modeling languages such as UML or SysML. The focus of SCADE lies in describing a complete implementation at the model level and, from our point of view, not at the level of a system or software architecture description. The degree of abstraction of the language is correspondingly low, but appropriate. SCADE offers a simple, mathematically exact semantics that favors the use of analysis techniques (static model analysis or model checking).
2.1 Language Constructs
The modeling language SCADE is a further development of the data-flow description language LUSTRE [H91].
SCADE provides modeling constructs for cyclically, synchronously clocked software systems. All modeling in SCADE is based on the "synchronous hypothesis", which states that the computation of one clock cycle of a model consumes no time. This assumption is of course a simplification that real systems do not satisfy. Nevertheless, all logical sequences and causal relationships can be modeled. For practical application only the following restriction results: the computation time consumed by the model in one cycle must be smaller than the admissible cycle time defined for a cycle.
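In deployed systems this restriction is typically enforced by a simple cyclic executive around the generated step function. The following C sketch only illustrates the idea; the function names, the cycle time, and the overrun handling are illustrative assumptions and not part of any SCADE-generated interface.

```c
#include <stdbool.h>
#include <stdio.h>

#define CYCLE_TIME_MS 10   /* admissible cycle time fixed at design time (assumed value) */

static void read_inputs(void)   { /* sample sensor values for this cycle */ }
static void model_step(void)    { /* one synchronous reaction of the model (stub) */ }
static void write_outputs(void) { /* publish actuator values computed in this cycle */ }
static bool wait_for_next_tick(void) { return true; /* platform timer stub; false = overrun */ }

int main(void)
{
    for (int cycle = 0; cycle < 100; ++cycle) {  /* bounded here only for demonstration */
        read_inputs();
        model_step();      /* the WCET of these three calls must stay below CYCLE_TIME_MS */
        write_outputs();
        if (!wait_for_next_tick()) {
            fprintf(stderr, "cycle overrun in cycle %d\n", cycle);
            return 1;
        }
    }
    return 0;
}
```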
Here is an overview of the available modeling constructs:
Data: SCADE 6 allows the handling of structured data. All data structures are static. Structures and arrays can be built from the base types (real, int, bool). The modeling language is strongly typed.
Data-flow descriptions: The modeling language contains the usual logical and arithmetic operators as well as conditional operators. In addition, there are temporal operators for accessing data-flow values of past cycles. Operators for accessing structures and arrays, up to dynamically indexed array access, are available. There is, however, no way to describe loops explicitly. Instead, there are the so-called "algebraic loops" map and fold, which are well known from functional programming. Map serves to apply an operator on a simple data type element-wise to an array of that data type. With fold, computations can be accumulated over an array. Map and fold are versatile language constructs that, when applied correctly, lead to very clear and understandable models. However, these constructs require a "familiarization phase" for programmers with an imperative background. Figure 1 shows an example of a SCADE model.
Figure 1: SCADE model of a simple counter
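To relate the map and fold iterators discussed above to conventional loop code, the following C sketch shows roughly what an element-wise map followed by an accumulating fold corresponds to in an imperative setting; the operators, array contents, and names are invented for illustration and are not generated code.

```c
#include <stdio.h>

#define N 8

/* Operator applied element-wise by a "map": saturate a raw value to [0, 100]. */
static int saturate(int v)
{
    if (v < 0)   return 0;
    if (v > 100) return 100;
    return v;
}

/* Operator used by a "fold": accumulate a running sum. */
static int add(int acc, int v) { return acc + v; }

int main(void)
{
    int raw[N] = { -5, 20, 140, 60, 30, 99, 101, 7 };
    int sat[N];
    int sum = 0;

    for (int i = 0; i < N; ++i)      /* map: element-wise application of saturate */
        sat[i] = saturate(raw[i]);

    for (int i = 0; i < N; ++i)      /* fold: accumulate add over the array       */
        sum = add(sum, sat[i]);

    printf("saturated sum = %d\n", sum);
    return 0;
}
```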
State machines: The state machines in SCADE are synchronously clocked, hierarchical state machines. They are comparable to the statecharts known from the standard modeling language UML. An important difference is the strictly synchronous semantics: per cycle, exactly one transition is active and executed in each (parallel) state machine. Cyclic dependencies between model elements are forbidden and are treated like syntax errors at generation time. Figure 2 shows an example of a state machine.
An advantage of SCADE 6 is that the two description means, data flow and state machines, are fully integrated and can therefore be nested into one another arbitrarily.
Figure 2: A state machine in SCADE
2.2 Tool Suite
With the SCADE Suite, a comprehensive tool for the modeling language SCADE is available. Figure 3 shows an overview of the SCADE Suite. In addition, various connections to other tools (gateways) can be used:
Graphical editor: Every syntactic model element has a graphical representation. In the SCADE editor, models are built from these graphical elements. It is also possible to work with the textual modeling language.
Code generator (KCG): The code generator produces C code from SCADE models. The qualification of the code generator for the development of safety-related software according to DO-178B, Level A, and according to CENELEC EN 50128 for SIL 4 is planned for the remainder of 2008. Together with the certified CADUL compiler used at Siemens Mobility Rail Automation, this yields a fully automated generation chain from the model to the target system.
Simulator/debugger: It allows the code generated from a SCADE model to be executed, tested, and debugged at the model level. The simulator can be controlled via TCL scripts and has an automation interface.
Model Test Coverage: This tool component allows structural coverage measurements to be taken for a given test suite of a model. A model can be instrumented automatically for two different coverage criteria: Decision Coverage (DC) and Modified Condition/Decision Coverage (MC/DC). Other criteria can easily be provided for (self-modeled) libraries.
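As a reminder of what the two criteria demand, consider a guard with two conditions, sketched below in C (the guard and the chosen values are illustrative and not taken from the pilot project): Decision Coverage only requires the decision as a whole to evaluate to both true and false, whereas MC/DC additionally requires demonstrating for each condition that it independently affects the outcome.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative guard with two conditions, as it might appear in a model. */
static bool guard(int min, int p, int max)
{
    return (min <= p) && (p <= max);
}

int main(void)
{
    /* DC: two test cases suffice, one making the decision true, one false.                    */
    /* MC/DC: three test cases are needed for two conditions, e.g.:                            */
    /*   (1) min<=p true,  p<=max true   -> decision true                                      */
    /*   (2) min<=p false, p<=max true   -> decision false (first condition flips the outcome) */
    /*   (3) min<=p true,  p<=max false  -> decision false (second condition flips the outcome)*/
    printf("%d %d %d\n",
           guard(0, 5, 10),   /* case (1) */
           guard(6, 5, 10),   /* case (2) */
           guard(0, 5, 4));   /* case (3) */
    return 0;
}
```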
Figure 3: Workflow with the SCADE Suite (the requirements feed the graphical editor for model creation; the Design Verifier, Model Test Coverage, and the Simulator/Debugger with its test suite work on the model; fully automatic generation leads from the model via the code generator (KCG) to C sources, which a certified C compiler, e. g. CADUL, translates into object code)
Design Verifier: This component allows the formal verification of certain properties of models. The properties to be verified are formulated in SCADE. This has the advantage that no new language or logic has to be used to specify properties. However, the expressiveness of the SCADE operators is limited, for example in comparison to temporal logics (LTL, CTL). Only so-called "safety properties" can be verified, no "liveness properties".1
Requirements Gateway: The SCADE Suite includes a version of the tool Reqtify (by TNI-Software). It allows elements of a SCADE model to be linked to their requirements, which are captured, for example, in a tool such as DOORS. Classically coded parts of a software or test cases can also be linked, so that continuous, tool-supported traceability from the requirements down to implementation and test becomes possible.
Further gateways: SCADE has additional gateways to other third-party tools: Rhapsody, Simulink, and ARTiSAN (the ARTiSAN gateway has been announced for SCADE 6.1, October 2008).
1 "Safety properties" state that certain states of a system never occur. "Liveness properties" state that certain desired states will, in any run starting from an initial state, certainly occur at some point in the future.
3 Use of SCADE at Siemens TS RA
SCADE 6 has been used in a pilot project at Siemens Mobility since January 2007. In this pilot project, software components of axle counting systems are modeled with SCADE. Axle counters are systems used in railway automation for track vacancy detection. Figure 4 shows the use of an axle counting system at a fully automatic level crossing.
Figure 4: Fully automatic level crossing with axle counters
The axle counting sensors shown (AzE, AzA1, and AzA2) are based on the inductive principle and are influenced by a wheel passing over them; the direction of travel of a wheel can also be determined. The evaluation of the sensor data takes place in the axle counting computer (AzR), which is located in the concrete switching house. The AzR is a fail-safe computer following the Simis principle. The fail-safe computer fulfils Safety Integrity Level (SIL) 4, the highest defined level, see [N03]. Accordingly, software SIL 4 applies to the software running on the computer, which implies a number of measures for the software development process, see [N01]. Among other things, formal methods are explicitly recommended. These requirements provide a further motivation for the use of model-based software development methods.
In the first part of our pilot project, the part of the software of an existing axle counting system that carries the functionality of a track section was replaced by a SCADE model. The goals of this first part of the pilot project were:
- Creation of the necessary technical prerequisites for productive use (e. g., production environment, test environment).
- Making the modeled components run in the real system.
- Evaluation of the modeling and the tool support of SCADE 6.
- Building up experience with the SCADE method.
After completing the first phase of the pilot project and based on the results obtained, it was decided to use SCADE in the productive area as well. Since September 2007, in the second phase of the SCADE pilot, a component of a system currently under development has been modeled with SCADE. It is an axle counting system used in interlockings for track vacancy detection. The most important goals of the second phase are:
- End-to-end development of the axle counting system from the requirements to certification.
- Use of formal verification for the early detection of errors.
- Use of further model-based techniques in order to integrate SCADE into a continuous model-based development process.
A software component was modeled with SCADE that implements the reset procedure of a track section in an interlocking. The reset operation forces a change of the occupancy state of a track section from "occupied" to "clear". This operator action is permitted in an interlocking only in special cases and is tied to conditions. The reset procedure monitors compliance with these conditions and carries out the reset operation.
The requirements for the software component were captured in the requirements engineering tool DOORS [D] and linked to their realization in SCADE. In this way, traceability is provided beyond what the standard [N01] prescribes: the standard does not demand linking down to source code level (in our case, down to the SCADE model level), but our approach ensures it nevertheless.
As part of the pilot, we create a statistical test model. It is derived from the requirements independently of the implementation model in SCADE. Figure 5 shows the basic procedure, see also [Sc07]. The test model is essentially a Markov chain. In a cooperation with Fraunhofer IESE, and within a diploma thesis located at the Institute for Software and Systems Engineering of TU Braunschweig, we use the tool JUMBL [J] to generate test cases from this test model. These can be used for stress tests of the model. With the help of the probabilities in the Markov chain, certain usage scenarios can be tested more or less frequently. It has turned out that this procedure detects errors already in the modeling phase that would otherwise only be discovered in much later phases.
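The core idea behind such usage-model-based generation can be sketched as a random walk over the Markov chain: starting from an initial state, transitions are chosen according to their probabilities until a final state is reached, and the traversed stimuli form one test case. The following C sketch illustrates this with a hypothetical three-state usage model; the states, probabilities, and stimulus names are invented for illustration and have nothing to do with JUMBL's actual input format or the pilot project's model.

```c
#include <stdio.h>
#include <stdlib.h>

#define N_STATES 3
enum { IDLE, RESET_REQUESTED, DONE };   /* hypothetical usage states */

/* Transition probabilities of the usage model; each row sums to 1.0. */
static const double P[N_STATES][N_STATES] = {
    /* IDLE            */ { 0.6, 0.4, 0.0 },
    /* RESET_REQUESTED */ { 0.2, 0.3, 0.5 },
    /* DONE            */ { 0.0, 0.0, 1.0 },
};

static const char *stimulus[N_STATES] = { "wait", "request_reset", "observe_clear" };

static int next_state(int s)
{
    double r = (double)rand() / RAND_MAX, acc = 0.0;
    for (int t = 0; t < N_STATES; ++t) {
        acc += P[s][t];
        if (r <= acc) return t;
    }
    return N_STATES - 1;
}

int main(void)
{
    srand(42);                       /* fixed seed: reproducible test case */
    int s = IDLE;
    printf("test case:");
    while (s != DONE) {              /* walk until the final state is reached */
        printf(" %s", stimulus[s]);
        s = next_state(s);
    }
    printf("\n");
    return 0;
}
```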
For testing the SCADE model we also make use of the Model Test Coverage feature of the SCADE tool suite. As already mentioned, it allows the structural coverage of the model achieved by the test suite generated from the test model to be demonstrated. This is also a requirement of the standard [N01] for software SIL 4.
Figure 5: SCADE model and test model (elements of the diagram: requirements, SCADE model, test model, code generator, generated code, JUMBL, test cases, verification, and analysis; the steps are partly manual and partly automatic)
One problem in testing arises from embedding the model into the target system or into classically coded software. To achieve this embedding, a wrapper has to be implemented that maps the interfaces of the model onto those of the already existing target software system. Since this wrapper is programmed classically in the target language, it can no longer be tested at the model level; a module and integration test is needed here. This test can take place within an existing test environment for software components. In the pilot, the tool RT-Tester by Verified International [V] is used for this integrative test step.
4 Results and Experiences
In this section we report on the insights we have gained in the course of the two pilot phases.
An essential goal of using a model-based software development tool was the separation of the functional-logical parts of the software from the specific hardware platform. This can be realized very well with SCADE. Owing to its precise semantics, the chosen level of abstraction on the one hand allows and enforces the complete description of the logical functionality and on the other hand abstracts sufficiently from the target system. Through the use of the code generator, the model enters the implementation without a break. In our opinion, SCADE models are therefore in principle also suitable for being ported easily to other target systems. This can be done by changing the thin wrapper layer of a model or by using a different code generator. Porting to other target systems has, however, not yet been verified in our pilot. Overall, the paradigm of a cyclically clocked system underlying SCADE and the synchronous hypothesis are well suited to implementing typical automation tasks; typical process computers, for example, operate as cyclically clocked systems.
The existing code generator offers several optimization levels; the code generated at a low optimization level is still of insufficient quality, whereas code from the highest optimization level 3 is of usable quality. Model-wide global optimizations were not recognizable in the generated code in our examples. One difficulty is that in the interface of the code generated for a model, the separation of internal state and input/output data is not consistent: for each SCADE node, the code generator produces only one C data structure, which contains both the output data and the data of the internal state of the SCADE node. This considerably impairs the exchangeability of models within their runtime environment; a model should be easy to replace by a differently implemented model with the same interface. Furthermore, the generated code does not conform to the MISRA guidelines [M04]. The C code generated from hierarchical state machines at optimization level 0 contains nested switch-case blocks in several places; these have only one case alternative and no default block.
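To illustrate the interface issue (this is not the actual KCG output, which differs in names and details), the following C sketch contrasts a combined output-and-state structure, as criticised above, with a separated variant in which a differently implemented node with the same I/O interface would be a drop-in replacement.

```c
#include <stdio.h>

/* Style criticised above: outputs and internal state live in one struct
 * (names invented; the real generated code differs in naming and details). */
typedef struct {
    int count;   /* output         */
    int prev;    /* internal state */
} CounterCombined;

static void step_combined(CounterCombined *self, int reset)
{
    self->prev  = reset ? 0 : self->prev + 1;
    self->count = self->prev;
}

/* Separated style: callers see only inputs and outputs, the state stays in its own type,
 * so a differently implemented node with the same I/O interface can replace this one. */
typedef struct { int reset; } CounterIn;
typedef struct { int count; } CounterOut;
typedef struct { int prev;  } CounterState;

static void step_separated(CounterState *st, const CounterIn *in, CounterOut *out)
{
    st->prev   = in->reset ? 0 : st->prev + 1;
    out->count = st->prev;
}

int main(void)
{
    CounterCombined c  = { 0, 0 };
    CounterState    st = { 0 };
    CounterIn       in = { 0 };
    CounterOut      out;

    for (int i = 0; i < 3; ++i) {
        step_combined(&c, 0);
        step_separated(&st, &in, &out);
    }
    printf("combined: %d, separated: %d\n", c.count, out.count);
    return 0;
}
```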
There are some problematic points in the modeling itself. First, due to the relatively low level of abstraction, modeling at the level of a system architecture is not possible; modeling with SCADE is rather located at the implementation level. As a result, there currently remains a modeling gap between the requirements and the implementation. Esterel Technologies is presently developing a gateway between SCADE and ARTiSAN that is intended to close this gap.
A further difficulty is integrating SCADE models into environments that do not follow the cyclic synchronous paradigm. This case occurs when, as in our situation, only parts of a legacy software are replaced by SCADE models, or when the target system itself is not a cyclic synchronous data-flow machine. Embedding is still feasible in such cases with a suitable wrapper, but the interface between model and runtime environment then has to translate between the two "worlds". As a consequence, results obtained by formal verification at the model level require careful interpretation. Moreover, in such a case the wrapper has to take on additional functionality and becomes so complex that a separate, extensive module test is required.
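The kind of translation such a wrapper has to perform can be sketched in C as follows; the event types, the model interface, and the cycle-driven entry point are hypothetical and merely illustrate folding an event-driven legacy environment into the cyclic synchronous world of the model.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical legacy-side events and model-side interface; all names are invented. */
typedef enum { EV_NONE, EV_SECTION_OCCUPIED, EV_RESET_REQUEST } LegacyEvent;
typedef struct { bool occupied; bool reset_request; } ModelIn;
typedef struct { bool section_clear; }                ModelOut;

/* Stub standing in for the generated step function: one synchronous reaction. */
static void model_step(const ModelIn *in, ModelOut *out)
{
    out->section_clear = in->reset_request && !in->occupied;
}

/* Stubs standing in for the event-driven legacy environment. */
static LegacyEvent pending[] = { EV_RESET_REQUEST, EV_NONE };
static int pending_idx = 0;
static LegacyEvent legacy_poll_event(void) { return pending[pending_idx++]; }
static void legacy_report_clear(bool clear) { printf("section clear: %d\n", clear); }

/* Wrapper: called once per cycle by the target runtime; folds all events pending
 * in the event-driven environment into one synchronous input vector. */
static void wrapper_cycle(void)
{
    ModelIn  in  = { false, false };
    ModelOut out = { false };
    LegacyEvent ev;

    while ((ev = legacy_poll_event()) != EV_NONE) {
        if (ev == EV_SECTION_OCCUPIED) in.occupied      = true;
        if (ev == EV_RESET_REQUEST)    in.reset_request = true;
    }
    model_step(&in, &out);
    legacy_report_clear(out.section_clear);
}

int main(void) { wrapper_cycle(); return 0; }
```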
Regarding testing, our experiences are positive. By describing the logical-functional parts of a software at the model level, these parts can be verified early and independently of the target hardware. Due to the synchronous hypothesis, however, there are important aspects that cannot readily be tested in the model, namely real-time aspects and performance; these must still be tested on the target hardware. A weak point regarding testing is the missing integration of the SCADE simulator with external debuggers. A test environment in which SCADE models can be tested together with hand-written code unfortunately does not exist yet.
Accompanying our SCADE pilot, an independent evaluation of the SCADE Suite was carried out by Siemens CT. The results of this evaluation were positive. In the investigation, a development process based on SCADE was compared with a mature process using the programming language C. It was found that SCADE overall compares favorably with conventional development, although the strengths of SCADE lie in construction and design; room for improvement was seen in the test support.
5 Conclusion and Outlook
In summary, SCADE 6, despite its limitations, already realizes model-based development. The separation of logic and hardware is achieved, so that, particularly for long product life cycles, the essential content of a system is preserved independently of the hardware. The use of analysis techniques also appears promising (e. g., model checking with the SCADE Design Verifier). In this area, however, we have unfortunately not yet been able to gather experience due to the missing availability of the Design Verifier for SCADE 6. Whether the use of model checking merely supports testing or ultimately simplifies the certification of (partially) modeled systems remains open for now; intensive discussions with the certification authorities are still required here.
Before SCADE can be used broadly in the productive area, some improvements have to be realized; tool stability is not yet ensured. In the future, a better concept for integrating test environments for models with those for classically programmed software and a more intuitive user guidance in the editor would also be desirable. Within our pilot we will investigate how test cases from the model-level tests can be reused for the integration test, which includes the wrapper.
Despite the limitations, the activities on model-based development with SCADE at Siemens Mobility Rail Automation are being intensified. SCADE is used, for example, for the software development of train control systems. On the relevant target computers of the Simis series, a runtime environment will be used that follows the Execution Chain pattern [Sz06] and operates in a cyclically clocked manner. This runtime environment offers a suitable framework for SCADE models.
References
[D] Telelogic DOORS, Telelogic AB, Malmö, http://www.telelogic.com/products/doors
[J] Java Usage Model Builder Library (JUMBL), Software Quality Research Laboratory, University of Tennessee, http://www.cs.utk.edu/sqrl/esp/jumbl.html
[H91] Halbwachs, N. et al.: The synchronous dataflow programming language LUSTRE, Proceedings of the IEEE 79, Nr. 9, 1991, 1305-1320.
[M04] MISRA: MISRA-C: Guidelines for the Use of the C Language in Critical Systems, October 2004.
[N01] Norm CENELEC EN 50128: Railway application – Communications, signaling and processing systems – Software for railway control and protection systems, March 2001.
[N03] Norm CENELEC EN 50129: Railway application – Communications, signaling and processing systems – Safety related electronic systems for signaling, December 2003.
[Sc07] Schieferdecker, I.: Modellbasiertes Testen, OBJEKTspektrum May/June 2007, 39-45.
[Sz06] Schütz, D.: Execution Chain. In Longshaw, A.; Zdun, U. (eds.): Proceedings of the 10th European Conference on Pattern Languages of Programs 2005, 1st edition, 2006.
[V] Verified Systems International GmbH, Bremen, http://www.verified.de
Tool Support for Developing Advanced Mechatronic
Systems: Integrating the Fujaba Real-Time Tool Suite with
CAMeL-View∗
Stefan Henkler and Martin Hirsch
Software Engineering Group
University of Paderborn, Germany
shenkler—mahirsch@uni-paderborn.de
Abstract: The next generation of advanced mechatronic systems is expected to use its software to exploit local and global networking capabilities to enhance its functionality and to adapt its local behavior when beneficial. Such systems will therefore
include complex hard real-time coordination at the network level. This coordination is
further reflected locally by complex reconfiguration in form of mode management and
control algorithms. We present in this paper the integration of two tools which allow
the integrated specification of real-time coordination and traditional control engineering specifically targeting the required complex reconfiguration of the local behavior.
1 Introduction
This paper is based on the formal tool demonstration and related paper presented at ICSE 2007 in Minneapolis [BGH+07].
For mechatronic systems [BSDB00], which have to be developed in a joint effort by teams
of mechanical engineers, electrical engineers, and software engineers, the advances in networking and processing power provide many opportunities. It is therefore expected that
the next generation of advanced mechatronic systems will exploit these advances to realize more intelligent solutions where software is employed to exploit local and global
networking capabilities to optimize and enhance their functionality by operating cooperatively. The cooperation in turn permits these systems to decide when to adapt their local
behavior taking the information of cooperating subsystems into account.
The development of such advanced mechatronic systems will therefore at first require
means to develop software for the complex hard real-time coordination of its subsystems at
the network level. Secondly, software for the complex reconfiguration of the local behavior
in the form of mode management and control algorithms is required, which has to properly coordinate the local reconfiguration with the coordination at the network level.
∗ This work was developed in the course of the Special Research Initiative 614 – Self-optimizing Concepts and Structures in Mechanical Engineering – University of Paderborn, and was published on its behalf and funded by the Deutsche Forschungsgemeinschaft.
The envisioned approach is complicated by the fact that classical engineers and software
engineers employ different paradigms to describe their aspect of these systems. In software engineering discrete models such as state charts are frequently used to describe the
required interaction, while the classical engineers employ continuous models to describe
and analyze their control algorithms.
To enable the model-based development of the outlined advanced mechatronic system, an
integration between these two paradigms is required which fulfills the outlined requirements. To provide an economically feasible solution, the required integration must further
reuse the concepts, analysis techniques, and even tools of both involved paradigms where
possible.
In [BGH+05], we demonstrated how to use the Fujaba Real-Time Tool Suite1 to develop safety-critical real-time systems conforming to the model-driven engineering (MDE) approach. We present in this paper the integration of two tools to bridge the development of software engineering for real-time systems with control engineering: the open source UML CASE tool Fujaba Real-Time Tool Suite and the CAE tool CAMeL-View. The employed concepts for the real-time coordination [GTB+03] and the tool support for them have been presented in earlier work. For the local reconfiguration only the concepts have been presented [GBSO04], while in this paper the developed tool support is described.
In the remainder of this paper, we first introduce the modeling concepts for the integrated
description of discrete and continuous models in Section 2. Then, we outline in Section
3 how this conceptual integration has to be paralleled at the execution level. Afterward,
we discuss the benefits of our approach with respect to analysis capabilities in Section
4 and compare our approach with the related work in Section 5. We finally provide our
conclusions and an outlook on planned future work. In the last Section 6, we give an
overview of the planned tool demonstration.
2 Integration at the model level
Modeling advanced mechatronic systems requires the integration of modeling approaches used in software engineering and traditional engineering disciplines. To describe our approach for modeling these systems, we first introduce Mechatronic UML for specifying the discrete parts of a system in Section 2.1. As mechatronic systems have continuous parts too, we introduce block diagrams in Section 2.2. We finally present our approach for modeling the required integration of the different modeling paradigms in Section 2.3.
1 http://www.fujaba.de
2.1 Discrete Specification
The software architecture of the considered mechatronic systems is specified in Mechatronic UML [BGT05] with components which are based on UML [Obj04] components.
The components are self-contained units, with ports as the interface for communication.
A component can contain other components, and events can be delegated from the top-level component to its subcomponents. The internals of a component are modeled with
an extended version of UML State Machines. Ports are the only external access point
for components and their provided and required interfaces specify all interactions which
occur at the port. The interaction between the ports of the components takes place via
connectors, which describe the communication channels.
As UML state machines are not sufficient to describe complex time-dependent behavior
(cf. [GO03]), we introduced Real-Time Statecharts (RTSC) [BGS05] as an appropriate
modeling language for the discrete real-time behavior of a component and for the event-based real-time communication between components. Real-Time Statecharts contain various Timed Automata [ACD90] related constructs. In contrast to Timed Automata, firing
a transition in a RTSC consumes time.
2.2 Continuous Specification
Mechatronic systems contain software to continuously control the mechanical behavior. The
standard notation for control engineering is a block diagram which is used to specify
feedback-controllers as well as the controlled plant. Consequently, our approach uses
block diagrams for the structural and behavioral specification of continuous components.
Block diagrams generally consist of basic blocks, which specify behavior, and hierarchy blocks, which group basic blocks and other hierarchy blocks. Each block has input and output
signals. The unidirectional interconnections between the blocks describe the transfer of
information. The behavior of basic blocks is usually specified by differential equations,
specifying the relationship between the block’s inputs and outputs.
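As a small illustration of what a basic block amounts to at execution time, the following C sketch integrates a first-order lag element with a forward-Euler step; the block, its parameters, and the step size are invented for illustration and do not stem from CAMeL-View.

```c
#include <stdio.h>

/* Basic block "first-order lag": T * dy/dt + y = K * u, discretized with forward Euler. */
typedef struct {
    double K;   /* gain              */
    double T;   /* time constant [s] */
    double y;   /* state = output    */
} LagBlock;

static double lag_step(LagBlock *b, double u, double dt)
{
    double dy = (b->K * u - b->y) / b->T;   /* differential equation of the block */
    b->y += dt * dy;                        /* Euler integration over one period  */
    return b->y;                            /* output signal                      */
}

int main(void)
{
    LagBlock block = { 1.0, 0.5, 0.0 };
    for (int k = 0; k < 5; ++k)             /* step response to a unit input */
        printf("y[%d] = %f\n", k, lag_step(&block, 1.0, 0.01));
    return 0;
}
```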
2.3 Hybrid Specification: Integration of Feedback-Controller Configurations
As mentioned in Section 1, for mechatronic systems, it is not sufficient to specify how
discrete states of a component change: Dependent on the current state, the components
have to apply different feedback-controllers. In [BGT05], we introduced a new approach
for the specification of hybrid behavior, which integrates discrete and continuous behavior,
and even supports reconfiguration.
The idea is to associate to each discrete state of a component a configuration of subordinated components. Such a configuration consists of the subordinated components and their
current connection. These components are either pure continuous components (feedback controllers) or discrete or hybrid components. If the subordinated components are discrete
or hybrid components, the configuration of these subordinated components consists also
of the current state of the subordinated discrete or hybrid component. As hybrid components have a dynamic interface and expose just the ports required in their current state, a configuration of components likewise shows just the in- and out-ports which are actually required or used in the current state of the superordinated component.
This kind of modeling leads to implied state changes: When the superordinated component changes its state, this implies reconfiguration of the subordinated components. The
subordinated components reconfigure their communication connections and –if specified–
their discrete state. Such implied state changes can imply further state changes, if the
subordinated components embed further components. This kind of modeling leads to reconfiguration across multiple hierarchical levels.
Compared to state of the art approaches, this approach has the advantage that models are
of reduced size and that analyses require reduced effort (see Section 4).
3 Integration at runtime
Our approach for developing reconfigurable mechatronic systems applies the model-driven
development approach to develop software systems at a high level of abstraction to enable analysis approaches like model checking. Therefore, ideally, we start with platform
independent models to enable the compositional formal verification (cf. [GTB+ 03]). Afterward, the platform independent model must be enhanced with platform specific information to enable code generation. The required platform specific information is based on
a platform model, which specifies the platform specific worst case execution times. After
providing the platform specific information, we generate code from the models. In the
remainder of this section, the generated code is described. Therefore, we introduce the
evaluation of the system's components. Since the considered mechatronic systems have both continuous and discrete parts, we have to consider data-flow-based and event-based evaluation. Complex reconfigurations lead to a large number of possible evaluation configurations and require synchronization between the discrete and continuous parts. Our approach coordinates reconfiguration with the evaluation of the data flow and, further, preserves the data flow despite reconfiguration. In the next subsections (Section 3.1, Section 3.2, and Section 3.3), we
introduce our approach by considering first the discrete evaluation, then the continuous
evaluation, and finally the hybrid evaluation.
3.1 Discrete Evaluation
When a component is evaluated, it triggers periodically the evaluation of its embedded
components. As not every embedded component belongs to every configuration, it depends on the current discrete state of the component which of the embedded components
are evaluated. Then the triggered components will themselves trigger their embedded components (depending on their discrete states), and so forth. Further, the association
of configurations to discrete states leads to reconfiguration when a component changes its
discrete state (e.g. when receiving an event). Due to the implied state changes (see Section
2.3), this leads to reconfiguration of the embedded components which are pure discrete,
pure continuous or hybrid components.
3.2 Continuous Evaluation
The continuous parts of a configuration describe the data flow between the continuous inputs and the continuous outputs of the system. To ensure stability of the continuous part of
the system, the data flow may not be interrupted within a computation step. Consequently, reconfiguration may only occur between two computation steps. Therefore, we separate
execution of the continuous, data flow-orientated, and the discrete, event-based, parts: At
the beginning of a period, the continuous system parts are evaluated. This is followed
by the evaluation of the discrete system parts. Thus, we ensure that the reconfiguration
–which takes place in the discrete part– occurs after the computation of the continuous
system part is finished.
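A minimal sketch of this evaluation order, assuming a single period function called by the runtime, is shown below; the tools themselves export C++, and the C code and all names here are illustrative only.

```c
#include <stdbool.h>

/* Illustrative system state; the real generated code differs in structure and names. */
typedef struct {
    double distance;        /* continuous part, e.g. state of a distance controller */
    int    discrete_state;  /* discrete part, current state of the statechart       */
} System;

static void evaluate_continuous(System *s) { s->distance += 0.1; /* one controller step */ }

static bool evaluate_discrete(System *s)
{
    /* Fire the enabled transitions of this cycle; a state change may imply a
     * reconfiguration of the embedded components (implied state change). */
    (void)s;
    return false;   /* no state change in this sketch */
}

static void reconfigure(System *s) { (void)s; /* swap the embedded configuration */ }

/* One execution period: the continuous, data-flow-oriented part runs first, then the
 * discrete, event-based part, so reconfiguration only happens between two data-flow steps. */
static void period(System *s)
{
    evaluate_continuous(s);
    if (evaluate_discrete(s))
        reconfigure(s);
}

int main(void)
{
    System s = { 0.0, 0 };
    for (int i = 0; i < 3; ++i)
        period(&s);
    return 0;
}
```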
3.3 Hybrid Evaluation
Besides separating the continuous system parts from the discrete ones, it has to be managed
which components need to be evaluated in which discrete state. Enhancing the top-level
component with this information is usually not feasible as the number of global states
grows exponentially with the number of components. Therefore, we compose the whole
system as a tree structure consisting of single Hybrid Components to obtain an efficient
implementation. Each Hybrid Component contains the information about its discrete and
continuous parts –which may consist of system parts of the embedded components– itself.
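The tree-structured evaluation can be pictured roughly as follows, again as a hedged C sketch (the generated code is C++ and differs in detail): each hybrid component triggers only the children that belong to the configuration of its current discrete state.

```c
#include <stdio.h>

#define MAX_CHILDREN 4
#define MAX_STATES   4

typedef struct HybridComponent HybridComponent;
struct HybridComponent {
    const char       *name;
    int               state;                               /* current discrete state  */
    HybridComponent  *children[MAX_STATES][MAX_CHILDREN];  /* configuration per state */
};

/* Evaluate a component: run its own step, then trigger exactly the children associated
 * with the current state; children of other configurations are not evaluated at all. */
static void evaluate(HybridComponent *c)
{
    printf("evaluating %s\n", c->name);
    for (int i = 0; i < MAX_CHILDREN && c->children[c->state][i]; ++i)
        evaluate(c->children[c->state][i]);
}

int main(void)
{
    HybridComponent speed    = { "speed controller",    0, {{0}} };
    HybridComponent distance = { "distance controller", 0, {{0}} };
    HybridComponent shuttle  = { "shuttle",             0, {{0}} };

    shuttle.children[0][0] = &speed;     /* state 0: drive alone    -> speed control only */
    shuttle.children[1][0] = &speed;     /* state 1: drive in convoy -> speed + distance  */
    shuttle.children[1][1] = &distance;

    evaluate(&shuttle);                  /* only the speed controller is triggered        */
    shuttle.state = 1;                   /* implied state change: convoy configuration    */
    evaluate(&shuttle);                  /* now speed and distance controller are run     */
    return 0;
}
```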
By the presented integration at runtime, we ensure consistent, correct data flow and an
efficient implementation in spite of complex reconfiguration. Our seamless approach is
realized by the Fujaba Real-Time Tool Suite in combination with the CASE Tool CAMeL.
Both tools export hybrid components for the integration on the modeling level (see Section
2.3), and they export C++ code which is integrated to realize the hybrid reconfiguration behavior.
4 Analysis capabilities
For the outlined Mechatronic UML approach, two specific verification tasks for the resulting systems are supported.
First, the Mechatronic UML approach supports model checking techniques for real-time processing at the network level. It addresses the scalability problem by supporting a compositional proceeding for modeling and verification exploiting the component model
and the corresponding definition of ports and connectors as well as patterns [GTB+ 03].
Secondly, a restricted subset of the outlined hierarchical component structures for modeling discrete and continuous control behavior can be checked for consistent reconfiguration and real-time synchronization w.r.t. reconfiguration, taking proactive behavior into account [GBSO04, GH06b].
As the second approach can be embedded into the first one, a combination of both approaches covers the whole range of real-time coordination issues from the network level down to the reconfiguration of the lowest-level components.
5 Related work
Tools and techniques related to Mechatronic UML are CHARON, HybridUML with HL3, HyROOM/HyCharts/Hybrid Sequence Charts, Masaccio and Giotto, Matlab/Simulink/Stateflow, Ptolemy II, and UMLh [GH06a].
All presented tools and techniques support the specification of a system’s architecture or
structure by a notion of classes or component diagrams. All approaches support modular
architecture and interface descriptions of the modules. Nevertheless, they do not respect
that a module can change its interface due to reconfiguration which can lead to incorrect
configurations.
CHARON, Masaccio, HybridUML with HL3, UMLh, HyROOM, and HyCharts have a formally defined semantics, but due to the assumption of zero execution times or zero reaction times, most of them are not implementable, as it is not realizable to perform a
state change infinitely fast on real physical machines. CHARON is the only approach
providing an implementable semantics. HyCharts are implementable after defining relaxations to the temporal specifications. They respect that idealized continuous behavior is
not implementable on discrete computer systems. Further, CHARON provides a semantic definition of refinement which enables model checking in principle. Ptolemy II even
provides multiple semantics and supports their integration.
Automatic verification of the real-time behavior including the reconfiguration is supported
by CHARON. It even supports hybrid model checking. Compositional model checking of
real-time properties is possible for CHARON and HyCharts in principle due to their definition of refinement. Schedulability analysis is supported by CHARON and Ptolemy II.
Although the approaches separate the architectural description from the behavioral description, they do not support the separated specification and verification of communication, real-time behavior and hybrid behavior and their later step-wise integration.
Although most of these approaches enable ruling the complexity by a modular,
component-based architecture and by behavioral models that support history and hierarchical and orthogonal states, reconfiguration across multiple hierarchical levels as required
for the advanced mechatronic systems envisioned and provided by the presented approach
is supported by none of them.
All these approaches have drawbacks in modeling support, support for formal analysis, or
in multiple of these categories. Although many approaches enable ruling the complexity
by a modular, component-based architecture and by behavioral models that support history
and hierarchical and orthogonal states, our approach is the only one that supports reconfiguration via multiple hierarchical levels. This is supported by the additional high-level
modeling construct of the implied state change for modeling reconfiguration. Further,
our approach enables reconfiguration by adapting the structure instead of just exchanging
whole models or hiding unused parts of the system. This is enabled by associating
whole configurations instead of single components to the system’s discrete states.
The presented approaches either do not support formal analysis or this is restricted to
simple models as it does not scale for complex systems. Our approach is the only one that
integrates high-level models, specifying hybrid behavior and reconfiguration, in a well-structured approach for the verification of the real-time behavior of complex, distributed
systems.
6 Tool support and demonstration
The presented seamless approach is realized by the Fujaba Real-Time Tool Suite. We
integrate the Fujaba Real-Time Tool Suite with the CASE Tool CAMeL and therefore use
the ability of CAMeL to generate code. As shown in Figure 1 both tools export hybrid
components, which are integrated into a hierarchical model as an input for the binding tool. The export of the tools already includes the transformation of the model formalism into a code formalism, like C++. Afterward, the binding tool combines the inputs and thereby combines both formalisms.
Figure 1: Tool Integration Overview (CAMeL contributes the dynamics model and Fujaba the hybrid statecharts; both tools export hybrid components as XML, which the binding tool combines, together with deployment information, into an integrated hybrid model and, via IPANEMA, into code for the executable system)
The demonstration uses an example from the RailCab research project (http://nbp-www.upb.de/en/) at the
University of Paderborn. In this project, autonomous shuttles are developed which operate
individually and make independent and decentralized operational decisions.
One particular problem is to reduce the energy consumption due to air resistance by coordinating the autonomously operating shuttles in such a way that they build convoys whenever possible. Such convoys are built on-demand and require a small distance between the
different shuttles such that a high reduction of energy consumption is achieved. Coordination between speed control units of the shuttles becomes a safety-critical aspect and results
in a number of hard real-time constraints, which have to be addressed when building the
control software of the shuttles. Additionally, different controllers are used to control the
speed of a shuttle as well as the distance between the shuttles. The controllers have to be
integrated with the aforementioned real-time coordination.
In this demonstration, we will first show how the structure of the software is specified by
component diagrams. Then we will present the behavior for the discrete components and
the continuous control components respectively. Thereafter, we show how the continuous
components are embedded into the discrete real-time behavior. After code generation and
compilation, we show the behavior of the system using a simulation environment.
7 Conclusion
The tool integration of the CAE Tool CAMeL-View and the CASE Tool Fujaba Real-Time
Tool Suite enables the application of our approach while continuing to use well-proven tools. It integrates not only the models, but also the synthesized source code.
Future Work: When changing the behavior and structure of a system, reconfiguration is just the first step; a newer trend deals with compositional adaptation, i.e., the behavior and the structure are changed, but the possible configurations are not all known at design time, or their number is infinite. A first attempt to model compositional adaptation is given in
[BG05]. There, reconfiguration rules which are based on graph transformation systems
are introduced to specify compositional adaptation. Obviously, such a model also needs
to be implemented, the reconfiguration rules need to be respected appropriately in the
verification, and the model requires a formally defined semantics. This will be subject to
future work.
References
[ACD90] Rajeev Alur, Costas Courcoubetis, and David Dill. Model-checking for Real-Time Systems. In Proc. of Logic in Computer Science, pages 414–425. IEEE Computer Press, June 1990.
[BG05] Sven Burmester and Holger Giese. Visual Integration of UML 2.0 and Block Diagrams for Flexible Reconfiguration in Mechatronic UML. In Proc. of the IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05), Dallas, Texas, USA, pages 109–116. IEEE Computer Society Press, September 2005.
[BGH+05] Sven Burmester, Holger Giese, Martin Hirsch, Daniela Schilling, and Matthias Tichy. The Fujaba Real-Time Tool Suite: Model-Driven Development of Safety-Critical, Real-Time Systems. In Proc. of the 27th International Conference on Software Engineering (ICSE), St. Louis, Missouri, USA, pages 670–671, May 2005.
[BGH+07] Sven Burmester, Holger Giese, Stefan Henkler, Martin Hirsch, Matthias Tichy, Alfonso Gambuzza, Eckehard Müch, and Henner Vöcking. Tool Support for Developing Advanced Mechatronic Systems: Integrating the Fujaba Real-Time Tool Suite with CAMeL-View. In Proc. of the 29th International Conference on Software Engineering (ICSE), Minneapolis, Minnesota, USA, pages 801–804. IEEE Computer Society Press, May 2007.
[BGS05] Sven Burmester, Holger Giese, and Wilhelm Schäfer. Model-Driven Architecture for Hard Real-Time Systems: From Platform Independent Models to Code. In Proc. of the European Conference on Model Driven Architecture - Foundations and Applications (ECMDA-FA'05), Nürnberg, Germany, Lecture Notes in Computer Science, pages 25–40. Springer Verlag, November 2005.
[BGT05] Sven Burmester, Holger Giese, and Matthias Tichy. Model-Driven Development of Reconfigurable Mechatronic Systems with Mechatronic UML. In Model Driven Architecture: Foundations and Applications, LNCS 3599, pages 47–61. Springer-Verlag, August 2005.
[BSDB00] David Bradley, Derek Seward, David Dawson, and Stuart Burge. Mechatronics. Stanley Thornes, 2000.
[GBSO04] Holger Giese, Sven Burmester, Wilhelm Schäfer, and Oliver Oberschelp. Modular Design and Verification of Component-Based Mechatronic Systems with Online-Reconfiguration. In Proc. of 12th ACM SIGSOFT Foundations of Software Engineering 2004 (FSE 2004), Newport Beach, USA, pages 179–188. ACM, November 2004.
[GH06a] Holger Giese and Stefan Henkler. A Survey of Approaches for the Visual Model-Driven Development of Next Generation Software-Intensive Systems. In Journal of Visual Languages and Computing, volume 17, pages 528–550, December 2006.
[GH06b] Holger Giese and Martin Hirsch. Modular Verification of Safe Online-Reconfiguration for Proactive Components in Mechatronic UML. In Jean-Michel Bruel, editor, Satellite Events at the MoDELS 2005 Conference, Montego Bay, Jamaica, October 2-7, 2005, Revised Selected Papers, LNCS 3844, pages 67–78. Springer Verlag, January 2006.
[GO03] Susanne Graf and Ileana Ober. A Real-Time profile for UML and how to adapt it to SDL. In Proceedings of the SDL Forum'03, 2003.
[GTB+03] H. Giese, M. Tichy, S. Burmester, W. Schäfer, and S. Flake. Towards the Compositional Verification of Real-Time UML Designs. In Proc. of the European Software Engineering Conference (ESEC), Helsinki, Finland, pages 38–47. ACM Press, September 2003.
[Obj04] Object Management Group. UML 2.0 Superstructure Specification, October 2004. Document: ptc/04-10-02 (convenience document).
Composition of Model-based Test Coverage Criteria
Mario Friske, Bernd-Holger Schlingloff, Stephan Weißleder
Fraunhofer FIRST, Kekuléstraße 7, D-12489 Berlin
{mario.friske|holger.schlingloff|stephan.weissleder}@first.fraunhofer.de
Abstract: In this paper, we discuss adjustable coverage criteria and their combinations
in model-based testing. We formalize coverage criteria and specify test goals using
OCL. Then, we propose a set of functions which describe the test generation process
in a generic way. Each coverage criterion is mapped to a set of test goals. Based on
this set of functions, we propose a generic framework enabling flexible integration of
various test generators and unified treatment of test coverage criteria.
1 Motivation
In the field of software and system testing, the quality of test suites is one of the most important issues. Widely accepted means for assessing the quality of a test suite are coverage
criteria. Criteria can be defined on the coverage of certain characteristics of specification,
implementation, or even fault detection abilities of the test suite. The result is a large
variety of defined criteria for structural, functional, and fault coverage. In the course of
model-based engineering, structural coverage criteria have been adapted to models (e. g.,
UML state machines). Whereas existing norms and standards for safety-critical systems
focus mainly on code coverage, this paper deals with model coverage criteria.
An interesting aspect is the relationship between different coverage criteria. Similar criteria are related by subsumption relations (e. g., All-States [Bin99] is subsumed by All-Transitions [Bin99]). However, relationships between criteria from different groups (e. g.,
Multi-Dimensional [KLPU04] defined on data partitions and MCDC [Lig02] defined on
control-flow graphs) are not yet thoroughly investigated and need further analysis [WS08].
Nevertheless, testers desire a means for flexibly applying combinations of different coverage criteria, because individual criteria are well investigated and allow dedicated statements on covering certain risks and aspects.
In this paper, we present a generic framework that allows combining test suites at various
abstraction levels. We provide functional descriptions for various stages in a coverage-oriented test generation process and discuss possibilities for the composition of coverage
criteria and the optimization of test suites at different levels. We formalize coverage criteria and resulting test goals in OCL [Obj06] using an example from the domain of embedded systems. The framework uses functional specifications with OCL as a means for
specifying test goals. It features flexible application and combination of coverage criteria and makes it possible to plug in various test generators. Furthermore, the framework supports
precise statements on achieved coverage relying on OCL-based traceability information.
The remainder of this paper is organized as follows: In the next Section, we discuss coverage criteria and their relations and introduce our running example. In Section 3, we give a
functional description of a multi-stage test generation process, and formalize test coverage
criteria and test goals using OCL. In Section 4, we use functional signatures and specifications of test goals with OCL for sketching a framework to compose test coverage criteria.
Finally, we draw some conclusions and give an outlook to future work.
2 Relation between Coverage Criteria
The aim of model-based testing is to validate a system under test (SUT) against its specification, the model. For that, a test suite is generated from the model and executed to
examine the SUT. Coverage criteria qualify the relation between test cases and implementation or model. It is common sense to group coverage criteria into sets based on the
same foundation, like structural (e. g. [Lig02]), functional (e. g. [Lig02]), and fault coverage (e. g. [FW93, PvBY96]).
Members of one group are related by the subsumption relation and visualized as a subsumption
hierarchy [Nta88, Zhu96, Lig02]. In [Lig02, pages 135 and 145] two subsumption hierarchies for control-flow-oriented and data-flow-oriented criteria are presented. Since both
hierarchies contain the criteria Path Coverage and Branch Coverage, they can be united
in a single hierarchy using these criteria as merge points. The resulting hierarchy contains
a variety of control-flow-oriented (including path-based and condition-based criteria) and
data-flow-based coverage criteria, as depicted in Figure 1.
Path
Coverage
Control-Flowbased Criteria
Data-Flowbased Criteria
Conditionbased Criteria
Branch
Coverage
Figure 1: Combination of Different Types of Coverage Criteria
2.1 Example: a Fail-safe Ventricular Assist Device
As an example, we consider a ventricular assist device (VAD) supporting the human heart.
It provides an external pulsatile drive for blood circulation, which can help patients with
heart diseases to recover. The VAD is designed for stationary use with paediatric and
stroke patients. Both pump and control unit are attached outside the body. There are several parameters, which can be set by the doctor via a serial connection such as systolic
and diastolic pressure, desired blood flow, and pump frequency. The VAD has a redundant architecture with three pneumatic motors, two of which must be frequently running.
The control unit is replicated twice, as depicted in the object model diagram in Figure 2,
one processor being operational and the other being in “hot” standby mode. In case of
malfunction of one motor, the active controller starts the redundant one (in the appropriate
mode). In case of failure of a controller, the redundant one takes over control.
[Figure omitted: object model diagram of the VAD with an Arbiter A (isOK(), failure()) and two redundant Controllers C1 and C2 (runP(), runS()).]
Figure 2: Object Model Diagram for VAD with two redundant controllers
In Figure 3 we give two UML2 state machines of (part of) the startup functionality of the
software. Initially, the primary controller performs startup tests, reads the preset configuration, and starts the pumps. It then waits for some time to let the blood flow stabilize and
compares the actual and desired pressure and flow values. A discrepancy of these values
indicates a problem in one of the pumps, and the pumps are switched. Otherwise, the
controller blocks itself, causing a reset, such that the roles of primary and secondary controller are switched, and self-testing is repeated. Subsequently, the VAD goes into main
operational mode, where a continuous monitoring of pumps and blood circulation takes
place.
[Figure omitted: UML state machines of controller and arbiter, with transitions such as runP[!checked], runP[checked], runS, isOK, and failure between states like check, checkC1, CheckC2, run_prim, run_second, C1isP, and C2isP.]
Figure 3: UML State Machines of VAD Controller (left) and VAD Arbiter (right)
2.2 Application of Combined Test Coverage Criteria
Thorough testing of models such as the ones shown above requires the application of (a combination of) several coverage criteria, each focussing on a different aspect. E. g., for the state machines shown in Figure 3, a possible reasonable strategy comprises the following criteria:
1. (a) All-Transitions [Bin99] as a lower viable criterion for testing state machines (this subsumes the criteria All-States [Bin99] and All-Events [Bin99]), and
   (b) All-n-Transitions [Bin99] as an extended state-based criterion targeting typical failures of state machines,
2. MCDC [Lig02] covering complex decisions in action code, and
3. Boundary-Value Analysis [Mye79] focussing on the correct implementation of comparisons.
These criteria relate to the combined subsumption hierarchy in Figure 1 as follows: (1) results from transferring path-based criteria to state machines and (2) is a condition-based
criterion. The criterion (3) is orthogonal to the depicted criteria. A data-flow-based criterion is not contained in this list, but it can still be used to measure the coverage achieved by applying criteria (1)–(3), see [Lig02].
Applying the above coverage criteria to the models of the VAD presented in Section 2.1 in
order to generate test cases results in a number of test goals. In the following, we give an
example of a test goal for each of the coverage criteria above:
All-2-Transitions: The sequence of the two transitions characterized by the following trigger[guards]: (1) runP[!checked], (2) [min≤p && p≤max].
MCDC: Evaluation of the guard [min≤p && p≤max] where both conditions become
true (i. e., min≤p=true and p≤max=true).
BVA: The upper bound of the equivalence partition “normal” (i. e., the highest value of p
allowed for normal operation).
Note that it is possible to fulfill all test goals described above with a single test case.
3 Specification of Coverage Criteria and Test Goals using OCL
It is common practice to generate test cases in a multi-step transformation process: First,
a generator applies coverage criteria to the specification (e. g., a state machine) and calculates all resulting test goals (e. g., paths through a state machine), see [IL04]. Then, the
generator creates a concrete test case for each test goal (e. g. a sequence of events triggering a certain path through a state machine). We describe the multi-stage test generation
process with a set of corresponding functional signatures:
gen : S × CC → TS × MD
goalgen : S × CC → TG
testgen : TG → TS × MD
where S is a specification, CC a coverage criterion, TG a set of test goals, TS the test suite, and MD meta data (e. g., describing the achieved percentage of coverage). The overall
test generation function gen is the functional composition of the generation of test goals
goalgen and the generation of logical test cases from test goals testgen:
gen(S, CC) = testgen(goalgen(S, CC)).
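To make this composition concrete, the following Python sketch (our own illustration, not part of the paper's tooling; the type aliases and the helper make_gen are hypothetical names) assembles the overall generation function from the two stages:

# Illustrative sketch of the two-stage generation process gen = testgen o goalgen.
# The concrete types for specifications, criteria, goals, suites, and meta data
# are placeholders; any real generator would refine them.
from typing import Callable, List, Tuple

Spec = dict            # S:  e.g. a state machine given as a transition table
Criterion = str        # CC: name of a coverage criterion, e.g. "All-2-Transitions"
TestGoal = tuple       # element of TG: e.g. a pair of subsequent transitions
TestSuite = List[list] # TS: a list of test cases (event sequences)
MetaData = dict        # MD: e.g. achieved coverage per criterion

def make_gen(goalgen: Callable[[Spec, Criterion], List[TestGoal]],
             testgen: Callable[[List[TestGoal]], Tuple[TestSuite, MetaData]]
             ) -> Callable[[Spec, Criterion], Tuple[TestSuite, MetaData]]:
    """Build gen so that gen(S, CC) = testgen(goalgen(S, CC))."""
    def gen(spec: Spec, criterion: Criterion) -> Tuple[TestSuite, MetaData]:
        goals = goalgen(spec, criterion)   # first stage: derive test goals
        return testgen(goals)              # second stage: derive test cases
    return gen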
In the following, we use OCL as a means to precisely define one exemplary coverage
criterion and a corresponding test goal.
3.1 Specification of Coverage Criteria in OCL
In this section, we present an OCL expression that determines model elements for the
coverage criterion All-2-Transitions. To satisfy this coverage criterion, a test suite has to
contain all possible pairs of subsequent transitions t1 and t2, where t1.target = t2.source. All OCL expressions are based on the meta model of UML state machines [Obj07, page 525]. For the sake of simplicity, the context of all OCL queries is a flat StateMachine.
The following expression searches for pairs of subsequent transitions.
context StateMachine
def: all2Transitions : Set(Sequence(Transition)) =
  self.region.transition->iterate(
    t1 : Transition;
    resultSet : Set(Sequence(Transition)) = Set{} |
      resultSet->union(
        t1.target.outgoing->iterate(
          t2 : Transition;
          tmpSet : Set(Sequence(Transition)) = Set{} |
            tmpSet->including(Sequence{t1}->including(t2)))))
3.2 Specification of Test Goals in OCL
In the previous section, we gave an OCL expression to define a coverage criterion on
a UML model. The application of such a criterion to a specific model instance results
in a set of test goals. In the following, we give two alternative OCL expressions for a
particular test goal, which results from applying the coverage criterion All-2-Transitions
on our example of a fail-safe Ventricular Assist Device presented in Section 2.1.
The first alternative depends on the availability of unique identifiers for the corresponding
model elements:
context StateMachine
def: goalForAll2Transitions : Sequence(Transition) =
Sequence{self.region.transition->any(name = 't1'),
         self.region.transition->any(name = 't2')}
It is also possible to calculate test goals using the OCL definition of the specified coverage
criterion. For instance, the following expression selects the first element of the set of
elements returned for the criterion All-2-Transitions:
context StateMachine
def: goalForAll2Transitions : Sequence(Transition) =
all2Transitions->asSequence()->first()
4 A Generic Framework for Composing Coverage Criteria
In this section, we sketch a possible design for a generic framework that allows flexible
integration of various test generators and unified treatment of test coverage criteria. The
formalization of coverage criteria and individual test goals relies on the previously defined
OCL expressions. In order to integrate various test generators, they have to fulfill the
following requirements:
Access to test goals: We assume that the generators implement a goal-oriented approach
and do not just provide a black-box functionality gen but two separate functions
goalgen and testgen with independently accessible outcomes (see Section 3).
Precise signatures: Each test case generator has to provide precise meta-information about its functionality, including its name, accepted types of specifications (model elements), parameters, achievable coverage, and related coverage criteria.
The implementation of a framework for generator integration requires clarifying the storage of test cases and test goals. A unified format for test cases is not necessarily required as
long as formal meta-information on the achieved coverage is accessible. We recommend
specifying required and achieved test goals using OCL as presented in Section 3.2.
Furthermore, a precise specification of interfaces is required. A platform supporting plugin-based software development such as Eclipse [Ecl] can be used to integrate various generators as plug-ins. Commercial off-the-shelf generators can be integrated using wrappers.
Conversion functions for test goals from proprietary formats into OCL are required.
As we have discussed in [FS06], testers ideally want to select coverage using a graphical
interface. Various test generators can be integrated in a plug-in-based framework. Each of
these generators provides detailed information on achievable coverage, which can be presented to the tester in a structured or a graphical form (e. g., as a subsumption hierarchy).
The tester can select a set of coverage criteria (e. g., using sliders on a graphical visualisation of a unified subsumption hierarchy as depicted in Figure 1). The selected coverage
information is passed to the generators and used to control the test generation process.
Specifying test goals using OCL allows detailed traceability information [Som07, Chapter 7] to be attached to each generated test case. This makes it possible to report unsatisfied test goals to the tester as OCL expressions and to realize enhanced dashboard functionalities that span diverse generators. The tester can individually process unfulfilled test goals by manually adding test cases or by treating test goals as unreachable.
By applying more than one coverage criterion to the multi-stage test generation process,
three integration options can be identified: (1) integration at coverage-criteria-layer (i. e.,
definition of newly combined coverage criteria), (2) integration at test-goal-layer (i. e.,
merging of test goals), and (3) integration at test-suite-layer (i. e., merging of test suites).
The corresponding functional descriptions are the following:
gen1 : S × CC1 → TS1
gen2 : S × CC2 → TS2
1. CCn = CC1 ⊕ CC2
2. TGn = TG1 ⊕ TG2
3. TSn = TS1 ⊕ TS2
where ⊕ is a suitably defined merging operator.
Option 1 can be reasonable if the coverage criteria are not connected by the subsumption relation. Option 2 can be realized via a function goalunion : TG1 × TG2 → TGn. Option 3 can be realized via a function testunion : TS1 × TS2 → TSn. Note that in general, the test suite resulting from the union of two test goals is not the same as the union of the test suites for the components. Usually, it is desired that the resulting test suite is optimized with respect to implicit properties like the number or the average length of test cases. One solution is a dedicated optimization operation opt : TSn → TSopt that optimizes a test suite according to given properties while preserving coverage. The optimization requires a weighting function. As an example, an implementation of opt could remove duplicates and inclusions from a test suite.
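As a purely illustrative sketch (not the paper's implementation), the following Python fragment shows one possible reading of testunion and opt, under the assumption that test cases are event sequences and that a test case is "included" in another if it is a prefix of it:

from typing import List, Sequence

TestCase = Sequence[str]        # a test case as a sequence of input events
TestSuite = List[TestCase]

def testunion(ts1: TestSuite, ts2: TestSuite) -> TestSuite:
    """Merge two test suites (integration option 3), keeping first occurrences."""
    merged: TestSuite = []
    for tc in list(ts1) + list(ts2):
        if tc not in merged:
            merged.append(tc)
    return merged

def opt(ts: TestSuite) -> TestSuite:
    """Remove duplicates and inclusions: drop a test case if it is a proper
    prefix of a longer one, assuming the longer case reaches the same goals."""
    result: TestSuite = []
    for tc in ts:
        covered = any(len(other) > len(tc) and tuple(other[:len(tc)]) == tuple(tc)
                      for other in ts)
        if not covered and tc not in result:
            result.append(tc)
    return result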
5 Conclusions and Outlook
In this paper, we addressed specification and combination of coverage criteria. We sketched
the relations between coverage criteria and provided a consistent set of functional signatures to define coverage criteria, test goals, test suites, and the corresponding generation
functions. This is complemented by a formalization of coverage criteria and test goals. We
substantiated our explanations by the example of a fail-safe ventricular assist device.
The following benefits result from our approach: Different test case generators can be
integrated and combined using test goals represented as OCL expressions. Due to the use
of OCL for formalization, coverage criteria, test goals, and test suites can be compared,
merged, and optimized. Integration is possible at coverage-criteria-layer, test-goal-layer,
and test-suite-layer. The use of OCL enables testers to define arbitrary coverage criteria.
Furthermore, using OCL allows the achieved coverage to be evaluated and supports the analysis and completion of missing test goals. The formalization of the test suite generation process allows test cases to be traced to test goals, coverage criteria, and specification elements.
In the future, we plan to extend the presented approach. This includes the definition of
coverage criteria on abstractions (including model-based tool support) and the prioritization of test goals. This paper did not deal with the fault detection capabilities of various
(combined) coverage criteria; this topic needs to be investigated further. First steps towards answering this question have been taken (e. g., in [PPW+05, WS08]). Currently, we have no
tool support for the presented method. We want to implement the sketched framework and
a corresponding OCL-based test generator.
References
[Bin99] R. V. Binder. Testing Object-Oriented Systems: Models, Patterns, and Tools. Object Technology Series. Addison-Wesley, 1999.
[Ecl] Eclipse Foundation. Eclipse. http://www.eclipse.org/.
[FS06] M. Friske and B.-H. Schlingloff. Abdeckungskriterien in der modellbasierten Testfallgenerierung: Stand der Technik und Perspektiven. In Holger Giese, Bernhard Rumpe, and Bernhard Schätz, editors, "Modellbasierte Entwicklung eingebetteter Systeme II" (MBEES), pages 27–33. Technische Universität Braunschweig, 2006.
[FW93] P. G. Frankl and E. J. Weyuker. A Formal Analysis of the Fault-Detecting Ability of Testing Methods. IEEE Transactions on Software Engineering, 19(3):202–213, 1993.
[IL04] I-Logix. Rhapsody Automatic Test Generator, Release 2.3, User Guide, 2004.
[KLPU04] N. Kosmatov, B. Legeard, F. Peureux, and M. Utting. Boundary Coverage Criteria for Test Generation from Formal Models. In ISSRE '04: Proceedings of the 15th International Symposium on Software Reliability Engineering (ISSRE'04), pages 139–150, Washington, DC, USA, 2004. IEEE Computer Society.
[Lig02] P. Liggesmeyer. Software-Qualität: Testen, Analysieren und Verifizieren von Software. Spektrum Akademischer Verlag, 2002.
[Mye79] G. J. Myers. The Art of Software Testing. John Wiley & Sons, Inc., 1979.
[Nta88] S. C. Ntafos. A Comparison of Some Structural Testing Strategies. IEEE Transactions on Software Engineering, 14(6):868–874, 1988.
[Obj06] Object Management Group. OCL 2.0 Specification, version 2.0 (formal/06-05-01), 2006.
[Obj07] Object Management Group. OMG Unified Modeling Language (OMG UML), Superstructure, V2.1.2 (formal/07-11-02), 2007.
[PPW+05] A. Pretschner, W. Prenninger, S. Wagner, C. Kühnel, M. Baumgartner, B. Sostawa, R. Zölch, and T. Stauner. One evaluation of model-based testing and its automation. In ICSE '05: Proceedings of the 27th international conference on Software engineering, pages 392–401, New York, NY, USA, 2005. ACM.
[PvBY96] A. Petrenko, G. v. Bochmann, and M. Yao. On fault coverage of tests for finite state specifications. Computer Networks and ISDN Systems, 29(1):81–106, 1996.
[Som07] I. Sommerville. Software Engineering. Addison-Wesley, 7th edition, 2007.
[WS08] S. Weißleder and B.-H. Schlingloff. Quality of Automatically Generated Test Cases based on OCL Expressions. http://www.cs.colostate.edu/icst2008/, April 2008.
[Zhu96] H. Zhu. A Formal Analysis of the Subsume Relation Between Software Test Adequacy Criteria. IEEE Transactions on Software Engineering, 22(4):248–255, 1996.
Testbasierte Entwicklung von Steuergeräte-Funktionsmodellen
Frank Tränkle, Robert Bechtold, Stefan Harms
Funktionsentwicklung & Simulation
GIGATRONIK Stuttgart GmbH
Hortensienweg 21
D-70374 Stuttgart
frank.traenkle@gigatronik.com
Abstract: Die modellbasierte Funktionsentwicklung für eingebettete elektronische
Systeme findet in der Automobilindustrie vielfache Anwendung. Qualität und
funktionale Sicherheit der entwickelten Steuergerätefunktionen werden durch
Verifikation und Validierung abgesichert. Eine geeignete Methode für die
durchgängige Spezifikation, Implementierung und Verifikation von Steuergeräte-Funktionsmodellen besteht in der Testbasierten Entwicklungsmethode 2dev®. Der
Ausgangspunkt ist eine formale Spezifikation, aus welcher sowohl ein
Funktionsmodell als auch Testfälle für die vollständige Verifikation generiert
werden. Als Ergebnis dieser Vorgehensweise erhält der Entwicklungsingenieur
Aussagen über Definitionslücken und Widersprüche in der Spezifikation, um diese
gezielt zu schließen und zu lösen.
1 Einleitung
Die modellbasierte Funktionsentwicklung für elektronische Steuergeräte im
Automobilbereich ist heute auf breiter Basis etabliert. Zur Entwicklung von
kontinuierlichen und ereignisbasierten Reaktiven Systemen werden etablierte
Entwicklungsumgebungen wie Simulink®/Stateflow®, TargetLink® und ASCET®
eingesetzt. Mit den zugehörigen Auto-Code-Generatoren werden die Software-Module
für elektronische Steuergeräte automatisiert aus den Funktionsmodellen generiert.
Wesentliche Vorteile der modellbasierten Funktionsentwicklung sind die Reduzierung
der Entwicklungszeit und -kosten sowie die Erhöhung der Produktinnovation [SC05,
Co07].
In vielen Entwicklungsprojekten für Steuergeräte-Funktionen fehlt es an vollständiger
Spezifikation und ausreichenden Tests. Häufige nicht beantwortete Fragen von
Entwicklungsingenieuren sind: „Wann ist das Funktionsmodell fertig gestellt?“ und
„Wann sind alle Tests durchgeführt?“. In den Entwicklungsprozessen fehlt häufig eine
durchgängige Vorgehensweise für die Spezifikation, die Implementierung und die
Verifikation der Steuergerätefunktionen.
Fehler in der Spezifikation oder in der Steuergerätefunktion werden zumeist erst im
Hardware-In-The-Loop-Test oder Fahrversuch festgestellt. Große Iterationsschleifen
sowie hohe Zeit- und Kostenaufwände sind die Folge.
2 Testbasierte Entwicklung
Die Testbasierte Entwicklungsmethode 2dev® (Test-oriented Development Method)
kann die Entwicklungsprozesse beschleunigen und die Kosten senken [BS06]. 2dev® ist
eine integrierte Methode zur formalen Spezifikation, Modellgenerierung und
automatischen Verifikation von Funktionsbausteinen für elektronische Steuergeräte. Mit
2dev® werden gegenwärtig Funktionsbausteine für ereignisdiskrete Reaktive Systeme
unter Einsatz von Simulink®/Stateflow® und TargetLink® entwickelt. Ein wesentliches
Ziel ist die frühzeitige Erkennung und Behebung von Fehlern in der Spezifikation und
damit auch im Funktionsmodell.






















 

Abbildung 1: Workflow der Testbasierten Entwicklungsmethode 2dev®
2dev® führt zur systematischen Funktionsentwicklung. Spezifikation und
Funktionsmodell sind zu jeder Zeit konsistent. Die Spezifikation ist gleichermaßen
formal vollständig und allgemein verständlich formuliert. Diese formale Spezifikation ist
Ausgangspunkt für die folgenden Prozessschritte im 2dev®-Workflow (siehe Abbildung
1): der automatisierten Generierung sowohl des Funktionsmodells als auch der Testfälle.
Der anschließende Prozessschritt Testdurchführung umfasst die Stimulation des
Funktionsmodells durch die generierten Testvektoren sowie die Auswertung des
simulierten Ein-/Ausgangsverhaltens durch einen Vergleich mit dem spezifizierten
Verhalten.
Test-Driven Development [Kb02] hat sich als eine Entwicklungsmethode in der
objektorientierten Softwareentwicklung etabliert. Hier werden Unit-Tests zeitlich vor
den zu testenden Software-Modulen entwickelt. Analog zu 2dev® werden bei der
automatisierten Testdurchführung beispielsweise Widersprüche in der Spezifikation
erkannt. In diesem Fall erfolgt eine Anpassung der Spezifikation. Diese
Iterationsschleife ist fester Bestandteil des 2dev®-Workflows (siehe Abbildung 1).
3 Funktionsmerkmale der Testbasierten Entwicklungsmethode 2dev®
2dev® weist folgendes wesentliches Funktionsmerkmal auf: Die Methode definiert
Richtlinien für die formale Spezifikation, die sowohl die automatische Modell- und
Testfallgenerierung als auch die automatische Testfallauswertung ermöglichen. Die
formale Spezifikation kann derzeit in Microsoft Word®, in MATLAB® oder in DOORS®
erstellt werden. Für die verschiedenen Eingabeformen stehen geeignete Schablonen zur
Verfügung.
Der 2dev®-Testgenerator verarbeitet die Spezifikationsdokumente automatisch und
generiert Testvektoren für den spezifizierten Funktionsbaustein. Der 2dev®-Modellgenerator generiert aus der formalen Spezifikation ein Stateflow-Modell.
Die Testaussage resultiert aus zwei Abdeckungsanalysen, der Requirements Coverage
und der Model Coverage. Die Requirements Coverage wird von 2dev® generiert,
während die Model Coverage mittels der Toolbox Simulink® Verification and Validation
(V&V) erzeugt wird.
Die Requirements Coverage deckt Definitionslücken in der formalen Spezifikation auf.
Eine unvollständige formale Spezifikation wird bei der automatisierten
Testdurchführung dadurch erkannt, dass für einzelne Testschritte keine Anforderungen
in der Spezifikation existieren. Widersprüche in der formalen Spezifikation werden
dadurch erkannt, dass für einen Testschritt beispielsweise mehrere Anforderungen
aktiviert werden, diese aber ein widersprüchliches Ein-/Ausgangsverhalten definieren.
Die Model Coverage prüft zusätzlich die Vollständigkeit der generierten Testfälle und
deckt tote Codeanteile auf.
Ein Algorithmus kann in ereignisbasierte Reaktive Systeme (Logikanteile,
Entscheidungsanteile) und kontinuierliche Reaktive Systeme (Dynamikanteile,
Berechnungs- und Regleranteile) unterteilt werden. 2dev® unterstützt die Entwicklung
ereignisbasierter Reaktiver Systeme in Form einzelner Funktionsbausteine in Simulink®,
Stateflow® oder TargetLink®. Für die Entwicklung kontinuierlicher Reaktiver Systeme
existieren etablierte Methoden beispielsweise aus der Regelungstechnik.
4 Anwendungsbeispiel
Als Anwendungsbeispiel für die Testbasierte Entwicklungsmethode 2dev® wird ein vereinfachter Statusgenerator einer Tempomatfunktion betrachtet. Dieser Statusgenerator ist ein ereignisbasiertes Reaktives System, das durch eine Verarbeitung der Eingangssignale Tempomat-Ein-Aus-Schalter, Geschwindigkeitsvorgabe-Taster und Bremssignal den Tempomat-Status berechnet.
[Abbildung nicht wiedergegeben: Zustandsautomat mit den Zuständen ausgeschaltet, regelnd und unterbrochen sowie den Transitionen R1–R9 (Einschalten, Ausschalten, Bremsen, Wiederaufnahme).]
Abbildung 2: Zustandsautomat für Tempomat-Statusgenerator
Der Tempomat-Statusgenerator kann als endlicher Zustandsautomat im
Spezifikationsdokument in Form einer Skizze dargestellt werden (siehe Abbildung 2).
Jeder der drei Zustände repräsentiert einen gültigen Tempomat-Status. Die Bedingungen
für die Transitionen werden durch logische Ausdrücke für die Eingangssignale
berechnet.
Dieser Zustandsautomat besitzt ein Gedächtnis. Für einen vollständigen Test müssten
deshalb Eingangssignalverläufe generiert werden, die den zeitlichen Aspekt erfassen.
Sowohl die Erstellung als auch die Auswertung solcher Sequenzen sind in der Regel mit
einfachen Mitteln nicht automatisiert durchführbar.
2dev® umgeht diese Problematik. Eine entscheidende Spezifikationsrichtlinie von 2dev®
fordert das Auslagern aller Speicherelemente aus dem Funktionsbaustein. So wird im
vorliegenden Beispiel der Tempomat-Status des vorangegangenen Zeitschritts als ein
weiteres Eingangssignal des Funktionsbausteins definiert. Damit ergibt sich die
Eingangssignalliste in Tabelle 1.
Der resultierende Funktionsbaustein ist gedächtnisfrei und wird ausschließlich durch
formale Wenn-Dann-Regeln beschrieben. Damit ist die Voraussetzung zur
automatisierten Testdurchführung gegeben. Weitere Funktionsbausteine, wie
beispielsweise Timer, Zähler und Integratoren, werden auf diese einfache Darstellung
zurückgeführt.
Eingangssignal | Wertebereich | Bedeutung
Ein-Aus-Schalter | {0, 1} | {aus, ein}
Taster für Geschwindigkeitsvorgabe | {0, 1} | {nicht gedrückt, gedrückt}
Bremse | {0, 1} | {inaktiv, aktiv}
Tempomat-Status des vorangegangenen Zeitschritts | {0, 1, 2} | {ausgeschaltet, regelnd, unterbrochen}

Tabelle 1: Eingangssignalliste für Tempomat-Statusgenerator
Die formale Spezifikation des Tempomat-Statusgenerators erfolgt in Form von Wenn-Dann-Regeln. Die Wenn-Bedingungen sind logische Ausdrücke für die Eingangssignale.
Die Dann-Bedingungen definieren das Sollverhalten des Funktionsbausteins durch
logische Ausdrücke für die Ausgangssignale. Als Beispiel wird die Transition
„Wiederaufnahme“ des Zustandsautomaten in Abbildung 2 betrachtet. Sie ist durch die
in Tabelle 2 dargestellte Wenn-Dann-Regel spezifiziert.
Name
R7
Funktion
Tempomat Wiederaufnahme
Wenn-Bedingung
Wenn Tempomat-Status des vorangegangenen Zeitschritts [idxCCState_z] gleich
unterbrochen
und Tempomat-Ein-Aus-Schalter [sigCCOnOff] gleich ein
und Taster für Geschwindigkeitsvorgabe [sigCCSetSpeed] gleich gedrückt
und Bremse [sigBrake] gleich inaktiv
Dann-Bedingung
Dann Tempomat-Status [idxCCState] gleich regelnd
Tabelle 2 Wenn-Dann-Regel zur „Tempomat-Wiederaufnahme“
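Rein illustrativ (keine 2dev®-Implementierung; die Python-Funktion regel_R7 und die Zustandskonstanten sind hypothetisch, die Signalnamen folgen Tabelle 2) lässt sich R7 als gedächtnisfreie Regel über den Eingangssignalen aus Tabelle 1 skizzieren:

# Skizze: Regel R7 als gedächtnisfreie Wenn-Dann-Regel.
# Zustandscodierung gemäß Tabelle 1: 0 = ausgeschaltet, 1 = regelnd, 2 = unterbrochen.
AUSGESCHALTET, REGELND, UNTERBROCHEN = 0, 1, 2

def regel_R7(idxCCState_z: int, sigCCOnOff: int,
             sigCCSetSpeed: int, sigBrake: int):
    """Liefert den geforderten Tempomat-Status, falls die Wenn-Bedingung
    von R7 (Wiederaufnahme) erfüllt ist, sonst None."""
    if (idxCCState_z == UNTERBROCHEN and sigCCOnOff == 1
            and sigCCSetSpeed == 1 and sigBrake == 0):
        return REGELND
    return None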
Insgesamt enthält die Spezifikation für dieses Beispiel neun Wenn-Dann-Regeln zur
vollständigen Beschreibung des Tempomat-Statusgenerators. Die obige Wenn-Dann-Regel wird zusammen mit den übrigen acht Regeln zur weiteren automatisierten
Verarbeitung gemäß Abbildung 1 in der formalen Spezifikation eingegeben.
Für den spezifizierten Funktionsbaustein lassen sich leicht alle 24 Testschritte durch
Permutation der Signalwerte (vgl. Wertebereich in Tabelle 1) automatisch generieren
(siehe Testgenerator in Abbildung 1). Der Modellgenerator überführt alle Anforderungen
in ein Stateflow-Modell (siehe Abbildung 3), welches mit den 24 Testschritten stimuliert
wird.
Abbildung 3: Tempomat-Statusgenerator mit rückgeführtem Zustand
Zur Erzeugung der Requirements Coverage werden die formalen Anforderungen, die
Stimuli und die Simulationsantworten herangezogen. Werden nicht alle 24 Testschritte
durch die neun Requirements erfasst, liegt eine Definitionslücke in der Spezifikation vor.
Existieren Anregungskombinationen, bei denen mehrere Wenn-Bedingungen erfüllt
werden, die betreffenden Dann-Bedingungen jedoch Unterschiedliches fordern, liegt ein
Widerspruch in der Spezifikation vor.
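Das Auswerteprinzip lässt sich durch die folgende stark vereinfachte Python-Skizze verdeutlichen (eigene Illustration, nicht die 2dev®-Implementierung; die Regelmenge ist hier nur durch R7 angedeutet, die vollständige Spezifikation umfasst neun Regeln):

from itertools import product

# Regeln als (Name, Wenn-Bedingung, geforderter Tempomat-Status).
regeln = [
    ("R7", lambda z, ein, taster, bremse:
        z == 2 and ein == 1 and taster == 1 and bremse == 0, 1),
]

definitionsluecken, widersprueche = [], []

# Alle 24 Testschritte durch Permutation der Wertebereiche aus Tabelle 1.
for ein, taster, bremse, z in product((0, 1), (0, 1), (0, 1), (0, 1, 2)):
    treffer = [(name, soll) for name, wenn, soll in regeln
               if wenn(z, ein, taster, bremse)]
    if not treffer:
        definitionsluecken.append((ein, taster, bremse, z))   # keine Anforderung aktiv
    elif len({soll for _, soll in treffer}) > 1:
        widersprueche.append((ein, taster, bremse, z))        # widersprüchliche Forderungen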
Auf diese Weise entsteht ein Funktionsmodell aus verifizierten Funktionsbausteinen. Die
Granularität der einzelnen Bausteine richtet sich maßgeblich nach der Verständlichkeit
der Anforderungen. Testfallgenerierung und Simulation stellen beachtliche
Minimalanforderungen hinsichtlich Speicherbedarf und Ausführungszeit. Der Einsatz
von 2dev® in Steuergeräte-Entwicklungsprojekten zeigt jedoch, dass ein sinnvoll
modularisiertes Funktionsmodell nicht an diese Grenzen stößt.
5 Zusammenfassung und Ausblick
Zur systematischen Spezifikation, Implementierung und Verifikation von Steuergeräte-Funktionsmodellen wird die Testbasierte Entwicklungsmethode 2dev® vorgestellt. Durch den Einsatz von 2dev® erhält der Entwicklungsingenieur mehr Sicherheit. 2dev® liefert
eindeutige Aussagen darüber, wann das Funktionsmodell fertig gestellt ist und wann die
notwendigen Tests erfolgreich abgeschlossen sind.
Dieses Verfahren wird derzeit zur direkten Verifikation der Spezifikation eingesetzt.
Ohne zeitaufwändige, manuelle Implementierung des Funktionsmodells wird die
Spezifikation direkt auf Vollständigkeit und Widerspruchsfreiheit hin überprüft. Die
automatisiert generierten Stateflow®-Charts lassen sich hinsichtlich der Laufzeit und
Codegröße weiter optimieren.
Die einfache Systematik bietet zukünftig auch die Möglichkeit, Testfälle für das gesamte
Funktionsmodell aus den Spezifikationen der einzelnen Funktionsbausteine zu
generieren. Diese Testfälle eignen sich nicht nur zur Verifikation, sondern auch zur
Validierung des Funktionsmodells.
2dev®-Spezifikationen können im Hardware-In-The-Loop-Test oder Fahrversuch
wiederverwendet werden. Die Ein- und Ausgangssignale der Funktionsbausteine lassen
sich mit Hilfe von Applikationswerkzeugen wie INCA® oder CANape® während der
Durchführung von Fahrmanövern aufzeichnen. Durch die Anwendung des 2dev®-Verifikationsverfahrens kann überprüft werden, ob das Ein-/Ausgangsverhalten dem
spezifizierten Verhalten entspricht. Die Auswertung der von 2dev® berechneten
Abdeckungsgrade führt außerdem zu einer direkten Aussage über die Vollständigkeit der
durchgeführten Fahrmanöver.
Literaturverzeichnis
[BS06] R. Bechtold, S. Harms, D. Baum, P. Karas: Neue Wege zur Entwicklung hochqualitativer Steuergerätesoftware – Hella AFS und GIGATRONIK 2dev®. Elektronik Automotive, Ausgabe 1/2005, S. 44-51.
[Co07] M. Conrad: Using Simulink and Real-Time Workshop Embedded Coder for Safety-Critical Automotive Applications. Tagungsband des Dagstuhl-Workshops MBEES 2007, Informatik-Bericht 2007-01, TU Braunschweig, S. 41-47.
[Kb02] K. Beck: Test Driven Development. By Example. Addison-Wesley Longman, Amsterdam, 2002.
[SC05] R. Starbek, U. Class: Vom Prototyp zum Seriencode. Elektronik Automotive, Ausgabe 1/2005, S. 108-110.
A Service-Oriented Approach to Failure Management
Vina Ermagan, Claudiu Farcas, Emilia Farcas, Ingolf H. Krüger, Massimiliano Menarini
Department of Computer Science
University of California, San Diego
9500 Gilman Drive
La Jolla, CA 92093-0404, USA
{vermagan, cfarcas, efarcas, ikrueger, mmenarini}@ucsd.edu
Abstract: Failure management for complex, embedded systems-of-systems is a
challenging task. In this paper, we argue that an interaction-based service notion
allows capturing of failures end-to-end from both the development-process and the
runtime perspective. To that end, we develop a formal system model introducing
services and components as partial and total behavior relations, respectively. A
failure ontology, refined for the logical and the deployment architecture, is presented. This ontology allows us to reduce the partiality of services by managing
failures. Furthermore, the ontology gives rise to an architecture definition language
capturing both logical and deployment architecture models together with a failure
hypothesis. Finally, we exploit these models for the formal verification (model checking) of properties under the given failure hypothesis.
1 Introduction
Computer systems have by now penetrated almost every aspect of our lives, changing
the way we interact with the physical world. For embedded systems that control physical
processes in areas where lives and assets are at stake, the demands for high software
quality have increased accordingly. Managing system complexity and quality together is
even more challenging as we move from monolithic systems to distributed systems of
systems. Quality concerns such as reliability, manageability, scalability, and dependability crosscut the entire integrated system and cannot be addressed at the level of individual components.
A key challenge in the design, deployment, and quality assurance for distributed reactive
systems is how to encapsulate crosscutting concerns while integrating functionality scattered among system components. Addressing this challenge requires a transition from
component-centric views to end-to-end system views, and placing the interactions between components in the center of attention. We need, therefore, modeling notations,
methodologies, and tool suites for precisely and seamlessly capturing the overall goals –
the services delivered by the interplay between distributed components – across all development phases.
A specific crosscutting concern in embedded systems is failure resilience. For example, a
car has thousands of software functions implemented across a set of distributed, networked Electronic Control Units (ECUs); the car manufacturer has to integrate the ECUs
from different suppliers into a coherent system-of-systems. Many errors cannot be managed on a per-component basis, because they arise from the interplay that emerges from
the composition of components. Currently, such errors are discovered late in the development process, or even after deployment, inducing high quality assurance and control
costs. Therefore, a failure management approach that considers the crosscutting aspects
of failures and failure mitigations is needed.
Problem statement. Because failure management in a logically or physically distributed
system is an end-to-end activity, the problem is how to approach model-based failure
management systematically throughout the development process. Important research
topics include, for instance, how to address both component- and service-level failures,
value-added exploitation of the failure models, and scalability of failure models to varying levels of detail. Furthermore, a failure management methodology should provide
low-overhead integration of failure models into existing development processes and the
traceability of failures across development activities.
Approach outline. In this paper, we present a service-oriented development approach
that establishes a clean separation between the services provided by the system, the failure models, and the architecture implementing the services. Our foundations are (1) a
precise interaction-based system model that defines components and services as complete and partial behaviors, respectively, and (2) a hierarchical failure model that defines
failure detection and mitigation mechanisms as wrappers around services. The key benefit lies in the binding between the failure model and the interaction-based service model;
this allows us to specify, detect, and mitigate failures at all scales: from the individual-message level to highly complex interaction patterns. Through model exploitation, we
obtain an integrated development process that goes from requirements elicitation and
domain modeling, to definition of logical and deployment architecture, to synthesis of
detectors and mitigators from service specifications, and to verification and validation.
The remainder of the paper is organized as follows. In Section 2, we present the system,
component, and service model based on the formal model of streams and relations on
streams. In Section 3, we introduce the failure model together with our approach to defining architectures based on service specifications under the failure model. In Section 4,
we demonstrate how to synthesize a deployment architecture model from a given architecture specification, and how to exploit this model for verifying fail-safety under a
given failure hypothesis (i.e., an assumption about what failures can occur concurrently).
Section 5 discusses our approach in the context of related work and Section 6 contains
our conclusions and outlook.
2 System, Component, and Service Model
We view a system as being decomposed into a set of components (or roles), which
communicate with one another and with the environment by exchanging messages over
directed channels. Each component has its own state space. We denote the sets of messages and channels by M and C, respectively. At any time instant t ∈ ℕ each component computes its output based on its current state and the messages received until time instant t − 1. Therefore, we can define the system behavior by the communication histories on the channels and the state changes of the components over time.
Automotive Example – CLS: Figure 1 presents a simplified structure diagram for the
Central Locking System (CLS) in automotives (a more detailed description of CLS can
be found in [KNP04,BKM07]). Upon impact, an Impact Sensor (IS) will send a signal to
the CLS Controller, which will command the Lock Manager (LM) to unlock the doors.
The diagram shows the system entities (IS, CONTROL, and LM) and the communication channels (isc, lmc, and clm) between them.
Figure 1. Simplified System Structure Diagram for the CLS
Assuming asynchronous communication allows us to model channel histories by means of streams, i.e., finite and infinite sequences of messages. We denote the set of finite sequences over M by M*. For a given time instant, the content of all channels (a channel valuation) is given by a mapping C → M*, which assigns a sequence of messages to each channel; this models that a component can send multiple messages over any channel at a given point in time. Using timed streams over message sequences, we can further model communication histories over time. We call the assignment of a timed stream to a channel an infinite valuation or history: a channel history is denoted by a mapping C → (ℕ → M*). Therefore, for a set of channels X ⊆ C, we denote by X⃗ = (ℕ → (X → M*)) the set of histories of the channels in X. For simplicity we work with a single channel type here – all channels are of the same type indicated by the message set M. It is an easy exercise to add a data type system to this model [BKM07].
Components have syntactic and semantic interfaces described by their sets of channels
and streams. In our terminology, we use channels to denote both component input and
output ports, and communication links that connect ports. For a given component, a
channel can be classified as input or output channel, depending on whether the component is the channel’s destination or source, respectively. We denote the sets of input and
output channels with I and O, respectively. The pair (I, O), denoted I ▸ O, defines the syntactic I/O interface of a component. The semantic interface defines the observable behavior of the component. We call I⃗ and O⃗ the sets of input and output channel histories, respectively. By focusing on the interactions, we can define the semantic interface in terms of I/O history relations Q : I⃗ → ℘(O⃗) as follows.
Definition (Component and Service Model). For a given syntactic I/O interface I ▸ O and I/O history relation Q : I⃗ → ℘(O⃗):
– Q is a component, if its I/O history relation is causal;
– Q is a service, if its I/O history relation is causal wrt. the service domain Dom.Q = {x ∈ I⃗ : Q.x ≠ ∅}.
Causality enforces that a component's or service's outputs at time t depend at most on the input histories strictly before t. We denote the prefix of length t of a stream x by x↓t.
Definition (Causality): For a given syntactic I/O interface I ▸ O and I/O history relation Q : I⃗ → ℘(O⃗), Q is causal, iff (for all x, z ∈ I⃗):
x↓t = z↓t ⇒ {y↓(t+1) : y ∈ Q.x} = {y↓(t+1) : y ∈ Q.z}.
A service has a syntactic I/O interface like a component, defining the channels by which
the service is connected to its environment. Both components and services relate input
histories with output histories, but components are left-total relations (i.e., there is at
least one output history for every input history), whereas services are partial relations
over the relevant input and output streams of the system. Consequently, services emerge
as generalizations of components.
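As a purely illustrative sketch (our own, not from the paper; the channel name env and the helpers prefix and impact_sensor are hypothetical), the stream model can be phrased in Python with channel histories as time-indexed message sequences and causal behaviors as functions that read only strict input prefixes:

from typing import Dict, List

Message = str
# A history assigns to each channel a timed stream: for every time instant,
# the finite sequence of messages sent on that channel in this instant.
History = Dict[str, List[List[Message]]]

def prefix(h: History, t: int) -> History:
    """The prefix of length t of a history (x↓t in the text)."""
    return {ch: stream[:t] for ch, stream in h.items()}

def impact_sensor(inputs: History, t: int) -> List[Message]:
    """Output of the IS role on channel isc at time t. Causality: the output
    may depend only on inputs strictly before t."""
    past = prefix(inputs, t)
    crash_detected = any("crash" in step for step in past.get("env", []))
    return ["Impact()"] if crash_detected else []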
Architectures emerge from the parallel composition of components and services. Each
service describes a certain projection of the overall system behavior providing a partial
view on each of the components participating in its execution, whereas a component is
complete in the sense that it describes the composition of all services in which it participates. Services “orchestrate” the interaction among system components to achieve a
certain goal. Thus, we view services as formalized specializations of use cases to specify
interaction scenarios.
Our model highlights the duality between partial and complete knowledge about system
structure and behavior. There is a broad spectrum of description techniques available,
such as state machines to model components and sequence diagrams to model services.
For both state machines and sequence diagrams there exist straightforward formalizations in terms of the system model introduced above. In short, the stream relation we
assign to a state machine emerges as an extension of the associated state transition relation over time. This embeds state machines into the semantic model. Furthermore, there
is a transformation algorithm [Kr99,Kr00] from sequence diagrams to state machines,
which then induces an embedding of sequence diagrams into the semantic model. We
need this last embedding in the following section, where we introduce failure models and
architecture definitions as refinements of the interaction models underlying service
specifications.
For a full treatment of the formalization and the corresponding methodological consequences we refer the reader to [BKM07]; this reference also details specific structuring
principles for service specifications, such as splicing, specification by projection, and
refinement notions.
[Figure omitted: overview of the MDA-style approach – an ontology with failure taxonomy and failure hypothesis is combined with a logical model (interaction specifications) and a deployment model (deployment diagrams); the mapping between the models enables code generation and verification & validation.]
Figure 2. Model-based failure management approach
3 Service-Oriented Failure Model and Architecture Definition
Our failure management approach, depicted in Figure 2, combines elements of Model
Driven Architectures (MDA) [MCF03] with Service-Oriented Architectures (SOA): an
ontology encompassing a failure taxonomy, a service-oriented logical architecture based
on interactions (Figure 3) as a representation of a Platform Independent Model (PIM),
and a deployment model (Figure 5) as a representation of a Platform Specific Model
(PSM). We decouple the functional aspects of system behavior from the treatment of
failures, and enrich the logical and deployment models typical of any MDA, with a failure hypothesis. We use a Service Architecture Definition Language (SADL) to express
these elements in textual form. By mapping the logical and deployment models together,
we provide a seamless approach towards end-to-end fail-safe software design and development.
A failure taxonomy is a domain specific concept [Le95] (e.g., our general failure taxonomy for the automotive domain [Er07]). A comprehensive taxonomy for failures and
failure management entities is essential from the very early phases of the software and
systems engineering process. In our previous work [EKM06], we created such a taxonomy, starting with the central entity - Failure, and then specifying two types of entities in
close relationship with the Failure: Causes and Effects. A Failure has at least one Cause
and manifests itself through one or more Effects. Hence, an Effect is the environment-observable result of a Failure occurrence. We then define a Detector as an entity capable
of detecting the occurrence of a Failure by monitoring its Effect. Consequently, it is
important to define what type of Effects a failure can have, and leverage them to create
corresponding Detectors. The Cause of a Failure is highly dependent on the application
domain; for instance, it could be a software, computational hardware, or communication
problem. An Effect has an associated Risk Level, depending on its consequences on the
environment. An Effect can also be manifested as Unexpected Behavior (i.e., the occurrence of an event that was not supposed to happen) or a Non Occurrence Behavior (i.e.,
absence of an event that was supposed to happen).
3.1 Logical Model for Service-Oriented Architectures: The Service Repository
The main entities of our logical architecture model are Services (Figure 3), which capture the various functionalities of the system in the form of interaction patterns. In short,
every service is associated with an Interaction Element, which can be either atomic, or
composite. Single Events (send or receive) and Messages composed from two corresponding send and receive events are examples of atomic interactions. Composite interactions emerge from the application of composition Operators such as sequencing, alternatives, loops, or parallel composition to Operands. Note that the syntactic model
contained in the lower box of Figure 3 describes the heart of the language of sequence
diagrams or Message Sequence Charts (MSCs) [Kr00] using the composite pattern
[Ga95]. Consequently, we obtain a straightforward embedding of a service specification
according to Figure 3 into the semantic framework as introduced in Section 2.
To illustrate the other entities of Figure 3 in more detail, we observe that this figure
defines an abstract syntax for our SADL. Consider the example SADL specification in
Figure 4. It again describes aspects of the CLS case study. We begin with an Architecture block (Figure 4a). An Architecture block has a Service Repository, which encapsulates the definition of the Roles and the Services in the system, and a Configuration; these represent the logical and the deployment models, respectively. It also has a concurrent-failure number that is part of the failure hypothesis and describes an upper bound on the number of failures that can occur at the same time in the system.
Figure 3. Logical Model
We exemplify in Figure 4 the following interaction: upon impact, IS will send on channel isc an Impact() message to CONTROL, which then, on channel clm, sends an unlck()
message to LM. The service ends by CONTROL receiving on channel lmc the acknowledgement unlck_ok() of the unlocking from LM and then, on channel clm, confirming
this receipt.
architecture CLS
  service_repository CLS_service_repository
    description Central locking system services
    roles
      role CONTROL
        description Controller of all CLS functions
        states LCKD, UNLD
        ...
      end
      role LM
        description Interface to the Lock System.
        states INITIAL
        ...
      end
      role IS
        description Sensor for detecting an impact
        states INITIAL, FINAL
        ...
      end
    end
    service CLS_2
      interaction
        MSG(IS, CONTROL, isc, Impact()) ;
        MSG(CONTROL, LM, clm, unlck()) ;
        MSG(LM, CONTROL, lmc, unlck_ok()) ;
        MSG(CONTROL, LM, clm, ok()))
    end
a)

    managed service CLS_1
      service CLS_2
      detector deadline_failure (
        MSG(IS, CONTROL, isc, Impact()) :
        MSG(CONTROL, LM, clm, unlck()) ;
        MSG(LM, CONTROL, lmc, unlck_ok()),
        deadline 10 unit ms)
      mitigator
        MSG(CLS_1_D, LM, clm, unlck()) ;
        MSG(LM, CONTROL, lmc, unlck_ok())
    end
b)

  configuration CLS_deployment
    component
      type ImpactSensor : sensor
      description Sensor that detects the impact.
      plays IS in CLS_1, CLS_2
      can_fail true
    end
    component
      type Controller_component : ECU
      description Controlling unit for the CLS system.
      plays CONTROL in CLS_1, CLS_2
      can_fail true
    end
    component
      type LockMotor_component : actuator
      description The interface to the locking system.
      plays LM in CLS_1, CLS_2
      can_fail false
    end
    component
      type FailManager_component : ECU
      description A computational unit used
      plays CLS_1_D in CLS_1
      can_fail false
    end
    connection
      type CANBus_connection : CANBus
      description The CANBus used to connect the CLS
      plays clm,lmc,isc in CLS_1, CLS_2
      can_fail false
      can_loose_message true
    end
    connections
      CAN:CANBus_connection
    end
    components
      (ImpactSensor1: ImpactSensor is_connected_to CAN)
      (ImpactSensor2: ImpactSensor is_connected_to CAN)
      (FailManager: FailManager_component is_connected_to CAN)
      (LockMotor: LockMotor_component is_connected_to CAN)
      (Controller: Controller_component is_connected_to CAN)
    end
  end
  concurrent_failures 1
end
c)

Figure 4. Service ADL: a) Service repository; b) Managed Service; c) Configuration.
This interaction-based model provides the basics for specifying regular behaviors of a
system in terms of composite interaction patterns. We add two elements to this model,
namely Detection and Mitigation, to make the system capable of handling failures (see
the upper box of Figure 3). We still retain the ability to specify services that do not need
failure management in order to keep the system specification simple whenever possible.
To that end, we define two subtypes of Services: Unmanaged Services and Managed
Services. Unmanaged Services define system behavior without considering failures.
Managed Services require failure management. Following the decorator pattern [Ga95],
a Managed Service includes another service within it, which describes the interaction
scenario that needs to be fail-safe. This service can be an Unmanaged or a Managed
Service, giving the model the ability to create multiple layers of failure management.
Each Managed Service has a Detector and a Mitigator. The Detector is responsible for
monitoring the service defined inside the Managed Service, and detects an eventual
failure by observing its Effects. A Detector may employ different Detection Strategies.
For example, one Detection Strategy can be based on observing interactions; another
strategy could be based on observing the internal states of relevant components. A common mechanism for detecting failures is the use of time-outs. Time is captured in the
model in the form of Deadlines for Interaction Elements. An Effect of a failure could be
a missed Deadline. Upon occurrence of a failure, the Detector will activate the corresponding Mitigator, responsible for handling the failure occurrence. A Mitigator includes a service that specifies its behaviors and may leverage one or more Mitigation
Strategies to address the failure. Figure 4b shows how we can wrap a Service in a Detector and Mitigator layer and augment it to a Managed Service in our SADL. The Detector
detects if LM does not acknowledge the door unlocking within 10 ms after CONTROL
sent the unlock() message. As one possible mitigation strategy, the exemplified Mitigator resends the unlock() directly to LM.
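The decorator idea behind Managed Services can be sketched as follows (a simplified Python illustration of the wrapping, not the SADL semantics or the M2Code implementation; here the deadline is merely checked after the wrapped interaction returns):

import time
from typing import Callable

class ManagedService:
    """Decorator-style wrapper: run the managed interaction, detect a missed
    deadline, and activate the mitigator on failure."""

    def __init__(self, service: Callable[[], None],
                 mitigator: Callable[[], None], deadline_s: float):
        self.service = service
        self.mitigator = mitigator
        self.deadline_s = deadline_s

    def run(self) -> None:
        start = time.monotonic()
        self.service()                      # e.g. CONTROL sends unlck() to LM
        if time.monotonic() - start > self.deadline_s:
            self.mitigator()                # e.g. resend unlck() directly to LM

# Usage sketch: wrap the unlock interaction with a 10 ms deadline.
cls_1 = ManagedService(service=lambda: None,
                       mitigator=lambda: print("mitigate: resend unlck()"),
                       deadline_s=0.010)
cls_1.run()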
3.2 Deployment Model and the Mapping: Configuration
In distributed real time systems, failures happen at runtime. In the interest of a seamless
treatment of failures throughout the development process, however, it is useful to trace
runtime failures that occur in the deployed system to the related entities of the logical
architecture.
[Figure omitted: class diagram relating Components (simple or compound; Sensors, Actuators, Controllers), their Ports, Connections (e.g. Wires carrying Signals), and the Failures that affect Components and Connections.]
Figure 5. Deployment Model
Figure 5 depicts a simplified physical deployment model. Its main active entities are
Components. Actuators, Sensors, and Controllers are examples of different component
types in automotive systems. Components (following the composite pattern [Ga95]) can
be simple or compound. Components have Ports, which are entry points for Connections
to other Components. One example for a connection is a Wire with electrical Signals
passing over it; another example is a wireless connection.
The SADL represents the deployment model using a Configuration block (Figure 4c). A
Configuration block includes a number of Component type and Connection type definitions. Each Component type plays one or more logical roles. This “plays” relationship
provides the main grounds for mapping of the logical model to the deployment model:
roles are mapped to components, channels are mapped to connections, and messages are
mapped to signals. Each Component type is also augmented with a Can Fail part capturing the failure hypothesis for that component; if a component can fail, then all the roles
mapped to that component can fail. The Configuration block also includes an instantiation part. Connections are instantiated from the Connection types. Components are instantiated from Component types and are connected to specific Connection instances to
form a physical deployment model for the system.
4 Synthesis and Verification
In this section we establish the binding between the failure models described in Section 3
and the system model of Section 2; this binding then allows us to employ formal verification techniques to verify the fail-safety of a system specified in SADL under a given
failure hypothesis. In short, the binding relates the components of the deployment architecture with the roles they play in the respective services. The role behavior is extracted
from the interaction patterns defined in the service repository using the algorithm translating sequence diagrams into state machines alluded to in Section 2. Finally, the resulting state machines are augmented with role behaviors for the failure detectors and mitigators to effect the failure management. The resulting set of state machines by
construction implements all the behaviors specified in the service repository. To ensure
that the system failure management is adequate, we have to verify that the architecture
satisfies temporal properties even in presence of failures. In the following, we describe
this process in more detail, and explain our implementation of a prototypic verification
tool based on the SPIN model checker [Ho03].
The SADL described in Section 3 has four elements: (1) a logical model, composed of
services specified by interaction patterns (in essence, sequence diagrams in textual
form); (2) a deployment model, mapping interaction roles to components; (3) a failure-management model, described by Detectors and Mitigators of managed services; and
finally (4) a failure hypothesis. Our M2Code v.2.0 tool translates a SADL specification
into a Promela model suitable for verification using SPIN. The tool, developed in Ruby,
performs syntax analysis and checking and generates state machines for services, detectors, and mitigators. The translation proceeds as follows.
For each role, the tool generates a state machine that captures the participation of this
role in all interaction patterns. This translation is possible because our service language
specifies interactions as causal MSCs: each role can locally choose the next step it has to
engage in to implement the global interaction. [Kr99,Kr00] presents the MSC to state
charts synthesis algorithm in detail. In essence, the algorithm inserts intermediate states
between all incoming/outgoing messages of a role axis; then, the various role axes that
describe a component’s interaction patterns are combined into a single state machine for
that component.
Figure 4. State Machine for an Impact Sensor Role
The result of our generation algorithm is a set of Mealy automata. Each automaton captures the behavior of one role. Figure 4 shows the state machine obtained for one role in
the CLS case study. Upon detecting an impact, the impact sensor sends an “Impact”
message on two channels. The resulting state machine has three states and two transitions. The first transition sends “Impact” on the IC9 channel and the second on the IU12
channel. On state machine transitions, we follow the convention of indicating send and
receive events with “!” and “?” prefixes, respectively. The state labels are automatically
generated by the M2Code tool implementing the algorithm.
Figure 5. Modified State Machine for the Impact Sensor
The next step in system verification requires modifying the state machines obtained by
the synthesis algorithm to account for the failures defined in the failure hypothesis. To
this end, M2Code introduces states and transitions into the deployment component state
machines generated as described above. For complete component failures, for instance,
M2Code adds a sink state to the state machine. It then adds a transition from all other
states to the sink state guarded by failure messages. Figure 5 shows the modifications
applied to the example state machine from Figure 4. The three new transitions added by
M2Code are guarded by a “kill” message on the ERRIS channel. This channel is an
artifact introduced in the verification model to inject failures according to the failure
hypothesis. The decision to send the kill message is taken by a Promela “failure injector”
process that implements the failure hypothesis. This driver is also automatically generated by M2Code.
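A minimal sketch of this transformation (our own Python illustration, not M2Code output; the machine encoding, the FAILED sink state, and the env?crash input are assumptions, while the kill message on the ERRIS channel and the IC9/IU12 channels follow the description above):

from typing import Dict, Tuple

# A Mealy-style machine as a map: (state, input_message) -> (next_state, output).
Machine = Dict[Tuple[str, str], Tuple[str, str]]

def inject_complete_failure(machine: Machine, kill_msg: str = "ERRIS?kill") -> Machine:
    """Add a sink state FAILED and, from every state, a transition guarded by
    the kill message, modelling a complete component failure."""
    augmented = dict(machine)
    states = {s for (s, _) in machine} | {s for (s, _) in machine.values()}
    for state in states:
        augmented[(state, kill_msg)] = ("FAILED", "")   # no output on failure
    return augmented

# Usage: the generated impact-sensor machine sends Impact on two channels.
impact_sensor = {
    ("S0", "env?crash"): ("S1", "IC9!Impact"),
    ("S1", ""):          ("S2", "IU12!Impact"),
}
print(inject_complete_failure(impact_sensor))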
Mitigators are also modeled as state machines. Their definition is based on interactions
and, therefore, the translation follows the same technique described earlier. A special
treatment is reserved for Detector’s deadlines. When a deadline is present, M2Code
creates non-deterministic transitions from the states that are part of the service to be
managed by the detector, to the state that initiates the corresponding mitigation strategy.
Note that this algorithm results in a set of state machines for the set of deployment components that together define the deployment architecture for the system defined in the
SADL specification. To define a fail-safe system we give an abstract specification as an
SADL specification that has only unmanaged services (we call this specification F1) and
a concrete specification that extends the abstract one such that some services are managed (F2). The concrete specification's set of behaviors (expressed as a set of channel
valuations in Section 2) is a subset of F1’s behaviors under projection on the messages
and channels defined in F1. F2 is fail-safe with respect to its failure hypothesis if, even
when the failures defined in the hypothesis affect the execution, its set of projected behaviors remains a subset of F1’s behaviors.
Next, we exploit this observation to formally prove fail-safety under a given failure
hypothesis. For each state machine representing a deployment component type, M2Code
generates a Promela program that encodes it. Moreover, it adds a failure injector in the
Promela program. The failure injector models the failure hypothesis provided in the
SADL and triggers failures during the verification process accordingly. With this machinery at our disposal, we can verify, for instance, LTL properties that encode the
safety concerns of our system (the abstract model). One use of this technique in
our development process is to identify the need for new Detectors and Mitigators upfront, before even starting the implementation. Furthermore, we use this approach to
identify inconsistencies within the failure hypothesis itself and improve it as part of our
integration with FMEA [Re79] and FTA [Ve81] techniques.
As an example, we have applied this tool to the CLS specification of Figure 4 to prove
that, under the given failure hypothesis of at most one complete component failure per run and at most one message lost on the CAN bus, the system will
unlock all doors after a crash. We used the tool to elicit the failure hypothesis in a sequence of iterations. In fact, we elicited the requirement to replicate the impact sensor
and to limit the number of messages lost in the CAN bus by executing the verification
process.
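For illustration, such a property can be stated in SPIN's LTL syntax roughly as follows; the boolean flags and the stub process are assumptions standing in for the generated CLS model and are not output of our tool.

   /* Flags assumed to be set by the generated CLS model. */
   bool impact_seen    = false;
   bool doors_unlocked = false;

   /* Stub standing in for the generated deployment components. */
   active proctype CLSStub() {
     impact_seen = true;
     doors_unlocked = true
   }

   /* Safety concern: every impact is eventually followed by unlocking all doors. */
   ltl unlock_after_crash { [] (impact_seen -> <> doors_unlocked) }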
5 Related Work
Our notion of a service as a partial interaction specification captures, and goes beyond, the various uses of the term service in the telecommunications domain and for web
services. In telecommunication systems, features encapsulate self-contained pieces of
functionality used to structure the interfaces of components [Pr03]. Feature interactions
[Za03] should not lead to inconsistent behaviors. In contrast to our approach, this service
notion is local to individual components, and their interplay is considered afterwards.
Our model scales across different levels of abstraction. The consequences of modeling services as partial behaviors and components as total behaviors are that components are
themselves services, components may offer a family of services, and services emerge
from the interplay of several components.
Web services and the corresponding technologies view services as implementation building blocks and provide open standards for technical integration concerns such as service
discovery and binding. Nevertheless, web services coordinate workflows among domain
objects, and may depend on other services. Therefore, the view of services as orchestrators of other services and as Facades to the underlying domain objects [Ev03] is gaining
popularity. In contrast to web services, we provide a formal model of services and use
services as first-class modeling entities seamlessly throughout the development process.
Our approach is related to Model-Driven Architecture [MCF03], but we consider services as modeling entities in both the abstract and the concrete models. We based our
formal model on streams and relations over streams because they conveniently model
component collaboration and directly support partial and complete behavior specifications. State-based formalisms such as statecharts [Ha87] focus on complete component
specifications rather than the architectural decomposition we aim at for end-to-end failure specifications.
In our approach, we use standard model-checking techniques. In particular, we use the
SPIN model checker [Ho03] to verify parallel execution of automata generated from
MSCs using the algorithm described in [Kr99,Kr00]. Other approaches that support
model checking on interaction-based specifications exist. For example, [Ku05] gives a
semantics for live sequence charts (LSCs) based on temporal logic, effectively reducing
the model checking of LSCs to the verification of temporal logic formulae. [AY99]
addresses the specific issues of model checking message sequence charts. Our approach
differs by focusing on failures and providing techniques to modify the models to include
the failure hypothesis. Our approach allows for flexible model-based development for
reactive fail-safe distributed systems and fits the automotive domain and others with
similar characteristics, including avionics and telecommunications.
[AK98] describes how failure-tolerant systems are obtained by composing intolerant
systems with two types of components – detectors and correctors – and proves that these
are sufficient to create fail-safe and non-masking tolerant systems, respectively. We have
similar detector and mitigator specifications; however, we explicitly consider the
communication infrastructure of distributed systems, and our detectors need only be
aware of the messages exchanged by components. Similar concepts are also present in
the Fault Tolerant CORBA [OMG04,Ba01] as detectors and recovery services. Whereas
these rely on heartbeat messages for their detectors to identify deadlocks and crashes, our
approach is more general and allows us to detect unexpected-behavior and non-occurrence
failures while providing a much more flexible mitigation strategy. [GJ04] extends the
approach from [AK98] to non-fusion-closed systems by introducing history variables as
needed. However, the system model still does not explicitly consider communication
by means of messages. Another related approach from [AAE04] describes how to synthesize fault-tolerant programs from CTL specifications and allows generating not only
the program behavior but also detectors and correctors. Nevertheless, their technique is
subject to the state explosion problem. Different types of tolerance are analyzed: masking tolerance, non-masking tolerance, and fail-safe tolerance. In our ontology, we can
easily obtain all three types of tolerance (depending on the redundancy and the type of
mitigator).
To reduce the risk of implementation errors, our mitigators can exploit a technique similar
to N-Version Software [CA95]. Such systems can switch to a reduced-functionality safe mode using a simpler version of the code, which (hopefully) does not
contain the error that triggered the malfunction. Our approach can also leverage previous
work on software black boxes [EM00], where features are assigned to modules and
hooks are used to record procedure calls. Both approaches would fit the AUTOSAR
main requirements [AUT08], which ask for redundancy support and FMEA compatibility
from any implemented system. Our model could also be combined with architectural
reconfiguration approaches such as the one presented in [TSG04,GH06].
6 Summary and Outlook
In this paper we have bridged the gap between a precise foundation for services and
components, and a practical application of this foundation to prove fail-safety under a
given failure hypothesis. To that end, we have introduced a domain model for service-oriented architecture specifications, augmented with a model for failure detection and
mitigation. This model gave rise to an architecture definition language for service-oriented systems. This language allows us to specify (a) services as interaction patterns
at the logical architecture level, (b) deployment architectures, (c) a mapping from logical
to deployment models, and (d) failure detectors and mitigators at all levels of the architecture. Because failure detectors and mitigators are tied to the interaction patterns that
define services in our approach, we were able to establish a formal relationship between
a service specification with and without failure management specification. We have
exploited this relationship by building a prototypic verification tool that checks if a given
service architecture is fail-safe under a given failure hypothesis.
There is ample opportunity for future research. We give two examples. First, the failure
model itself can be interpreted as the basis for a rich domain language for failure specifications. This will give rise to a set of failure management service/design patterns that
could be incorporated into a standard library for the SADL. Second, the verification tool
produces a counterexample as its output if the system is not fail-safe. This counterexample
can be analyzed for clues on which failure management pattern to use to mitigate the failure.
References
[AAE04] Attie, P. C., Arora, A., Emerson, E. A.: Synthesis of fault-tolerant concurrent programs. ACM Transactions on Programming Languages and Systems, vol. 26, 2004; pp. 125–185.
[AK98] Arora, A., Kulkarni, S. S.: Component based design of multitolerant systems. IEEE Transactions on Software Engineering, vol. 24, 1998; pp. 63–78.
[AUT08] AUTOSAR: AUTOSAR Main Requirements. http://www.autosar.org/download/AUTOSAR_MainRequirements.pdf, Version 2.0.4, Revision 2008.
[AY99] Alur, R., Yannakakis, M.: Model Checking of Message Sequence Charts. In: Proc. 10th Intl. Conf. on Concurrency Theory (CONCUR'99), LNCS vol. 1664, Springer Berlin/Heidelberg, 1999; pp. 114–129.
[Ba01] Baldoni, R., Marchetti, C., Virgillito, A., Zito, F.: Failure Management for FT-CORBA Applications. In: Proceedings of the 6th International Workshop on Object-Oriented Real-Time Dependable Systems, 2001; pp. 186–193.
[BKM07] Broy, M., Krüger, I. H., Meisinger, M.: A formal model of services. ACM Transactions on Software Engineering and Methodology (TOSEM), 16(1):5, Feb. 2007.
[CA95] Chen, L., Avizienis, A.: N-Version Programming: A Fault-Tolerance Approach to Reliability of Software Operation. In: Software Fault Tolerance, Wiley, 1995; pp. 23–46.
[EKM06] Ermagan, V., Krüger, I. H., Menarini, M.: Model-Based Failure Management for Distributed Reactive Systems. In (Kordon, F., Sokolsky, O., eds.): Proceedings of the 13th Monterey Workshop: Composition of Embedded Systems, Scientific and Industrial Issues, Oct. 2006, Paris, France. LNCS vol. 4888, Springer, 2007; pp. 53–74.
[EM00] Elbaum, S., Munson, J. C.: Software Black Box: an Alternative Mechanism for Failure Analysis. In: Proceedings of the 11th International Symposium on Software Reliability Engineering, 2000; pp. 365–376.
[Er07] Ermagan, V., Krueger, I., Menarini, M., Mizutani, J. I., Oguchi, K., Weir, D.: Towards model-based failure-management for automotive software. In: Proceedings of the ICSE Fourth International Workshop on Software Engineering for Automotive Systems (SEAS), IEEE Computer Society, 2007.
[Ev03] Evans, E.: Domain Driven Design. Addison-Wesley, 2003.
[Ga95] Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, Mass., 1995.
[GH06] Giese, H., Henkler, S.: Architecture-driven platform independent deterministic replay for distributed hard real-time systems. ACM Press, New York, NY, USA, 2006; pp. 28–38.
[GJ04] Gärtner, F. C., Jhumka, A.: Automating the Addition of Fail-Safe Fault-Tolerance: Beyond Fusion-Closed Specifications. In: Formal Techniques, Modelling and Analysis of Timed and Fault-Tolerant Systems, LNCS vol. 3253, Springer Berlin/Heidelberg, 2004; pp. 183–198.
[Ha87] Harel, D.: Statecharts: A Visual Formalism for Complex Systems. Science of Computer Programming, vol. 8(3), Elsevier Science Publishers B.V., 1987; pp. 231–274.
[Ho03] Holzmann, G. J.: The Spin Model Checker: Primer and Reference Manual. Addison-Wesley, 2003.
[Ku05] Kugler, H., Harel, D., Pnueli, A., Lu, Y., Bontemps, Y.: Temporal Logic for Scenario-Based Specifications. In: Tools and Algorithms for the Construction and Analysis of Systems, LNCS vol. 3440, Springer Berlin/Heidelberg, 2005; pp. 445–460.
[KNP04] Krüger, I. H., Nelson, E. C., Prasad, K. V.: Service-based Software Development for Automotive Applications. In: Proceedings of CONVERGENCE 2004, Convergence Transportation Electronics Association, 2004.
[Kr99] Krüger, I., Grosu, R., Scholz, P., Broy, M.: From MSCs to statecharts. In (Rammig, F. J., ed.): Proceedings of the IFIP WG10.3/WG10.5 International Workshop on Distributed and Parallel Embedded Systems, Kluwer Academic Publishers, 1999; pp. 61–71.
[Kr00] Krüger, I.: Distributed System Design with Message Sequence Charts. Ph.D. dissertation, Technische Universität München, 2000.
[Le95] Leveson, N. G.: Safeware: System Safety and Computers. ACM Press, New York, 1995.
[MCF03] Mellor, S., Clark, A., Futagami, T.: Guest Editors' Introduction: Model-Driven Development. IEEE Software, vol. 20(5), IEEE Computer Society Press, 2003.
[OMG04] Object Management Group: Fault Tolerant CORBA. In: CORBA 3.0.3 Specification, vol. formal/04-03-21, OMG, 2004.
[Pr03] Prehofer, C.: Plug-and-Play Composition of Features and Feature Interactions with Statechart Diagrams. In: Proceedings of the 7th International Workshop on Feature Interactions in Telecommunications and Software Systems, Ottawa, 2003; pp. 43–58.
[Re79] Reifer, D. J.: Software failure modes and effects analysis. IEEE Transactions on Software Engineering, vol. 28, 1979; pp. 247–249.
[TSG04] Tichy, M., Schilling, D., Giese, H.: Design of self-managing dependable systems with UML and fault tolerance patterns. In: Proceedings of the 1st ACM SIGSOFT Workshop on Self-managed Systems (WOSS'04), ACM Press, New York, NY, USA, 2004; pp. 105–109.
[Ve81] Vesely, W. E., Goldberg, F. F., Roberts, N. H., Haasl, D. H.: Fault Tree Handbook (NUREG-0492). Systems and Reliability Research, Office of Nuclear Regulatory Research, US Nuclear Regulatory Commission, Washington, DC, 1981.
[Za03] Zave, P.: Feature Interactions and Formal Specifications in Telecommunications. IEEE Computer, 26(8), IEEE Computer Society, 2003; pp. 20–30.
Technische Universität Braunschweig
Informatik-Berichte ab Nr. 2003-07
2003-07  M. Meyer, H. G. Matthies: State-Space Representation of Instationary Two-Dimensional Airfoil Aerodynamics
2003-08  H. G. Matthies, A. Keese: Galerkin Methods for Linear and Nonlinear Elliptic Stochastic Partial Differential Equations
2003-09  A. Keese, H. G. Matthies: Parallel Computation of Stochastic Groundwater Flow
2003-10  M. Mutz, M. Huhn: Automated Statechart Analysis for User-defined Design Rules
2004-01  T.-P. Fries, H. G. Matthies: A Review of Petrov-Galerkin Stabilization Approaches and an Extension to Meshfree Methods
2004-02  B. Mathiak, S. Eckstein: Automatische Lernverfahren zur Analyse von biomedizinischer Literatur
2005-01  T. Klein, B. Rumpe, B. Schätz (Herausgeber): Tagungsband des Dagstuhl-Workshop MBEES 2005: Modellbasierte Entwicklung eingebetteter Systeme
2005-02  T.-P. Fries, H. G. Matthies: A Stabilized and Coupled Meshfree/Meshbased Method for the Incompressible Navier-Stokes Equations — Part I: Stabilization
2005-03  T.-P. Fries, H. G. Matthies: A Stabilized and Coupled Meshfree/Meshbased Method for the Incompressible Navier-Stokes Equations — Part II: Coupling
2005-04  H. Krahn, B. Rumpe: Evolution von Software-Architekturen
2005-05  O. Kayser-Herold, H. G. Matthies: Least-Squares FEM, Literature Review
2005-06  T. Mücke, U. Goltz: Single Run Coverage Criteria subsume EX-Weak Mutation Coverage
2005-07  T. Mücke, M. Huhn: Minimizing Test Execution Time During Test Generation
2005-08  B. Florentz, M. Huhn: A Metamodel for Architecture Evaluation
2006-01  T. Klein, B. Rumpe, B. Schätz (Herausgeber): Tagungsband des Dagstuhl-Workshop MBEES 2006: Modellbasierte Entwicklung eingebetteter Systeme
2006-02  T. Mücke, B. Florentz, C. Diefer: Generating Interpreters from Elementary Syntax and Semantics Descriptions
2006-03  B. Gajanovic, B. Rumpe: Isabelle/HOL-Umsetzung strombasierter Definitionen zur Verifikation von verteilten, asynchron kommunizierenden Systemen
2006-04  H. Grönniger, H. Krahn, B. Rumpe, M. Schindler, S. Völkel: Handbuch zu MontiCore 1.0 - Ein Framework zur Erstellung und Verarbeitung domänenspezifischer Sprachen
2007-01  M. Conrad, H. Giese, B. Rumpe, B. Schätz (Hrsg.): Tagungsband Dagstuhl-Workshop MBEES: Modellbasierte Entwicklung eingebetteter Systeme III
2007-02  J. Rang: Design of DIRK schemes for solving the Navier-Stokes-equations
2007-03  B. Bügling, M. Krosche: Coupling the CTL and MATLAB
2007-04  C. Knieke, M. Huhn: Executable Requirements Specification: An Extension for UML 2 Activity Diagrams
2008-01  T. Klein, B. Rumpe (Hrsg.): Workshop Modellbasierte Entwicklung von eingebetteten Fahrzeugfunktionen, Tagungsband
2008-02  H. Giese, M. Huhn, U. Nickel, B. Schätz (Hrsg.): Tagungsband des Dagstuhl-Workshop MBEES: Modellbasierte Entwicklung eingebetteter Systeme IV