A Software Framework for Real-Time and Distributed Robot and Machine Control
Een Software Raamwerk voor Ware Tijd en
Gedistribueerde Robot- en Machinecontrole
Peter Soetens
Ph.D. thesis
Katholieke Universiteit Leuven, Belgium
30 May 2006
© Katholieke Universiteit Leuven
Faculteit Ingenieurswetenschappen
Arenbergkasteel, B-3001 Heverlee (Leuven), België
All rights reserved. No part of this publication may be reproduced in any
form, by print, photoprint, microfilm or any other means without written
permission from the publisher.
D/2006/7515/21
ISBN 90-5682-687-5
UDC 681.3∗I29
Voorwoord
About five years ago, in a tiny student room in Leuven, Dieter and I stood pointing at a computer screen while the professor asked interested questions about our little project. A simulated modular robot moved in 3D on our screen.

We were two students of the Master in Artificial Intelligence who had chosen to do a software project for the course 'Robotics'. Since the professor had left open what exactly we could do, we saw our chance to push through our demands: we wanted to do it under Linux and we were going to make the software we wrote freely available on the internet. We did not know whether the university actually allowed this, but in a 'what they don't know won't hurt them' spirit we had only mentioned it in passing. The professor asked no further questions about it; well, we had covered ourselves. The project was now finished enough and the colourful, blocky robot swerved imperturbably across our screen. We wanted to do so much more with it: intelligence, collision detection, a simulated world for the robot to live in, ... But we were told that we had done more than enough. And so it was. The professor put his bicycle helmet back on and left.

Some time later, the exams must have just finished, I received an e-mail asking whether I was interested in getting a real, preferably intelligent, robot up and running under Linux. The professor had started a new project from the ground up for which he still needed someone. I hesitated. I had no ambition at all to become a research assistant and I knew nobody there. I rather saw myself becoming a C++ programmer or a Linux consultant. Then again, I had not yet applied for any job either. After some hesitation I accepted after all. On arriving for my first working day I must have nearly jumped out of my skin. Both the office and the labs were running Linux computers. The man I was about to start working for was apparently the one who campaigned for the use of Free software and open standards within the university and who, moreover, was called by the media whenever they needed an expert on Free or Open software. Fittingly, the project was called 'Open Robot Control Software', Orocos.

A lot has happened since Herman pointed me to the path towards the top of that enormous mountain. The mountain was so far away then that it still looked small, and even then I was convinced that you could get anything done with software. Boy, was I wrong. One still cannot give a robot basic intelligence without the help of others. First I met Klaas and Tine, who investigated how a machine can understand how its environment fits together. Or Bob, who investigated how the control of a robot can be spread over multiple computers. And then there were Walter and Pieter, who completed the gang of 'robot' people and made our ALMA group complete. Later the group grew further with Takis, Johan, Wim and Peter, truly the first, and bravest, Orocos users. The third generation is meanwhile taking over the work: Ruben, Tine, Kasper and Diederik provided the fresh blood in the group of ever 'smarter' robots. And do ask me about the great moments with Dirk, Eric, René, Gudrun, Bram and the many moments that, thanks to the Social Event Team, brought colleagues together. I also thank Ronny and Jan for their tireless IT support and Paul for setting up the measurements for the experiments.

Orocos would not have been what it is now without the efforts of the dozen student workers who, during a scorching summer holiday, under the scourge of real-time, C++ or even CORBA, jointly lifted Orocos to a higher level. Thanks, guys. I will not soon forget the fun moments with 'uber' student Jan, when he helped me manage the crew at 35 degrees Celsius. An important piece of Orocos code was written by Dominique. In the two months you worked for us, you saved us a year.

I also received much support and feedback from the Belgian companies and the FMTC, and I want to thank them for the seriousness with which they approach my work and want to continue it.

Last but not least I want to thank Herman for his day-to-day and inexhaustible support of my work and for the opportunities I have been given. I want to thank my assessors, prof. De Schutter, prof. Berbers and prof. Van Gool, for following up on my research and for the valuable tips to make this a better text. I would like to thank as well prof. Siegwart of the École Polytechnique Fédérale de Lausanne, Switzerland, and prof. Nilsson of the Lund Institute of Technology, Sweden, for taking the time to serve on the jury of my PhD defense.

I want to conclude by thanking my parents and sisters for their support throughout all these years, even though we now live in far-away Antwerp. Wina's family has also become a second home to me; thank you for being there for us. I dedicate this work to the two women I love most, Wina and Ilke.

Peter Soetens.
Leuven, May 2006.
Abstract
The design of software frameworks in real-time robot and machine control
has focused in the past on robotics, where motion control is dominant, or on
automated machine control, where logic control is dominant. Designs with
the former in mind lead to communication frameworks where sending data
between components, controlling the data flow, is central. Designs with the
latter in mind lead to frameworks which aid in realising decision making for
controlling the logic execution flow. Both forms of control are required to
have robot or machine control applications running.
This work looks at the control application as a whole and identifies both
separation and coupling between data flow and logic execution flow. A single
software component model supports control tasks which are highly reactive,
such as in weaving machines or in automated machine tools, and serves equally
well in applications which are highly data driven, such as vision- or force-in-the-loop motion control applications.
Ideally, a software framework for control must offer inter-task communication
primitives which are inherently thread-safe and hard real-time. They may
not add indeterminism, possible deadlocks or race conditions to the control
application. Furthermore, observation of and interaction with the control
task’s activity must not disturb its time determinism. Classic real-time
operating systems, on which present day control applications are built, do
not offer all these guarantees.
This work contributes design patterns for synchronous and asynchronous
inter-task communication which uphold these requirements using lock-free
data exchange. The patterns guarantee ‘localised’ real-time properties
in a mixed real-time and non-real-time environment, allowing remote (non-
deterministic) access, hence distribution of the application’s real-time
components. The communication primitives are validated in this work and
outperform traditional lock-based approaches both on average and in the worst case.
This work also contributes a design pattern for structuring feedback control. The Control Kernel, which implements this pattern, can be applied to distributed control applications. For example, it is used to synchronise two robots with real-time feedback of a 3D vision system and a force sensor.
The results of this work lead to a new component model for real-time
machine and robot control applications. The component interface specifies
(real-time) data flow, (real-time) execution flow and the parameters of each
component. The component behaviour can be specified by real-time activities
and hierarchical state machines within components. This has been applied in
a camera-based on-line object tracker.
Beknopte Samenvatting
The design of software frameworks for real-time robot and machine control has in the past focused on robotics, such as motion control, or on automated machine control, such as logic control. Designs aimed at the first domain lead to frameworks in which data flow is realised between software components. Designs aimed at the second domain lead to frameworks in which the execution logic and decision making are central. Both forms of control are needed to run robot and machine control applications.

This work considers the control application as a whole and identifies both the separation and the coupling between the data flow and the logic execution flow. A software component model supports control tasks which react to events, such as in mechanical weaving looms or in automated machine tools. It serves equally well in applications which are strongly data oriented, such as feedback vision and force control in motion control applications.

Ideally, a software framework for control offers communication primitives which are inherently safe and have hard real-time properties. Observation of or interaction with the control activity may not disturb its time determinism. Classic real-time operating systems, on which control applications are traditionally built, do not offer all these guarantees.

This work contributes a design pattern for synchronous and asynchronous communication which meets these requirements, making use of lock-free communication. The patterns guarantee localised real-time properties in a mixed real-time environment which allows remote access. The communication primitives in this work are validated and perform better, both on average and in the worst case, than traditional lock-based approaches.

This work also contributes a design pattern for structuring feedback control applications. The Control Kernel, which implements this pattern, can be used for distributed control applications, for example to synchronise two robots based on real-time feedback from a 3D vision system and a force sensor.

The results of this work lead to a component model for real-time machine and robot control applications. The component interface specifies (real-time) data flows, (real-time) execution flows and the parameters of each component. This has been applied in an on-line camera-based object tracker.
Abbreviations
General abbreviations
ACE : Adaptive Communication Environment
API : Application Programming Interface
CAN : Controller Area Network
CAS : Compare And Swap
CNC : Computer Numerical Control
CORBA : Common Object Request Broker Architecture
CPU : Central Processing Unit
CPF : Component Property Format
CSP : Communicating Sequential Processes
DMS : Deadline Monotonic Scheduler
DOC : Distributed Object Computing
EDF : Earliest Deadline First
GNU : GNU's Not Unix
GPL : General Public License
HMI : Human Machine Interface
LGPL : Lesser General Public License
Linux : Linux Is Not Unix
MDA : Model Driven Architecture
OCL : Object Constraint Language
OMG : Object Management Group
OROCOS : Open Robot Control Software
OS : Operating System
PIP : Priority Inheritance Protocol
PCP : Priority Ceiling Protocol
RMS : Rate Monotonic Scheduler
ROOM : Real-time Object Oriented Modelling
RTAI : Real-Time Application Interface
RTOS : Real-Time Operating System
TAO : The ACE ORB
UML : Unified Modelling Language
VMEbus : VERSAmodule Eurocard bus
XML : eXtensible Markup Language
Glossary
action: A run-to-completion functional statement; it can be preempted by another action, but is always executed as a whole. For example, setting a parameter, calling a kinematics algorithm, storing data in a buffer or emitting an event.

activity: The execution of a combination of actions in a given state, which may be terminated before it is wholly executed.

architecture: In software, an application specific partitioning of responsibilities in software modules. Each module has a specified task within the system and only communicates with a specific set of other modules.

behaviour: In this work, the realisation of an interface by a software component. A behaviour is thus subject to the interface's contract. For example, a state machine can model the behaviour of an interface since it defines how it reacts to events in each of its states and which activity it performs in each state. Behaviour can thus be purely reactive (event based), active (activity based) or both.

collocation: In distributed computing, merging two components on a node in the same process, allowing more efficient communication.

component: A modular, deployable, and replaceable part of a system that encapsulates implementation and exposes a set of interfaces.

container: In component models, a container contains a component and is responsible for connecting it with services and other components.

contention: A condition that arises when two or more activities attempt to read or write shared data at the same time. Contention needs a form of mediation to serialise access such that the shared data is not corrupted.

control flow: The ordering of otherwise independent actions, placing them in loops, conditional statements etc. We will use the term “Execution Flow” as a synonym to avoid confusion with control theory semantics.

CORBA: A vendor independent middleware standard for distributing software components.

critical section: A section of program instructions which may not be executed by concurrent threads. For example, a section where data is read, modified and written.

data flow: The exchange of data in between actions, formed because of the dependency of Actions upon each other. For example, a PID control action requires sensor inputs and setpoints as data inputs, which are the results of the sensing Action and setpoint generation Action. In this work though, we use the term data flow more narrowly to denote the flow of data between tasks for control of the continuous domain.

design pattern: In computer science, a generalised, structural solution to commonly occurring problems in software design.

event: A detected change in a system which is broadcast to subscribers. Sometimes referred to as ‘Publish-Subscribe’ since the event raiser publishes the change which is then received by each subscriber to that event. Events commonly carry data, providing information about the change. The advantage of this mechanism is that it separates the detection of the change from the reaction to the change.

execution engine: An element which executes a control flow with a given policy, meaning that it invokes actions if certain conditions are met.

garbage collection: A technique to free allocated memory when it is no longer used. This is done by counting the references which point to an object and, when the count drops to zero, deleting the object.

infrastructure: In software, an application independent library designed towards providing software tools for building applications.

interface: In computer science, the definition or contract of the behaviour of a software component.

interference: In lock-free literature, the preemption of a thread which is accessing a lock-free shared object by another thread which accesses the object as well. As a result, when the preempted thread resumes, this interference invalidates its operation and causes it to retry the operation.

lock-free: A technique to share data between concurrent processes without using locks, such that no process is forced to block or wait when accessing or modifying the data.

middleware: In distributed computing, middleware is defined as the software layer that lies between the operating system and the applications on each node of the network, with the purpose of allowing distributed components to communicate transparently at the interface level.

mutex: Stands for mutual exclusion. An operating system primitive used to lock a critical program section exclusively for one thread. Any other thread which wishes to execute that section blocks until the owner of the mutex releases the mutex.

node: In distributed computing, a device on a network, having its own central processing unit and memory.

preemption: In computer science, the temporary interruption of the execution of one activity in favour of the execution of another activity. Reasons for preemption can be a higher quality of service of the preempting activity or the expiration of the time slice of the preempted activity. Such preemption policies are set by the scheduler. Preemption causes (additional) contention since multiple activities may want to access the same resources at the same time.

process: In computer science, a memory-contained execution of a sequence of instructions.

quality of service: In real-time computing, a property assigned to an activity which expresses a bound on the execution latency of that activity. It is often approximated with thread priority schemes.

real-time: A term to denote execution time determinism of an action or sequence of actions in response to an event. This means that the action always completes (and/or starts) within a bounded time interval, or in computer science terminology, the response time has a maximum latency. A real-time system is said to fail if this deadline from event to system response is not met; this is also known as hard real-time. The use of the term real-time in this work always denotes hard real-time. The term on-line is used to denote soft or non real-time.

scheduler: In computer science, an algorithm which contains the decision logic on when to execute which process or thread.

semaphore: An operating system primitive used for synchronisation (like a traffic light). A thread can wait (block its execution) on a semaphore until another thread signals the semaphore, upon which the first thread resumes execution.

thread: In computer science, a thread of execution is a sequence of instructions executed in parallel with other threads within a process.

UML: An OMG standardised language to (formally) describe software.
Table of contents
Voorwoord
Abstract
Beknopte Samenvatting
Abbreviations
Glossary
Table of contents

1 Introduction
   1.1 Designer Roles
   1.2 Relevance
      1.2.1 Real-Time Feedback Control
      1.2.2 Distribution
      1.2.3 Machine Control
      1.2.4 Requirements
   1.3 Contributions
   1.4 Orocos Walk-Through
      1.4.1 Hardware Interfacing
      1.4.2 Identifying the Application Template
      1.4.3 Building the Components
      1.4.4 Component Deployment
      1.4.5 Component Behaviour
      1.4.6 Running the Application
   1.5 Overview of the Thesis

2 Situation and Related Work
   2.1 Design Methodologies
      2.1.1 The Unified Modelling Language
      2.1.2 Real-Time Object Oriented Modelling
      2.1.3 Evaluation
   2.2 Time Determinism
      2.2.1 Definition of Real-Time
      2.2.2 Real-Time Operating System Requirements
      2.2.3 Real-Time Communication Requirements
      2.2.4 Evaluation
   2.3 Robot and Machine Control
      2.3.1 Block Diagram Editors
      2.3.2 Chimera
      2.3.3 ControlShell
      2.3.4 Genom
      2.3.5 Orccad
      2.3.6 Evaluation
   2.4 Component Distribution
      2.4.1 Communication Frameworks for Distribution
      2.4.2 Control Frameworks for Distribution
      2.4.3 Component Models
      2.4.4 Evaluation
   2.5 Formal Verification
      2.5.1 Communicating Sequential Processes
      2.5.2 Evaluation
   2.6 Summary

3 The Feedback Control Kernel
   3.1 The Design Pattern for Feedback Control
      3.1.1 Introduction
      3.1.2 Participants
      3.1.3 Structure
      3.1.4 Consequences
      3.1.5 Known Uses
      3.1.6 Implementation Example
      3.1.7 Related Design Patterns
   3.2 Design Pattern Motivation and Benefits
      3.2.1 Decoupled Design
      3.2.2 Support for Distribution
      3.2.3 Deterministic Causality
   3.3 Kernel Infrastructure
   3.4 A Feedback Control Kernel Application
      3.4.1 Task Description
      3.4.2 Hardware
      3.4.3 Position and Velocity Control
      3.4.4 Tool Control
      3.4.5 Kernel Infrastructure
      3.4.6 Kernel Interfacing
      3.4.7 Execution Flow
   3.5 Realising the Control Kernel
   3.6 Summary

4 Analysis and Design of Communication in Control Frameworks
   4.1 Control Tasks
      4.1.1 Analysis
      4.1.2 Task Interface
      4.1.3 Task Behaviour
   4.2 Decoupling Real-Time and non Real-Time Activities
      4.2.1 Analysis
      4.2.2 The Activity Design Pattern
      4.2.3 Validation
      4.2.4 Summary
   4.3 Decoupling Communication of Activities
   4.4 Data Flow Communication
      4.4.1 Analysis
      4.4.2 The Connector Design Pattern
      4.4.3 The Data Flow Interface
      4.4.4 Validation
   4.5 Execution Flow Communication
      4.5.1 Analysis
      4.5.2 The Synchronous-Asynchronous Message Design Pattern
      4.5.3 The Operation Interface
      4.5.4 Validation
   4.6 Synchronisation of Activities
      4.6.1 Analysis
      4.6.2 The Synchronous-Asynchronous Event Design Pattern
      4.6.3 The Event Service
      4.6.4 Validation
   4.7 Configuration and Deployment
      4.7.1 The Property Interface
      4.7.2 Name servers and Factories
   4.8 Task Browsing
   4.9 Summary

5 Real-Time Task Activities
   5.1 Real-Time Programs
      5.1.1 Program Model
      5.1.2 Program Status
      5.1.3 Program Execution Policy
      5.1.4 Program Scripting
      5.1.5 Program Statements
      5.1.6 Program Interfacing
      5.1.7 Summary
   5.2 Real-Time State Machines
      5.2.1 State Machine Model
      5.2.2 State Machine Status
      5.2.3 State Machine Execution Policy
      5.2.4 State Machine Scripting
      5.2.5 State Machine Interfacing
      5.2.6 Summary
   5.3 Task Execution Engine
   5.4 Application
   5.5 Summary

6 Conclusions
   6.1 Summary
   6.2 Evaluation and Contributions
      6.2.1 Design Methodology
      6.2.2 Real-time Determinism
      6.2.3 Robot and Machine Control Architecture
      6.2.4 Software Component Distribution
      6.2.5 Formal Verification
   6.3 Limitations and Future work

References

A Experimental Validation
   A.1 Hardware
   A.2 Software
      A.2.1 Time measurement
      A.2.2 Experiment Setup

Index

Curriculum Vitae

List of Publications and Dissemination

Nederlandse Samenvatting
   1 Inleiding
      1.1 Ontwerper-rollen
      1.2 Relevantie
      1.3 Bijdragen
   2 Situering
      2.1 Ontwerp Methodologieën
      2.2 Tijddeterminisme
      2.3 Robot- en Machinecontrole
      2.4 Componenten Distributie
      2.5 Formele Verificatie
   3 De Teruggekoppelde Controlekern
      3.1 Het Ontwerppatroon voor Teruggekoppelde Controle
      3.2 Ontwerppatroon Motivatie en Voordelen
      3.3 Conclusies
   4 Analyse en Ontwerp voor Communicatie in Controleraamwerken
      4.1 De Ware Tijd Controleactiviteit
      4.2 Datastroomcommunicatie
      4.3 Uitvoeringsstroomcommunicatie
      4.4 Synchronisatie van Activiteiten
      4.5 Configuratie en Oplevering
      4.6 Conclusies
   5 Ware Tijd Taakactiviteiten
      5.1 Programma's
      5.2 Toestandsmachines
      5.3 Conclusies
   6 Conclusies
Chapter 1
Introduction
This work investigates how real-time systems (machines) can be controlled by software such that both machine and software can perform multiple tasks in parallel and on different levels.
Figure 1.1: A hybrid robot — machine tool setup. Picture courtesy of LVD Inc.

Figure 1.1 shows a hybrid robot-machine-tool workshop where the robot assists in placing the workpiece for and during manipulation by the machine-tool. At the lowest level, both robot and machine-tool perform a positioning task. At a higher level, a movement path is planned for robot and machine-tool without collisions. Between movements, operations are performed on the workpiece, which require real-time synchronisation. At an even higher level, the task is to produce a series of these workpieces on a shop floor, and so on.
Many parallel (real-time) tasks may be present: measuring quality of work,
evaluating the status of the machines, collecting data for the logs, visualising
progress, etc. This work proposes solutions to design real-time controllers out
of (distributed) tasks such that the real-time properties of the system are not
violated during (or because of) communication between these tasks. Both the
required communication primitives between tasks and the realisation of task
activities are treated in this work.
This Chapter sheds light on how control software is decomposed in this
work by introducing the four designer roles for building control software. Next,
three example control applications are given to outline the problem domain
of this work. In addition, a small control application is designed in order to
provide a high level overview of what this work offers. It concludes with a
summary of the key contributions of this work.
1.1 Designer Roles

The design of control software is delegated to a number of designer “roles” or builders, which all work on a different topic, as shown in Figure 1.2.

Figure 1.2: A component based software framework.

The Framework Builders develop the core infrastructure or middleware for building software components and applications. The infrastructure only
provides application-independent functionality such as real-time support for
generic control applications, interfaces to the (Real-Time) Operating System;
distribution of components; support for events, command interpreting and
reporting; state machines; etc.
The Application Builders specify the architecture of the application:
they design a template which identifies the different components in a given
application, the nature of the connections (relations) between components and
the kinds of data exchanged between components. For example, an application
template can be as general as feedback control or humanoid robot control and
can be as specific as CNC woodcutting or steel wire manufacturing.
Not only is the application template responsible for defining which component roles (kinds) exist in a given application, it also connects the components according to their role in the system. The application template is
defined upon a common framework infrastructure, provided by the Framework
Builders. In state of the art control applications, the application template is
equivalent to defining software component interfaces and the inter-component
communication network topology.
Component Builders are responsible for the concrete contents of the
software components for which they have the expertise: a control law,
a set-point generator, a domain translator, etc. They implement the
interfaces specified for a particular architecture by the Application Builder.
The components use the common framework infrastructure to realise their
implementation and communication.
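To make this split concrete, the following is a minimal, hypothetical C++ sketch (the names are illustrative only and are not the actual Orocos interfaces): the Application Builder fixes the interface of a 'controller' role in the application template, and a Component Builder supplies a concrete control law behind it, while the other roles only depend on the interface.

struct JointState { double position; double velocity; };

// Specified by the Application Builder as part of the application template.
class ControllerInterface {
public:
    virtual ~ControllerInterface() = default;
    virtual double compute(const JointState& measured, double setpoint) = 0;
};

// Written by a Component Builder with the control expertise; System
// Builders and Advanced Users only rely on the interface above.
class ProportionalController : public ControllerInterface {
public:
    explicit ProportionalController(double gain) : gain_(gain) {}
    double compute(const JointState& measured, double setpoint) override {
        return gain_ * (setpoint - measured.position);  // a simple control law
    }
private:
    double gain_;
};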
The System Builders select the components for a specific application
template and connect them to the hardware interface. The application
template and components can not capture machine specific parameters such as
which device is present on which channel or which interpolator and controller
are best for a given device. The System Builder is responsible for delivering
this configuration. The System Builder has a central role in the whole process
of designing a controller. He identifies the application template to use, the
required components and the integration of software with hardware.
Advanced Users, such as machine operators, only focus on configuring
and using the functionality of an application, to solve particular tasks. They
interact through the interfaces defined by the Application Builder and with
the application components. The required knowledge of the Advanced User
depends on the interface the application builder has set forth. This may be
as high level as a graphical user interface or as low level as the individual
methods of each component’s interface.
The “end user” uses the final configured and ready to run application to
produce a part or to conduct an experiment.
The aims of splitting the design responsibilities are: a. allow specialisation
and limit the required (global) knowledge of each role: for example, a
framework builder only requires knowledge of real-time communication and
general control principles, while real-time knowledge is not required (in detail)
by the other roles; b. enable re-use of solutions: for example, once a specific
application template has been defined, it can be reused for similar control
problems, but with different component implementations; c. constrain each
role in its power to negatively influence the design: a component may
not communicate ‘outside’ the application architecture, hence, this reduces
unknown side effects.
This work targets these roles to a certain extent. First of all, it defines
a framework for distributed and real-time control in Chapter 4. Hence, it
is a handbook for framework builders on the analysis and possible solutions
of what a framework for robot and machine control must offer. Application
builders are targeted as well, since an application template for intelligent
feedback control is defined in Chapter 3, which can serve as an example for
setting up different control applications. Component builders can learn from
this work on how to realise the internal real-time activity of a component
with the infrastructure of Chapter 5. System integrators can also benefit from
reading this since it motivates in the ‘analysis’ sections why the framework
behaves in certain ways and which present day problems it tries to solve.
1.2 Relevance
In order to outline the scope of this work, the problem domain is illustrated by
three examples of robot and machine control applications and the relevance
of this work to these problems is given. Each example demonstrates potential
risks which the application is subject to. These risks are generally known by
the experienced control engineer and serve as a special point of attention and
should be minimised when designing the control application. It is the aim of
this work to minimise or eliminate by design these risks both in framework,
application template and component design. Furthermore, each example
application may be constrained by external factors. Special attention is given
to the influence of the used control framework, the influence of communication
latencies and the influence of closedness of the controller. Again, it is the aim
to minimise or eliminate these constraints in each category. The results of this evaluation are the requirements this work must meet. These requirements are
summarised at the end.
1.2.1 Real-Time Feedback Control

Figure 1.3: Camera guided workpiece manipulation. Picture courtesy of LVD Inc.
The first example continues from the introductory example which described
a feedback control setup. An additional feedback loop is added to measure
the quality of work ‘in the loop’ by using a camera vision system which looks
at the progress and result of the machine-tool’s operations (Figure 1.3). This
process consists of extracting the key features from the images, evaluating
the quality of work and adapting the control parameters when necessary.
Although this addition is hardly futuristic, the realisation of vision guided
manufacturing may benefit from beyond state of the art control software.
Not only are feature extraction and process estimation subjects of present day
research, scaling real-time control software to the extent that a multitude
of sensors and control algorithms collaborate in parallel is subject to risk of
failure or even constrained in classical setups. The advantage of adding or even
fusing sensors in order to realise a higher level of control, is counterweighted
by the higher risk of
• mis-interpretation: higher level data require more intelligent and
robust algorithms to decide on their contents. Compare the reading
of an axis position to feature extraction of a camera image.
This work reduces the negative effects (consequences) of misinterpretation by separating and specifying components for model
estimation and supervision in Chapter 3 and defining an event
communication mechanism in Section 4.6. This enables failure detection
in both a local algorithm and on a global scope. Events allow error
handling to be done at the appropriate level.
• indeterminism: an increase of parallel executing tasks increases the
uncertainty about real-time guarantees under emergency conditions.
Especially dead-locks, priority inversion and access contention are
sources of indeterminism, hence system failure.
This work reduces indeterminism during communication by using (and motivating) exclusively lock-free communication primitives; a minimal sketch of such a primitive follows this list. Although lock-free algorithms were presented earlier in the computer science literature, state of the art machine controllers do not use these opportunities to reduce indeterminism.
• shared responsibility: both in normal operation and under
emergencies, the safe control of a machine is harder to prove when the
responsibility is shared among multiple components.
Shared responsibility is caused by the lack of separation between the
data exchanged for realising feedback control, which is distributed
over components by nature, and decision logic, which is centralised in
one component by nature. In other words, a decision is made in a
component, while feedback control is caused by moving data between
components. This work separates these two forms of communication
and defines components for centralisation of decision making in feedback
control applications in Chapter 3.
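As a flavour of what 'lock-free data exchange' means here, the following is a minimal sketch (illustrative names only, not the Orocos primitives) of a single-writer, single-reader buffer in which neither task ever blocks on the other, so a high priority control task cannot be delayed by a lower priority observer.

#include <atomic>
#include <cstddef>
#include <optional>

// One task calls push(), one other task calls pop(); the atomic indices make
// the exchange thread-safe without any mutex, so neither side ever waits.
template <typename T, std::size_t N>
class SpscBuffer {
public:
    bool push(const T& value) {                 // writer side, never blocks
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                       // full: report instead of waiting
        slots_[head] = value;
        head_.store(next, std::memory_order_release);
        return true;
    }
    std::optional<T> pop() {                    // reader side, never blocks
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return std::nullopt;                // empty
        T value = slots_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return value;
    }
private:
    T slots_[N] {};
    std::atomic<std::size_t> head_{0};
    std::atomic<std::size_t> tail_{0};
};

The actual primitives of Chapter 4 are more elaborate than this sketch, but follow the same principle of never blocking the higher priority side.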
But even if the risks can be assessed, the possibility to extend or scale a
feedback controller can be constrained by
• the framework: a too specific controller design may prohibit easy integration of new tasks because it is already too application specific, assumes certain preconditions to hold true or provides a too
high level interface. Although providing a high level of abstraction
makes a framework more user friendly, in practice, this often constrains
the extendability of most state of the art control frameworks.
This work proposes an application independent framework for control
in Chapter 4, which allows creating any network of interconnected
components, at any abstraction level. Virtually any parameter(set) of
a component or behaviour can be configured on-line or through files.
The application template for feedback control in Chapter 3 is general
enough to encompass any kind of intelligent feedback control we have
encountered so far.
• communication latencies: the addition of tasks adds communication
which may increase worst case data access latency, jeopardising time
determinism of real-time tasks. State of the art control frameworks
hardly address the (sometimes drastically) reduced determinism when
the number of communications increases.
This work limits communication latencies by identifying the necessary
communication infrastructure for local and distributed communication
in Subsection 2.2.2 and applying it in the communication primitives of
Chapter 4. These primitives were chosen such that the latencies are
most deterministic for the highest priority tasks in the system.
• closedness: a closed machine controller does not allow integration or
addition of ‘self made’ real-time controllers.
This work is open to the largest possible extent. Not only are all the interfaces documented and motivated, the internals are free to any
builder as well.
For a controller, these risks and constraints are directly connected to all
devices. Additional risks and constraints arise when the controller is distributed
or when event driven machine control is required as well.
1.2.2 Distribution

Figure 1.4: A woodworking machine lane requires distributed control in order to allow parallel operations on a workpiece. Picture courtesy of Homag AG
To improve parts throughput and decrease the unfinished goods stock, it
is beneficial to connect machines such that they form lanes converting raw
materials into finished goods. Where a camera, a robot and a machine-tool
could be under supervision of a central control unit, cabling a machine lane
of possibly one hundred meters long to a central cabinet is not only costly,
but also requires a controller with real-time capacities beyond present day
hardware. An example of such a lane is commonly found in woodworking
shop-floors, Figure 1.4. State of the art woodworking machines take in raw
wood, saw it to the appropriate size, drill holes, polish, glue edges and deliver a
finished door or furniture part. These individual workstations have their own
controller to perform the operation which is synchronised with neighbouring
workstations and the conveyor belt which transports the workpiece. The
workstations use a communication network to synchronise and send or receive
messages. Depending on both bandwidth and real-time requirements, the
network ranges from CAN bus over EtherCAT and Ethernet to wireless
communication. Distributing controllers over a network has the advantage
to keep high bandwidth communication locally, such as position control,
and low bandwidth communication on the network, such as monitoring or
configuration. However, using a communication network imposes the following
risks:
• communication failure: due to loss of network connectivity or failure
at the other end, a message may get lost or return in failure. This
can only be detected by attempting the communication, thus any
communication over a network is weak and uncertain in worst case
scenarios. State of the art distributed control frameworks can detect
communication failure and report this consistently to the application.
This work reduces the negative effects of communication failure in real-time tasks by defining an ‘action’ primitive which wraps any communication in Chapter 4. It has real-time error semantics which can be intercepted by the framework or the application, as demonstrated in Chapter 5 (a minimal sketch of such an action follows this list).
• contention: messages or data packets received by the controller may
interfere with the local real-time control activity to the extent that
priority inversions or additional latencies degrade real-time performance.
This is especially the case in controllers where the data is temporarily
locked during read or write access. In that case, even mere observation
by a remote activity may disturb real-time performance of a local
activity [1]. State of the art control frameworks do not address this issue,
hence only partially take advantage of component distribution: they
profit from the increasing computing power but suffer from decreased
determinism during communication of distributed components.
The solutions for contention in this work favour the higher priority task over the lower priority task for deterministic data access. This allows scalable and remote data access without disturbing the local control activity [2], since the higher priority task is not disturbed by additional lower priority messages. It allows effective priority propagation in real-time communication protocols, such that a request at the receiver is executed at the quality of service of the sender.
[1] Quantum physics seems to have a foot in the door everywhere.
[2] Closing that door!
• non-interoperability: no two controllers are the same. The control
hardware, the operating system on which it executes and the way
in which it is interfaced may all vary.
Defining a distributed
communication protocol on top of this diversity might prove to be
very hard since all three factors will require a different implementation.
Furthermore, if such a common protocol can be found, the software
controller interfaces might be incompatible and be designed towards
a different communication methodology. For example a packet based
(asynchronous ‘send’) interface, such as found in the Controller Area Network (CAN) protocol, versus a message based interface (synchronous
‘call’), such as found in the CORBA standard.
Compatibility between distributed controllers has been solved by
using standardised communication libraries (middleware) which allow
transparent communication between components.
The common
interface of each component is defined in this work in Chapter 4 and
contains both synchronous and asynchronous communication semantics.
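As an illustration of such error semantics, the sketch below (hypothetical names, not the Orocos command interface) wraps a remote send in an action whose result is an explicit status, so the calling task can detect a lost message or a broken connection without blocking on the network.

#include <functional>
#include <utility>

enum class ActionStatus { Done, QueueFull, ConnectionLost };

// Wraps one communication step; execute() returns immediately with the
// status of the (non-blocking) send attempt instead of waiting for a reply.
class CommunicationAction {
public:
    using SendFunction = std::function<ActionStatus()>;
    explicit CommunicationAction(SendFunction send) : send_(std::move(send)) {}
    ActionStatus execute() { return send_(); }
private:
    SendFunction send_;
};

int main() {
    // A stand-in for a real network send; here it simulates a lost link.
    CommunicationAction set_speed([] { return ActionStatus::ConnectionLost; });
    if (set_speed.execute() != ActionStatus::Done) {
        // The framework or application can intercept this and, for example,
        // raise an event or switch to a safe state.
    }
}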
Apart from these risks, the realisation of distributed control is constrained by:
• framework: The programming languages common in control software
such as C or C++ do not provide mechanisms for distributed object
computing. A control framework realised in these languages is thus not
distributable by default. State of the art control frameworks rely on
middleware to distribute components, accepting the overhead in order
to provide distribution flexibility.
The control framework in this work provides, by using external
libraries, a means to communicate with remote objects and provides
communication mechanisms suitable for communication over a network.
Furthermore, it defines the interface and environment for distributed
control components in Chapter 4.
• communication latencies: The choice of a particular network will
result in a worst case latency, constraining the bandwidth of (feedback)
control over the network. State of the art control frameworks use
deterministic network protocols to bound communication latencies.
This work does not propose new solutions to distributed communication
latencies. As motivated in Subsection 2.2.2, it relies on the real-time
operating system to provide a real-time protocol stack and on real-time
middleware to mediate the communication.
• closedness: Some (industrial) control architectures only operate on
proprietary protocol networks. One can thus only distribute within one
product family of a control vendor. Even more, one needs to distribute
at the modularity of the control vendor’s products as well.
The modularity at which components can be distributed in this work is
limited by the size of the smallest control task, since tasks are the units
of distribution.
These risks and constraints are in addition to the previous example of
the camera enhanced machine-tool, since the addition of a distributed task
requires the addition of at least one communication activity on each node in
order to receive incoming messages.
1.2.3 Machine Control

Figure 1.5: A carpet weaving machine is an example of an event triggered machine control application requiring control of the logic execution flow. Picture courtesy of Van de Wiele.
Feedback control is complemented in a machine by logic control. Although
Programmable Logic Controllers (PLC) are widely adopted for logic control in
machines, there are scenarios in which current PLCs do not achieve sufficient
performance and are not configurable enough to control certain machines. A
carpet weaving machine (Figure 1.5) is an excellent example of a complex
mechanism with rotating and reciprocating parts which are all under control.
Without going into detail of how a weaving machine works, it can be said
that it is a machine which performs cyclic operations on a continuous flow
of yarns (a kind of thread). The cycle can be represented by a wheel which
represents the execution of one cyclic operation per turn. In a number of
discrete positions of the wheel, some action must be taken in the machine,
for example pulling half of the yarns up, which must be finished by the
time the wheel has rotated to the next position; for example, for shooting a
thread through the yarns. In this way, all actions must be taken immediately
when a given angle is reached by the wheel. The faster the wheel turns,
the faster the actions have to complete, and the faster the finished carpet
rolls out the machine. In state of the art weaving machines, the actions
are programmed in software such that the number of mechanical connections
within the machine can be reduced. However, this means that informing
all software components that a certain angle is reached requires a kind of
broadcast infrastructure. Events are the state of the art communication
mechanisms for realising such an infrastructure. They are used to model
communication between and behaviour within reactive systems. For example,
a given component of the weaving machine is ready to perform its action.
When the event reached(angle) occurs with angle equal to a specific value,
the component executes the action and when completed, is ready again. More
advanced behaviours than such a simple two-state state machine are quite common in such machines; a minimal sketch of this event mechanism follows the list below. However, a carelessly crafted event infrastructure leads to the following risks:
• un-scalability: Since the event publisher needs to notify all subscribed
components, notification will take longer with each added subscriber.
In some schemes the publisher executes the reaction of each subscriber,
which may be inappropriate in a real-time context.
This work presents a real-time event design which allows both
synchronous and asynchronous notification of subscribers in Section 4.6.
Synchronous notification allows immediate reaction to events from any
task, while asynchronous notification distributes the event handling to
the subscribers, which allows them to react to an event in their own
context, asynchronously from the publisher.
• subscriber deadlock: May a subscriber unsubscribe from an event
in reaction to that event? May it unsubscribe all subscribers, even if
some did and others didn’t receive the event? May a parallel component
unsubscribe any subscribers at any given time in a real-time context?
In more advanced scenarios, this may be required by the application,
but few published event systems allow such parallel behaviour. The
result is often a deadlock when a subscriber attempts to execute such
an operation.
The event design in this work allows subscription and unsubscription
at arbitrary moments from arbitrary places from real-time contexts and
without corrupting the event. This facilitates the automatic generation
of reactive, real-time state machines in Chapter 5.
• data corruption: Since an event can occur at any time, the reaction
of a subscriber may be inappropriate at that time, since it may interfere
with an ongoing activity and corrupt data. The classical approach is to
use locks to guard critical sections, which cause time indeterminism in
turn for the ongoing activity.
The asynchronous event design in Section 4.6 allows the subscribers to
determine the moment on which they handle their events. Synchronous
event handling can still benefit from the lock-free data containers used
in this work.
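Returning to the weaving machine example above, the following minimal sketch (hypothetical names, not the Orocos event API) shows the publish-subscribe idea with synchronous notification: a component subscribes to a reached(angle) event and runs a simple two-state behaviour, performing its action when its trigger angle is published and being ready again afterwards.

#include <functional>
#include <iostream>
#include <vector>

// A minimal event: subscribers register a handler, the publisher emits the
// event and every handler is called synchronously with the event data.
class AngleEvent {
public:
    using Handler = std::function<void(double)>;
    void subscribe(Handler h) { handlers_.push_back(std::move(h)); }
    void emit(double angle) {
        for (auto& h : handlers_) h(angle);
    }
private:
    std::vector<Handler> handlers_;
};

// A two-state subscriber: 'ready' until its trigger angle occurs, then it
// executes its action and becomes ready again for the next machine cycle.
class YarnLifter {
public:
    explicit YarnLifter(double trigger) : trigger_(trigger) {}
    void onReached(double angle) {
        if (ready_ && angle == trigger_) {
            ready_ = false;
            std::cout << "pulling yarns up at " << angle << " degrees\n";
            ready_ = true;
        }
    }
private:
    double trigger_;
    bool ready_ = true;
};

int main() {
    AngleEvent reached;
    YarnLifter lifter(90.0);
    reached.subscribe([&](double a) { lifter.onReached(a); });
    reached.emit(45.0);   // no reaction
    reached.emit(90.0);   // the lifter performs its action
}

Section 4.6 replaces the synchronous loop in emit() with a design in which subscribers can also be notified asynchronously, in their own task context.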
The realisation of an event driven application may be constrained by:
• the framework: A control framework must integrate events to a
large extent into its infrastructure. Both the interface to using events
and the behaviours that can be realised in reaction to events need
to be sufficiently advanced and still behave real-time. Especially the
combination of these two requirements is rare in existing designs.
This work defines events as a part of a task’s interface and a state
machine framework allows specifying which actions are taken and which activities are executed in response to events. It goes as far as extending
the state of the art state machine semantics for concurrent, real-time
behaviours.
• communication latencies: The synchronous notification of subscribers
adds additional latency for each subscriber to the event publisher. The
number of subscribers is thus constrained by the subscriber latencies,
especially when a subscriber is distributed. State of the art event
systems allow to scale using event processors which have the sole task
to redistribute events to subscribers.
This work allows large numbers of non-deterministic or distributed subscribers to be notified from within real-time tasks by using event processors which dispatch events asynchronously on behalf of the real-time task.
• closedness: In closed controllers, events are commonly emitted using packet based networks or callback functions. A closed
controller can not be extended to subscribe to new events in the system.
In this work, each task of a control application can subscribe to events of
other tasks and each task’s behaviour can be adapted even at run-time
to react to new events.
1.2.4 Requirements
This section summarises what this work should be able to do in the end. It
should
1. be Time Deterministic: The software framework must be time
deterministic such that it can be used for building real-time control
applications. Ideally, it does not add indeterminism to the application.
2. be Distributable: The framework must allow deployment of well
defined parts of the control application on different devices in a network.
3. be Designed for Machine Control: The framework must allow
the propagation of real-time events within control applications. The
application’s software components must be able to synchronise in real-time with these events, using hierarchical state machines.
4. be Designed for Intelligent Robot Control: The framework must
propose a component based solution for feedback control which allows
state of the art or newer robot control applications to be built.
5. be Component Based: The framework must propose a software
component model suitable for expressing real-time control applications.
The component interface is as such that a component can cope with
machine control and robot control.
6. be Free and Open: The framework must be free to use, modify and
distribute, as is commonplace in the scientific domain.
7. run on Common Off-The-Shelf hardware: The framework should not be designed towards deeply embedded single digit Megahertz microprocessors with only a few Kilobytes of RAM. The target hardware ranges from 100MHz to 1GHz processors with 1MB to 128MB RAM. It may not assume the presence of uncommon hardware or uncommon processor instructions.
These requirements will be used as guidelines for which related work is
investigated and which design decisions are made.
1.3 Contributions
This work contributes the following new insights and realisations in the design
and implementation of machine control applications:
• Design Pattern for Feedback Control: This work identifies
the application template and component stereotypes in feedback
control applications, defining a pattern for distributable, real-time and
intelligent control. More specifically, a solution for separating the processing of control data from logical decision making is presented by identifying generic data flow and execution flow components. The required interface and behaviour of such components contribute to the design of reusable components and control architectures. The infrastructure required for such feedback control applications is identified as well.
• Design Patterns for Machine Control: This work identifies,
situates, implements and validates design patterns for control which
define the common interface suitable for distributable machine and
robot control components. These patterns are unique in that they
are designed to especially favour higher priority tasks and require
only minimal support from the operating system. Furthermore, they
are inherently thread-safe, lock-free and hard real-time, hence do
not add indeterminism, possible deadlocks or race conditions to the
user’s application. This real-time communication interface offers an infrastructure for composing and distributing machine control applications. The structure of the application and the exposition of details are left to the application: the framework does not force an architecture upon the user.
• Definition of a Real-Time Behaviour Infrastructure: A
framework is proposed which allows the specification of real-time
behaviours and activities within components and integrates naturally
with the real-time communication interface.
• A Free Software Framework for Robot and Machine Control:
The academic research has been verified and published in the Open
Robot Control Software, Orocos. It has been applied to a wide range of machines (Waarsing, Nuttin, Van Brussel, and Corteville 2005; Waarsing, Nuttin, and Van Brussel 2003; Rutgeerts 2005; Ferretti, Gritti, Magnani, Rizzi, and Rocco 2005), continuously validating the design and implementation decisions of the framework presented in this work.
The first three contributions are listed in the order of the Chapters after the
literature survey: Chapter 3, Chapter 4 and Chapter 5 respectively. The last
contribution can be accessed by downloading the software, distributed with a
Free Software license, from the Internet (http://www.orocos.org).
Figure 1.6: A camera needs to track a toy car.
1.4 Orocos Walk-Through
As mentioned in the previous section, the results of this work were
implemented and verified in the Open Robot Control Software framework. In
order to provide a hint of the capabilities and restrictions of this framework,
a high-level walk-through is given on how to set up a small Orocos control application which tracks a red toy car with a steerable camera, as shown in Figure 1.6.
1.4.1 Hardware Interfacing
This topic is not covered in this thesis, since communication with hardware is solved by the operating system. The framework provides a number of device interfaces for common control hardware: digital and analog multi-channel I/O and encoders at the bare hardware level, and sensor, axis and force sensor interfaces at the logical (physically interpreted) level. These interfaces do not strictly need to be implemented, but implementing them allows Component Builders to share components which communicate with hardware, instead of writing a ‘device’ Component for each application.
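To make this concrete, the sketch below shows what a bare-hardware-level device interface could look like. The class name AnalogInputInterface and its members are hypothetical illustrations, not the actual Orocos device interface declarations.

class AnalogInputInterface {
public:
    virtual ~AnalogInputInterface() {}

    // Number of analog input channels offered by the card.
    virtual unsigned int nbOfChannels() const = 0;

    // Raw value of a channel, as delivered by the hardware.
    virtual int rawValue(unsigned int channel) const = 0;

    // Value of a channel converted to a physical unit (for example Volts).
    virtual double value(unsigned int channel) const = 0;
};

A Component Builder would implement such an interface once per I/O card, so that components using analog inputs need not depend on a specific card.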
1.4.2 Identifying the Application Template
The Orocos framework provides one application template which is for
(intelligent) feedback control. It defines six kinds of components, each
with their role and connections in the component network. If the target
application is not feedback control, application analysis will have to identify
the application structure and define the components required to execute the
application. For the camera example application, it can be decided that such a large template is inappropriate and that a ‘Camera’ and an ‘Image Recognition’ component are satisfactory to start with. Both components are to be connected such that image data flows from the camera component to the image recognition component and such that the latter can tell the former where to look, Figure 1.7.

Figure 1.7: The application template of the car tracking application

Figure 1.8: The component interfaces of the car tracking application
1.4.3 Building the Components
When the application template is specified, Component Builders can build
components towards the application template’s roles and interfaces.
The proposed component interfaces are shown in Figure 1.8. The Orocos component model offers five ways of interfacing a component. Starting with the Camera component, the component has resolution and refresh rate properties which serve as adjustable parameters of the component. Proceeding counter-clockwise, a moveCamera command is part of the command interface, which allows the camera to be aimed at a certain position p. Commands have broad semantics which allow tracking their execution progress. The method interface has a getPosition() method which queries the camera’s current position and returns the result immediately. The
event interface allows the camera component to notify other components when a new image is ready. Finally, the camera has a data port which makes the last image available to other components. The image recognition component reads the image from a data port as well. It can be configured to track a car with a specific colour.

Figure 1.9: The components are connected and configured during the deployment phase.
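The sketch below illustrates these five interface types on the Camera component in C++ form. The type and member names follow the example above, but the declarations themselves are hypothetical and are not the actual Orocos API.

#include <functional>

// Illustrative stand-in types; the framework provides richer versions.
struct Position { double pan, tilt; };
struct Image    { /* pixel data omitted */ };

template <typename T>   struct DataPort { T last; };                     // data flow port (sketch)
template <typename Sig> struct Event    { std::function<Sig> notify; };  // event (sketch)

class Camera {
public:
    // Properties: adjustable parameters of the component.
    int    resolution   = 640;
    double refresh_rate = 25.0;

    // Command interface: aim the camera at position p; in the framework a
    // command executes asynchronously and its progress can be tracked.
    void moveCamera(const Position& p) { target = p; }

    // Method interface: synchronous query which returns immediately.
    Position getPosition() const { return target; }

    // Event interface: notifies subscribers that a new image is ready.
    Event<void()> imageReady;

    // Data flow interface: makes the last image available to other components.
    DataPort<Image> imagePort;

private:
    Position target{0.0, 0.0};
};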
1.4.4 Component Deployment
When all components are built and an application template has been selected, the System Builder deploys (instantiates) the components in the application template, connecting each component to its surrounding components such that communication can take place. The components are also configured with parameters, which can be read from files, for example in the eXtensible Markup Language (XML) format.
The Orocos component model acknowledges two kinds of connections
between components: data flow and logic execution flow connections,
Figure 1.9. Connectors are responsible for setting up the image data flow
between the data ports and impose a directed flow of data. The logic execution
flow consists of the move commands, the image ready events and the position
readings.
1.4.5 Component Behaviour
The behaviour of a component is executed when the component is started.
This behaviour may be reactive, such that it only takes action on the
occurrence of an event; or the behaviour is active, where it periodically
executes an activity. The result of executing a behaviour is that data flow and
execution flow are taking place. The behaviour of the camera component is to
fetch images at the given refresh rate, writing the image to the data port and
emitting the image ready event. The image recognition component reacts
to this event by reading the image from its data port, calculating the new
location of the car and instructing the camera to point into that direction
with a move command. This command triggers another behaviour in the
camera component which accepts the command and steers the electric motors
into that direction.
Reactive component behaviour is realised in Orocos by (hierarchical) state
machines which react to events, such as the image ready event. The active
behaviour is realised by the ‘program’ primitive which may be executed
periodically. For example, the camera program fetches an image, writes it to the data port and emits the image ready event.
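The following sketch contrasts the two behaviours in code, reusing the illustrative Camera, Image and Position types from the sketch in Section 1.4.3; grabFrame and locateRedCar are hypothetical stand-ins for the hardware access and the vision algorithm.

// Stubs for the parts that are not the point of this sketch.
static Image    grabFrame(Camera&)          { return Image{}; }
static Position locateRedCar(const Image&)  { return Position{0.0, 0.0}; }

// Active behaviour: the camera 'program', executed periodically at the
// configured refresh rate by the component's execution engine.
void cameraProgram(Camera& cam) {
    Image img = grabFrame(cam);         // fetch an image from the hardware
    cam.imagePort.last = img;           // write the image to the data port
    if (cam.imageReady.notify)
        cam.imageReady.notify();        // emit the 'image ready' event
}

// Reactive behaviour: executed by the image recognition component when
// the 'image ready' event is received.
void onImageReady(Camera& cam, const Image& img) {
    Position car = locateRedCar(img);   // compute the new location of the car
    cam.moveCamera(car);                // command the camera to point there
}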
1.4.6 Running the Application
When all components are deployed, the application can be interfaced. Orocos
provides an application independent front-end which can connect to any
application component and provides a complete overview of the internals and
externals of the component. Not only can the interface of each component be explored and used, the front-end also provides complete control over, and overview of, its state machines and running activities. This advanced form of interfacing gives the Component and System Builders an early overview of what is going on
inside the control application, even before the human-machine interface is
built.
1.5 Overview of the Thesis
We conclude the introduction with a bird’s eye overview of the following
Chapters. Chapter 2 situates this work in the large application domain
of control. It also motivates why certain approaches to robot or machine
control are left aside and why others were adopted or extended. The real-time approach of this work may be of particular interest, as some
traditional approaches towards contention are not followed. Chapter 3 defines
an application template for feedback control as a design pattern. It follows the
recent approaches towards component design and deployment and identifies
the infrastructure required for components in machine and robot control
applications. Chapter 4 generalises these components into ‘control tasks’
and goes into detail on how a real-time control activity can be defined. It
continues with identifying two fundamentally different kinds of activity in
a control application, and the importance of keeping them separated. On the one hand, the data flow activity uses ports to communicate packets of data between activities; on the other hand, the control flow activity takes decisions,
reacts to events and manipulates other tasks. Both can occur at any level and
use distinct communication protocols. This work defines real-time protocols
to realise both flows. Finally, Chapter 5 takes a look at how real-time
activities and behaviour can be realised and even created or adapted on-line
in control tasks. The last Chapter of this work summarises the conclusions
and limitations of this work as well as some opportunities for future work.
Chapter 2

Situation and Related Work
As introduced in the previous Chapter, this thesis covers a broad spectrum of domains rather than a narrow, in-depth problem domain. This Chapter situates and motivates each facet of the thesis (time determinism, distribution, design, control architecture and verification, see Figure 2.1) and relates it to other state of the art solutions. The design decisions taken in each facet will show that the problems we solve are at the level of what we call
the ‘localised real-time framework’, which is in between the deeply embedded
controller (like a mobile phone) and the massively distributed controller (like
a telecom network).
2.1 Design Methodologies
This work requires a software design methodology for expressing its solutions.
The ideal design methodology for real-time software allows a user to specify
the design graphically, validates its logical correctness and time properties
and generates instantly deployable code. No such framework exists yet.
However, formal verification and code generation have been investigated for several decades, and the capabilities of design tools are growing beyond
visualisation purposes. Two design frameworks are investigated for their
suitability for designing real-time software.
2.1.1 The Unified Modelling Language
The Unified Modelling Language (UML) (OMG g) was born as a tool for
software architects to communicate software analysis and design artifacts.
Figure 2.1: Many forces pull this thesis in different directions. (The direction of
the arrows does not imply orthogonality or opposition.)
It is standardised by the Object Management Group (OMG) and is in the run-up to a new specification, UML 2.0. UML is the de facto (but vendor-independent and language-neutral) standard for modelling software and covers all stages from initial design (idea) to implementation (code).
Initially, the UML described only the structural properties of software,
such as composition, inheritance, interfaces and any kind of relations. It did
not model constraints or quality of service which are fundamental properties
of real-time systems. Hence a common misconception is that UML has no
future in real-time design. However, even non-real-time software requires more specification than UML can offer. Therefore, almost a decade ago, the Object Constraint Language (OCL) (Warmer and Kleppe 2003) was invented as an add-on to UML to specify constraints on associations and interactions between objects. As it evolved, the OCL 2.0 specification allows formulating any expression (assignments, loops, . . . ) in addition to constraints and is now part of UML 2.0. Furthermore, UML Profiles have been developed (OMG e; SysML) to apply UML and OCL to real-time system design and hence allow real-time software to be specified completely. Consequently it offers logical and time
validation opportunities. The synthesis of this design methodology is termed
Model Driven Architecture (MDA) and is causing a paradigm shift in software
development, much like the move from the assembly language to higher level
procedural languages such as C (Kleppe, Warmer, and Bast 2003).
By identifying abstract design patterns for real-time systems, this work
contributes to a future application of MDA to real-time systems. The
terminology used conforms to the UML semantics, unless stated otherwise.
We summarise some key terms from UML. See (OMG f) for elaborated
definitions.
• Action: Models a run-to-completion functional statement; it can be preempted by another action, but is always executed as a whole. For
example, setting a parameter, calling a kinematics algorithm, storing
data in a buffer, emitting an event,. . .
• Activity: Models the execution of a combination of actions which may be terminated before it is wholly executed. For example, a periodic control
loop, the execution of a state machine, the execution of a program.
• Operation: Models a specification of an invocable behaviour. In this
work, Operations are realised with Methods and Commands which
belong to Tasks. Example operations a Task may have are start(),
move(args) or getParameter().
• Deployment: Creation, connection and configuration of a Component in
its environment.
• Component: Models a modular, deployable, and replaceable part of a
system that encapsulates implementation and exposes a set of interfaces.
• Event: A detected change in a system which is broadcast to subscribers.
• Task : Models a Class or Component, which has the interface and
behaviour as presented in this work. It is thus a large grain object
which executes activities and implements a task-specific interface. It has
a (persistent) state and clear lifetime semantics. For example, a Task
may be a controller, a data logger or a stateful kinematics component.
• Data Flow : The exchange of data between Actions, resulting from the
dependency of Actions upon each other. For example, a PID control
action requires sensor inputs and setpoints as data inputs, which are
the results of the sensing Action and setpoint generation Action. In this
work though, we use the term Data Flow more narrowly to denote the
flow of data between tasks for control of the continuous domain.
• Control Flow : The ordering of otherwise independent actions, placing
them in loops, conditional statements etc. We will use the term
“Execution Flow” as a synonym to avoid confusion with control theory
semantics.
• Execution Engine: An Activity which executes a Control Flow with a
given policy; that is, it invokes Actions if certain conditions are met.
Furthermore, we use these artifacts which model the interaction and structure
of software:
• class diagram: Presents a view on a set of classes in the system with
their operations, attributes and relations to other classes. It models
associations such as aggregation, inheritance and composition.
• sequence diagram: Presents a view on the functional interactions of a
set of classes.
• activity diagram: Presents a view on the execution flow of an activity,
alternating actions with states. It can be executed by an Execution
Engine.
• state chart diagram: Presents a view on states and transitions within
a software component. State charts thus model finite state machines.
They are used to model the behaviour of software, which is discussed in
Section 4.1.
Understanding these terms and UML sequence, class, activity and state chart
diagrams is required to fully capture the results and consequences of this work.
2.1.2 Real-Time Object Oriented Modelling
Real-time Object Oriented Modelling (ROOM) is a methodology for designing
and implementing distributed real-time software (Selic, Gullekson, McGee,
and EngelBerg 1992). ROOM is discussed because, although it shares some of the visions of this thesis, it is also very different in the realisation of
the design. The authors claim that UML is too general and abstract to be
of use in designing real-time systems since these systems are more ’difficult’
than other systems because of their concurrent and distributed nature. The
ROOM methodology claims to be (real-time) “domain specific and intuitive”
and to cover a “continuous” and “iterative” development process. This
is realised by a “conceptual framework” which introduces three “modelling
dimensions” and three “abstraction levels” which partition the analysis and
design space. ROOM was proposed in the early nineties by Selic and was commercially supported by ObjecTime until the company was bought by Rational Software in 2000, after which its tools were incorporated into Rational’s development tools.
Modelling Dimensions
ROOM identifies three modelling dimensions: “Structure” (realised by communication between components or classes), “Behaviour” (realised by using state machines within components) and “Inheritance” (realising reuse of software; “ancestry” would have been a more appropriate term, since inheritance is the design primitive to realise ancestry). It is claimed that these three dimensions occur at any abstraction
level of the system and hence, during analysis and design, the system engineer
should provide a system specification from these three views.
Abstraction Levels
A system is modelled on three levels of detail: “System”, “Concurrency”
and “Detail”. The modelling dimensions above occur on these three levels.
The system level is for composing distributed components; at the concurrency level, a component is composed of “actors” which have communication “ports” with an associated “message” based protocol. Compatible ports may be
connected through “bindings” which are the sole means of sending messages
between actors. At the detail level, the algorithms are implemented without
concern for concurrency. Since all communication is mediated through ports, a
Monitor concurrency software pattern can be applied to serialise all concurrent
access.
ROOMCharts and Events
The behaviour of an actor is implemented by a “ROOMChart”, a slightly revised version of State charts which allows an easier implementation of the modelled state machine. An event is defined as the arrival of a message at a
port. Events have a priority (assigned by the sender), a unique type (called
“signal”) and carry data relevant to the message. When an event occurs,
a specified function may be executed, just like an action in State charts.
Messages may be sent synchronously, meaning the sender waits for a response
message, and thus waits for the processing of the event, or asynchronously, where no response is waited for. Since each message is thus processed by the
ROOMChart, the behaviour of an actor lies completely in the implementation
of the ROOMChart and hence, can be visualised and modified easily in a
development tool.
Evaluation
The Real-Time Object Oriented Methodology has contributed some valuable
solutions for modelling complex behaviour, composing systems and reusing
components. The framework for this methodology is proprietary. This work
however acknowledges that the following ROOM design principles make it a
sound framework for distributed real-time control:
• The port-message based approach is key to allow distribution of actors
since all communication is message based, which is a necessary condition
for distributing components.
• Since events are locally generated upon message arrival, reaction to
events can be real-time.
• The actor’s behaviour is fully modelled using a ROOMChart, allowing
UML-like design in graphical tools to be directly transformed into
working implementations by use of code generation.
• The assignment of priorities to messages enables (but does not
guarantee) a quality of service property of each communication.
• It provides a means to communicate synchronously or asynchronously
between components.
The disadvantages of ROOM are:
• As long as it is a non-standardized dialect of UML, it will not profit from
general modelling tools and solutions defined in the MDA framework.
• It defines its own (hidden) middleware and will possibly not profit from
standards in component based software design, hence limiting its interoperability with other tools and frameworks.
• It does not allow run-time reconfiguration of ROOMCharts: they are
hard-coded.
• The ROOMCharts can only execute run-to-completion functions, thus
do not execute activities.
• The synchronous-asynchronous communication model is somewhat
heavy since both sender and receiver need to implement a message
specific protocol to support synchronous (“call”) communication,
actually building it on top of asynchronous (“send”) communication.
• It is not clear how sensitive the framework is to the effects of locking,
such as deadlocks and priority inversions.
The disadvantages of ROOM are mostly related to its closedness in the
software modelling field. Its advantages are reused in this thesis, more
specifically:
• The use of ports as boundaries for distribution of components with a
message based communication protocol.
• Real-time event dispatching.
• The use of State charts for modelling behaviour of components.
• The priority of messages is equal to the priority of the sender in case of
methods or the receiver in case of commands.
• Synchronous and Asynchronous communication is realised with methods
and commands respectively.
While we provide these enhancements over ROOM:
• Our synchronous-asynchronous communication model is founded on
synchronous messaging, allowing more natural procedural use of
messages using “send” and “call” messages.
• State charts and activities are run-time (un-)loadable.
• The framework has deliberately not implemented the middleware, since
more than enough design and development activities are going on
in that domain, e.g., Common Object Request Broker Architecture
(CORBA), Miro (Utz, Sablatnog, Enderle, and G 2002), Orca (Brooks,
Kaupp, Makarenko, A., and Williams 2005),. . . Our framework can, in
principle, use any of these middleware implementations, because they
all implement the fundamental communication patterns.
• It allows a higher degree of concurrency by using lock-free communication
primitives.
• It is far more modular due to its flat structure.
2.1.3 Evaluation
The design of the control framework which this thesis presents is done
in UML, although not all UML artifacts were used. The specification
remains incomplete to a certain degree since the OCL is not used to specify
all constraints and natural language is used instead. A real-time design
methodology such as ROOM might offer some advantages: it presents in
essence design patterns which can also be expressed using UML and OCL.
2.2 Time Determinism
This work proposes a software design for real-time control. In this section
the requirements for real-time software are investigated, as well as the requirements this work imposes on the operating system. This section also details how this
work realises both local and distributed real-time communication.
2.2.1 Definition of Real-Time
Real-time systems and applications set forth a contract which does not only
contain logical constraints, but also time constraints. These constraints are
no more complex than:
A given activity must be able to react within a specified bounded
time (range) to a given event.
The event can be the expiration of time or any detectable change in the system.
A hard real-time system is a system in which the failure of an activity to meet
this deadline results in a system failure or catastrophe. In a soft real-time
system a late reaction only leads to degraded performance. In fact, real-time is
a continuous spectrum relating the desired reaction time to the costs involved
by not meeting that reaction time.
The reaction to an emergency stop of a machine, for example, has hard
real-time constraints. A soft real-time activity is also often called an ‘on-line’
activity, for example refreshing the process parameters on a display. We use
the term ‘real-time’ to denote hard real-time activities and the term ‘on-line’
for soft real-time activities.
Although logical program verification has been around for several decades
(for example, (Brookes, Hoare, and Roscoe 1984)), formally verifying that
a system reacts timely to a given event has proven to be far more difficult.
The reason is that it is as yet far too difficult to obtain exact models of the underlying hardware, operating system and its drivers. In the absence of
methods to formally verify the real-time constraints of programs at compile
time, methods have been developed to try to satisfy them at run time. A
scheduler is an algorithm that tries to satisfy the (real-)time constraints
of all activities in a running system. We will not repeat all scheduler
related literature in this text; (Stallings 1998; Stevens 1992) are a good
introduction to schedulers. The design and implementation of real-time
scheduling algorithms has traditionally been done in real-time operating
systems. Since our framework needs to execute activities within a specified
bounded time when a given event occurs, it requires a real-time scheduler
and thus a real-time operating system. Since activities need to share data or
communicate data, this communication needs to be bounded in time as well.
2.2.2 Real-Time Operating System Requirements
This section pins down the requirements of this work on the real-time
operating system.
A Real-Time Operating System (RTOS) builds a thread (or process)
abstraction layer on top of hardware interrupts by providing a priority based
thread scheduler with real-time communication primitives which guarantee a
limited worst case servicing time of threads.
Our work merely relies on the presence of such an abstraction layer given
that it provides the following primitives for the scheduling of non-distributed
activities (see below for the motivation):
1. One of the Rate Monotonic (RM), Deadline Monotonic (DM) or Earliest Deadline First (EDF) real-time schedulers, for periodic and non-periodic
threads.
2. A signal and wait or wait conditionally primitive (semaphore).
The first item expresses that the RTOS scheduler must at least be able to
schedule the real-time threads using priority assignments proportional to the
rate (i.e. execution frequency) of a periodic thread. This RM scheduler
requires the processor load to decrease when the number of periodic threads increases, but is very easy to implement and, hence, is present in any real-time operating system. The DM scheduler is a variant of the RM scheduler where a relatively shorter deadline leads to a higher static priority. An EDF scheduler, which schedules the thread with the earliest deadline first, can schedule much higher thread loads on a processor, but is harder to implement because it adapts thread priorities continuously and requires an accurate estimate of each thread’s deadline. Either way, all three are suitable to schedule the
threads in this framework.
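To make this load restriction concrete, a classical sufficient (though not necessary) schedulability condition for RM scheduling is the Liu and Layland utilisation bound for n periodic threads with worst case execution times C_i and periods T_i:

\[
\sum_{i=1}^{n} \frac{C_i}{T_i} \;\leq\; n \left( 2^{1/n} - 1 \right)
\]

The bound equals 1 for a single thread and decreases towards ln 2 ≈ 0.69 as n grows, which is exactly the decreasing allowable processor load mentioned above.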
The semaphore is exclusively used to start and stop threads by command,
and not as a side-effect. The latter happens in classical lock-based resource
management schemes, where a thread stops when it enters a locked critical
section until the lock is released. More specifically, to implement the control
designs in this framework, the real-time operating system is not required to
provide a Priority Inheritance Protocol (PIP) or Priority Ceiling Protocol
(PCP), nor any mutual exclusion (mutex) primitive. This is to be interpreted
as the requirements of the framework and not as the requirements of any
arbitrary application. Certain application designs or device driver interactions
will require more support of the operating system than the framework.
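As an illustration of this restricted semaphore usage, the sketch below (POSIX notation, not framework code) starts and stops an activity's thread purely by command; the semaphore never guards shared data.

#include <semaphore.h>
#include <atomic>

static sem_t             start_signal;            // sem_init(&start_signal, 0, 0) before use
static std::atomic<bool> stop_requested{false};

static void* activity(void*) {
    sem_wait(&start_signal);                      // sleep until start() is commanded
    while (!stop_requested.load()) {
        // ... one control cycle, released periodically by the RTOS scheduler ...
    }
    return nullptr;                               // the thread ends after stop()
}

// Commands issued by another task:
void start() { sem_post(&start_signal); }
void stop()  { stop_requested.store(true); }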
Depending on the locality of communication between threads, an
additional real-time operating system primitive is required to transfer or
modify data among threads. This requirement is given in the next section.
Figure 2.2: Diagram showing distributed and local real-time communication. The arrows indicate the flow of messages.
2.2.3 Real-Time Communication Requirements
We distinguish between two forms of real-time communication: local and
distributed, Figure 2.2. Imagine a control application with two threads of
execution (activities). If both run on the same node and thus share memory,
only a primitive which allows one activity to inform the other where the new
or modified data is located is required. The implementation of this primitive
needs only to rely on a set of processor instructions to modify local memory.
When both activities are distributed on two nodes however, they cannot share
memory and hence a network communication protocol is required to transfer
the data from one node to the other. This protocol needs to be implemented
by the real-time operating system.
Local Real-Time Communication
The requirements for realising local real-time communication are given in this
section.
A localised real-time framework guarantees time determinism for the
activities on a node. That is, ideally, a real-time activity is not hindered by communicating with non-real-time activities and is not disturbed by the execution of these activities. A localised real-time framework thus allows on-line activities to interact with the real-time activities without side effects on the time determinism of the real-time activities. This property is a necessary condition to allow (distributed) on-line clients, such as human-machine interfaces or data loggers, to communicate with the local real-time
activities.
Although real-time operating systems provide primitives for communicating
data between local threads, no such primitive is required by the framework.
The implementation of all the communication primitives of the framework is
done outside of the real-time operating system (thus without the knowledge of
priorities, deadlines etc.), with the help of one specific processor instruction,
which forms the requirement for local communication in the framework:
• a universal lock-free single word exchange Compare And Swap (CAS)
primitive.
This requirement deserves an explanation. Lock-free shared objects are used
for inter-process communication without the use of critical sections, thus
without locks. This is done by writing the read-modify-write cycle with
retry-loops: if another thread modified the data during a preemption, an interference occurred and the modification is retried. This seems at first sight a source of starvation, as one thread is retrying over and over again, while
other threads keep modifying the data. However, as will be shown below, this
approach guarantees that the highest priority activity will have to spend the
least amount of time retrying, which is the desired result of the primitive.
Lock-free communication only guarantees system-wide progress, meaning that at least one thread leaves the loop successfully. A wait-free shared object guarantees a stronger form of lock-freedom that precludes all waiting dependencies among threads, meaning that every thread makes progress. This is
commonly done by setting up a helper scheme, where the preempting thread
helps all other threads to modify the data. Herlihy introduced (Herlihy 1991) a
formalism which classifies how well a given processor instruction is suitable to
implement such shared objects. Compare and Swap (CAS), an instruction
which compares the content of memory with an expected value and if it
matches replaces it with a new value in one atomic instruction, was proven
to be universal, meaning that it could solve concurrent communication with
any number of threads on any number of processors in a node. It is used to
track pointers to objects, and, if an object is changed, the pointer is changed
to a new object containing the change, and the old object is discarded. The
semantics of the CAS algorithm are shown in Listing 2.1. One can read
the algorithm as: if addr contains a pointer to an object which is equal to old_val, the object did not change and the pointer can safely be set to the object at new_val. Otherwise, do not update the pointer at addr. Observing this algorithm, it can be concluded that it cannot detect whether the value at addr changed intermediately from old_val to another value and then back to old_val, thus allowing the swap although the memory location has been changed. This is well known as the ABA problem (Prakash, Lee, and Johnson 1994): the value at the address changed from A to B and back to A.
Listing 2.1: Semantics of single word Compare-and-Swap

boolean CAS(value *addr, value old_val, value new_val) {
  atomically {
    if (*addr == old_val) {
      *addr = new_val;
      return true;
    } else
      return false;
  }
}
Garbage collection is a means to cope with this problem (for a discussion, see Subsection 4.2.1), since it does not release an object for reuse until all references to it are released. However, the ABA problem is an inherent property of CAS, since CAS is used semantically for more than it can provide.
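To illustrate the retry-loop pattern described above, the sketch below rewrites it in modern C++ notation, with std::atomic standing in for the raw CAS instruction; the State type and updateSetpoint function are invented for the example.

#include <atomic>

struct State { double setpoint; };

std::atomic<State*> shared{new State{0.0}};

// Lock-free read-modify-write: prepare a new object and retry the pointer
// swap until no interfering modification is detected.
void updateSetpoint(double value) {
    State* next     = new State{value};
    State* expected = shared.load();
    while (!shared.compare_exchange_weak(expected, next)) {
        // 'expected' was refreshed with the current pointer; simply retry.
    }
    // The object previously pointed to must not be reused or freed while a
    // reader may still hold it: this is the reclamation/ABA issue discussed
    // in the text (cf. garbage collection).
}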
The findings of Herlihy were not sufficient to make them, and thus CAS,
applicable to real-time systems. The biggest disadvantage was that wait-free algorithms induced a problem similar to priority inversions in lock-based
algorithms. Because a preempting thread needs to help other threads to
progress out of the retry loop, they are executing the operations of lower
priority threads. This can lead to the situation where a medium priority
thread needs to wait because a high priority thread is helping a lower priority
thread. Fortunately, it was proven (Anderson, Ramamurthy, and Jeffay
1995) later on that lock-free algorithms can be used to implement bounded
time multi-threaded data access, given a Rate Monotonic Scheduler (RMS), Deadline Monotonic Scheduler (DMS) or Earliest Deadline First (EDF)
scheduler on uni-processor systems. The theorems of the above citation
are particularly interesting because they do not require knowledge of which threads share which objects. Even more, it was validated that both average and worst case servicing times were significantly better with the lock-free approach than in the lock-based case. Since most processors designed for real-time systems (for example, the Motorola 68000, the Intel Pentium and the IBM PowerPC) provide instructions which can be used to construct a CAS
primitive, the findings of their work could be implemented immediately in
existing systems.
Although the lock-free communication methodology has been around for
a while, new operating systems (for example (Bollella, Brogsol, Dibble,
Furr, Gosling, Hardin, and Turnbull 2000) and Linux) are far too often
still designed around mutual exclusion primitives. Lock-based operating
systems are improved or replaced with a lock-free kernel architecture, with
Listing 2.2: Semantics of double word Compare-and-Swap

boolean DCAS(value *addr1, value *addr2,
             value old_val1, value old_val2,
             value new_val1, value new_val2) {
  atomically {
    if ((*addr1 == old_val1) && (*addr2 == old_val2)) {
      *addr1 = new_val1;
      *addr2 = new_val2;
      return true;
    } else
      return false;
  }
}
the purpose to improve determinism within the OS kernel. For example,
Synthesis V.1 (Massalin and Pu 1992) presents a lock-free kernel with a lock-based application programming interface, ironically moving the disadvantages
of locking into the hands of its users. On the other hand, the Cache
Kernel (Greenwald and Cheriton 1996) provides complex lock-free shared
objects for application code as well. The disadvantage of these two systems
is however that they are not portable to common architectures and use
Motorola 68K specific instructions to perform a double word compare-and-swap (DCAS), Listing 2.2. Although DCAS allows more efficient and
simple lock-free algorithms, it is still considered not powerful enough (‘no
silver bullet’) to provide simple and correct algorithms (Doherty, Detlefs,
Grove, Flood, Luchangco, Martin, Moir, Shavit, and Steele 2004). The
authors propose that some algorithms require a three word CAS solution
and indicate that ultimately an N-word CAS (NCAS) may be required for even more complex algorithms. The paper ends with the afterthought that, ultimately, transactional memory (Herlihy and Moss 1993; Rajwar and Goodman 2002), which allows a write to succeed only if the memory did not change since it was read by the same thread, seems the only long term candidate for realising lock-free algorithms. However, transactional memory does not exist yet in
current hardware.
Another CAS based operating system is the Fiasco micro kernel (Hohmuth
and Härtig 2001). It can run on single word CAS architectures (although it disables interrupts to simulate DCAS) such as the Intel x86 architecture.
Despite the rise of lock-free operating systems, lock-based data protection is still common practice and is a dangerous tool in the hands of the inexperienced. Deadlocks, priority inversions, data corruption and race
conditions are common communication failures which may occur in real-time
control applications crafted at this level.
It should be clear now that the framework will only rely on lock-free
communication for all inter-thread data exchange and relies on previous
work (Huang, Pillai, and Shin 2002; Michael and Scott 1996) for implementing
lists (Harris 2001), single ended queues (Michael and Scott 1996; Sundell
and Tsigas 2004) and shared objects where CAS is provided or simulated.
Not all approaches are efficient (such as (Valois 1995)) or usable in real-time systems. In cases where critical sections cannot be avoided with lock-free shared objects, for example when multiple objects need to be adapted
‘atomically’, a more involved solution is required. Chapter 4 is dedicated to
solving that problem without reverting to lock-based solutions.
The dust on lock-free algorithms has far from settled. It is argued
by (Herlihy, Luchangco, and Moir 2003) that most lock-free algorithms (such
as (Michael 2004)) suffer from too much complexity and interference because
of the lock-free thread progress requirement. For example, garbage collection
schemes to access a lock-free shared object cause interference (even if another
part of the shared object is accessed) and, hence, cripple non-interfering parallel operations. For example, in a lock-free double ended queue, an operation at one end of the queue invalidates any operation at the other end of the queue. Therefore, a new criterion, obstruction-freedom, is defined: it is less strong than wait-freedom and lock-freedom but promises sufficiently simple and
practical algorithms, such that better performance can be achieved. It is
less strong because it does not require any progress of any thread in case
of interference. That is, if any interference is detected, all threads retry the
operation. The suitability of obstruction-free algorithms for real-time systems
has not been investigated yet.
We may conclude that a lock-free approach for local data exchange will be taken in this work. The dependency is mainly on the processor architecture, since lock-free shared objects can be implemented without interference from the real-time scheduler. This is an important property of lock-free shared objects, since it reduces the complexity of the scheduler and avoids ‘up-calls’ to the real-time operating system’s kernel. None of the control architectures we will discuss in the next section has mentioned the use or importance of these primitives, thus we may conclude that, to our knowledge, no other machine control architecture has used lock-free shared objects to its advantage.
Distributed Real-Time Communication
Applications which require a quality of service for communication between
distributed components add the following requirement to the real-time
operating system:
• a real-time communication protocol stack.
More specifically, it must be possible to specify maximum latencies and
minimum throughput of inter-component messages. This work does not
try to solve the design and specification of such protocols. Fortunately,
Schmidt et al. (Schmidt, Stal, Rohnert, and Buschmann 2001) have proposed
many designs for distributed real-time communication. These solutions are
complementary to the solutions of this work and can be used for distributing
the application’s components, while the local communication in a node is done
using lock-free shared objects. Section 2.4 sheds some light on the ways in
which the components which use this framework can be distributed.
2.2.4 Evaluation
To summarize, this work has the following requirements for realising real-time
activities and communication:
1. One of the Rate Monotonic (RM), Deadline Monotonic (DM) or Earliest Deadline First (EDF) real-time schedulers, for periodic and non-periodic
threads.
2. A signal and wait or wait conditionally primitive (semaphore).
3. A universal lock-free single word exchange Compare And Swap (CAS)
primitive for local data exchange.
4. A real-time communication protocol stack for distributed data exchange.
The use of lock-free primitives is complementary to distributed
communication and even more, encourages distribution since incoming
messages will only disturb the local real-time properties according to their
quality of service. In practice this means that a lock-free control framework
may be accessed by any activity, including messages from remote clients.
2.3 Robot and Machine Control
This section explores how robot and machine control are solved by state
of the art control frameworks. Robot and/or Machine Control software
frameworks (Fayad and Schmidt 1997; Stewart, Volpe, and Khosla 1997;
Borrelly, Coste-Manière, Espiau, Kapellos, Pissard-Gibollet, and N. Turro
1998; Becker and Pereira 2002) and real-time software patterns (Selic 1996;
Douglass 2002; McKegney 2000) have been proposed to provide the system
architect with a more robust tool for creating (real-time) control applications.
These frameworks have in common that they allow the user to define an active
task object (also called component, active object, module, . . . ), which can
define an interface and communicate with other (distributed) tasks, or react
to events in the system. The framework we present is comparable to these
frameworks, hence some are evaluated in this section. Some frameworks favour
continuous control, others are more appropriate for discrete control. Since
most machines require hybrid controllers (i.e., containing both continuous
and discrete parts), we evaluate the frameworks in their ability to control
hybrid setups.
Not all frameworks in this category provide a quality of service when the tasks are distributed over a network; such frameworks offer only a localised real-time control infrastructure. In addition, they must provide a means to communicate with non-real-time tasks (such as the network protocol stack)
without violating the real-time constraints of the system. In practice this
means that it must provide a communication mechanism which is not subject
to priority inversions or deadlocks.
Software frameworks (such as) for machine tools and especially
for Computer Numerical Control (CNC) applications are often strictly
hierarchical and limited to simple axis-centric motion control. As the first
Chapter stated, such architectures are not the (sole) target of this thesis,
which wants to serve even the most complex, multi-sensor, hybrid control
applications. Below, only frameworks closely related to this work are
presented.
2.3.1 Block Diagram Editors
The most natural tools for control engineers are functional block diagram editors and code generators, for example SystemBuild by National
For example SystemBuild by National
Instruments Inc., and SIMULINK (a.k.a. the Real-Time Workshop) by
The MathWorks, Inc. (NationalInstruments ; MathWorks ). These tools
are heavily biased toward the low-end controls market. As such, they
have interfaces to control design tools, and are powerful for choosing gains,
designing controllers, etc. However, while they do have some facility
for custom blocks, they are not designed as whole machine controllers.
Furthermore, code generated from the block diagram combines both data
objects and functional blocks into monolithic structures at run time. As a
result, the generated code is not distributable, not open and has very limited
ability to develop a hybrid machine controller.
2.3.2 Chimera
The Chimera methodology was invented by Stewart et al. (Stewart, Volpe, and
Khosla 1997; Stewart, Schmitz, and Khosla 1997) and presents a Port Based
Object (PBO) for modular, dynamically reconfigurable real-time software. It
combines the port-automaton algebraic model of concurrent processes with
the software abstraction of an object. The (high level) aims of Chimera
resemble very closely the aims our work tries to accomplish. However, the
approach is completely different. The framework requires the Component
Builder to fill in PBO’s which may only communicate using shared variables,
using “coloured” ports. However, type checking is not present, since all data
is moved around by memory block copies, so the colouring is done by using
names. The framework guarantees that a variable is written by only one
PBO though. A message based protocol is explicitly not used. Hence, no
support for events, commands, state machines, and similar facilities exists
in this framework, and only basic control loops can be constructed. It is
thus an enhanced block diagram editor, with the features described in the
previous section. Attempting to implement more complex systems will lead
to “programming by side effects”, e.g. setting a shared variable with the
intent to cause a state change in another object. Chimera does not tackle
the distribution problem, except over a multi-processor setup with a shared
memory bus (the VERSAmodule Eurocard bus (VMEbus)).
2.3.3 ControlShell
Schneider et al. present in (Schneider, Chen, and Pardo-Castellote 1995;
Schneider, Chen, Pardo-Castellote, and Wang 1998) the ControlShell. The
commercial entity behind ControlShell, Real-Time Innovations Inc., is the
driving force behind the whole framework. It is a software framework for
feedback control which addresses the “programming by side effects” issues of
Chimera, by adding an event formalism with state machines and defining finer
grained functional units. Moreover, ControlShell contains a middleware, the
Network Data Distribution Service (NDDS) (Pardo-Castellote and Schneider
1994), for distributing the data flow of its functional units. Recently,
the OMG DDS standard (OMG d) was specified to adopt this network
protocol. All objects in the ControlShell are name-served, which allows run-time look-up and interconnection. It intentionally does not address design-time verification of real-time deadline constraints, nor guarantee that state
machines will not deadlock. This has the advantage that the component
builder is not constrained, at the expense of requiring more tests before using it
in production. A powerful feature of ControlShell is that it allows a graphical
composition of both data flow and execution flow in a separate view. When
the system is constructed, the execution engine orders tasks and sequences
them in a single thread if they have the same periodic frequency. ControlShell
implements many features that are targets for this thesis also and confirms
the validity of some software patterns discussed in this dissertation.
2.3.4 Genom
The Generator Of Modules (Genom) framework (Fleury, Herrb, and Chatila
1997; Mallet, Fleury, and Bruyninckx 2002) was designed for mobile and
autonomous robots. It enables the Application builder to compose distributed
modules communicating over a network or shared memory protocol. The
Component Builder fills in “Codels” (Code Elements) which are executed
by the framework at appropriate times. Very similar to the Chimera
methodology, all communication is done through shared variables and thus
suffers from the same deficits as the Chimera methodology above. Since the
distributed communication is non-real-time, but the Codels may be executed
by a real-time scheduler, Genom does not provide means for Real-Time
Communication.
2.3.5 Orccad
Simon et al. have designed Orccad (Simon, Espiau, Kapellos, and Pissard-Gibollet 1998; Simon, Pissard-Gibollet, Kapellos, and Espiau 1999). The
framework provides a Graphical User Interface to build and connect modules
statically, and to formally verify logical correctness. Procedures can be
written in a script language allowing parallelism, wait conditions and event
interaction. Statically constructed state machines allow switching tasks into or out of the running system. Although initially targeted at robotics applications,
it is usable for general machine tool setups. Distribution of components is not
addressed in Orccad. Also, Orccad defines a strict hierarchy of “low-level” tasks and “high-level” procedures; the work presented in this dissertation does not impose such hierarchies, and procedures are themselves tasks again. In contrast to this work, the scripts only set up state machine interactions, but cannot perform calculations or define any activity. Hence the usefulness of the
scripts is reduced to “glueing” of existing tasks.
2.3.6 Evaluation
The common denominator for real-time control frameworks is that most
publications do not mention their sensitivity to deadlocks or priority inversions
and how they can be avoided. One can only assume that by constraining
the use of the framework, the common user is shielded from these issues in
practice. The graphical user interfaces for real-time software development are actually tools which constrain their users from implementing the most natural design, or which prevent sub-millisecond real-time reaction times.
The framework we present relies on proven real-time communication
primitives of Section 2.2, such that if the application builder chooses to use
the primitives provided by the framework, hard real-time constraints will
not be violated by the framework itself. The openness of the framework
allows application builders to insert code which may violate real-time bounds.
However, these violations will only occur because of local errors since any
communication is done by the framework itself. Furthermore, the user’s
application is allowed to inter-operate freely with custom software like device
drivers or utility libraries.
2.4 Component Distribution
The introductory example of the modular wood working machine was a clear
example of why controller components need to be physically distributable
in a plant. The synchronisation of both continuous control and discrete
events in a distributed setup requires a real-time networking layer. The term
quality of service is used in distributed computing to describe the real-time
properties of an activity or connection. Due to the research in distributed
object computing, large frameworks have been developed to transparently
distribute software components over a network. Transparently means that
one component cannot tell (or does not need to know) whether another component
with which it communicates is local or remote.
Tele-robotics (Botazzi, Caselli, Reggiani, and Amoretti 2002) and
distributed control (Brugali and Fayad 2002; Burchard and Feddema 1996;
Koninckx 2003) require real-time connections between real-time systems.
These control applications thus require a framework for distributing real-time
objects, commonly referred to as “middleware”, and a framework for localised real-time control to execute the local periodic control actions.
This section will discuss such frameworks for general real-time applications
and for machine control.
2.4.1 Communication Frameworks for Distribution
This thesis does not define or extend a middleware framework, but relies on the
existence of sufficiently advanced middleware for addressing its distribution
needs. We only discuss one framework for distribution, (Real-Time-)CORBA,
which specifies the middleware of our framework. This section situates the
usage and motivates the importance of RT-CORBA in this thesis. Other
frameworks, such as Sun JavaSoft’s Jini, Java RMI, Enterprise Java Beans
or Microsoft’s DCOM and .NET are not (yet) suitable at all for real-time
systems and are not discussed.
The Common Object Request Broker Architecture (CORBA)

The Common Object Request Broker Architecture (CORBA) is one of many specifications of the Object Management Group (OMG). It allows
object and component interfaces to be defined in the (Component) Interface
Description Language ((C)IDL). One only needs this interface description,
and a CORBA library, to be able to communicate with a remote object
providing the services described in the interface.
CORBA was first designed without real-time applications in mind.
Fortunately, the Distributed Object Computing (DOC) Group for Distributed
Real-time and Embedded (DRE) Systems at the Washington University,
St.Louis, Missouri, the Vanderbilt University, Nashville, Tennessee and the
University of California, Irvine, California extended the existing CORBA
standard with a specification for Real-Time CORBA (RT-CORBA). It allows specifying quality of service properties of distributed communication, for
example, the priority at which a certain (group of) requests is handled and
the propagation of priorities from client to server. It also allows early resource
acquisition, such that all resources are acquired before the application starts
communicating. The collocation optimisations are a major extension to the
standard which allow objects running in the same process or host to use more
efficient (real-time) communication channels.
RT-CORBA knows only one implementation, The Ace Orb (TAO), which is an Open Source project founded by Schmidt et al. (Schmidt
and Kuhns 2000; Gokhale, Natarajan, Schmidt, and Cross 2004) at
the Washington University, St.Louis, Missouri and now continued at the
Vanderbilt University, Nashville, Tennessee. Although TAO runs on real-time
operating systems such as VxWorks and QNX, the execution of collocated
objects on a Free Software, hard real-time operating system (Real-Time
Application Interface (RTAI)) was pioneered during this thesis. The results were used to distribute an embedded real-time controller from the human-machine interface (a Graphical User Interface, GUI). A firm real-time operating system, Kansas University Real-Time (KURT) Linux, was pioneered earlier, in 1998, to execute CORBA
applications (Bryan, DiPippo, Fay-Wolfe, Murphy, Zhang, Niehaus, Fleeman,
Juedes, Liu, Welch, and Gill 2005). The major technical hurdle for distributed
real-time communication was that the only standardised CORBA protocol
was based on the TCP/IP protocol stack, which, first of all, is not a real-time
protocol, and secondly, the stack is implemented in such a way that, even if the TCP/IP protocol could be made reliable (on a dedicated network), the packet traversal in the stack itself is not bounded in time. The RT-Net (Kiszka, Wagner, Zhang, and J. 2005) protocol stack tries to address the latter for Linux; however, integrating it into CORBA is a subject for future research.
2.4.2 Control Frameworks for Distribution
Middleware alone is not enough to provide a portable, modular framework for
Control. One needs to define the interfaces or “contracts” which partition the application into exchangeable modules. In other words, the modules only know
each other by their interface. Some noteworthy frameworks are:
Osaca The Open System Architecture within Control Applications (Dokter 1998; Koren 1998) was the result of the European OSACA-I and -II projects, which ran from 1992 to 1996. A consortium of machine tool builders, control vendors and research institutions tried to define a uniform, vendor-neutral infrastructure for numerical controllers, programmable logic controllers and manufacturing cell controllers. The emphasis was, however, on the definition and implementation of real-time middleware for connecting these components. The proposed architecture only defined coarse-grained interfaces between modules, which on the one hand allowed most existing systems to be ported to the architecture, but on the other hand lacked the detail needed to guarantee interoperability. The project thus not only suffered from under-specification, it also spent a lot of effort on the implementation of the real-time middleware. The RT-CORBA specification only became available in 1999, so this project had no other option. The Ocean project below is a more fine-grained rework of the Osaca project, using RT-CORBA.
Ocean The Open Controller Enabled by an Advanced Real-Time Network (Ocean) project (Meo 2004; Ocean) was very similar to the Osaca project, both in objectives and in partners. It was part of the Sixth Framework of the European Information Society Technology (IST) programme and ran from 2002 till 2005. It delivered a Distributed Component Reference Framework (DCRF) which used RT-CORBA to define the interfaces of components for control of machine tools. Among others, it defined Kinematics, Controller, Human-Machine-Interface and Programmable Logic Controller components as distributable and interchangeable parts of a modern networked machine controller. All communication between components is method based, meaning that a component pulls in the information it needs from other components when it sees fit. No intermediate data storage (such as a "public blackboard" or "data object") exists in the Ocean communication model.
Both inspired by Osaca and relieved of building and maintaining a middleware implementation, Ocean could build on state-of-the-art technology. The project embraced Open Source software and ideas completely, re-using what was available and building a reference architecture as a Lesser General Public License (LGPL) library running on the GNU/Linux operating system.
This thesis was partially funded by the Ocean Project and contributed an
initial port of TAO to the RTAI/Linux platform, allowing collocated objects
to communicate in a real-time process.
Omac — EMC — TEAM In 1994, the Open Modular Architecture for Controls (National Institute of Standards and Technology) specification was first published by a group of American automotive manufacturers. The document provided guidelines for a common set of APIs for U.S. industry controllers to better address manufacturing needs for the automotive industry. Omac formed a user group and working groups around packaging, machine tools and other domains, each applying the architecture to a specific application domain. The OMAC API working group develops a specification that defines an intelligent closed-loop controller environment. The API is written in Microsoft IDL and is hence largely (completely) platform dependent. It originated from the Technologies Enabling Agile Manufacturing (TEAM) specifications for Intelligent Closed Loop Processing (ICLP), which defined its interfaces earlier in portable IDL. The Enhanced Machine Controller (EMC) platform serves as a validation platform for the OMAC-API.
MMS, Osaca and CORBA Boissier, Gressier-Soudan, Laurent and Seinturier have published earlier (Boissier, Gressier-Soudan, Laurent, and Seinturier 2001) about an object-oriented real-time communication framework for numerical controllers. It implemented parts of the ISO/OSI Manufacturing Message Specification (MMS) in a CORBA IDL specification, using the real-time communication layer. The design was inspired by the object-based architecture. However, MMS's main focus is remote supervision, and the numerical controller was a monolithic block which presented an MMS interface over CORBA. Their work concluded with experiments that demonstrated a client-server setup where the client is a CORBA-based Human Machine Interface (HMI) and the server a CORBA-based Java wrapper around the numerical controller, which in turn was written in C.
Smartsoft Smartsoft of Schlegel et al. (Schlegel and Wörz 1999) is a Free Software framework using CORBA for implementing design patterns for distributed communication. The key idea is that, for data transfer, four communication protocols can be identified which encompass all communication needs. They are 'Command' (to send data synchronously), 'Query' (to get data asynchronously), 'Auto Update Timed' (to get data periodically) and 'Auto Update Newest' (to get data when it changes). Furthermore, for event propagation and state machine design, 'Event' and 'Configuration' patterns are proposed, although the two are not integrated with each other. The identification of the required patterns is closely related to the analysis made in this work, although the semantics differ. What Smartsoft calls 'Command' is named 'Method' in this work, and what it calls 'Query' is named 'Command' in this work. The motivation for these names is given in Section 3.3 and Section 4.5. The auto update patterns of Smartsoft are designed as events in Section 4.6, while an integrated (event) state machine design, as opposed to the 'Configuration' pattern, is presented in Chapter 5.
Although both Smartsoft and this work identified the same basic communication patterns, the semantics, motivation and integration of the resulting patterns are significantly different. Smartsoft uses an RT-CORBA implementation, but is, in contrast with this work, unsuitable for hard real-time and highly concurrent applications. Also, this work goes further in integrating the patterns in a generic task interface, which forms the boundary for distributing application components and allows transparently concurrent interaction between real-time and non-real-time components.
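To make the comparison of pattern vocabularies more tangible, the fragment below renders the four data-transfer patterns as abstract C++ interfaces; these are illustrative only and are neither the Smartsoft classes nor the task interface of this work.

#include <chrono>
#include <functional>

// Illustrative abstractions of the four data-transfer patterns (not the Smartsoft API).
template <class T>
struct Command {                 // send data synchronously to a peer
  virtual void send(const T& data) = 0;
  virtual ~Command() = default;
};

template <class T>
struct Query {                   // request data; the answer arrives asynchronously
  virtual void request(std::function<void(const T&)> on_answer) = 0;
  virtual ~Query() = default;
};

template <class T>
struct AutoUpdateTimed {         // receive the data periodically
  virtual void subscribe(std::chrono::milliseconds period,
                         std::function<void(const T&)> on_update) = 0;
  virtual ~AutoUpdateTimed() = default;
};

template <class T>
struct AutoUpdateNewest {        // receive the data whenever it changes
  virtual void subscribe(std::function<void(const T&)> on_change) = 0;
  virtual ~AutoUpdateNewest() = default;
};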
2.4.3 Component Models
This work defines a component model for control applications. A component model is a (large) set of interfaces which defines how software components can be created, deployed and executed in a distributed environment. Deployment of distributed objects is a research field in its own right and provides infrastructure for (automated) packaging and distribution of software components, which encapsulate a "business" or "service". Research is in progress which tries to find ways to fit high-performance and real-time applications into such a component model. The CORBA Component Model (CCM), for example, is being extended to that aim (Wang, Schmidt, and Levine 2000). Connection of a component to common services and other components is managed by a component container. It is not the intention of this work to mimic or reinvent the functionality of these frameworks, but to define the container and components for control applications in Chapter 3.
2.4.4 Evaluation
This section discussed one general framework for distributing components and several frameworks for distributing components for control applications. Real-Time CORBA is chosen in this work to mediate the distribution of components. This choice is motivated by two properties: (a) it allows a quality of service for communications to be specified and propagated, and (b) it can distribute any component whose interface is expressible in IDL. This matches the requirements from the introduction. If the component design in this work is expressible in IDL interfaces, it can be distributed over a network.
2.5 Formal Verification
Formal verification has not been a prime objective of this thesis. The only requirement was that the solution should be sufficiently structured not to constrain formal verification methods. Formal verification is a necessary tool for control systems. Just as a bridge builder can guarantee that a bridge will not collapse under the daily load of traffic for decades, ultimately, software designers must be able to guarantee the safety and stability of a software application. Using tools and languages which have been verified in their own right is necessary to accomplish that goal. The ultimate goal of verification is that it should be sufficient to specify a contract, a behaviour, a business model in an unambiguous language and that, from this language, a working program is generated or compiled. A contract is declarative and, fortunately, declarative programming languages are very suitable for formal verification. On the other hand, procedural languages (like Pascal or C) are not suitable to specify contracts but are very suitable to execute them. If the contracts can be verified first and then used to automatically generate procedural code, then the resulting application will be formally correct. This procedure requires a vast amount of research to formally verify all tools and artifacts used in the process. Our overview of formally verifiable languages contains only one language, which inspired our methodology for inter-task communication.
2.5.1 Communicating Sequential Processes
One of the most renowned (and oldest) formal verification methodologies is Communicating Sequential Processes (CSP) (Brookes, Hoare, and Roscoe 1984), which allows a program to be logically verified. Each task is modelled as an independent process which communicates through event channels. All event channels connect exactly two processes, a producer and a consumer. When the producer raises an event, it blocks (waits) until the consumer catches the event, and thus consumes the data associated with the event. This form of communication allowed a mathematical framework to be set up for formally validating logical correctness. The validation can detect events which are never consumed or never produced in a given state, causing a Process to "freeze". It could not predict timeliness, i.e. the time a synchronisation would take. The user was thus forced into a priority-based approach where concurrent Processes were executed according to their position in a priority queue. As time determinism is a fundamental property of real-time systems, CSP could not assist in solving that issue and even left the user with "trial-and-error" tweaking of priorities to get response times right. Starvation of Processes was one of the most common failures in complex CSP programs.
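The rendezvous semantics of a CSP event channel can be mimicked in a few lines of C++; the sketch below (using a mutex and condition variables, hence without any of CSP's verification or timing properties) only illustrates that the producer blocks until the consumer has taken the value.

#include <condition_variable>
#include <mutex>
#include <optional>

// One-to-one rendezvous channel: produce() returns only after consume() took the value.
template <class T>
class EventChannel {
public:
  void produce(const T& value) {
    std::unique_lock<std::mutex> lock(mutex_);
    not_full_.wait(lock, [this] { return !slot_.has_value(); });
    slot_ = value;
    not_empty_.notify_one();
    // Rendezvous: block until the consumer has emptied the slot again.
    not_full_.wait(lock, [this] { return !slot_.has_value(); });
  }

  T consume() {
    std::unique_lock<std::mutex> lock(mutex_);
    not_empty_.wait(lock, [this] { return slot_.has_value(); });
    T value = *slot_;
    slot_.reset();
    not_full_.notify_all();
    return value;
  }

private:
  std::mutex mutex_;
  std::condition_variable not_full_, not_empty_;
  std::optional<T> slot_;
};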
The hardware on which CSP programs run is no longer common, and efforts in Twente (G.H., Bakkers, and Broenink 2000; Orlic and Broenink 2004) and Kent (Brown and Welch 2003) are underway to map CSP design methodologies to languages such as C, C++ and Java running on general purpose processors and operating systems. Occam was the first programming language for CSP programming. It is an imperative procedural language (thus like C or Pascal) which allows parallel processes executing sequential statements to be specified. The specification of the Processes in Occam is now considered cumbersome. CSP does not offer an object-oriented approach and hence does not allow abstractions such as inheritance, polymorphism and composition. This limits its usefulness for large-scale designs: highly interconnected systems are hard to specify and maintain.
2.5.2 Evaluation
CSP offers an important lesson: if all communication is routed through event channels and the programming language is suitable (restrictive), logical verification is possible. As was recognised in the CSP field (G.H., Bakkers, and Broenink 2000), the event channels also offer ideal boundaries for the distribution of processes. This work applies the same principle: all communication is done through dedicated objects (although not all are event channels), localising data exchange and thus providing distribution boundaries and fulfilling a prerequisite for formal verification. Section 4.4 explains how the Object-Port-Connector pattern, which models the CSP event channel more generically, is applied in this work and hence leaves a door open to formal verification.
2.6 Summary
This work uses UML because it provides a standardised framework for
expressing its designs and mechanisms. To obtain a high degree of time
determinism during communication, local activities on a node exchange
data using lock-free algorithms. This work will validate these algorithms
experimentally. The framework relies on the presence of a real-time network
transport for distributed communication. The communication primitives are
inspired by good practices from other control frameworks: separating data
flow from execution flow, using ports and connectors for the former and
operations and events for the latter. The software components are defined
such that they are distributable by using a middleware framework and the
practices of state of the art component deployment are applied. Finally, the
result is not formally verified, mainly because such tools are not available or
too restrictive in the software complexity they can handle.
Chapter 3
The Feedback Control Kernel
Feedback control is just one application in machine and robot control. When controlling a classical multi-axis machine or a camera-assisted robot, the same pattern can be found in the application software structure. The aim of this work was to identify and extend this structure in such a way that intelligent control (e.g., model-based on-line estimation) and process (or experiment) surveillance and monitoring fit naturally into the resulting application.
As the previous Chapter showed, many solutions have been presented in the last decades, optimising for one facet of the control problem or another. This Chapter presents our design as the "design pattern for feedback control", which structures feedback control applications into components for sensing (and estimating), planning, process control and regulation, which are common activities in any feedback controller. It serves as an application template for feedback control applications, not for other machine control applications.
The pattern supports “programming in the large”, i.e. connecting business
components into a functioning system. However, to implement advanced
hybrid control applications where reconfiguration, real-time state changes,
distribution and on-line (user) inspection and interaction are “standard”
features, a supporting infrastructure is required to provide these services to
the components. Both the design pattern and the infrastructure form the
“Feedback Control Kernel” (or Kernel in short).
Since this design pattern only identifies structure and infrastructure, it does not need to enforce a particular implementation. However, in real-time applications, no part of a specification should be left blank as an implementation detail; hence, all following Chapters have the sole purpose of detailing the infrastructure for this design pattern. This Chapter sets the stage for the rest of this dissertation: it motivates the required infrastructure
and defines a granularity of component activities and interfaces. Solutions for
the design of the infrastructure and the execution of the component activities
are presented in the following Chapters.
This Chapter is structured as follows. First, the design pattern for feedback control is specified. The advantages of the pattern are motivated in Section 3.2. Section 3.3 describes
the infrastructure which is required by a sound implementation of the design
pattern. The Chapter is concluded by an application example of two robots
emulating a surgical procedure.
3.1 The Design Pattern for Feedback Control
This design pattern specifies a common structure for building control applications out of reusable components. The design pins down (restricts) the
responsibility (or “role”) of each component for the sake of reusability and
readability by control engineers. It attaches a semantic meaning derived
from the feedback control domain to each component in the pattern. The
semantics and structure are shared between different implementations, which
is important for the “programming in the large” context of this thesis. This
section is structured into paragraphs commonly used in the design pattern
literature.
3.1.1 Introduction
Name and Classification The Feedback Control design pattern is a structural design pattern: it structures how feedback control applications can be built.
Motivation This design pattern captures the application-independent
structure that recurs (often implicitly) in all feedback control systems. Making
this structure explicit facilitates more efficient and deterministic designs (with
an interface familiar to control engineers), compared to starting from scratch
with an application-dependent design that is implemented on top of the bare
operating system primitives.
Intent The pattern intends to solve the design of controllers on real (as opposed to simulated) machines. This means that the controller, whose control algorithm is often created by a tool designed for control engineers, must react to the signals it receives from the machine and allow run-time diagnostics or share control with other participants. It is the intent to embed or decompose existing control algorithms into an open infrastructure which allows inter-operation with other parts of the application.
Applicability The pattern can be used to design both distributed and local feedback control for continuous, discrete or hybrid control. To be applicable to a control problem, the solution must be decomposable into a structure with the roles of the participants listed below. The pattern is mainly applicable to (optimised for) systems with high refresh rates, low data rates and high concurrency.
3.1.2 Participants
We can identify three principal participants in our design pattern: "Data Flow Components", "Process Components" and the "Control Kernel Infrastructure". The responsibilities and roles of each participant are described in this section.
Data Flow Components The core functionality of a digital feedback
control system is to execute a given set of actions once per sampling interval.
The data flow components have the responsibility to create and manipulate a
flow of data forming a feedback control loop. They form the implementation of
continuous state space controllers, PID controllers, etc. Figure 3.1 pictures two common data flow component networks. The components are classified with five Stereotypes, i.e. the Generator, Controller, Sensor, Estimator and Effector:
Sensor. A component which reads in values from the hardware (pulses,
currents, voltages, etc.) and translates them into input data which
have a physical meaning in the control application (positions, distances,
forces, etc.). One or more Sensor components are always present in
feedback control applications.
Estimator. In general, not all data that are needed can be measured directly.
The Estimator contains the algorithms that transform the sensor values
into “filtered” data, that feed the models. It is also the only component
which has access to the control action (outputs) of the previous
control step (arrow indicated with T (k − 1)). Common activities
performed in the estimator are Kalman filtering (Lefebvre 2003),
particle filtering (Gadeyne 2005) and model building for intelligent
controllers (De Schutter, Rutgeerts, Aertbelien, De Groote, De Laet,
Lefebvre, Verdonck, and Bruyninckx 2005). This component is less
common in state of the art control applications, but indispensable in
the context of “intelligent robotics”, which is a major target application
domain of this thesis. In fact, the lack of this component was the major
reason to start the research on a more advanced control infrastructure.
Figure 3.1: Two possible data flow networks for the Design Pattern for Control: (a) Input Oriented, (b) Model Oriented. The rectangular Component Stereotypes contain activities, the oval Connectors contain data and structure the data flow. The arrows indicate 'writes' and 'reads'. The dashed-line Command Connector is only present in cascaded architectures. (a) pictures basic feedback control with only four components, (b) adds an Estimator which calculates a central model.
Generator. A component which calculates the setpoints, i.e. the data that
contain the next desired values for the system. The command Connector
is only present in a cascaded architecture, where the data flow commands
are generated by a higher level Controller component. In absence of this
connection, the Generator receives commands through its execution flow
interface (detailed below). The setpoints may be calculated from the
model, the command and the inputs from the Sensor.
Controller. A component which uses all above-mentioned data to calculate
the next outputs to be sent to the hardware that drives the system. A
typical algorithm performed in the Controller component is a PID filter
or a state space control algorithm.
Effector. A component which transforms the outputs of the Controller into
hardware signals, such as voltages and pulses.
The above-mentioned periodic data flow activity of the participants is
typically a synchronous activity: the sequence of actions to execute is
deterministic, and can, in a non-distributed application, be run in one single
thread of the operating system. The simplest control applications do not need the functionalities of all the above-mentioned core components; e.g., the Estimator is often not needed or is integrated in the Sensor component. In that
case, these “superfluous” components and any unneeded connector can just be
omitted. The application may choose to integrate two or more components
into a single implementation. For example, if the Sensor and Effector are
implemented by a single component, this component reads outputs and writes
inputs and offers two ‘faces’ to the other components.
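A minimal C++ sketch of these stereotypes, assuming hypothetical ReadPort/WritePort classes (the actual Orocos interfaces differ), shows how each role only touches the data flow through its ports:

// Hypothetical port types; the real framework backs them with connectors (Section 4.4).
template <class T> struct ReadPort  { T    read() const; };
template <class T> struct WritePort { void write(const T& value); };

struct JointPositions  { double q[6]; };
struct JointVelocities { double qdot[6]; };

// Sensor: hardware pulses/currents in, physically meaningful Inputs out.
struct Sensor {
  WritePort<JointPositions> inputs;
  void update() { /* read encoder card, convert pulses to radians, inputs.write(...) */ }
};

// Generator: produces the Setpoints, possibly driven by a Command connector in cascades.
struct Generator {
  ReadPort<JointPositions>  inputs;
  WritePort<JointPositions> setpoints;
  void update() { /* interpolate towards the commanded target, setpoints.write(...) */ }
};

// Controller: Inputs and Setpoints (and possibly a Model) in, Outputs out (e.g. a PID law).
struct Controller {
  ReadPort<JointPositions>   inputs, setpoints;
  WritePort<JointVelocities> outputs;
  void update() { /* outputs.write(pid(setpoints.read(), inputs.read())) */ }
};

// Effector: Outputs in, hardware signals (voltages, pulses) out.
struct Effector {
  ReadPort<JointVelocities> outputs;
  void update() { /* write outputs.read() to the analog output card */ }
};

// An Estimator (omitted) would read the Inputs and the previous Outputs and write a Model.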
Process Components Besides controlling system variables (such as
velocity or force) in the data flow, a (modern) controller controls a process.
For example, not only the applied force of a surgical robot tool is controlled,
but the control parameters or algorithms themselves are frequently adapted
as well. A machine tool for automatic deburring must evaluate the quality of
the edges and decide if another pass is needed and with which feed, force etc.
The term “execution flow” is used to denote the continuous evaluation and
execution of these process decisions based on the contents of the data flow.
The execution flow is realised by process components (Figure 3.2), which have
read access to the whole data flow and which adapt the data flow components
in turn. They introduce coupling (i.e. they know of the presence and
interface of a specific type of data flow component) in order to achieve a
higher order of intelligent control. This single point of coupling allows more
decoupled data flow components, but also offers a single point of serialisation,
debugging and verification.
Figure 3.2: A process component can read all the data flows, in order to be able to
make appropriate decisions on the execution flow level.
Multiple process components can be present in the controller and their
role can be active (controlling) or passive (serving) as seen in Figure 3.2. For
example, a process component which contains the kinematic and dynamic
state information (of the process) can be used by generator, sensor or effector
to perform forward or inverse (stateful) kinematics. This is a passive role, as
it waits for transformation requests. On the other hand, a process component which keeps track of the measured tool-tip temperature (or estimated tool-tip wear) of a cutting tool will inform the generator of updated tool dimensions. This is an active role as it measures and corrects when necessary. As more process parameters need additional control, additional process components will be added and additional coupling will exist in the Control Kernel. The process component not only influences (indirectly) the data flow, but it also loads new state machines into the Control Kernel. For example,
a process component for homing the axes of a robot or machine tool will
command the setpoint generator to set out homing trajectories and monitor
the inputs or home switch events to control the homing process.
Control Kernel The Control Kernel is a collection of services which all
components require to perform their activity. It provides a deployment
environment very similar to a container of the Component Model (see Subsection 2.4.3). It connects the data flow between components using connectors, manages component deployment and executes the activities of the components. A detailed overview of the services of the Control Kernel
is delegated to Section 3.3. This section merely assumes that the data flow
components and process components can rely on these common services.
Figure 3.3: Overview of data flow and process component’s interface as presented
in this work. Each facet type is used for execution flow, data flow or configuration
flow.
3.1.3 Structure
This section shows how the design pattern structures a control application.
We first show what a component consists of and how it is deployed in the
Control Kernel. Once deployed, we show how this component interacts with
the data flow and the execution flow.
Component Structure Each component has an interface which exposes
what it can do or what it requires. Each Control Kernel component interface
may consist of (Figure 3.3):
• Facets and Receptacles are the provides and requires interfaces of
component and Kernel. The possible facets are restricted as shown in
the figure. A component may be configured through its properties facet,
which allows parameters to be read or written. It may offer operations
it can perform for other components such as doing a calculation or
taking an action. It may export events to which other components can
react. Finally, it also publishes the ports from which it reads or writes
data. The aim of these facets is that they are connected to compatible
receptacles, which reside on other components. The pattern, however, constrains the location of receptacles: data flow components may only have receptacles to process components, while process components may have receptacles to any component type. All components may have
receptacles to the common services such as a Naming Service for locating
other components or a Time Service for measuring elapsed time.
• Events may be emitted and consumed by data flow and process
components. An event is a message with optional associated data. Events can be used to signal noteworthy changes in the data flow and cause a change of state of the process. Events are by design a mechanism to decouple sender and receiver (by an Event Service, see Section 3.3); an event may be emitted and consumed by varying components. For example, data flow components may only signal a change in the data flow or their internal state, which the process components capture in order to take actions accordingly.
• Ports are the read and write data flow interfaces of a component. The component Stereotype defines the ports, which indicate the type of data that can be exchanged with the component through a Connector.
Except the Effector component, each data flow component has one
write (provides) port, and, except the Sensor component, each data
flow component has one or more read ports. Process components may
contain read ports for each connector in the Kernel and are not allowed
to modify the data of a connector.
• Connectors, not a part of a component, connect Ports and constrain the connected ends such that they look "always and instantaneously equal" when seen from components not involved in the connection (Figure 3.4). In other words, the Connector guarantees consistent concurrent access to the data, which represents the state of the connection and may be buffered or unbuffered. As a refinement, a data flow Connector constrains the direction of data exchange, indicated by the arrows in Figure 3.1. Two components can exchange data of any type through more than one Connector (Figure 3.4). A minimal code sketch of such a connector follows below.
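The sketch assumes a single writing port and any number of reading ports; a mutex is used only to keep the example short, whereas the connectors of this work rely on lock-free algorithms (Section 4.4).

#include <mutex>

// Sketch of a data flow Connector: one writer, many readers, always a consistent value.
template <class T>
class Connector {
public:
  explicit Connector(const T& initial) : data_(initial) {}

  void write(const T& value) {          // called by the single writing port
    std::lock_guard<std::mutex> lock(mutex_);
    data_ = value;
  }

  T read() const {                      // called by any number of reading ports
    std::lock_guard<std::mutex> lock(mutex_);
    return data_;
  }

private:
  mutable std::mutex mutex_;
  T data_;
};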
Deployment Before a component can be used in the Control Kernel, it
must be connected with the other components and the Kernel infrastructure.
This ‘setup’ or ‘configuration’ is commonly refered to as deployment or
“Configuration Flow”. It may consist of merely reading (XML) files with
configuration data which describe how the component should operate and
to which services it should connect. The configuration flow communicates
with the components through the property facet. The connection between
components happens on two levels: the data flow is set up using ports and
connectors and the execution flow is set up using the operations and events
facets. During deployment, the validity of each connection is checked such that the component eventually becomes ready to be started.

Figure 3.4: (a) Without Connector: the classical receptacle-facet connection to denote client-server relations, typical for execution flow. (b) With Connector: a directional data flow Connector operating on the component's facets (ports). Connectors connect compatible ports on components and are the sole point where the data flow can be "captured" by other components.

The states and transitions of a component under deployment are:
• Undeployed (or unloaded). It is not associated with a Control Kernel.
• Deployment (or loading). It is being associated with a Control Kernel.
Its requirements can be checked and it will fail to load when placed in an insufficient or incompatible Kernel. It may also restore previous
state and parameter information, such as control gains and safety values.
• Deployed (or loaded). It is merely present in the Control Kernel but
not necessarily performing any activity.
• Undeploying (or unloading). It is being dissociated from a Control
Kernel, disconnecting any present connectors from its ports and
optionally storing parameter and state information.
The deployment of a data flow component will mainly consist of finding out which ports to connect and of restoring parameter and state information, while the deployment of a process component will, in addition, set up state machines and the execution flow which dictates the behaviour of the application.
Data Flow The pattern acknowledges two different structures for realising
data flow. A feedback control loop can be built around the gathered inputs as
in Figure 3.1(a). The generator and controller operate directly on inputs to
generate setpoints and outputs, for example a classical axis position controller.
When the feedback control loop is built around a model as in Figure 3.1(b),
an estimation or filtering step which obtains the model from the inputs and
previous outputs is added. This is required for intelligent controllers which
operate in a different space than the physical space. However, any blend
between these two structures is possible, for example, a Controller component
can process both Sensor Inputs and the Estimator Model.
Since multiple components of the same stereotype can be present in the
Control Kernel, some will need to be executed exclusively in order not to
corrupt each other’s data flow activity. A deployed data flow component has
the following transitions and states of operation:
• Starting. It validates its configuration one last time, ensures that all required ports are connected and contain valid values, and aborts if this is not the case.
• Running. It performs its data flow activity, periodically reading and writing data ports, possibly emitting events and changing its behaviour upon request.
• Stopping. It writes (if possible) safe values to its data objects.
The Starting and Stopping transitions are enforced by the execution flow below. It is the responsibility of the process components to ensure that no two data flow components write to the same connection and that data flow components are started and stopped in the correct order.
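The deployment states of the previous subsection and the run states above can be combined into one small life cycle per component; the enumeration and the transition checks below are an illustrative sketch, not the framework's actual state machine.

// Illustrative component life cycle (sketch only).
enum class ComponentState { Undeployed, Deployed, Running };

class ComponentLifecycle {
public:
  bool load()   {  // Undeployed -> Deployed: check requirements, restore parameters
    return transition(ComponentState::Undeployed, ComponentState::Deployed);
  }
  bool start()  {  // Deployed -> Running: validate configuration and connected ports
    return transition(ComponentState::Deployed, ComponentState::Running);
  }
  bool stop()   {  // Running -> Deployed: write safe values to the connectors
    return transition(ComponentState::Running, ComponentState::Deployed);
  }
  bool unload() {  // Deployed -> Undeployed: disconnect ports, store parameters
    return transition(ComponentState::Deployed, ComponentState::Undeployed);
  }

private:
  bool transition(ComponentState from, ComponentState to) {
    if (state_ != from) return false;   // illegal transitions are rejected
    state_ = to;
    return true;
  }
  ComponentState state_ = ComponentState::Undeployed;
};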
Execution Flow The structure imposed on the execution flow cannot be modelled as explicitly as the data flow, except that it is completely decoupled from the data flow. That is, discrete states and discrete state events are not mixed with the data flow. The execution flow communicates through the operations and events facets. The execution flow is caused by the execution of
activities in finite state machines. State machines are typically loaded into the
Kernel by process components in order to control discrete state transitions.
The process component delegates the execution of its state machines to the
Execution Engine which serialises state activity and transitions with external
operations and events.
Figure 3.5 shows that the Execution Engine plays a central role in receiving
events and delegating operations to components.

Figure 3.5: The Control Kernel process defines the order of execution of data flow components. The Execution Engine serialises the execution of components with incoming commands.

One can assume that in any Control Kernel application, the following operations are nevertheless present (there are in fact two types of operations, synchronous and asynchronous, but that distinction is not yet made here):
• StartKernel() : The Kernel thread is started, waiting for requests.
Kernel Configuration files are read and possibly a state machine is
started to govern the acceptance of incoming requests.
• StartComponent(‘‘name’’) : Start the activity of a component, such
that it takes part in the data or execution flow.
• StopComponent(‘‘name’’) : Stop the activity of a component, possibly
writing safe values to its connectors.
• StopKernel() : The state machine is brought to a safe state, stopping
all active components and no more requests are accepted.
In addition to these fundamental operations, the infrastructure of the Kernel (such as the data logger and the configuration system) and every added component may add operations to the Control Kernel interface. The precise semantics of "sending", "accepting" and "processing" operations are further elaborated in Section 3.3, and the design patterns for implementing the execution flow are explained in Chapter 4.
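A hypothetical C++ rendering of this minimal Kernel interface could look as follows; the class name and signatures are assumptions for illustration, not the interface defined in the later Chapters.

#include <string>

// Illustrative sketch of the minimal Control Kernel interface described above.
class ControlKernel {
public:
  bool startKernel();                            // start the Kernel thread, read configuration
  bool startComponent(const std::string& name);  // let the named component join the flows
  bool stopComponent(const std::string& name);   // stop the component, write safe values
  bool stopKernel();                             // stop all components, refuse new requests
};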
Cascaded Control The cascading of controllers is very common and must be taken into account when structuring the design pattern. Cascading is by definition done on the data flow: the outputs of a high-level controller are used as commands for a low-level controller. Cascaded controllers also share inputs and models. For example, a low-level controller can take advantage of a model
calculated by a high-level estimator, while the high-level controller will most likely need inputs (e.g. axis positions) gathered by low-level sensors, or outputs generated by low-level controllers, effectively closing the loop in software.
Figure 3.6 shows an unfolded view of the data flow of a cascaded controller.
For example, both Controller components have access to the inputs, although
both may read different kinds of data.
The execution flow (not shown) is structurally no different in cascaded control, although the additional components may require a process component to control the data flow components' states.
3.1.4 Consequences
The design pattern has the following major consequences:
• Since the data flow is constructed at run-time, a connection mechanism
is required such that each component can have its port connected. A
large part of this mechanism can be implemented in the Control Kernel.
A strategy to address this issue is to use a configuration file which details which ports of which components are connected to each other.
• Although the connector pattern provides proper abstraction to the components regarding the locality or distribution of the component, a connector must be provided by the Control Kernel. A Control Kernel implementation makes the trade-off between fully distributable components and memory- and execution-efficient local connections.
• The presented pattern is not sufficiently detailed to implement a hybrid
controller. It only identifies the fundamental structure of a control
application. Section 3.3 gives a detailed overview of the infrastructure a
component may require in addition to the data flow and execution flow
infrastructure.
3.1.5 Known Uses
This pattern is not explicitly used yet in control software frameworks,
although it can be seen as a collection of “good practices” which were
presented in other controller designs. Most control frameworks make
a distinction between the feedback control activity and the activity of
controlling the process as a whole. Orccad, for example, uses Robot Procedures
to define the process of their controllers and Robot Tasks as equivalents to data
flow components.
The ControlShell with its Data Delivery Service (Section 2.3) is closely
related to the data flow structure, using ports to communicate with remote
components by using a connector to encapsulate the distribution and to provide a logical, consistent image of the data.

Figure 3.6: 'Unfolded' cascaded data flow. The outputs serve as commands to the Generator components.

Also closely related is the
Data Acquisition from Industrial Systems (DAIS) specification, which is another OMG standard for distributing process or data flow data in distributed systems.
The Ocean architecture has a similar granularity of components for
distributed control. The component stereotypes are more specialised towards
motion control, but the roles identified in this pattern are present as well. For
example, they define a kinematics component stereotype, which would map
to a process component for kinematics in the design pattern.
Finally, the Orocos framework provides the Control Kernel Template and
components written to the presented stereotypes for control applications in
hybrid force/vision control.
3.1.6 Implementation Example
Section 3.4 gives an example of a cascaded controller.
3.1.7 Related Design Patterns
Multiple design patterns “in the small” can be applied to implement the
details of this pattern “in the large”. They are detailed in Chapter 4.
3.2 Design Pattern Motivation and Benefits
This section motivates and explains the benefits of the strong structure in the
design pattern for feedback control.
First note that Figure 3.1 models a software engineering structure, and
not a control diagram that control engineers are used to. The major difference
is that each arrow in the Figure can contain multiple data paths, and multiple
components can contain independent activities, possibly running at different
sampling rates. An application builder must assign each of the many control
blocks in a typical control diagram to the appropriate component type of
Figure 3.1, where it will be executed under specified real-time conditions.
3.2.1 Decoupled Design
Decoupling is a well-known design criterion, (Gamma, Helm, Johnson,
and Vlissides 1995), and means “programming to interfaces, and not to
implementations.” In other words, the programmer of a component need not
worry about the internals of the implementation of the other components, but
only about the data and events through which they can interact. Decoupling
is discussed to a larger extent in Chapter 4.
An important structural property of the Control Kernel is the decoupling
of the data flow interactions between components by means of connectors
and the decoupling of the execution flow interactions between Kernel and
components by means of operations and events. Why data flow and execution flow need to be decoupled might not be obvious at first sight. The reason is that decoupling the execution flow from the data flow allows reuse of data flow components. The example at the end of this Chapter demonstrates this. Two different robots require Cartesian and joint position control. It thus makes sense to use the same Generator and Controller components for both robots. However, the kinematic transformations or the homing of the axes may be different. If these were all implemented in the data flow components, none could be reused for another robot, and new data flow components would be necessary for each control application. The solution is to encapsulate these robot-specific operations in process components, of which each robot type has its own kind.
The necessity of this decoupling is further motivated by the semantic
differences between data flow and execution flow. Since humans need to design
the controller, a natural assignment of roles to each part of the control problem
is required in order to understand and extend the design. Hence, the design
translates continuous control engineers' viewpoints to the data flow, while it translates (discrete) machine control engineers' viewpoints to the execution flow.
3.2.2 Support for Distribution
Connectors guarantee that data is always consistent in the control system:
only the data that is “in” a connector is “stable,” and can be exchanged
atomically with other components. This clear localisation of data exchange
between components and connectors facilitates (i) implementation of shared
access policies; (ii) distribution of the control application (i.e. the control
system can be “cut” through any of the connectors); and (iii) formal
verification, and automated development and maintenance.
The component model allows distribution of the execution flow as well.
The incoming operations are properly serialised with local activity by the
Execution Engine. Some parts of the Kernel infrastructure might be distributed themselves; for example, the configuration and name services might be organised from a central server. This choice of distribution of services is made by the Control Kernel implementation and is transparent to the components.
3.2.3 Deterministic Causality
Several conclusions can be drawn from following the arrows in Figure 3.1:
1. The data exchange between components is always in one direction, and
there is always one writer, and (possibly) multiple readers.
2. There are no loops in one control action, i.e. the execution of the
different components can be ordered (“serialised”) in such a way that
a component A need not interrupt its activity to wait for another
component B to fill in the data that A needs. This property holds
at every level of a multi-level controller.
These properties simplify the data access design, and improve scheduling
efficiency of real-time feedback control systems. The classical way to
schedule different activities in a real-time application is to put them all
in their own separate thread, and let the operating system take care of
the scheduling of these threads. But this execution schedule can only be
influenced indirectly, by assigning (static or dynamic) priorities to the threads.
In addition, scheduling generates overhead, from (i) the computation of the
scheduling algorithm, and (ii) the context switches.
The presented pattern can avoid scheduling overhead: all components in
a control level can run in one single thread, because of the above-mentioned
deterministic serialisation within each level. There is, in general, not one
unique way to serialise the system, and this choice is up to the application
builders: they have to decide how many priority levels are used in an
application, and which causality (i.e. execution sequence) is used at each
priority level.
This execution causality is different from the physical causality of the
system whose behaviour is implemented in the Kernel: the physical
causality (or, input-output causality) describes the physical cause-effect
relationships in the system that is controlled by the application, (Broenink
1999; Brown 2001; Karnopp, Margolis, and Rosenberg 1990; Paynter 1961).
A walking robot is one example with a clear difference between the physical
causalities and the plug-in execution causalities:
• When a leg of the robot is lifted and moving in “free space” the control
algorithm can control the position of the leg’s foot but not the force
acting on that foot.
• When the leg is in contact with the ground, the force can be controlled
but not the position.
So, the sequence in which the component activities are executed by the Control
Kernel can remain unchanged, while some components will have to be switched
at run-time to handle both cases. This is another clear example where the
process component can be used to model these discrete transitions and to
take proper actions (i.e. starting and stopping of data flow components) when
switching states.
3.3 Kernel Infrastructure
Non-trivial control applications need more infrastructural support than the
“bare” design pattern in Figure 3.1. This section lists the common services
which should be available in feedback control applications. This infrastructure
is complementary to the structure that the design pattern imposes on feedback
control applications and is to be interpreted as a deployment environment,
which provides facilities for interaction and configuration.
The various receptacles and facets that components may optionally require or provide can be related to services in this infrastructure. Facets are used for rather optional services, meaning that if a component implements a facet, for example the property configuration facet below, it can be configured through this facet by the Kernel infrastructure. Reporting of internal data is another optional service the component can deliver to the Kernel infrastructure. Receptacles, on the other hand, represent requirements on the Kernel infrastructure. For example, a component with the Naming Service receptacle requires a naming service to be present in order to function properly. The items below define possible component receptacles and facets,
as indicated in Figure 3.7.
Command Interpreter. Advanced users access the control system with (application-dependent) symbolic commands, which the Command Interpreter transforms into operations. The Operations Facet describes
which operations a component can execute. It must deal with two types
of operations:
• Synchronous methods which are executed immediately by the
interpreter and report the result back to the requester. We will
call this type of operations “methods” because they behave like a
traditional function or procedure. Methods can be queries such as:
“Is a component running?”, “What is the result of a calculation?”.
They invoke functions which can be safely called across contexts,
such as invoking an algorithm or inspecting a component’s status.
• Asynchronous commands which are dispatched (sent) to the
Execution Engine (see below) and, if accepted, executed directly
before or after the periodic control action is done. We will call this
type of operations “commands” as a command has the connotation
of being sent and being asynchronous. A command can be queried
for execution status (accepted, rejected, failed, done). If the
Kernel is not running, no commands are accepted (hence start()
must be a method). A controller command could, for example,
specify the desired position trajectories for all axes in a machine
tool; unacceptable positions may (should!) cause command failure; otherwise, when the positions are reached, the command is done. Another command example is to adjust the feedback gains to be used in the servo control algorithms, which requires multiple parameters to be adjusted "as-if" atomically.

Figure 3.7: The deployment environment of a component.

The design of the operations facet with commands and methods is detailed in Section 4.5.
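The difference between the two operation types can be sketched as follows: a method returns its result immediately in the caller's context, while a command only returns a handle whose status is later updated by the Execution Engine. The names and the dispatching mechanism below are assumptions for illustration, not the operations facet of Section 4.5.

// Illustrative sketch of the two operation types (not the actual operations facet).
enum class CommandStatus { Accepted, Rejected, Failed, Done };

struct CommandHandle {
  // Polled by the sender; updated by the Execution Engine as the command progresses.
  CommandStatus status = CommandStatus::Accepted;
};

class AxisComponent {
public:
  // Synchronous method: executed immediately and safe to call across contexts.
  bool isRunning() const { return running_; }

  // Asynchronous command: only dispatched here; the Execution Engine executes it
  // before or after the periodic control action and updates the handle's status.
  CommandHandle moveTo(double position) {
    CommandHandle handle;
    if (!running_)
      handle.status = CommandStatus::Rejected;   // no commands accepted when stopped
    // ... otherwise queue 'position' for the Execution Engine (omitted) ...
    (void)position;
    return handle;
  }

private:
  bool running_ = false;
};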
Execution Engine. Most applications require time coupling of activities (i.e. executing one activity strictly after the other) and a service to execute incoming requests at an appropriate time, when they cannot disturb ongoing activities. The Execution Engine executes and centralises
activities of a Control Kernel, i.e., it synchronises incoming commands
or the reaction to events with any program or state machine activity.
Hence, it is best to localise this time coupling explicitly in one
single place. The motivation is very similar to the function of the
process component, which centralises functionality coupling, such as the
coordinated homing of machine axes, in order to avoid the distribution
of knowledge.
Since centralised synchronisation is often asynchronous with respect to
the activities running inside (distributed) components, events provide
an appropriate synchronisation mechanism. The Execution Engine
consumes events in its state machines, which model the states of the
Control Kernel, and emits events when it detects noteworthy changes.
Chapter 5 is completely devoted to the design of the Execution Engine
and its programs and state machines.
Reporting. Users and developers of a control system like to see what is
going on in their system. Connectors can be queried directly for (part
of) their exported data. But, in general, components need to implement
the Reporting Facet in order to export internal data. The reporting
service then queries and exports the results of activity, i.e. internal status
information of components and contents of the connectors. Reporting
requires a decoupling of the real-time control activity and the non-real-time storing activity. This is accomplished by lock-free data sharing
(Section 2.2 and Section 4.4) and real-time decoupling of activities
(Subsection 4.2.2).
Configuration Service. The Control Kernel and its components can be
configured at run-time when they implement the Property Facet.
Properties are (name,value) pairs which can be converted to text format
for display or storage and back for changing or restoring property values.
For example the configuration service may store the parameters of a
PID Controller component to an XML file format and reload these
values when the Kernel is restarted or the Controller is reconfigured.
Configuration patterns for distributed component systems are discussed
in Section 4.7.
Time Service. Time is an important property of control systems. Whenever
multiple processors are used in a distributed control system, all
processors need software and/or hardware to synchronise their clocks.
This synchronisation is the responsibility of the Time Service (or "Heart Beat") component; most of its implementation will be hardware specific (e.g., (Koninckx, Van Brussel, Demeulenaere, Swevers, Meijerman, Peeters, and Van Eijk 2001)), but its API offers all time-related functionalities: access to the current system-wide "virtual" time; programming of timed events; time-based watchdogs; etc. Components requiring the time service have the Time Service Receptacle. This thesis only defined this receptacle and did not dig into the design of this service, except that the Orocos implementation returns the current local time.
Name Service. Name serving enhances the configuration flexibility of a
complex software system: components are registered by their name
with the Name Server such that one component can replace another one
under the same name, without their clients having to adapt to, or even
be aware of, the replacement. The Name Service is a part of the design
pattern for configuration (Section 4.7) and is common infrastructure in
distributed component systems.
Browsing. Inspecting on-line which components, connector data, state
machines, etc. are present in the Kernel is called “interface browsing”.
The component facets can be constructed in such a way that the components can be queried about which facets they support and which ports are present, or the Execution Engine can be queried about which state machines are present. Browsing allows the user or other components to gain knowledge about
the system at run-time and compose actions on the basis of the present
infrastructure. Browsing is thus not a component, but an infrastructural
property which allows on-line introspection and modification. The
browsable interface of a task (Section 4.1) is accomplished by the factory
pattern (Section 4.7).
3.4 A Feedback Control Kernel Application
To make the feedback Control Kernel infrastructure more concrete, a real-time, joint-level controller for two cooperating robots in Leuven's robotics lab is described.
3.4.1 Task Description
The aim is to perform minimally invasive surgery on a living person (Figure 3.8). The robot which holds the tool through a small hole in the chest must compensate a preprogrammed path for the initially unknown position and possible movements of the patient. The person's chest is tagged with LEDs which are observed by a 3D camera in order to estimate the body position and movements.
For testing the robot, the human is replaced by a dummy device. Body movement (for example, respiration) is simulated by a robot which holds the dummy device. The aim is to compensate for the body movements by superimposing them on the tool movements. Furthermore, the tool-tip
movements are constrained by the width of the insertion hole, since the tool
may not cause damage. We refer to (Rutgeerts 2005) for the details on the
required task frame conversions and focus here on the logical composition of
the tasks.
Figure 3.8: Minimally invasive surgery with a robot.
3.4.2 Hardware
The robot which performs the operation is a Kuka 361 industrial robot
(K361), which is controlled at the joint velocity level and provides absolute
encoder positions. The control hardware is a Pentium III 750MHz processor
with 256MB RAM running Linux with the Real-Time Application Interface
in user-space (RTAI/LXRT). The robot is interfaced with PCI cards for
digital IO (brakes, enable signals, end limit switches), analog output (joint
velocity setpoints) and an SSI encoder card (joint positions). The computer
is connected to an Ethernet network. The robot which holds the dummy
device is a Kuka 160 industrial robot (K160), which is controlled at the joint
velocity level and provides relative encoder positions, which requires position
calibration. This robot is controlled by a similar setup as the Kuka 361, both
are connected to the same network. All hardware is Commercial Off-The-Shelf (COTS) and is oversized for the tasks described below, which leaves room for non-real-time tasks such as data processing. A third networked computer
running a graphical user environment is used for interfacing the controllers.
A 3D measurement camera sends the 3D positions of mounted LEDs as UDP
packets over a dedicated Ethernet network.
Figure 3.9: Data flow of position and velocity control loops.
3.4.3 Position and Velocity Control
A control kernel for position and velocity control is set up for both robots.
The data flow is in Figure 3.9. The Controller components are a joint PID
Controller and a joint feed-forward Controller. The Generator components are
a joint velocity (q̇) or joint position (q) interpolation Generator. For Cartesian control, a path interpolation Generator reads a target end effector frame (f), and a conversion Generator converts end effector twists (t) to joint velocities. The Generators generate joint velocities or joint positions for the Setpoint
connector. The Sensor and Effector components are connecting the hardware
measurements to the kernel's data flow, providing measured joint positions (q̂) and measured joint velocities to the Input connector and desired joint velocities in the Output connector. The Generators can accept data flow Commands if a Controller is
added in cascade and if configured to do so. The K160 robot has an additional
body simulation Generator. Both control loops are executed at a frequency
of 500Hz.
The components need to be connected such that the data flow components have access to the kinematic algorithms. Figure 3.10 (a) shows
the Kinematics process component which is configured with the kinematic
algorithms for the robots. The Kinematics component provides kinematic
operations which are required by both the Frame Interpolation and Twist
Interpolator components. The Kinematics process component thus serves the data flow components.
Figure 3.10: Execution flow is executed by process components.
Figure 3.10 (b) shows the Control Loop process component. It has a state machine which defines which components are active and executed in each of its states. This component has the responsibility to ‘schedule’ the correct order of execution of the data flow components such that a meaningful data flow is obtained. The operations facets of the data flow components must contain the execution flow operations which the Control Loop requires. In practice, this is an update() method in which each data flow component reads its data ports, calculates a result and writes that to a data port.
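To make this concrete, the following minimal C++ sketch shows what such an update() could look like; the class and port names (JointPController, ReadPort, WritePort) are illustrative assumptions, not the framework's actual API.

#include <cstddef>
#include <vector>

// Hypothetical connector-backed ports, reduced to plain value holders for this sketch.
template <typename T> struct ReadPort  { T value; T read() const { return value; } };
template <typename T> struct WritePort { T value; void write(const T& v) { value = v; } };

class JointPController {
public:
    JointPController() : kp(10.0) {}
    // Called once per 2 ms cycle (500 Hz) by the Control Loop process component.
    void update() {
        std::vector<double> q_d = setpoint.read();   // desired joint positions (Setpoint connector)
        std::vector<double> q_m = input.read();      // measured joint positions (Input connector)
        std::vector<double> qdot(q_d.size(), 0.0);
        for (std::size_t i = 0; i < q_d.size(); ++i)
            qdot[i] = kp * (q_d[i] - q_m[i]);        // proportional action only, for brevity
        output.write(qdot);                          // desired joint velocities (Output connector)
    }
    ReadPort< std::vector<double> > setpoint, input;
    WritePort< std::vector<double> > output;
private:
    double kp;
};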
Both robots have the same components but differ in state machines: the
data flow, which is captured by the design pattern, is common and the
execution flow, which is separated by the design pattern, is far more dependent
on the hardware protocols. Thus the separation of data flow and execution
flow enables re-use of data flow components. For example, the K160 needs an
additional encoder position calibration. This can be accomplished with the
presented components by adding a Homing process component (not shown),
which provides a state machine that describes the process of homing.
3.4.4 Tool Control
A controller is added in cascade for the K361 robot in order to read and
interpret the LED positions, process the tool movement commands and
superimpose the body movements as in Figure 3.11. The data flow consists
Figure 3.11: Cascaded data flow for tool control.
of a Sensor component which reads the LED position packets through a real-time network stack and writes LED positions to the Input connector. An
Estimator component converts the Inputs to 3D body position and velocity
for the Models connector. A Generator replays and interpolates a pre-recorded
tool-tip movement path in the Setpoints. A Controller reads Setpoints and
Models data to produce a desired velocity and position in its Outputs. These
Outputs serve as Commands for the position Control Kernel: the Connector
connects the Tool Controller with the Cartesian path interpolator. These
components also connect to the kinematics process component for robot
position information. A state is added to the control loop process which
starts and executes these components.
3.4.5 Kernel Infrastructure
The contents of any connector can be reported for later review. Since
connector data access is lock-free, reporting can happen in a non real-time
thread without disturbing the control loop. For configuration, all data flow components have properties which are stored in files: for example, the control parameters, the kinematics to use, the length of the surgical tools, maximum joint velocities and positions, workspace boundaries, etcetera.
The execution engine schedules the execution flow activities in sequence
after the control loop activity, accepting new commands and executing any
asynchronous events, programs and state machine transitions. This allows
all functionality within the Execution Engine to have the same consistent
view on the components’ status and connector data. Also, actions taken by
the Execution Engine, such as switching both the Generator and Controller
from joint space to Cartesian space control, seem to happen atomically with
respect to the control application.
3.4.6 Kernel Interfacing
The Kernel and its components are interfaced by the Command Interpreter,
which converts text commands to command or method objects, which
are then sent or invoked by the caller. We wrote a Control Kernel
CORBA interface which contains no more than some query methods and an
executeCommand(string) function. The Command Interpreter parses the
string and invokes it directly or sends it to the Execution Engine. The query
methods allow interface browsing and return all valid commands, methods,
events and a list of arguments they take. In addition, the process components
extend this interface with process specific commands. The K160 robot will
have a home(axis) command for example.
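As an illustration of how small this interface can be, the sketch below renders the idea as a C++ abstract class; the actual interface is specified in CORBA IDL, and the query method names used here are assumptions made for the example.

#include <string>
#include <vector>

class ControlKernelInterface {
public:
    virtual ~ControlKernelInterface() {}
    // Parses the text command and either invokes it directly (methods)
    // or sends it to the Execution Engine (commands).
    virtual bool executeCommand(const std::string& command) = 0;
    // Query methods for interface browsing: all valid commands, methods,
    // events and the arguments they take.
    virtual std::vector<std::string> getCommands() const = 0;
    virtual std::vector<std::string> getMethods() const = 0;
    virtual std::vector<std::string> getEvents() const = 0;
    virtual std::vector<std::string> getArguments(const std::string& name) const = 0;
};

A process component such as that of the K160 would then accept, for example, executeCommand("home(3)").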
3.4.7 Execution Flow
Each Control Kernel has one or more process components with state machines
to control which component activities are started and to which events the
process components react. For brevity only the state machine of the Control
Kernel of the K160 is discussed. The initial state is entered when the
Control Kernel’s activity is started and it waits for the “StartController”
event, which may be caused by a StartController command or method. The
state machine checks if the robot axes are homed, and if not, enters the
homing state, which starts a child state machine for each axis which drives
an axis to its home switch, resets the position counters and then drives it
to a zero position. The parent state machine waits until all child state
machines have completed homing, or enters a safe error state when an error
occurred (for example, an axis is not moving). If all went well, it enters
a waiting state. When the “StartSimulation” event is emitted, it starts
the body simulation Generator component, which makes the dummy device
move. A “StopSimulation” command disables the Generator component
and enables a stand still Generator. When the Control Kernel receives a
movement command in the waiting state, it enters a moving state for as long
as the movement is busy. Numerous other states, transitions and events
can be added and the other Control Kernels have similar state machines.
The state machines are (un-)loaded at run-time as scripts, allowing maximal
reconfigurability.
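The following plain C++ sketch only illustrates the logic of the states and transitions described above; in the framework itself this behaviour is written as a run-time loadable state machine script (Chapter 5), and the event and type names used here are assumptions.

#include <string>

class K160Logic {
public:
    enum State { INITIAL, HOMING, WAITING, MOVING, SIMULATING, SAFE_ERROR };
    K160Logic() : state(INITIAL) {}
    void handle(const std::string& event) {
        switch (state) {
        case INITIAL:
            if (event == "StartController")                  // caused by a command or method
                state = axesHomed() ? WAITING : HOMING;
            break;
        case HOMING:                                         // child state machines home each axis
            if (event == "HomingDone")  state = WAITING;
            if (event == "HomingError") state = SAFE_ERROR;  // e.g. an axis is not moving
            break;
        case WAITING:
            if (event == "StartSimulation") state = SIMULATING; // body simulation Generator
            if (event == "Move")            state = MOVING;
            break;
        case SIMULATING:
            if (event == "StopSimulation")  state = WAITING;    // stand still Generator
            break;
        case MOVING:
            if (event == "MoveDone")        state = WAITING;
            break;
        case SAFE_ERROR:
            break;                                           // requires operator intervention
        }
    }
    State current() const { return state; }
private:
    bool axesHomed() const { return false; }                 // placeholder for the real check
    State state;
};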
3.5 Realising the Control Kernel
The realisation (implementation) of the Control Kernel and its components
is not restricted by the design pattern. Although the components are units of
reusable software, a certain implementation may choose only to distribute the
control application as a whole instead of the individual components. Along
the same line of reasoning, one can apply the same variation in granularity to
the definition of a “Control Task” within a control application. The Control
Kernel may be a monolithic control task which offers a “fat” interface to access
all its functionality and components, while it may also be a “thin” top level
control task, delegating its interface to sub-tasks, which are its components,
or even to an activity within a component. The rest of this dissertation
does not assume any granularity in either distribution or interface. We will
abstract Component or Kernel as Control Task and formulate any form of
communication as “peer” communication. This abstraction allows us to design
a machine controller as a set of tasks of which some may form feedback control
tasks.
We decompose the Control Kernel framework in a task infrastructure as follows: components can be seen as control tasks which implement the task
interface (see Section 4.1) in the feedback control application template. This
interface consists of Operations (see Section 4.5), Events (see Section 4.6),
Properties (see Section 4.7) and Ports (see Section 4.4). A control task may
execute multiple activities (see Section 4.2), which exchange data in a data
flow (see Section 4.4) through connectors. Activities may be plain programs
(see Section 5.1) or are executed as reactions to events using state machines
(see Section 5.2). The ordering and execution of these activities form the
control task’s execution flow (see Section 4.5).
3.6 Summary
This Chapter presented the design pattern for intelligent, real-time feedback control, which forms an application template for such control applications.
The pattern identifies the common component stereotypes found in a
controller and the relations between these components. Because the pattern
decouples data flow from execution flow, it allows the reuse of data flow
components while encapsulating application specific knowledge in process
components. The identification of the process component in control applications, as presented here, is the key to separating execution flow from data flow.
The process component acts as the location to contain coupling in order to
have a higher level of decoupling of data flow components. The components
are deployed in an environment which offers services to each component. The
deployment and lifetime of a component is phased and allows it temporarily
not to participate in the application’s data flow or execution flow.
Chapter 4
Analysis and Design of Communication in Control Frameworks
Software design is often seen as a logical, almost implementation specific step
after the analysis has been done. Analysis results in the creation of use cases,
sequence diagrams, state charts, control law diagrams and other artifacts such
that the essential properties of the system are captured. The implementation
(or realisation) of these artifacts is done in the design phase. The days of ad-hoc designs are history for most software development workshops since the
wider acceptance of design patterns. Design patterns are defined as
A generalised solution to commonly occurring problems in software
design.
Design patterns are not immediately transformable into working code. They
provide a method, a recipe, on how a certain set of objects with specific roles
and interactions between them can solve a commonly occurring problem in a
well documented way. The latter is held as fundamental in pattern literature:
in order to choose the correct pattern (“pattern hatching”), the different
patterns must be comparable and each pattern should document how it relates
to others. This requires that patterns are written down in a structured way
during “pattern mining”, recognising a recurring solution in existing designs.
Design patterns are however no silver bullet: they don’t solve all aspects of a
problem equally well. A design pattern will optimise one aspect (e.g. speed,
safety,. . . ) at the expense of another (e.g. ease of use, extendability,. . . ). This
is commonly referred to as “balancing” the forces that possibly pull the design
into opposite directions.
The design of a framework must match the requirements of its applications.
The software design must enable safe, deterministic, scalable and component
based real-time control applications. These are not orthogonal properties,
but we show how we enable them by choosing the right granularity of
decoupling. Technically, decoupling is realised by defining software interfaces.
Consequently these interfaces are the building blocks of the software
components defined in this work. Hence, the purpose of this Chapter is to
dissect and analyse “tasks” in control systems such that a software design
is found which fulfils the needs of control tasks. As the Chapter title implies,
the emphasis is on communication between control task components and thus
it defines the middleware upon which the application components execute.
This Chapter starts with defining how tasks, activities and actions relate
to each other in a control application. We then dissect this reference frame
in a piecewise manner. The first piece identifies the fundamental property
of real-time frameworks: a way to separate real-time and non real-time
activities. Any communication in a real-time control framework which violates
decoupling of activities will lead at best to decreased performance or reliability
and at worst to (eventual) system failure. The following sections present
designs for realising data flow communication, execution flow communication,
synchronisation between components and deployment of components.
4.1 Control Tasks
Some practical tasks in control systems have been introduced in the previous
Chapter: interpolating setpoints, tracking machine status, calculating
kinematics,. . . Subsection 4.1.1 summarises which properties a real-time
control task must have. Next, Subsection 4.1.2 proposes a task framework
which enables these properties as a decomposition into a task interface and an internal task activity. Each task executes activities in a given state and has
a generic “peer-to-peer” interface. Through its interface, it serves activities
of other tasks and vice versa, its own activities are clients to the interface of
other tasks.
4.1.1 Analysis
Chapter 3 set up the environment in which a highly configurable and
interactive control application executes.
The framework must enable
deterministic and safe applications and allow tasks to be added or (re)moved both
horizontally (at the same layer of abstraction) and vertically (inter-operating
with different layers of abstraction). The behaviour of tasks is preferably
specified using the expressive UML state charts formalism. Their behaviour
can be altered by (re)configuring their properties or changed entirely by
loading a new behaviour or program. For supporting feedback control
applications, the framework must accommodate inter-operating periodic and non periodic activities in its infrastructure and allow (small) quantities of
data to be communicated fast, reliably and deterministically.
Task Determinism, Integrity and Safety
The fundamental property of real-time systems is that task activities must
meet time deadlines in a deterministic way. Time determinism is measured
in two ways: with a worst case latency and worst case jitter. Both must
be bounded to reach determinism. Latency is the time an activity needs
(additionally) to complete; jitter is the variation, over time, between the lowest
and highest latency. The position controller, for example, needs low latency
to minimise delay and low jitter to eliminate unwanted excitations. A variety
of design decisions in a framework may impair or improve both latency and
jitter. The framework for activities and what they execute must minimise
the worst case latency and jitter, possibly at the expense of a higher average.
Lock-free communication (see Subsection 2.2.3) is a solution which suits these
objectives: the latency incurred by communication is bounded under RM and
EDF schedulers at the expense of a more complex data exchange model.
Time determinism has no use if the task or data integrity no longer
holds. This may be caused by failing communications, software errors,
misconfigurations or any operation that resulted in inconsistent data. The
framework should minimise these errors by providing safe infrastructure
which prevents or detects errors and halts error propagation throughout the
system. The disadvantages of traditional lock based data protection have
been discussed earlier in favour of the lock-free alternative.
Process safety is another critical part of the task. A task will want to
specify what action must be taken if task specific guarantees no longer hold
and communicate this to interested entities. These safety reactions must be
guaranteed the same time determinism as the task’s activity and, when occurring, must not disturb the time determinism of concurrent higher priority activities. A
well known example is when the controller detects a too large position tracking
error. It might take local actions as well as signal this event to a higher level
task, without disturbing the higher priority, high frequency current control
loop. This work uses the traditional approach of signalling safety conditions
with events, and uses state machines to react to these events. Events maximise
reuse since sender and receiver are decoupled and state machines are the
constructs designed to react to events.
Task Composition
Control tasks need to be composable both horizontally and vertically. State
of the art control frameworks use software components to realise tasks. It
must be possible to define application components which communicate in any
possible way, as the application architecture requires. Providing a peer based
communication infrastructure is the solution to this problem. Peer-to-peer
networks¹ are known for their robustness as the propagation of errors can be
limited to the local peer connections, while in client-server networks, a failing
central server is a fragile bottleneck and may cause all clients to fail. Peers
on the other hand act both as client and provide services to other peers. Peer
networks can be flat, layered or form any structure in between. If tasks are
to be connected in a peer-to-peer configuration, they need to share a common
interface which is general enough to be able to connect any two peers. We
can name this interface the “Peer Service” as it allows a task to query a peer
task for which services it offers, how it can be communicated with and with
which peers it is connected. The peer service thus allows browsing of both the peer network and the peer interface.
¹ Although a common use of peer-to-peer networks is to form arbitrarily large networks of randomly interconnected clients, we are talking about an engineered network with components acting both as client and as server of a common interface.
Application Decomposition
As the previous Chapter pointed out, the natural architecture for a feedback
control application can be captured by designing a network of components
which perform a well defined role within that architecture. The methodology
to decompose an application in a ‘natural’ task architecture is the subject of
software analysis. The first step is identifying the fundamental properties of
the application by means of use case analysis which identifies a system’s actors
and the businesses it provides. Each such business is equivalent to a high level
task. The decomposition of tasks in subtasks, and thus the modularisation of
the architecture, can be motivated by:
• Re-use Subtasks tend to be more suitable for reuse since they represent
a smaller part of the system. Reuse can occur in the same or in another
application. Avoiding repetitive effort motivates re-use.
• Distribution In order to distribute an application, it must be
decomposed in distributable tasks. Distribution in turn is required for
offloading responsibilities to remote nodes.
On the other hand collocation or integration of tasks, leading to a less modular
architecture can be motivated by:
• Ease-of-use Too many tasks make the architecture harder to understand due to fragmentation of responsibilities and harder to set up, due to the many connections that need to be made between
tasks.
• Bandwidth Required communication bandwidth between tasks forces
them to be located closer to each other, for example on the same node
or in the same process.
A fine grained modular application is thus easier to reuse and distribute at
the expense of setup effort and available bandwidth. Prior research (Koninckx
2003) showed that in control applications, bandwidth is the determining factor
for the granularity of the distributed components. The higher the available
bandwidth the finer the granularity can be.
Task Specification
Control applications are built in the framework by specifying each (sub)task
and specifying how tasks are connected. In software design, the interface
of the task is its specification, since it provides the method and contract
of how the task functions within the system. Thus specifying that a task
periodically consumes and produces data, that its configurable parameters
are proportional gain, integration and derivative constants and that it can
signal exceeded limits should require no more than writing a task with such
an interface in a modelling or programming language. However, to satisfy this
specification, the task also needs an implementation which may be hidden
from its interface. The behaviour may be reactive, active or both. A reactive
task is implemented by an object which executes a function or activity on the
occurrence of external events. An active task (periodically) executes activities
within its own context, possibly emitting events and calling upon the interface
of other tasks.
Inter-task communication is caused by activities which use the interface
of another task. The specification of inter-task communication is two-fold.
First, the means of communication must be specified and, second, what data is communicated from which task to which task. Chapter 3 showed that
in control applications two forms of communication occur: data flow and
execution flow. Both define a different means of communication, buffers
and commands, and both are set up differently. Data flow is specified by
connecting one task port to a compatible task port with connectors, while
the execution flow is specified with task state machines, reacting to events
and executing activities which are sending a command to or requesting data
from another task. Summarised, to specify a control task, it is necessary to
specify interface and behaviour. The specification of a data flow and execution
flow was called the “Process” of the application; it contains all application
specific logic and structure. Although the Process has a specific role within
the application, it is specified just like other tasks with a given interface and
behaviour. The Process is thus just another peer within the application.
4.1.2 Task Interface
The previous Chapter defined the interfaces (or facets) of the control kernel
component. All control tasks thus provide these interfaces to their peer tasks.
A task’s interface consists of Events, Operations, Properties, data flow Ports
and functions to access its peers (which present in turn the same interface
structure). Each part is discussed further on in separate sections.
• Data Flow Interface Tasks which produce or consume data may define
ports in their data flow interface (Section 4.4).
• Operation Interface The operation interface of a task provides two
distinct mechanisms. It provides asynchronous commands it will accept
from peer tasks such that it can reach a goal and synchronous methods
for the algorithms it can calculate. It is thus the functional interface of
the task: it defines what it can do (Section 4.5).
• Event Interface The event interface contains all events a task exports
and to which peers may subscribe (Section 4.6).
• Property Interface A task’s properties are a hierarchical browsable
structure of (name, value) pairs. They form a generic infrastructure for
configuring the task. When the properties are modified, they may have
immediate effect or be verified by the task before they are accepted
(Section 4.7).
• Task Browsing Interface The structure of the system is formed by which
tasks are connected to each other. Through the task browsing interface,
a task can be queried for which tasks it is connected to, and these can in
turn be visited (Section 4.8).
This interface hides the behaviours and activities (if any) running within
a task.
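A minimal C++ sketch of such a task interface is given below; the facet and class names are illustrative placeholders for the interfaces listed above, not the framework's actual class names.

#include <string>
#include <vector>

class DataFlowPorts;     // data flow interface (Section 4.4)
class OperationRegistry; // commands and methods (Section 4.5)
class EventRegistry;     // exported events (Section 4.6)
class PropertyBag;       // hierarchical (name, value) pairs (Section 4.7)

class ControlTask {
public:
    virtual ~ControlTask() {}
    virtual DataFlowPorts&     ports()      = 0;
    virtual OperationRegistry& operations() = 0;
    virtual EventRegistry&     events()     = 0;
    virtual PropertyBag&       properties() = 0;
    // Task browsing interface (Section 4.8): peers present the same structure.
    virtual std::vector<std::string> peerNames() const = 0;
    virtual ControlTask* getPeer(const std::string& name) = 0;
};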
4.1.3 Task Behaviour
Finite state machines are a common tool to define the behaviour of a software
component. Finite state machines are defined (as in (Douglass 2002)) as:
A finite state machine is an abstract machine that defines
a finite set of conditions of existence (called ”states”), a set of
behaviours or actions performed in each of those states, and a set
of events that cause changes in states according to a finite and
well-defined rule set.
It is important to note that “within each state a set of behaviours or actions
can be specified”. Since behaviours are modelled with finite state machines,
this sentence means a state can execute a state machine in turn. The leaves
of this nested tree structure are actions. Actions were defined as “a run to
completion functional statement”. The essence of a state machine is thus
that upon the reception of an event, one or more functions may be executed
and that these thus define the behaviour. Since an activity is defined as “the
execution of a sequence of actions which may be terminated before it is wholly
executed”, it is clear that a state machine realises activities and that any
activity² can be written as a finite state machine which executes actions. It
is not desirable to specify all activities as state machines since other forms,
such as procedural programs, are more convenient for the application builder.
Therefore, the term activity will be used in favour of state machine, although
most activities in control applications may be realised with state machines.
The details of realising state machines and procedural programs in the real-time control framework are left to Chapter 5.
The remainder of this Chapter assumes the existence of one or more
activities which execute actions in a real-time application. It assumes
² According to that specific definition.
activities run on behalf of tasks, that is, an activity is executing because
a task is in a certain state. They are executed in threads of execution of the
operating system. An activity executes one or more actions in response to
an event (time included) and may thus execute other activities in turn. In
real-time systems, activities will be assigned a quality of service which reflects
the time importance of the activity in the system.
4.2 Decoupling Real-Time and non Real-Time Activities
Activities execute actions. We name the ordered collection of these actions
“functions”. Figure 4.1 gives a schematic overview of where activities and
functions fit in realising behaviour of a task. Figure 4.2 classifies which
type of function an activity may execute. Not all functions can be executed
within a given bounded time because of the algorithm it implements or its
communication with unreliable entities. Hence, one can draw a line between
real-time and non real-time functions, where the former can complete within
a bounded time and the latter can not. Some algorithms can be written
such that their non real-time part can be separated from their real-time part.
Functions may block, which means one of their actions waits for an event to occur, or they may be non blocking, in which case no action needs to wait for external
events to complete. Not all function types can be executed in any activity
type. Periodic, real-time activities for example will only execute real-time,
non blocking functions while non periodic real-time activities can execute any
real-time function.
Figure 4.1: Realisation of task behaviour. A task (Controller) is in a state (Running)
which executes an activity (Thread) which calls a function (Calculate) which
executes ordered actions (Read; Calculate; Write). Events cause state transitions.
The result is a system of real-time and non real-time activities executing
Figure 4.2: Activities execute Functions.
functionality. This section describes how we decouple real-time and non real-time functionality and activities, and why it is a fundamental property of
real-time frameworks to provide time determinism.
4.2.1 Analysis
In (Pohlack, Aigner, and Härtig 2004), decoupling real-time from non real-time
activities is done by tunneling all communication through a buffer component
which connects a real-time environment with a non real-time environment.
In this section, it is motivated that decoupling real-time activities from non
real-time activities does not require such drastic containment strategies but
only requires the following properties to be present in the system:
• A method to assign a quality of service to each activity.
• A method of communication between activities without disturbing the
quality of service.
• A method to delegate low quality of service operations to low quality of
service activities.
These items ensure that a high quality of service activity is not disturbed by lower quality of service activities. Hence, to the former, the latter run transparently on the same system. Designing a framework that provides all above-mentioned items is not simple, if at all possible, not least because
most available libraries (including those provided with the operating systems)
do not satisfy one or more of these requirements.
Assigning Quality of Service
“Real-time” is not always a binary property of a software system: different
tasks can be satisfied with different Quality of Service (QoS) performance;
for example, one task wants the best approximate answer from your object
implementation within a given deadline, while another task is willing to wait
indefinitely longer for a result with specified accuracy. Thread priorities
and/or deadlines allow the quality of service of local activities to be specified. A scheduler algorithm then decides, based on these properties, which activity to run next. These properties thus form the basis of the real-time behaviour of an
activity. The higher its priority or the earlier its deadline, the more likely it
will be executed by the scheduler.
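For example, on a POSIX system a periodic activity's thread could be given a fixed priority as sketched below; this is only one possible realisation, and the thesis' target platform (RTAI/LXRT) offers its own equivalents.

#include <pthread.h>
#include <sched.h>

// Assign a quality of service to an activity's thread by giving it a fixed
// priority under the SCHED_FIFO real-time scheduling policy.
bool set_realtime_priority(pthread_t thread, int priority) {
    sched_param param;
    param.sched_priority = priority;   // higher value = served earlier by the scheduler
    return pthread_setschedparam(thread, SCHED_FIFO, &param) == 0;
}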
Deterministic Data Access
For a long time, the focus in real-time systems design was on optimising scheduler
algorithms for low latency, low jitter and scalable thread scheduling. The
effects of data access contention, i.e. accessing common data frequently from
multiple threads was known to be not scalable when the common locking
techniques were applied. Even more, the locks could lead to priority inversions
and to dead-locks, which were even harder to solve using an advanced
scheduler. Only the last decade, the lock-free algorithms were invented to
cope with contention and solve the priority inversion and dead-lock all at
once. An additional advantage surfaced that lock-free algorithms performed
very well under well studied schedulers, the Rate Monotonic scheduler and
Earliest Deadline First scheduler.
Not all data can be accessed lock-free however. Bus arbitration on
hardware busses uses a locking mechanism to serialise concurrent access. This
requires a priority inheritance protocol (Lampson and Redell 1980; Sha,
Rajkumar, and Lehoczky 1990) to solve priority inversions, as was experienced
in the NASA Mars Pathfinder mission (Reeves 1997). The same goes for the
access to network servers, where the caller blocks until the data returns over
the network. A real-time network protocol, which inherits the quality of
service of the activity, is then required to bound data access time. The RT-CORBA standard (Schmidt and Kuhns 2000) has specified connections with
quality of service properties.
Memory Allocation
The framework must provide infrastructure in which the creation of all
internal data structures is decoupled from the use of these data structures,
because dynamic (“heap”) memory allocation is in many operating systems
not real-time-safe and, in the worst case, may fail. There are two solutions
to decouple memory allocation from its use. The first one is to use a
separate pool for real-time (de)allocation and find out the maximum required
dynamic memory for a given application. The second approach is not to
allocate memory at all during real-time activities and defer dynamic memory
allocation to a start-up phase or to a non real-time activity. Neither of these
approaches prevents garbage collection³ strategies, although real-time garbage
collection (Magnusson and Henriksson 1995) is not present on many systems
and lock-free garbage collection was previously restricted to practically non-existent double word compare and swap (DCAS) architectures (Herlihy
and Moss 1991). Gidenstam, Papatriantafilou, and Tsigas (2005) found
a lock-free single word CAS garbage collection algorithm. Noteworthy
is also (Michael 2004) which introduced the ‘Hazard Pointer’, a memory
management methodology which keeps track of per thread memory ownership
and can, by manipulating such a list, reclaim memory for the operating
system when it is no longer used. The implementation of the shared data
objects in this Chapter all use one or another kind of real-time memory
management by pre-allocating memory on a per-thread basis or by clever
use of memory resources. This work contributes no new insights in real-time memory management; the algorithms are therefore not further detailed.
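As a small illustration of the second approach, the sketch below (with hypothetical names) reserves all dynamic memory in a non real-time initialisation step, so that the real-time step never allocates:

#include <cstddef>
#include <vector>

class RecordedPath {
public:
    // Executed by the start caller in a non real-time context: all heap
    // allocation happens here, once.
    void initialise(std::size_t max_points) {
        samples.reserve(max_points);
    }
    // Executed by the real-time activity: uses only the reserved capacity,
    // so no allocation can occur as long as max_points is not exceeded.
    void step(double sample) {
        if (samples.size() < samples.capacity())
            samples.push_back(sample);
    }
private:
    std::vector<double> samples;
};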
4.2.2 The Activity Design Pattern
This section describes the Activity Design Pattern which is both a structural
and behavioural design pattern. A structural pattern proposes relations and
hierarchies between objects; it assigns responsibilities in order to obtain a
more reusable, extendable and understandable design. A behavioural pattern
provides a means to realise a desired behaviour within an object, mostly by
delegating work to encapsulated objects.
Intent
Whatever a task does (actively and reactively), it is executed in an activity.
An activity serves to execute a given functionality with given real-time
properties. In turn, the functionality must be structured in order not to
obstruct these real-time properties. This pattern thus enforces structure on
functionality and behaviour on the activities executing them, in order to
obtain a system of parallel, decoupled activities.
Motivation
This pattern is motivated by the observation that the growth of real-time
systems is often bounded by the degree of coupling between activities. Deadlocks and
³ Automatic freeing of no longer used objects in memory.
priority inversions typically emerge from strong activity coupling. But also
increases in jitter and latency are artifacts of activities obstructing each other
in spite of quality of service properties. The ideal realisation of this pattern
would allow a real-time performance of activities which depends only on their quality of service. That is, an activity with a given quality of service can
only be obstructed by an activity with a higher quality of service. Lower
quality of service activities run transparently on the system. In practice, this
is almost impossible to accomplish on most hardware, since cache flushes and memory
bus locking, which may be the result of a low quality of service activity, may
obstruct high quality of service activities nevertheless.
Applicability
This pattern is applicable to both real-time and non real-time applications
which require multiple activities to execute in parallel. However, it is
optimised towards real-time systems as it requires functionality to be
structured into real-time and non real-time parts.
Participants
There are four participants in this pattern: Functionality, Activity,
Trigger and ActivityManager.
Functionality is the function or algorithm to be executed, i.e. an action.
A function needs data initialisation and resource allocation prior to
execution and resource de-allocation afterwards if the algorithm needs
to complete in bounded time.
In control applications, the function may be executed periodically or in
response to an event. In these cases it is assumed that the function does
not block, that is, it may communicate with other functions but not in
such a way that it may wait indefinitely for a response or stimulus. When
the function requires blocking communication, it must be reformulated
in a non blocking variant if it is to be executed in a periodic activity.
Activity is a thread of execution that manifests itself as an activity which
executes a functionality. Activities are assigned a quality of service
property by means of a priority at which, or a time deadline before
which, the functionality is executed. An activity may execute the
functionality periodically, in reaction to an event, or allow the function
to take over the execution timing, thus allowing it to block.
Activities require operating system support for their implementation
such as timer alarms for periodic execution and semaphores for reactive
execution. In contrast, a function requires just programming language
support.
Trigger is the abstraction of the cause of the execution of the functionality.
It may be an event, a timer alarm or any object related to the activity
which decides that a step must be executed. Most implementations will
hide or abstract the Trigger in order to avoid coupling.
ActivityManager is the role of the object(s) that decides when the activity
is started and stopped. This may in fact be any object in the system.
Structure
The structure of the activity pattern can be observed from the UML diagram
in Figure 4.3.
Figure 4.3: Activity Pattern UML Diagram.
Collaboration
The activity takes the initiative on when to execute the functionality and
under which terms. In reactive and periodic control systems, the step
function is most frequently used because it allows full decoupling of the what
and the when. The functionality in turn relies on the activity to obtain
information about its execution period and quality of service.
Both lifetime semantics and execution semantics are shown in Figure 4.4.
Lifetime Semantics The activity’s lifetime has the following properties
(Figure 4.5):
1. Upon creation, it is inactive.
Figure 4.4: Activity Sequence UML Diagram.
2. It can be started, becoming active, upon which it may execute the initialisation function of the functionality on behalf of the caller, after which, if completed successfully, it is waiting and ready to run.
3. It may execute the functionality upon each trigger and then becomes
running.
4. The execution of the functionality can be caused by any means and
happen any number of times between start and stop.
5. It can be stopped, upon which the functionality must return to the
waiting state and after which a finalisation is executed on behalf of the
stop caller. When this is done, the activity becomes again inactive.
Items two and five allow decoupling of the initialisation and cleanup of an
activity from the activity’s functionality. This means that, prior to being run or after being stopped, the activity can defer setup and cleanup routines to the start and stop caller. This is a necessary property for control frameworks, as it
allows any real-time activity to defer non-deterministic work to the caller.
Items three and four allow a conditional execution of the activity function by
any cause (or ‘trigger’). For a given type of activity, this cause is most likely
to be fixed. For example, a periodic activity will execute its function upon
a timer alarm or an event driven activity upon each occurrence of an event.
The implementation of start and stop is thus dependent on the activity’s
type.
Figure 4.5: Activity Statechart UML diagram.
Note that these states are analogous to the states of component
deployment in Section 4.7. This pattern thus occurs at different levels, from
the execution of functionality to the deployment of large components. This
pattern will recur as well in Chapter 5 for building state machines.
Trigger Semantics To allow the decoupling of the activity’s function and
its trigger, we have distinguished between two kinds of activity functions: blocking, which is implemented in loop (Figure 4.3), and non blocking (or ‘polling’), which is implemented in step. The trigger decides which of
both forms it invokes. For example, a periodic trigger will invoke the non
blocking step method while a non periodic trigger will invoke the blocking
loop method. For the blocking case, the functionality needs to implement a
breakLoop function which forces the loop function to return (thus bringing the
activity from Running to Waiting) when the activity is stopped.
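The two participant interfaces can thus be sketched as follows; the class and method names mirror the pattern roles (not necessarily the framework's actual classes), with step for the non blocking and loop/breakLoop for the blocking form:

class Functionality {
public:
    virtual ~Functionality() {}
    virtual bool initialise() = 0;            // run on start(), on behalf of the caller
    virtual void step() = 0;                  // non blocking; invoked by periodic triggers
    virtual void loop() { step(); }           // blocking form; default is a single step
    virtual bool breakLoop() { return true; } // force loop() to return when stopped
    virtual void finalise() = 0;              // run on stop(), on behalf of the caller
};

class Activity {
public:
    virtual ~Activity() {}
    virtual bool start() = 0;                 // inactive -> waiting (executes initialise())
    virtual bool stop()  = 0;                 // waiting  -> inactive (executes finalise())
    virtual bool isActive() const = 0;
    virtual bool isPeriodic() const = 0;
};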
Consequences
Using the Activity Pattern has the following consequences:
• It is the responsibility of the application designer to connect a real-time activity only to real-time functionality. This could be enforced
at compile time by defining for example a RealTimeFunctionality
and OnlineFunctionality where both may be executed by an
OnlineActivity and the former only by a RealTimeActivity.
• The same argument holds for a PeriodicFunctionality and
NonPeriodicFunctionality, which must match a PeriodicActivity
and a NonPeriodicActivity.
• If initialise and finalise were to contain non real-time algorithms,
such a Functionality could never be started or stopped from a
real-time Activity. RealTimeInitialiser and RealTimeFinaliser
interfaces could be added to indicate the nature of the initialisation and
finalisation functions.
Solving these consequences by design leads to the class diagram in Figure 4.6.
As can be seen from the complexity of this Figure, such an extended class
hierarchy is overkill for most applications. Implementing the properties of
activities and functionality as attributes delays the checks to run time but
reduces the complexity of the class hierarchy.
The semantics of a ‘blocking’ functionality is not to be confused with a
‘locking’ functionality. That is, even in blocking functions, all local data must
be shared with lock-free shared objects and distributed invocations must be
done with the same quality of service as the activity.
Implementation
step requires a careful implementation in order not to disturb the quality
of service. The pattern places all the responsibility for upholding quality of
service on this function (and the functions this function calls in turn). In
order to accomplish this, the step function must be implemented according
to the requirements of Section 4.2; hence, it must:
• use lock-free data access containers for sharing local data.
• not rely on standard memory allocators in real-time activities and not
invoke non deterministic functions.
initialise and finalise should contain all non real-time initialisation and
cleanup. Consequently, the activity of such functionality can not be started or stopped from a real-time activity. To avoid this limitation, non real-time initialisation and cleanup can be done in the constructor and destructor respectively of the functionality.⁴
The implementation of a ConcreteActivity is dependent on the interfaces
of the real-time operating system.
In practice, the creation of a
ConcreteActivity object will lead to the creation of a periodic or non
periodic thread, which will execute the functionality. A semaphore can serve
as a synchronisation point in the start and stop methods.
⁴ This is the case for the programs running in the state machines of Chapter 5, since they are started and stopped by a real-time activity, i.e. the Execution Engine.
Figure 4.6: Extended Activity UML class diagram.
Known Uses
A distant, non real-time variant is the Sun Java API, which has a minimal activity interface Runnable with the single function run. This interface
is executed by the Java Thread class.
Related Design Patterns
Since the functionality may execute one or more activities in turn (thus acting
as a Trigger), it is related to the Composite pattern. Furthermore it is related
to the Command Pattern since the Functionality is commanded by the
Activity.
4.2.3 Validation
Some experiments will validate the design and implementation of the Activity
pattern. A number of activities will execute concurrently under a Rate Monotonic scheduler, causing load on the central processing unit (CPU). Thus
the activity with the highest execution frequency has the highest priority and
vice versa. We will measure the start and stop times of each activity and
the duration of preemption under a Rate Monotonic scheduler. The details
of the system and test parameters are documented in Appendix A. In this
section, the activities do not communicate, thus the measurement reveals the
minimal latency of executing activities on our test system. The results of this
measurement will be the reference for all subsequent experiments which will
add communication, and thus additional jitter and latency, to the activities.
The effect of faster or slower hardware is cancelled out in these experiments
because time units are used to express load.
Application Categories
To validate how activities perform in different applications, a number of
application behaviours are emulated. Each application category has one
non real-time activity, emulating non real-time application communication
interference, in addition to random non real-time load on the machine⁵. The
communication topology is left to the following experiments. The real-time
activities in each category are (see Figure 4.7):
• Minimal An application with one real-time periodic activity running at
1kHz (every 1ms) with a load of 0.5ms per cycle. It emulates a minimal
real-time control loop, which only communicates with one non real-time
activity. It serves well to emulate the latencies of a distributed control
activity which is accessed over a non real-time network. The worst case
⁵ Note the difference between communication interference and execution interference.
latency of the real-time activity is only (largely) dependent on the worst
case communication latency, since it can not be preempted.
• Small Same as Minimal with one additional lower priority real-time
periodic activity running at 200Hz (every 5ms) with a load of 0.5ms per
cycle. It emulates a cascaded controller with a fast and slow control
loop, where the slow control loop communicates with one non real-time
activity and the fast control loop only with the slow control loop.
• Concurrent An application with three real-time periodic activities
running at 2kHz (every 0.5ms), 1kHz (every 1ms) and 500Hz (every
2ms), causing a load of respectively 0.1ms, 0.2ms, 0.3ms per cycle. It
emulates a highly concurrent communicating controller. Each activity
(non real-time included) communicates with two other activities, hence
eight communication channels are used to exchange data.
• Large An application with four real-time periodic activities running at
1kHz (every 1ms), 200Hz (every 5ms), 100Hz (every 10ms), 20Hz (every
50ms). The loads are respectively 0.05ms, 1ms, 2ms, 5ms per cycle.
It emulates a large controller cascading multiple loops. Each activity
sends buffered data to a higher priority activity and the highest priority
activity sets some data available for reading by all the other activities.
The theoretical load an application with n activities may cause on a processor
is constrained by (Liu and Layland 1973)
$$\sum_{i=1}^{n} \frac{C_i}{T_i} < 1,$$
where $C_i$ is the time required for execution of an activity with period $T_i$. For
a specific scheduler algorithm, the maximum load may be lower. For a rate
monotonic scheduler (RMS), the maximum load is constrained by
$$\sum_{i=1}^{n} \frac{C_i}{T_i} \leq n\left(2^{1/n} - 1\right).$$
Table 4.1 shows the theoretical maximum load and application load imposed
by the threads. Spare load is reserved for inter-activity communication in the
following sections.
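As a worked check of the loads listed in Table 4.1, the Small application, with the periods and execution times given above, loads the processor with

$$\frac{C_1}{T_1} + \frac{C_2}{T_2} = \frac{0.5\,\mathrm{ms}}{1\,\mathrm{ms}} + \frac{0.5\,\mathrm{ms}}{5\,\mathrm{ms}} = 0.6 \;\leq\; 2\left(2^{1/2} - 1\right) \approx 0.828,$$

which stays below the RMS bound for two real-time activities.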
Rationale
The aim of these validation experiments is to find the worst case latency (time
required to complete one activity step) for each activity in each application
Figure 4.7: Four types of applications. Legend: period/load
Application    Max Load under RMS    Application Load
Minimal        1                     0.5
Small          0.828                 0.6
Concurrent     0.779                 0.65
Large          0.743                 0.55

Table 4.1: Theoretical maximum under RMS and actual load of each application type.
type. The application types were inspired by existing application topologies.
In order to provoke the worst case latency in a short time interval, the periods
of each activity were adapted to occur once less per second than specified
above, simulating processor clock drift. This results in uneven execution
frequencies, causing a large drift between the execution periods of any two
activities and frequent ‘collisions’ between activity execution periods. These
collisions are the sources for contention and occur frequently in distributed
systems. That is, in present day controllers, clocks are not synchronised to
the microsecond level in order to avoid drift, although earlier work (Koninckx
2003) has proposed solutions to avoid drift in distributed controllers.
Although our validation experiments do not use distributed activities, they
serve well to measure the effects of distributing controllers on communication
latencies. Continuing with the reasoning of Section 2.2, which posed that
deterministic local communication is required to allow distributing controllers
(shown in Figure 2.2), the highest priority activity in each application may
serve to emulate the effects on local activities of an incoming message. For
example, in the minimal application, when the real-time activity is distributed
from the non real-time activity, the non real-time activity will no longer be
preempted during its calculation, but may still influence the determinism
of the real-time activity during communication, because of contention. The same reasoning applies to the small application, where the highest
priority activity communicates with a lower priority real-time activity which
may emulate the communication over a real-time network.
Measurement Results
This section only measures the latencies of the activities without any form of
communication between the activities. We thus measure the latencies with the best case communication overhead, which is zero. Figure 4.8 shows how latency
and period are measured. The deviation between the release time and the actual
execution time is not measured.
Figure 4.8: Measurement of actual period and latency.
Periodicities To validate the real-timeness of the activity pattern on
our target platform, the thread execution periods are measured for each
application type. Figure 4.9 shows the results for the minimal, small,
concurrent and large application types, respectively. As can be observed,
as more lower priority threads are added, the periodicity of the high priority thread remains stable and the lower priority threads suffer from increasing jitter,
caused by preemption.
Latencies The latency measurements are shown in Figure 4.10. Similarly
to the measured execution periods, preemption causes an increase of the
jitter in lower priority threads. The small application measurement shows
this clearly with two peaks for the low priority thread, one at 1ms and
one at 1.5ms. Apart from the preemptive jitter, the execution latencies are
highly deterministic because of the absence of communication. Note that if
the applications with more than one activity were distributed, the latencies
would be as deterministic as in the minimal application in the case that the
communication overhead is nil. In practice however, the additional distributed
communication latency can only be accepted for distributing computationally
intensive algorithms.
Conclusions The presented activity pattern performs as expected and
allows real-time execution on a real-time operating system. The only
significant jitter that activities experience is caused by preemption by higher
priority activities. The next Section will measure the influence on the latency
Figure 4.9: Measured execution periods, for (from top to bottom) the minimal,
small, concurrent and large application types.
Figure 4.10: Measured latencies, for (from top to bottom) the minimal, small,
concurrent and large application types.
of communication between activities.
4.2.4 Summary
Activities (threads) execute functionality (functions). Both are strongly
decoupled: the activity only enforces the priority and periodicity upon the
functionality. Later on in this text, the term activity will be used to refer to both activity and functionality, since an activity always executes a function. This conforms to the semantics of the UML terminology as given in the
Glossary.
Decoupling real-time activities requires a system that:
• can assign quality of service to activities and network connections.
• provides lock-free data access containers for sharing local data.
• provides priority inheritance for concurrent, lock based hardware access.
• does not rely on standard memory allocators in real-time activities.
Fortunately, these requirements are provided by all modern real-time operating systems, as was concluded in Section 2.2. This makes the
implementation of the activity design pattern plausible on most real-time
systems. An implementation of periodic activities was validated for a range
of application types.
4.3 Decoupling Communication of Activities
In a control application, activities are not closed boxes. Their functions will
communicate with other tasks, requesting operations and a means to collect
the results. Or, in a simplified abstraction, an activity has inputs and outputs.
Reducing an activity to a bare input-output automaton might benefit formal
verification but might violate the semantics of the application domain. In
software design, it is preferred to attach a meaning that is natural to the application domain, in this case control, to communication between activities.
When control tasks communicate, we can distinguish two means of
communication: (i) data (or continuous control) communication, which we
call “Data Flow” and (ii) command (or discrete control) communication,
which we call “Execution Flow”. Although both forms have as a result that
data is moved from one task to another, it is the intent that distinguishes
these flows. Data Flow causes no direct side effects. A consumer task may
take action based on the contents of the data, but the producer of the data
did not intend to change the behaviour of the consumer. Execution Flow, on
the other hand, intends to cause effects, i.e. a sending task intends to change
behaviour in the receiving task. Every communication between tasks can be
partitioned in one of these two types of flow which have different semantics
and are thus realised differently.
The distinction between data flow and execution flow activities can not
be seen in the activity pattern and is thus a semantic layer which we draw
above the collection of activities in a system. Both are subject to the real-time constraints of the previous Section, i.e. the functionality must be real-time in a real-time activity. The next Sections go into detail for each type
of activity. Data flow communication is the packet based communication of
typed data between tasks and is realised by the Connector design pattern.
The execution flow communication in control applications is done by means of
messages between tasks, realised by the Synchronous-Asynchronous Message
design pattern. Finally, data flow activities can be coupled to execution flow
activities by means of events, which signal (noteworthy) changes in the data
flow to state machines in the execution flow.
4.4 Data Flow Communication
Data flow has been discussed elaborately in Chapter 3. This section
summarises the requirements analysis for data flow communication and its
realisation in the Connector pattern.
4.4.1 Analysis
Data flow is formed by the (buffered) transfer of shared data between tasks.
Once the data flow is set up between tasks, activities within each task can
choose to produce or consume measurements, interpolation results, etc.
According to Section 4.2, data read and write operations must be lock-free. Furthermore, according to Subsection 3.2.2, data flow connections must
form the natural boundaries for distribution. Summarised, the data flow must
have the following properties:
• Reading and writing is synchronous by nature and is lock-free.
• All data exchanged is value typed, for example, it represents an end
effector frame, estimated temperature or joint positions.
• The value of the data does not reflect the intent of the producer to
change the behaviour of the consumer. Hence no ‘flags’ or enumerations
are communicated in the data flow.
• The exchange is largely encapsulated in an intermediate object which
mediates the data flow between both tasks. This eliminates the
requirement that the task needs to know in advance how communication
is done with distributed or collocated tasks.
These properties largely motivate why connectors are suitable to compose the
data flow between components. Assume the data flow activities of a control
application require two mechanisms to exchange data lock-free. Unbuffered
data exchange is done through a shared object interface which defines Set and
Get methods of shared data of any type. The shared object always returns
the last Set value in Get. Buffered data exchange has Push and Pop methods
which allows to queue data of given type. A container object is required
which implements these methods with lock-free semantics and both activities
are required to agree over the protocol used such that it is impossible that one
activity Sets data while the other Pops data. This container is the connector
and the activity’s task can only be connected to this connector if it offers the
correct ports which are compatible with the protocol. Other data exchange
protocols will require other ports. As originally pointed out in (Bálek and
Plášil 2001), the connector is not a part of the component itself, and thus the
implementation of the connector may depend on the environment in which the
component is deployed. For example, if the component is deployed in a control
application without distribution, the connector can be implemented with a
low overhead lock-free algorithm. When the same component is deployed
in a distributed control application, the connectors will distribute the data
transparently over the network.
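To make the two exchange protocols above concrete, the following minimal C++ sketch declares an unbuffered data object (Set/Get) and a bounded buffer (Push/Pop). Only the Set, Get, Push and Pop names are taken from the text; the interface names and signatures are assumptions for illustration and do not claim to match the classes of this work.

#include <cstddef>

// Hypothetical sketch of the two lock-free exchange protocols discussed above.
template <class T>
class DataObjectInterface {
public:
    virtual ~DataObjectInterface() = default;
    // Overwrite the shared value; Get() always returns the last Set() value.
    virtual void Set(const T& value) = 0;
    virtual void Get(T& value) const = 0;
};

template <class T>
class BufferInterface {
public:
    virtual ~BufferInterface() = default;
    // Queue a value; returns false if the bounded buffer is full.
    virtual bool Push(const T& value) = 0;
    // Dequeue the oldest value; returns false if the buffer is empty.
    virtual bool Pop(T& value) = 0;
    virtual std::size_t capacity() const = 0;
};

// A connector would implement one of these interfaces with a lock-free
// algorithm for collocated tasks, or with a network proxy for distributed
// deployment, while the component only sees the ports matching the protocol.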
4.4.2 The Connector Design Pattern
Realisations of data flow in control systems are certainly not new6 and most
proposals are similar to the principle we apply in this work. Since it is a
relatively old pattern, it is only described briefly for completeness in this
section. Because the Connector is often applied in distributed systems, the term
component is used here in favour of the control oriented term task.
Intent
This pattern intends to decouple data flow between components by
constructing the data flow between components with Connector objects.
A Connector abstracts the locality of the components and enforces a
communication protocol.
Connectors connect to compatible Ports of
components which must exchange the same kind of data. Connectors
represent an unambiguous view (image) of the data they contain, and they
provide, if necessary, concurrent protection to the data. The communication
protocol typically fixes a data flow direction, a buffering depth and the
notification of data updates.
Motivation
Its design is motivated by the analysis that a mediator is required between
components which exchange data. When distributing data flow, a proxy of
remote data is required, which can be done by the Connector. It allows
distribution specific constructs to be encapsulated in the Connector instead
of in each data flow component. It is also motivated by the need to have
a single point of access to acquire the current value of data in a given part
of the data flow. In some data flow architectures the data flow needs to be
constrained (in value or in bandwidth); in that case a Connector can serve as the
object responsible for enforcing these constraints.
Applicability
The Connector pattern is applicable whenever data must be exchanged
between components.
6 The term pattern is thus for sure applicable in this case.
Participants
The Pattern has the following participants:
DataFlowComponent A component which executes data flow activity by
reading data from its Ports and writing data to Ports.
Port Part of the interface of a DataFlowComponent which conforms to a
data exchange protocol.
Connector Implements a data exchange protocol and provides a single
consistent image of the exchanged data through its DataObject.
DataObject Represents the type of data exchanged through the Connector
and gives access to the data’s value at a certain time.
Structure
The structure of the Connector pattern is shown in Figure 4.11. Although
multiple data types can be exchanged through one connector, a connector is
exclusively used by two components.
Consequences
• A full-fledged implementation of Connector may be overkill for small
applications. Especially in small control systems, the data object only
needs to be shared between two components to achieve data exchange.
• The Connector does not allow “in place” calculations of data on minimal
implementation systems, since the data is accessed by methods which
copy by value.
• The interface of Port and Connector is largely left to the application.
However, the majority of the control applications will only require FIFO
(e.g., to communicate interpolation setpoints), and data image (e.g., to
communicate the position of a robot) implementations.
Implementation
Since data objects and buffers are accessed in the step method of a
functionality in the Activity pattern, their implementation must obey those
consequences as well. A local buffer or data object must be implemented
with a lock-free primitive while a distributed implementation must obey the
quality of service of the component’s activity.
This work implemented a lock-free data object (DataConnector) and a lock-free buffer (BufferConnector). Both implementations require in advance
Figure 4.11: Connector UML class diagram.
the number of threads that will access these data containers, such that
memory can be allocated per thread in advance. Since each thread accesses
in the worst case the original data and the new data, the amount of memory
required is twice the number of threads that will access the data container.
The memory requirements thus scale linearly with the number of accessing
threads.
Because of the properties of lock-free shared data in real-time systems
as discussed in Subsection 2.2.3, the access time of the buffer and shared
data is constant for the highest priority thread. The trade-off made is
thus memory versus worst case latency. Compare this with a mutex based
implementation where the amount of memory is constant but the worst case
latency is unbounded in case of priority inversions. Even with the presence of
a priority inheritance protocol, the worst case latency is still determined by
the number of lower priority threads potentially accessing the critical section.
The lock-free implementation is validated and compared with a mutex
based approach in Subsection 4.4.4.
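As an illustration of the memory-for-latency trade-off, the sketch below shows a classic bounded, lock-free FIFO for exactly one writing and one reading thread (a connector couples exactly two components). It is a simplification: it is not the multi-reader, per-thread-copy algorithm of this work, and all names are illustrative only.

#include <atomic>
#include <cstddef>
#include <vector>

// Single-producer/single-consumer lock-free ring buffer: memory is reserved
// up front, Push and Pop never block and never retry.
template <class T>
class SpscBufferConnector {
public:
    explicit SpscBufferConnector(std::size_t capacity)
        : storage_(capacity + 1), head_(0), tail_(0) {}

    bool Push(const T& value) {                 // called by the producer thread only
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % storage_.size();
        if (next == tail_.load(std::memory_order_acquire))
            return false;                       // full: the producer never blocks
        storage_[head] = value;
        head_.store(next, std::memory_order_release);
        return true;
    }

    bool Pop(T& value) {                        // called by the consumer thread only
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                       // empty: the consumer never blocks
        value = storage_[tail];
        tail_.store((tail + 1) % storage_.size(), std::memory_order_release);
        return true;
    }

private:
    std::vector<T> storage_;
    std::atomic<std::size_t> head_;  // next write index, owned by the producer
    std::atomic<std::size_t> tail_;  // next read index, owned by the consumer
};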
Known Uses
ROOM (see Subsection 2.1.2) and the ControlShell (see Section 2.3) use
Connector to connect distributed components. The ROOM Connector pattern
is described in (Douglass 2002) and in addition decouples the protocol out of
the connector, which is not done in this thesis. The CORBA Data Distribution
Service (DDS) is an OMG standard (OMG d) which is derived from ROOM’s
Network Data Delivery Service (NDDS). The OMG Event Service (OMG a)
is a port-connector based service for realising data flow.
Related Design Patterns
The Adapter pattern and the Strategy pattern are both applied in Connector.
The Connector encapsulates the protocol (or strategy) of communication,
which can be changed when the collocation of components changes. The Port
then functions as an Adapter to the data access methods of the Component.
The Acceptor-Connector pattern (Schmidt, Stal, Rohnert, and Buschmann
2001) found in the Adaptive Communication Environment (ACE) is a close
relative for separating connection management and data transfer on network
sockets.
4.4.3 The Data Flow Interface
Ports form the interfaces of a control task which allow it to process data
flow. The collection of Ports can be accessed from the Data Flow Interface
of a control task. Since connectors are created outside the control task, this
interface allows the ConnectorFactory, which sets up the application's data
flow, to browse the ports of the task (Figure 4.12).
Figure 4.12: DataFlowInterface UML class diagram.
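A minimal sketch of such a browsable data flow interface could look as follows; the PortInterface and DataFlowInterface shown here are illustrative assumptions and omit the connection logic of the actual ConnectorFactory.

#include <map>
#include <memory>
#include <string>

// A port exposes its name and a compatibility test, so that an external
// factory can decide which connector protocol fits both sides.
class PortInterface {
public:
    virtual ~PortInterface() = default;
    virtual const std::string& name() const = 0;
    // True if both ports transport the same data type and protocol.
    virtual bool isCompatibleWith(const PortInterface& other) const = 0;
};

// The data flow interface of a task: a name server for its ports.
class DataFlowInterface {
public:
    void addPort(std::shared_ptr<PortInterface> port) {
        const std::string key = port->name();
        ports_[key] = std::move(port);
    }
    std::shared_ptr<PortInterface> getPort(const std::string& name) const {
        auto it = ports_.find(name);
        return it == ports_.end() ? nullptr : it->second;
    }
private:
    std::map<std::string, std::shared_ptr<PortInterface>> ports_;
};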
4.4.4 Validation
To validate the connector pattern, the activities of Subsection 4.2.3 are
configured as before, but the data communication is now effectively
performed. Figure 4.13 shows the configurations for the four application
types. Circles denote data objects, rectangles denote buffers. The arrows
denote read/write relations. For buffers, a lower frequency thread fills
the buffer completely per period, while a high frequency thread reads one
sample per period, and vice versa for the inverted read/write case. The
data packet size is 160 bytes, sufficient for controlling 20 degrees of freedom
with double precision floating point numbers. This section measures the
additional communication latency these communications cause using a lock-based or lock-free communication algorithm. This validation differs from
earlier work (LaMarca 1994) comparing lock-based with lock-free algorithms
in that it targets real-time systems.
The applications are first configured such that all data exchange is
protected using mutex locks. This means that if a high priority activity
needs to access data while a lower priority activity is modifying it, the former
blocks until the latter has finished. Hence, although a low priority thread can
be preempted, it will never need to block on a lock held by a higher priority
thread. Vice versa, a high priority thread will never be preempted by a low
priority thread, but risks blocking on a lock held by a lower priority thread.
The effects of this protocol are disastrous for higher priority threads, since
with each additional lower priority thread, the odds that it needs to block
increase. Priority inheritance was required in order to conduct the experiment,
or the high priority threads would fail almost immediately, missing deadlines.
Next the applications are configured such that all data exchange is lock-free. An activity will experience a failed data update only if a higher priority
thread preempted it and modified the same data. Hence, the highest priority
Figure 4.13: Each application type is configured such that it emulates a common
data flow application type.
thread will never fail updating the data, while a lower priority thread has
a larger chance of failed updates as the number of higher priority threads
increases. As pointed out before, the number of failed updates is bounded
under RM, EDF and DM schedulers. This can be intuitively understood by
realising that multiple higher priority updates cause at most one failed update
during one preemption of a lower priority activity.
Minimal Application Figure 4.14 shows the communication latencies of
the minimal application in the lock-based and lock-free configurations. What
strikes immediately is that the lock-free exchange (average is 2.517µs) is about
three times as fast as the lock-based (average is 8.8429µs) communication.
This difference shows that grabbing and releasing a lock consumes more
time than copying and modifying a data structure. Moreover, since there
is only one high priority thread, it never needs to retry the copy-modify
operation. This experiment thus shows that it is already beneficial to use
lock-free communication in minimal systems. It can be argued that the lock
is not efficiently implemented in the chosen operating system. However, this
Figure 4.14: Minimal application communication latencies. Lock based (top) and
lock free (bottom).
is actually an extra argument in favour of lock-free communication since it
does not depend upon the operating system at all (see Subsection 2.2.3).
Small Application Figure 4.15 and Figure 4.16 show the results for the
small application for the high priority and low priority threads respectively.
Each figure shows the lock-based on top and lock-free communication below.
The lock-free communication performs better for both high and low priority
threads. Figure 4.16 clearly shows the additional latencies caused by
preemption during communication.
Concurrent Application The concurrent application was set up to
investigate scalability with high data access contention. The difference
between lock-based and lock-free communication is very clear for the highest
priority thread in Figure 4.17. While the lock-based communication suffers
severe delays because of locks, the lock-free communication behaves as if unaware
of any concurrent data access by lower priority threads. One can wonder if
this is to the disadvantage of the latter. Figure 4.18 and Figure 4.19 show
that the contrary is true. On average, the lock-free communication is more
efficient and has a better worst case latency for the medium priority thread
and an equal worst case latency for the low priority thread.
Figure 4.15: Small application communication latencies: high priority. Lock based
(top) and lock free (bottom).
Figure 4.16: Small application communication latencies: low priority. Lock based
(top) and lock free (bottom).
Figure 4.17: Concurrent application communication latencies: high priority. Lock
based (top) and lock free (bottom).
Figure 4.18: Concurrent application communication latencies: medium priority.
Lock based (top) and lock free (bottom).
Figure 4.19: Concurrent application communication latencies: low priority. Lock
based (top) and lock free (bottom).
Large Application The last application example validates the communication
latencies for a large controller, shown in Figure 4.20 to Figure 4.23. The same
conclusions as with the previous applications can be drawn: on average,
the lock-free approach is more efficient and it has a far better worst case
behaviour for high priority threads.
Conclusions The validation experiments show that lock-free data flow
communication significantly improves the real-time behaviour of activities.
With respect to both average and worst case latencies, it scales better than
traditional lock-based approaches, especially for the high priority activities
of a system. It enables communication with non real-time activities without
the need of priority inheritance capable schedulers, since it does not rely
on the scheduler. Furthermore, these experiments have validated what
was experienced in practice, that the Orocos Connector pattern provides a
performant real-time implementation for lock-free communication.
The results of this validation also contribute to further design decisions of
this work. In any communication protocol, data is moved between activities.
A lock-free solution will be used in future validation experiments, which
will thus not only validate the proposed protocol, but also the underlying
lock-free algorithms.
Figure 4.20: Large application communication latencies: high priority. Lock based
(top) and lock free (bottom).
Figure 4.21: Large application communication latencies: medium priority. Lock
based (top) and lock free (bottom).
Figure 4.22: Large application communication latencies: low priority. Lock based
(top) and lock free (bottom).
Figure 4.23: Large application communication latencies: very low priority. Lock
based (top) and lock free (bottom).
4.5 Execution Flow Communication
The execution flow is formed by the actions executed by activities, and
the events which publish changes in the system. One task’s activity can
request another task (“sends”) to execute an operation on its behalf and
wait for the completion. This is an asynchronous type of communication,
because the requester’s activity can execute other actions while waiting for
the result. A synchronous variant is where one task executes (“calls”) an
operation, defined in the interface of another task, in its own activity. This
Section details the task's operation interface and presents the Synchronous-Asynchronous Message Design Pattern for message based communication in
control applications. The next section will detail how events can benefit from
the same pattern to synchronise concurrent activities.
4.5.1 Analysis
In a control application, decisions must be made on the basis of the state
of the system, and actions are executed as a result of these decisions. The
actions which can be executed by a task are defined as operations in the
task’s interface. Example operations are: executing a movement program;
changing the values of control gains; switching to another control law or
motion interpolator; or bringing the system to a safe state after an error has
occurred. The available operations are largely dependent on the application,
but the way in which operations are requested and executed isn’t. In control
systems, there are two kinds of operations which are very common. Figure 4.24
shows a schematic overview of these two kinds of operations.
The first type of operations has the semantics of “reach a goal” (thus
imperative). For example, MoveAxisTo(axis, position, velocity). It is
defined by a path interpolator task in a control system and takes an axis
number, target position and velocity as arguments. It can be rejected by the
interpolator if the axis is moving, if the axis number or any other argument
is not valid, or if the generator is not accepting commands at all (because
it has not been started). If the command is accepted, it is completed when
the axis reaches the position. We call this type of operation a command,
which a task executes upon request of another task. In some way, this
command is executed by an activity running in the receiver. The sender of
the command will want to know if the command was accepted, if it finished
(using the query IsAxisReady(axis)) and which result it returned. We will
show in Section 5.2 how intermediate errors during the command's effect can
be detected and handled.
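The following hypothetical sketch illustrates these command semantics in C++; apart from the MoveAxisTo and IsAxisReady names taken from the text, the Interpolator class and its stubbed behaviour are assumptions for illustration only.

#include <iostream>

class Interpolator {
public:
    // Returns false if the command is rejected (invalid arguments, axis busy,
    // or not accepting commands); acceptance is immediate, completion is not.
    bool MoveAxisTo(int axis, double position, double velocity) {
        if (axis < 0 || velocity <= 0.0 || busy_)
            return false;                 // rejected
        target_ = position;
        busy_ = true;                     // the receiver's activity performs the motion
        return true;                      // accepted
    }
    // Completion query, polled by the sender until the axis reached the target.
    bool IsAxisReady(int /*axis*/) const { return !busy_; }
private:
    double target_ = 0.0;
    bool busy_ = false;
};

int main() {
    Interpolator interpolator;
    if (!interpolator.MoveAxisTo(0, 1.57, 0.5)) {
        std::cout << "command rejected\n";
        return 1;
    }
    // In a periodic activity, the sender would poll completion once per cycle:
    std::cout << "axis ready: " << interpolator.IsAxisReady(0) << "\n";
    return 0;
}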
The second type of operation has the semantics of “calculate a result”.
Examples are: reading a measurement out of a data object, checking the
status of a task or the calculation of inverse kinematics. These methods are
Figure 4.24: A method calculates synchronously a result, a command reaches a goal
asynchronously.
the equivalent of the classical, synchronous function call7. Thus when one
task’s activity calls the method of another task’s interface, it executes the
method until completion in a single “call” action. Another method example
is starting an activity. When no activity is running in a task, it can not
process commands. Thus the Start operation must be defined as a method
which will always be executed by another task. Furthermore, since methods
are synchronous, the initialisation of the activity is executed by the caller.
Methods and commands are natural extensions to the activity pattern and
implement the peer-to-peer execution flow messages between tasks. A task’s
interface defines which functions are to be executed as commands and which
as methods. This decision has large implications on the implementation of
the method or command in question. First, since a command is asynchronous, a
means must be provided to check if it completed and if so, to fetch the result.
Second, since a command is executed by an activity in the receiver, it is not
concurrent and has no critical sections. Commands are said to be “serialised”
with the activity within the target task. Methods on the other hand are called
asynchronously with respect to the activity of the target and must be thread
safe.
7 This is a trivial definition since method and function are synonyms, but it is stated so as to emphasise its synchronous execution.
4.5.2 The Synchronous-Asynchronous Message Design Pattern
The “synchronous-asynchronous message” design pattern is a behavioural
design pattern.
Intent
This design pattern intends to structure the interface of a task in such a
way that it can be commanded to execute functionality asynchronously with
respect to a peer. Synchronous services are offered to a peer by means of
methods. Both commands and methods are objects such that the ownership
can be transferred between objects.
Motivation
This design pattern is motivated by two major forces. The primary force is
the need for synchronous and asynchronous communication between tasks in
a system. Both forms occur frequently in control applications. The second
force is the need to handle both commands and methods as objects, such that
they can be used in larger object structures. For an elaborate motivation, see
also Subsection 4.5.1.
Applicability
It is applicable in any application with concurrent periodic and non periodic
activities which need to communicate messages.
Participants
The main participants are:
Action An executable function which is executed by a method or by
a command.
This function may take arguments by means of
ExpressionInterface objects, although the arguments must be
provided at construction time, in a ConcreteAction.
ExpressionInterface Provides access to an expression which can be
evaluated. The expression may invoke actions to calculate its result.
It allows to evaluate the expression and to indicate if the evaluation was
successful. If the expression results in a Boolean value, this is considered
the success status; all other expressions are assumed to evaluate
successfully.
Expression A typed expression of which the result can be read.
Method Is both an Expression and an Action intended to be executed
synchronously and to optionally return the result of a function. Methods
have the following properties:
• It is defined in the interface of the receiver, but executed by the
caller.
• It may block.
• It may return any value and take any number of arguments.
The evaluation of a Method is the result of the execution of its Action.
ConcreteMethod receives its arguments as ExpressionInterface instances.
To execute the Action it first evaluates all arguments and passes the
results as arguments to the actual function.
CompletionCondition A Boolean Expression which evaluates to true
or to false and which indicates if the Action associated to a
Command has completed its effects. The evaluate function of a
CompletionCondition may not block and is used for polling, for
example in periodic activities. wait can be used as a synchronisation
point for blocking activities. The CompletionCondition takes the
same arguments as the Command and may store, if the arguments
are AssignableExpressions, results in these arguments when the
command is done.
Command A container containing an Action and a CompletionCondition
where the latter evaluates if the former has completed already. It is
intended to be executed asynchronously. A command object has the
following properties:
• It is defined in the interface of the receiver, thus each task executes
exclusively its own commands.
• It is always accompanied by a condition which evaluates the
completion status of the command. This may be done in a blocking or
a polling fashion.
• It uses a Boolean return value to indicate success or failure and
can take any number of arguments.
Activity Executes Methods, sends Commands and runs the CommandProcessor.
CommandProcessor Empties its CommandQueue, a queue of sent Command
objects, and executes their Actions.
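Before turning to the structure, a minimal C++ sketch of the two message objects may help; std::function stands in here for the Action and Expression hierarchy, and the ExpressionInterface argument objects and factories of the real pattern are omitted, so the names and signatures are assumptions.

#include <functional>
#include <utility>

// A Method wraps a synchronous callable: executed by the caller, may return a value.
template <class R>
class Method {
public:
    explicit Method(std::function<R()> fn) : fn_(std::move(fn)) {}
    R call() const { return fn_(); }     // runs in the caller's activity
private:
    std::function<R()> fn_;
};

// A Command pairs an asynchronous Action with its CompletionCondition.
class Command {
public:
    Command(std::function<bool()> action, std::function<bool()> completed)
        : action_(std::move(action)), completed_(std::move(completed)) {}
    bool execute() { return action_(); }          // run by the receiver's activity
    bool done() const { return completed_(); }    // polled by the sender
private:
    std::function<bool()> action_;     // the Action: returns accept/success status
    std::function<bool()> completed_;  // the CompletionCondition: true when finished
};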
Structure
The Expression, Method, Command and CommandProcessor object structure
is shown in the class diagrams in Figure 4.25, Figure 4.28, Figure 4.26 and
Figure 4.27 respectively.
Figure 4.25: Expression UML class diagram.
Figure 4.26: Command UML class diagram.
Collaboration
The collaborations for commands and methods are discussed separately.
Figure 4.27: Command Processor UML class diagram.
Figure 4.28: Method UML class diagram.
Commands Figure 4.29 shows the command protocol (sending/receiving)
which has the following properties:
• It is asynchronous by nature, is thus sent and takes time to complete.
• It is sent non-blocking into a command queue of the receiver’s Command
Processor (which may block on an empty queue).
• By consequence it is executed at the priority of the receiver.
• It can be dispatched from one task to another, or more generally, can
be wrapped into another command.
• It can be rejected by the receiver. Possible causes for rejection are that
the receiver does not accept commands or that faulty arguments were
provided.
• A completion condition may only be evaluated after the command was
accepted and did not fail.
• A completion condition may take the same arguments as the command
and return results through reference arguments when it evaluates to
true.
Figure 4.29: The Command UML sequence diagram.
Methods The message protocol is simply the invocation of the method with
its arguments and collecting the return value from the method. Figure 4.30
shows the collaboration of an activity (by means of a functionality) which
invokes a method on a peer Task.
Figure 4.30: The Method UML sequence diagram.
Consequences
This pattern forces the designer of tasks to decide upon the synchronous or
asynchronous nature of the operations of a task. An operation of a task must
or may be a method when:
• It may be called from a concurrent activity.
• It starts the CommandProcessor of the method’s task.
• It has immediate effect. When the method returns all effects have
propagated through the system.
• It must be executed at the quality of service of the caller.
• It is not real-time while all activities of the method’s task are real-time.
By consequence, it can not be executed as a command in one of its
activities, thus it must be a method. The same holds when the method
is blocking and all activities of the method’s task are periodic.
An operation must or may be a command when:
• It may not be called from a concurrent activity.
• The task’s CommandProcessor is able to process the command
(running).
• Its effect takes time to complete or propagate through the system.
• It must be executed at the quality of service of the command’s task.
• It is non blocking and the CommandProcessor is executed periodically.
Or it is not real-time and the CommandProcessor is running in a non
real-time activity.
As the items above make clear, commands and methods are complementary
to a high degree. This complementarity is also the main motivation to use
this design pattern. It provides a complement to the classical synchronous
messages in real-time systems. In our day-to-day experience, we have found
no task which requires a more or a less sophisticated operations interface. The
separation of synchronous and asynchronous messaging also sets the stage for
events and synchronisation in the next section.
Methods, commands and expressions can be executed or evaluated in real-time as many times as required. The creation of method, command and
expression objects must happen in a non real-time activity or during the
initialisation of a real-time activity.
The pattern is biased towards periodic activities with respect to evaluating
if a command is done. The mechanism behind the wait function (see below)
was not further investigated in this work since most control activities were
periodic. There is no bias towards the task defining the command, meaning
the CommandProcessor may be blocking or non blocking. To address this bias,
the Synchronous-Asynchronous Event pattern of Section 4.6 can be applied
which is fully transparent towards periodic and non periodic activities.
Implementation
Both Commands and Methods must be implemented according to some
guidelines in order not to violate the requirements of the real-time Activity
pattern.
Commands The CommandQueue must be a lock-free queue in order not to
violate the requirements of real-time activities. Because this queue only stores
pointers to commands, its implementation can be optimised in comparison to
the lock-free buffer seen earlier such that any number of threads can access
the queue. The memory trade-off is thus absent in this case, and the queue
is as performant as possible, requiring constant memory and providing a
constant worst case latency.
A periodic CommandProcessor empties this queue completely during each
periodic step, since the number of commands being processed in parallel
is not bounded. However, according to the Activity pattern, the Commands'
Actions of a task with a periodic CommandProcessor must be non blocking.
As a consequence, such an Action will set up and start a periodic activity,
which when finished, indicates the completion of the command.
If the task’s CommandProcessor is non periodic, the Action may block
and in fact its completion status may be equal to the CompletionCondition.
In such a case, the CommandProcessor serialises each incoming Command. As
above it may also start a periodic activity.
The wait function of a CompletionCondition may be trivially
implemented by polling evaluate continuously. More efficient implementations
may use a periodic check or a semaphore. The wait method may be extended
with a timeout value as an argument to avoid infinite waits. When wait
returns true, the command finished; if it returns false, the timeout expired.
The implementation of a Command’s Action and CompletionCondition
changes if the task’s the CommandProcessor is periodic or not. Since both are
defined in the same task, this coupling is justified.
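A simplified sketch of a periodic CommandProcessor and of a polling wait is given below. The queue is a mutex-protected std::deque rather than the lock-free pointer queue described above, so this sketch only illustrates the control flow, not the real-time properties; all names are assumptions.

#include <chrono>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

struct Command {
    std::function<bool()> action;      // non-blocking for a periodic processor
    std::function<bool()> completed;   // the CompletionCondition
};

class CommandProcessor {
public:
    bool send(Command* cmd) {                      // called by sender activities
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(cmd);
        return true;
    }
    void step() {                                  // called once per periodic cycle
        std::deque<Command*> pending;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            pending.swap(queue_);                  // empty the queue completely
        }
        for (Command* cmd : pending)
            cmd->action();                         // executed at the receiver's priority
    }
private:
    std::mutex mutex_;
    std::deque<Command*> queue_;
};

// Trivial wait(): poll the completion condition until it holds or a timeout expires.
inline bool wait(const Command& cmd, std::chrono::milliseconds timeout) {
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    while (!cmd.completed()) {
        if (std::chrono::steady_clock::now() >= deadline)
            return false;                          // timeout expired
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return true;                                   // command finished
}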
Methods The implementation of a Method’s Action is far more straightforward
in comparison with a Command. Depending on its contract, it may be blocking
or non blocking and it is the functionality’s responsibility to only invoke the
kind of methods which are allowed in its kind of activity.
Related Patterns
The Command pattern in combination with the Factory Method pattern (both
in (Gamma, Helm, Johnson, and Vlissides 1995)) is the key to realising this
pattern for creating and invoking both method and command objects. A
related pattern to the CommandProcessor is the serialiser pattern (Hauser,
Jacobi, Theimer, Welch, and Weiser 1993; Massalin and Pu 1992; Hohmuth
and Härtig 2001) which is a server thread which serialises queued commands.
In that context, it is considered as the last resort to implement lock-free
operations. Our approach differs from serialiser in semantics and intent. A
task serialises the commands it executes in addition to the task it is executing
and only executes its own commands at its priority.
The Reactor pattern (Schmidt, Stal, Rohnert, and Buschmann 2001) is
also related to the CommandProcessor as it waits for incoming asynchronous
requests, most commonly from network sockets.
4.5.3 The Operation Interface
The Operation Interface of a task offers access to the CommandFactory, a
factory which generates Command objects, and the MethodFactory, a factory which
generates Method objects. The OperationInterface structure is shown in the
class diagram in Figure 4.31.
Figure 4.31: The OperationInterface UML class diagram.
The OperationInterface contributes to the separation of behaviour and
interface in a peer-to-peer task network since the factories abstract how
command and method objects are created and which types are present. This is
accomplished by passing their arguments with ExpressionInterface objects
and thus the factories hide the type signature of a command or method. This
process is shown in Figure 4.32 and Figure 4.33.
Figure 4.32: The Command creation UML sequence diagram.
Figure 4.33: The Method creation UML sequence diagram.
The realisation of these factories requires a large number of ‘helper’ classes.
These are left out in this text, but can be consulted in the on-line Orocos
manuals.
4.5.4 Validation
To validate the synchronous-asynchronous message pattern, the activities
of Subsection 4.2.3 are configured such that commands are sent from each
activity to a higher priority and lower priority activity in each execution cycle.
The lowest and highest priority activity send commands to each other. An
activity only sends a command if it received one from its neighbour, thus a
concurrent ping-pong effect is established. Every activity thus processes and
sends two commands each cycle at most.
The tasks are configured such that receiving a command causes a load
which is 10 % of the load shown in the graph’s legend. In the worst case, two
commands or 20 % load are generated in the task. For this experiment, both
the default load and data flow communication are disabled such that only the
effects of sending and processing commands show.
The experiment measures command send and execution times, using a
lock-free CommandQueue. The advantages of using lock-free communication
have been validated in the previous experiment.
Sending Commands
The results for all application types were similar, so only the results of the
most demanding application, the concurrent setup, are discussed. Figure 4.34
shows the cost of sending a command to a local peer for the high, medium and
low priority activities respectively. Sending a command into a local lock-free
queue takes on average 4µs for the highest priority thread. The lower priority
threads have a slightly higher average due to the preemptions, as can be seen
in the figure; however, the worst case latencies show a very deterministic
operation.
Processing Commands
After sending commands, each task checks its command queue and processes
any commands received. The execution of the command processor is timed
within the concurrent application. Figure 4.35 shows the results for the high,
medium and low priority thread respectively. The measurement includes
the emptying of the command queue by the CommandProcessor; reading an
empty queue takes about 3µs (top graph). In a few cases, the top graph shows
the overhead is 10µs for processing two commands.
For all three cases, when no preemption occurs and commands are received,
the resulting latency is either 10 percent or 20 percent of the load, indicating no
significant jitter induced by the CommandProcessor.
Figure 4.34: Concurrent application sending commands: high, medium and low
priority
Figure 4.35: Concurrent application processing commands: high, medium and low
priority
4.6 Synchronisation of Activities
Activities synchronise when one activity waits for the completion of another.
Or, more generally, an activity waits for the occurrence of an event (also
known as a signal). Some activities can wait indefinitely for an event to occur,
or in other words, can block on an event. In real-time control, periodic
activities need a way to synchronise without blocking on events. Furthermore,
subscription to and unsubscription from events may be a highly concurrent
operation which is performed when the internal state of activities changes.
Similarly to the Synchronous-Asynchronous Message pattern, the classical
object oriented event pattern can be extended to allow asynchronous callbacks
which are executed in asynchronous activities.
4.6.1 Analysis
In real-time control applications, many changes are detected at one place
within the system and must be notified to subscribers. Errors, (partially)
completed commands or measurements exceeding limits are common events
which are raised and need to be broadcasted to multiple objects. While
semaphores are the old8 school operating system primitives to solve activity
synchronisation, events are the object oriented solution to the same problem.
In control, events must provide periodic and non periodic activities a means
to synchronise. To be more precise, an activity may choose to wait (i.e.
block) for an event or to be informed of (i.e. a function is called back by) an
event (Figure 4.36). An event carries application dependent data (such as,
the measured position, or an identification of which console button has been
pushed), which is passed to the callback function as arguments. The signature
of the callback is thus dependent on the event’s type. Because of the inherent
concurrency with event processing, it is possible that an activity subscribes to
an event just when it is emitted, or a callback function unsubscribes itself and
possibly other callbacks from the same or other events. This largely obstructs
a lock-based event implementation because of the possibility of deadlock.
A subscriber chooses upon subscription if it must be informed immediately
(synchronously) when the event is emitted or deferred (asynchronously), i.e.
the processing of the event is delayed to a safe moment. A synchronous callback
is related to the ‘method’ of the Synchronous-Asynchronous Message pattern
because the subscribers’ callbacks are called synchronously in the context of
the event raiser. The asynchronous event callback is related to the ‘command’
abstraction because callbacks are called asynchronously, with respect to the
event raiser, thus in the context of the event subscriber. A synchronous
callback should thus be executed according to the quality of service of the
8 Or should I say common?
Figure 4.36: Analysis of event semantics.
activity that raised the event, while the asynchronous callback should be
executed according to the quality of service of the activity which subscribed
to the event. Periodic and non periodic activities will handle deferred events
differently, as the former are not allowed to wait for the occurrence of the event.
The traditional publish-subscribe design pattern (Gamma, Helm, Johnson,
and Vlissides 1995) is too primitive9 for coping with these requirements,
although its interface is a basis to start from. Semaphores, on the other hand,
remain the only sound means to let activities wait and continue in
a controlled way. For example, semaphores are required to deterministically
stop and start periodic activities or wake activities when they are waiting
for an event to occur. This mechanism must be somehow combined with the
callback model of the publish subscribe design pattern, while all activities are
reacting to the event according to their quality of service.
A task must have a means to publish which events it emits, and to offer
its peers an interface to query for events and to set up event connections. The
CORBA RT-Event Service is an example of such an interface for distributed event
handling, although it is meant to function as a central server which distributes
events to clients according to a quality of service. The complexity of this
9 It does not take concurrency into account, but also does not offer a solution to self and cross (un)subscription. Furthermore, it does not define the handle and connection responsibilities.
service, however, leaves no room for a hard real-time implementation. The
framework we present requires a light-weight per task event service.
Control applications require that the control framework allows the synchronisation
to be specified as a state machine. That is, an activity which is waiting
for an event to occur is actually in a certain state. While being in that state,
it may perform a (periodic) activity until the event occurs, which results in
a transition to another state. When the task is in the other state, it is likely
interested in other events; hence, during the transition, the subscription to the
events of the former state must be cancelled and a new subscription to events
for the new state must be created. This creation and deletion of subscriptions
is not a trivial operation in real-time systems, as careless implementations
may require memory to be allocated at run-time.
The design pattern in Subsection 4.6.2 presents an event mechanism which
will meet all the above requirements, which are typical for real-time control
frameworks. More specifically, it provides
• an object oriented event framework in which the callback functions may
receive any kind of data.
• a way for both periodic and non periodic activities to synchronise with
events.
• both synchronous and asynchronous callback function invocations with
respect to the event raiser.
• fully thread safe and lock-free (using (Harris 2001) with a real-time
memory management scheme) (un)subscription of callbacks, including
self unsubscription or cross subscription/unsubscription from any
concurrent activity.
The Event Service in Subsection 4.6.3 provides a separation between
behaviour realised by the pattern and an event-type independent interface,
similar to the task’s command and method factories of the SynchronousAsynchronous Message pattern.
We have found no earlier work which succeeded in combining all these
properties. We designed the event with the handle idiom, which allows a
“sleeping” signal-slot connection to be set up with all resources already allocated.
A lock-free list implementation allows (un)subscription without disturbing
higher quality of service activities.
4.6.2 The Synchronous-Asynchronous Event Design Pattern
The Synchronous-Asynchronous Event pattern is a behavioural pattern.
Intent
To provide a publish-subscribe mechanism which allows callbacks to be called
synchronously or asynchronously in concurrent, real-time activities. The
concurrent activities may be periodic or non periodic. The subscription can
be set up beforehand and connect or disconnect the callback repeatedly in
real-time.
Motivation
This pattern is motivated by the same arguments as in the Publish-Subscribe design pattern, but defined in a concurrent real-time control
application. Publish-Subscribe offers a synchronous implementation of a
callback mechanism where a function of a subscriber is called when the event
is raised. The callback function may change some data, signal a semaphore
or perform any work that is necessary to process the event.
It is not uncommon that the callback function contains critical sections.
Critical sections can be avoided by processing the callback asynchronously,
or deferred, in the context of the subscriber. A task which subscribes an
asynchronous callback to an event contains an EventProcessor activity which
must check a queue of deferred events that need further handling. It may wait
for incoming events or periodically (try to) empty the queue. The advantage
of deferred event handling is that the load on the event raiser is deterministic
and rises linearly with the number of subscribers.
An object is not necessarily subscribed to an event for its full lifetime.
The (un)subscription process may be decided upon in the execution flow, and
therefore must be a hard real-time action.
For an elaborate motivation, see also Subsection 4.6.1.
Applicability
The pattern is applicable in any real-time system with more than one activity.
Use it if:
• you need to broadcast changes in the system to subscribed tasks which
execute concurrent activities.
• you need lock-free handling of events.
• you need dynamic (un)subscription of subscribers, which must be possible at all times
from all places.
Structure
Figure 4.37 shows the class diagram for synchronous event handling.
Figure 4.38 shows the extension to asynchronous callback handling using an
EventProcessor.
Figure 4.37: The Synchronous Event Pattern UML class diagram.
Participants
As with the Synchronous-Asynchronous Message Pattern, a full implementation
will require helper classes to create the necessary objects and relations. These
participants form the core business of the pattern:
Event Defines the signature of the event by means of Event::emit, which the
Subscriber::callback function must match in order to Event::setup
an EventConnection. It creates Handle and EventConnection objects,
but it only returns the former to the user.
Handle Is the user’s representation and allows manipulation of the
EventConnection status between the Event and the Subscriber. A
Figure 4.38: The Asynchronous Event Pattern UML class diagram.
Handle is copyable and thus multiple Handle objects may point to the
same EventConnection.
Subscriber Contains the Subscriber::callback function which must have
the same signature as Event::emit. A connected Subscriber::callback
function is called each time the event is Event::emit’ed.
EventConnection Represents the relation between an Event and a
Subscriber. It has three statuses: it can be connected, meaning that
the Subscriber::callback function is called upon each Event::emit;
it can be disconnected, meaning that Event::emit is not burdened
with calling the Subscriber::callback function; or it can be invalid,
meaning that the Event object no longer exists.
AsynEventConnection is a Connection which saves an Event::emit for
completion by an asynchronous EventProcessor. It may store the
‘caught’ data or apply other policies such as only saving the last event
data in case of overrun.
EventProcessor Runs in an activity and contains a list of all AsynEventConnections
which capture events for that activity. When the EventProcessor
executes, it instructs all AsynEventConnections to AsynEventConnection::complete
all pending events, thus to dispatch the request to the final asynchronous
Subscriber.
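The following simplified, single-signature C++ sketch shows how these participants could fit together. Connection management here is neither generic nor lock-free, unlike the implementation of this work, and all names are illustrative.

#include <functional>
#include <memory>
#include <vector>

class EventProcessor;

struct EventConnection {
    std::function<void(double)> callback;
    bool connected = true;
    // For asynchronous connections: remember the last emitted value until the
    // subscriber's EventProcessor completes it (simple "keep last" policy).
    bool pending = false;
    double saved = 0.0;
    EventProcessor* processor = nullptr;   // null for synchronous connections
};

class EventProcessor {                      // runs in the subscriber's activity
public:
    void add(std::shared_ptr<EventConnection> c) { async_.push_back(std::move(c)); }
    void step() {                           // called periodically or when woken
        for (auto& c : async_)
            if (c->connected && c->pending) { c->callback(c->saved); c->pending = false; }
    }
private:
    std::vector<std::shared_ptr<EventConnection>> async_;
};

class Event {                               // emitted by the raising activity
public:
    using Handle = std::shared_ptr<EventConnection>;
    Handle connectSyn(std::function<void(double)> cb) {
        auto c = std::make_shared<EventConnection>();
        c->callback = std::move(cb);
        connections_.push_back(c);
        return c;                           // the Handle lets the subscriber disconnect
    }
    Handle connectAsyn(std::function<void(double)> cb, EventProcessor& proc) {
        Handle c = connectSyn(std::move(cb));
        c->processor = &proc;
        proc.add(c);
        return c;
    }
    void emit(double value) {
        for (auto& c : connections_) {
            if (!c->connected) continue;
            if (c->processor) { c->saved = value; c->pending = true; }  // deferred
            else              c->callback(value);                        // immediate
        }
    }
private:
    std::vector<Handle> connections_;
};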
Collaboration
Figure 4.39 shows the sequence diagram for synchronous event handling.
Figure 4.40 shows the extension to asynchronous callback handling using the
EventProcessor.
Figure 4.39: The Synchronous Event Pattern UML sequence diagram.
Consequences
Using this pattern has the following consequences:
• All synchronous callbacks are processed first, even if a higher quality
of service asynchronous callback is present. In order to serve the
asynchronous callbacks according to their quality of service, the Event must order the
asynchronous callback list according to the quality of service of the
EventProcessor.
• When the Handle object is destroyed while the callback is connected, the
callback can no longer be removed from the Event. A ScopedHandle
object can tie its lifetime to the Connection and disconnect the callback
upon its destruction.
Figure 4.40: The Asynchronous Event Pattern UML sequence diagram.
Implementation
The core of the implementation consists of two lock-free lists, ConnectionList
and AsynConnectionList, which store the event-subscriber relations for
synchronous and asynchronous subscribers respectively. In addition the
AsynEventConnection stores the event data in a lock-free data object, until
invoked by the processor. The lists must be growable at runtime, such that
any number of connections can be set up. As with the lock-free buffers and
data objects before, the lists require an amount of memory equal to twice
the number of threads times the number of connections times the size of a
connection object.
• In order to support multiple event (and thus callback) signatures, the
implementation will need to rely on generic programming in order to
reduce code duplication.
• ConnectionList must be implemented such that adding and removing
EventConnection objects is hard real-time and lock-free. Memory
storage can be reserved with ConnectionList::reserve during
Event::setup, such that ConnectionList::add needs only to append
the Connection to the list.
• The same arguments hold for the asynchronous AsynConnectionList.
• Since multiple objects with independent lifetimes hold a reference to the
EventConnection, garbage collection is necessary to delete this object
when it is no longer needed.
The implementation benefits from the lock-free approach on multiple
fronts. First, the addition and removal of connections does not interfere with
the emission of events, since no locks are held on the connection lists. Second,
the asynchronous delivery of events by the event processor does not interfere
with an event emission, since the AsynEventConnection stores the event data
internally in a lock-free data object. Last, the emission of an event itself is
always non blocking, meaning that multiple activities may emit the same
event. The highest priority activity will always get its event delivered first,
since the connection list is never locked.
Related Patterns
As noted before, the Publish-Subscribe pattern (Gamma, Helm, Johnson,
and Vlissides 1995) is the synchronous ancestor of this pattern. The CORBA
Event Service (Harrison, Levine, and Schmidt 1997) offers a rich interface for
serving events in a distributed environment.
The Boost::signal Library (Boost) is an advanced, non-concurrent, non-real-time implementation which also uses the Handle idiom. The same can
be said of the SigC++ Library (Schulze, Cumming, and Nelson). Both do
not implement asynchronous callbacks.
The Proactor (Schmidt, Stal, Rohnert, and Buschmann 2001) pattern
listens for incoming network requests associated with a handle and handles a
small part (such as acknowledging) of the request in a synchronous callback
and defers the completion of the request (such as transmitting the requested
data) to a Completion Processor.
4.6.3 The Event Service
The Event Service is the part of a Control Task ’s interface which provides an
event-type independent interface to add callbacks to all exported events of a
task.
The EventService class in Figure 4.41 shows that a task exports its
events using addEvent, and assigns each a publicly visible name. A peer task
can subscribe synchronous or asynchronous callbacks to each exported event
using setupSyn and setupAsyn respectively. The arguments are separated out
of the callback’s signature and the Subscriber::callback(T) function has
been replaced by a function object FObj which contains a callback without
arguments. The function object will find the event data in the sequence of
AssignableExpression objects when the event is emitted. For asynchronous
callbacks, the subscriber task must provide its event processor (EProc) as the
last argument such that when the event is emitted, the callback execution can
be deferred to that processor. The event is always emitted from within the
ConcreteTask.
Figure 4.41: The Event Service UML class diagram.
The implementation of the EventService is beyond the scope of this work.
Since the callback has been separated into a zero-arity function object and a
sequence of arguments, the EventService does not expose the Event type
and communicates through Handle objects and the ExpressionInterface
hierarchy.
Analogously to the command and method factories, this
abstraction allows changes of the Event type and implementation without
modifying the interface of a task, hence adds to the required separation of
behaviour and interface in a peer-to-peer infrastructure.
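As a rough illustration of the idea, and not of the actual EventService class, a name server for events with zero-arity callbacks might be sketched as follows; the asynchronous setupAsyn path and the ExpressionInterface argument handling are deliberately omitted, and all signatures are assumptions.

#include <functional>
#include <map>
#include <string>
#include <vector>

class SimpleEventService {
public:
    using Callback = std::function<void()>;             // zero-arity function object

    // The owning task exports an event under a publicly visible name.
    void addEvent(const std::string& name) { events_[name]; }

    // A peer subscribes a synchronous callback to a named event.
    bool setupSyn(const std::string& name, Callback cb) {
        auto it = events_.find(name);
        if (it == events_.end()) return false;           // unknown event
        it->second.push_back(std::move(cb));
        return true;
    }

    void emit(const std::string& name) {                 // called by the owning task
        auto it = events_.find(name);
        if (it == events_.end()) return;
        for (auto& cb : it->second) cb();
    }
private:
    std::map<std::string, std::vector<Callback>> events_;
};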
4.6.4 Validation
To validate the event infrastructure, the activities of Subsection 4.5.4 are
configured such that events are emitted in each activity to which a higher
priority and lower priority activity are subscribed. The lowest and highest
priority activity are subscribed to each other. Thus every activity emits one
event and processes at most two events each cycle. Processing an event causes a
load of 10 percent of the load shown in the legend of the graph.
As in the Command pattern validation, only the results of the concurrent
application are discussed.
Emitting Events
Figure 4.42 shows the measured latency for emitting a single, asynchronous
event with two subscribers within the high, medium and low priority tasks
respectively. The best case latency can be found in the highest priority thread
Figure 4.42: Concurrent application emitting events: high, medium and low priority.
Line legend: period/load.
and is on average 3µs. The pattern is again very deterministic and favours
the highest priority thread.
Processing Events
Each task is equipped with a periodic event processor which checks at the
end of each execution cycle if events need to be processed. Figure 4.43 shows
the measured latency of the event processor for the high, medium and low
priority threads respectively. In the majority of the cases, the queue is empty
and a small overhead of about 3µs shows in all three tasks.
The additional peaks are not to be confused with jitter as they indicate
the processing of one or two events, accounting for a latency of 20 percent
of the task’s load. The top graph reads an overhead of 20µs for processing
one event asynchronously, since the peak should be at 10µs and it is around
20µs. This processing interferes with the emission of events, but no high jitter
shows.
Figure 4.43: Concurrent application processing events: high, medium and low
priority. Line legend: period/load.
4.7 Configuration and Deployment
Configuration flow is the (non-real-time) deployment and (re)configuration of
tasks. Configuration starts with deploying a task in an operating environment
and making it load instructions that provide new, or update existing,
functionalities and behaviours. Configuration also involves moving some
parts of a system to other processing nodes in a network. Remote software
updating is another example: an engineer at a company’s headquarters
connects to a customer’s machine, and steers the machine controller through a
software update cycle. This involves uploading new software versions for some
Tasks’ functionality. This kind of configuration always requires a (partial)
deactivation of the tasks involved and a possible reconfiguration of the tasks
which interact with the reconfigured task.
Execution flow may reconfigure the system by switching between binary
code that is already available in the tasks since the last (re)configuration; for
example, an estimator event triggers the switch between two control laws.
Configuration flow typically requires some execution flow messages to bring
the tasks to an already available “configuration state”, in which functionality
can be unloaded and loaded in a safe way, or tasks be moved to a new node.
Configuration in real-time systems can not happen at arbitrary times or
from arbitrary contexts. A thread executing a control functionality can not
be used to read in a file or perform any action of unbounded duration. Hence,
configuration can not happen in real-time data flow or real-time execution
flow. A number of known design patterns can be applied for separating
configuration flow from execution flow and data flow. An overview with
references is given below.
4.7.1 The Property Interface
The properties of a task are a structured representation of its parameters
(or attributes) and visible state. A property object has a name, value and
optionally, a description. Properties may be exported to and manipulated by
external entities. The Property Interface allows browsing of this structure and
modifying the contents. Properties are typically read from a file and checked
for consistency when a task is created and written to a file format when the
task is destroyed in order to provide task configuration persistence. Not only
parameter values are stored in properties but also the names of task or data
objects to which the task is connected, such that when a task is created, it
can, by using a name server, reconnect to these objects in order to reestablish
its functionality in the system. A standard file format for storing properties
is the CORBA XML Component Property Format (CPF) (OMG c).
Although properties contributed a great deal to the configurability and
usability of the control tasks which can be built with the framework, this
work does not contribute new insights to that domain. Properties are grouped
in (unordered) property bags. A property may contain a value or a bag
and hence, property bags can form nested hierarchies which can be used
to structure the data. A bag can be queried for its contents and is thus
browsable. The property interface of a control task is thus no more than the
access to its root bag and methods to load, modify and read its properties.
A property can be used as an AssignableExpression in command, method
or event arguments. For completeness, the PropertyInterface of a control
task is shown in Figure 4.44.
Figure 4.44: The Property Interface UML class diagram.
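A minimal sketch of this property structure, assuming hypothetical class names, could look as follows; the many value types, the XML (CPF) persistence and the AssignableExpression coupling of the real framework are omitted.

#include <memory>
#include <string>
#include <vector>

// A property has a name, an optional description and, in the derived classes,
// either a value or a bag of further properties.
class PropertyBase {
public:
    PropertyBase(std::string name, std::string description)
        : name_(std::move(name)), description_(std::move(description)) {}
    virtual ~PropertyBase() = default;
    const std::string& name() const { return name_; }
    const std::string& description() const { return description_; }
private:
    std::string name_;
    std::string description_;
};

template <class T>
class Property : public PropertyBase {
public:
    Property(std::string name, std::string description, T value)
        : PropertyBase(std::move(name), std::move(description)), value_(std::move(value)) {}
    T& value() { return value_; }
private:
    T value_;
};

class PropertyBag : public PropertyBase {                // a bag may contain bags
public:
    using PropertyBase::PropertyBase;
    void add(std::shared_ptr<PropertyBase> p) { contents_.push_back(std::move(p)); }
    PropertyBase* find(const std::string& name) const {  // browsable contents
        for (const auto& p : contents_)
            if (p->name() == name) return p.get();
        return nullptr;
    }
private:
    std::vector<std::shared_ptr<PropertyBase>> contents_;
};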
4.7.2 Name servers and Factories
When properties are used to store the names of commands, events, peer tasks, data objects or any other object in the framework, a name server is needed to establish these connections. Furthermore, by merely updating the properties, it is possible to reconnect a task within the framework. The property and name server patterns are thus complementary in establishing a reconfigurable network. If a task is "moved" over a network to another node, it notifies the name server of its new location such that peer tasks can re-establish the connection. The property interface and the task browsing interface of the next Section are in fact name servers for their own contents; hence a lightweight version of a name server was already implicitly used in these interfaces.
The Naming Service (OMG b) is an OMG proposed standard for name servers
for distributed CORBA objects.
While a name server is used to locate existing objects within an
application, the factory design pattern (Gamma, Helm, Johnson, and Vlissides
1995) serves to create new objects by a given tag or name. For example, when
a task is configured, it uses the interface factories to create all the command
objects it may send to other tasks or create all the data flow connections for
all data it may communicate to other tasks. The operations, event and data flow interfaces are thus all factory interfaces, which create commands, event connections or data flow connections respectively.
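As an illustration of how the two patterns complement each other, the following C++ sketch registers existing objects under a name and creates new command objects from a string tag, as a script parser or a configuration step would. The class names (NameServer, CommandFactory, MoveCommand) are invented for this example and do not correspond to the framework's API.

#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Minimal stand-in for a command object.
struct Command {
    virtual ~Command() = default;
    virtual void execute() = 0;
};

struct MoveCommand : Command {
    void execute() override { std::cout << "moving\n"; }
};

// Name server: locates existing objects by name.
template <typename T>
class NameServer {
    std::map<std::string, T*> registry_;
public:
    void registerObject(const std::string& name, T* obj) { registry_[name] = obj; }
    T* lookup(const std::string& name) const {
        auto it = registry_.find(name);
        if (it == registry_.end()) return nullptr;
        return it->second;
    }
};

// Factory: creates new objects from a tag, e.g. when a task is configured.
class CommandFactory {
    std::map<std::string, std::function<std::unique_ptr<Command>()>> makers_;
public:
    void add(const std::string& tag, std::function<std::unique_ptr<Command>()> maker) {
        makers_[tag] = std::move(maker);
    }
    std::unique_ptr<Command> create(const std::string& tag) const {
        auto it = makers_.find(tag);
        if (it == makers_.end()) return nullptr;
        return it->second();
    }
};

int main() {
    CommandFactory factory;
    factory.add("moveAxis", [] { return std::make_unique<MoveCommand>(); });

    auto cmd = factory.create("moveAxis");        // created from its tag
    if (cmd) cmd->execute();

    NameServer<Command> names;
    names.registerObject("lastMove", cmd.get());  // located later by name
    if (names.lookup("lastMove")) std::cout << "found by name\n";
}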
4.8 Task Browsing
As Section 4.1 pointed out, a peer-to-peer task infrastructure is the most flexible way to implement any application architecture, be it a small-scale robot control loop or a large-scale woodworking machine. The aim is to build associations between tasks and to jump from one task to the other following these associations, in other words to "browse" the task network. Such a peer-to-peer network requires that each task presents the same interface, which allows setting up connections to other tasks and browsing these connections. Furthermore, the interface presented by each task is composed of all the interfaces seen in this Chapter. Figure 4.45 shows the resulting Control Task Interface, which extends all interfaces of this Chapter with methods for browsing.
Figure 4.45: The Control Task Interface UML class diagram.
This interface respects the separation between interface and behaviour. Although every control task implements the
ControlTaskInterface, it does not need to contain commands, properties,
events, etc. Each interface can be queried for the existence of these objects, and thus the interface of every task essentially hides whether and which behaviour is implemented. In other words, not only is the task network browsable, but also the interface of each individual task. The advantages of this technique become clear when command or property objects are added to (or upgraded in) a task's interface at run-time, or when a new task is added to the network and existing tasks can query it for the services it offers.
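The sketch below illustrates, in simplified C++ that is not the framework's actual interface, how such browsing could look: every task exposes the same query methods, regardless of whether it actually offers any commands, and peers can be visited by following the stored associations.

#include <iostream>
#include <map>
#include <set>
#include <string>

// Every task presents the same browsable interface, even when it implements
// no commands or properties at all.
class ControlTask {
    std::string name_;
    std::set<std::string> commands_;             // names of offered commands
    std::map<std::string, ControlTask*> peers_;  // connections to peer tasks
public:
    explicit ControlTask(std::string name) : name_(std::move(name)) {}

    void addCommand(const std::string& c) { commands_.insert(c); }
    void addPeer(ControlTask* p)          { peers_[p->name()] = p; }

    // Browsing: query the interface without knowing the task's behaviour.
    bool hasCommand(const std::string& c) const { return commands_.count(c) != 0; }
    const std::map<std::string, ControlTask*>& peers() const { return peers_; }
    const std::string& name() const { return name_; }
};

int main() {
    ControlTask reporter("Reporter"), robot("Robot");
    robot.addCommand("moveAxis");
    reporter.addPeer(&robot);

    // Jump from task to task along the associations and inspect the services.
    for (const auto& entry : reporter.peers())
        std::cout << entry.first
                  << (entry.second->hasCommand("moveAxis") ? " offers" : " lacks")
                  << " moveAxis\n";
}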
4.9 Summary
This Chapter analysed the requirements for an application-independent, real-time control task or component. The control task defines the separation between interface and activity on the one hand, and provides interfaces which allow real-time, thread-safe execution of its activities on the other hand. In other words, by analysing the activities which are executed within control tasks, the requirements of a task's interface towards its peers could be deduced. The communication patterns which were proposed in this Chapter allow, by design, hard real-time communication and lock-free data exchange, which was validated experimentally and compared to classical communication methods.
The interface of a control task such as proposed in this work allows task
activities to
• communicate buffered or unbuffered data between tasks;
• subscribe to peer task events;
• send commands or call task methods;
• configure and read task properties;
• be distributed over a network.
This interface suffices to set up the control kernel application template, but gives only partial insight into how the behaviour of a control task needs to be realised. The next Chapter goes into great detail on how to realise task activities which use the communication patterns of this Chapter.
Chapter 5
Real-Time Task Activities
The design patterns presented in the previous sections structure the
application’s configuration and communication mechanisms. We define a
Control Task as an object which supports one or more of these interfaces
and makes them available to peer tasks for inspection (“browsing”) and
execution (“calling”). This Chapter details how task activities, which use
these mechanisms for communication, can be realised. Activities consist of a set of actions, such as calculations or commands, which the task wishes to execute. A task
offers the environment (or context) in which activities directly operate and
through which activities are connected to other tasks.
Figure 5.1 shows the task with its interfaces as defined in the previous
Chapter and shows how a task can be internally structured. Centrally, an
Execution Engine (Section 5.3) serialises incoming commands, executes event
handlers and executes activities. The activities can be hard-coded (fixed at
compile time) in a task or, in our terminology, be loaded (in non real-time) as
programs (Section 5.1) or state machines (Section 5.2) which are executed in
real-time by a Program Processor and State Machine Processor, respectively.
Activities in a task use the interfaces of peer tasks to communicate and realise their activity. The definition of this infrastructure in terms of the design patterns of the previous Chapter is a major contribution of this work, since it integrates and provides a framework for all communication patterns of real-time control systems.
This Chapter starts with the introduction of the program graph: a real-time automaton which defines and executes an algorithm and thus realises an activity. The program graph is well suited to, and indeed designed for, the communication mechanisms of the previous Chapter. Next, a state machine model is defined which is as expressive as UML state charts. State machines built to this model are well suited to express which activities must be executed in which state and how transitions from one state
to another must be made. The central Execution Engine is left to the last section and is merely the serialisation of all kinds of Processors defined in this work.
Figure 5.1: A task’s public interface is composed of events, operations (methods
and commands), properties and data flow ports. Its activities are executed by
an execution engine and are programs and state machines. Its interface and peer
connections can be browsed.
5.1 Real-Time Programs
As the previous Chapter pointed out, activities realise part of the behaviour
of a task, i.e. what it does. Programming languages serve to define activities
which are then compiled to an underlying process model. In some languages, like C++ or Ada, this is the actual hardware processor architecture, while in other languages, such as Java or Lisp, this is a virtual machine: a program which offers a hardware-independent environment that can interpret and execute applications. The former technique has the advantage that it executes applications as fast and as deterministically as possible, but the compilation step can take a long time and delivers non-portable machine code. The latter technique (commonly named "byte-compiling") is slower
since the resulting code needs to be interpreted by the virtual machine but
has the advantage that it is portable. Furthermore, interpreted code has the
advantage that the application can be gradually extended with new units of functionality, which seamlessly inter-operate with the existing application. A broad spectrum of variants exists between these approaches, and the approach in this work is a blend optimised for real-time execution of run-time loadable programs.
A program conditionally executes actions (or statements), such as invoking
a method, checking a completion condition or modifying a property. The most
straightforward way to model this execution is by using a graph which executes
an action in each node and checks the outgoing arcs of the current node for a
valid transition to the next node. This model has several advantages:
• It is deterministic in time, when the actions and conditions are
deterministic. That is, traversing a graph is a bounded time process
and does not add indeterminism to the actions it executes.
• Such a graph can be constructed at run-time out of “node” objects and
“edge” objects which are connected.
• The arcs form synchronisation points: they can block or allow the
traversal to a next node under certain conditions.
• One can execute one action at a time, as in a debugger, without requiring
additional functionality.
• Although executing a graph is slower than executing a compiled
program, it is faster than an interpreted program, since only virtual
functions need to be called and nothing is left to interpretation.
The first item makes program graphs suitable for executing real-time activities. The second item allows both compile-time and run-time construction of graphs. The third item makes them fit to execute commands and methods. The fourth item makes them fit for debugging purposes, and the last item makes them a better alternative for embedded systems than an interpreted language. The program graph approach also has some disadvantages:
• Since Action::execute() takes no arguments (Subsection 4.5.2) an
alternative argument passing infrastructure is required.
• Constructing such an object graph by hand is notoriously difficult.
Imagine only a small program of about one hundred statements.
• Each kind of action requires a different object type, easily leading to
hundreds or even thousands of different types (i.e. classes).
• It can not easily express real-time recursion.
The first item was already solved in the previous Chapter by introducing
the ExpressionInterface which serves to share data between actions. The
second item can be solved by developing a scripting language which queries
the method and command factories for actions and glues the graph’s nodes
together. The third item can only be solved by using a generic programming
language such as C++ or a recent version of the Java language. That is, a
“skeleton” action object can be constructed which is tied at compile-time (thus
by the compiler) to a function of the task. When the action is executed, the
function is called. The last item comes from the fact that there is no “stack”
involved in traversing a program graph. All intermediate data is stored in
pre-allocated expression objects and a real-time program is not allowed to
allocate additional expression objects when a function is recursively entered.
This may be considered the biggest disadvantage, as recursion is often seen as a fundamental building block of intelligent systems. A virtual
stack emulation could be added in which the stacked expression objects are
allocated, but this was beyond the scope of this work.1 However, any need
for recursion can be encapsulated in a method or command of a task written
in the underlying programming language.
This Section describes the program graph model and how such a graph
can be constructed from program scripts.
5.1.1 Program Model
A (real-time) program is a graph which contains one Action in each Node
and directed Arcs between the Nodes, Figure 5.2 and Figure 5.3. There is
a start node and an end node and any number of nodes (Nx ) connected
with directed arcs (Ax,y from Nx to Ny ) in between. The Action in a node may modify or read local data, evaluate an expression, send a command to a peer task context, and so on. If the Node's Node::execute() method returns 'false', an error occurred and the program flags its ProgramStatus as error. Upon entry (Figure 5.4), a Node is reset with Node::reset() and its outgoing arcs with Arc::reset(), upon which it is ready to call Action::execute(). After
the Action is executed, each outgoing Arc evaluates a CompletionCondition
which checks if a transition to a next Node may be made. It may evaluate if a
Command is done, if a variable has a certain value or evaluate other expressions.
When the end node is reached, the program stops.
The execution of a node in itself and the traversing of the arcs are a
deterministic process and do not add indeterminism to the execution of the
operations.
1 The C++ construct "placement-new" offers a possibility for application-managed memory allocation.
Figure 5.2: Program graph structure.
Figure 5.3: A UML class diagram of a program graph.
Figure 5.4: ProgramGraph sequence diagram.
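The following C++ sketch condenses the class diagram of Figure 5.3 into a few lines: a node executes one action, its outgoing arcs guard the transition to the next node, and one traversal step is a bounded amount of work. The structs and the step() function are illustrative simplifications, not the framework's real classes.

#include <functional>
#include <iostream>
#include <vector>

struct Node;

// An arc guards the transition to its target node with a completion condition.
struct Arc {
    std::function<bool()> condition;   // e.g. "is the command done?"
    Node* target;
};

// A node executes one action; returning false flags a program error.
struct Node {
    std::function<bool()> action;
    std::vector<Arc> arcs;
};

// One traversal step: run the current node's action, then follow the first arc
// whose condition holds. This is bounded work, fit for a real-time thread.
Node* step(Node* current, bool& error) {
    if (!current->action()) { error = true; return current; }
    for (const Arc& a : current->arcs)
        if (a.condition()) return a.target;
    return current;   // no arc valid: stay here and retry in the next step
}

int main() {
    Node end{[] { std::cout << "end node\n"; return true; }, {}};
    Node start{[] { std::cout << "set parameter\n"; return true; },
               {{[] { return true; }, &end}}};   // arc with the True condition

    bool error = false;
    Node* current = &start;
    while (current != &end && !error)
        current = step(current, error);
    step(current, error);   // execute the end node's action
}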
5.1.2 Program Status
Programs are loaded and executed in a task's Program Processor. Once loaded, they can be started, stopped, paused and stepped node by node, and they can be queried for their status: ready, stopped, running, paused or error. The transitions between these states, in relation to the methods of the ProgramGraph class, are given in Figure 5.5. The error state is entered when
an action returns false. The ProgramGraph::step() function may be used in
both running and paused states, but will execute at most one action in the latter case. The effect of step in the running state depends on the program execution policy and is discussed next.
Figure 5.5: UML State diagram of the ProgramGraph Status.
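The sketch below is illustrative only and does not reproduce the real ProgramGraph interface; it shows the status values of Figure 5.5 and the different effect of step() in the running and paused statuses.

#include <iostream>

// Illustrative status values, loosely following Figure 5.5.
enum class ProgramStatus { ready, running, paused, stopped, error };

struct ProgramSketch {
    ProgramStatus status = ProgramStatus::ready;
    int executed = 0;   // number of actions executed so far

    void start() { if (status == ProgramStatus::ready)   status = ProgramStatus::running; }
    void pause() { if (status == ProgramStatus::running) status = ProgramStatus::paused; }
    void stop()  { if (status != ProgramStatus::error)   status = ProgramStatus::stopped; }

    // In the running status the Program Processor calls step() every cycle;
    // in the paused status an external step() executes at most one action.
    void step() {
        if (status == ProgramStatus::running || status == ProgramStatus::paused)
            ++executed;
    }
};

int main() {
    ProgramSketch p;
    p.start();
    p.step();   // running: invoked by the processor
    p.pause();
    p.step();   // paused: one externally requested statement
    p.stop();
    std::cout << "actions executed: " << p.executed << '\n';   // prints 2
}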
5.1.3 Program Execution Policy
The ProgramGraph may contain a number of policies by which it can execute its program. The ProgramProcessor executes a program statement using ProgramGraph::step(), which executes the current node and, if any arc condition evaluates to 'true', makes the target node the current node and resets its action and outgoing arcs. The program execution policy determines how many steps are executed in uninterrupted sequence. The default policy is to execute statements until a node has no arcs which evaluate to 'true'. We call this a wait condition, since it forces the execution of the program to be postponed to the next step. In the next step, the action is executed again and all conditions are evaluated. If an action may be executed only once,
a decorator (see Subsection 5.1.5) can be used to execute a contained action
only once after Node::reset(). In practice, methods are executed directly
one after each other, while a program encounters a wait condition for most
commands. When a program is paused, the ProgramProcessor no longer
executes the step method. When the step method is externally invoked, the
ProgramGraph will execute exactly one statement. Other policies may execute
a given maximum number of statements or even try to ignore errors or handle
exceptions.
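A minimal C++ sketch of this policy is given below; it assumes a simple 'execute once after reset' decorator and a completion condition that becomes true after a few steps, both invented for the example. It shows why the decorator is needed: the node is stepped repeatedly while its wait condition holds, but the wrapped action must not be re-sent.

#include <functional>
#include <iostream>

// Decorator that executes its wrapped action only once after a reset, so that
// re-running a node while waiting on its arcs has no further side effects.
class OnceAction {
    std::function<bool()> wrapped_;
    bool done_ = false;
public:
    explicit OnceAction(std::function<bool()> a) : wrapped_(std::move(a)) {}
    void reset()   { done_ = false; }
    bool execute() {
        if (done_) return true;   // already executed since the last reset
        done_ = true;
        return wrapped_();
    }
};

int main() {
    int cycles = 0;
    OnceAction sendCommand([] { std::cout << "command sent\n"; return true; });
    auto commandDone = [&cycles] { return ++cycles >= 3; };   // completes after 3 steps

    // Default policy: keep stepping the same node until an arc condition holds.
    sendCommand.reset();
    do {
        sendCommand.execute();     // runs once; later calls are no-ops
    } while (!commandDone());      // wait condition postpones the transition
    std::cout << "transition taken after " << cycles << " steps\n";
}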
5.1.4 Program Scripting
To facilitate the construction of program graphs, we have developed a scripting language which allows creating and loading program graphs in a (running) task. A script parser queries the factory interfaces of all peer tasks to create actions which execute methods or commands, or which modify program variables or task properties. The nodes are wired to each other using classical branch statements such as if ...then ...else and while(...) constructs. The syntax of the scripting language is very similar to the C programming language, although it lacks the notion of pointers to objects and is thus not able to define object-oriented structures. A task is, however, represented as an object within the language, such that a task's operations, properties, events and ports are members of that task. The language can handle any user-defined type and can be extended to parse expressions operating on these types. An introduction to this language can be found in (Soetens and Bruyninckx ). The
resulting program graphs are fully real-time executable: no exceptions are thrown and no run-time type information is required during execution.
Listing 5.1: Invoking a method
var double newforce = 10.0;
do PeerTask.setParameter("Force", newforce);
The program script parser is also used to build the interactive 'task browser' application. It is an application-independent human-machine interface which connects to any task in the system and allows browsing its interface, invoking methods or commands, and modifying properties of the task or one of its peers. An introduction to the task browser and its capabilities is also given in (Soetens and Bruyninckx ).
5.1.5 Program Statements
To clarify the rather complex program execution semantics, some examples of common functional programming statements are converted into node-arc structures, and light is shed on the advantages of the above execution semantics. The syntax of the algorithms conforms to the Orocos program script syntax.
Invoking a Method
The simplest construct is to invoke a peer method, for example as in Listing 5.1. Since a method is a type of Action, and a Node contains an Action, the construction of a node which executes an action is straightforward, as in Figure 5.6 and Figure 5.7. A method node M has one outgoing arc A which contains the True condition, which always evaluates to true. Hence,
when node M is executed, both the action m and condition t are first reset,
which has no effects (both are stateless). The method m is then executed
synchronously and the condition t is evaluated. Since t always returns true,
the ProgramProcessor follows the arc A and moves on to the next node N .
Invoking a Command
Commands, such as the movement command in Listing 5.2, are somewhat more complex than methods, but the program execution semantics are well suited to execute such asynchronous operations. The node and arc of a command
are shown in Figure 5.8. Extra layers of indirection are needed to wrap
both command and its completion condition in order to adhere to command
execution semantics. The command needs to be guarded against multiple
invocations when its completion condition does not immediately evaluate to
Figure 5.6: Program graph for invoking a method.
Figure 5.7: UML sequence diagram for invoking a method.
true, since the ProgramProcessor executes the action as long as no arc can be
taken. Furthermore, its completion condition may not be evaluated as long as
the command was not executed by the peer’s CommandProcessor, thus there
must be a way to delay its evaluation and detect the command’s execution
status. A small optimisation is possible when we allow the ProgramProcessor
to execute a Command directly when the command is owned by the task itself.
This situation is explained first.
Listing 5.2: Invoking a command
var double velocity = 12.5;
var double position = 1.250;
var double axis = 3;
do robot.moveAxis( axis, velocity, position );
Figure 5.8: Command node-arc structure.
Figure 5.9 shows a sequence diagram for invoking a command ma of the activity's task. The command's MoveAxisAction is wrapped in an AsynchronousAction asyn which is a Decorator (Gamma, Helm, Johnson,
and Vlissides 1995) which only executes the action ma once after a reset by
the program’s node. The outgoing arc a of a command node contains its
CompletionCondition. The condition may be evaluated directly since the
command’s action was executed directly.
Figure 5.9: UML sequence diagram for invoking a local command.
Figure 5.10 shows a class diagram and Figure 5.11 shows a sequence
diagram where the command is to be dispatched to a remote CommandProcessor,
thus where a program invokes the command of another task. In comparison
with Figure 5.9, two decorators were added to the command’s Action. The
TrackStatusAction stores the execution status of the command’s action. It
shares this state with the TrackStatusCondition which delays evaluation
of the CommandCondition as long as the command was not executed. It
also shares this state with the DispatchAction which will return the actual
command status when the command is executed. The DispatchAction
thus replaces the AsynchronousAction object and actually requires multiple
invocations by the program node to report back the command status.
Figure 5.10: UML class diagram for invoking a remote command.
Figure 5.11: UML sequence diagram for invoking a remote command.
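The following C++ sketch captures the essence of this arrangement using a few invented classes (DispatchAction, TrackStatusCondition, and a plain queue standing in for the peer's CommandProcessor); it is not the framework's implementation, but it shows how a shared status lets the condition wait until the peer has actually executed the command.

#include <functional>
#include <iostream>
#include <memory>
#include <queue>

// Status shared between the dispatching action and the delaying condition.
struct DispatchStatus { bool executed = false; };

// Queues the command once for the peer's command processor; the program node
// may invoke it repeatedly while waiting for the completion condition.
class DispatchAction {
    std::shared_ptr<DispatchStatus> status_;
    std::queue<std::function<void()>>& peer_queue_;
    std::function<void()> command_;
    bool queued_ = false;
public:
    DispatchAction(std::shared_ptr<DispatchStatus> s,
                   std::queue<std::function<void()>>& q,
                   std::function<void()> cmd)
        : status_(std::move(s)), peer_queue_(q), command_(std::move(cmd)) {}
    bool execute() {
        if (!queued_) {
            peer_queue_.push([st = status_, c = command_] { c(); st->executed = true; });
            queued_ = true;
        }
        return true;
    }
};

// Delays evaluating the real completion condition until the command has run.
class TrackStatusCondition {
    std::shared_ptr<DispatchStatus> status_;
    std::function<bool()> completion_;
public:
    TrackStatusCondition(std::shared_ptr<DispatchStatus> s, std::function<bool()> c)
        : status_(std::move(s)), completion_(std::move(c)) {}
    bool evaluate() const { return status_->executed && completion_(); }
};

int main() {
    std::queue<std::function<void()>> peer_processor;   // stand-in for the peer task
    auto shared = std::make_shared<DispatchStatus>();
    bool moved = false;

    DispatchAction dispatch(shared, peer_processor, [&moved] { moved = true; });
    TrackStatusCondition done(shared, [&moved] { return moved; });

    dispatch.execute();                                  // program node queues the command
    std::cout << "done? " << done.evaluate() << '\n';    // 0: peer has not run it yet
    peer_processor.front()(); peer_processor.pop();      // peer's processor executes it
    std::cout << "done? " << done.evaluate() << '\n';    // 1: condition now holds
}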
Listing 5.3: A while loop
var int i = 0;
while ( i != robot.totalAxes() ) {
  do robot.disableAxis( i );
  set i = i + 1;
}
Executing a Loop
Looping and branching are the most common constructs in functional programming languages and are supported by program graphs. This Section only shows a classical while loop (Listing 5.3), but other constructs can be (and have been) created likewise. A single-statement loop only needs to consist of a single node and an arc which evaluates how many times the action in the node was executed, as in Figure 5.12. When the action is a command, or when multiple actions must be executed within the loop, a branch arc must be constructed which connects the last action with the first and is guarded by the conditional statement of the loop, as in Figure 5.13.
Figure 5.12: A program graph with a single action loop.
The constructs for other loop types or branch statements are similar. Loops can contain loops or branch statements in turn. These features are supported by the program script parsers.
Figure 5.13: A program graph with multiple actions within a loop.
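As a small illustration of the single-action loop of Figure 5.12, the C++ sketch below models one node with an action and a loop-back condition; the struct and names are assumptions made for this example only.

#include <functional>
#include <iostream>

// One node whose outgoing arcs either loop back to the node itself or leave
// the loop, depending on the loop condition.
struct LoopNode {
    std::function<void()> action;
    std::function<bool()> loopAgain;   // condition on the loop-back arc
};

int main() {
    const int totalAxes = 6;           // illustrative value
    int axis = 0;

    LoopNode n{
        [&axis] { std::cout << "disableAxis(" << axis << ")\n"; ++axis; },
        [&axis, totalAxes] { return axis != totalAxes; }
    };

    // Traversal: execute the action, then take the loop-back arc while its
    // condition evaluates true; otherwise take the exit arc.
    do { n.action(); } while (n.loopAgain());
}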
5.1.6 Program Interfacing
The functions of the ProgramGraph in Figure 5.3 may not be invoked concurrently while the program is executed, in order not to interfere with the invocations of the ProgramProcessor. A program borrows the CommandProcessor (Subsection 4.5.2) of its task to accept its start, stop, pause, . . . commands. A program has in common with the task that its local
variables could be represented as properties and its functions exported as
commands. Hence a light-weight task can be constructed which offers this
interface to access and manipulate the program, but shares its command and
program processor with its ‘parent task’. This ‘program task’ is then a peer
of the task in which it is loaded, as in Figure 5.14. Allowing programs to be manipulated as tasks opens transparent possibilities for integrating programs in a network of tasks, and makes it possible to inspect and even modify running programs.
Figure 5.14: A task network with a program task. Legend: (P)roperties, (O)perations, (E)vents, (D)ata flow.
5.1.7 Summary
The real-time program graph infrastructure offers a real-time framework for
executing statements and evaluating conditions in a control application. It
is powerful enough to execute any functional program as long as it does not
involve recursive algorithms. It is fit to execute methods, commands or any
action defined in the system. The program infrastructure relies on a command
processor for the manipulation of program graphs. For usability, it relies on the presence of a generic programming language for building action types and on a script parser for building the program graph structure.
5.2 Real-Time State Machines
Since activities are executed only when a task is in a certain state, it would be
convenient to express which activities to execute in which state and describe
how and when transitions from one state to another are made or allowed.
The management of such a state machine forms an activity in itself, and hence state machines can execute other state machines as their activity, forming a hierarchical state machine. A change from one state to another can be caused
by an event or by evaluating a transition condition. Naturally, both activities
and transitions of the state machine must be compatible with the results of
the previous Sections and Chapters. The construction of a state machine
infrastructure in this Section merges all results of this work. State machines
execute program graphs as activities, thus execute peer tasks commands and
methods and can make state transitions using conditions and events. The
state machine is the most appropriate construct to build Process components
to keep track of the current state of the data flow components of the Control
Kernel. The state machine must be as real-time as the activities it executes.
Switching controllers, performing a recalibration or monitoring correctness
all require real-time response times. The state machine is the embodiment of
the logic and process knowledge of a physical machine and must change as
the machine changes. In the field, hardware up- or downgrades will require state machine changes; hence, like program scripts, state machines should be on-line (un-)loadable, such that remote maintenance or upgrades become a possibility.
Figure 5.15 shows an example state machine for a fictitious application.
The application builder defines the Initialise, SelfTest, . . . , Cleanup
states and the transitions between them. One state is the initial state and
one state is the final state. Any number of in between states can be defined,
depending on the needs of the application. The arrows between states denote
the events upon which a transition is made. Clearly, different states react to
different events. Each state may execute a (real-time) activity, which is not
shown in the figure. The state machine infrastructure of this work allows one to realise and manage state machines as shown in this figure.
Figure 5.15: UML state diagram of an application’s state machine.
Harel’s state charts (Harel 1987) have proven to be sufficiently expressive
to specify discrete behaviours. OMG's UML work group acknowledged the state chart model by embracing it completely in its 1.0 specification and leaving it fundamentally unmodified in successive revisions. A state machine is thus the realisation of a UML state chart. (OMG g) and (Douglass 1999) are the preferred sources for an overview of the semantics of state charts. State charts have the following improvements over classical Mealy-Moore (Mealy 1955; Moore 1956) finite state machines, which were specifically designed with circuit electronics in mind. State charts
• are designed for modelling software behaviour.
• can model nested state machines to specify hierarchical state
membership.
• can execute state machines concurrently and asynchronously.
• use pseudo states to model state change dynamics.
• allow a list of actions to be executed upon state transition.
As motivated earlier (Selic 1993), state charts are less suitable to derive
an implementation from the specification because of transition semantics
ambiguities with respect to the state’s activity. We refined the state chart
with respect to these transition semantics and the special meaning of the
initial and final state of a state chart. More specifically, the state machine design we present extends classical state charts in the following ways:
• State charts use action lists as entry, exit and transition programs, which are to be executed 'atomically', as if in an infinitely small amount of time. In our approach, these programs may be activities and may thus take a finite amount of time to complete.
• The initial and final states are safe states, being entered when the state
machine is created and stopped respectively. A transition from the final
state to the initial state is always valid. A transition from any state to
the final state is always valid. Both kinds of transitions need not be
defined by the state machine designer.
• The state charts specification leaves the semantics of interrupting the
state activity open, as well as the semantics for denied transitions. Our
approach defines one run program to define the activity and one or more handle programs to handle denied transitions.
The next Sections dissect the resulting state machine model.
5.2.1 State Machine Model
A state machine consists of a finite number of states. When a state is entered, the entry program is executed; next, the state's activity is executed until a transition occurs. A transition triggers the execution of a transition program, the exit program of the current state and the entry program of the next state. The state chart formalism allows defining two kinds of state transitions: by event and by evaluating a condition. The semantics of a state transition are slightly different in this work compared to state charts. A transition event in a state is commonly defined as:
state-name: event-name(parameter-list): [guard] {transition-program} target-state
Our framework interprets such a transition rule as follows. When the event
event-name occurs in state-name, its data is stored in the parameter-list and
a guard condition checks (for example, by inspecting the parameter-list) if
the event may be accepted in this state. If so, the transition-program, which can also access the parameter-list, is executed, then the state's exit program, and finally the target-state is entered. A flow chart shows these semantics in Figure 5.16.
A transition condition from state-name to target-state is commonly defined
as:
state-name: [guard] {transition-program} target-state
Our framework interprets such a transition rule as follows. The transition condition, guard, is checked after the run program of state-name. If no transition condition is valid, the state's handle program is executed and the run program is started again. When the transition may be made, first the transition-program is executed, then the state's exit program, and finally the target-state is entered. A flow chart shows these semantics in Figure 5.17.
Figure 5.16: UML state diagram for processing events in a state.
Figure 5.17: UML state diagram of evaluating transition conditions in a state. The
event handling is omitted but will take place as in Figure 5.16 if event transitions
are defined.
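The sketch below renders these event transition semantics in plain C++; the State and EventTransition structs and the double-valued parameter are assumptions made for illustration and do not mirror the framework's classes.

#include <functional>
#include <iostream>
#include <string>

// A state with entry and exit programs (reduced to callbacks here).
struct State {
    std::string name;
    std::function<void()> entry;
    std::function<void()> exit;
};

// A transition rule: when the event fires in 'from', the guard inspects the
// event data; if accepted, the transition program, exit and entry run in order.
struct EventTransition {
    State* from;
    State* to;
    std::function<bool(double)> guard;
    std::function<void(double)> transition_program;
};

void handleEvent(State*& current, const EventTransition& t, double param) {
    if (current != t.from) return;     // the event is not handled in this state
    if (!t.guard(param)) return;       // denied: no state change
    t.transition_program(param);
    t.from->exit();
    current = t.to;
    current->entry();
}

int main() {
    State waiting{"Waiting", [] { std::cout << "enter Waiting\n"; },
                             [] { std::cout << "exit Waiting\n"; }};
    State moving {"Moving",  [] { std::cout << "enter Moving\n"; },
                             [] { std::cout << "exit Moving\n"; }};

    EventTransition onMove{&waiting, &moving,
        [](double p) { return p >= 0.0; },                         // guard on the event data
        [](double p) { std::cout << "store target " << p << '\n'; }};

    State* current = &waiting;
    handleEvent(current, onMove, -1.0);   // guard fails: remain in Waiting
    handleEvent(current, onMove, 1.25);   // accepted: Waiting -> Moving
}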
We define a StateMachine (Figure 5.18) by an initial State, a final
State and any number of intermediate states. A State has (all optional)
an entry, exit, run and handle StateProgram. All these programs
have the same execution semantics as the ProgramGraph of the previous
Section. The StateMachineProcessor executes its state machines using
StateMachine::step(), which in turn executes the current program of the
current state. Furthermore, each state defines its Transitions to target
states. With each Transition, a TransitionProgram may be associated and
a Guard which evaluates if the transition may be taken. As a specialisation, the EventGuard has access to the event arguments of an EventHandler which is subscribed to an Event (not shown). Each EventHandler is associated with exactly one Transition. Each State may have one or more Preconditions which must hold true when it is entered; a precondition functions like an extra guard which is evaluated in addition to the transition's guard.
Currently, state machine hierarchies are modelled as parent-child relations between StateMachine objects (Figure 5.18). This does not fully conform to the state charts standard: in state charts, the hierarchy is formed within states, thus a state contains a state machine. This nested state machine is then started when the state is entered and is executed instead of the state's entry, run or exit program. Although adapting the current model to these semantics remains future work, the behaviour can be mimicked by starting a child state machine in the state's entry program and stopping or pausing it (in case of history2) in the exit program.
2 History in state charts means that the state 'remembers' in which sub-state it was before it was last left, and enters that sub-state immediately the next time.
Whether transition conditions are evaluated depends on the mode of the state machine. A state machine in the reactive mode only reacts to events. This mode is mainly used if the activity of the state is realised externally and the state machine merely influences that activity through the events it receives. In the automatic mode, the run program contains the activity, which is run continuously, and in addition the transition conditions are checked. The motivation for these two modes comes from the open interpretation of how state charts can be realised and from the demands of control applications. A state machine in control should be able to either monitor an external activity, such as the Process component in the control kernel, or execute an activity itself in a given state, such as a Generator component keeping track of the axis path planning state and calculating the interpolation between setpoints.
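One step of the automatic mode can be summarised by the following C++ sketch, which is an illustrative reduction (invented struct names, no event handling): the run program is executed, the condition transitions are checked, and the handle program runs when no transition fires.

#include <functional>
#include <iostream>
#include <vector>

// A state reduced to its run and handle programs.
struct MachineState {
    std::function<void()> run;
    std::function<void()> handle;
};

struct ConditionTransition {
    std::function<bool()> guard;
    MachineState* target;
};

// One automatic-mode step: run the activity, then either transition or handle.
MachineState* automaticStep(MachineState* s,
                            const std::vector<ConditionTransition>& transitions) {
    s->run();
    for (const ConditionTransition& t : transitions)
        if (t.guard()) return t.target;
    s->handle();
    return s;
}

int main() {
    int measurements = 0;
    MachineState calibrating{[&measurements] { ++measurements; },
                             [] { std::cout << "still calibrating\n"; }};
    MachineState ready{[] { std::cout << "ready\n"; }, [] {}};

    std::vector<ConditionTransition> fromCalibrating{
        {[&measurements] { return measurements >= 3; }, &ready}};

    MachineState* current = &calibrating;
    while (current == &calibrating)
        current = automaticStep(current, fromCalibrating);
    current->run();   // the Ready state is now active
}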
5.2.2 State Machine Status
Figure 5.18: A UML class diagram of a hierarchical state machine.
Figure 5.19: UML state diagram of the StateMachine status.
Like a program graph, a state machine has a state of its own. We denote these states as status in order not to confuse them with the user-defined states of a state machine. The state diagram in Figure 5.19 shows the evolution of
its status. The states and transitions correspond to the StateMachineStatus
and StateMachine methods respectively, shown in Figure 5.18. Upon creation
and before destruction, a state machine is inactive: it does not react to events nor execute any activity. When the StateMachine::activate() method is called, it executes the entry program of the initial state. The error status can be reached from virtually any state and is not shown. A request for a transition to the final state is always valid, even if no such transition is defined by the state machine designer; this makes it possible to safely stop (StateMachine::stop()) a task from any state. A request for a transition from the final state to the initial state (a StateMachine::reset()) is also always valid. When the state machine is deactivated, it executes the exit program of the current state and no longer reacts to events.
Reactive Mode When a state machine is activated (Figure 5.19), it enters
the initial state and executes its entry program. The entry program may start
an activity, initialise state data or activate and start a child state machine.
It then waits for the events defined by its event transitions. It also accepts state change requests (StateMachine::requestState()), which are guarded by transition conditions, to enter another state. A request may also ask to execute the run program once. The state machine is thus purely reactive.
Automatic Mode When an activated state machine is Running, after a call to StateMachine::automatic(), it executes the run program of the current state, checks the transition conditions when that program is finished and, if none succeeds, executes the handle program. When the handle program is finished, the run program is run again, and so on. These execution sequences may take multiple steps if some of the activity's actions are commands, but they are guaranteed to be executed as if they form a single action if the program has no wait conditions (see Subsection 5.1.3). Incoming events are processed between the entry and exit programs, and if necessary delayed until after transition condition evaluation and the handle program. If a guard prevents the event from having effect, the event's handle program, if present, is executed.
Error Handling As mentioned in Subsection 5.1.2, a program node may
indicate failure after being executed. If this happens in any state program, the
state machine stops the execution of that program and flags the error status.
A parent state machine may detect this and request the child’s final state or
even deactivate it in case this transition fails too. If an error is detected during
deactivation, a second deactivation request will finish deactivation without
further execution of the erroneous program.
5.2.3 State Machine Execution Policy
From the moment a state machine is activated, the StateMachineProcessor invokes the StateMachine::step() method and hence delegates state program execution and transition processing to the state machine. At any time, the state machine can be paused. From that moment onward, the state machine processor no longer invokes the step method, and whichever program was being executed is also paused. When the state machine's step method is externally invoked, it will execute one statement in the current program or evaluate one transition. This mechanism is possible because the ProgramGraph can be paused as well. This policy allows debugging state machines during development.
5.2.4 State Machine Scripting
Analogously to building program graphs, constructing a state machine may be too cumbersome and hard to realise with only the model of Figure 5.18 at hand. A scripting syntax was developed which allows describing hierarchical state machines. A state machine script parser reads the script and builds on-line an object structure which can execute the described state machine and activities. Although the parsing is a non-deterministic process, the execution of the resulting state machine is deterministic and thus suitable for execution in real-time threads. The language is non-standard and is not even able to model all state graph functionality directly. Providing a portable specification thus remains future work, but it does not require additional changes to our state machine model; it is merely the front-end that needs to be improved.
A state machine is specified much like a class in object-oriented languages. After specification, it can be instantiated multiple times with different parameters. A full syntax description is given in (Soetens and Bruyninckx ), but a short overview is given below.
Listing 5.4 shows the definition of a StateMachine of type StateMachineType.
The parameters (param) of a state machine are provided upon instantiation; the variables (var) are members of the state machine which can be used by its states to store and exchange data. Any number of states can be defined with the keyword state followed by the name of the state (StateName). A state may have preconditions, which are entry guards and are chained (logical AND) with any guard for a transition to this state. When the state is entered, the entry program is executed; it is written in syntax identical to the program script syntax. The same holds for the run, exit and handle programs. The transitions list contains the condition transitions and is thus evaluated when a state change is requested in the reactive mode, or in between run program executions in the automatic mode. The guard is formulated in the well-known if ... then syntax, followed by the transition program, again in program script syntax. The transition program is finished by a select statement which indicates the TargetState of this transition. Event
transitions are analogous to condition transitions but start with the keyword
transition, followed by the name of the event, which may be a peer’s event,
and a list of variables which receive the data of the event. The guard and
transition program are analogous to those of the condition transitions. The select statement is now optional (denoted by [...]), and when the guard fails, a program can be executed in the else branch and an alternative state can be selected.
A StateMachine can be instantiated with the syntax of Listing 5.5.
RootStateMachine is a keyword which informs the parser that we wish to
instantiate a StateMachine of type StateMachineType. The resulting object gets the name machineName and an argument list which initialises the parameters of that state machine. In this way, multiple state machines
can be instantiated with different arguments.
Listing 5.4: Minimal state machine script
StateMachine StateMachineType
{
  param ...
  var ...
  state StateName
  {
    precondition ( expression )
    entry {
      // Program syntax
    }
    run {
      // Program syntax
    }
    exit {
      // Program syntax
    }
    transitions {
      if ( expression ) then {
        // Program syntax
      } select TargetState
    }
    handle {
      // Program syntax
    }
    transition eventName( variable list )
      if ( expression ) then {
        // Program syntax
      } [ select TargetState ] else {
        // Program syntax
      } [ select OtherState ]
  }
}
Listing 5.5: Instantiating a task's state machine
RootStateMachine
  StateMachineType machineName( argument list )
Building a hierarchical state machine has analogous syntax, as shown in Listing 5.6. Any number of children can be instantiated.
Listing 5.6: Instantiating a child state machine
StateMachine StateMachineType
{
  SubStateMachine
    ChildType childName( argument list )
  // ...
}
5.2.5 State Machine Interfacing
Analogously to a ProgramGraph, a StateMachine can be controlled from other activities. It also requires a command processor for serialising the incoming commands with the activity of the state machine processor. The resemblance between state machines and tasks is again noteworthy. As shown in Figure 5.20, a state machine can be seen as a light-weight peer task, S, of the task in which it is executed. Its child state machines, Sa and Sb, are in turn peers of S. The variables and parameters of a state machine are mapped to task properties, and the interface of a StateMachine (Figure 5.18) appears as commands and methods in the task's operations interface. These state machines can now be interfaced from peer tasks or from activities of the same task.
Figure 5.20: A task network with a hierarchical state machine task. Legend: (P)roperties, (O)perations, (E)vents, (D)ata flow.
5.2.6 Summary
UML state charts form a solid basis for modelling concurrent, hierarchical
state machines. The state chart model is realised by using program graphs
as activities and asynchronous events or transition conditions to trigger state
transitions. The presented state machine model is capable of executing purely
reactive programs or executing activities and checking transition conditions
actively. The resulting state machine hierarchy can be interfaced as a task peer network, allowing access to its parameters and the use of its operations interface.
5.3 Task Execution Engine
The Execution Engine is the central activity of a task. It invokes the
Command, Event, Program and State Machine Processor activities in
sequence (Figure 5.21), such that communication within the Execution Engine
is not concurrent and thus safe. A periodically triggered Execution Engine will
invoke the step() functions of each Processor sequentially, in effect, polling
for work. For periodic control tasks, like servo loops, this is only a small overhead, which is incurred in sequence with the control activity.
Figure 5.21: UML sequence diagram of the Execution Engine.
A non-periodically triggered Execution Engine has much in common with the Reactor pattern (Schmidt, Stal, Rohnert, and Buschmann 2001). A single synchronisation point, such as a semaphore, acts as an event generator to indicate that work is to be done, and is signalled when a command is queued, an event is raised or a program's command finishes. For each signal, the Execution Engine executes the step() of each Processor to poll for work.
Either way, periodic or non-periodic, the Execution Engine functionality is executed as the main activity of a task. Collocated tasks can share a single execution engine, which sequences their activities. The Execution Engine is not subject to distribution, since this would defeat its reason for existence: to localise the serialisation of incoming communication with the local activities. Hence the Execution Engine is not visible in the interface of a control task, although the command and event processors are accessible to accept commands and process events from the task's operation and event interfaces.
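The core of a periodically triggered engine can be sketched as follows in C++; the Processor and ExecutionEngine classes are simplified assumptions for this example, not the framework's implementation, and a non-periodic variant would additionally block on a semaphore-like signal before each cycle.

#include <iostream>
#include <vector>

// Illustrative processor; the real engine drives the command, event, program
// and state machine processors through a common step() interface.
struct Processor {
    const char* name;
    void step() { std::cout << name << "::step()\n"; }
};

class ExecutionEngine {
    std::vector<Processor*> processors_;
public:
    void add(Processor* p) { processors_.push_back(p); }
    // One engine cycle: poll every processor in sequence, so the communication
    // handled inside the engine is serialised and needs no locks.
    void step() {
        for (Processor* p : processors_) p->step();
    }
};

int main() {
    Processor commands{"CommandProcessor"}, events{"EventProcessor"},
              programs{"ProgramProcessor"}, machines{"StateMachineProcessor"};

    ExecutionEngine engine;
    engine.add(&commands);
    engine.add(&events);
    engine.add(&programs);
    engine.add(&machines);

    for (int cycle = 0; cycle < 2; ++cycle)   // e.g. driven by a periodic thread
        engine.step();
}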
5.4 Application
This Chapter concludes with an application example which requires the
presence of state machines and activities. This example extends the 'Car tracking' application from Section 1.4. The interface of the components is
extended as shown in Figure 5.22. The application fetches an image of the
play field with the Camera component and sends it to the Image Recognition
component. The latter calculates a vector from the camera to the car. Using
the current position and orientation of the camera, the location of the car in the room is calculated. The camera is then aimed such that the object is located in the middle of the image.
Figure 5.22: The interface of the components of Figure 1.8 is extended with additional commands.
The Program A third component, the Tracking component, is added which describes the tracking activity with a program graph (this is a role analogous to the Process component of Chapter 3). Figure 5.23 illustrates how this description can be decomposed into a program with actions and conditions. The control loop starts with taking a camera image in node N1, which is done by an action which sends a command to the camera component. The program then waits until the image is ready, A12, which is the completion condition of that command. Next, a command to process the image is sent to the image recognition component in node N2. Two outcomes are possible: the car is located, A24, or it is not, A23. These options are logically AND'ed with the completion condition of the command. In case the car is located, the camera is moved towards the location of the car in node N4. In the other case the program jumps to a check point, node N3, where it is determined whether the program should stop, A3e. No action is taken there. Otherwise, A31, the program jumps again to node N1.
Figure 5.23: A car localisation program.
The State Machine When the localisation program is used in the field, it is not desirable to track the car continuously. Before the car is tracked, the camera needs calibration, or it needs to be positioned in a given direction. This leads to a number of states in which the application can be. In each state, another activity (program) is executed, and events define the transitions between states. Figure 5.24 illustrates the state machine of the application, the activity which is executed in each state, and the events that cause state transitions. This state machine is loaded in the Execution Engine of the Tracking component and is thus executed by the State Machine Processor.
Figure 5.24: The states of a car localisation application.
The state machine is straightforward. When the state machine is activated, it enters the initial state, Startup, which starts both the camera and the image recognition components. To start the application, the state machine itself must be started; it then enters the Waiting state. There it waits for a movement command or for the start calibration event. When the camera is calibrated, the application is Ready for Localisation. When the Start Localisation event is emitted, the car localisation program described earlier is run, until the Stop Localisation event is received.
Whether this particular state machine polls for these transition conditions
or waits for events is merely an implementation detail. Since the Tracking
component needs to execute a periodic activity in the Localise Car state, it is
reasonable that the Execution Engine, and thus the State Machine Processor, is executed by a periodic activity.
The state machine of Figure 5.24 can be written down as a script. For illustration purposes, this is done in Listing 5.7 and Listing 5.8. The listing assumes that the Tracking component reacts to the moveCamera(p), startCalibration() and startLocalisation() events (not shown in the Figures). The state machine type is TrackingApplication. A variable p is used later on in the script to store a target position of the Camera. A stopProgram flag is used as well. The initial state is Startup, which starts the components in its entry program. The transition statement indicates that the Waiting state may be entered when the state machine is started.
Listing 5.7: The state machine running in the Tracking component, Part 1.
StateMachine TrackingApplication
{
  var position p;
  var bool stopProgram = false;
  initial state Startup {
    entry {
      do Camera.start();
      do ImageRecognition.start();
    }
    transitions {
      select Waiting
    }
  }
  final state Shutdown {
    entry {
      do Camera.stop();
      do ImageRecognition.stop();
    }
  }
  state Waiting {
    transition startCalibration() select Calibration
    transition moveCamera( p ) select Positioning
  }
  // See Part 2.
The final state is Shutdown, which is entered when the state machine is stopped and which stops both components. The Waiting state waits for two events, startCalibration or moveCamera. The latter receives an argument, which is
stored in p. Depending on the event type, the Calibration or Positioning
states are entered.
Listing 5.8 contains the second part of the script. The Positioning
state sends the moveCamera command with the stored position to the
Camera component. When this command is completed, the run program
will return and the transitions are evaluated. In this case, the Waiting state
is entered. The Calibration state is analogous to the Positioning state and
the ReadyForLocalisation state is analogous to the Waiting state. Finally, the
program of Figure 5.23 is written down in the run program of the LocaliseCar
Listing 5.8: The state machine running in the Tracking component, Part 2.
// Continued from Part 1.
  state Positioning {
    run {
      do Camera.moveCamera( p );
    }
    transitions {
      select Waiting
    }
  }
  state Calibration {
    run {
      do Camera.calibrate();
    }
    transitions {
      if Camera.isCalibrated() then
        select ReadyForLocalisation
      select Waiting
    }
  }
  state ReadyForLocalisation {
    transition startLocalisation() select LocaliseCar
  }
  state LocaliseCar {
    run {
      while ( !stopProgram ) {
        do Camera.fetchImage();
        do ImageRecognition.processImage();
        if ( ImageRecognition.carLocated() ) then
          do Camera.moveCamera( Camera.getPosition() +
                                ImageRecognition.carLocation() );
        if stopProgram then return;
      }
    }
    transitions {
      select ReadyForLocalisation
    }
  }
}
state. By toggling the stopProgram variable, the program can be stopped at a fixed point. This variable could be toggled by an event.
5.5 Summary
This Chapter proposed a method for executing real-time activities as a node-edge graph which executes actions in its nodes and evaluates conditions in its edges. The advantage of this approach is that it can express most functional programming constructs at reasonable speed under real-time conditions. Furthermore, it allows activities to be reprogrammed in an on-line controller. However, a solution to incorporate recursive algorithms in this method was not proposed. The definition of how activities can be executed is only part of the solution; providing a method to define precisely when and under which conditions activities are executed is a necessary complement. The state chart formalism was refined in order to define precise state change semantics with respect to the activities running within a state. A model of such a state machine was presented which takes advantage of the program graph and event semantics. The state machine model allows real-time execution and state changes, and state machines can be loaded into a running system as well.
The Chapter concluded with an example of a Car tracking application
with a state machine.
Chapter 6
Conclusions
This Chapter concludes with a high-level summary of the results and innovations of this work. Next, its limitations are discussed, as well as future work.
6.1 Summary
This text started with analysing some common machine control scenarios
in which the choice and design of the software control framework were
fundamental for correctly realising the application. In each of the given
machine control applications, it was required that the framework be open in order to be extendible, hard real-time in order to be usable, and provide generic communication semantics in order to build connectible software components. The literature survey gave an overview of state-of-the-art robot and machine control solutions and examined them in the light of these requirements. This was done by looking at five different
facets of a control framework. To design such a framework, the Unified
Modelling Language provides mechanisms suitable to model interactions
and relations between software components. Furthermore, it provides us with a terminology to describe software systems. The real-time properties of a
framework are fundamental as well for (distributed) control applications. It
was argued that the dependencies on the real-time operating system can be
kept minimal with respect to the framework. A real-time process scheduler
and a semaphore primitive are sufficient to control execution of activities. To
realise communication between local activities a universal lock-free primitive
is required. For distributed communication, a real-time communication
protocol is required in addition. In the absence of the latter, only a localised real-time framework can be obtained, such as the one presented in this work.
This does not exclude component distribution, which is another facet of a control framework. Middleware enables distributed components, but requires interfaces which define how one interacts with a distributed component. Finally, some control architectures were discussed with respect to these facets, but no satisfactory architecture could be found. The difference between infrastructure and architecture was a key argument for dismissing existing control 'architectures'. Although architectures or application templates are useful for rapid development of one kind of application, a control framework must offer an application-independent infrastructure on which application-specific components can be built.
The difference between architecture and infrastructure was clearly shown
by the feedback control kernel versus the real-time control framework. In
a top-down approach, an architecture for intelligent feedback control was
presented and the required infrastructure for such an application template
was identified. The design pattern for feedback control decouples data flow (i.e. packet-based or 'streaming' communication) from execution flow (i.e. message-based or 'decision making' communication). This offers major benefits for reusability, since the data flow can be identified for a wide range of applications (it shapes the template), while the execution flow is part of the "Process" of a specific application and requires application-specific configuration.
To realise such an application template and its components, a solid infrastructure is required which allows defining component interfaces and building components with the corresponding behaviour. Defining the interface and implementing the behaviour are the yin and yang of a realised component. This work presented a generic real-time control component interface which exposes its properties, events, operations and data flow ports. These interfaces were motivated by an analysis of common communication and configuration needs. None of these communication primitives are new, but their identification as software patterns, and their capabilities with respect to both real-time execution and on-line constructable and browsable component interfaces, have not been presented in previous work. Moreover, these patterns also provide an answer to the dual synchronous-asynchronous completion requirements of control applications.
The real-time properties were validated for each communication mechanism
and compared to classical lock-based approaches. The presented framework is
superior to the other control frameworks in scalability and real-time latency
through its use of lock-free data exchange for local communication. This approach
allows any controller to be extended with additional components without increasing
the latency of the existing higher-priority components. Distributed control also
benefits from local lock-free communication: priority propagation, which
is common in distributed real-time communication protocols, becomes very
effective since the remote invocation cannot stall on locks and is thus truly
only dependent on its quality-of-service property.
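As a minimal sketch of what such lock-free data exchange can look like (this is not the Orocos implementation; it assumes a single writer and a single reader, and all names are invented), a 'latest value' data object can be built on a triple buffer and one atomic exchange, so that neither side ever blocks the other:

```cpp
#include <array>
#include <atomic>

// Sketch of a wait-free, single-writer / single-reader "latest value" object.
// Writer and reader each own a private slot; publication is one atomic swap.
template <typename T>
class LatestValue {
  static constexpr unsigned kDirty = 0x4;  // "new data published" flag
  std::array<T, 3> buf_{};                 // back, middle and front slots
  std::atomic<unsigned> middle_{1};        // index of last published slot (+flag)
  unsigned back_ = 0, front_ = 2;          // private to writer / reader

public:
  void Set(const T& value) {               // called by the single writer
    buf_[back_] = value;                   // fill the private back buffer
    // Publish it; reclaim the previously published slot as new back buffer.
    back_ = middle_.exchange(back_ | kDirty) & 0x3;
  }
  const T& Get() {                         // called by the single reader
    if (middle_.load() & kDirty)           // new data available?
      front_ = middle_.exchange(front_) & 0x3;  // grab it, hand back old front
    return buf_[front_];                   // always a consistent sample
  }
};
```

The writer always has a private buffer to fill and the reader always returns a consistent sample, so the worst-case access time is a single atomic instruction, independent of what the other side is doing.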
In order to complement the interface definition with encapsulated
behaviour, the realisation of activities and state machines was investigated.
State-of-the-art reactive, hierarchical behaviours can be specified with UML
state charts. State charts have, however, two main shortcomings: they leave
the specification and execution of activities open-ended, and the transition
semantics with respect to these activities are not specified. This work
proposes an activity execution mechanism which allows functional real-time
programs to be expressed that are fit to use the framework's communication
patterns. Rapid prototyping benefits from the browsable nature of the
component's interfaces, since the latter allows real-time activities to be loaded
into a component at run-time. Such extended functionality has not been
observed in previous work and was only made possible by the careful design of
the framework's communication primitives. In order to manage the execution
of activities, hierarchical real-time state machines can be specified, integrated
seamlessly with the event interface of the real-time control components. The
contribution of this work lies again in the expressiveness of these state
machines. Transitions on both events and conditions can be specified, which
allows a state machine to change state both actively and reactively. The
state machine and its activities can be interfaced as a peer task which can be
commanded and inspected down to the variable level.
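The distinction between active and reactive state changes can be sketched as follows (an illustrative fragment, not the state machine implementation of this work; all names are invented): condition transitions are polled by the component's periodic activity, while event transitions are taken from an event callback.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A transition fires when its guard condition holds.
struct Transition {
  std::function<bool()> guard;  // transition condition
  std::string target;           // state entered when the guard holds
};

// A state owns polled (condition) transitions and event-triggered transitions.
struct State {
  std::vector<Transition> polled;             // evaluated on every step()
  std::map<std::string, Transition> onEvent;  // evaluated when an event fires
};

class StateMachine {
public:
  explicit StateMachine(std::string initial) : current_(std::move(initial)) {}
  State& state(const std::string& name) { return states_[name]; }

  // Called periodically by the component's activity: active state changes.
  void step() {
    for (const auto& t : states_[current_].polled)
      if (t.guard()) { current_ = t.target; return; }
  }
  // Called from an event callback: reactive state changes.
  void notify(const std::string& event) {
    auto& table = states_[current_].onEvent;
    auto it = table.find(event);
    if (it != table.end() && it->second.guard()) current_ = it->second.target;
  }
  const std::string& current() const { return current_; }

private:
  std::map<std::string, State> states_;
  std::string current_;
};
```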
The real-time control ‘task’ infrastructure presented in this work provides
a means to build highly interactive and thus truly open controllers. The tasks
form a peer-to-peer network which captures the architecture of the application
at hand. Each entity in the control system can browse both the network and
each task's interface. The task interface provides the means to build objects
which are responsible, in real time, for invoking operations, reacting to events,
or building data flow connections. It thus becomes almost trivial to build a tool
which connects to a control application and inspects or modifies every aspect
of the controller.
6.2 Evaluation and Contributions
It is reasonable to test the results of this work against the same criteria
as the other frameworks investigated in Chapter 2, using Figure 6.1.
6.2.1 Design Methodology
This work re-uses well-established, standardised design methodologies and
practices to communicate its results. The definition of patterns for real-time
communication in the UML guarantees that the results can be compared with
related design patterns. It can be argued that the work did not use the full
expressiveness of this standard. More specifically, the UML profile for time
and scheduling was not used, and OCL expressions were not provided. The
former could largely be left aside because of the scheduler independence
provided by the solutions in this work, while the latter was given in textual form.

Figure 6.1: How well does this work perform, and what does it contribute, in these areas?
6.2.2 Real-time Determinism
This work presents the first machine control infrastructure built exclusively
with lock-free communication. Furthermore, the presented designs fully
decouple real-time and non-real-time activities in order to guarantee time
determinism. The real-time properties of the design patterns were validated
experimentally and compared to traditional lock-based approaches. It was
concluded for each communication primitive that the patterns effectively
favour the highest-priority activity, in contrast to the traditional approach.
6.2.3 Robot and Machine Control Architecture
The results of this work contribute to the field of designing machine controllers
because this work does not enforce an architecture, but motivates an infrastructure
for hard real-time communication. A design pattern for feedback control
applications, the backbone of intelligent robot control, is presented, which
also shows how an application template can be designed with the presented
infrastructure. This design pattern is extensively motivated by the decoupling
of data flow from execution flow, and by the role each component type has in
realising this mechanism. The Process component in particular is key in this
pattern, as it defines the locations of coupling in order to obtain a higher level
of decoupling between the data flow components.
6.2.4 Software Component Distribution
This work contributes to the design of distributed machine controllers as it
defined a common interface for a machine control component. This common
interface nullifies the cost of deploying any component after the first, since
neither the deployment environment nor the component needs to be changed to
accommodate new components. A component's 'meta' interface becomes
immediately visible to other components once it is deployed.
6.2.5 Formal Verification
This work is not formally verified, nor formally verifiable. The correctness
of the lock-free communication, however, relies on proven algorithms and was
also validated as such. The expression of the software in UML models such as
state charts makes it potentially integrable with future formal verification and
code generation methods.
6.3 Limitations and Future work
The limitations of this work are summarised here for completeness. The
previous section already pointed out that the presented designs were only
validated experimentally, not formally.
The design pattern for feedback control requires a rapid prototyping tool to
easily build components. Today, component interfaces and behaviour are
written by hand in a programming language, while commercial tools allow
control algorithms to be connected and reused graphically. This work would
benefit from integration with such a tool, such that a control algorithm can be
'uploaded' to a component or used as a plug-in.
The presented infrastructure was designed primarily with periodic activities
in mind (which is, however, not rare in control applications). Sending commands
is mainly the business of periodic tasks which periodically check the completion
condition. The synchronous-asynchronous event pattern can, however, be used
to communicate in a reactive system. A clear 'style guide' on how to use the
presented design patterns in practice was not given, although the state machines
and activities are first steps in that direction.
Realistically speaking, the framework in its current form has good chances of
survival in the short term, but none in the long term. Most software frameworks
only live for a decade (or two), after which they are replaced by a state-of-the-art
makeover which solves the problem in a fundamentally different way. There is
hope, though, that because of its openness it may adapt to changing requirements.
The source is open and available, and it has become just mature enough to spark
both academic and industrial interest. Its full potential can only be reached if
people start sharing their modifications and additions, something in which the
free software license surely assists. However, the day it stops changing, it will
become obsolete. The most important reason for this is that the science of
software design is far from finished and is racing from childhood into maturity.
Sooner than most people think, writing software will be very different from today.
The Model Driven Architecture will introduce a new way of writing software in
which the program source files as we know them today are products of automation
tools and not of humans. Fortunately, the knowledge which is in the source of
Orocos need not be lost. Most modelling tools are already capable of capturing
the structure of an application and storing it in a universal file format, which
allows the program to be regenerated in another programming language. The
design patterns presented in this thesis are, however, the best candidates for
long-term survival. Source code is constantly being changed and refactored,
while design patterns are reused and complemented. We had no tools to
transform the latter into the former, so a lot of effort (and, I admit, joy) was
spent on validating the design by implementing it and seeing whether it was of
any use to the real users of Orocos. We will keep working like that in the near
future, gradually adopting the new design methodologies and tools, transforming
the whole into something that is only a distant relative of what we have now.
References
Anderson, J., S. Ramamurthy, and K. Jeffay (1995). Real-time computing
with lock-free shared objects. Proceedings of the 16th IEEE Real-Time
Systems Symposium, 28–37.
Bálek, D. and F. Plášil (2001). Software connectors and their role in
component deployment. In Proceedings of the IFIP TC6 / WG6.1 Third
International Working Conference on New Developments in Distributed
Applications and Interoperable Systems (DAIS), pp. 69–84.
Becker, L. B. and C. E. Pereira (2002). SIMOO-RT—an object-oriented
framework for the development of real-time industrial automation
systems. IEEE Trans. Rob. Automation 18 (4), 421–430.
Boissier, R., E. Gressier-Soudan, A. Laurent, and L. Seinturier (2001).
Enhancing numerical controllers, using MMS concepts and a CORBA-based
software bus. Int. J. Computer Integrated Manufacturing 14 (6), 560–569.
Bollella, G., B. Brosgol, P. Dibble, S. Furr, J. Gosling, D. Hardin, and
M. Turnbull (2000). The Real-Time Specification for Java. Addison-Wesley.
Boost. Portable C++ libraries. http://www.boost.org/.
Borrelly, J. J., E. Coste-Manière, B. Espiau, K. Kapellos, R. Pissard-Gibollet,
D. Simon, and N. Turro (April 1998). The Orccad architecture.
Int. J. Robotics Research 17 (4), 338–359.
Botazzi, S., S. Caselli, M. Reggiani, and M. Amoretti (2002). A software
framework based on Real-Time CORBA for telerobotic systems. See IROS2002
(2002), pp. 3011–3017.
Broenink, J. F. (1999). Introduction to physical systems modelling with
bond graphs. In SiE Whitebook on Simulation methodologies.
Brookes, S. D., C. A. R. Hoare, and A. W. Roscoe (1984). A theory of
communicating sequential processes. J. of the ACM 31 (3), 560–599.
Brooks, A., T. Kaupp, A. Makarenko, O. A., and S. Williams (2005).
Towards component based robotics. See IROS2005 (2005).
Brown, F. T. (2001). Engineering system dynamics. Marcel Dekker.
Brown, N. and P. Welch (2003). Communicating Process Architectures,
Chapter An Introduction to the Kent C++CSP Library. Enschede,
Netherlands: IOS Press.
Brugali, D. and M. E. Fayad (2002). Distributed computing in robotics and
automation. IEEE Trans. Rob. Automation 18 (4), 409–420.
Bryan, K., L. C. DiPippo, V. Fay-Wolfe, M. Murphy, J. Zhang, D. Niehaus,
D. T. Fleeman, D. W. Juedes, C. Liu, L. R. Welch, and C. D. Gill (2005).
Integrated corba scheduling and resource management for distributed
real-time embedded systems. In 11th IEEE Real Time and Embedded
Technology and Applications Symposium (RTAS’05), pp. 375–384.
Burchard, R. L. and J. T. Feddema (1996). Generic robotic and
motion control API based on GISC-kit technology and CORBA
communications. In Int. Conf. Robotics and Automation, Minneapolis,
MN, pp. 712–717.
De Schutter, J., J. Rutgeerts, E. Aertbelien, F. De Groote, T. De Laet,
T. Lefebvre, W. Verdonck, and H. Bruyninckx (2005). Unified
constraint-based task specification for complex sensor-based robot
systems. In Int. Conf. Robotics and Automation, Barcelona, Spain, pp.
3618–3623.
Doherty, S., D. L. Detlefs, L. Grove, C. H. Flood, V. Luchangco, P. A.
Martin, M. Moir, N. Shavit, and G. L. J. Steele (2004). Dcas is not a
silver bullet for nonblocking algorithm design. In SPAA ’04: Proceedings
of the sixteenth annual ACM symposium on Parallelism in algorithms
and architectures, New York, NY, USA, pp. 216–224. ACM Press.
Dokter, D. (1998). HP open architecture machine-tool controller. In
Y. Koren, F. Jovane, and G. Pritschow (Eds.), Open Architecture
Control Systems, Summary of global activity, Milano, ITALY, pp. 103–
114. ITIA-Institute for Industrial Technologies and Automation.
Douglass, B. P. (1999). Doing Hard Time. Addison-Wesley.
Douglass, B. P. (2002). Real-Time Design Patterns. Addison-Wesley.
Fayad, M. E. and D. C. Schmidt (1997). Object-oriented application
frameworks (special issue introduction). Communications of the
ACM 40 (10).
Ferretti, G., M. Gritti, G. Magnani, G. Rizzi, and P. Rocco (2005,
March). Real-time simulation of Modelica models under Linux/RTAI.
In Proceedings of the 4th International Modelica Conference, Hamburg-Harburg,
Germany, pp. 359–365.
Fleury, S., M. Herrb, and R. Chatila (1997). GenoM: a tool for the
specification and the implementation of operating modules in a
distributed robot architecture. In Proc. IEEE/RSJ Int. Conf. Int.
Robots and Systems, Grenoble, France, pp. 842–848.
Gadeyne, K. (2005, September). Sequential Monte Carlo methods for
rigorous Bayesian modeling of Autonomous Compliant Motion. Ph. D.
thesis, Dept. Mechanical Engineering, Katholieke Universiteit Leuven.
Gamma, E., R. Helm, R. Johnson, and J. Vlissides (1995). Design patterns:
elements of reusable object-oriented software. Reading, MA: Addison-Wesley.
Hilderink, G. H., A. Bakkers, and J. Broenink (2000). A distributed real-time
Java system based on CSP. In 3rd IEEE International Symposium on
Object-Oriented Real-Time Distributed Computing ISORC 2000, pp. 400–407.
Gidenstam, A., M. Papatriantafilou, and P. Tsigas (2005). Practical
and efficient lock-free garbage collection based on reference counting.
Technical Report 04, Computing Science, Chalmers University of
Technology.
Gokhale, A. S., B. Natarajan, D. C. Schmidt, and J. K. Cross
(2004). Towards real-time fault-tolerant corba middleware. Cluster
Computing 7 (4), 331–346.
Greenwald, M. and D. R. Cheriton (1996). The synergy between nonblocking synchronization and operating system structure. In Operating
Systems Design and Implementation, pp. 123–136.
Harel, D. (1987). State charts: A visual formalism for complex systems.
Science of Computer Programming 8, 231–274.
Harris, T. L. (2001). A pragmatic implementation of non-blocking linked-lists. Lecture Notes in Computer Science 2180, 300–314.
Harrison, T. H., D. L. Levine, and D. C. Schmidt (October 1997).
The design and performance of a real-time CORBA event service. In
Proceedings of OOPSLA ‘97, (Atlanta, GA).
Hauser, C., C. Jacobi, M. Theimer, B. Welch, and M. Weiser (1993,
December). Using threads in interactive systems: A case study. In 14th
ACM Symp. on Oper. Syst. Principles, Asheville, NC, pp. 94–95.
Herlihy, M. (1991). Wait-free synchronization. ACM Trans. Program. Lang.
Syst. 13 (1), 124–149.
Herlihy, M., V. Luchangco, and M. Moir (2003). Obstruction-free
synchronization: Double-ended queues as an example. In ICDCS
’03: Proceedings of the 23rd International Conference on Distributed
Computing Systems, Washington, DC, USA, pp. 522. IEEE Computer
Society.
Herlihy, M. and J. E. B. Moss (1993). Transactional memory: architectural
support for lock-free data structures. In ISCA ’93: Proceedings of the
20th annual international symposium on Computer architecture, New
York, NY, USA, pp. 289–300. ACM Press.
Herlihy, M. P. and J. E. B. Moss (1991). Lock-free garbage collection for
multiprocessors. In SPAA ’91: Proceedings of the third annual ACM
symposium on Parallel algorithms and architectures, New York, NY,
USA, pp. 229–236. ACM Press.
Hohmuth, M. and H. Härtig (2001). Pragmatic nonblocking synchronization
for real-time systems. In Proceedings of the 2001 Usenix Annual
Technical Conference, Boston, Ma.
Huang, H., P. Pillai, and K. Shin (2002, June). Improving wait-free algorithms
for interprocess communication in embedded real-time systems.
In USENIX Ann. Techn. Conference, Monterey, CA.
IROS2002 (2002). Proc. IEEE/RSJ Int. Conf. Int. Robots and Systems,
Lausanne, Switzerland.
IROS2005 (2005). Proc. IEEE/RSJ Int. Conf. Int. Robots and Systems,
Edmonton, Canada.
Karnopp, D. C., D. L. Margolis, and R. C. Rosenberg (1990). System
Dynamics: A Unified Approach (2nd ed.). Wiley.
Kiszka, J., B. Wagner, Y. Zhang, and J. Broenink (2005, September).
RTnet – a flexible hard real-time networking framework. In 10th
IEEE International Conference on Emerging Technologies and Factory
Automation, Catania, Italy.
Kleppe, A., J. Warmer, and W. Bast (2003). MDA Explained. Addison-Wesley.
Koninckx, B. (2003). Modular and distributed motion planning,
interpolation and execution. Ph. D. thesis, Dept. Mechanical
Engineering, K.U.Leuven, Leuven, Belgium.
Koninckx, B., H. Van Brussel, B. Demeulenaere, J. Swevers, N. Meijerman,
F. Peeters, and J. Van Eijk (2001). Closed-loop, fieldbus-based clock
synchronization for decentralised motion control systems. In CIRP 1st
Int. Conf. on Agile, Reconfigurable Manufacturing.
Koren, Y. (1998). Open-architecture controllers for manufacturing systems.
In Y. Koren, F. Jovane, and G. Pritschow (Eds.), Open Architecture
Control Systems, Summary of global activity, Milano, Italy, pp. 103–
114. ITIA-Institute for Industrial Technologies and Automation.
LaMarca, A. (1994). A performance evaluation of lock-free synchronization
protocols. In PODC ’94: Proceedings of the thirteenth annual ACM
symposium on Principles of distributed computing, New York, NY, USA,
pp. 130–140. ACM Press.
Lampson, B. W. and D. D. Redell (1980, February). Experience with
processes and monitors in Mesa. Communications of the ACM 23 (2), 105–117.
Lefebvre, T. (2003, May). Contact modelling, parameter identification
and task planning for autonomous compliant motion using elementary
contacts. Ph. D. thesis, Dept. Mechanical Engineering, Katholieke
Universiteit Leuven.
Liu, C. L. and J. W. Layland (1973). Scheduling algorithms for
multiprogramming in a hard real-time environment. J. of the
ACM 20 (1), 46–61.
Magnusson, B. and R. Henriksson (1995). Garbage collection for control
systems. In IWMM ’95: Proceedings of the International Workshop on
Memory Management, London, UK, pp. 323–342. Springer-Verlag.
Mallet, A., S. Fleury, and H. Bruyninckx (2002). A specification of generic
robotics software components and future evolutions of GenoM in the
Orocos context. See IROS2002 (2002), pp. 2292–2297.
Massalin, H. and C. Pu (1992). A lock-free multiprocessor os kernel.
SIGOPS Oper. Syst. Rev. 26 (2), 108.
MathWorks. Simulation and model-based design by The MathWorks.
http://www.mathworks.com/products/simulink/.
McKegney, R. A. (2000). Application of patterns to real-time object-oriented
software design. Ph. D. thesis, Queen's University, Kingston, Ontario, Canada.
Mealy, G. H. (1955). A method for synthesizing sequential circuits. Bell
System Technical J. 34 (5), 1045–1079.
Meo, F. (2004). Open controller enabled by an advanced real-time
network (OCEAN). In R. Teti (Ed.), Proc. Intelligent Computation in
Manufacturing Engineering, pp. 589–594. CIRP.
Michael, M. M. (2004). Hazard pointers: Safe memory reclamation for lock-free objects. IEEE Trans. Parallel Distrib. Syst. 15 (6), 491–504.
Michael, M. M. and M. L. Scott (1996). Simple, fast, and practical
non-blocking and blocking concurrent queue algorithms. In PODC '96:
Proceedings of the fifteenth annual ACM symposium on Principles of
distributed computing, New York, NY, USA, pp. 267–275. ACM Press.
Moore, E. F. (1956). Gedanken-experiments on sequential machines.
Automata Studies, Annals of Mathematical Studies (34), 129 –153.
National Institute of Standards and Technology. Open modular architecture
controls (OMAC). http://www.isd.mel.nist.gov/projects/teamapi/.
National Instruments. SystemBuild by National Instruments Inc.
http://ni.com/matrixx/systembuild.htm.
Ocean. Open Controller Enabled by an Advanced real-time Network.
http://www.fidia.it/english/research ocean fr.htm.
OMG. Corba event service v1.2 specification. http://www.omg.org/
technology/documents/formal/event service.htm.
OMG. Corba naming service v1.3 specification. http://www.omg.org/
technology/documents/formal/naming service.htm.
OMG. Corba property service v1.0 specification. http://www.omg.org/
technology/documents/formal/property service.htm.
OMG. Data distribution service (dds) v1.0 specification. http://www.omg.
org/docs/formal/04-12-02.pdf.
OMG. Uml profile for schedulability, performance, and time. http://www.
omg.org/technology/documents/formal/schedulability.htm.
OMG. Uml specification. http://www.omg.org/technology/documents/
formal/uml.htm.
OMG. Unified modeling language (uml). http://www.uml.org/.
Orlic, B. and J. F. Broenink (2004, October). Redesign of the C++
communicating threads library for embedded control systems. In
Proceedings of the 5th Progress symposium on embedded systems, NBC
Nieuwegein, Netherlands, pp. 141–156.
Pardo-Castellote, G. and S. A. Schneider (1994). The Network Data
Delivery System: real-time data connectivity for distributed control
applications. In Int. Conf. Robotics and Automation, San Diego, CA,
pp. 2870–2876.
Paynter, H. M. (1961). Analysis and design of engineering systems. MIT
Press.
Pohlack, M., R. Aigner, and H. Härtig (2004, February). Connecting real-time and non-real-time components. Technical Report TUD-FI04-01-Februar-2004, TU Dresden, Germany.
Prakash, S., Y. H. Lee, and T. Johnson (1994). A nonblocking algorithm for
shared queues using compare-and-swap. IEEE Trans. Comput. 43 (5),
548–559.
Rajwar, R. and J. R. Goodman (2002). Transactional lock-free execution
of lock-based programs. In ASPLOS-X: Proceedings of the 10th
international conference on Architectural support for programming
languages and operating systems, New York, NY, USA, pp. 5–17. ACM
Press.
Reeves, G. E. (1997). What really happened on Mars rover Pathfinder. The
Risks Digest, 19, http://catless.ncl.ac.uk/Risks/19.54.html#subj6.
Rutgeerts, J. (2005). A demonstration tool with Kalman Filter data
processing for robot programming by human demonstration. See IROS2005
(2005), pp. 3918–3923.
Schlegel, C. and R. Wörz (1999, October). The software framework
SmartSoft for implementing sensorimotor systems. In Proceedings of the
IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS), Kyongju, Korea, pp. 1610–1616.
Schmidt, D., M. Stal, H. Rohnert, and F. Buschmann (2001). Pattern-oriented
software architecture: Patterns for concurrent and networked objects. Wiley.
Schmidt, D. C. and F. Kuhns (2000). An overview of the real-time CORBA
specification. IEEE Computer special issue on Object-Oriented Realtime Distributed Computing 33 (6), 56–63.
Schneider, S., V. Chen, G. Pardo-Castellote, and H. Wang (April 1998).
ControlShell: A software architecture for complex electromechanical
systems. Int. J. Robotics Research 17 (4), 360–380.
Schneider, S. A., V. W. Chen, and G. Pardo-Castellote (1995). The
ControlShell component-based real-time programming system. In Int.
Conf. Robotics and Automation, Nagoya, Japan, pp. 2381–2388.
Schulze, M., M. Cumming, and K. Nelson. Sigc++: Callback framework
for c++. http://libsigc.sourceforge.net/.
Selic, B. (1993, April). An efficient object-oriented variation of the
statecharts formalism for distributed real-time systems. In IFIP
Conference on Hardware Description Languages and Their Applications,
Ottawa, Canada, pp. 335–344.
Selic, B. (1996). An architectural pattern for real-time control software.
In Proceedings of the 1996 Pattern Languages of Programming and
Computing, pp. 1–11.
Selic, B., G. Gullekson, J. McGee, and I. EngelBerg (1992, July). ROOM:
An object-oriented methodology for developing real-time systems. In
Fifth International Workshop on Computer-Aided Software Engineering
(CASE), Montreal, Quebec, Canada.
Sha, L., R. Rajkumar, and J. P. Lehoczky (1990). Priority inheritance
protocols: An approach to real-time synchronisation. Trans. on
Computers 39 (9), 1175–1185.
Simon, D., B. Espiau, K. Kapellos, and R. Pissard-Gibollet (1998). The
orccad architecture. Int. J. Robotics Research 17, 338–359.
Simon, D., R. Pissard-Gibollet, K. Kapellos, and B. Espiau (1999).
Synchronous composition of discretized control actions: Design,
verification and implementation with orccad. In RTCSA ’99:
Proceedings of the Sixth International Conference on Real-Time
Computing Systems and Applications, Washington, DC, USA, pp. 158.
IEEE Computer Society.
Soetens, P. and H. Bruyninckx. The Orocos Real-Time Control Services
Manual. http://people.mech.kuleuven.ac.be/∼psoetens/orocos/
doc/orocos-manual.pdf: KU Leuven.
Stallings, W. (1998). Operating Systems, Third Edition. Prentice Hall.
Stevens, W. R. (1992). Advanced programming in the UNIX environment.
Reading, MA: Addison-Wesley.
Stewart, D. B., D. E. Schmitz, and P. K. Khosla (1997). The Chimera
II real-time operating system for advanced sensor-based control
applications. Trans. on Systems, Man, and Cybernetics 22 (6), 1282–
1295.
Stewart, D. B., R. A. Volpe, and P. K. Khosla (1997). Design of dynamically
reconfigurable real-time software using port-based objects. Trans. on
Software Engineering 23 (12), 759–776.
Sundell, H. and P. Tsigas (2004, March). Lock-free and practical
deques using single-word compare-and-swap. Technical Report 2004-02,
Computing Science, Chalmers University of Technology.
SysML. Systems modelling language (sysml). http://www.sysml.org/.
Utz, H., S. Sablatnog, S. Enderle, and K. G (2002). Miro—Middleware for
mobile robot applications. IEEE Trans. Rob. Automation, 101–108.
Valois, J. D. (1995). Lock-free linked lists using compare-and-swap. In
Proceedings of the 14th Annual ACM Symposium on Principles of
Distributed Computing, pp. 214–222.
Waarsing, B., M. Nuttin, and H. Van Brussel (2003, March). Behaviour-based
mobile manipulation: the opening of a door. In 1st International
Workshop on Advances in Service Robotics, Bardolino, Italy, pp. 168–175.
Waarsing, B. J. W., M. Nuttin, H. Van Brussel, and B. Corteville (2005).
From biological inspiration toward next-generation manipulators:
manipulator control focused on human tasks. IEEE Transactions on
Systems, Man, and Cybernetics: Part C 35 (1), 53–65.
Wang, N., D. Schmidt, and D. Levine (2000). Optimizing the CORBA
component model for high-performance and real-time applications.
Warmer, J. and A. Kleppe (2003). The Object Constraint Language Second
Edition. Object Technology Series. Addison-Wesley.
Appendix A
Experimental Validation
This appendix documents the interesting features of the experiments
conducted in this work. Both hardware and measurement method are given.
A.1 Hardware
All tests were conducted on the same common off-the-shelf hardware: a
Pentium III 750 MHz processor with 128 KB level 2 cache, 256 MB SDRAM and a
20 GB ATA hard disk. It provides a network adaptor for remote access and a
graphics adapter suitable for optionally displaying a graphical user interface.
It was extended with a National Instruments NI6024E PCI card, which provides
digital I/O through a BNC-2040 breakout box of the same company.
A.2 Software
The main operating system was GNU/Linux, patched with the RTAI/LXRT
v3.0r4 scheduler in order to create real-time tasks in a rate-monotonic
scheduler setup. The tasks were written using the Orocos library, which uses
RTAI for creating threads, Apache's Xerces for reading XML files and Comedi
v0.77 for accessing the National Instruments card. Since Orocos is a C++
framework, the application and library were compiled with the GNU Compiler
Collection's g++, version 3.3.4.
A.2.1 Time measurement
The experiments were conducted using the 'ThreadScope' feature of the
framework, which uses a digital output interface to generate block waves
that indicate the runtimes of threads (Figure A.1). When a thread wakes
up, Orocos toggles an assigned output bit high, and when it goes to sleep the
bit is toggled low again. The application can reserve additional pins per
thread, which it uses to encode the duration of a set of sequential functions.
By matching the first wave with the last, the application's duration can be
interpreted unambiguously.

Figure A.1: The thread scope encodes time measurements by using block waves.
The thread scope calibrates the duration of a time measurement itself (which
is an operating system call) and stores it for later corrections. Two methods
for capturing these waves were devised. One thread scope implementation
takes timestamps on each wave edge and stores the results in a lock-free buffer
per thread. These buffers are emptied by a non-real-time thread, which means
the real-time threads never block on a write operation. Furthermore, the
data are written to the buffer just before the real-time thread goes to sleep,
so writing the buffer itself is not measured. The non-real-time thread
empties the buffers and writes the time measurements to a file per thread.
The histograms are then generated from each file. The RTOS calibrated the
CPU frequency to 703 533 000 Hz, which means that one time tick equals
1.421 ns; this is the resolution of the clock in this setup and well
below the microsecond order of magnitude of the effects we wish to measure.
A time measurement itself took on average 3.174e-06 seconds, which is quite
large since it requires an invocation of the operating system. Accumulation
of this overhead was avoided by subtracting only two consecutive times,
correcting them and adding the results.
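The idea can be sketched as follows (an illustrative fragment, not the actual ThreadScope code; the driver calls and names are invented, and the real implementation stores the timestamps in a pre-allocated, per-thread lock-free buffer):

```cpp
#include <chrono>
#include <cstdint>
#include <vector>

// Stand-ins for the digital-output driver calls; the real setup would toggle
// a pin of the NI card through the DIO driver.
static void set_bit(int /*bit*/)   { /* raise the thread's assigned output pin */ }
static void clear_bit(int /*bit*/) { /* lower the thread's assigned output pin */ }

class ThreadScope {
public:
  explicit ThreadScope(int bit) : bit_(bit) {}

  // Wraps one execution step of a periodic thread: edge up on wake-up,
  // edge down before going back to sleep, and both edge times recorded.
  template <typename Step>
  void run_step(Step step) {
    const std::int64_t wake = now_ns();
    set_bit(bit_);
    step();                      // the thread's real work
    clear_bit(bit_);
    const std::int64_t sleep = now_ns();
    // Recorded only after the work is done, so this bookkeeping does not
    // appear in the measured runtime; a non-real-time thread would later
    // drain the buffer to a per-thread file.
    edges_.push_back({wake, sleep});
  }

private:
  static std::int64_t now_ns() {
    using namespace std::chrono;
    return duration_cast<nanoseconds>(
               steady_clock::now().time_since_epoch()).count();
  }
  struct EdgePair { std::int64_t wake_ns, sleep_ns; };
  int bit_;
  std::vector<EdgePair> edges_;
};
```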
The second method was devised to validate the correctness of the time
measurements on the target. This alternative thread scope toggles the
digital outputs of the National Instruments card, and a LabVIEW acquisition
program (Figure A.2) was created to measure the wave lengths. The wave
was sampled at 10 MHz and thus has a resolution of 0.1 microseconds, which
is still accurate enough for our purpose. The advantage of this method is
that the overhead on the target is minimal: only the latency of accessing the
device driver influences the experiment. The disadvantage of this method
was that testing all the application types in all configurations was
too time-consuming, since the measurement PC needed to be synchronised
manually with the experiment. In addition, the sampling generated so much
data that only short intervals could be captured. For this reason, the
on-target measurements were used to gather the data for the validation
experiments.

Figure A.2: Detail of the LabVIEW measurements.
A.2.2 Experiment Setup
Only two kinds of Orocos tasks were designed to cope with all experiments.
The first type of task forms an application component which is configured
through properties for its real-timeness, execution frequency, load, etc. It
also contains methods to calibrate the load and to configure it to send events
or commands to designated peers. The four application types and their
topologies are hardcoded in the application's main function.
The second kind of task functions as a test manager. It has all application
tasks as peers and can thus control their behaviour. It loads a program script
from disk, which calibrates each task, enables the thread scope and then starts
the application tasks. After a configurable amount of time, it stops all tasks
and the thread scope, after which the experiment exits. Each experiment
ran for 5 minutes, which is sufficient to detect the worst-case communication
latencies, given the artificial drift in the execution periods.
The results of all experiments ended up in 80 files, one file per real-time
thread, per application type, per validation experiment, adding up to 759 MB
of data. The data was processed with a Python script which creates the
histogram data, and the results were visualised with the Gnuplot plotting tool.
Index
Smartsoft, 42
action, 23
activity, 23, 81, 86
event, 129
signal, 129
synchronisation, 129
activity diagram, 24
advanced user, 3
application builder, 3
application template, 3
artifacts, 24
behaviour, 83
browsing, 145
callback, 129
CAS, 31, 35
Chimera, 36
class diagram, 24
component
data flow, 56
deployment, 54
distribution, 39
execution flow, 56
structure, 53
component builder, 3
configuration flow, 142
connector, 102, 103
contention, 96
control flow, 23, 115
Control Kernel, 47
cascade, 57
example, 66
infrastructure, 63
ControlShell, 37, 106
CORBA, 39
RT-CORBA, 40
CSP, 44
data flow, 23, 101, 102
components, 49
interface, 106
data object, 104
decoupling, 75
activities, 83
communication, 101
deployment, 23
design, 75
real-time, 83
design pattern, 75
behavioural, 86
event, 131
for feedback control, 48
hatching, 75
mining, 75
structural, 86
synchronous-asynchronous, 117
determinism, 28
distribution, 30
drift, 96
end user, 3
event, 23, 131
callback, 132
Event Service, 137
Execution Engine, 64
execution engine, 24, 147, 171
execution flow, 23, 101, 115
commands, 63, 115, 120
components, 51
methods, 63, 115, 121
Property Interface, 143
publish-subscribe, 130
factory, 143
feedback control, 47
framework builder, 2
real-time
distributed, 34
localised, 30
operating system, 29
ROOM, 24, 106
ports, 25
ROOMCharts, 25
Genom, 38
hard real-time, 28
IDL, 40
lock-free, 31, 85
memory allocation, 85
middleware, 39
name server, 143
obstruction-free, 34
Occam, 45
Ocean, 41
OCL, 22
OMAC, 42
on-line, 28
operation, 23
Orccad, 38
Osaca, 41
peer-to-peer, 78, 145
persistence, 142
port, 102
priority inversion, 32
process, 51
components, 51
program, 149
pausing, 153
scripting, 153
Program Processor, 147
program statement, 152, 154
properties, 142
quality of service, 39, 84, 85
scheduler, 28, 32, 85
semaphore, 29, 129
sequence diagram, 24
Simulink, 36
soft real-time, 28
state chart, 162
state chart diagram, 24
state machine, 160, 162
automatic, 168
errors, 168
hierarchical, 165
history, 165
instantiate, 169
model, 163
pausing, 168
reactive, 167
scripting, 168
status, 165
transition, 163
transition condition, 163
transition event, 163
State Machine Processor, 147
statement, 149
system builder, 3
task, 23, 77
behaviour, 81
composition, 78
determinism, 77
integrity, 77
interface, 80
browsing, 66, 81
data flow, 80
events, 80
operations, 63, 80
properties, 65, 81
operation interface, 115
safety, 78
specification, 79
UML, 21
validation
activity, 93
data flow, 107
execution flow, 125
verification, 44
virtual machine, 148
wait-free, 31
Curriculum Vitae
Personal data
Peter Soetens
Date and place of birth: 27 April 1978, Waregem, Belgium
Nationality: Belgian
Email: peter.soetens@gmail.com
Education
• 2001-2005: Ph.D. student at the Department of Mechanical Engineering, Katholieke Universiteit Leuven, Belgium.
• 2000-2001: Master in Artificial Intelligence (option computer science) at K.U.Leuven, Belgium.
• 1996-2000: Industrial Engineer Electro Mechanics (option automation) at KAHO St.Lieven, Gent, Belgium.
List of Publications
[1] H. Bruyninckx and Peter Soetens. Generic real-time infrastructure for
signal acquisition, generation and processing. In The 4th Real-Time
Linux Workshop, Boston, MA, 2002.
[2] H. Bruyninckx, Peter Soetens, and Bob Koninckx. The real-time motion
control core of the Orocos project. In Proceedings of the 2003 IEEE
International Conference on Robotics and Automation, pages 2766–
2771, Taipei, Taiwan, 2003.
[3] H. Bruyninckx, J. De Schutter, Tine Lefebvre, K. Gadeyne, Peter Soetens,
Johan Rutgeerts, P. Slaets, and Wim Meeussen. Building blocks
for SLAM in autonomous compliant motion. In Raja Chatila, Paolo
Dario, and Oussama Khatib, editors, Robotics Research, the 11th
International Symposium, pages 432–441, Siena, Italy, 2003.
[4] Peter Soetens and Herman Bruyninckx. Realtime hybrid task-based
control for robots and machine tools. In Proceedings of the 2005 IEEE
International Conference on Robotics and Automation, pages 260–265,
Barcelona, Spain, 2005.
Presented Seminars and Dissemination
[5] Peter Soetens and Herman Bruyninckx. Orocos: Present state and future.
Free / Open Source Software Projects Concertation Meeting Brussels,
June, 24 2002.
[6] Peter Soetens. Orocos workshop on design and implementation of the
real-time motion control core. http://www.orocos.org, 18-19 November
2002.
[7] Peter Soetens. Lock-free data exchange for real-time applications. Free
and Open Source Developer’s Meeting, Brussels, February, 24 2006.
[8] Peter Soetens. Orocos: The open source reference when it comes to real-time and control. DSPValley Seminar Embedded Systems in Robotics
and Automation, Eindhoven, March, 22 2006.
[9] Peter Soetens. Seminar and hands-on on Orocos: Open Robot Control
Software, Odense. RoboCluster Controller Seminar, April, 5 2006.
[10] Peter Soetens and Herman Bruyninckx. The Orocos Real-Time Control
Services. KU Leuven, http://www.orocos.org.
[11] Peter Soetens and Herman Bruyninckx. The Orocos Component Builder’s
Guide. KU Leuven, http://www.orocos.org.
[12] Peter Soetens and Herman Bruyninckx. The Orocos Robot Control
Software Manual. KU Leuven, http://www.orocos.org.
Een software raamwerk voor ware tijd en gedistribueerde robot- en machinecontrole
Nederlandse Samenvatting
1 Inleiding
Ware tijd systemen (machines) voeren taken uit op vele niveaus en worden
gecontroleerd door ware tijd controlesoftware. Beschouw een hybride robot-werktuigmachine-opstelling waar de robot assisteert voor het plaatsen van
een werkstuk gedurende de manipulatie van de werktuigmachine. Op het
laagste niveau, voeren zowel robot als werktuigmachine een positioneringstaak
uit, gebruik makende van een ware tijd teruggekoppelde controller. Op een
hoger niveau wordt een bewegingspad zonder botsingen gepland voor robot en werktuigmachine. Tussen de bewegingen worden operaties uitgevoerd
op de werkstukken wat een ware tijd synchronisatie vereist. Op een nog
hoger niveau bestaat de taak eruit een serie van deze werkstukken te
produceren en zo verder. Vele parallelle (ware tijd) taken kunnen bezig
zijn: kwaliteitscontrole van het werkstuk, evaluatie van de status van de
machines, het verzamelen van data voor analyse, het visualiseren van de
vooruitgang et cetera. Dit werk stelt oplossingen voor om ware tijd controllers
te ontwerpen door middel van (gedistribueerde) taken zodat de ware tijd
eigenschappen van het systeem niet geschonden worden tijdens communicatie
tussen taken. Zowel de vereiste communicatieprimitieven en de realisatie van
taakactiviteiten worden behandeld in dit werk.
Deze sectie geeft een inleiding op hoe controlesoftware bekeken wordt in dit
werk door vier ontwerper rollen. Vervolgens worden drie controleapplicaties
voorgesteld om het probleemdomein en de relevantie van dit werk uit te klaren.
De sectie eindigt met een overzicht van de contributies van dit werk.
1.1 Ontwerper-rollen
Een controleapplicatie wordt ontworpen door vier verschillende rollen van
bouwers. We beschouwen:
Raamwerkbouwers Ontwerpen de basis infrastructuur voor het bouwen
van componenten en applicaties. Deze infrastructuur is applicatieonafhankelijk
en omvat de besturingssysteemabstractie, ondersteuning voor distributie,
toestandsmachines et cetera.
Applicatiebouwer Deze specificeert de architectuur van de applicatie.
Een stereotype applicatie (mal) wordt ontworpen en de nodige
componenten worden geïdentificeerd, bijvoorbeeld voor teruggekoppelde
controle, humanoïde robotcontrole of staaldraad trekken. De
applicatiebouwer definieert welke componenten er nodig zijn en via welke
interface deze communiceren.
Componentenbouwer In deze rol wordt de concrete inhoud van elke
component gerealiseerd naar de interface gedefinieerd door de applicatiebouwer.
Systeembouwer Stelt een applicatie samen door een applicatie mal in
te vullen met de nodige componenten. De componenten worden
zodanig geconfigureerd dat zij kunnen werken in de voorhanden zijnde
machineomgeving.
Geavanceerde gebruiker Deze gebruikt de interface aangeboden door de
applicatiebouwer. Dit kan zowel een grafische interface als een
laag-niveau-interface zijn.
1.2 Relevantie
Dit werk heeft relevantie in drie domeinen: ware tijd (teruggekoppelde)
controle, distributie en machinecontrole.
• Ware tijd controle.
Dit werk is relevant voor ware tijd
controlesystemen omdat de communicatie primitieven voor locale
communicatie op zo’n manier ontworpen zijn dat tijdsdeterminisme
gegarandeerd is voor elke taak. Daarenboven zal de toegangstijd tot
data dalen naarmate de prioriteit van een taak stijgt. Het bouwen van
modulaire controlearchitecturen met de resultaten uit dit werk wordt
gemakkelijker. Hoofdstuk 3 beschouwt zo een architectuur voor ware
tijd teruggekoppelde controle.
• Distributie. Hoewel distributie toelaat de beschikbare rekenkracht
te verhogen, kan distributie extra indeterminisme veroorzaken door
de zwakheid van netwerk communicatie en het storen van lokale
activiteiten door activiteiten op afstand. Dit werk laat distributie van
controlecomponenten toe door een gemeenschappelijke interface voor
controle te definiëren, het encapsuleren van asynchrone commando’s,
en het gebruik van ‘middleware’ om distributie te realiseren in de C++
programmeertaal die courant is voor het ontwerp van controllers. Verder
laten de eigenschappen van de ware tijd controle hierboven vermeld
toe dat activiteiten op afstand communiceren met ware tijd locale
activiteiten zonder het determinisme van deze laatste te ondermijnen.
• Machine controle. Onder machinecontrole verstaat deze tekst het
reageren van de controlesoftware op discrete gebeurtenissen in een
machine, zoals een positie die bereikt is of een actie die voltooid is.
Dit werk stelt het model voor een synchrone-asynchrone gebeurtenis
voor dat toelaat vanuit parallelle activiteiten te reageren op deze
gebeurtenissen.
1.3 Bijdragen
De bijdragen van dit werk kunnen samengevat worden als volgt:
• Ontwerppatroon voor teruggekoppelde controle: Dit werk
identificeert het ontwerppatroon voor intelligente controle. Dit
wordt gerealiseerd door de scheiding van datastromen en
(beslissings-)uitvoeringsstromen en de identificatie van de stereotype
componenten in zo'n architectuur.
• Ontwerppatronen voor machinecontrole: Dit werk stelt vier
patronen voor controle voor. Zowel voor het uitvoeren van ware tijd
activiteiten als voor het verwezenlijken van ware tijd communicatie
tussen (gedistribueerde) componenten.
• Definitie van ware tijd gedragsinfrastructuur: Een raamwerk
wordt voorgesteld voor de specificatie van ware tijd gedragingen en
activiteiten binnen componenten dat natuurlijk integreert met de
communicatieprimitieven van de componenten.
• Een vrije software raamwerk voor robot- en machinecontrole:
De resultaten van dit werk zijn gevalideerd en gepubliceerd in de Open
Robot Controle Software, Orocos, die publiek beschikbaar is voor alle
partijen.
2 Situering
Dit werk kan gesitueerd worden in een brede waaier van probleemdomeinen.
De focus ligt op determinisme, distributie, ontwerp, validatie en
controlearchitectuur, en het werk probeert een oplossing te formuleren die
in zekere mate aan deze domeinen voldoet.
2.1 Ontwerpmethodologieën
Men kan twee voorname ontwerpmethodologieën aanhalen die geprobeerd
hebben software in het ware tijd domein te modelleren. Deze zijn UML en
ROOM.
UML is ontstaan als modelleertaal om softwarestructuren te beschrijven.
Deze werd recentelijk aangevuld door OCL om tevens beperkingen en
numerieke relaties tussen entiteiten te beschrijven. De ondertussen tot versie
2.0 gegroeide UML taal laat profielen toe om ware tijd systemen in tijd en
logische compositie te beschrijven. Deze techniek wordt MDA genoemd en
veroorzaakt een paradigma verschuiving in software ontwerp net zoals de
beweging van assembleer talen naar de C taal. Dit werk draagt bij tot het
modelgedreven ontwerp door abstracte ontwerppatronen te identificeren die
geschikt zijn voor ware tijd systemen. Tevens wordt de terminologie die in
UML wordt gedefinieerd gebruikt om deze patronen te beschrijven.
ROOM is een methodologie voor ontwerp en implementatie van
gedistribueerde ware tijd software. De auteurs van deze aanpak beweren
dat UML niet voldoet voor het ontwerp van ware tijd systemen. Daarom
hebben ze een dialect ontworpen dat componenten, gebeurtenissen en
toestandsmachines modelleert. Relevant voor dit werk is dat een ROOM
component poorten heeft waarop boodschappen aankomen. Bij aankomst
genereert dit een gebeurtenis waarop de toestandsmachine van de component
op kan reageren. De auteurs laten in het midden hoe gevoelig hun ontwerp
is voor deadlocks en prioriteitsinversies. De communicatieprimitieven in
dit werk zijn hier compleet ongevoelig voor. Verder is het gedrag van een
component online niet aanpasbaar, wat wel mogelijk is in dit werk.
2.2 Tijddeterminisme
Tijddeterminisme in ware tijd systemen wordt gedefinieerd als: “Een gegeven
activiteit moet kunnen reageren binnen een beperkt tijdsinterval op een
gegeven gebeurtenis.” Men spreekt van een hard ware tijd systeem indien
het halen van deze eindtijd leidt tot systeem falen. Het begrip “ware
tijd” in deze tekst heeft steeds deze betekenis. Het woord “online” wordt
gebruikt om zachte ware tijd systemen te benoemen. Dit werk veronderstelt
de aanwezigheid van een ware tijd planner, gegeven dat deze de volgende
eigenschappen heeft: hij kan activiteiten periode-monotoon, eindtijd-monotoon
of vroegste-eindtijd-eerst plannen, en hij biedt een signaal- en wachtprimitief
(semafoor) aan.
Ook de communicatie tussen activiteiten dient ware tijd eigenschappen te
hebben. We kunnen lokale en gedistribueerde communicatie onderscheiden.
Een raamwerk dat ware tijd garanties biedt aan lokale activiteiten zorgt
ervoor dat deze deterministisch blijven tijdens lokale communicatie. Een
raamwerk dat ware tijd garanties biedt aan globale activiteiten garandeert
tevens determinisme voor gedistribueerde communicatie. Dit werk beschouwt
enkel lokale garanties. Voor globale garanties wordt doorverwezen naar werken
over ware tijd middleware.
Om ware tijd garanties te bieden aan lokale communicatie steunt dit werk
op een architectuur met een universeel slot-vrij enkel-woord Compare
And Swap (CAS) uitwisselingsprimitief.
De theorie over slot-vrije
algoritmes is eerst beschreven door Herlihy en later werd bewezen dat
gegeven de eerder vernoemde planners, deze algoritmes een deterministische
tijdsuitvoering hebben. De slot-vrije algoritmes kennen reeds toepassingen in
besturingssystemen en algoritme ontwerp. Dit werk steunt op de bevindingen
van deze werken om de slot-vrije algoritmes te implementeren.
Het gebruik van slot-vrije communicatie voor determinisme in
machinecontroletoepassingen staat nog nergens beschreven en is nog niet aan
de orde in industriële toepassingen.
2.3 Robot- en Machinecontrole
Vele raamwerken voor robot- en machinecontrole zijn reeds beschreven. Deze
sectie haalt een aantal gebreken aan in eerdere raamwerken die niet aanwezig
zijn in dit werk.
Raamwerken kunnen niet distribueerbaar zijn, doordat ze enkel monolithische
controllers toelaten. Dit werk definieert expliciet de interfaces voor distributie
van controllercomponenten.
Raamwerken kunnen exclusief datastroom-georiënteerd zijn. Daardoor
wordt de beslissingslogica gemengd met de datastroom en vergroot dit sterk
de koppeling tussen componenten. Dit werk scheidt deze stromen en voorziet
tevens de scheiding tussen niet ware tijd configuratie en ware tijd uitvoering
van activiteiten.
Raamwerken kunnen te applicatie gericht zijn waardoor bruikbaarheid
en uitbreidbaarheid in het gedrang komen. Hoewel het aanbieden van
abstractielagen in het raamwerk de gebruiker initieel helpt zijn applicatie
te ontwikkelen, behoort het maken van gelaagde domein-abstracties tot het
domein van de applicatie en niet van het raamwerk. Dit werk presenteert
een bouwblok die zowel in horizontale als verticale architecturen kan gebruikt
worden.
2.4 Componenten Distributie
Een component kan gedefinieerd worden als een ’distribueerbare entiteit die
een dienst aanbiedt via een interface’. De dienst die in dit werk wordt
aangeboden is het toelaten dat controlecomponenten elkaar controleren
naargelang de applicatie dat vereist. De distributie impliceert dat een netwerk
de componenten verbindt en ware tijd gedistribueerde componenten laten
een manier toe om de kwaliteit van de dienstverlening over dit netwerk
te specificeren. De Real-Time CORBA standaard is zo’n specificatie. Dit
werk relateert naar zulke standaarden door interfaces voor ware tijd controle
diensten te specificeren zonder zo een standaard expliciet te vereisen. Met
andere woorden, als een ware tijd componentenmodel aanwezig is kunnen
de diensten er gebruik van maken, maar het werkt ook op niet ware tijd
netwerken, bijvoorbeeld om een gebruikersscherm aan een machine controller
te hangen.
2.5 Formele Verificatie
Formele verificatie van software poogt te doen wat sterkteleer doet voor
de bruggenbouwer: wiskundig aantonen dat een ontworpen brug niet zal
instorten onder een gegeven belasting. Gegeven een bepaalde software
structuur, zal het programma correct uitgevoerd worden op vlak van logica
en te halen eindtijden ? De CSP aanpak toonde aan dat de correctheid van
de logica kan bewezen worden voor sterk vereenvoudigde parallel uitvoerende
programma’s. Voor tijdscorrectheid of complexere programma’s bestaat geen
algemene aanpak. Dit werk draagt niet bij in het opsporen van correctheid en
stelt geen bewijzen van correctheid voor van de vooropgestelde oplossingen.
3 De Teruggekoppelde Controlekern
Deze sectie beschrijft het ontwerppatroon voor teruggekoppelde controle en
de realisatie ervan in de Controlekern. In de plaats van een controller te
beschrijven in een datastroom blokdiagram wordt de controletaak in dit
hoofdstuk opgenomen door een aantal componenten die elk een welomlijnde
rol spelen in de controleapplicatie. Het patroon is bedoeld om een controller te
realiseren die op gebruiker commando’s en signalen in machines kan reageren.
Alsook maakt het patroon intelligente controllers en het verzamelen van
meetgegevens mogelijk.
3.1 Het Ontwerppatroon voor Teruggekoppelde Controle
Dit ontwerppatroon heeft tot doel de controleapplicatie te structureren.
De structuur laat toe algoritmes in componenten in te bedden zodat deze
in ware tijd verwisseld kunnen worden, bijvoorbeeld, als de controle van
discrete toestand wisselt. Het patroon kent rollen toe aan de verschillende
componenten en afhankelijk van deze rol zal het algoritme een andere functie
dienen in het geheel.
Participanten Het patroon structureert de applicatie in “Datastroom” en
“Proces” Componenten en “Controlekern-infrastructuur”.
De kerntaak van een digitaal teruggekoppelde controlealgoritme is het
verwerken van gegevens per tijdsinterval. Deze verwerking resulteert in
een stroom van gegevens tussen algoritmes, de datastroom. We kunnen de
verschillende actoren op deze stroom classificeren in 5 rollen of stereotypes:
Generator, Controller, Sensor, Schatter en Effector (Figuur 3): De Sensor-rol encapsuleert het uitlezen van sensoren die gegevens leveren over
de toestand van het te controleren systeem. Deze produceert invoer data
die ter beschikking staat van de andere componenten. De Schatter-rol
encapsuleert de verwerking van invoer - en uitvoer data tot het bekomen van de
model data van het systeem. Deze component analyseert dus de datastroom
en kan bijvoorbeeld discrete toestandsverandering opmerken en melden. De
Generator-rol levert de referentie voor de Controller, door bijvoorbeeld een
interpolatie uit te voeren. Een adaptieve Generator heeft tevens toegang tot
het model - en de invoer data om bijvoorbeeld de referentie te superponeren
op de huidige invoer. De Controller berekent uit de invoer, het model
en de referentie het stuursignaal in de uitvoer. Met deze laatste kan de
Effector de machine aansturen. Deze rollen zijn optioneel en kunnen door
dezelfde component of door meerdere componenten vervuld worden. Het type
datastroom component bepaalt hoe deze verbonden wordt.
De keuze welke componenten actief zijn en in welke volgorde ze worden
uitgevoerd wordt bepaald door de “Proces” componenten participanten. Deze
componentenrol heeft tot doel beslissingen te nemen of diensten aan te bieden.
Zodoende is deze component verantwoordelijk voor het realiseren van de
uitvoeringsstroom. Hij definieert de toestanden van de controller als geheel
en reageert op gebeurtenissen. De Proces component kan tevens passief ten
dienste staan van de andere componenten. Bijvoorbeeld om kinematische
transformaties uit te voeren.
De derde participant, de “Controlekern”, werkt als een infrastructuur voor
de componenten, ook wel ‘componenten container’ genoemd. Deze bevat
standaard diensten zoals componenten lokalisatie in een netwerk, wegschrijven
van controledata en het synchroniseren van tijd.
Structuur De structuur wordt als volgt opgelegd. Een component kan
meerdere facetten implementeren. Een facet maakt het mogelijk een bepaald
onderdeel van de componenten aan te spreken, bijvoorbeeld een facet voor
configuratie parameters, een facet voor gebeurtenissen of een facet voor
datastroom. Het datastroom facet bevat bijvoorbeeld ‘poorten’ die middels
‘connectoren’ worden verbonden. Dit laat toe dat de connectie tussen
componenten beheerd kan worden en de connector encapsuleert mogelijke
buffering en netwerkdistributie van de data.
Figuur 3: Datastromen van het Ontwerppatroon voor Controle: (a) Invoer Gedreven, (b) Model Gedreven. De rechthoekige Componenten-stereotypes bevatten activiteiten, de ovale connectoren bevatten data en structureren de datastroom. De gestippelde Commando-connector is enkel aanwezig in cascadecontrole.
Dit patroon slaagt erin datastroom, uitvoeringsstroom en configuratie te
scheiden door deze in verschillende facetten onder te brengen. De koppeling
tussen deze facetten wordt door de componenten implementatie beheerd. Een
configuratie parameter kan bijvoorbeeld invloed hebben op de verwerking van
de datastroom en deze kan op zijn beurt voor gebeurtenissen zorgen in de
uitvoeringsstroom.
De voorgestelde structuur kan uitgebreid worden naar meerdere controlelussen
door de commando-objecten gelijk te stellen aan de uitvoer objecten. Dus een
generator die datastroom-commando’s wil inlezen moet die ontvangen van een
controller.
3.2 Ontwerppatroon Motivatie en Voordelen
Het structureren van een teruggekoppelde controleapplicatie volgens het
patroon wordt gemotiveerd door de eis die gesteld wordt aan herbruikbaarheid,
uitbreidbaarheid en distributie van de componenten. Herbruikbaarheid
wordt bewerkstelligd door “ontkoppeling” van datastroom, uitvoeringsstroom en
configuratie. Door koppeling tussen datastroomcomponenten te elimineren
via de datastroom-poorten en -connectoren zijn datastroomcomponenten niet
onderling afhankelijk. Daardoor kunnen een of meerdere componenten in
andere applicaties gebruikt worden. De koppeling wordt gevormd door
de procescomponenten die de volgorde van uitvoering van componenten
bepalen waardoor de datastroom op gang komt. Uitbreidbaarheid is mogelijk
doordat het patroon enkel stereotypes vastlegt en er dus een onbeperkt
aantal implementaties van elk stereotype aanwezig mag zijn. Distributie is
mogelijk wanneer de componenten facetten beschikbaar gemaakt worden voor
middleware. Dit hoeft maar eenmalig te gebeuren wanneer deze facetten zo
ontworpen zijn dat ze universeel en inspecteerbaar zijn.
3.3 Conclusies
Dit patroon definieert een architectuur en componentenrollen voor teruggekoppelde
ware tijd controle.
Componenten met een datastroom-rol vormen de
datastroomtopologie van de applicatie en voeren samen het controlealgoritme
uit. Componenten met een uitvoeringsstroom rol grijpen in op de datastroom
door datastroom componenten in of uit te schakelen en te (her)configureren.
De concrete communicatie tussen deze componenten wordt verduidelijkt in de
volgende sectie.
4 Analyse en Ontwerp voor Communicatie in Controleraamwerken
This section aims to define the different forms of real-time communication between control components. First, real-time activities are defined that initiate a communication or react to it. Next, the communication of data is split into the continuous data flow between components and the sending of goal-directed messages between components, the execution flow. Another form of real-time communication occurs when activities wish to wait for certain events; this is called synchronisation between activities. Finally, the configuration of a component, that is, the reading and writing of parameters, can be seen as a last form of communication, in which the communication consists of the stream of data sent to a component in order to configure it.
Communication between components is always initiated from the activity in a component and possibly handled by the activity of the receiving component. Neither activity needs to have real-time properties, but communication from a non-real-time activity must not weaken the real-time properties of the receiver, and vice versa. This is the main requirement of every form of real-time communication between components: not disturbing real-time properties.
4.1 The Real-Time Control Activity
An activity consists of a set of actions that must be carried out in a component, for example handling incoming communication, initiating outgoing communication, or processing data. The states of an activity are defined in Figure 4. Initially an activity is inactive and no processing of data or communication takes place. When the activity is started, it becomes active and initialisation actions can be executed. The activity is then ready for execution and waits for a trigger. The trigger decides whether a single action is executed (step()) or an unbounded series of actions may be executed (loop()), at the discretion of the activity. In the latter case the activity must provide a method that allows it to leave the loop when the activity is stopped. Finally, finalisation functions are executed and the activity becomes inactive again.
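A minimal C++ sketch of this activity life cycle is given below. The interface name and the hooks initialize(), step(), loop(), breakLoop() and finalize() simply mirror the state diagram of Figure 4; they are an assumption for illustration, not necessarily the framework's exact API.

    // Sketch of an activity life cycle; the periodic or event-driven trigger
    // is assumed to be provided elsewhere (e.g. by a real-time thread or timer).
    class ActivityInterface {
    public:
        virtual ~ActivityInterface() {}

        // Called once when the activity is started; return false to refuse starting.
        virtual bool initialize() { return true; }

        // Called on every trigger to execute a single action.
        virtual void step() {}

        // Alternatively, called once and allowed to run until breakLoop() is invoked.
        virtual void loop() { step(); }

        // Must cause loop() to return when the activity is stopped.
        virtual bool breakLoop() { return true; }

        // Called once after the last step()/loop(), before becoming inactive again.
        virtual void finalize() {}
    };

A concrete activity overrides these hooks; the surrounding framework decides, per trigger, whether to call step() or loop(), and calls finalize() when the activity is stopped.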
4.2 Data-Flow Communication
The first form of communication between components is the sending of packets of data containing, for example, measurements or setpoints. Data-flow communication forms the basis of feedback control. To build a modular controller, the components need to be decoupled. This can be achieved with a port-connector based design, shown schematically in Figure 5. Each component defines a number of data-flow ports indicating which data it wishes to read or write. The port type also determines whether the data is buffered or unbuffered. Compatible ports are then connected through a connector object, which realises the data exchange.
The connector has the following responsibilities. It encapsulates how the data is sent from one port to the other. It checks whether the connection is valid, for example whether the ports are compatible and whether both a reader and a writer of the data are present. It manages the connection and can notify the components if the connection is broken. To allow inspection of the contents of the connector, each connector has a data object that allows the current data packet to be read.
Components expose through their interface which data flows they can read from and write to other components.
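As an illustration, a minimal C++ sketch of the port-connector idea follows; the names Connector, WritePort and ReadPort are hypothetical and only mirror the responsibilities listed above, without buffering, type checking or connection management.

    #include <memory>

    // A connector holds the shared data object of one connection.
    template <class T>
    class Connector {
    public:
        void write(const T& sample) { data_ = sample; }   // used by the writing port
        T read() const { return data_; }                  // used by the reading port
    private:
        T data_{};   // in the real design: a (lock-free) data object or buffer
    };

    // Ports only declare what a component wants to write or read.
    template <class T>
    class WritePort {
    public:
        void connectTo(std::shared_ptr<Connector<T>> c) { conn_ = c; }
        bool write(const T& sample) {
            if (!conn_) return false;   // an unconnected port is harmless
            conn_->write(sample);
            return true;
        }
    private:
        std::shared_ptr<Connector<T>> conn_;
    };

    template <class T>
    class ReadPort {
    public:
        void connectTo(std::shared_ptr<Connector<T>> c) { conn_ = c; }
        bool read(T& sample) const {
            if (!conn_) return false;
            sample = conn_->read();
            return true;
        }
    private:
        std::shared_ptr<Connector<T>> conn_;
    };

Because components only see their own ports, two components are connected by giving both ports the same Connector instance; neither component needs to know the other.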
Figure 4: State diagram of an activity.
Figure 5: Components with ports and a connector.

The data-flow pattern was validated experimentally in this work. A comparison was made between traditional lock-based designs and the lock-free design chosen in this thesis. Figures 6 and 7 show the measured times a data-flow communication takes within a network of active components in one process under a rate-monotonic scheduler. The first figure shows the communication times of the highest-priority task in the system, which performs 2000 communications per second and loads the processor at 20%. The second figure shows the communication times of a lower-priority task in the system, which performs 1000 communications per second and loads the processor at 20%. Other tasks that further load the processor and communicate are present as well. For both tasks the maximum communication time is more deterministic in the lock-free case, and the worst case is up to 30 times faster for the highest-priority task. The average communication time is also significantly lower in the lock-free case, because the operating system scheduler no longer has to intervene.
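To make the lock-free data exchange concrete, the sketch below shows a simplified 'latest value' data object in C++ for a single real-time writer and multiple readers. The class name LockFreeDataObject and the buffer-ring approach are illustrative assumptions, not the thesis' exact algorithm; in particular, the sketch ignores the case of a very slow reader being overtaken by the writer, which a production implementation must rule out with additional buffers or reader bookkeeping.

    #include <atomic>

    // Simplified lock-free 'latest value' exchange: one writer, many readers,
    // no operating system locks. Assumption: readers copy a sample quickly,
    // so the writer does not wrap around the small buffer ring in the meantime.
    template <class T, unsigned N = 3>
    class LockFreeDataObject {
    public:
        // Single writer: fill a slot that is not currently published,
        // then publish its index atomically.
        void Set(const T& sample) {
            unsigned next = (write_index_ + 1) % N;
            buffers_[next] = sample;
            write_index_ = next;
            latest_.store(next, std::memory_order_release);
        }

        // Readers always obtain the most recently published sample.
        T Get() const {
            return buffers_[latest_.load(std::memory_order_acquire)];
        }

    private:
        T buffers_[N] = {};
        unsigned write_index_ = 0;
        std::atomic<unsigned> latest_{0};
    };

Because neither Set() nor Get() can block, the highest-priority task never waits for a lower-priority one, which is exactly the property measured in Figures 6 and 7.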
4.3 Execution-Flow Communication
Not all communication between components can be expressed in a semantically correct way through data-flow communication. Examples are reading or writing component parameters, or giving commands to components and following them up. These communications differ from data-flow communication because they express the intention of a sender to bring about a change in the receiving component, and thus couple sender and receiver. The result is that something gets executed, which is why sending such messages in a control system is called execution flow.
This work distinguishes between synchronous and asynchronous execution flow. This is illustrated with a fictitious 'camera component'. An image can be requested through the 'getImage()' function, which behaves like a classical function called on the interface of the camera component. This type is called a 'method'. The burden of executing a method is carried by the activity of the caller, as shown in Figure 8(a). The camera component can offer the same functionality through an asynchronous 'command', 'fetchImage()'. The command is placed in a queue of the camera component; when the activity in the camera component wakes up, it reads out the queue and executes the command. Reading a new image is now at the expense of the activity of the camera component, as shown in Figure 8(b). After some time, the caller can check with a completion condition, 'isImageFetched()', whether the image has been fetched. The image is then available through, for example, a data-flow port.

Figure 6: Communication times: high priority, lock-based (top) and lock-free (bottom).

Components expose through their interface which methods and which commands they offer to other components.
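The difference between both forms can be sketched in C++ as follows. Only getImage(), fetchImage() and isImageFetched() come from the example above; the camera component class, its command queue and the helper names are hypothetical and single-threaded for brevity (a real component would use a thread-safe queue).

    #include <deque>
    #include <functional>

    struct Image {};   // placeholder for real image data

    class CameraComponent {
    public:
        // Synchronous method: executed by the caller's own activity.
        Image getImage() { return grabFromHardware(); }

        // Asynchronous command: only queues the request; the camera
        // component's activity executes it later.
        void fetchImage() {
            imageFetched_ = false;
            queue_.push_back([this] {
                lastImage_ = grabFromHardware();
                imageFetched_ = true;
            });
        }

        // Completion condition, polled by the caller after a while.
        bool isImageFetched() const { return imageFetched_; }

        // Called from the camera component's own activity (e.g. in step()).
        void processQueue() {
            while (!queue_.empty()) {
                queue_.front()();          // execute one queued command
                queue_.pop_front();
            }
        }

    private:
        Image grabFromHardware() { return Image{}; }
        std::deque<std::function<void()>> queue_;
        Image lastImage_;
        bool imageFetched_ = false;
    };

The caller thus pays for getImage() itself, while fetchImage() shifts the work, and the real-time burden, to the camera component's activity.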
4.4 Synchronisation of Activities
Synchronisation occurs when an activity waits for an external event before executing its next actions. A primitive is therefore needed that informs an activity of changes in the system. A first solution is to 'test' (poll) whether an event has taken place. This was already demonstrated in the previous section with the completion condition 'isImageFetched()'. A problem arises, however, when the test is expensive or when the activity is not able to carry out the test. The primitive described here offers a solution for these situations.

Figure 7: Communication times: low priority, lock-based (top) and lock-free (bottom).

Figure 9 illustrates how an event, 'imageFetched', is communicated. The first step is that the synchronising (waiting) component tells the camera component that it wishes to react to the event. The synchronising component can register an immediate, synchronous, reaction and a deferred, asynchronous reaction. When the image has been read in by the camera component, this event is published. This means that first all registered synchronous reactions are executed immediately; such a reaction could, for example, switch on a light or record the time at which the image was taken. Next, all asynchronous reactions are published. For this purpose each synchronising component has a queue that keeps track of which events still need to be reacted to. When its activity runs again, it checks the queue, executes the deferred reactions itself, and processes the new image.
Components expose through their interface which events they can publish to other components.
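A minimal C++ sketch of such an event primitive is shown below; the Event class and its connect and emit operations are illustrative assumptions, not the framework's actual API. Synchronous callbacks run immediately in the publisher's activity, asynchronous callbacks are appended to the subscriber's own queue and executed when its activity runs again.

    #include <deque>
    #include <functional>
    #include <utility>
    #include <vector>

    // Hypothetical event primitive (single-threaded sketch, no locking shown).
    class Event {
    public:
        using Callback = std::function<void()>;

        void connectSynchronous(Callback cb) { sync_.push_back(std::move(cb)); }

        void connectAsynchronous(Callback cb, std::deque<Callback>& subscriberQueue) {
            async_.push_back({std::move(cb), &subscriberQueue});
        }

        // Called by the publisher, e.g. the camera component, once the image is read.
        void emit() {
            for (auto& cb : sync_)
                cb();                              // immediate reactions
            for (auto& sub : async_)
                sub.queue->push_back(sub.cb);      // deferred reactions, handled later
        }

    private:
        struct AsyncSubscription { Callback cb; std::deque<Callback>* queue; };
        std::vector<Callback> sync_;
        std::vector<AsyncSubscription> async_;
    };

    // The subscriber's activity empties its own queue when it runs again:
    //   while (!myQueue.empty()) { myQueue.front()(); myQueue.pop_front(); }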
Figure 8: Synchronous methods (a) versus asynchronous commands (b).
4.5 Configuration and Deployment
A last facet of the communication with control components is the configuration of components. The purpose of configuration is to set up deployed components in the field so that they can function correctly in their environment. For the camera component, for example, the resolution in pixels or the focus distance can be set. These properties of a component form part of the component's interface.

Figure 9: Activities can react to events.

It may be that changing these values is not allowed at every moment. Subsection 4.1 proposed a solution for this: initialisation actions are executed before the activity is started, and new parameters can be read in at that point.
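A possible shape of such a property, sketched in C++ under the assumption of a simple name, description and value triple, is shown below; the Property class and the example names are illustrative, not the framework's actual types.

    #include <string>

    // Hypothetical property: a named, documented value that can be read and
    // written to configure a deployed component.
    template <class T>
    class Property {
    public:
        Property(std::string name, std::string description, T value)
            : name_(std::move(name)), description_(std::move(description)),
              value_(std::move(value)) {}

        const std::string& name() const { return name_; }
        const T& get() const            { return value_; }
        void set(const T& v)            { value_ = v; }

    private:
        std::string name_, description_;
        T value_;
    };

    // A camera component could expose, for example:
    //   Property<int>    resolution{"resolution", "Image width in pixels", 640};
    //   Property<double> focus{"focus", "Focus distance in meters", 1.5};
    // and read these values in its initialisation actions, before its
    // activity is started.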
4.6 Conclusions
This section proposed five forms of communication between control components:
• data-flow communication for processing streams of data between components,
• synchronous execution-flow communication for executing methods,
• asynchronous execution-flow communication for executing commands,
• synchronous or asynchronous publication of events, and
• configuration of component parameters.
In addition, a framework of activities was proposed that carries out these communications on behalf of a component.
5 Real-Time Task Activities
The previous section left open how an activity can be realised and what the consequences for an activity are when communication takes place. The last section of this work proposes a solution to these problems. In this work, activities are realised with programs, which execute ordered actions conditionally. To execute programs themselves conditionally, state machines are used, designed such that certain programs are only executed in certain states. Events can cause the state machine to change state, so that other programs are executed as a consequence. Programs and state machines are illustrated in this section, again using the camera component.
5.1 Programs
Programs are illustrated with a real-time image processing application. The application takes an image with the camera component and sends it to the image processing component, which extracts from the image the distance between an object in the room and the camera. Using the current position of the camera, the true location of the object in the room is then computed. The camera must orient itself such that the object lies in the centre of the image. The goal is to track the object with the camera and thus to always know where it is.
Figure 10 illustrates how the above program can be decomposed into actions and conditions. A program consists of nodes (N) that each contain an action. The nodes are connected by arcs (A) that each contain a condition. A program is executed by executing the action of a node, then looking at its outgoing arcs and evaluating their conditions. If a condition is true, that arc is followed and the action of the next node is executed. In this way one jumps from node to node until an end node is reached. Any activity that can be divided into a finite number of actions can be expressed this way (a minimal sketch of this execution scheme follows the list below).

Figure 10: An object localisation program.

This way of working has the following advantages:
• It is deterministic in time when the actions and conditions are themselves deterministic in time. Traversing the graph is itself a deterministic algorithm and adds no indeterminism to the actions it executes.
• Such a graph can be constructed while the application is running, by creating "node" and "arc" objects and connecting them.
• The program can be executed action by action, stepping through it one step at a time.
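The following is a minimal, hypothetical C++ sketch of the node-and-arc execution scheme just described; the type ProgramNode and the step() function are illustrative only, and the framework's own program representation is richer.

    #include <functional>
    #include <memory>
    #include <vector>

    // One node holds an action; each outgoing arc holds a condition and a target.
    struct ProgramNode {
        std::function<void()> action;
        struct Arc {
            std::function<bool()> condition;
            std::shared_ptr<ProgramNode> target;
        };
        std::vector<Arc> arcs;   // an empty vector marks an end node
    };

    // Execute one step: run the current node's action, then follow the first
    // arc whose condition is true. Returns the next node, the same node when
    // no condition is true yet (in this simplified sketch the action is then
    // re-run on the next step), or nullptr when an end node was reached.
    std::shared_ptr<ProgramNode> step(const std::shared_ptr<ProgramNode>& node) {
        node->action();
        if (node->arcs.empty())
            return nullptr;
        for (const auto& arc : node->arcs)
            if (arc.condition())
                return arc.target;
        return node;
    }

    // Stepwise execution of a whole program:
    //   auto current = start;
    //   while (current) current = step(current);

Because the traversal itself only calls the user's actions and conditions, it adds no timing indeterminism of its own, which is the first advantage listed above.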
5.2 State Machines
When the localisation application from the previous section is used in the field, it will not always be desirable to track the object. Before the object is tracked, the camera requires a calibration, or one first wants to position the camera before it starts tracking the object. This leads to a number of states the application can be in. In each state a different activity prevails (a different program is executed), and events define the transition from one state to another. Figure 11 illustrates the state machine of the application, the activities executed in each state, and the events that cause state changes.

Figure 11: The states of an object localisation application.

This work allows more complex state machines than the one given in the example. The most important properties of the state machines in this work are (a minimal sketch follows the list below):
• they are as expressive as UML State Charts,
• they can therefore describe nested states,
• they have real-time properties,
• they can both react to events and actively evaluate conditions to bring about state changes,
• each state machine defines a safe state, to which it can always go in an emergency,
• they can be described in a text file and loaded into or unloaded from running applications, and thus allow upgrades of deployed applications.
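As an illustration only, the sketch below shows a flat C++ state machine for the tracking example; the state names (calibrate, position, track), the transition events and the helper functions are hypothetical, and nested states, the safe state and the real-time guarantees are omitted. In the framework itself such machines are described in a text file and loaded at run time rather than hand-coded.

    #include <functional>
    #include <map>
    #include <string>
    #include <utility>

    // Hypothetical flat state machine: each state runs a program on every
    // update; published events trigger transitions between states.
    class StateMachine {
    public:
        void addState(const std::string& name, std::function<void()> program) {
            programs_[name] = std::move(program);
            if (current_.empty()) current_ = name;   // first state added is the initial state
        }
        void addTransition(const std::string& from, const std::string& event,
                           const std::string& to) {
            transitions_[{from, event}] = to;
        }
        // Called by the component's activity on every trigger.
        void update() { programs_[current_](); }
        // Called when an event is published.
        void handleEvent(const std::string& event) {
            auto it = transitions_.find({current_, event});
            if (it != transitions_.end()) current_ = it->second;
        }
    private:
        std::string current_;
        std::map<std::string, std::function<void()>> programs_;
        std::map<std::pair<std::string, std::string>, std::string> transitions_;
    };

    // Usage for the example application:
    //   StateMachine sm;
    //   sm.addState("calibrate", runCalibration);
    //   sm.addState("position",  positionCamera);
    //   sm.addState("track",     trackObject);
    //   sm.addTransition("calibrate", "calibrated", "position");
    //   sm.addTransition("position",  "positioned", "track");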
5.3 Conclusions
This section proposed a solution for realising real-time activities using program graphs. This method has the advantage that it does not disturb the real-time properties of actions and allows the previously defined communication primitives to be invoked. State machines are then used to decide which activities are executed at which moment and to determine the effect of particular events. This makes it possible to describe applications with multiple states and activities in this framework.
6 Conclusions
This section summarises the conclusions of this work. They are:
• This work presented the design pattern of feedback control applications. The decoupling in this pattern is chosen such that the data flow, the execution of commands and the configuration are separated. This choice of decoupling allows a highly modular composition of components, and the roles that can be assigned to a component make it easier to structure a feedback control application.
• This work describes how components for real-time control can be specified such that they can solve the problems that arise in real-time control applications. Five facets are addressed. The application is configurable because a component can be configured through its properties. The application can react to changes in the system because components can publish events to other components. The application can carry out tasks autonomously on request because components can be commanded. New algorithms can be offered to the application because components can offer methods. Finally, the application can process data because components produce and consume data and can form a network. All these primitives are designed specifically for the deterministic timing requirements of real-time applications and have a lock-free design. As a result, these primitives perform most time-deterministically for the real-time task with the highest priority. With this contribution the proposed framework offers a way to structure control applications through these communication primitives.
• This work describes how real-time active and/or reactive behaviour can be realised in components. The behaviour of a component is reactive when it can react to events; this is realised by real-time state machines. In each state the component carries out an activity, which is realised by real-time programs. Both the activities and the events can be defined in terms of the five communication primitives proposed in this work. With this contribution the proposed framework offers a way to implement control applications.
This work also has limitations. The command primitive assumes that one or more periodic tasks are present in the system; the publication of events, however, already offers a counterweight here. This work does not propose a method for converting standardised UML State Charts to the framework. Furthermore, the work has not been formally verified and does not offer a ready-made solution for the formal verification of the applications described in it.
The future design of control applications and frameworks will build on the design patterns presented in this work. It will soon become possible to describe them entirely as platform-independent models, so that code generation and model-driven design take a prominent place in the design of control applications.