Avoiding late-night debugging

VOLUME 24, NUMBER 9
NOVEMBER 2011
EMBEDDED SYSTEMS DESIGN
The Official Publication of The Embedded Systems Conferences and Embedded.com
Avoiding
late-night
debugging
16
Five forensic
techniques
13
Scrum for
embedded systems
23
Semiconductor
revolution
29
Twice the performance. Half the power.
Innovate without compromise with the Xilinx 7 series FPGAs.
The new 7 series devices are built on the only unified architecture
to give you a full range of options for making your concepts
a reality. Meet your needs for higher performance and lower
power. Speed up your development time with next-generation
ISE® Design Suite. And innovate with the performance and
flexibility you need to move the world forward.
www.xilinx.com/7
LOWEST SYSTEM COST
BEST PRICE / PERFORMANCE
HIGHEST SYSTEM PERFORMANCE
2011 WINNER
© Copyright 2011. Xilinx, Inc. XILINX, the Xilinx logo, Artix, ISE, Kintex, Virtex, and other designated brands included herein are
trademarks of Xilinx in the United States and other countries. All other trademarks are the property of their respective owners.
The INTEGRITY RTOS®
Certified and Deployed Technology
The INTEGRITY RTOS is deployed and certified to:
Railway: EN 50128 SWSIL 4, certified: 2010
Security: EAL6+ High Robustness, certified: 2008
Medical: FDA Class III, approved: 2007
Industrial: IEC 61508 SIL 3, certified: 2006
Avionics: DO-178B Level A, certified: 2002
www.ghs.com
Copyright © 2011 Green Hills Software, Inc. Green Hills, the Green Hills logo and INTEGRITY are trademarks of Green Hills Software, Inc. in the U.S. and/or
internationally. All other trademarks are the property of their respective owners.
AS9120A
Authorized Distributor
Find It Here. Faster.™
The Newest Products for Your Newest Designs®
Scan Here
to Watch Video
Authorized distributor for the most advanced
semiconductors and electronic components.
Get What’s Next. Right now at mouser.com.
Mouser and Mouser Electronics are registered trademarks of Mouser Electronics, Inc.
COLUMNS
EMBEDDED SYSTEMS DESIGN
VOLUME 24, NUMBER 9
NOVEMBER 2011
barr code
13
Firmware forensics—best
practices in embedded software source-code discovery
BY MICHAEL BARR
Remember unintended acceleration?
Here’s what NASA should have examined in Toyota’s software.
16
break points
29
The semiconductor revolution
BY JACK G. GANSSLE
Cover Feature:
Making late-night
debugging an exception
rather than the norm
BY RUTUL DAVE, COVERITY
Static analysis finds the bugs, even
if you’re working under an Agile
development process. Here’s when
and where to use static analysis.
In part 3 of Jack’s series honoring the
40th anniversary of the microprocessor, the minis create a new niche—the
embedded system.
DEPARTMENTS
#include
5
Android at ESC and IDF
BY CHRIS A. CIUFO
In case you missed it, check out
these online Android resources
from ESC and IDF.
parity bit
7
Android attacks
new products
11
IN PERSON
23
Using Agile’s Scrum in embedded software
development
BY KIM H. PRIES AND JON M. QUIGLEY
Keeping the team focused and organized is what Scrum can do
for your embedded systems project.
DesignCon 2012
January 20–February 2, 2012
http://designcon.techinsightsevents.com/
ESC Silicon Valley
March 16–29, 2012
http://esc.eetimes.com/siliconvalley/
ONLINE
EMBEDDED SYSTEMS DESIGN (ISSN 1558-2493) print; (ISSN 1558-2507 PDF-electronic) is published 10 times a year as follows: Jan/Feb, March, April, May, June,
July/August, Sept., Oct., Nov., Dec. by the EE Times Group, 600 Harrison Street, 5th floor, San Francisco, CA 94107, (415) 947-6000. Please direct advertising and editorial
inquiries to this address. SUBSCRIPTION RATE for the United States is $55 for 10 issues. Canadian/Mexican orders must be accompanied by payment in U.S. funds with additional postage of $6 per year. All other foreign subscriptions must be prepaid in U.S. funds with additional postage of $15 per year for surface mail and $40 per year for
airmail. POSTMASTER: Send all changes to EMBEDDED SYSTEMS DESIGN, EE Times/ESD, PO Box #3609, Northbrook, IL 60065-3257, embedsys@omeda.com. For customer service, telephone toll-free (847) 559-7597. Please allow four to six weeks for change of address to take effect. Periodicals postage paid at San Francisco, CA and additional
mailing offices. EMBEDDED SYSTEMS DESIGN is a registered trademark owned by the parent company, EE Times Group. All material published in EMBEDDED SYSTEMS
DESIGN is copyright © 2010 by EE Times Group. All rights reserved. Reproduction of material appearing in EMBEDDED SYSTEMS DESIGN is forbidden without permission.
www.embedded.com
INDUSTRIAL
AEROSPACE
SYSTEM ON A CHIP
MEDICAL
AVIATION
CONSUMER
THREADX: WHEN IT
REALLY COUNTS
When Your Company’s Success, And Your Job, Are On The Line You Can Count On Express Logic’s ThreadX® RTOS
Express Logic has completed 14 years
of successful business operation,
and our flagship product, ThreadX,
has been used in over 800 million
electronic devices and systems,
ranging from printers to smartphones, from single-chip
SoCs to multiprocessors. Time and time again, when
leading manufacturers put their company on the line,
when their engineering team chooses an RTOS for their
next critical product, they choose ThreadX.
Our ThreadX RTOS is rock-solid, thoroughly field-proven,
and represents not only the safe choice, but the most
cost-effective choice when your company’s product
simply must succeed. Its royalty-free
licensing model helps keep your BOM low,
and its proven dependability helps keep
your support costs down as well. ThreadX
repeatedly tops the time-to-market results
reported by embedded developers like you. All the while,
Express Logic is there to assist you with enhancements,
training, and responsive telephone support.
Join leading organizations like HP, Apple, Marvell, Philips, NASA,
and many more who have chosen ThreadX for use in over 800
million of their products – because their products are too
important to rely on anything but the best. Rely on ThreadX,
when it really counts!
Contact Express Logic to find out more about our ThreadX RTOS, FileX® file system, NetX™ Dual IPv4/IPv6 TCP/IP stack, USBX™
USB Host/Device/OTG stack, and our new PrismX™ graphics toolkit for embedded GUI development. Also ask about our TraceX®
real-time event trace and analysis tool, and StackX™, our patent-pending stack size analysis tool that makes stack overflows a
thing of the past. And if you’re developing safety-critical products for aviation, industrial or medical applications, ask
about our new Certification Pack™ for ThreadX.
Newnes
Real-Time Embedded Multithreading, Second Edition
With ThreadX for ARM, Coldfire, MIPS and PowerPC architectures
By Edward L. Lamie
Now with appendices. CD-ROM included.
For a free evaluation copy, visit www.rtos.com • 1-888-THREADX
Copyright © 2010, Express Logic, Inc.
ThreadX, FileX, and TraceX are registered trademarks, and NetX, USBX, PrismX, StackX, and Certification Pack are trademarks of Express Logic, Inc.
All other trademarks are the property of their respective owners.
EMBEDDED SYSTEMS DESIGN
Director of Content
Chris A. Ciufo
(415) 683-1106
chris.ciufo@ubm.com
Managing Editor
Susan Rambo
(415) 947-6675
susan.rambo@ubm.com
Acquisitions/Newsletter Editor,
ESD and Embedded.com
Bernard Cole
(928) 525-9087
bccole@acm.org
Contributing Editors
Michael Barr
Jack W. Crenshaw
Jack G. Ganssle
Dan Saks
Art Director
Debee Rommel
debee.rommel@ubm.com
Production Director
Donna Ambrosino
dambrosino@ubm-us.com
Article submissions
After reading our writer’s guidelines, send
article submissions to Bernard Cole at
bccole@acm.org
Subscriptions/RSS Feeds/Newsletters
www.eetimes.com/electronics-subscriptions
Subscriptions Customer Service (Print)
Embedded Systems Design
PO Box # 3609
Northbrook, IL 60065- 3257
embedsys@omeda.com
(847) 559-7597
Article Reprints, E-prints, and
Permissions
Mike Lander
Wright’s Reprints
(877) 652-5295 (toll free)
(281) 419-5725 ext.105
Fax: (281) 419-5712
www.wrightsreprints.com/reprints/index.cfm
?magid=2210
Publisher
David Blaza
(415) 947-6929
david.blaza@ubm.com
Associate Publisher, ESD and EE Times
Bob Dumas
(516) 562-5742
bob.dumas@ubm.com
Corporate—UBM Electronics
Paul Miller
David Blaza
Karen Field
Felicia Hamerman
Brent Pearson
Jean-Marie Enjuto
Amandeep Sandhu
Barbara Couchois
Chief Executive Officer
Vice President
Senior Vice President, Content
Vice President, Marketing
Chief Information Officer
Vice President, Finance
Director of Audience Engagement &
Analytics
Vice President, Partner Services &
Operations
Corporate—UBM LLC
Marie Myers
Pat Nohilly
Senior Vice President,
Manufacturing
Senior Vice President, Strategic
Development and Business
Administration
#include
Android at ESC and IDF
BY Chris A. Ciufo
Assuming you’re not living under a rock, you’ve been following the
buzz this year surrounding Android smart phones. According to Gartner (August 2011, www.gartner.com/it/page.jsp?id=1764714), Android-based
smart phones now command 43.4% of
the worldwide market—nearly double
number two Symbian—with share
growing steadily every quarter. Since
Android-based phones are essentially
embedded devices—as are the tablets
that are the next consumer gotta-have
device—Android is fast spilling over
into the embedded space, big time. So I
thought I’d pass on some recent information gems that might apply to you
embedded Android developers.
Firstly, at the recently concluded
Embedded Systems Conference (ESC)
Boston, we presented the first-ever Android Certificate Program curriculum.
ESC is closely associated with ESD
(note the logo similarity), and I’m also
the conference chair for the event. My
colleague Ron Wilson assembled a
five-class curriculum for ESC that
walked attendees through: Fundamentals of Android; Android Jumpstart;
Variants, Hacks, Tricks and Resources;
Open Accessory Kit; and the hardware-based Embedded Android Workshop.
How’d it go? Let’s just say the series was a runaway success. We were
mobbed with attendees, far exceeding
the number of people who had preregistered for the class with their paid-for
all-access pass. Unfortunately, we
didn’t have a solid strategy for assigning seats to pre-reg attendees, nor did
we have a fall-back plan for the extra
wannabees. If you were in either camp
and we failed to meet your expectations: my apologies. We’re going to repeat the program—with improvements—at ESC Silicon Valley 2012.

Chris A. Ciufo is the director of content for Embedded Systems Design magazine, Embedded.com, and the Embedded Systems Conference. You may reach him at chris.ciufo@ubm.com.
But as a consolation prize of sorts,
Day 1 and 4 instructors Bill Gatliff and
Karim Yaghmour have agreed to post
their slides and sample code online at
www.billgatliff.com/~bgat/esc-bos2011/
and www.opersys.com/blog/esc-boston2011-wrapup.
Also at ESC Boston, we worked
closely with Intel, the sponsor of three
embedded sessions, one of which was
particularly applicable to Android: Optimizing Android for Intel Architecture.
That class was based upon Intel’s
sort-of shocking announcement two
weeks prior at the Intel Developer Forum (IDF) in San Francisco that
Google had “plans to enable and optimize future releases of the Android
platform for Intel’s family of low power Intel Atom processors.”
Intel then made good on their Android strategy by offering a few gold-plated IDF class sessions about running Android on IA (x86) CPUs:
• https://intel.wingateweb.com/us11/scheduler/catalog.do
• http://developer.android.com/guide/developing/index.html
• http://software.intel.com/en-us/android/
Intel insiders tell me this will be the future home of all things Android on IA.
Intel will start sending you stuff when
they’re ready. (If you’re planning on
deploying Android on IA, I strongly
recommend reviewing SFTS010: “Developing and Optimizing Android Applications for Intel Atom Processor
Based Platforms.”) Of note are the new
low-power Atom Medfield and Clover
Trail roadmaps, along with the tradeoffs
of using native C/C++ code in Android
NDK apps versus Intel’s own x86 NDK.
Chris A. Ciufo, chris.ciufo@ubm.com
www.embedded.com | embedded systems design | NOVEMBER 2011
5
INNOVATORS BUILD NETWORKS WITH EXPLOSIVE SPEED AND EXCEPTIONAL INTELLIGENCE.
How do innovators build networks of stunning speed and intelligence, while keeping costs squarely under control?
They work with Wind River. Our advanced networking solutions give leading network equipment manufacturers the
packet acceleration, hardware optimization, and system reliability they need to deliver breakthrough performance and
greater value—from the core to the consumer and everywhere in between. All while reducing their costs and cutting
their time-to-market so they can focus on innovation to create a truly competitive edge.
Please visit www.windriver.com/customers to see how Wind River
customers have delivered breakthrough performance and greater value.
INNOVATORS START HERE.
parity bit
Android attacks!
I didn’t get to the class, however, not because it overflowed, but because I made other choices. I went to Jack Ganssle’s class. I’m sure I got more out of that and meeting him afterwards than any amount of Android info. Maybe next time...
—Steve C.

Great article (“Understanding Android’s strengths and weaknesses,” Juan Gonzales, Darren Etheridge, and Niclas Anderberg, www.eetimes.com/4228560, p.16), other than the statement might not hold
true anymore: “Android is free and
anybody can download the sources
and use it for whatever purpose they
wish.” If it becomes closed source for
its key components in the whole
stack, it will be only relevant to
Google itself and probably a few partners it picks.
—joyhaa
Android crazy
While the embedded world seems to
be going Android mad (“The perils
and joys of Android development,”
Bernard Cole, p.5), it is quite obvious
to anyone who has been in the industry for more than five minutes that
Android will be highly suited to some
applications and less suited to others.
There is no one-size-fits-all software
environment.
As for security . . . Android security, like any security, is only as good as
the gate keepers. Once you allow a
malicious or poorly secured application to punch a hole through your armour, the security is compromised.
No operating system can prevent that.
—cdhmanning
Yes, hackers are targeting Android
systems. See this article “HTC security
vulnerability said to leak phone numbers, GPS data, and more,” Sean
Buckley, Engadget.com, Oct 2, 2011,
at www.engadget.com/2011/10/02/htc-security-vulnerability-said-to-leak-phone-numbers-gps-data/.
—David Meekhof
Yes, when I attended ESC I went to
classes where there was something
new to be learned, not where I already
knew it.
—FillG
My Android phone randomly hangs
for up to 10 seconds at a time. Even
the circle thing (the Android version
of the Windows hourglass) locks up.
—ken a
Android stampede
I admit interest in the Android classes
at ESC (“Android—now or later?” Jack
Ganssle, www.eetimes.com/4229799/),
but just to see what the buzz is all
about. I know little to nothing about
Android, don’t own a mobile phone
from either camp and was just curious.
Maybe Android would make a good OS
for a project/ product I’m considering.
Ignoring all technical considerations,
I will risk making myself look like a
fool:
I find Android to be baffling, confusing, and extremely un-intuitive.
Why is everyone so excited about a UI
that doesn’t make its operation self-obvious?
Is it just me? Am I missing some
trick that would suddenly make the
Android UI easy to use? The user-interface (for example, on phones and
tablets) never seems to do what I expect, and doesn’t seem to offer any
options for “tuning” it to fit my style.
As far as I know, it forces me to
re-learn everything that I’ve previously learned wrt using GUIs, and many
operations simply make no sense—
different for the mere sake of being
different, rather than offering any obvious improvement.
Again I must ask: am I missing
some simple trick, or am I hopelessly
stuck in an ancient “double-click”
paradigm?
—vapats
We have been evaluating using Android for doing an upcoming project.
I was tasked with creating a demo to
see how well it would fit with what we
want to accomplish. Working with
Android is really pretty simple. I
downloaded the free tools from
Google, used an off-the-shelf development board from a popular vendor,
(a great place to start) and went from
there with their port of Android. For
testing I loaded the created app(s) on
that board as well as my own Android
phone. The simulator was also very
helpful.
I also created a similar app using
QT Quick. Both environments have
their advantages and challenges but
both seem to really give developers
some nice tools to work with if a GUI
is needed. I would say that the commercial licensing arrangements with
Android are much more friendly than
with QT. (Apache license with Android works very well with business
strategies.)
With QT technically, I can not
commercially release any internal
demo work that I did using the free
tools I downloaded. I would have to
re-do all of that work after purchasing the license. Also every developer
that uses it needs to have a rather expensive separate license as well.
Seems that if QT does not modify
their license to be more business
friendly, they will likely be left behind... A true shame for such a capable GUI platform.
—someEmbeddedGuy
Only as good as your tools
This is very interesting. A set of tools
to test the tools?! This (“Validating
your GNU platform toolchain: tips and
techniques,” Mark Mitchell and Anil
Khanna, www.eetimes.com/4228687,
p23) reminds me of recursive programming. :-)
But it seems required when building up an open-source environment
for development as mentioned in the
start of the article.
Expect?! First time I hear about
this language... odd name for a language isn’t it?
—Luis Sanchez
That is why when the Ada Mandate was in place, a validated Ada compilation system (ACS) was required. The ACS was more than just a compiler. It provided build management and included an RTOS.
—Robert.Czeranko

It’s not recursion so much as co-routines! The test suite will have bugs and so will the system being tested, and development of both has to occur in parallel.
However, this process really should proceed irrespective of the origin of the compiler; proprietary compilers have been known to have problems, too!
So many companies just trust their tools until their faces get rubbed into the dirt.
—derek_c

Goodbye, Tony Sale
Lovely article, Jack. (“From light bulbs to computers,” Jack Ganssle, www.eetimes.com/4228549, p.30) Thank you.
Sad to see reported recently that Tony Sale, who led the Colossus rebuild, died only last month. Obituary from the Daily Telegraph, here: www.telegraph.co.uk/news/obituaries/technology-obituaries/8733814/Tony-Sale.html
—Chris

Thankfully nobody patented software compilation, memory allocation, using software to blink an LED, and other algorithms that would get patented these days and during Edison’s time.
—cdhmanning

The interesting question is: did these people receive the fame and fortune due them during their life, so their families could also benefit, or did they all get a warm fuzzy attaboy and a mention in the history books?
—TFC-SD

Well, it depends. Edison very much ensured he got more fame and credit than he deserved.
Others less so. Much of the early computing development was heavily guarded military secrets. So much so that it was not used as prior art, and claims by others were allowed to take the credit and even get various patents.
As for fame, fortune, and benefit, getting those will depend to a great extent on factors other than the usefulness of the invention.
I very much doubt that the individual inventors really make the difference. There have typically been multiple threads of invention in parallel, and it is often just the first to file and claim the glory that gets remembered.
Without Edison, ENIAC, and all the others, we would still have what we have today.
—cdhmanning

We welcome your feedback. Letters to the editor may be edited. Send your comments to Chris Ciufo at chris.ciufo@ubm.com or post your feedback directly online, under the article you wish to discuss.
Register online and enjoy the benefits:
www.productronica.com/benefits
for electronic manufacturing
19th international trade fair
for innovative electronics production
november 15–18, 2011
new munich trade fair centre
www.productronica.com
innovation all along the line
smarter, faster, smaller
At CUI, our approach is to develop smarter, faster, smaller power modules. Whether it’s an embedded
ac-dc power supply, a board level dc-dc converter, or a level V external adapter, we continuously strive to
keep our power line, that ranges from 0.25 W to 2400 W, ahead of the curve.
Check out the latest addition to CUI’s power line:
720 W Novum Intermediate Bus Converter (NQB 1/4 Brick IBC Converter)

Highlights
¬ Industry leading power density
¬ Driven by CUI’s patented Solus Power Topology™
¬ DOSA compliant pin-out

Specifications
¬ 720 W / 60 A output
¬ 1/4 brick package
¬ 36~60 Vdc input range
¬ Greater than 96% efficiency

cui.com/power
new products
Rugged display computer enabled for M2M
Stay up on new products. For the
latest new products and
datasheets, go to Embedded.com,
www.eetimes.com/electronics-products,
and Datasheets.com.
Compact rugged display computer
with advanced communications
Julien Happich, EE Times Europe
Eurotech’s DynaVIS 10-00 compact
rugged display computer has been
specifically designed for modern railway applications; it uses the company’s Everyware Software Framework
(ESF) to simplify embedded M2M
communications. Based on the Intel
Atom processor, the DynaVIS is low
power, compact and can withstand
the mechanical and temperature
stresses commonly encountered in
harsh environmental conditions.
The display computer provides
connectivity through WiFi and 3G
cellular networks, Gigabit Ethernet, a
5.7" touchscreen panel, a high-performance GPS, and plenty of transportation-specific features such as opto-insulated I/Os, serial ports, USBs
and a wide range power supply section. The DynaVIS 10-00 is EN50155
compliant, is IP65 protected and features high-end rugged connectors for
long-term reliability in harsh environments.
Eurotech
www.eurotech.com.
Mentor Graphics’ next-gen Nucleus
RTOS addresses power management and connectivity
Toni McConnel, Embedded.com
Mentor Graphics Corporation has released the third generation of its Nucleus real-time operating system
(RTOS) with new power management, connectivity, and wireless communication features. Nucleus power
management uses hardware features
such as dynamic voltage and frequency scaling (DVFS), designed for power
conscious, battery-powered applications. New connectivity features include a certified “IPv6 Ready” networking stack and security protocols,
and the RTOS now offers a variety of
wireless communication options such
as Wi-Fi, Bluetooth and Zigbee to enable faster time-to-market when developing connected devices.
The Nucleus RTOS supports a
comprehensive set of architectures
and processor families, including
ARM, MIPS, Power PC, and SuperH,
as well as DSPs and soft processor
cores on FPGAs.
The new version delivers high
performance while optimizing resource usage in a single-OS or multi-OS platform, making it a good choice
for resource-constrained devices (frequency and memory), and for environments where squeezing out every
cycle-per-watt is critical across any
system architecture.
Mentor Graphics
www.mentor.com
u-blox announces CDMA module for
US M2M markets
Toni McConnel, Embedded.com
u-blox has announced the LISA-C200
wireless voice and data modem supporting the CDMA2000 1xRTT mobile communications standard. The
LISA-C2 series provides dual-band
CDMA2000 1xRTT data and voice
communication in a compact SMT
form factor. They are fully qualified
and certified modules, featuring extremely low power consumption and
a rich set of Internet protocols.
LISA-C2 modules are designed for
M2M and automotive applications
such as fleet management, automatic
meter reading (AMR), people and asset tracking, surveillance and security,
and Point of Sales (PoS) terminals.
The new modem is based on CDMA
technology gained through the recent
acquisition of San Diego-based Fusion Wireless.
Packaged in u-blox’ micro-miniature LISA LCC (leadless chip carrier),
LISA-C200 offers pin/pad compatibil- (CONTINUED ON PAGE 28)
Expect something greater.
We’ll make you think differently about what it means to be an independent distributor.
Our 420,000-square-foot distribution facility and 30-person inventory solutions team provide the logistics power to handle your excess inventory. We have solutions that address your specific needs:
• Line Item Purchasing
• Lot Purchasing
• Consignment
• Asset Management
• End-of-Life Buys
Contact America II Electronics to
leverage our inventory solution programs.
800.275.3323
www.americaii.com
barr code
Firmware forensics—best practices in
embedded software source-code discovery
By Michael Barr
Software has become ubiquitous, embedded as it is into the
fabric of our lives in literally
billions of new (non-computer)
products per year, from microwave
ovens to electronic throttle controls.
When products controlled by software are the subject of litigation,
whether for infringement of intellectual property rights or product liability, it’s imperative to analyze the
embedded software (also known as
firmware) properly and thoroughly.
This article enumerates five best
practices for embedded software
source-code discovery and the rationale for each.
In February 2011, the U.S. government’s National
Highway Traffic Safety Administration (www.nhtsa.gov)
and a team from NASA’s Engineering and Safety Center
(www.nasa.gov/offices/nesc/) published reports of their
joint investigation into the causes of unintended acceleration in Toyota vehicles. While NHTSA led the overall effort
and examined recall records, accident reports, and complaint statistics, the more technically focused team from
NASA performed reviews of the electronics and embedded
software at the heart of Toyota’s electronic throttle control
subsystem (ETCS). Redacted public versions of the official
reports from each agency, together with a number of related documents, can be found at www.nhtsa.gov/UA.
These reports are very interesting in what they have to
say about the quality of Toyota’s firmware and NASA’s review of the same. However, of greater significance is what
they are not able to say about unintended acceleration. It
appears that NASA did not follow a number of best practices for reviewing embedded software source code that
might have identified useful evidence. In brief, NASA failed to find a
firmware cause of unintended acceleration—but their review also fails to
rule out firmware causes entirely.
This article describes a set of five
recommended practices for firmware
source code review that are based on
my experiences as both an embedded
software developer and as an expert
witness. Each of the recommendations
will consider what more could have
been done to determine whether Toyota’s ETCS firmware played a role in
any of the unintended acceleration.
The five recommended practices are:
(1) ask for the bug list; (2) insist on an executable; (3) reproduce the development environment; (4) try for the version control repository; and (5) remember the hardware.
The relative value and importance of the individual practices will vary by type of litigation, so the recommendations
are presented in the order that is most readable.
Michael Barr is the author of three books and over
60 articles about embedded systems design, as well
as a former editor in chief of this magazine. He is a
popular speaker at the Embedded Systems
Conference, a former adjunct professor at the
University of Maryland, and the president of Netrino.
You may reach him at mbarr@netrino.com or read
his blog at www.embeddedgurus.net/barr-code.
ASK FOR THE BUG LIST
Any serious litigation involving embedded software will require an expert review of the source code. The source code
should be requested early in the process of discovery. Owners of source code tend to strenuously resist such requests
but procedures limiting access to the source code to only
certain named and pre-approved experts and only under
physical security (often a non-networked computer with no
removable storage in a locked room) tend to be agreed
upon or ordered by a judge.
Software development organizations commonly keep
additional records that may prove more important or useful
than a mere copy of the source code. Any reasonably thorough software team will maintain a bug list (a defect database) describing most or all of the problems observed in the
software along with the current status of each (for example
“fixed in v2.2” or “still under investigation”). The list of
bugs fixed and known—or the company’s lack of such a
list—is germane to issues of software quality. Thus the bug
list should be routinely requested and supplied in discovery.
Very nearly every piece of software ever written has defects, both known and unknown. Thus the bug list provides
helpful guidance to a reviewer of the source code. Often, for
example, bugs cluster in specific source files in need of major
rework. To ignore the company’s own records of known bugs, as
the NASA reviewers apparently did, is to examine a constitution without considering the historical reasons for the adoption of each section and amendment. Indeed, a simple search
of the text in Toyota’s bug list for the terms “stuck” and “fuel
valve” might yet provide some useful information about unintended acceleration.
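A first pass over a produced defect database can be as simple as a keyword search. The sketch below is hypothetical: it assumes the bug list has been exported to a CSV file with "id", "status", and "description" columns (an assumption on my part, since real defect databases vary widely) and flags entries mentioning terms of interest.

```python
import csv

# Hypothetical terms of interest for this kind of review; adjust per case.
SEARCH_TERMS = ("stuck", "fuel valve", "unintended")

def flag_bug_entries(csv_path, terms=SEARCH_TERMS):
    """Return (id, status) for bug records whose description mentions any term.

    Assumes a CSV export with 'id', 'status', and 'description' columns;
    real defect trackers differ, so treat this as a sketch only.
    """
    hits = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            text = row.get("description", "").lower()
            if any(term in text for term in terms):
                hits.append((row.get("id"), row.get("status")))
    return hits
```

The point is not the code but the discipline: the reviewer reads the flagged entries, then follows each one back into the source files it implicates.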
INSIST ON AN EXECUTABLE
In software parlance, the “executable” program is the binary
version of the program that’s actually executed in the product.
The machine-readable executable
is constructed from a set of human-readable source code files
using software build tools such as
compilers and linkers. It’s important to recognize that one set of
source code files may be capable
of producing multiple executables, based on tool configuration
and options.
Though not human-readable, an executable program may
provide valuable information to an expert reviewer. For example, one common technique is to extract the human-readable
“strings” within the executable. The strings in an executable
program include information such as on-screen messages to
the user (such as “Press the ‘?’ button for help.”). In a copyright infringement case in which I once consulted, several
strings in the defendant’s executable helpfully contained a
phrase similar to “Copyright Plaintiff”! You may not be so
lucky, but isn’t it worth a try?
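The string-extraction pass described above can be reproduced with the standard Unix strings utility, or with a few lines of code. The sketch below is a simplified stand-in: it scans a binary for runs of printable ASCII of a minimum length, and (unlike a full-featured tool) does not handle UTF-16 or other encodings that firmware images may use.

```python
import re

def extract_strings(blob, min_len=4):
    """Return runs of printable ASCII of at least min_len bytes.

    A simplified stand-in for the Unix `strings` utility: plain ASCII
    only; real firmware review would also scan for wide-character text.
    """
    # Printable ASCII runs: bytes from space (0x20) through tilde (0x7e).
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, blob)]
```

Run against a produced executable, the output can then be searched for on-screen messages, version strings, or telltale copyright notices.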
It may also be possible to reverse engineer or disassemble an
executable file into a more human-readable form. Disassembly
could be important in cases of alleged patent infringement, for
example, where what looks like an infringement of a method
claim in the source code might be unused code or not actually
part of the executable in the product as used by customers.
Sometimes it’s easy to extract the executable directly from
the product for expert examination—in which case the expert
should engage in this step. For instance, software running on
Microsoft Windows consists of an executable file with the extension .EXE, which is easily extracted. However, the executable
programs in most embedded systems are difficult, at best, to extract. Extraction of Toyota’s ETCS firmware might not be physically possible. Thus the legal team should insist on production
of the executable(s) actually used by the relevant customers.
REPRODUCE THE DEVELOPMENT ENVIRONMENT
The dichotomy between source code and executable code and
the inability of even most software experts to make much
sense of binary code can create problems in the factual landscape of litigation. For example, suppose that the source code
produced by Toyota was inadvertently incomplete in that it
was missing two or three source-code files. Even an expert reviewer looking at the source code might not know about the
absent files. For example, if the bug the expert is looking for is
related to fuel valve control and the code related to that subject doesn’t reference the missing files, the reviewer may not
notice their absence. No expert can spot a bug in a missing
file.
Fortunately, there is a reliable way for an expert to confirm that she has been provided with all of the source code.
The objective is simply stated: reproduce the software build
tools setup and compile the produced source code. To do this
it’s necessary to have a copy of the development team’s detailed build settings, such as make files,
preprocessor defines, and linker control files. If the build process completes
and produces an executable, it’s certain
the other party has provided a complete copy of the source code.
Furthermore, if the executable as
built matches the executable as produced (actually, ideally, the executable
as extracted from the product) bit by binary bit, it’s certain
that the other party has provided a true and correct version of
the source code. Unfortunately, trying to prove this part may
take longer than just completing a build; the build could fail
to produce the desired proof for a variety of reasons. The details here get complicated. To get exactly the same output executable, it’s necessary to use all of the following: precisely the
same version of the compiler, linker, and each other build tool
as the original developers used; precisely the same configuration of each of those tools; and precisely the same set of build
instructions. Even a slight variation in just one of these details
will generally produce an executable that doesn’t match the
other binary image at all—just as the wrong version of the
source code would.
NOVEMBER 2011 | embedded systems design | www.embedded.com
TRY FOR THE VERSION CONTROL REPOSITORY
Embedded software source code is never created in an instant.
All software is developed one layer at a time over a period of
months or years in the same way that a bridge and the attached
roadways exist in numerous interim configurations during
their construction. The version control repository for a software program is like a series of time-lapse photos tracking the
day-by-day changes in the construction of the bridge. But
there is one considerable difference: It’s possible to go back to
one of those source code snapshots and rebuild the executable
of that particular version. This becomes critically important
when multiple software versions will be deployed over a number of years. In the automotive industry, for example, it must
be possible to give one customer a bug fix for his v2.1 firmware
while also working on the new v3.0 firmware to be released the
following model year.
Consider, for the sake of discussion, that the executable
version of Toyota’s ETCS v2.1 firmware that was installed in
the factory in one million cars around the world had an undiscovered bug that could result in unintended acceleration under
certain rare operating conditions. Now further suppose that
this bug was (perhaps unintentionally) eliminated in the v2.2
source code, from which a subsequent executable was created
and installed at the factory into millions more cars with the
same model names—and also as an upgrade into some of the
original one million cars as they visited dealers for scheduled
maintenance. In this scenario, an examination of the v2.2
source code proves nothing about the safety of the hundreds of
thousands of cars still with v2.1 under the hood.
Gaining access to the entire version control repository
containing all of the past versions
of a company’s firmware source
code through discovery may be out
of the question. For example, a
judge in a source-code copyright
and trade secrets case I consulted in
would only allow the plaintiff to
choose one calendar date and to
then receive a snapshot of the defendant’s source code from that specific date. If the plaintiff
was lucky it would find evidence of their proprietary code in
that specific snapshot. But the observed absence of their proprietary code from that one specific snapshot doesn’t prove
the alleged theft didn’t happen earlier or later in time.
There are some problems with examination of an entire
version control repository. It may be difficult to make sense of
the repository’s structure. Or, if the structure can be understood, it might take many times as long to perform a thorough
review of the major and minor versions of the various source
code files as it would to just review one snapshot in time. At
first glance, many of those files would appear the same or similar in every version—but subtle differences could be important
to making a case. To really be productive with that volume of
code, it may be necessary to obtain a chronological schedule
provided by a bug list or other production documents describing the source code at various points in time.
REMEMBER THE HARDWARE
Embedded software is always written with the hardware platform in mind and should be reviewed in the same manner. For example, it’s only possible to properly reverse engineer or disassemble an executable program once the specific microprocessor (such as Pentium, PowerPC, or ARM) is known. But knowing the processor is just the beginning, because the hardware and software are intertwined in complex ways in such embedded systems.
Only certain features of the hardware are enabled or active when the hardware is in a particular configuration. For instance, consider an embedded system with a network interface, such as an Ethernet jack that is only powered when a cable is mechanically inserted. Some or all of the software required to send and receive messages over this network may not be executed until a cable is inserted. A proper analysis of the software needs to keep hardware-software interactions like this in perspective. Ideally, testing of the firmware should be done on the hardware as configured in exemplars of the units at issue, so it is useful to ask for hardware during discovery if you are not able to acquire exemplars in other ways. It’s not clear from the redacted reports if NHTSA’s testing of certain Toyota Camrys was done using the same firmware version on exactly the same hardware as the owners who experienced unintended acceleration. Hardware interactions can be one of the most important considerations of all when analyzing embedded software.
Sometimes a bug is not visible in the software itself. Such a bug may result from a combination of hardware and software behaviors or multiprocessor interactions. For example, one motor-control system I’m familiar with had a dangerous race condition. The bug, though, was the result of an unforeseen mismatch between the hardware reaction time and the software reaction time around a sequence of commands to the motor.
ADDITIONAL ANALYSIS REQUIRED
As you can see, the review of embedded software can be complicated. This is partly because the hardware of each embedded
system is unique. In addition, the system as a whole generally
involves complex interactions between hardware, software, and
user. An expert in embedded software should typically have a
degree in electrical engineering, computer engineering, or computer science plus years of relevant experience designing embedded systems and programming in the relevant language(s).
The five best practices I’ve presented here are meant to establish the critical importance of making certain specific requests early in the legal discovery process. They are by no
means the only types of analysis that should be performed on
the source code. For example, in any case involving the quality
or reliability of embedded software, the source code should be
tested via static-analysis tools. This and other types of technical
analysis should be well understood by any expert witness or litigation consultant with the proper background.
In the case of Toyota’s unintended acceleration issues, I
hope that expert review in the class-action litigation against
Toyota will include these and other additional types of analysis
to identify all of the potential causes and determine if embedded software played any role. Though government funds for
analysis by NASA are understandably limited, it’s suggested
that transportation safety organizations, such as NHTSA,
should establish rules that ensure that future investigations are
more thorough and that safety-related technical findings in litigation cannot be hidden behind the veil of secrecy of a settlement agreement. ■
cover feature
Static analysis finds the bugs, even if you’re working under an Agile development
process. Here’s when and where to use static analysis.
Making late-night
debugging an
exception rather than
the norm
BY RUTUL DAVE, COVERITY
It’s close to midnight and after hours of debugging you’ve finally
identified the root cause of a defect. It’s a nasty null pointer
dereference that gets triggered after various conditional checks,
and it’s buried deep inside a code component that has not been
touched in a while. The challenges of debugging pale in comparison with the fact that you still have a long road ahead in checking whether the bug exists in three other branches, merging the
fix, and then unit testing the changes
in all four branches to make sure you
didn’t break anything else, especially
when you changed something in the
legacy code component. Think about
how many times you might have been
in a similar situation right before code
freeze for a major release or the night
before a hot-fix is scheduled to go out?
Static analysis can help you avoid
some of the late nights. In this article, I
discuss the advantages of static analysis
for finding and fixing the most common
coding defects, the Agile programming
techniques used in modern static analysis to identify precise defects that lead to
actual crashes, and the technologies that
enhance the analysis results, beyond just
a list of defects, by providing valuable information such as where the defect exists
in the different branches of code.
Along with other methods of testing
and verification, many companies have
taken advantage of the benefits of code
testing with modern static analysis to
identify defects early in development.
During the past few years, various reports by embedded systems market research firm VDC Research indicate
strong growth in companies adopting
static analysis as a critical test automation tool. The immense growth in the
size of code bases is one of the strongest
reasons to use static analysis as a cost-effective and automated method to evaluate the quality of the code and eradicate
common coding defects. In a survey
done by VDC (“Automated Test & Verification Tools, Volume 2,” January 2011,
www.vdcresearch.com/market_research/embedded_software/product_detail.aspx?productid=2639), software engineers who
use static analysis indicated that these
tools reduced the number of defects and
increased the overall quality of code. In
addition, VDC cited efficiency as a major benefit (and return on investment).
DATAFLOW ANALYSIS
One powerful static-analysis technique is
dataflow analysis. To find the defect in
the Listing 1, modern static-analysis
tools use dataflow analysis to identify the
execution path during compile time.
First, a control flow graph is generated from the source code. In this case,
the if statements could have four possible execution paths through the code.
Let’s follow one of those paths. When
the value of x passed into the function
is not zero, p is assigned a null pointer
with p=0. Then, the next conditional
check (x!=0) takes a true branch and
in the next line p is dereferenced, leading to a null pointer dereference.
INTERPROCEDURAL ANALYSIS
In addition to dataflow analysis, another useful technique that good static
analysis employs is interprocedural
analysis for finding defects across function and method boundaries, as in
Listing 2.
In Listing 2, we have three functions: example_leak(), create_S(),
and zero_alloc(). To analyze the
code and identify the memory leak, the
analysis engine has to trace the execution to understand that memory is allocated in zero_alloc(), initialized in
create_S(), and leaked when variable
tmp goes out of scope when we return
from function example_leak(). This
is known as interprocedural analysis.
FALSE-PATH PRUNING
The third technique is called false-path
pruning. One of the key quality-assurance requirements that developers are
held accountable for is that the reported bugs are real. In other words, the
Listing 1 Dataflow analysis.
void null_pointer_deref(int x)
{
    char *p;
    if (x == 0)
        p = foo();
    else
        p = 0;
    if (x != 0)
        *p;
    else
        ...
    return;
}
bugs are true problems. This expectation is the same for static analysis—it
should report critical defects and not
false positives. One way to ensure that
the reported defects are real is to analyze only the executable paths. Naïve
static analysis will usually find defects
on paths that can never be executed because of data dependencies. We can understand this with the code sample illustrated in Listing 3.
This example is slightly modified
from Listing 1 discussed previously. In
this case, we will look at an execution
Listing 2 Interprocedural analysis.
void* zero_alloc(size_t size) {
    void *p = malloc(size);
    if (!p)
        return 0;
    memset(p, 0, size);
    return p;
}

struct S* create_S(int initial_value) {
    struct S* s = (struct S*) zero_alloc(sizeof *s);
    if (!s)
        return 0;
    s->field = initial_value;
    return s;
}

int example_leak(struct S *s, int value) {
    struct S *tmp = s;
    if (!tmp)
        tmp = create_S(value);
    if (!tmp)
        return -1;
    /* ... */
    return 0;
}
Listing 3 False-path pruning.
void null_pointer_deref(int x)
{
    char *p;
    if (x != 0)
        p = foo();
    else
        p = 0;
    if (x != 0)
        *p;
    else
        ...
    return;
}
Figure 1. A duplicate defect caused by code branching and merging: release branches (OEM A versions 1.1 through 1.3) fork from the original development trunk and later merge a fix back. The diagram marks a defect in the original development branch, a defect introduced in a release branch before a merge, and a defect introduced in a release branch after a merge.
path that simply cannot be executed.
Consider the case where the first conditional check (if (x != 0)) results
in the false case being evaluated. This
will assign variable p the value of 0. At
the next conditional check, if the
analysis engine looks at the true path,
it will report a null pointer dereference defect. But that would be a false
positive because the execution logic
will never traverse this path. It is not
possible to evaluate the same conditional check (if (x != 0)) in two
different ways. By pruning a path that
can never be executed (a false path),
good analysis can report up to 50%
fewer incorrect defects. This results in
higher trust in the analysis reports
and allows the development team to
focus on the true positives instead of
having to muddle through a long list
of false positives.
Using a combination of techniques
such as dataflow analysis, interprocedural analysis, and false-path pruning,
effective static analysis has made a case
for being an extremely valuable tool
for developers. It’s automated,
achieves 100% path coverage, and does
not require time intensive test cases to
be written. We saw examples of a null
pointer dereference and a memory
leak. In addition, the analysis is able to
identify other critical defects such as
memory corruptions caused by incorrect integer operations, misused
pointers, other resource leaks besides
memory, invalid memory accesses, undefined behavior due to uninitialized
value usage, and many more.
WHY, WHAT, AND WHERE?
In addition to analysis results, one of
the major benefits of static analysis is to
provide the developer answers to questions important in effective Agile development such as:
• Why does the defect exist?
• What impact will it have?
• Where does it need to be fixed?
To understand the context for a defect and validate it as a true defect, the
developer needs to understand why the
defect exists. A defect in code exists because the execution path went through
a series of events and conditional statements that led to the error. In Listing 1
discussed earlier, defining the character
pointer p is an event. The two if statements that checked the value of the x
variable are conditional statements. By
identifying the true or false path taken
through those conditional checks, we
could trace the execution path, which
will show us that we dereferenced a null
pointer *p, which is the defect.
Similarly, experienced software engineers are able to associate the impact
that a null pointer dereference or a
memory leak can have on the system
running the software. However, identifying the impact that the defect has on
the different branches that have been
forked from the same code base is not
always a straightforward task. Therefore, the answer to “what impact will a
defect have?” sometimes can be more
complex. Consider a team developing a
new operating system for mobile
smartphones. Because multiple mobile
phone vendors (OEMs) need to be supported for this new operating system,
every vendor in the source control
management (SCM) system is assigned
a development branch that has been
forked from the same code base. Add
each vendor’s need for multiple
branches for the different releases and
product generations and this picture
starts to get complex very quickly.
Static analysis performed on every
branch produces a list of the critical defects. The development team can go
over every identified defect and verify
the reason why it exists. However, depending on when a defect is introduced, it could exist in all versions and
branches or a subset. When looking at a
single defect in isolation in a single
branch, it’s tough to gauge the severity
of the defect without knowing where
else it is present. A defect that is not
limited to a single version or one OEM
client might be considered more severe
and fixing it would need to be prioritized over others. Figure 1 shows a duplicate defect caused by code
branching and merging.
Finally, to answer the question of
Where does a defect need to be fixed?, a
developer writing the fix needs to know
exactly which branches need to be
checked. Analysis results that identify
cover feature
Listing 4 A defect in foo.c triggers a single defect in both 32- and 64-bit binaries.
gcc -m32 -c foo.c    # 32-bit compile. Contains a null pointer dereference defect.
gcc -c foo.c         # 64-bit compile. Contains the same null pointer dereference defect.
the various branches where a defect exists is highly valuable and can save
hours of manual verification.
Another common case that embedded software engineers encounter is
when code is designed to run on multiple platforms. Device drivers are typical
examples of such software components.
Listing 4 is a simple example based on
code required to be compiled for 32- and 64-bit platforms.
The advantage of a good static-analysis solution in such cases is that it can
identify and report a defect in foo.c
that gets triggered in both 32- and 64-bit
binaries as a single defect. In this case,
the code is not duplicated, but instead
built in multiple ways. Hence the developer needs to evaluate the severity by
understanding if it’s going to get triggered in both 32-bit and 64-bit binaries.
SHARED CODE
Another interesting case is when common code components are shared and
used in more than one product. Take the
example of a team developing the platform software for a family of networking
switches. Since the functionality provided by the platform software must be implemented in all products in the product
family, this code component will be
shared as shown in Figure 2.
For developers working on this
team, the best assessment of the severity of a defect reported by static analysis
is not only the impact it will have on
one switch product, but also information on all the products that use this
platform software component. A product is usually created by combining
many such shared components. Each
component is not only a project itself,
but also a part of various other projects
using it. Thus the analysis result needs
to identify that a defect in this shared
component has an impact on the various projects using it. Such cases are especially valid when using open-source
components shared among various
projects and products. A library that
parses a specific type of a network
packet might be used in all the different
networking products that the group is
designing and developing.
Code branching is a critical aspect
of developing software for embedded
systems. So is compiling the codebase
for multiple platforms and reusing a
component in multiple projects and
products. With static analysis being valued for its ability to find hard-to-detect
critical defects due to common programming errors and being trusted for
its ability to do so without a large number of false positives, the trend in adoption of such solutions is going to continue. And with the value in terms of
efficiency and productivity that the
analysis results provide, it might not be
long before static-analysis implementations will be as common as an SCM
system or a bug tracking system in the
development workflow.
Unfortunately, heroic late-night debugging marathons might still be
necessary. Even after diligently testing
and verifying every code change and
every release, embedded systems software will have bugs that will require
manual debugging efforts. However, by
taking advantage of modern static
analysis and techniques that provide
value beyond a simple list of defects,
one can make the late night the exception rather than the norm. ■
Figure 2. A platform software component used in multiple products: Switch Product A and Switch Product B each incorporate the same platform software component.
Rutul Dave is a senior product manager
at Coverity, where he creates tools and
technology to enhance the software development process. He received his master’s in computer science with a focus on networking and communications systems from the University of Southern California.
He has worked at Cisco Systems and at
various Bay Area-Silicon Valley startups,
such as Procket Networks and Topspin
Communications.
feature
Keeping the team focused and organized is what Scrum can do for your embedded systems project.
Using Agile’s Scrum in
embedded software
development
BY KIM H. PRIES AND JON M. QUIGLEY

In modern Agile software development, teams follow a simple productivity technique called Scrum to organize the work flow and solve problems during development. Scrum implementation for line management in IT projects increases the pace of accomplishment, decreases steady-state project lists, and improves team communication. We think it can do the same for embedded systems software development.
Having literally written the book on Scrum,1 we present a short primer here for embedded systems developers. You’ll find the simplified basics in this article but we give you references at the end to further your knowledge.
Scrum is a way to manage projects that focuses on the immediate objectives and deliverables of the people involved, helping to keep them focused and undistracted by other projects and activities. The method employs techniques similar to conventional project management, such as defining scope, developing a statement of work, creating a work breakdown structure, and managing the stakeholder expectations.
The team developing the software consists of team members and a facilitator, called the Scrum master. The team members use a series of lists and meetings to keep on track, including:
• A work breakdown structure (a general, large-focus list of to-do items that feeds the product backlog list).
• A product backlog list (the list of all things that need to be done).
• A sprint backlog list (the list of all things to be done immediately).
• A burndown chart (showing how the team will consume the hours allotted to the tasks).
• A daily Scrum meeting (short meetings to answer three questions: What did you accomplish yesterday? What are you working on today? What obstacles confront you?).
• A sprint retrospective meeting (short meetings after each sprint is complete to review how the sprint went and what could be improved).
The team also constantly communicates and involves the customer or stakeholder during the process. All the actual
coding work is done in sprints.
WBS AND THE PRODUCT BACKLOG
In Scrum, the product backlog list outlines the deliveries and expectations from
the customer along with something
called a work breakdown structure (WBS),
a hierarchical list of topics and to-do
items that must be addressed during the
project. The WBS is the heart of project
management.
We recommend the team base their
product backlog list on the WBS defined
in U.S. Department of Defense military
handbook (MIL-HDBK-881x).2 The
handbook’s list is not completely relevant to software development, but the
Table 1. An excerpt from a work breakdown structure recommended by DoD.
WBS
• Integration, assembly, test, and
checkout efforts
• Systems engineering and program
management
• Training:
Equipment
Services
Facilities
• Data:
Technical publications
Engineering data
Management data
Support data
team can modify or delete items before
populating their Scrum product backlog
list. Table 1 shows an excerpt from MIL-HDBK-881x; see the full sample list online in an article we wrote last year.3
The purpose of the WBS, and hence
the product backlog, is to ensure that the
intent of the customer drives the scope
of work which, in turn, drives the requirements and subsequent actions taken by the project team. It doesn’t matter
if the requirements come from external
or internal sources. The requirements
will be broken down for what is called
the sprint—a short period of time during which the team will be focused upon
a specific subset of the product features.
When requirements change, the
team changes the WBS (which feeds the
product backlog) because WBS is a
functional decomposition of top-level
deliverable elements. Once the team has
a detailed and visible structure, updating, re-planning, re-estimating cost, and
duration become simplified because all
elements are itemized and visible.
The WBS is also important because
the cost centers are always derived from
elements of the project deliverables. The
WBS is the source of the product backlog, which in turn is broken down to the
sprint backlog, providing input for the
sprints and distributing the work to cost
centers or other sprint teams.
The WBS assures that the necessary
actions are taken to produce the product
or service that the customer demands.
The concept works because products are
composed of systems which, in turn, are
composed of subsystems and then components and so on. If we start with a
top-level assembly as the first or second
level on the WBS, we can easily break the
product down into “atomic”-level tasks.
The team uses this approach for all
deliverables, including more internal-facing deliverables, such as internal specifications, models, failure mode and effects analyses (FMEA), and the total
round of documents that any formal
quality system requires.
As in any project management
methodology, tracking updates to the
project scope or changing deliverables to
meet requirements is where many projects go astray. In Scrum, the WBS is not
simply an action item list, but a formal
document designed to support cost and
schedule reporting, or earned value management (EVM), a project management
technique for objectively measuring
project performance and progress. We
can derive our action item lists, schedule, and budgets from the WBS.
HOW DEEP SHOULD THE WBS GO?
We can deconstruct the WBS as far as we
need to in order to put items into our
product backlog planning document
with minimal effort. This is another
area where conventional product management projects go astray with missing
items and misunderstood or nonexistent
dependencies between tasks. The Scrum
approach avoids these pitfalls because
the focus stays on the immediate goal of
the sprint, which is a very short defined
period of time. This short planning
horizon requires that we break down
and understand the interactions for that
specific sprint period. We call this highly detailed analysis atomic decomposition because we decompose the higher-level tasks until further decomposition no longer adds value.
When we complete this task, we will
have a list of “atoms” that become part
of our other planning documents. Once
we have these atoms, we’re ready to go.
We now take the atomic tasks and use
these to populate the product backlog. If
we have set up our breakdown correctly,
we should never need to list the higher-order tasks. Completing the atomic tasks in the appropriate order will automatically result in completion of the higher-order tasks.
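One way to picture this is to treat the WBS as a nested structure and walk it for its leaves, which become the product-backlog entries. The WBS fragment and task names below are hypothetical, not the article's project:

```python
# Hypothetical WBS fragment: dicts are decomposable elements; lists of
# strings are "atomic" tasks where further decomposition adds no value.
wbs = {
    "Top-level assembly": {
        "Subsystem A": ["Spec review", "FMEA", "Driver code"],
        "Subsystem B": {
            "Component B1": ["Schematic capture", "Board bring-up"],
        },
    },
}

def atomic_tasks(node):
    """Depth-first walk of the WBS; yields only the leaves (the atoms)."""
    if isinstance(node, dict):
        for child in node.values():
            yield from atomic_tasks(child)
    else:  # a list of atomic task names
        yield from node

# The atoms populate the product backlog; completing them in order
# completes the higher-order WBS elements automatically.
product_backlog = list(atomic_tasks(wbs))
print(product_backlog)
```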
This deconstruction also allows us to
estimate the amount of work we can fit
into the sprint. The sprint is “time
boxed” so the amount of work taken on
should not exceed the amount of time
allotted. This is important when constructing the sprint backlog and during
subsequent execution. A typical sprint
will last from two to four weeks.
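Because the sprint is time-boxed, checking a candidate task list against capacity is plain arithmetic. This sketch uses invented team figures and estimates; a real team would substitute its own measured numbers:

```python
# Time-boxed sprint capacity check. Team size, productive hours, and
# task estimates are all hypothetical illustrations.
HOURS_PER_DAY = 6      # productive hours per person per day (assumption)
team_size = 4
sprint_days = 15       # a three-week sprint, within the two-to-four-week range

capacity = HOURS_PER_DAY * team_size * sprint_days   # total hours in the time box

# (task, estimated hours) pairs drawn from the product backlog, in priority order
candidates = [("task A", 120), ("task B", 90), ("task C", 80), ("task D", 100)]

sprint_backlog, committed = [], 0
for name, hours in candidates:
    if committed + hours <= capacity:    # never take on more than the time box allows
        sprint_backlog.append(name)
        committed += hours

print(sprint_backlog, committed, capacity)
```

Anything that does not fit, like "task D" here, simply waits for a subsequent sprint.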
Table 2. A first-pass sprint list. The boxes to the right (weeks 31 and 32) can be used as a crude estimate of progress.

Description                                           Start date  Days  Progress
Review BBB traceability matrix                        22 Oct 09     5      50%
AAAA gauges traceability matrix                       30 Sep 09     5      50%
Migrate XYZ from PMSDBA to EPDPROD
  (edit station, display, and visual aids)            15 Oct 09     5      75%
Rebuild heavy fixture and add to medium equipment      1 Oct 09    30      75%
Integrate all hauler products in one equipment         1 Aug 09     5      50%
Change C station to LabVIEW                            1 Aug 09    15      50%
LED color detector for individual gauges              30 Sep 09    15      50%
Develop spare parts database FF2                      15 Sep 09    10      50%
Migrate AAAA programming station from VB to LabVIEW   15 Sep 09    30      50%
Warning light bar traceability matrix                 30 Sep 09     5      50%
Design programmer for AAAA urea 2 inches              15 Sep 09    15      50%
XYZ calibration tool                                  30 Sep 09    15      50%

Table 2 represents a first-pass sprint list. The boxes to the right can be used as a crude estimate of progress. Figure 1 shows an EVM chart and Figure 2 shows a burndown chart.

The real benefit over the EVM technique is that you see the performance almost immediately: the duration is only two to four weeks, and the burndown chart is updated daily. If the team can't execute to plan, portions of the sprint backlog may be eliminated from the present sprint and postponed to a subsequent sprint. Likewise, if the sprint is accomplishing more than expected, components from the product backlog can be added.
Figure 1. An earned value management chart. (Y-axis: cumulative cost; plotted series: planned value, actual cost, and earned value; X-axis: time.)

Figure 2. A burndown chart. (Y-axis: hours of work remaining; X-axis: days of the sprint, through the end of sprint. A diagonal line shows planned team performance. If team performance is slower than planned, consider subtracting tasks from the sprint backlog; if faster than planned, consider adding tasks to the sprint backlog.)
THE SCRUM SPRINT
A sprint backlog is the immediate list of
the tasks to achieve the prioritized product backlog; it breaks down specific elements from the product backlog derived
from the WBS into smaller ones (called a
decomposition). The team selects meaningful tasks from the product backlog to
populate the sprint backlog. These tasks
are prioritized by the customer and by
the team according to technical aspects
and dependencies. The customer identifies which features have the highest priority and what they want first. These
tasks can be those the team thinks it can
do quickly or, more importantly, can accomplish in priority order so long as the team remembers the dependencies: that is, when one task is dependent on the completion of another task.
The sprint list is quasi-sacred. That
means the sprint team should only consider breaking the sprint if a dire emergency
occurs and a higher-level champion is willing to override the sprint. Otherwise, the
goal is to complete the tasks while tracking
them with a burndown chart. The burndown chart will show us progress against
plan. In fact, the burndown chart is to
Scrum project management as the EVM
chart is to conventional project management. Follow-up may reveal that the team bit off more than it could chew, or that the team is subject to interruptions from other parts of the enterprise.
DAILY SPRINT MEETINGS
The sprint itself will last somewhere between fourteen days and a month. The team members discuss the status of the project and obstacles to success during the daily sprint meetings, telling what they did yesterday, what they're doing today, and where the bottlenecks and roadblocks are.
The Scrum master, who is the equivalent of the project manager, facilitates the
meeting. The burndown chart is reviewed
and the areas of expected progress and
newly discovered risks are openly discussed. This open discourse identifies areas that interfere with making the progress defined by the sprint backlog and the burndown chart. The Scrum master and the project stakeholder then work to expediently resolve these areas.
SPRINT RETROSPECTIVE
At the end of the sprint, the team members review what they have done, what
they did well, and what they could do better. This meeting and activity is analogous
to the “white book” exercises of conventional project management. The benefit
of the sprint retrospective is that the
team doesn’t wait until the end of the
project or end of a project phase before
critiquing. A critique at the end of each
two-to-four week sprint makes it possible
to learn something while delivering the
product and integrate that learning into
the subsequent sprints. The retrospective need not be led by the
Scrum master and this meeting should be brief but thorough. At
the end of the retrospective, the team plans for the next sprint.
Since Scrum is a technique with intensified focus and accelerated tempo, the meeting schedules should not get in the way. The
meetings should be organized well enough that the team is not wasting time. You know you've achieved this goal when the
complaints about the meetings lessen or disappear.
SCALING UP
We can scale our Scrum approach to larger development processes by creating the Scrum of Scrums. Either the Scrum master from a Scrum team at one level becomes a team member at the next higher level (if this approach makes sense), the team elects a member to represent it, or a rotating representative attends, giving every team member a chance to represent the group.
NOT JUST FOR RUGBY
In our personal experience, we have used the Scrum approach in a
line management setting. We employed the Scrum philosophy years
ago in a way we refer to as proto-Scrum. In either event, we found
no difficulties scaling the process to meet our needs. Some areas that
were less than satisfactory were the burndown charts (because they
were complicated) and the full-fledged WBS. On the other hand,
the team enjoyed the improvement in the steady-state list of projects they were working on, and the daily Scrum meetings improved communication to the point where different departments
were achieving cross-fertilization of capabilities. In addition, following Scrum enhanced the team’s “buy-in” and camaraderie. ■
Jon M. Quigley, PMP CTFL, is the manager of the Electrical and Electronics Verification and Test Group at Volvo 3P in Greensboro, North
Carolina. He is also a principal of Value Transformation, LLC, a product-development training and cost-improvement organization. He has
more than 20 years of product-development experience, ranging from
embedded hardware and software through verification and project
management.
Kim H. Pries, APICS CPIM, is a senior member of the American Society for Quality with the following certifications: CQA, CQE, CSSBB,
CRE, CSQE, and CMQ/OE. She is a principal with Value Transformation, LLC.
ENDNOTES AND REFERENCES:
1. Pries, Kim H. and Jon M. Quigley. Scrum Project Management. Taylor and Francis/CRC Press.
2. U.S. Department of Defense military handbook (MIL-HDBK-881x).
3. Pries, Kim and Jon Quigley. "Taking a Scrum Approach to Product Development," Digital Software Magazine, July 2010. www.softwaremag.com/focus-areas/application-development/featured-articles/taking-a-scrum-approach-to-product-development/
4. Pries, Kim and Jon Quigley. "Defining a Work Breakdown Structure," Digital Software Magazine, April 2010. www.softwaremag.com/focus-areas/application-development/featured-articles/defining-a-work-breakdown-structure/
new products (continued from page 11)
ity with the LEON GSM/GPRS and
LISA W-CDMA module families.
LISA-C200 supports analog and
digital voice in the 800 MHz and
1900 MHz bands, up to 153 kb/s forward and reverse data communications, and OTA provisioning methods
(OMA/DM, FUMO, OTASP and
OTAPA). The modem includes an
embedded TCP- and UDP/IP stack
and is scheduled for certification by
US CDMA operators Sprint, Verizon,
and Aeris.
u-blox
www.u-blox.com
ADI rolls new demods and PLL
Janine Love, RF/Microwave DesignLine
Analog Devices Inc. (ADI) has released two new quadrature demodulators and a new phase-locked loop (PLL) for use in wireless infrastructure and point-to-point systems. I spoke to Ashraf Elghamrawi, ADI's product marketing manager, about these new RF ICs.
Long known for its PLLs and RF detectors, ADI has expanded its portfolio and is now offering two new demodulators with a high level of integration. Here are the specifics:
The ADRF6806 integrates an I/Q demodulator, PLL, VCO, and multiple LDO regulators. Elghamrawi points out that the integrated product maintains the same electrical performance as the discrete solution, with all performance specifications staying the same. The RFIC is manufactured using a SiGe BiCMOS process. It also features an SPI port, which allows designers to program features in the chip; for example, it can be set to accept an external LO, or to enter a low-power mode. The LO frequency range is 50 to 525 MHz.
Analog Devices Inc
www.analog.com
Submit press releases to Chris Ciufo at
Chris.ciufo@ubm.com.
break points
The semiconductor revolution
By Jack G. Ganssle
We’re on track, by 2010, for 30-gigahertz
devices, 10 nanometers or less, delivering
a tera-instruction of performance.
—Pat Gelsinger, Intel, 2002
Photos: NASA
We all know how in 1947 Shockley, Bardeen, and Brattain invented the transistor, ushering in the age of semiconductors. But that
common knowledge is wrong. Julius
Lilienfeld patented devices that resembled field-effect transistors (although
they were based on metals rather than
modern semiconductors) in the 1920s
and 30s (he also patented the electrolytic capacitor). Indeed, the United States
Patent and Trademark Office rejected
early patent applications from the Bell
Labs boys, citing Lilienfeld’s work as
prior art.
Semiconductors predated Shockley
et al by nearly a century. Karl Ferdinand
Braun found that some crystals conducted current in only one direction in
1874. Indian scientist Jagadish Chandra
Bose used crystals to detect radio waves
as early as 1894, and Greenleaf Whittier
Pickard developed the cat’s whisker
diode. Pickard examined 30,000 different materials in his quest to find the
best detector, rusty scissors included.
Like thousands of others, I built an AM
radio using a galena cat’s whisker and a
coil wound on a Quaker Oats box as a
kid, though by then everyone was using
modern diodes.
As I noted last month, RADAR research during World War II made systems that used huge numbers of vacuum tubes both possible and common.
The Apollo Guidance Computer (AGC).

In part 3 of Jack's series honoring the 40th anniversary of the microprocessor, the minis create a new niche: the embedded system.
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at jack@ganssle.com.

But that work also led to practical silicon and germanium diodes. These mass-produced elements had a chunk of the semiconducting material that contacted a tungsten whisker, all encased in a small cylindrical cartridge. At assembly time workers tweaked a screw to adjust the contact between the silicon or germanium and the whisker. With part numbers like 1N21, these were employed in the RADAR sets built by
MIT’s Rad Lab and other vendors. Volume 15 of MIT’s Radiation Laboratory
Series, titled “Crystal Rectifiers,” shows
that quite a bit was understood about
the physics of semiconductors during
World War II. The title of volume 27
tells a lot about the state of the art of
computers: “Computing Mechanisms
and Linkages.”
Early tube computers used crystal
diodes. Lots of diodes: the ENIAC had
7,200, Whirlwind twice that number. I
have not been able to find out anything
about what types of diodes were used
or the nature of the circuits, but imagine an analog with 1960s-era diode-transistor logic.
While engineers were building
tube-based computers, a team led by
William Shockley at Bell Labs researched semiconductors. John Bardeen
and Walter Brattain created the point
contact transistor in 1947, but did not
include Shockley’s name on the patent
application. Shockley, who was as irascible as he was brilliant, went off in a huff and invented the junction transistor.
One wonders what wonder he would
have invented had he been really slighted.
Point contact versions did go into
production. Some early parts had a hole
in the case; one would insert a tool to
adjust the pressure of the wire on the
germanium. So it wasn’t long before the
much more robust junction transistor
became the dominant force in electronics. By 1953 over a million were made;
four years later production increased to
29 million. That’s exactly the same
number as a single Pentium III used in
2000.
The first commercial part was probably the CK703, which became available in 1950 for $20 each, or $188 in today's dollars.

A complete AM radio that uses a cat's whisker "diode" as a detector. (Wikipedia: crystal radio wiring pictorial based on Figure 33 in Gernsback's 1922 book Radio For All, copyright expired, with "Aerial" changed to "Antenna" by J.A. Davidson.)

Meanwhile tube-based computers were getting bigger and hotter and were sucking ever more juice. The same University of Manchester that built the Baby and Mark 1 in 1948 and 1949 got a prototype transistorized machine going in 1953, and the full-blown model running two years later. With a 48-bit (some sources say 44) word, the prototype used only 92 transistors and 550 diodes! Even the registers were stored on drum memory, but it's still hard to imagine building a machine with so few active elements. The follow-on version used just 200 transistors and 1,300 diodes, still no mean feat. (Both machines did employ tubes in the clock circuit.) But tube machines were more reliable: this computer ran only about an hour and a half between failures. Though deadly slow, it demonstrated a market-changing feature: just 150 watts of power were needed. Compare that to the 25 KW consumed by the Mark 1. IBM built an experimental transistorized version of their 604 tube computer in 1954; the semiconductor version ate just 5% of the power needed by its thermionic brother. (The IBM 604 was more calculator than computer.)

The first completely transistorized commercial computer was the . . . uh . . . well, a lot of machines vie for credit and the history is a bit murky. Certainly by the mid-1950s many became available. Last month I claimed the Whirlwind was important at least because it spawned
the SAGE machines. Whirlwind also inspired MIT’s first transistorized computer, the 1956 TX-0, which had Whirlwind’s 18 bit word. And, Ken Olsen, one
of DEC’s founders, was responsible for
the TX-0’s circuit design. DEC’s first
computer, the PDP-1, was largely a TX-0
in a prettier box. Throughout the 1960s
DEC built a number of different machines with the same 18-bit word.
The TX-0 was a fully parallel machine in an era where serial was common. (A serial computer processed a single bit at a time through the arithmetic
logic unit [ALU].) Its 3,600 transistors,
at $200 a pop, cost about a megabuck.
And all were enclosed in plug-in bottles,
just like tubes, as the developers feared a
high failure rate. But by 1974 after
49,000 hours of operation fewer than a
dozen had failed.
The official biography of the machine
(RLE Technical Report No. 627) contains
tantalizing hints that the TX-0 may have
had 100 vacuum tubes, and the 150-volt power supplies it describes certainly align with vacuum-tube technology.
IBM’s first transistorized computer
was the 7070, introduced in 1958. This
was the beginning of the company’s important 7000 series, which dominated
mainframes for a time. A variety of
models were sold, with the 7094 for a
time occupying the “fastest computer in
the world” node. The 7094 used over
50,000 transistors. Operators would use
another, smaller, computer to load a
magnetic tape with many programs
from punched cards, and then mount
the tape on the 7094. We had one of
these machines my first year in college.
Operating systems didn’t offer much in
the way of security, and we learned to
read the input tape and search for files
with grades.
The largest 7000-series machine was
the 7030 “Stretch,” a $100 million (in today’s dollars) supercomputer that wasn’t
super enough. It missed its performance
goals by a factor of three, and was soon
withdrawn from production. Only nine
were built. The machine had a staggering 169,000 transistors on 22,000 individual printed circuit boards. Interestingly, in a paper named “The
Engineering Design of the Stretch Computer,” the word “millimicroseconds” is
used in place of “nanoseconds.”
While IBM cranked out their computing behemoths, small machines
gained in popularity. Librascope’s $16k
($118k today) LGP-21 had just 460 transistors and 300 diodes, and came out in
1963, the same year as DEC's $27k PDP-5. Two years later DEC produced the
first minicomputer, the PDP-8, which
was wildly successful, eventually selling
some 300,000 units in many different
models. Early units were assembled from
hundreds of DEC’s “flip chips,” small
PCBs that used diode-transistor logic
with discrete transistors. A typical flip
chip implemented three 2-input NAND
gates. Later PDP-8s used integrated circuits; the entire CPU was eventually implemented on a single integrated circuit.
A crystal rectifier circa 1943. From volume 15 of MIT's Radiation Laboratory Series; it's a bit under an inch long. (Henry C. Torrey and Charles A. Whitmer. Crystal Rectifiers, volume 15 of MIT Radiation Laboratory Series. McGraw-Hill, New York, 1948.)

But whoa! Time to go back a little. Just think of the cost and complexity of the Stretch. Can you imagine wiring up 169,000 transistors? Thankfully Jack Kilby and Robert Noyce independently invented the IC in 1958/9. The IC was so
superior to individual transistors that
soon they formed the basis of most
commercial computers.
Actually, that last clause is not correct. ICs were hard to get. The nation
was going to the moon, and by 1963 the
Apollo Guidance Computer used 60%
of all of the ICs produced in the US,
with per-unit costs ranging from $12 to
$77 ($88 to $570 today) depending on
the quantity ordered. One source
claims that the Apollo and Minuteman
programs together consumed 95% of
domestic IC production.
Every source I’ve found claims that
all of the ICs in the Apollo computer
were identical: 2,800 dual three-input
NOR gates, using three transistors per
gate. But the schematics show two kinds
of NOR gates, “regular” versions and
“expander” gates.
The market for computers remained
relatively small till the PDP-8 brought
prices to a more reasonable level, but the
match of minis and ICs caused costs to
plummet. By the late 1960s everyone was
building computers. Xerox. Raytheon
(their 704 was possibly the ugliest computer ever built). Interdata. Multidata.
Computer Automation. General Automation. Varian. SDS. A complete list would fill a page. Minis created
a new niche: the embedded system,
though that name didn’t surface for
many years. Labs found that a small machine was perfect for controlling instrumentation, and you’d often find a rack
with a built-in mini that was part of an
experimenter’s equipment.
The PDP-8/E was typical. Introduced in 1970, this 12-bit machine cost
$6,500 ($38k today). Instead of hundreds of flip chips, the machine used a
few large PCBs with gobs of ICs to cut
down on interconnects. Circuit density
was just awful compared with today’s
densities. The technology of the time
was small scale ICs that contained a couple of flip flops or a few gates, and medium scale integration. An example of the
latter is the 74181 ALU, which performed simple math and logic on a pair
of four-bit operands. Amazingly, TI still
sells the military version of this part. It
was used in many minicomputers, such
as Data General’s Nova line and DEC’s
seminal PDP-11.
The PDP-11 debuted in 1970 for
about $11k with 4k words of core memory. Those who wanted a hard disk
shelled out more: a 256KW disk with
controller ran an extra $14k ($82k today). Today’s $100 terabyte drive would
have cost the best part of $100 million.
Experienced programmers were immediately smitten with the PDP-11’s
rich set of addressing modes and completely orthogonal instruction set. Most
prior, and too many subsequent, instruction set architectures were constrained
by the costs and complexity of the hardware, and were awkward and full of special cases. A decade later IBM incensed
many by selecting the 8088, whose instruction set was a mess, over the orthogonal 68000, which in many ways imitated the PDP-11. Around 1990 I traded
a case of beer for a PDP-11/70, but eventually was unable to even give it away.
Minicomputers were used in embedded systems even into the 1980s. We
put a PDP-11 in a steel mill in 1983. It
was sealed in an explosion-proof cabinet
and interacted with Z80 processors. The
installers had for reasons unknown left a
hole in the top of the cabinet. A window
in the steel door let operators see the
machine’s controls and displays. I got a
panicked 3 a.m. call one morning—
someone had cut a water line in the ceiling. Not only were the computer’s lights
showing through the window—so was
the water level. All of the electronics
were submerged. I immediately told
them the warranty was void, but over
the course of weeks they dried out the
boards and got it working again.
I mentioned Data General: they were
probably the second most successful
mini vendor. Their Nova was a 16-bit design introduced a year before the PDP-11, and it was a pretty typical machine in
that the instruction set was designed to
keep the hardware costs down. A barebones unit with no memory ran about
$4k—lots less than DEC’s offerings. In
fact, early versions used a single 74181
ALU with data fed through it a nibble at
a time. The circuit boards were 15" x 15",
just enormous, populated with a sea of
mostly 14- and 16-pin DIP packages. The
boards were typically two layers, and often had hand-strung wires where the layout people couldn’t get a track across the
board. The Nova was peculiar in that it could only address 32
KB. Bit 15, if set, meant the data was an
indirect address (in modern parlance, a
pointer). It was possible to cause the
thing to indirect forever.
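As an illustration only (this is not Nova code, and the memory contents are invented), the bit-15 rule can be modeled in a few lines; a chain of indirect words is chased until a word with bit 15 clear is found, which is why a cycle of indirect words would spin forever:

```python
# Toy model of the Nova's bit-15 indirection. Memory layout is hypothetical.
INDIRECT = 0x8000          # bit 15 set: the word holds a pointer, not the final address

memory = {
    0o100: INDIRECT | 0o200,   # indirect: points at 0o200
    0o200: INDIRECT | 0o300,   # indirect: points at 0o300
    0o300: 0o1234,             # bit 15 clear: 0o300 is the effective address
}

def effective_address(addr, max_hops=64):
    """Chase indirection until bit 15 is clear; bail out if it never clears."""
    hops = 0
    while memory[addr] & INDIRECT:
        addr = memory[addr] & ~INDIRECT   # strip the flag and follow the pointer
        hops += 1
        if hops > max_hops:               # e.g. a word that points at itself
            raise RuntimeError("indirect loop")
    return addr

print(oct(effective_address(0o100)))
```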
Before minis, few computers had a
production run of even 100 (IBM’s 360
was a notable exception). Some minicomputers, though, were manufactured in the tens of thousands. Those
quantities would look laughable when
the microprocessor started the modern
era of electronics. ■