Crosslink
Winter 2000/2001 Vol. 2 No. 1

Concurrent design
Estimating probable system cost
Launch vehicle reliability
Small-satellite costs
Designing a sensor system
Concurrent Design at Aerospace
Clockwise from left: Thomas W. Trafton is Director of the Vehicle Concepts
Department, which has been leading the development of the Concept Design Center. He holds a B.S. in mechanical engineering from California State Polytechnic
University in San Luis Obispo and has been with Aerospace since 1966
(thomas.w.trafton@aero.org). Stephen P. Presley, Associate Director, Vehicle Concepts Department, oversees the work of the Concept Design Center. He holds a
B.S. in engineering from the University of Washington and an M.A. in organizational
leadership from Chapman University. He has been with Aerospace since 1990
(stephen.p.presley@aero.org). Patrick L. Smith, Principal Director, Architecture
and Design Subdivision, has many years of experience in Kalman filters and control. He supported efforts to improve the Air Force acquisition of space systems,
particularly in the area of risk analysis. He holds a Ph.D. in control systems
engineering from the University of California at Los Angeles and has been with
Aerospace since 1968 (patrick.l.smith@aero.org). Rhoda G. Novak, former Director of the Software Acquisition and Analysis Department, founded the Concept Design Center’s Ground Systems Team. She holds an M.S. in computer science from
Loyola Marymount University and has been with Aerospace since 1978 (rhoda.g.novak@aero.org). Andrew B. Dawdy, Vehicle Concepts Department, is one of
the founding developers of the Concept Design Center. He manages a section responsible for systems engineering and conceptual system design. He holds an M.S.
in aeronautics and astronautics from the University of Washington and has been
with Aerospace since 1992 (andrew.b.dawdy@aero.org).
Departments
Headlines
Links
Bookmarks
Aerospace Systems Architecting and Engineering Certificate Program
Estimating Probable
System Cost
Stephen A. Book, Distinguished Engineer, Systems Engineering Division,
was a member of the 1998 Cost
Assessment and Validation Task Force
on the International Space Station and
the 1998–99 National Research Council Committee on space shuttle upgrades. Book shared the 1982 Aerospace President’s Achievement Award
for work showing that a particular
nonuniform 18-satellite Navstar GPS
constellation would satisfy the original
GPS system requirements, with potentially large cost savings. He holds a
Ph.D. in mathematics with concentration in probability and statistics from the
University of Oregon and has been with
Aerospace since 1979 (stephen.a.book@aero.org).
Cover: Progressive refinements of a conceptual design
of a proposed boost-phase interceptor system. (See the
article “Concurrent Design at Aerospace.”)
From the Editor
Small-Satellite Costs
David A. Bearden, Civil and
Commercial Division, is Systems
Director of the JPL and NASA
Langley Independent Program
Assessment Office, supporting
programs such as Mars Exploration, New Millennium, and Discovery. He led development of
the Small Satellite Cost Model,
as well as its application to NASA
independent reviews and deployment of the Concurrent Engineering Methodology at JPL’s Project
Design Center. Bearden is also
editor of this issue of Crosslink.
He holds a Ph.D. in aerospace
engineering from the University
of Southern California and has
been with Aerospace since 1991
(david.a.bearden@aero.org).
Space Launch Vehicle
Reliability
I-Shih Chang, Distinguished Engineer, Vehicle
Performance Subdivision,
leads research on space
launch vehicle failures and
on solid-rocket-flow and
thermal analyses. He has
organized workshops on
rocketry for the American Institute of Aeronautics and
Astronautics and is a panel
member of the White House
Space Launch Broad Area
Review Task Force. He has
a Ph.D. in mechanical engineering from the University
of Illinois and has been with
Aerospace since 1977 (ishih.chang@aero.org).
David A. Bearden
Improving space systems is a challenge in an
era of tightened budgets and reduced development schedules. Given fewer resources, how
can designers and procurers of space systems
assemble more capable systems? In their quest to
comply with government and industry requirements to do more with less, designers have embraced modern design-to-cost practices and innovative design approaches.
The Aerospace Corporation is continually working to develop advances that address this changing
procurement environment and help designers and
customers alike understand the performance and
cost implications of early decisions. A widely accepted industry guideline is that 80 percent of
design options are locked in during the first 20 percent of a project’s development time. It doesn’t pay
to cut budgets and schedules during concept definition; early decisions are critical and often irreversible at later development stages.
This issue of Crosslink showcases the experience, tools, and processes Aerospace uses to balance performance, cost, and schedule in designing
space systems. Sequential design has given way to
concurrent design. Through advanced design models, cost-estimating approaches, lessons-learned
databases, and collaborative design teams, Aerospace has created a powerful environment for the
concurrent engineering of space systems. In this
setting, designers make difficult decisions about
what objectives can be achieved, often relying on
forty years of lessons learned from space system
failures and their causes.
The next time you see a launch or read about a
satellite placed in orbit, think about the processes
presented in this issue of Crosslink. We hope they’ll
help you understand the many considerations associated with developing complex space systems.
Space-Based Systems for
Missile Surveillance
Terrence S. Lomheim, Distinguished Engineer,
Sensor Systems Subdivision, leads focal-plane
technology development and electro-optical payload design and optimization. He has published 29
articles on these topics. Lomheim received the
Aerospace President’s Achievement Award in 1985
for his work with focal-plane technology. He holds a
Ph.D. in physics from the University of Southern
California and has been with Aerospace since 1978
(terrence.s.lomheim@aero.org). David G. Lawrie,
Director, Sensing and Exploitation Department,
leads modeling and simulation efforts in support of
space-based surveillance programs. He was the recipient of The Aerospace Corporation’s highest
award, the Trustees’ Distinguished Achievement
Award, in 1997 for his role in developing a new national early-missile-warning system. He holds a
Ph.D. in astronomy from the University of California
at Los Angeles and has been with Aerospace since
1986 (david.g.lawrie@aero.org).
Headlines

For more news about Aerospace, visit www.aero.org/news/
Precision Window for the Space Station
When astronauts view Earth from the International Space Station, they will look through a glass porthole developed by Karen Scott of Aerospace. This 20-inch-diameter window provides a view of more than three-quarters of Earth's surface and is the highest quality window ever installed in a crewed spacecraft. Astronauts will be performing long-term global monitoring with remote-sensing instruments and Earth science photographic observations.
As primary optical scientist for developing the window, Scott tested the viewing glass originally planned for the window and found that it would not support high-resolution telescopes or precision remote-sensing experiments. Her recommended upgrade was approved for the four-piece window, now consisting of a thin exterior “debris” pane, primary and secondary pressure panes, and an interior “scratch” pane.

Karen Scott of the Aerospace Houston office, flanked by Astronaut Mario Runco and Dean Eppler of Science Applications International Corporation (SAIC), looks through the space station's optical research window. (Photo courtesy of NASA.)
Scott led a 30-member
team from Johnson and
Kennedy Space Centers,
Marshall Space Flight Center,
and the University of Arizona
Remote Sensing Group that
conducted calibration tests on
the upgraded window before
it was installed in the Destiny
module scheduled for launch
in January 2001. The team
determined that the window
could support a wide variety
of research, including the
monitoring of coral reefs and
Earth’s upper atmosphere.
Scott’s efforts in completing
the tests on a tight schedule
brought her a Johnson Space
Center group achievement
award.
Bringing Down a “Giant”

Compton Gamma Ray Observatory. (Photo by space shuttle crew, Compton Science Support Center; courtesy of NASA.)

The Aerospace Corporation was recognized at a NASA press conference in June 2000 for its role in bringing down a 17-ton “giant.” The Compton Gamma Ray Observatory reentered Earth's atmosphere and safely plunged into the Pacific Ocean June 4, 2000, after nine years in orbit studying gamma-ray emissions in space. It is one of the largest spacecraft ever launched by NASA.

NASA began deliberating the probability of uncontrolled reentry after one of the observatory's three attitude-control gyros failed in December 1999. Given the spacecraft's size, scientists thought it likely that several large masses of the spacecraft could survive reentry. NASA enlisted the corporation's assistance, and Aerospace technical experts helped to design and execute the successful splashdown.

William Ailor and Kenneth Hagen participated in a NASA “red team” review to
provide recommendations to NASA program management. They were assigned
responsibility for reviewing the state of the
remaining gyros and the likelihood of uncontrolled reentry. After NASA decided in
February to deorbit the observatory,
Wayne Hallman and Benjamin Mains
helped develop the deorbit plan with the
operations team from Goddard Space
Flight Center.
Preventing Power Failure on the Space Station

Raymond de Gaston, an Aerospace engineer at NASA Johnson Space Center Operations in Houston, received NASA's prestigious Silver Snoopy award for his work in preventing possible electrical power failure on the International Space Station. The award, which was presented to de Gaston by Astronaut Frank Culbertson, recognizes outstanding performance contributing to flight safety or mission success. The award is given annually to less than one percent of the space program workforce and is always presented by an astronaut. De Gaston was recognized for finding a potentially disastrous shortcoming in component quality for the station's direct-current-to-direct-current converter units. These key components alter voltage generated by photoelectric arrays and provide all usable electrical power to the station.

Raymond de Gaston, left, and Astronaut Frank Culbertson with the award.
Battling Buffet Loads on the Titan IVB
An Aerospace investigation has revealed new data on why the Titan IVB vehicles have been experiencing much larger than expected buffet loads during transonic flight. To identify the origin of these loads, Aerospace engineer William Engblom conducted a computational fluid dynamics simulation of the transonic flow environment. Approximately 25,000 CPU hours were required to complete the calculations on a four-million-cell mesh over a real time of almost one second. Data-processing resources at the Wright-Patterson Air Force Aeronautical Systems Center were key to executing this simulation accurately and quickly.

The solution was obtained for the Titan IVB at Mach 0.8. Animations clearly illustrated a new, important fluid dynamic mechanism that is likely responsible for the anomalous pitch accelerations experienced on several Titan IVB vehicles. The mechanism involves strong pairs of vortices (see illustrations), which are alternately shed from the noses of the solid-rocket-motor upgrade boosters at a nearly constant frequency, causing substantial pressure loads on the vehicle surface. This newly discovered buffet behavior is common to all multibody vehicle configurations, according to Engblom, and should be considered in the design of future launch vehicles.

Map of unsteady pressure loads at Mach 0.8. “Snapshots” of pressure contours: a pair of vortices (left); a new pair of vortices (right) on the opposite side of the core vehicle (0.035 seconds later).
Popular Science Picks “Picosats”

The smallest operational satellites ever flown—built by Aerospace with Defense Advanced Research Projects Agency (DARPA) funding—were selected by Popular Science as one of the top 100 technologies for the year 2000. About the size of cellphones, these picosatellites, or “picosats,” were featured in the magazine's December 2000 issue in the “Best of What's New” section. Project director Ernest Robinson of the Aerospace Center for Microtechnology accepted an award for Aerospace at a Popular Science exhibition in New York in November 2000.

A pair of these picosats flew a groundbreaking mission in February 2000 with the primary goal of demonstrating the use of miniature satellites in testing DARPA microelectromechanical systems (MEMS). (See Crosslink, Summer 2000.) Two more picosats, launched in July 2000, are scheduled for orbital release during the summer of 2001.
Corrections to Crosslink,
Summer 2000
A news brief in Headlines incorrectly
stated that the GPS industry is expected
to grow to $16 million in the next three
years. The number should have been
$16 billion.
The caption of a photo of a Titan IVA
rocket at the beginning of the article
“Aerospace Photos Capture Launch
Clouds” incorrectly identified the rocket
as a Titan IVB.
Concurrent Design at Aerospace
Engineers and customers work together to design new space
systems in a setting that accelerates the development
process. Real-time interaction between specialists is the key.
Imagine two engineers, each designing the thermal-control subsystem for a new satellite requested by a prospective commercial customer. Both are experienced and highly skilled; both have good tools at their disposal. They're trying to accomplish the same objective, but they're required to work in different environments, under vastly different circumstances. Consider for a moment the striking contrast in how they complete their tasks.
Technical specialists confer about a study in
progress at a Concept Design Center facility.
Study lead Ronald Bywater, standing, is discussing a design issue with cost specialist
Vincent Canales, left, and communication specialist John O’Donnell, right.
Scenario #1: The first engineer puts the
finishing touches on his design. A week
later, he attends a program team meeting
with representatives from all involved subsystem disciplines and learns that while
his design is impressive, it’s too heavy. He
goes back to the drawing board. In a couple of weeks, he’s got a new design that’s
lighter. His colleagues give it a thumbs-up.
But after a few days, the team learns that
the customer has changed her mind about
the payload performance requirements, so
the entire mass budget has changed. It’s
back to the drawing board again.
Scenario #2: The other engineer puts the
finishing touches on her design. She keys
some values—power and thermal requirements, mass, and so on—into an electronic
spreadsheet. She immediately receives
feedback from the design lead, who tells
her she’s a bit over the mass budget. After
15 minutes of reviewing, recalculating,
and consulting with the customer, the engineer makes a small change to the design.
It’s now within the mass budget. And the
customer is pleased, to boot.
It should be obvious that the second scenario has significant advantages. Real-time interaction between specialists enables an accurate dialogue that resolves issues right away. With concurrent designing, a study can be completed in hours instead of months. And getting everyone—including the customer—together in the same place not only speeds up the process but also affords participants the ability to clear up misunderstandings with face-to-face communication. This scenario isn't just a fantasy; it's the way conceptual design studies are now being conducted at an innovative facility operating successfully at The Aerospace Corporation: the Concept Design Center (CDC).
Patrick L. Smith,
Andrew B. Dawdy,
Thomas W. Trafton,
Rhoda G. Novak,
and Stephen P. Presley
CDC provides the opportunity for Aerospace customers, both government and
commercial, to work directly with corporate engineering experts on the rapid development of conceptual designs for new
space systems. Linked software models
and a computer-aided design system for instant visualization of subsystems provide
the concurrent design capability that characterizes CDC and makes it a potent facility for the development of new systems.
The Intent of Conceptual Design
A conceptual design study is a quick look
at what is feasible to build and how much
it could cost. The intent is to gain high-level insight into a project's scope, not determine the precise value of each design
parameter. In a project at the conceptual
stage, requirements are not yet well defined; detailed specifications are not
locked in. Participants want to explore
“What if…?” scenarios, changing a parameter here or there just to see what happens. Many, if not most, proposals for new
missions never go beyond the conceptual-study stage. Usually the mission cost turns
out to be too high, or the study exposes a
technical Achilles’ heel.
Conceptual design studies are also useful for evaluating costs and benefits of new
technologies (e.g., advanced solar cells,
miniature sensors, inflatable structures)
and for teaching the principles of space-systems engineering.
To get a feel for the questions that a
study will answer, consider the example of
a proposed mission to detect forest fires
from space. What is the size of the smallest
fire that the spacecraft must be able to detect? What types of sensors can be used?
Who needs the data? How quickly must it
be obtained? How many spacecraft are required? How much will the mission cost?
Conceptual design studies are not new;
they’ve always been part of the systemdevelopment process. To see how conceptual design has evolved into the sophisticated set of techniques now employed by
CDC, consider how it was conducted in the
early days of the space age.
Conceptual Design at Aerospace:
The Early Days
In the 1960s and 1970s, conceptual design
studies were performed by loosely organized teams of subsystem specialists. Such
studies could take months. In the early
days of the space age, technologies were
new, and spacecraft-design methodologies
and tools were still evolving. The personalcomputer era had not yet arrived, so designers had to develop computer programs
that ran overnight on mainframes. And
without computer-aided drawing software,
they had to use manual drafting techniques
to lay out spacecraft configurations.
Like most companies in the space industry, Aerospace subdivided its engineering
division into departments such as thermal,
propulsion, structure, and cost. A space-system conceptual design study might
draw upon expertise from a dozen or more
of these specialty areas. The study leader
recruited specialists directly through personal contacts or through department managers. Interpersonal relationships, a critical
factor in the success of any team effort,
were unpredictable; participants might or
might not work together smoothly.
A conceptual design study during this
period was a sequential process, usually
driven by the customer’s schedule. Team
members would meet periodically as a
group, perhaps weekly, to coordinate design details but otherwise would work
alone and independently. Most studies
were poorly documented—funding often
ran out before reports could be prepared.
A follow-on study would have little to
build upon.
The Space Planners Guide, published
by Air Force Systems Command in 1965,
was the first comprehensive reference
source for the conceptual design of space
systems. Engineers from Aerospace
contributed much of the technical information published in the Guide, including
pre-computer-age nomographs for orbit
analysis and traditional (hard-copy)
spreadsheets for cost estimation. The
Guide was widely used for several years
throughout the space industry, in both
civilian and military space programs.
Attempts in the 1980s and early 1990s
to use computers to automate conceptual
design studies largely failed. Researchers
tried to capture each subsystem specialist’s
knowledge in the form of rules of thumb
and parametric sizing formulas, with the
ambitious goal of optimizing design trades
and costs. Several attempts to create a program for this purpose met with limited
success. Subsystem specialists had said all
along that automating conceptual design
of spacecraft would be extremely difficult,
if not impossible, because their knowledge
and skills could not be fully captured in
computer code.

Design information is passed among the team members using linked spreadsheet files such as the one shown here. The design process repeats until all designers are satisfied that their subsystems meet the requirements.
More successful efforts to automate
conceptual design studies focused on
computer-aided approaches that were less
ambitious. One such effort was a program
based on Mechanical Advantage, a commercial software product from Cognition
Corporation that is basically an equation
solver linked to a graphics program.
While the Cognition application had
some utility for certain aspects of conceptual spacecraft design, the program, which
ran on powerful and (at the time) scarce
workstations, was too limited for wide use.
Users needed extensive training. Some
spacecraft design models developed for the
Cognition application, however, later became the basis for some of the subsystem
spreadsheet models used in CDC today.
The Original Models
With the proliferation of personal computers and the advent of powerful spreadsheet
software in the early 1990s, more practical
interactive approaches to computer-aided
conceptual spacecraft design emerged.
Spreadsheet sizing models were developed
that linked mass, power, and other characteristics of various spacecraft subsystems,
so that changing the design of one subsystem would have immediate impact on designs of the others. The original collection
of spacecraft-subsystem-design spreadsheets developed by Aerospace proved very
useful in conceptual design studies, but
subsystem experts still needed to carefully
check the spreadsheet outputs in order to
ensure that a particular design did not exceed the limits of the spreadsheet models.
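The flavor of these linked models can be conveyed with a small sketch. The Python fragment below is purely illustrative: the subsystem rules, coefficients, and masses are invented for this example and are not the Aerospace models. It shows only the key mechanism, that each subsystem's output feeds the others' inputs, so a design must be iterated until the numbers stop changing.

```python
# Illustrative only: three linked subsystem "spreadsheets" expressed as
# functions with invented sizing rules. Changing one subsystem's output
# changes the others' inputs, so the design is iterated to a fixed point.

def size_power(payload_power_w, bus_mass_kg):
    # Hypothetical rule: array and battery mass scale with load and bus size.
    return 0.05 * payload_power_w + 0.02 * bus_mass_kg

def size_thermal(payload_power_w):
    # Hypothetical rule: radiator mass scales with dissipated power.
    return 0.01 * payload_power_w + 2.0

def size_structure(carried_mass_kg):
    # Hypothetical rule: structure is 15 percent of everything it carries.
    return 0.15 * carried_mass_kg

def converge(payload_mass_kg=100.0, payload_power_w=400.0, tol=1e-6):
    bus_mass = 0.0
    while True:
        power = size_power(payload_power_w, bus_mass)
        thermal = size_thermal(payload_power_w)
        structure = size_structure(payload_mass_kg + power + thermal)
        new_bus_mass = power + thermal + structure
        if abs(new_bus_mass - bus_mass) < tol:
            return payload_mass_kg + new_bus_mass
        bus_mass = new_bus_mass

print(f"converged total mass: {converge():.1f} kg")
```

In CDC, the linked spreadsheets play the role of these functions, and a change made by one designer propagates to the rest of the team in the same way.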
Developing the spacecraft-subsystem-design spreadsheets taught a valuable lesson to those working to build computer
aids for conceptual design. System engineers were concerned that the subsystem
specialists might be left out of the design
process and that the models could be misapplied or give misleading results. Some
of the specialists were even reluctant to develop simplified models for the spreadsheets because they felt they could not
guarantee the correctness of the models’
results in every context in which they
might be used.
In 1994, NASA’s Jet Propulsion Laboratory (JPL) asked Aerospace to adapt the
spreadsheet models for the Advanced
Projects Design Team, also known as
Team X, in JPL’s Project Design Center.
Team X’s job was to write proposals for
“faster-better-cheaper” planetary exploration missions. The center was being designed as a facility where teams of JPL engineers could work together concurrently
to rapidly design spacecraft for NASA’s
planetary and other space-science missions.
JPL needed to find a method for linking
spacecraft-subsystem design models so that
information on the different elements in a
project (e.g., spacecraft, cost, operations)
could be shared concurrently and archived
for follow-on Team X studies.
Two Aerospace engineers, Joseph
Aguilar and Glenn Law, tried to adapt the
spreadsheet models for use in the JPL design center. But they found the models
were difficult to use in environments
where team members worked on separate
design elements at the same time. They
then undertook the task of developing the
computer network and interfaces that
would allow the subsystem models to run
on different workstations at the same time.
Using the “distributed” version of the
spreadsheet models, Team X eventually
reduced its cost to produce a proposal
from $250,000 to $80,000 and cut the time
required from 26 to 2 weeks. Team X previously produced only about 10 proposals
per year; it now produces 45.
After this success, Aguilar, Law, and
their colleague Andrew Dawdy proposed
developing a similar concurrent design capability at Aerospace, geared to the conceptual design of military and commercial
space missions. In the fall of 1996, management approved their independent research and development proposal for what
was to become CDC.
CDC Takes Shape
The three Aerospace engineers spent a year linking new versions of spacecraft-subsystem spreadsheet models that were developed by subsystem experts. The development of a set of spreadsheet models to support fast-paced collaborative spacecraft design not only required experienced engineering judgment but also entailed very careful interface design so that the specialized subsystem spreadsheets could be appropriately linked. Using lessons learned at JPL, Aguilar, Law, and Dawdy carefully explained the concurrent design approach to other Aerospace engineers and to potential customers, whose acceptance was essential for the project to succeed.
CDC Teams

Team: Space Segment
Responsibilities: payload and spacecraft subsystem design; top-level ground segment and software sizing; parametric cost and performance estimation.
Design disciplines: astrodynamics, payload, command and data handling, communications, attitude determination and control, electrical power, propulsion, structure, thermal, ground, software, cost, configuration.

Team: System Architecture
Responsibilities: constellation design and coverage analysis; vehicle size and launch manifest determination; concept of operations and ground segment architecture; analysis of relative cost vs. requirements.
Design disciplines: constellation, payload, spacecraft, availability, ground, cost, utility.

Team: Kinetic Energy Weapons
Responsibilities: payload and spacecraft subsystem design and analysis; constellation and endgame performance estimation; parametric system cost analysis.
Design disciplines: configuration, sensor, ordnance, guidance, power, propulsion, structure, ground, carrier vehicle, cost.

Team: Space Maneuver Vehicle
Responsibilities: flexible, fast-response, reusable launch vehicle (requires boost vehicle); on-orbit payload support subsystem design and cost analysis.
Design disciplines: command and data handling, communications, attitude determination and control, electrical power system, propulsion, thermal, ground, software, vehicle configuration, cost.

Team: Ground Systems
Responsibilities: estimation of facilities, personnel, processing, communications, and software; determination of information, functional, and communication architecture trades.
Design disciplines: information architecture, communications architecture, software, processing, staffing, facilities, spacecraft, cost.

Team: Communications Payload
Responsibilities: detailed communications payload subsystem design and trades; top-level spacecraft estimation; performance, hardware configuration, and cost estimation.
Design disciplines: link analysis, radio-frequency units, modulation analysis, orbital analysis, digital, coding, laser, spacecraft.
CDC Successes
CDC conducts studies for a variety of
organizations, and the studies span a
broad range of programs. Congressional staffers and senior Department
of Defense officials suggested that
the Space Based Infrared System
Low mission could be combined with
another program on the same bus. A
quick study by the CDC System Architecture Team showed, however, that
the total cost of the combined missions would exceed the costs of keeping them separate.
A CDC study of the Communication/Navigation Outage Forecast
System showed that proposed mission requirements would have to be
reduced in order to meet cost and risk
constraints. As a result, the Air Force
Space Test and Experimentation
Program Office rewrote the mission
and technical requirements. The program recently received funding, and
CDC team members participated in
source selection.
Subjects of CDC design studies for
the Air Force Space and Missile
Systems Center have included the
Space Based Radar and the Global
Multi-Mission Space Platform. The
requirements for contractor feasibility
studies for the space platform are
based directly on results from the CDC
study.
The CDC Ground Systems Team
designed several architectures for the
Air Force Satellite Control Network.
The network needed to compare its
present acquisition plans to alternatives proposed by outside organizations. One alternative proposed using
commercial service providers for most
ground-to-space communications; another proposed moving most of the
network’s communications to space-based relays. CDC studies helped the
customer understand key features and
drawbacks of the alternatives.
The Naval Postgraduate School in
Monterey, California, has asked Aerospace to help develop a CDC-like design center as a learning tool for use
by graduate students.
Recruiting the first design-team members, who would have to work together in
a new type of environment, was initially a
challenge. Fast-paced work on system-level concurrent design teams would be
something new for Aerospace technical
experts. Fortunately, the engineers recruited for the first CDC team not only
possessed the required expertise but also
were enthusiastic about trying this type of
work. They seized the opportunity to apply
their design skills in the new concurrent-design environment.
Working through some start-up problems, the engineers soon developed a
strong team spirit that would prove essential in resolving technical and administrative problems. By the second year of the
independent research and development effort, the original spacecraft design team
had completed seven design studies and
had received awards and recognition for its
efforts. Word of CDC’s successes spread,
and recruiting new team members became
much easier.
Today, more than 100 Aerospace engineers participate on CDC teams, working
in two dedicated facilities (unclassified
and classified). A new Aerospace organization, the CDC Office, coordinates the
center’s activities. Six teams currently
make up CDC:
• Space Segment Team, the original
CDC team, focuses on the space vehicle (bus) segment. Each member designs a particular spacecraft subsystem
and specifies the elements at the part
level. Computer-aided-drawing layouts
are used to visualize physical relationships among the subsystems.
• Systems Architecture Team considers
all of the space-system segments
(space, ground, and launch). The level
of detail does not extend below top-level descriptions of each segment and
their interactions—the minimum
needed to understand the broad architecture trades.
• Communications Payload Team focuses on communications subsystems
at the part level. This team is in development.
• Ground Systems Team examines elements of the ground segment of space
systems, including facilities, staffing,
software, communications, and processing equipment.
• Kinetic Energy Weapons Team performs top-level design of space-based
ballistic-missile interceptors. The team
is similar to the Space Segment Team
but uses a different set of performance
metrics and technologies.
• Space Maneuver Vehicle Team is also similar to the Space Segment Team but focuses on the requirements of launch, orbital operations, reentry, and reuse.

A dedicated facility for conducting design sessions. The configuration of workstations promotes face-to-face interaction between team members. The customer team sits at the center table. Overhead projectors can display any team member's monitor. Video teleconferencing cameras are located at the front and back walls.
These teams follow the same basic
guidelines and procedures that were established for the initial CDC spacecraft
team—the use of well-defined processes,
cross-department communication and
teaming, ownership of models and technical data by engineering experts, and direct
customer involvement during the design
sessions.
CDC Studies
A typical CDC study takes about six
weeks and requires about 300 to 500 staff-hours of effort, depending on the amount
of up-front preparation required and the
scope of the study. Studies are conducted
in three phases: presession preparation,
design sessions, and postsession wrap-up.
In the presession preparation phase,
several meetings with the customer define
the design trades to be performed. Team
members often have to research new technologies and modify their models to handle unique features of the proposed concept. A formal proposal that lays out the
objectives, schedule, and cost of a study is
always provided to the customer before
work starts.
The actual design sessions that are the
heart of the CDC process take place in one
of the dedicated facilities. Team members
and customers work together in concurrent
design sessions that last from two to four
hours. During each session, they explore
alternative approaches and gain insights
into the design drivers. Working together
in one room with the right tools and procedures vastly reduces the time required to
complete a study and enables the design
team to address customer questions and
smoothly accept redirection from the customer if it becomes necessary. Two to five
sessions spread over a week or two are
usually needed to complete a study.
The focus of the final phase, postsession
wrap-up, is the creation of a report documenting the study. This report is published
within three or four weeks after completion of the design sessions. The customer
usually contributes a section describing
the mission.
Each participant in a CDC study has a
specific role. In addition to the participants
who bring their technical expertise to a
project, some team members must exercise
critical administrative skills to move the
study forward. Among the most important
roles are the facilitator and the study lead.
It is the facilitator’s responsibility to
keep all hardware and software, including
the computer network, up and running and
to quickly resolve any interface problems
that arise with the spreadsheet models.
The number of people involved in a study
and the rapid pace of the sessions make it
essential that all supporting equipment and
software perform reliably. The facilitator
is also involved in training new team
members to be effective participants in the
CDC process.
The study lead guides the customer and
the technical experts through each step in
the CDC process. It is the lead’s responsibility to ensure that customer expectations
are realistic and are met. The customer
must understand what he or she will get
out of the CDC process, how it works,
what the customer’s role in the process is,
and what the team needs from the customer to do its work.
The customer is the focus of everyone's attention. Customers for CDC studies have included both military programs and commercial companies, but CDC also serves internal corporate customers, performing design studies for programs within Aerospace. Direct customer involvement in each step of a study is essential. It is the customer's responsibility to define the trade space to be explored and to explain the big-picture context of the study to the design team.

Aerospace specialists use linked spreadsheets during a design session. Left to right: Christopher Taylor, Eric Hall, Mark Mueller, Douglas Daughaday.
Aerospace Systems
Architecting and Engineering
Certificate Program
Courses for Technical Teams
The teamwork skills critical to CDC’s
concurrent engineering process apply
to all phases of system design, development, and deployment. Teams can
bring to bear expertise from multiple
mission partners and stakeholders to
ensure clearer, more direct communication of varying points of view, resulting in the rapid development of consensus solutions. The Aerospace
Institute has made teamwork skills an
integral part of its offerings in both
technical and business areas.
In the Institute’s Technical Education and Development curriculum, the
three-day course “Teaming for Systems Architects and Systems Engineers” serves as a knowledge-building
component of the Aerospace Systems
Architecting and Engineering Certificate Program (see page 59). This
course reinforces team concepts
through case studies and simulations.
Topics include problem definition,
communications, decision making,
conflict management, leadership, and
cross-functional team effectiveness.
The concurrent engineering
methodology critical to CDC is highlighted in the Institute’s Space Systems Design course. Part of the Space
Systems Engineering series, this
course describes the space systems
design process, how design fits into
the space-mission timeline, and how
requirements flow between the vehicle
payload, spacecraft, and ground system. The focus is on interactions and
dependencies between subsystems
and the relationships between subsystems and system elements. The student becomes familiar with methods
and tools used in the space-vehicle
conceptual design process that CDC
employs.
This sequence of images illustrates how a conceptual design evolves during the course of a CDC session. A performance analysis indicated that five was the optimum number of ballistic-missile interceptors per spacecraft carrier. An initial layout of the carrier was developed that was compatible with the
Delta II 3-meter launch-vehicle fairing. But further design iterations revealed the need for additional bus
surface area to support larger solar panels. The resulting configuration proved well suited for manifesting four carrier vehicles on a larger Delta IV 4.5-meter fairing.
The Design Session
A design session begins with team members arriving at one of the CDC facilities.
The facilitator prepares by powering up the
computers, video equipment, and audio
systems. Participants take their places in
front of their workstations and log on. Designers check over programs and data
structures—software that could include,
for example, a sensor database, a cost
model, a computer-aided drafting program.
The customer describes the objectives
of the study, which may include development of a baseline design and cost estimate as well as the identification of cost
drivers and areas of greatest technical risk.
Then the study lead distributes a list of design options that had been developed in
the presession planning meetings and
other pertinent handouts—for example, a
data sheet that describes the power profile
for the mission, the payload operations requirements, and the technology freeze date
to be assumed in the study. The study lead
moderates a brief discussion to ensure that
everyone understands the objectives.
The facilitator initializes the system parameters in each team member’s subsystem model, and team members begin
working on their designs. The facilitator
coordinates the flow of data among the
models and periodically updates the master list of design options with the latest design parameters. As team members adjust
their subsystem parameters, they exchange
ideas about design issues with their teammates and the customer. They use parametric cost models (cost-estimating relationships, equations that predict cost as a
function of one or more drivers) and many
other parameters (mass, performance, etc.)
to compare different designs. The biggest
challenge for a CDC team is to come up
with a first viable design; subsequent designs are usually easier, often just excursions from the baseline.
Design issues surface as work proceeds.
Discussions take place in side sessions
where engineers try to resolve problems
without full team involvement. Some team
members might have to spend some time
researching new design approaches or
technologies. When it becomes necessary,
the facilitator displays an individual’s
monitor on a large screen for everyone to
see the subject of discussion. And at some
points, the customer may be required to
choose between several design options before the study can progress.
Team members are given the opportunity to explain subsystem design issues so
that the entire team understands how the
design has evolved. The process continues,
with continual redefinition and reevaluation of designs. As the design session
winds up, the study lead discusses possible
next steps with the customer and begins
collecting data for the final report.

Hundreds of interceptor “garages” like the one shown here would orbit Earth as part of a national defense strategy; upon detection of a hostile missile launch, interceptors would be fired to track down and destroy the missile. This proposed system is the product of a conceptual design study performed by the Concept Design Center.
Conclusion
Thanks to CDC, the Aerospace role in
front-end engineering and architecture
studies has become more visible. CDC’s
success has ensured that the company is
widely recognized as a leader in up-front
planning and technology development for
new space systems.
CDC has become an essential part of
the systems engineering support that Aerospace provides. Six teams currently perform a total of about 12 to 18 conceptual
studies per year. CDC has become largely
self-supporting, with most of its funding
coming directly from customer studies.
CDC teams and applications continue to
proliferate. Planned future enhancements
include increased contractor involvement,
more powerful three-dimensional modeling and visualization capabilities, and geographically distributed design teams connected via the Internet.
The basic principles that have guided
CDC in its development have not changed
since its origins—reliance on documented
processes, cooperation between disciplines, and partnering with customers.
These principles, which clearly are applicable to other corporate initiatives, such as
mission assurance teams and information
networking, have made CDC a resounding
success thus far and will no doubt serve as
an excellent foundation for its future development.
Further Reading
J. A. Aguilar and A. B. Dawdy, “Scope vs. Detail: The Teams of the Concept Design Center,”
2000 IEEE Aerospace Conference Proceedings
(March 18–25, 2000).
J. A. Aguilar, A. B. Dawdy, and G. W. Law,
“The Aerospace Corporation’s Concept Design
Center,” 8th Annual International Symposium of
the International Council on Systems Engineering (July 26–30, 1998).
Capt. A. Bartolome, USAF, S. S. Gustafson,
and S. P. Presley, “Concept Design Center
Teams Explore Future Space-Based Tools,” Signal (July 2000).
A. B. Dawdy, R. Oberto, and J. C. Heim, “An
Application of Distributed Collaborative Engineering,” 13th International Conference on Systems Engineering (August 9–12, 1999).
R. Novak, “Systems Architecture: The Concept
Design Center’s Ground System Team—A
Work in Progress,” 13th International Conference on Systems Engineering (August 9–12,
1999).
S. L. Paige, “Solar Storm Sat: Predicting Space
Weather,” 2000 IEEE Aerospace Conference
Proceedings (March 18–25, 2000).
S. P. Presley and J. M. Neff, “Implementing a
Concurrent Design Process: The Human Element Is the Most Powerful Part of the System,”
2000 IEEE Aerospace Conference Proceedings
(March 18–25, 2000).
Estimating Probable
System Cost
Stephen A. Book

Basing a system cost estimate on past systems can be tricky. Analysts who do this find that the sum of the most likely costs of the elements of a space system in development does not equal the most likely cost of the entire system. Using probability distributions to treat cost estimation as a statistical process can provide estimates that are much more meaningful.
In estimating the cost of a proposed space system, cost analysts follow the adage
“What’s past is prologue.” They use costs of existing systems
to develop a cost-estimating relationship (CER),
which can help predict the cost of a new
system. The foundation of modern cost
analysis, the CER is usually expressed as
a linear or curvilinear statistical regression equation that predicts cost (the
dependent variable) as a function of
one or more cost drivers (independent variables). However, the CER
tells only part of the story.
At the beginning of a cost-estimating task,
the analyst identifies the work-breakdown
structure (WBS), a list of everything that has
to be paid for to bring a system to its full operational capability. A space system’s WBS includes high-level categories such as research,
development, and testing; operations, maintenance, and support; production; and launch.
Lower levels of the structure include software
modules, electronics boxes, and other components. In addition to CERs, the analyst bases
estimates on costs of items already developed
and produced, on vendor price quotes for off-the-shelf items, and on any other available
information that can be used to assign a
dollar value to items in the WBS.
Until recently, sponsors and managers
of space systems have expected cost analysts to provide best estimates of the costs
of various options at each project milestone and decision point, from the initial
trade-study stage to source selection,
right on to project completion. Unfortunately, “best estimate” has never been precisely defined. For example, is it the most
likely (most probable) cost, the 50-percent
confidence level cost (the dollar figure that is
equally likely to be underrun or overrun), or the
average cost? It wasn’t clear what useful information about system cost this best estimate was conveying, so the figure proved inadequate for comparing competing
options, as well as for planning system budgets. This unsatisfactory
situation led the Department of Defense (DOD) to issue formal guidance on
how system costs should be expressed and what the terminology should mean.
The typical approach to obtaining a best estimate is to view
each WBS item’s estimated cost as its most likely cost and then
roll up (sum) those estimates to arrive at a total system cost. Because high-end risks outweigh low-end uncertainties, however, a
roll-up estimate calculated this way tends to underestimate actual cost by a wide margin, leading to cost overruns
attributable purely to the mathematics of the
roll-up procedure. In fact, formal mathematical theory confirms that the sum
of the most likely WBS-item costs is
substantially less than the most
likely total cost. Experience
shows that assigning a confidence level of 30 percent or
less to a roll-up estimate is
not overly pessimistic.
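A small simulation with invented numbers makes the effect visible. Ten WBS items are each given a right-skewed triangular cost distribution; the roll-up of the item modes lands well below the median of the simulated total, and funding at the roll-up value turns out to be adequate only a few percent of the time.

```python
# Illustrative only: ten WBS items with invented right-skewed triangular
# cost distributions. The sum of the item modes falls below the median
# of the simulated total, so a roll-up budget is rarely sufficient.
import random

random.seed(0)

# (low, mode, high) triangular parameters, in millions of dollars.
items = [(8.0, 10.0, 14.0)] * 10
sum_of_modes = sum(mode for _, mode, _ in items)

totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in items)
    for _ in range(50_000)
)
median_total = totals[len(totals) // 2]
# Fraction of trials in which the roll-up would have been enough to pay:
coverage = sum(t <= sum_of_modes for t in totals) / len(totals)

print(f"sum of item modes:    {sum_of_modes:.0f}")
print(f"median of total cost: {median_total:.0f}")
print(f"P(total <= roll-up):  {coverage:.0%}")
```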
While generally thought to be a new phenomenon specific to DOD, the difficulties associated with cost-estimating uncertainty were recognized in France as early as 1952.

The solution to the problem is to treat the cost-estimating process statistically, a technique known as cost-risk analysis. Even the use of the term “most likely” indicates a statistical situation, because it implies that other, less likely estimates exist. Probability distributions are established to model the cost of each WBS item. (A probability distribution contains the possible values a random variable could take on, as well as the probability of it taking on each one of those values.) Then, correlations among these distributions are estimated, and the distributions are summed statistically, typically by Monte Carlo sampling. The result is a probability distribution of total system cost, from which one can obtain meaningful estimates of the median (50-percent confidence level), the 70th percentile (70-percent confidence level), and other relevant quantities.

The three elements at the highest level of a space-system work-breakdown structure (WBS) are space-resident satellites, a launch system, and a ground-based control system. In estimating a system's probable cost, it is important to consider how the costs of these elements (X_S, X_L, X_G) are correlated.

The cost-risk-analysis approach yields a mathematically correct most likely cost (one that is precisely defined, along with its level of confidence), as well as costs for all percentiles. Estimates at the 50-percent and 70-percent confidence levels are much more valuable to decision makers in setting program budgets than an essentially meaningless “most likely” estimate. In the early 1990s The Aerospace Corporation developed many of the mathematical procedures now applied to estimate project costs statistically.
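The procedure can be sketched in a few lines of Python. Every number here is hypothetical (the medians, spreads, and the assumed common correlation are invented for illustration), but the structure follows the text: model each top-level element as a right-skewed distribution, correlate the draws, sum them, and read confidence levels off the simulated total.

```python
# Illustrative only: Monte Carlo roll-up of three correlated, right-skewed
# element costs (satellites, launch, ground), with invented parameters.
import math
import random

random.seed(1)

# (median cost in $ millions, sigma of the log) for each element.
elements = [(300.0, 0.35), (120.0, 0.25), (80.0, 0.30)]
rho = 0.4               # assumed common pairwise correlation
n_trials = 100_000

# Equicorrelated normals: mix one shared draw with one private draw each.
w_shared = math.sqrt(rho)
w_own = math.sqrt(1.0 - rho)

totals = []
for _ in range(n_trials):
    z_shared = random.gauss(0.0, 1.0)
    total = 0.0
    for median, sigma in elements:
        z = w_shared * z_shared + w_own * random.gauss(0.0, 1.0)
        total += median * math.exp(sigma * z)    # lognormal sample
    totals.append(total)

totals.sort()
print(f"50-percent confidence cost: {totals[n_trials // 2]:.0f}")
print(f"70-percent confidence cost: {totals[int(0.70 * n_trials)]:.0f}")
```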
Once the budget has been established
and the system has been in development
for a number of months, earned-value data
become available. A measure of how much
work has been accomplished on a project,
earned-value data are typically collected by
the contractor for use in comparing actual
expenditures on scheduled tasks with the
amounts budgeted for them. Over a specific time period, usually a month or three
months, and cumulatively since the beginning of development, an earned-value
management system tracks and compares
three kinds of financial information associated with each WBS item: the budgeted
cost of work scheduled to be done on the
item during the period (the estimated cost),
the budgeted cost of work actually completed on that item during that time (the
earned value), and the actual cost incurred,
per billing, for the contractor’s work done
on the item during that time. Earned-value
data can be used to forecast an estimate at
completion at any point in the program.
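In code form, the bookkeeping for one WBS item over one period reduces to three numbers and two ratios. The figures below are invented, and the CPI-based estimate at completion is one common forecasting convention rather than a prescribed method.

```python
# Illustrative only: earned-value quantities for one WBS item, with
# invented dollar figures.
bcws = 4.0e6    # budgeted cost of work scheduled this period
bcwp = 3.2e6    # budgeted cost of work actually completed (earned value)
acwp = 3.9e6    # actual cost incurred, per billing

cpi = bcwp / acwp    # cost efficiency: dollars earned per dollar spent
spi = bcwp / bcws    # schedule efficiency: work done vs. work planned

budget_at_completion = 48.0e6
# One common forecasting convention: assume current efficiency persists.
eac = budget_at_completion / cpi

print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}, EAC = ${eac / 1e6:.1f} million")
```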
Measuring the Quality of a
Cost-Estimating Relationship
How does a cost analyst derive a CER for a
WBS item? Let’s say the item under consideration is a satellite’s solar-array panels.
Cost and technical data on solar arrays that
have already been produced must be organized into a database. For each array, cost is
tracked against appropriate technical characteristics such as weight, area, and storage
capacity. Algebraic relationships between
cost and these potential cost drivers are
compared to determine which relationship
is the optimal predictor of solar-array cost.
It is not a priori obvious which criteria are
the best to use for assessing CER appropriateness, nor will all cost analysts agree on
the best criteria. Three statistical criteria,
however, lie at or near the top of almost
everyone’s list:
• percentage standard error of predictions made by the CER of values in the
database: root-mean-square of all percentage errors made in estimating values of the database using the relationship (a one-sigma number that can be
used to bound actual cost within an interval surrounding the estimate with
some degree of confidence)
• net percentage bias of predictions of
the values in the database: algebraic
sum, including positives and negatives,
of all percentage errors made in estimating values of the database using the
CER (a measure of how well percentage overestimates and underestimates
of database actual costs are balanced)
• correlation between estimates and actual costs (CER-based predictions and
cost values in the database): If the relationship were a perfect predictor of the
actual cost values of elements of the
database, a plot of estimates against the
actual costs would follow a 45-degree
line quite closely (correlation, a statistical measure of the extent of linearity in
a relationship between two quantities,
would be high if estimates tracked actual values but low if they did not).
A WBS is a hierarchical list of all items that must be paid for to bring a system to its full operational capability. A space system's WBS includes high-level categories such as launch vehicle, as well as low-level items such as engines and software. The accompanying diagram breaks a launch vehicle into structure and mechanical (structure, interstage, adapter), thermal, propulsion (main propulsion, engines), avionics (instrumentation, guidance and control, data handling, communications, electrical power, electrical wiring, software), payload fairing, reentry (protection, landing system), and other elements.
Standard error is better expressed in percentage terms than in dollars. Using percentage to express standard error in cost-estimating offers stability of meaning
across a wide range of programs, time
periods, and situations. An error of 40 percent, for example, retains its meaning
whether the analyst is estimating a $10,000
component or a $10 billion program. Conversely, a $59,425 error is huge when reported in connection with a $10,000 component, but insignificant with respect to a
$10 billion program. Even in less extreme
cases, a standard error expressed in dollars
often makes a CER virtually unusable at
the low end of its data range, where relative magnitudes of the estimate and its
standard error are inconsistent.
Similarly, in the case of bias, a dollar-valued expression is not as informative as
an expression in terms of percentage of the
estimate, because a particular dollar
amount of bias would not have the same
impact on all values in the database.
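The three criteria are easy to state in code. In the sketch below, a power-law CER of the form cost = a * weight^b is fitted in log space to a small invented solar-array database, then scored by percentage standard error, net percentage bias, and the correlation between estimates and actuals. The data and the fitted form are hypothetical.

```python
# Illustrative only: fit a power-law CER (cost = a * weight^b) to an
# invented solar-array database, then apply the three quality criteria.
import math

weights = [45.0, 60.0, 80.0, 105.0, 140.0]   # kg
actuals = [9.1, 11.8, 14.2, 18.9, 23.5]      # $ millions

# Ordinary least squares in log space gives the power-law coefficients.
n = len(weights)
lx = [math.log(w) for w in weights]
ly = [math.log(c) for c in actuals]
mx, my = sum(lx) / n, sum(ly) / n
b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
     / sum((x - mx) ** 2 for x in lx))
a = math.exp(my - b * mx)

estimates = [a * w ** b for w in weights]
pct_err = [(e - c) / c for e, c in zip(estimates, actuals)]   # signed

# Percentage standard error: root-mean-square of the percentage errors.
pct_std_err = math.sqrt(sum(p ** 2 for p in pct_err) / n)
# Net percentage bias: algebraic sum of the percentage errors.
net_bias = sum(pct_err)
# Correlation between CER-based estimates and database actuals.
me, ma = sum(estimates) / n, sum(actuals) / n
corr = (sum((e - me) * (c - ma) for e, c in zip(estimates, actuals))
        / math.sqrt(sum((e - me) ** 2 for e in estimates)
                    * sum((c - ma) ** 2 for c in actuals)))

print(f"CER: cost = {a:.3f} * weight^{b:.3f}")
print(f"percentage standard error:   {pct_std_err:.1%}")
print(f"net percentage bias:         {net_bias:+.1%}")
print(f"estimate-actual correlation: {corr:.3f}")
```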
Cost-Risk Analysis
“Cost-risk analysis” is the term used by
cost analysts for any estimating method
that treats WBS-item costs and total-system costs as random variables rather
than deterministically derived numbers.
The term implies that, for any deterministic estimate, there is some degree of risk
that the system will be unable to be delivered or meet its stated objectives at that
particular funding level. Cost-risk analysis
recognizes that a mathematical probability of success is associated with each deterministic cost estimate.
Statistics at a Glance
Probability distribution
A function that describes the probabilities of possible outcomes in a “sample space,” which is a set that includes all
possible outcomes.
Random variable
A variable that is itself a function of the result of a statistical
experiment in which each outcome has a definite probability
of occurrence.
Determinism
The theory that phenomena are causally determined by preceding events or natural laws; hence in this context, a "deterministically derived number" can be a system cost value derived from the cost of an earlier system.
Standard deviation (sigma value)
An index that characterizes the dispersion among the values
in a population; it is the most commonly used measure of the
spread of a series of values from their mean.
Bias
The expected deviation of the expected value of a statistical
estimate from the quantity it estimates.
Correlation
A measure of the joint impact of two variables upon each
other that reflects the simultaneous variation of quantities.
Percentile
A value on a scale of 100 indicating the percent of a distribution that is equal to or below it.
Monte Carlo sampling
A modeling technique that employs random sampling to simulate a population being studied.
Root-mean-square
The square root of the arithmetic mean of the squares of a
set of numbers; a single number that summarizes the overall
error of an estimate.
Are costs really random? The formal
framework for working with a range of
possible numeric values is the probability
distribution, the mathematical signature of
a random variable. Modeling costs as random variables does not imply they are random. It reflects how an item’s cost results
from a large number of very small influences, whose individual contributions we
cannot investigate in enough
detail to precisely calculate
the total cost. It is more efficient to recognize that virtually all contributors to cost
are uncertain and to find a
way to assign probabilities
to various possible ranges.
Consider coin tossing. In
theory, if we knew all the
physics involved, we could
predict with certainty
whether a coin would land
heads or tails; however, the influences acting on the coin are too complex for us to
understand in enough detail to calculate
the parameters of the coin’s motion. So,
instead, we bet that the uncertainties will
average out in such a way that the coin will
land heads half the time and tails the other
half. It is more efficient to model the physical process of coin tossing—which is in
fact deterministic—as if it were a random
statistical process and to assign probabilities of 0.50 to each of the possible outcomes. System cost can be similarly represented as a random variable rather than a
fixed number, because cost is composed of
many very small pieces, whose individual
contributions to the whole cannot be
specified in sufficient detail for precise estimation of the whole.
Standard accounting technique considers total system cost to be the sum of the
costs of all WBS items, so it first requires
us to estimate the most likely cost (the
mode, the most frequently occurring value)
of each item and then to sum those costs.
While this procedure seems reasonable, the
roll-up is almost sure to be
very different from the actual
most likely value of the total
cost. Statistical theory shows
that the sum of modes does
not generally equal the mode
of the sum. Because of the
preponderance of high-end
risks (with more probability
lying above the best estimate
than below it), most cost
probability distributions are
not symmetric about their
modes. As a result, the sum of the modes is
usually considerably smaller than the mode
of the sum.
The practical impact of this is that estimates obtained by totaling most likely costs
of WBS items tend to significantly underestimate actual cost. Examples are available to illustrate what kinds of errors might
occur in typical cases if these mathematical
peculiarities are ignored. They show that a
roll-up estimate typically has a probability
of 30 percent or less of being sufficient to
fund the program. Here as elsewhere, shortcutting proper statistical procedures leads
to erratic, unpredictable results.
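This effect is easy to reproduce numerically. The following minimal sketch (with made-up numbers, not figures from any actual program) sums right-skewed triangular WBS-item costs by Monte Carlo and compares the naive roll-up of modes with the simulated distribution of the total:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical WBS: 20 items, each with a right-skewed triangular cost
# distribution (low = 8, mode = 10, high = 20, in millions of dollars).
low, mode, high = 8.0, 10.0, 20.0
n_items, n_trials = 20, 100_000

samples = rng.triangular(low, mode, high, size=(n_trials, n_items))
totals = samples.sum(axis=1)

roll_up = n_items * mode  # the naive sum of most likely costs
print(f"Sum of modes (roll-up):  {roll_up:.0f}")
print(f"Mean of simulated total: {totals.mean():.0f}")
# Probability that the roll-up would be enough to cover the actual total:
print(f"P(total <= roll-up):     {(totals <= roll_up).mean():.3f}")
```

With these particular (artificial) skews, the roll-up of 200 has almost no chance of covering the total, whose mean is near 253; milder skews yield the 30-percent-or-less figure cited above.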
Cost Correlation Between Items
Correlation among WBS-item cost distributions contributes significantly to the
uncertainty of the total system cost; this
has been realized since the probabilistic
approach to cost analysis became de
rigueur over the past decade in response
to DOD-issued requests for proposals. Understanding the degree of uncertainty in an estimate is a necessary part of cost analysis, and representing system cost as a random variable is precisely what allows that uncertainty to be modeled appropriately.
Because risks faced in working on different WBS items are often correlated, ignoring correlation in statistical computations makes the spread of the cost
distribution narrower than it should be.
Failing to account for correlation therefore
deceives the analyst by making an estimate appear less uncertain than it really is.
The most universal statistical descriptor of
a random variable’s degree of uncertainty
(spread) is its standard deviation (sigma
value), and pairwise correlations between
random variables are significant contributors to the magnitude of the sigma value of
their sum. This makes correlation between
program-item costs a critical factor in the
estimation of total system cost uncertainty.
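This contribution can be made explicit with the variance of a sum: for n items that each have sigma σ and share a common pairwise correlation ρ, the total-cost sigma is σ·sqrt(n + n(n − 1)ρ), whereas assuming ρ = 0 gives only σ·sqrt(n). A small sketch of the resulting understatement, matching the n = 30, ρ = 0.2 example called out in the accompanying graph:

```python
import math

def sigma_underestimate(n: int, rho: float) -> float:
    """Fraction by which the total-cost sigma is understated when the
    pairwise correlations rho among n equal-sigma WBS items are assumed
    to be zero."""
    true_var = n + n * (n - 1) * rho   # unit variances plus covariance terms
    return 1.0 - math.sqrt(n) / math.sqrt(true_var)

print(f"{sigma_underestimate(30, 0.2):.0%}")   # 62%, i.e., about 60 percent
```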
Cost estimating would be much simpler
if there were no interelement cost correlations; unfortunately, this is not the case.
Consider the example of a space system.
At the highest level of the WBS, there are
three elements: space-resident satellites, a
launch system, and a ground-based control
and data-analysis system.
Cost Estimating at a Glance
Cost-estimating relationship (CER)
A mathematical equation that predicts cost as a function of
one or more drivers.
Net percentage bias
Algebraic sum of all percentage errors made in estimating values of the database using the CER.
Work-breakdown structure (WBS)
A hierarchical list of everything necessary to bring a system
to its full operational capability.
Most likely cost
The mode, the most probable value.
Cost-risk analysis
An estimating method that treats WBS-item costs and total-system costs as random variables rather than deterministically derived numbers.
Earned-value management
A set of procedures used to track program expenditures
and their relationship to the amount of work that has been
accomplished.
Percentage standard error
Root-mean-square of all percentage errors made in estimating values of the database using the CER.
Estimate at completion (EAC)
An estimate of what the final cost of a program will actually be.
Cost-performance index (CPI)
A measure of the efficiency at which dollars are being spent on
a project.
Schedule-performance index (SPI)
A measure of the rate at which work is being completed on a
project.
Schedule-cost index
The product of the cost-performance index and the schedule-performance index.
[Graph: percent by which the total-cost sigma value is underestimated (vertical axis, 0 to 100) versus the actual correlation ρ (horizontal axis, 0 to 1), with one curve for each of n = 10, 30, 100, and 1000 WBS items in the roll-up.] This graph illustrates the importance of working with the numeric correlations between WBS items. Assuming these correlations to be zero causes a detrimental effect on the estimation of total-cost uncertainty. Shown is the percentage by which the sigma value (standard deviation) of the total-cost distribution is underestimated, assuming WBS interelement correlations to be zero instead of the actual value (usually represented by ρ, the Greek letter rho). As the four curves show, the percent by which sigma is underestimated also depends on the number of WBS items for which the pairwise correlations are incorrectly assumed to be zero. For example, if n = 30 WBS items, and all correlations between WBS items (ρ) are 0.2, but the estimator assumes they are all zero, the total-cost sigma value would be underestimated by about 60 percent. (This is meant to be a generic illustration and therefore is only approximately true in any specific case. It has been assumed that the sigma values for the WBS items are the same throughout the entire structure.)
The respective costs XS, XL, and XG may be positively correlated for several reasons. An increase in size, weight, and number of satellites to be placed in orbit results in an increase in launch costs, either through the number of launches required or the needed capability of the individual launch vehicles. An increase in number and data-gathering capability of the satellites forces an increase in ground-operations costs, either through the complexity of the tasking and control system software or the number and size of ground-station facilities.
On the other hand, XS, XL, and XG may be negatively correlated for different reasons. Reducing the complexity of onboard satellite software and communications hardware may increase ground costs by complicating the ground software while, at the same time, decreasing launch costs as a result of reduction in size of on-orbit hardware.
Further down into the WBS, costs are more highly correlated because they correspond to specific items that physically occupy adjacent locations within the satellites or ground stations or to software packages that operate specific pieces of hardware. In any particular case, the actual interelement correlations have to be estimated along with the element costs themselves.
[Figure: sample triangular probability distribution of WBS-item cost, based on estimation of optimistic (L), most likely (M), and worst-case (H) costs. All statistical properties of the triangular distribution are determined by the lowest possible cost L, the most likely cost M, and the highest possible cost H.]
While it may be difficult to justify use
of a specific numeric value to represent the
correlation between two WBS-item costs,
it is important to avoid the temptation to
omit the correlation altogether when a precise value for it cannot be established.
Such an omission will set the correlation
in question to the exact value of zero,
whereas positive values of the correlation
coefficient tend to widen the total-cost
probability distribution and thus increase
the gap between a specific cost percentile
(e.g., 70 percent) and the best-estimate
cost. Therefore, using reasonable nonzero values, such as 0.2 or 0.3, generally leads to a more realistic representation of total-cost uncertainty. Aerospace was the first organization to advocate vigorously in Washington, in the early 1990s, that account be taken of the significant impact that correlation exerts on cost estimating, and it has developed most of the practical methods currently in use for dealing with correlation-related issues.
Obtaining Cost Percentiles
Cost-risk analysis comprises a series of
engineering assessments and mathematical techniques whose joint goal is to measure the degree of confidence attached to
any particular estimate of system cost. A
three-step procedure built upon results of a
technical-risk study typically forms the
cost-risk analysis. First, an engineering assessment of the technologies involved in
each subsystem leads to probability distributions of subsystem costs. Second, these
subsystem cost distributions are correlated
and combined to generate a cumulative
distribution of total system cost. Finally,
once the cumulative distribution has been
established, the 50th, 70th, 90th, and other
cost percentiles can be read off the graph.
Based on engineering assessments, the
cost-estimation process is carried out by
assigning low, best, and high cost estimates to each item in the WBS. These
three estimates define a triangular probability distribution. All statistical properties
of this distribution are uniquely determined by three parameters: the lowest possible cost L, the most likely cost M, and
the highest possible cost H. The low estimate specifies subsystem cost under the
most optimistic assumptions concerning
development and production capabilities.
The most likely estimate is typically
derived from the output of a CER-based
cost model or other appropriate estimating
procedure such as analogy or engineering
buildup. The high-end cost encompasses
the impacts of all technical risks faced in
developing and producing the subsystem.
Translating qualitative and quantitative assessments of technical risk into dollars to
determine a realistic high-end cost typically requires extensive interactions of cost
analysts with engineers knowledgeable in
the technical state of the art.
System-design concepts usually contain
physical descriptions or lists of engineering
requirements that may be translatable into
dollars. Such translations may be derived
from knowledge of technical precedents on
which the concept is based. Alternatively,
they may be derived from the fact that such
precedents are lacking or have not been
successfully pursued in the past, despite expenditures of known amounts of funds.
The technical complexity of unproved implementation methods is assessed using scales of increasing difficulty, complexity, or uncertainty analogous to related events in the historical record. These technical-risk measurements may then be translated
into cost risks, based on analogous cost experience or state-of-the-art relationships
among cost, technical difficulty, and pace
of development. Management and control
of costs may then be implemented in accordance with a realistic understanding of the
primary source of risk, namely technical
difficulty.
After probability distributions of individual WBS-item costs have been established,
the next step is Monte Carlo random sampling from each subsystem’s distribution
and combining these random numbers in a
logical way to produce a representation of
the cumulative distribution of total system
cost. The ultimate objective of cost-risk
analysis, the ability to read off percentiles
of total system cost, is thus achieved.
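A minimal sketch of this roll-up follows, with illustrative (L, M, H) triples of my own choosing. The items are drawn independently here for simplicity, whereas a real analysis would also induce the interelement correlations discussed earlier, for example through a correlated sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000

# Hypothetical WBS items as (lowest L, most likely M, highest H) costs, $M.
wbs_items = [(40, 55, 95), (20, 25, 45), (10, 12, 30)]

draws = np.column_stack(
    [rng.triangular(L, M, H, size=n_trials) for (L, M, H) in wbs_items]
)
total = draws.sum(axis=1)  # one simulated total-system cost per trial

for p in (50, 70, 90):
    print(f"{p}th percentile of total cost: {np.percentile(total, p):.1f}")
```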
Trade Studies and
Source Selections
For programs in progress, probabilistic information allows budget planning to be
based on the likelihood that any proposed
dollar amount will be adequate to fund the
program. Prior to formal program initiation, trade studies are typically undertaken
to find out whether a certain type of system
is feasible from the operational and cost
points of view.
[Figure: WBS-item triangular cost distributions are merged into a normal total-cost distribution; the roll-up of most likely WBS-item costs falls below the most likely total cost. Once the cost analyst has established probability distributions of individual WBS-item costs, Monte Carlo sampling is carried out from each subsystem’s distribution. The random numbers thus generated are combined in a logical way to produce a representation of the cumulative distribution of total-system cost.]
Additionally, source selections are conducted to evaluate system
approaches to a problem proposed by
different contractors (under acquisition reform, the program for improving and accelerating government contracting and procurement while reducing costs, the
approaches may possibly even meet different sets of requirements).
Timeliness and simplicity are key requirements of analyses undertaken in support of trade studies and source selections
because not much technical detail is available about the system under study during
either phase. In both cases, a decision has
to be made by someone who has a very
limited factual database. For trade studies
and source selections, probabilistic information allows candidate systems to be
compared on a level playing field; the go-ahead decision (in a trade study) or contract award (in a source selection) can be made on the basis of, say, the 50th percentile cost of each candidate.
[Figure: sample lognormal probability distribution, with mode 2000, median 3000, and mean 3674 (dollars); 50 percent of the probability lies at or below the median. A random variable has a lognormal probability distribution if its natural logarithm has a normal probability distribution; a normal distribution is the familiar bell curve, a continuous distribution that is symmetric about its mean. Unique mathematical characteristics of the triangular and lognormal probability distributions make them both especially applicable to cost analysis at the trade-study and source-selection stages. The asymmetrical lognormal distribution, a good model for the statistical sum of a number of triangular distributions, is an excellent choice for representing total-cost distributions of systems that are to be compared on the basis of their relative cost. This is because the ratio of two independent lognormals is itself lognormally distributed.]
[Table: NASA technology readiness level (TRL) scale, from most to least mature.
TRL 9: System verified by successful mission
TRL 8: System flight-qualified through test
TRL 7: System prototype demonstrated in space environment
TRL 6: System demonstrated in relevant environment (ground or space)
TRL 5: Component and/or breadboard validation in a relevant environment
TRL 4: Components validated in laboratory
TRL 3: Analytical and experimental critical function, characteristic proof-of-concept
TRL 2: Technology concept and application formulated
TRL 1: Basic principles observed and reported
The scale spans program phases from basic technology research and feasibility verification through technology development, technology demonstration, and system/subsystem development to system test, launch, and operations.]
NASA technology readiness level (TRL) scale. This tool provides information for determining the worst-case high-end cost of a WBS item. For each spacecraft, aircraft, or payload subsystem under study, an engineer assigns one of the TRL indexes to that subsystem, then compares that index with the indexes of each item in the database that supports the cost estimate. The engineer can then derive the triangular distribution of cost for that subsystem from the relationship between the average TRL index of the database and the TRL index of the subsystem under study. Each time a new program’s cost is estimated, its TRL level will likely be higher than what it had been, if progress is being made. When a new program is at a lower level than the database, its cost will be more uncertain and its cost probability distribution will tend to range far from the estimate derived from the database. As work proceeds, though, the new program’s TRL level should eventually exceed the average level of the database. Then its cost will be less uncertain, so its cost probability will be concentrated somewhat closer to the database estimate.
But a nagging question remains: “What
if System A, the lower-cost option at the
50-percent confidence level, faces risk
issues that make its 70th-percentile cost
higher than that of System B?” In other
words, System B would be the lower-cost
option if the cost comparison were made
at the 70-percent confidence level, while
System A would be the lower-cost option
if the cost comparison were made at the
50-percent level. In this classic situation,
the decision maker has to choose between
a low-cost, high-risk option and a high-cost, low-risk option. To take account of all possible risk scenarios, the decision maker can make use of all cost percentiles simultaneously, namely the entire cost probability distribution of each candidate system
(which reflects the candidate system’s entire risk profile), not simply the 50-percent
or the 70-percent confidence cost. As it
turns out, the expression of system cost in
terms of a probability distribution makes it
possible to estimate the probability that
System A will be less costly than System
B, and that probability can be part of the
basis on which the decision is made.
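Under the lognormal model shown earlier, that probability has a closed form: if the two candidates’ total costs A and B are independent lognormals, then ln A − ln B is normal, so P(A < B) = Phi((muB − muA) / sqrt(sigA^2 + sigB^2)), where each mu and sig describe the natural log of the cost. A brief sketch with made-up parameters:

```python
from math import erf, log, sqrt

def p_a_cheaper(mu_a: float, sig_a: float, mu_b: float, sig_b: float) -> float:
    """P(A < B) for independent lognormal costs A and B; mu and sig are the
    mean and standard deviation of the natural log of each cost."""
    z = (mu_b - mu_a) / sqrt(sig_a**2 + sig_b**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

# Hypothetical: System A is cheaper at the median but riskier (larger sigma).
print(f"P(A < B) = {p_a_cheaper(log(800), 0.40, log(900), 0.15):.2f}")  # ~0.61
```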
Learning Rates
Standard cost-estimating practice involves
the application of a cost-improvement factor, or learning rate, to account for management, engineering, and/or production improvements that save money as successive
units are produced. Lack of credible analogous or applicable historical data, however,
makes it difficult or even impossible to
determine in advance exactly what an accurate learning rate will be in any particular estimating context. Nevertheless, the
estimator's choice of learning rate exerts a
major impact on the estimate of the total
spending profile of a large production
program. Even if nonrecurring and first-unit production costs are estimated precisely, small variations in the learning rate
will substantially outweigh all other contributions to uncertainty in the total system
estimate.
This is especially true in cases of large-quantity procurements, such as aircraft,
launch vehicles, or proposed constellations of numerous satellites. In the case of
50 units, for example, the average-unit-cost estimate will be 46.6 percent lower at
an 85-percent learning rate, compared
with what it would be at a 95-percent
learning rate. For 200 units, the estimate
will be 57.3 percent lower at an 85-percent
learning rate versus a 95-percent learning
rate. In the case of 5000 units, the estimate
will be 74.5 percent lower at an 85-percent
versus a 95-percent learning rate. The
learning rate should therefore be treated as
another source of cost risk, with optimistic, most likely, and pessimistic learning rates factored into the total system cost
probability distribution.
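The quoted percentages are consistent with a cumulative-average learning curve, in which the average unit cost of the first N units is proportional to N^b with b = log2(learning rate); that convention is my inference from the numbers, since the text does not state the formula. A short sketch that reproduces the figures above:

```python
from math import log2

def avg_unit_cost_factor(n_units: int, learning_rate: float) -> float:
    """Average unit cost of the first n_units, relative to the first unit,
    under a cumulative-average learning curve."""
    return n_units ** log2(learning_rate)

for n in (50, 200, 5000):
    ratio = avg_unit_cost_factor(n, 0.85) / avg_unit_cost_factor(n, 0.95)
    print(f"{n:5d} units: 85% curve yields a {1 - ratio:.1%} lower estimate")
# -> 46.6%, 57.3%, and 74.5%, matching the text.
```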
Allocating Risk Dollars
for Reserve
Because users of common estimating
methods often underestimate actual project cost, a management reserve fund is a
smart idea. This fund is put in place to help
overcome unanticipated contingencies that
may deplete the budget prior to project
completion. Percentiles of the cost probability distribution can serve as guidelines
for the magnitude of the appropriate management reserve. Suppose that the number
submitted as the best estimate falls at the
40th-percentile level of the cost probability distribution. Depending on the importance of the project, a prudent reserve
might consist of the funding required to
bring the total available dollars to at least
the 50th, 70th, or even 90th percentile.
Risk dollars in the management reserve
pool may sometimes constitute a large percentage of the budgeted best-estimate
funding base. Funding agencies are reluctant to set aside such large amounts of
money for management reserve, believing
that “slush funds” lead to waste and slack
management. It is therefore advisable to
justify such requests by providing a logical
allocation of the requested risk dollars among the various project elements. A specific WBS-based cost-risk analysis can profile a probable need for additional monies beyond those included in the best estimate.
[Figure: cost (millions of dollars) versus learning rate (70 to 95 percent). Cost-estimating practice involves the application of a learning rate to account for improvements that save money as successive units are produced. This graph illustrates how cost can vary at different learning rates.]
Because a WBS item’s need for risk
dollars arises out of uncertainty in the cost
of that item, a quantitative description of
that need must serve as the mathematical
basis of the risk-dollar computation. In
general, the more uncertain
the cost, the more risk dollars will be needed to cover a
reasonable probability (e.g.,
0.70) of being able to complete that element of the system. Items whose cost distributions have relatively high
probabilities of exceeding
their own most likely estimates will need
more risk dollars. Methods were developed at Aerospace to allocate management
reserve properly, taking into account not
only the skewness of each item’s probability distribution, but also correlations between the items, so that management reserve dollars will not be assigned to do
double duty.
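As a deliberately simplified illustration only (the Aerospace methods mentioned above treat skewness and correlation more carefully than this), one might spread a reserve in proportion to how far each item’s 70th-percentile cost sits above its most likely cost:

```python
import numpy as np

def allocate_reserve(reserve: float, item_samples: np.ndarray,
                     item_modes: np.ndarray, pct: float = 70.0) -> np.ndarray:
    """Split a management reserve among WBS items in proportion to each
    item's gap between its pct-th percentile cost and its most likely cost.
    item_samples is an (n_trials, n_items) Monte Carlo array. Illustrative
    allocation rule only, not the Aerospace method."""
    need = np.percentile(item_samples, pct, axis=0) - item_modes
    need = np.maximum(need, 0.0)   # items already below their mode need nothing
    return reserve * need / need.sum()
```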
Estimates at Completion Based on
Earned Values
Once a program is under way, program
managers must monitor how work is being done and money is being spent.
Earned-value management is a specific,
well-defined set of procedures used in program control to track expenditures and
their relationship to the amount of work
that has been accomplished. Earned-value-management documentation compares outflow of program funds with completion of various work packages against which the funds have been budgeted. This comparison, used properly, allows program-control personnel to quickly spot overruns
and possible schedule discrepancies. In
addition, earned-value-management data
are used to calculate estimates at completion at any stage in the program. Despite
the wealth of data that earned-value-management systems bring to bear on the
estimating process, they have not been able
to circumvent the statistical nature of cost
that, for at least the past decade, has been
the driving force behind the development
and application of cost-risk analysis.
The two main quantities formally
tracked by earned-value-management systems are cost variance and schedule variance. Cost variance is the difference between the amount of money budgeted for
work actually completed (or completed
over some time period, such as a month)
and the amount of money actually spent to
do that work, regardless of how much work
was supposed to get done during that period. Schedule variance is the difference
between the budgeted cost of the work
completed (or completed over some period) and the budgeted cost of the work
planned for that time period, regardless of
the amount actually expended.
Mathematically related to
the cost and schedule variances, although calculated
slightly differently, are two
other quantities: the cost-performance index and the schedule-performance index.
The cost-performance index is a measure of
the efficiency at which dollars are being
spent on the project; for example, a cost-performance index of 0.90 means 90 cents
worth of work is getting done for every dollar spent. The schedule-performance index
measures the rate at which work is being
completed; a schedule-performance index of 0.90 means that 90 percent of the work scheduled for the time period in question is getting done.
Focusing on the problem of using
earned-value data to calculate estimates at
completion, Professor D. S. Christensen of
Southern Utah University tracked the
historical performance of a number of
common methods of making estimates-at-completion calculations. His research led him to conclude that final program cost almost always falls between estimates at completion based on two earned-value-derived indexes, the cost-performance index and the schedule-cost index. (The latter is the product of the cost-performance index and the schedule-performance index.) Furthermore, he found that actual
Aerospace Systems
Architecting and Engineering
Certificate Program
Courses in Cost Estimation
The Aerospace Institute offers several
courses in the field of cost analysis.
Cost Estimation and Analysis
The logical foundations, methods, and
terminology of cost analysis. Historical
cost data on space, launch, and
ground systems are described, and
work-breakdown structures are introduced. The course explains the adjustment of raw data to establish a
consistent data set for application to
cost-model development and cost-estimating tasks. Various cost-estimating techniques are discussed.
The course concludes with a description of cost models in use at Aerospace and associated organizations.
Cost-Risk Analysis
An overview of cost-risk analysis—the
techniques used to represent costs in
the form of probability distributions.
Primary topics include rolling up costs
of work-breakdown-structure elements, accounting for correlation
among element costs, and allocating
risk dollars back to elements. Also
covered are methods for using cost
risk as a criterion or figure of merit in
trade studies and source selections,
the use of NASA technology readiness levels to estimate cost risk, and
the impact of learning rate on a production cost estimate. The case is
made for using cost-risk analysis to
estimate probabilities of cost overruns
of various magnitudes.
Vital Issues in Earned-Value
Management
The capabilities and deficiencies of
earned-value management. This
course goes beyond formal earned-value-management theory, elucidating
the conclusions that organizations can
justifiably draw about program
progress from earned-value-data
compilations. It focuses on what program managers and their staffs must
know about earned value to make
appropriate, valid program-control
decisions.
[Figure: the basic steps in estimating probable system cost, with detailed expansion of the processes for performing cost-risk analyses and developing estimates at completion. System development: undertake trade study or conduct source selection; adopt specific hardware and software designs; begin system development; complete system implementation. Cost-risk analysis: assess subsystem technologies; assign estimates to WBS items; form probability distributions; perform Monte Carlo sampling; combine the random numbers into a cumulative distribution; read off cost percentiles; identify WBS items needing risk dollars; allocate risk dollars. Estimates at completion: monitor work done and money spent; track cost variance and CPI and schedule variance and SPI; derive floor and ceiling EACs; associate confidence levels with EACs; identify WBS items needing risk dollars.]
program final cost is generally closer to the
estimates at completion based on the
schedule-cost index, which he refers to as a
ceiling to the final cost. He calls the estimates at completion based on the cost-performance index the floor to the actual
final cost, based on his conclusions that
program cost performance rarely improves
as the program proceeds to its completion.
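In the usual earned-value notation, these index-based estimates take the form EAC = actual cost of work performed + (budget at completion − budgeted cost of work performed) / index. A minimal sketch of Christensen’s floor and ceiling under that standard formulation (the formulas are the common index-based EACs; the numbers are hypothetical):

```python
def eac_floor_and_ceiling(acwp: float, bcwp: float, bac: float,
                          cpi: float, spi: float) -> tuple[float, float]:
    """Index-based estimates at completion. The remaining budgeted work
    (bac - bcwp) is inflated by an efficiency index and added to the actual
    cost to date (acwp). CPI alone gives the floor; the schedule-cost index
    CPI * SPI gives the ceiling (when SPI < 1)."""
    floor = acwp + (bac - bcwp) / cpi
    ceiling = acwp + (bac - bcwp) / (cpi * spi)
    return floor, ceiling

# Hypothetical program: $120M spent, $100M of work earned, $400M budget.
lo, hi = eac_floor_and_ceiling(acwp=120, bcwp=100, bac=400, cpi=0.83, spi=0.90)
print(f"EAC floor: {lo:.0f}, EAC ceiling: {hi:.0f}")   # 481 and 522
```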
The results of Christensen’s research,
namely the clear historical
precedents for estimating a
floor and ceiling to program final cost, were recently applied
by analysts at Aerospace to
construct a statistical approach
to computing estimates at
completion. Such an approach allows us to
associate levels of confidence with various
estimates at completion, to distinguish the
dollar value of a best estimate at completion from risk dollars or management reserve, and to identify WBS items that are
most likely to require an infusion of risk
dollars. While this kind of information is
usually inferred from a detailed technical
risk analysis, it can also be derived from
earned-value data.
Summary
At several stages in the system engineering process, it is necessary to conduct a
cost analysis to assess the likely magnitude of program funding requirements.
The cost-analysis process begins at the
trade-study stage with an applicable set of CERs statistically derived from historical
cost data. After specific hardware and software designs
have been formally adopted,
it may be possible to base estimates on components that
were previously developed and produced,
vendor price quotes for existing off-the-shelf items, or other appropriate data that
can enable dollar values to be assigned to
all items to be paid for. These information
sources typically allow the analyst to estimate the most likely cost of each item, but
because many technical risk issues are often present, the sum of the items’ most
likely costs is usually significantly below
the total system’s most likely cost. This
unfortunate situation led in the early
1990s to the development of the subfield
of cost-risk analysis, a way of looking at
cost through the lens of probability and
statistics.
Once one accepts the idea of evaluating
costs probabilistically, one is locked into
the need to estimate correlations between
the cost impacts of various technical risks
because correlation is a significant driver
of total-cost uncertainty. However, while
cost-risk analysis makes demands upon
the cost estimator, it also provides benefits
to program management, particularly
when it comes to recommending a prudent
management reserve. Having in hand a
probability distribution of total program
cost, rather than just a single best estimate,
program management can propose, for example, that the basic cost estimate be budgeted at the 50-percent confidence level,
but that sufficient management reserve be
included to bring the success probability
up to 70 percent.
Finally, as the program progresses,
earned-value data on expenditures and
work accomplished provide program
management with the ability not only to
maintain current knowledge of cost overruns, but also to estimate cost at completion from inside the program itself rather
than by statistical inference from historical information on other programs.
Further Reading
R. L. Abramson and S. A. Book, “A Quantification Structure for Assessing Risk-Impact Drivers,” The Aerospace Corporation, 24th Annual DOD Cost Analysis Symposium
(Leesburg, VA, September 5–7, 1990).
R. L. Abramson and P. H. Young, “FRISKEM—
Formal Risk Evaluation Methodology,” The
Journal of Cost Analysis, 29–38 (Spring 1997).
Assistant Secretary of Defense (Program Analysis and Evaluation), Cost Analysis Guidance and
Procedures (Department of Defense DOD
5000.4-M, December 1992), pp. 2–4 to 2–6.
S. A. Book, “Do Not Sum ‘Most Likely’ Cost
Estimates,” The Aerospace Corporation, 1994
NASA Cost Estimating Symposium (Houston,
TX, November 8–10, 1994).
S. A. Book and E. L. Burgess, “The Learning
Rate’s Overpowering Impact on Cost Estimates
and How to Diminish It,” Journal of Parametrics, Vol. 16, No. 1, 33–57 (Fall 1996).
S. A. Book, “Justifying ‘Management Reserve’
Requests by Allocating ‘Risk Dollars’ among
Project Elements,” The Aerospace Corporation,
Fall 1996 Meeting of the Institute for Operations
Research and Management Science (INFORMS)
(Atlanta, GA, November 3–6, 1996).
S. A. Book and P. H. Young, “General-Error
Regression for Deriving Cost-Estimating Relationships,” The Journal of Cost Analysis, 1–28
(Fall 1997).
S. A. Book, “Why Correlation Matters in Cost
Estimating,” The Aerospace Corporation, 32nd
Annual DOD Cost Analysis Symposium
(Williamsburg, VA, February 2–5, 1999).
S. A. Book, “Do Not Sum Earned-Value-Based
WBS Estimates-at-Completion,” The Aerospace Corporation, Society of Cost Estimating
and Analysis National Conference (Manhattan
Beach, CA, June 13–16, 2000).
E. L. Burgess and H. S. Gobreial, “Integrating
Spacecraft Design and Cost-Risk Analysis Using NASA Technology Readiness Levels,” The
Aerospace Corporation, 29th DOD Cost
Analysis Symposium (Leesburg, VA, February
21–23, 1996).
D. S. Christensen, “Using Performance Indices to
Evaluate the Estimate at Completion,” The Journal of Cost Analysis, 17–24 (Spring 1994).
H. L. Eskew and K. S. Lawler, “Correct and Incorrect Error Specifications in Statistical Cost
Models,” The Journal of Cost Analysis, 105–123
(Spring 1994).
P. R. Garvey, Probability Methods for Cost Uncertainty Analysis—A Systems Engineering
Perspective (Marcel Dekker, New York, 2000).
R. Giguet and G. Morlat, “The Causes of Systematic Error in the Cost Estimates of Public
Works,” Annals of Bridges and Roads, No. 5
(September–October 1952, Paris, France).
Translated from the French by W. W. Taylor,
U.S. Air Force Project RAND, Santa Monica,
CA, March 1958.
[Photos (Lockheed Martin Corporation; U.S. Air Force, 30th Space Wing, VAFB): successful launches of five U.S. launch vehicles. Clockwise from far left: Delta II launched in 2000 from Vandenberg Air Force Base, California; Atlas vehicle carrying the DSCS III B13 spacecraft into orbit in October 1997; Scout lifting a radar calibration satellite into orbit in June 1993; an early morning liftoff of the space shuttle Discovery from Kennedy Space Center on November 27, 1989; Titan IV, the Air Force’s premier heavy lift vehicle, launched from Vandenberg Air Force Base on November 28, 1992.]
Space Launch Vehicle Reliability
I-Shih Chang
The 1993 failure of a Titan IV K-11 launch vehicle prompted the U.S. Air Force to request
The Aerospace Corporation’s participation in an analysis of space-mission failures. I-Shih Chang
led the Aerospace study, which was later expanded to support the DOD Broad Area Review of U.S.
launch failures between 1985 and 1999. This article is the second in the Crosslink series on the
history of the corporation’s role in national space programs.
When you get into your car and insert the key in the ignition, you expect the car to start. You expect
to pull away from the parking spot and drive to your destination without a problem. These expectations are based on the assumption that your vehicle is reliable—you can depend on it to behave
the same way time after time. This ideal applies to launch vehicles as well. If the vehicle has been
designed, built, and tested well, we expect it to successfully leave its launchpad. But just as personal vehicles can
fail in unforeseen ways (radiators leak, engines stall), so too can launch vehicles occasionally fail.
Launch failures have been a fact of life
for most space-faring nations since the
space age began in 1957. Because a space
mission involving a launch vehicle and a
sophisticated satellite can easily cost hundreds of millions of dollars, investigation
into why launches fail can provide valuable information for improving vehicle
systems and cost savings. Any lessons
learned from past experiences that could
serve to mitigate launch failures in the future would make such an investigation extremely worthwhile.
A space launch failure is an unsuccessful
attempt to place a payload into its intended
orbit. This definition includes all catastrophic launch mishaps involving launch
vehicle destruction or explosion, significant
reduction in payload service life, and extensive effort or substantial cost for mission
recovery. It also includes the failure of the
upper stage of a launch vehicle, up to and
including spacecraft separation on orbit.
However, this definition does not include
the failure of an upper stage released from
the U.S. space shuttle. The U.S. space shuttle is both a launch vehicle and a space vehicle. An upper stage released from the
space shuttle in orbit is considered a transfer vehicle, not a launch vehicle.
The space age began with the USSR
launch of the first artificial satellite, a
liquid-fueled Sputnik (SL-1), on October
4, 1957. At present, nine countries or consortia—the United States, the Commonwealth of Independent States (CIS, formerly USSR), the European consortium,
China, Japan, India, Israel, Brazil, and
North Korea—possess space launch systems, demonstrate space launch capability,
or conduct space launch operations.
Many current major space launch systems are based on early ballistic-missile
technology, which treated launch costs and schedules as higher priorities than launch quality and reliability. The design
of these space launch systems left much
room for improvement, as demonstrated
by launch failures of the past.
Financially, much is at stake in any kind
of space launch. A small launch vehicle,
such as the U.S. Pegasus, costs about $15
million, but a versatile, reusable launch vehicle, such as the U.S. space shuttle, costs
well over $1 billion. A small experimental
satellite can be purchased for a few million
dollars, but an advanced spy satellite or scientific satellite may cost more than $1 billion. Furthermore, the possible monetary
loss calculated for a launch failure does not
include the expense, time, and effort spent
during the recovery period or the cost of the
damage to national prestige. Analysis of
space launch failures is critical to a national
space program’s future success.
A systematic look at worldwide launch
successes as well as failures, including
scrutiny of various launch vehicle subsystems, can shed light on precise areas that
might be at the root of many problems. This
type of study can also help suggest what actions to take to address those problems.
Worldwide Space Launches
To understand space launch reliability, it is
helpful to examine the history of launches
worldwide since 1957. The progress made
in space science and engineering during
this period has been truly remarkable, as illustrated by the achievements of the United
States and the CIS/USSR, the nations that
have dominated the space launch arena.
To get an idea of how great that
progress is, consider that in 1957 the
USSR’s Sputnik 1 weighed only 83.6 kilograms, and on July 16, 1969, the U.S.’s
Saturn V, the largest and most powerful
operational rocket ever built, lofted Apollo
11, with a mass of 43,811 kilograms, to lunar orbit during the moon-landing mission. Today, the U.S. Space Transportation
System routinely launches the shuttle orbiter, weighing more than 110,000 kilograms, to low Earth orbit. The orbiter flies
like a spacecraft and lands like a glider.
The USSR was the first country to place
a satellite carrying a person into Earth orbit. Its Soyuz vehicle has been statistically
shown to be the most reliable expendable
launch vehicle in the world. Since 1957,
CIS/USSR has carried out more space
launches than all other countries combined. Between 1957 and 1999, 4378
space launches were conducted worldwide, including 2770 CIS/USSR launches
and 1316 U.S. launches.
These figures include launches carried
out individually by France and the United
Kingdom (U.K.) over 25 years ago. France
was the third country (after CIS/USSR and
the United States) to attain space launch
capability, with Diamant, a rocket that
placed a satellite in orbit in 1965. The U.K.
developed a small vehicle, Black Arrow,
which was launched successfully in 1971.
Currently, France and the U.K. participate through the European Space Agency
(ESA) in the development of the Ariane
launch systems. (The ESA is composed of
14 European nations.) The Ariane vehicles
are launched from French Guiana in South
America, which is only 5.2 degrees north
of the equator and is therefore an excellent
location for launching satellites into geosynchronous orbit. Recently, the European
consortium has been catching up to
CIS/USSR and the United States and
capturing a large share of the commercial
space launch service market formerly
dominated by U.S. launches. From 1995 to
1999, the European consortium consistently conducted 10 or more launches per year.
[Bar chart: the number of successful and failed launches for the space-faring nations of the world between 1957 and 1999: CIS/USSR 2589 successes and 181 failures; U.S. 1152 and 164; Europe 117 and 12; China 56 and 11; Japan 52 and 9; France 10 and 2; India 7 and 6; Israel 3 and 1; U.K. 1 and 1; Australia 1 and 0; Brazil 0 and 2; North Korea 0 and 1. The CIS/USSR has carried out more launches than all other countries combined.]
[Bar chart: each space-faring nation’s launch success rate as a percentage of its total launches: CIS/USSR 93.5 percent (2589 of 2770); Europe 90.7 percent (117 of 129); U.S. 87.5 percent (1152 of 1316); Japan 85.2 percent (52 of 61); China 83.6 percent (56 of 67); France 83.3 percent (10 of 12); Israel 75.0 percent (3 of 4); India 53.8 percent (7 of 13); U.K. 50.0 percent (1 of 2); Australia 100.0 percent (1 of 1); Brazil and North Korea 0.0 percent.]
China and Japan have also invested heavily in space programs. China’s government publicized its first two successful space launches—in 1970 and 1971—as great national achievements in science and engineering. The Chinese CZ-2C vehicle holds a perfect record for Chinese launchers of 11 launch successes. Over the last few years, the Chinese government has made considerable investment in improving its launch-base infrastructure to gain a greater share of the commercial space market.
In Japan, the National Space Development Agency (NASDA) and the Institute for Space and Astronautical Science (ISAS) are responsible for space research and development. The NASDA H-II and ISAS Mu-5 vehicles use state-of-the-art technology and represent the Japanese government’s commitment to becoming a major player in space. Japan had an 18-year streak (1977–1994) of consecutive successful space launches.
The remaining nations whose launches have been tracked have, for the most part, conducted space programs for very brief time periods. They have experienced mixed success records. India, undaunted by a series of technical setbacks and launch failures in its fledgling space program, allocates a significant portion of its yearly budget to space technology development. Israel’s Space Agency launched its first satellite with the Shavit launch system on
September 19, 1988. Its third satellite,
launched April 5, 1995, contains surveillance equipment designed to provide reconnaissance and military observation.
Brazil’s satellite launch attempts using the Veículo Lançador de Satélites (VLS)—“satellite launch vehicle”—from
the Alcantara launch site failed on November 2, 1997, and December 11, 1999. North
Korea claimed to have successfully
launched the small Kwangmyongsong-1
satellite into orbit by a Taepo Dong-1 vehicle on August 31, 1998. But other countries
have received no signal from it, and the
launch is considered a failure (third-stage
malfunction).
Australia launched a small Sparta (SPecial Anti-missile Research Tests, Australia)
vehicle in 1967, which was a modified U.S.
Redstone rocket. Today Australia does not
have its own launch system. The Woomera
Rocket Range in Australia is currently dedicated to weapons and sounding rocket
testing.
It is also worth noting that since the end
of the Cold War, national boundaries in the
space launch business have become less
distinct. Companies throughout the world
have been marketing CIS/USSR launch vehicles for commercial launch service: Proton by Lockheed Martin, Zenit by Boeing,
Kosmos by Cosmos U.S.A., Soyuz by a
France-Russia consortium, and Rokot by a
Germany-Russia consortium.
Space Launch Failures
Of the 4378 space launches conducted
worldwide between 1957 and 1999, 390
launches failed (the success rate was 91.1
percent), with an associated loss or significant reduction of service life of 455 satellites (some launches included multiple
payloads). A brief look at some of the most
publicized, critical launch failures around
the world will highlight the nature of system failures.
In the United States, 164 space launches
failed, with an associated loss or significantly reduced service life of 205 satellites.
Most of the U.S. space launch failures (101
out of the 164) occurred during the first 10
years of space exploration (1957–1966). In
that period, the United States was diligently attempting to catch up to the USSR,
which had gained an early lead in space
exploration. The first space launch failure
involved a U.S. Vanguard vehicle, which
exploded two seconds after liftoff on December 6, 1957.
A Brief History of Rocketry
All modern liquid rockets can be traced to the
V-2, the first real ballistic missile. An unmanned
missile guided by a gyroscopic system, it burned alcohol as fuel with liquid oxygen as oxidizer. The V-2 had a range of over 320 kilometers, and it could transport an explosive warhead that weighed a ton.
People have been building rockets in various forms for centuries. In 1232, the Chinese army shot “fire arrows”—solid rockets
propelled by a gunpowder mixture of charcoal, sulfur, and potassium nitrate (saltpeter)—at invading Mongols in the Battle of
Kai-Fung-Fu. Besides military weapons,
other applications for rockets over the years
have included fireworks, lifesaving devices
(as early as 1882), whaling operations (as
early as 1865), and signal and illumination
devices.
The basics of rocketry have not
changed. Ignited fuel burns in a combustion chamber, and the resulting gases are
forcefully expelled through a nozzle, propelling the rocket in the opposite direction.
Until the first quarter of the 20th century, all
rockets were developed using solid propellant. Today’s space systems employ both
solid-rocket motors and liquid-rocket engines to deliver payloads into orbit.
Original rocket technology did not
progress much until the early 19th century,
when Colonel William Congreve of England developed effective bomb-carrying
solid rockets. Long sticks trailed from his
rockets to provide stability. Soldiers put
Congreve’s rockets to use during the
British bombardment of Baltimore in 1814,
at Waterloo in the war with Napoleon in
1815, and in the Opium War with China in
1842. (The bombardment of Baltimore and
“the rockets’ red glare” inspired Francis
Scott Key’s famous poem “The Defense of
Fort McHenry,” adopted later as the U.S.
national anthem, “The Star-Spangled Banner.”) Congreve’s fellow Englishman,
William Hale, developed a more accurate
stickless, spin-stabilized rocket in 1844.
In the early 20th century, Wilhelm
Teodore Unge of Sweden improved the
mechanical strength and workability of
solid propellant and developed launcher-rotated rockets that could travel eight kilometers with great accuracy. Y. P. G. Le
Prieur of France invented stick-guided solid
rockets for firing through steel tubes
mounted on the wing strut of a biplane,
which were used successfully in World
War I against German observation balloons. These constituted an early version
of air-launched missiles.
Konstantin Eduardovitch Tsiolkovsky of
Russia first introduced the idea of using
rockets for space exploration in 1903. Subsequently the idea was proposed by
Robert Hutchings Goddard of the United
States in 1919, Hermann Oberth of Germany in 1923, and Robert Esnault-Pelterie
of France in 1928.
In 1926, Goddard built a successful
rocket using liquid propellant (gasoline as
fuel and liquid oxygen as oxidizer). Germans used his 1939 design of a liquid
rocket to build and test the first full-scale
ballistic missile, Vergeltungswaffe-2 (V-2,
“Weapon of Retaliation-2”), in 1942. All
modern liquid rockets can be traced to the
V-2. With a mixture of ethyl alcohol as fuel
and liquid oxygen as oxidizer, it could travel
320 kilometers. The V-2 carried warheads
from the European continent to England in
the “Siege of London” during World War II.
At the end of the war, Russia captured
V-2 manufacturing plants, including many
German rocket scientists and engineers,
and set up its own ballistic missiles and
space launch programs under the leadership of Sergei P. Korolev. Meanwhile, Wernher von Braun, the leader of the German
V-2 program, and his coworkers continued
their research on ballistic missiles and
space launch vehicle development in the
United States. During the 46 years of the
Cold War between the United States and
the USSR (September 2, 1945, through
December 26, 1991), ballistic-missile
buildup and the space race fostered the
continued growth of rocket science.
[Table: Launch Successes (s) and Failures (f), 1957–1999, year by year for the U.S., CIS/USSR, Europe, China, Japan, India, Israel, Brazil, North Korea, France, the U.K., and Australia, with cumulative success rates for 1957–99, 1960–99, 1970–99, 1980–99, and 1990–99. Worldwide totals: 3,988 successes and 390 failures.]
The failure, which
attracted tremendous public attention and
criticism in the wake of two successful
USSR Sputnik flights, was the result of low fuel-tank and injector pressures
that allowed the high-pressure chamber
gas to enter the fuel system through the
fuel-injector head. A fire started in the fuel
injector, destroying the injector and causing a complete loss of thrust immediately
following liftoff.
The U.S. Saturn V had a single failure
in the Apollo 6 mission on April 4, 1968,
when the third-stage engine failed to
restart because of fuel-injector burn-through. The versatile Space Transportation System also suffered a single launch
failure on January 28, 1986, when the
Challenger, carrying a seven-member
crew, exploded 73 seconds into flight. The
launch management had waived the
temperature-dependent launch commit criteria and launched the vehicle at a colder
temperature than experience indicated was
prudent. At such a low temperature, the
rubber O-rings in the motor case joint lost
their resiliency, and the combustion flame
leaked through the O-rings and case joint,
causing the vehicle to explode. The newly
developed U.S. commercial launch
systems, including Delta III, Conestoga,
Athena, and Pegasus, suffered launch failures during their early developmental
flights, a repeat of Vanguard, Juno, Thor,
and Atlas failures in the late 1950s and
early 1960s.
CIS/USSR experienced an impressive
number of space launches and a strong
launch success rate in the past. However,
the number of space launches and the success rate in recent years have declined,
mainly because of domestic financial
problems. From 1996 to 1999, for example, the United States conducted more
space launches than CIS/USSR for the
first time in 30 years.
[Table: U.S. Launch Vehicle Successes (s) and Failures (f), 1957–1999, year by year for STS, Titan, Atlas, Delta, Taurus, Athena, Pegasus, Saturn, Thor, Conestoga, Scout, Juno, Vanguard, and Pilot, with cumulative success rates for 1957–99, 1960–99, 1970–99, 1980–99, and 1990–99. Delta figures include TAD (Thrust-Augmented Delta); Thor figures include TAT (Thrust-Augmented Thor), LTTAT (Long Tank Thrust-Augmented Thor), and Thorad (Thrust-Augmented Thor Delta); Juno figures include Juno-I and Juno-II.]
Space launch failure in a closed society
like the USSR or the People's Republic of
China was guarded as a state secret and
not publicized in news media. Recently,
though, because of “glasnost” in Russia,
commercial competition, and requirements by the launch service insurance
company, information flow on space activities has improved dramatically. Since the
collapse of the USSR on December 26,
1991, CIS has released information on
many USSR space launch vehicle failures
that were not previously known to the
West. Making this information accessible
has provided a much more complete picture of worldwide space launches, although
the vast amount of information existing in
CIS/USSR from both successful and failed
launch operations is yet to be assimilated
by space launch communities of the world.
One CIS/USSR space launch failure involved an SL-12 Proton vehicle carrying a
Mars-96 spacecraft on November 16,
1996. The second burn of the Proton’s
fourth stage did not take place, and the
spacecraft did not reach the interplanetary
trajectory. It reentered Earth’s atmosphere
over the South Pacific Ocean. For lack of
funds, CIS launched this spacecraft without conducting a prelaunch systems
checkout at the launch site. Some of the
mechanical integration of the spacecraft
and launcher was carried out by the light
of a kerosene lantern (electrical power had
been cut off because of unpaid bills). Tight
funding also made ground control difficult, even during the critical period immediately following launch. The spacecraft
itself commanded the fourth-stage release,
indicating that it had possibly sent incorrect commands. It took 10 years to complete the $300 million Mars-96 spacecraft
carrying two dozen instruments supplied
by 22 countries. This launch failure stalled
plans for gathering valuable data about the
planet Mars.
The failures of the European Launcher
Development Organization (ELDO) Europa vehicle were reminiscent of the early
launch failures in the U.S. space program.
(ELDO was one of the predecessors of
ESA.) After terminating the Europa program, Europe spent many years planning
the Ariane launch systems, which have experienced eight failures since 1980.
A recent failure involved a new Ariane-5 vehicle, the most powerful in the Ariane
family. During its maiden flight on June 4,
1996, it veered off its flight path and exploded at an altitude of 3700 meters only
40 seconds after liftoff. The failure was attributed to errors in the design and testing
of the flight software. The flight software
was programmed for Ariane-4 launch conditions, but it was never tested in conditions that simulated Ariane-5’s trajectory.
The more powerful Ariane-5 travels at a
much faster horizontal velocity than the
Ariane-4. Significant horizontal drift
caused an overflow error in the inertial reference system (IRS) software, halted the
primary and backup IRS processors, and
resulted in the total loss of accurate flight
guidance information.
[Figure: U.S. space launch vehicles, drawn with lengths in meters. Notable among them is the Saturn V (110.6 meters), the largest, most powerful operational rocket ever built. With an impressive 259 successes out of 275 launches, Delta is the United States’ most reliable expendable launch vehicle. Vehicles shown include the Saturn V, Titan IVB, space shuttle, Atlas IIAS, Titan IIIC, Delta III, Delta II, Atlas-Centaur, Titan II, Atlas-Agena, Mercury-Atlas, Athena II, Thor, Taurus, Thor-Agena, Thor-Able Star, Juno II, Scout, Vanguard, Juno I, Athena I, Conestoga, Pegasus XL, and Pilot (NOTS-SLV).]
From 1991 to 1996, the Chinese space
launch record was marred by five failures.
The most catastrophic failure occurred
during the launch of a CZ-3B vehicle carrying a commercial satellite, Intelsat 708,
on February 14, 1996. The 55-meter-tall
CZ-3B is China’s most advanced vehicle.
On its maiden flight, the CZ-3B began to
veer off course two seconds after liftoff,
before it even cleared the tower at the Xichang launch site. The vehicle and its payload hit the ground and exploded in an inhabited area near the launch site 22
seconds after liftoff. The explosion demolished a village and a nearby military base,
and caused severe casualties and property
damage.
The cause of failure was traced to the
CZ-3B’s guidance and control subsystem.
A gold-aluminum solder joint in the output of one of the gyro servo loops failed,
cutting electrical current output from the
power module and causing the inertial reference platform of the vehicle’s guidance
and control system to slope. This caused
computers to send the vehicle veering off
the planned trajectory shortly after liftoff.
The failed module was the only one of six
similar modules that lacked conductive adhesive to reinforce the solder joint.
Japanese liquid-propellant rockets (H-II)
suffered two launch failures during 1998
and 1999. Japan’s other seven launch failures (including four Lambda-4S failures
during the period 1966–1969) involved
solid-propellant rockets. One of the failures
occurred on January 15, 1995, during the
last flight of the Mu-3S-II vehicle. At 103
seconds after launch, the vector control
thrusters, which partly control the rocket’s
pitch, began to oscillate. The rocket veered
off course at 140 seconds after liftoff.
The payload of the Mu-3S-II, a German
satellite (Express 1), was put in the wrong
orbit of 120 kilometers altitude, instead of
the intended orbit of 210 kilometers altitude. The satellite, which fell into a jungle
in Ghana after circling Earth two and a half
times, failed because of improper modeling
of the flight control dynamics relative to the
weight of the payload. (Prior to the failure,
the heaviest payload carried by the Mu-3S-II had been 430 kilograms; Express 1
weighed 748 kilograms.) Extra propellant
had been added to the three stages and to
the kick motor of the vehicle to provide extra thrust for the flight of the Express 1
satellite. The flight was the eighth and final
mission of the Mu-3S-II vehicle.
Several satellites have plunged into the Bay of Bengal since India’s space program began
in 1979. India’s Polar Satellite Launch
Vehicle (PSLV) is designed to place payloads into a polar sun-synchronous Earth
orbit. On its maiden flight on September
20, 1993, the PSLV experienced an unplanned change in pitch when the spent
second stage separated from the vehicle at
260 seconds into flight. The third and
fourth stages ignited normally, but the vehicle was unable to recover from the pitch
change and did not reach sufficient altitude. The payload was placed in a 349-kilometer orbit instead of the planned 814-kilometer polar orbit. Shifting liquid fuel
in the second stage of the vehicle may have
caused the change in the vehicle’s pitch.
Malfunction of the vehicle’s guidance system or failure of the control system to respond properly to the course deviation
could have been the cause of the failure.
Causes of Failure
Available launch-failure data reveal much
about patterns in the possible causes of
failure. Many failure causes fall into the
category of human error: poor workmanship, judgment, or launch-management decisions. Some failures are the result of defective parts. Failure can have its root in any
phase of launch vehicle development—difficulties have been noted in inadequate designs and component tests; in improper
handling in manufacturing and repair
processes; and in insufficient prelaunch
checkouts. Many past failures could have
been prevented if rigorous reliability-enhancement measures had been taken.
Launch vehicle failure is usually attributed to problems associated with a subsystem, such as propulsion, avionics, separation/staging, electrical, or structures. In
some cases failure is ascribed to problems
in another area altogether (e.g., launchpad,
ground power umbilical, ground flight
control, lightning strike), or to unknown
causes (usually when subsystem failure information is not available).
Launch vehicle failures between 1980
and 1999 have been investigated, and
launch failure causes in the United States
have been found to include fuel leaks (resulting from welding defects, tank and
feedline damage, etc.), payload separation
failures (from incorrect wiring, defective
switches, etc.), engine failure (the result of
insufficient brazing in the combustion
chamber), and loss of vehicle control (because of lightning, damaged wires that
caused shorts, and control-system design
deficiencies). In Europe and China, launch failure causes during the same period included engine failure (from combustion instability, hydrogen injector valve leakage, clogged fuel lines, etc.), short circuits, engine thrust loss, software design errors that resulted in guidance system failure, wind shear, and residual propellants.

[Table: Launch Vehicle Subsystem Failures, 1980–1999, tallied by country (U.S., CIS/USSR, Europe, China, Japan, India, Israel, Brazil, N. Korea) and by subsystem (Propulsion, Avionics, Separation, Electrical, Structural, Other, Unknown). Propulsion accounts for 64 of the 114 failures recorded worldwide.]
Statistics show that among the causes of
failure for space launch vehicles worldwide from 1980 to 1999, propulsion subsystem problems predominated. That particular subsystem appears to be the
Achilles’ heel of launch vehicles. Fifteen
of the 30 U.S. failures were failures of the
propulsion subsystem. The unknown failures in the CIS/USSR could include many
in the propulsion subsystem.
The propulsion subsystem, the heaviest
and largest subsystem of a launch vehicle,
consists of components that produce, direct, or control changes in launch vehicle
position or attitude. Its many elements include main propulsion components of
rocket motors, liquid engines, and
thrusters; combustion chamber; nozzle;
propellant (both solid and liquid); propellant storage; thrust vector actuator and
gimbal mechanism; fuel and propulsion
control components; feed lines; control
valves; turbopumps; igniters; motor and
engine insulation. Similar components are
also used as separation mechanisms in the
separation/staging subsystem.
Propulsion subsystem failures can be divided into failures in solid-rocket motors
and liquid-rocket engines. Solid-propellant
launch systems include Taurus, Conestoga,
Athena, Pegasus, and Scout. Liquid-propellant launch systems include Titan II,
Titan IIIA, Titan IIIB, Atlas (except Atlas
IIAS), and Delta DM19, A, B, and C. Hybrid launch systems, consisting of liquid-propellant and solid-propellant rockets, include STS, Titan IV, all other Titan III,
Atlas IIAS, and all other Deltas. The success rate of the propulsion subsystem in the
United States from 1980 to 1999 was 98.8
percent for solid-rocket motors and 97.5
percent for liquid-rocket engines.
Addressing the Propulsion Problem
Solid-rocket motors and liquid-rocket engines of the propulsion and separation/staging subsystems both require sets of precautionary measures to maximize reliability
and safeguard against failure. First, consider
solid-rocket motors. In the design phase, to
reduce risk it is important to apply current
analysis techniques to ensure fast, accurate,
and low-cost modeling of precise configurations prior to hardware fabrication.
Demonstrated Reliability

[Figure: Demonstrated reliability (%) versus year for each space-faring nation’s launch vehicles between 1957 and 1999, calculated from a cumulative statistical distribution function that is related to the percentage of launch successes. Separate historical and operational (current) graphs represent the United States and the CIS/USSR, the two nations with the most extensive launch histories. Panels: U.S. (Historical) and U.S. (Operational), with curves for Scout, Thor, Saturn, Delta, Juno, Vanguard, Conestoga, Pilot, Atlas, Titan, STS, Pegasus, Taurus, and Athena; CIS/USSR (Historical) and CIS/USSR (Operational), with curves for Sputnik, Polyot, Soyuz, Vostok, Molniya, Kosmos, Proton, Tsiklon, Zenit, Energia, B-1, N-1, Rokot, Start, Shtil-1, and Dnepr; Europe (Europa and Ariane-1 through Ariane-5); China (FB-1, CZ-1, CZ-2, CZ-2C/-2D/-2C/SD, CZ-2E/-2F, CZ-3/-3A, CZ-3B, CZ-4A/-4B); Japan (Lambda-4S, Mu-4S, Mu-3S/-3H/-3C, Mu-3S-II, Mu-5, N-I, N-II, H-I, H-II); and Other (Diamant, France; Sparta, Australia; Black Arrow, United Kingdom; Shavit, Israel; SLV-3, ASLV, and PSLV, India; VSL, Brazil; TPD-1, N. Korea).]
In the construction of solid-rocket motors, a number of safeguards apply to the
preparation of solid propellant:
• Upon receipt from the supplier and
prior to use, the propellant ingredient
should be checked to ensure that it
meets specifications, and the propellant
mix procedure and burn rate should be
checked for every mix before casting.
• Motors should be cast from a single lot
of material to minimize thrust imbalance of vehicles with multiple solid-rocket motors.
• Solid propellant should be cast in a vacuum, if possible, to reduce the number
and size of internal voids.
• Modern techniques (e.g., computer tomography) should be used to detect
solid-propellant defects and propellant-to-insulation bondline separation before motor assembly.
Still other steps can be taken to help increase the likelihood of solid-rocket motor launch success. Chemical fingerprinting can be implemented for rare and
sensitive chemicals such as propellant
binder, motor case resins, flexseal elastomers, and adhesives. It should be possible to schedule into the development and
qualification programs a motor-case cure
study wherein a cured case is dissected to
assess the adequacy of the process used in
case-manufacturing.
The liquid-rocket engines of propulsion
and separation/staging subsystems should
be designed and built with robustness and
with high thermal and structural margins
to allow for manufacturing deviations.
Welded joints instead of seals ought to be
used for fluid lines; high-pressure margins
should be allowed in tanks, hydraulic
lines, and plumbing; and 100 percent inspection, rather than spot-checking, must
be applied to all welds.
Other preventive measures for liquid
engines include the application of redundancy in fluid valves and igniters; the
utilization of effective liquid film cooling
or ceramic coatings to increase thrust-chamber durability; and the application of advanced high-strength (aluminum-magnesium) welding and milling for the
construction of thin-walled fuel and oxidizer tanks. Helium purging (for cryogenic
propellants) or nitrogen purging (for storable propellants) of oxidizer/fuel pumps
and pipelines needs to be done before engine start-up, and purging of the chamber
cooling duct should be done at engine
shutdown, to provide a clean flow duct and
to avoid the danger of fire or explosion.
Testing liquid engines is also important.
They should be qualified at above the
maximum operating environment, conditions, and duration. And extensive tests on
engine operation ought to be conducted
under various conditions after transportation of the engine, since transporting an
engine subjects it to a harsh environment
that can alter its operation.
Enhancing Launch Reliability
Information gathered from failure studies
of past launch vehicles indicates that following certain work practices could
greatly enhance the reliability of launch
vehicle systems. Of primary importance is
the need to review and implement all lessons learned from past failure studies to
avoid failure recurrences. It is necessary to
incorporate preventive measures in all aspects of system development—design,
building, testing, and operations. Launch
vehicles should be designed for low cost
in manufacturing, operations, materials,
and processes rather than for maximum
performance or minimum weight. Comprehensive design analyses should be conducted, with positive margins.
In the manufacturing phase, only flight-proven and defect-free components should
be used. Advanced electronic beam welding, automation, and robotics should be
applied for quality component manufacturing. Stringent control of raw materials,
components, and semifinished products
ought to be practiced. Multistring/redundant avionics, electrical, and ordnance components should be implemented
for fault tolerance. Pyro-shock levels
ought to be reduced whenever possible.
Testing is a critical area for reliability
enhancement. It is important for a design
to be validated by testing components to
the point of destruction or with a high
enough margin to allow for manufacturing
and operating environment variances, like
the successful design margin testing performed on ballistic missiles. Electrical and
pneumatic connection tests should be performed for each stage and between the
stages before vehicle assembly.
Components, software, and system-level electrical elements need to be tested
under conditions that simulate an actual
launch; system performance and flight
simulation tests should be conducted; the
results of testing during the development
phase should be analyzed, and measures
taken to improve product reliability. The separation mechanism function should be confirmed with a full-size dummy booster. Hardware reworks should be minimized, and inspection testing should be tailored for specific reworks.

When the system is operational, it is important to limit space launch operation to the design environment and flight experience. Prelaunch procedures and launch processes should be simplified to reduce contamination and damage in handling and processing. Launch-management training needs to be improved where possible. Finally, technical risks associated with schedule-driven launch dates should be reduced.

Conclusion

The technologies that have been developed for space applications and their spinoffs have dramatically improved human life, and they will no doubt continue to do so. Global high-speed telecommunication, videoconferencing, and Internet applications require many satellites, which means the need for launch services will continue to grow.

In times of conflict as well as peacetime, space technology is of critical importance to the nation. Just as, more than half a century ago, the air advantage of the Allied Forces contributed significantly to the course of World War II, so in the future, space technology will have the ability to influence conflicts. The Falkland Islands War of 1982 and the Persian Gulf War of 1991 are two examples of how the intelligent utilization of space resources affects outcome.

In coming years, needs for commercial and national defense space-related technologies are expected to multiply in many areas: propulsion, guidance and control, communications, navigation, tracking and data relay, weather forecast, remote sensing, surveillance, reconnaissance, early warning and missile defense, and interplanetary exploration. The demand for space launch services is ever increasing and may soon exceed the U.S. government’s defense budget.

As more launches are conducted, more possibilities for failure will present themselves. The need for developing reliable launch systems will be ongoing. It is clear that any lessons learned from past failures are worth judiciously implementing if doing so can preclude future ones.
Solid and Liquid Rockets
Whether or not they know it as Newton’s Third Law, most people have
probably heard the statement “For
every action there is an equal and opposite reaction.” That expression is the
principle behind rocketry. The action is
the expulsion of gas through an opening; the reaction is the rocket’s liftoff.
It’s not unlike what happens when you
blow into a balloon, then release it. As
air escapes (that’s the action), it propels the balloon (that’s the reaction),
making the balloon zip through the
room until its air is completely gone.
In order to create a forceful expulsion of gas from a rocket, fuel in a
combustion chamber is ignited. The
fuel can be in the form of solid or liquid
substances; some rockets (“hybrid
launch systems”) may make use of
both. These substances are the propellants that characterize rockets as
either “solid-rocket motors” or “liquid-rocket engines.” For the fuel to burn,
oxygen or another oxidizing substance
must be present. When the fuel burns,
gases accumulate and pressure builds
until the gases are expelled through
an exhaust nozzle.
In solid-rocket motors, the fuel and
oxidizing chemicals are suspended in
a solid binder. Solid motors are used
as boosters for launch vehicles (such
as the space shuttle and the Delta series). They’re very reliable, and they’re
simpler than liquid engines, but they’re
difficult to control. Once a solid rocket
is ignited, all its fuel will be consumed,
without any option for adjusting thrust
(force).
Liquid-rocket engines are more
powerful than solid-rocket motors—
they can generate more thrust—but the
price of their power is complexity, in the
form of many pipes, pumps, valves,
gauges, and other parts. Liquid rockets
require attention to storage issues and
oftentimes the need to maintain very
cold temperatures. Their complexity affects vehicle reliability, if only because
the introduction of more components
means the introduction of more opportunities for problems to occur.
Further Reading
I-S. Chang, “Investigation of Space Launch Vehicle Catastrophic Failures,” AIAA Journal of
Spacecraft and Rockets, Vol. 33, No. 2,
198–205 (March–April 1996).
I-S. Chang, “Overview of World Space
Launches,” AIAA Journal of Propulsion and
Power, Vol. 16, No. 5 (September–October
2000).
I-S. Chang, S. Toda, and S. Kibe, “Chinese
Space Launch Failures,” ISTS paper 2000-g08, 22nd International Symposium on Space
Technology and Science (Morioka, Japan, May
31, 2000).
I-S. Chang, S. Toda, and S. Kibe, “European Space Launch Failures,” AIAA paper 2000-3574, 36th AIAA Joint Propulsion Conference
(Huntsville, AL, July 17–19, 2000).
R. Esnault-Pelterie, L’exploration par fusées de la très haute atmosphère et la possibilité des voyages interplanétaires (Rocket Exploration of the Very High Atmosphere and the Possibility of Interplanetary Travel) (Société Astronomique de France, Paris, 1928).
R. H. Goddard, “A Method of Reaching Extreme Altitudes,” Publication 2540, Smithsonian Miscellaneous Collections, Vol. 71, No. 2
(Smithsonian Institution, Washington, D.C.,
1919). Also in The Papers of Robert H. Goddard, Vol. 1, Eds. Esther C. Goddard and G.
Edward Pendray (McGraw-Hill, New York,
1970).
S. J. Isakowitz, J. P. Hopkins, Jr., and J. B. Hopkins, International Reference Guide to Space
Launch Systems, 3rd ed. (AIAA Publications,
Washington, D.C., 1999).
M. J. Neufeld, The Rocket and the Reich (Harvard University Press, Cambridge, MA, 1995).
H. Oberth, Die Rakete zu den Planetenräumen (The Rocket into Interplanetary Space) (R. Oldenbourg, Munich, Germany, 1923; reprinted by Reproduktionsdruck von Uni-Verlag Feucht, Nürnberg, 1984).
“Reliability Design and Verification for
Launch-Vehicle Propulsion Systems,” AIAA
Workshop on Reliability of Launch-Vehicle
Propulsion Systems (Washington, D.C., July
25, 1989).
K. E. Tsiolkovsky, A Rocket into Cosmic Space
(published in 1903 and reprinted in Sobranie
Sochinenie, Moscow: Izd. Akademii Nauk,
USSR, 1951). Translated into English in Works
on Rocket Technology (NASA, Washington,
D.C., 1965).
The forces that drive the costs of today’s small satellites are very different from the forces that drive the costs of all other satellites. NASA and DOD needed a new model to gauge small-satellite costs—and The Aerospace Corporation created one.
Small-Satellite Costs
David A. Bearden
Highly capable small satellites are
commonplace today, but this wasn’t always the case. It wasn’t until
the late 1980s that modern small
satellites came on the scene. This new
breed of low-profile, low-cost space system
was built by maximizing the use of existing
components and off-the-shelf technology
and minimizing developmental efforts. At
the time, many thought that because of
their functional and operational characteristics and their low acquisition costs, these
small systems would become more prevalent than the larger systems built during the
previous 30 years.
But exactly which spacecraft fell into the
new category? A precise description of
small satellites, or “lightsats,” as they were
also called, was lacking in the space literature of the day. The terms meant different
things to different people. Some established a mass threshold (e.g., 500 kilograms) to indicate when a satellite was
small; others used cost as a criterion; still
others used size. Even scarcer than good
descriptions of small satellites, however,
were guidelines for cost estimation of small-satellite projects. Clearly, a more useful definition of small space systems was needed.
By the 1990s, because of increased interest in small satellites for military, commercial, and academic research applications, the Air Force Space and Missile
Systems Center (SMC) and the National
Reconnaissance Office (NRO) asked The
Aerospace Corporation for information about
capabilities and costs of such systems. In response, Aerospace commissioned a study to
compare cost and performance characteristics of small satellites with those of larger,
traditional systems. Of specific interest was
the ability to examine tradeoffs between
cost and risk to allow assessment of how traditional risk-management philosophies
might be affected by the adoption of small-satellite designs.

[Figure: Spacecraft bus cost (FY97 dollars in millions) versus spacecraft bus dry mass (kilograms) for DOD large satellites, modern DOD small satellites, and traditional DOD small satellites.] Dollars-per-kilogram comparison of DOD large satellites (500 dollars per kilogram), modern small satellites (100 dollars per kilogram), and traditional small satellites (150 dollars per kilogram). Data points for these three categories cluster differently, and regression analysis shows that each set of points determines a different cost-estimating relationship. This information confirms the need for a new model using contemporary small satellites as its basis.
Estimating costs for small systems
raised many questions. What parameters
drove the cost of small satellites? Were traditional parameters known to drive the cost
of large systems still applicable? How did
small systems compare with large ones?
Did small-satellite acquisition philosophies, which prompted reductions in levels
of oversight, independent reviews, and paperwork, enable a reduction in cost-per-unit
capability? What advantages might small
satellites offer for rapid incorporation of new
technologies? Could they help reduce the
long development cycle for military space
programs? Were small satellites really economical for operational applications, such as
navigation and communication?
These questions led to a series of studies
on technical and economic issues involved
in designing, manufacturing, and operating
small satellites. The studies found that existing spacecraft cost models, developed
during the previous 30 years to support the
National Aeronautics and Space Administration (NASA) and the Department of Defense (DOD), were of limited utility because
of fundamental differences in technical
characteristics and acquisition and development philosophies between small-satellite
and traditional-satellite programs.
This finding prompted NASA and DOD
to seek cost-analysis methods and models
specifically tailored to small-satellite programs. To meet this need, Aerospace eventually developed the Small Satellite Cost
Model, a small-satellite trade-study software tool that captures cost, performance,
and risk information within a single framework. Before looking at the development
of Aerospace’s trade-study tool, though, it
will be valuable to backtrack to the late
1980s and review exactly how small-spacecraft programs had been perceived.

[Figure: Total satellite cost (FY98 dollars in millions) versus satellite launch mass (kilograms) for traditional NASA missions and NASA faster-better-cheaper missions.] This graph compares the dollars-per-kilogram ratio for traditional NASA missions (900 dollars per kilogram) with the ratio as noted in NASA’s faster-better-cheaper missions (120 dollars per kilogram). It’s clear that the different sets of data points determine markedly different cost-estimating regimes.
Streamlined Development Activities
In the 1980s, the DOD Advanced Research
Projects Agency and the United States Air
Force Space Test Program served as the primary sources of funding for small satellites,
which typically were used for technology
experiments. The Space Test Program coordinated experimental payload flights for
the Army, Navy, Air Force, and other government agencies. Reduced development
complexity and required launch-vehicle
size enabled affordable, frequent access to
space for research applications. Relatively
low acquisition costs and short development schedules also allowed university laboratories to participate, providing individual researchers access to space—a privilege
previously reserved only for well-funded
government organizations.
Small satellites were procured under a
specifically defined “low cost” philosophy.
They were smaller in size and were built
with maximum use of existing hardware. A
smaller business base (i.e., a reduced number of participating contractors) was involved in the development process, and it
was perceived that small satellites represented a niche market relative to the more
prevalent large systems. Mission timelines
from approval to launch were on the order
of 24 to 48 months, with an on-orbit life of
6 to 18 months. Launch costs, either for an
existing dedicated small launcher or for a
secondary payload on a large launcher, remained high, but developments such as the
Pegasus air-launched vehicle and new
small launchers (such as Taurus and
Athena) offered promise of lowering these
costs. Additionally, small-satellite flight
and ground systems typically used the
most mature hardware and software available to minimize technology-development
and flight-certification costs.
Emerging advances in microelectronics,
software, and lightweight components enabled system-level downsizing. Spacecraft
often cost more than $200 thousand per
kilogram and could reach $1 million per kilogram with delivery-to-space costs included. With miniaturization, every kilogram saved in the spacecraft bus or instruments represented a possible saving of up to five kilograms in launch, onboard propulsion, and attitude-control systems mass. Reduced power demands from microelectronics, instruments, and sensors could produce similar payoffs. For interplanetary missions, reduced mass had the capability to produce indirect cost savings through shorter transit times and mission duration. All this downsizing eliminated the need for large facilities and costly equipment such as high bays, clean-room areas, test facilities, and special handling equipment and containers.

Engineering development units—prototypes built before the actual construction of flight hardware—were not built; instead a protoflight approach was favored, where a single unit served as both the engineering model and the actual flight item. Quality parts were used where possible, but strict adherence to rigid military specifications was avoided. Redundancy—the use of multiple components for backup in the event the primary component fails—was also avoided in favor of simpler designs. Designers relied on multifunctional subsystems and software to allow operational work-arounds or alternate performance modes that could provide functionality if something went wrong during a mission.

As a result of these unorthodox approaches that sought ways to save time and
money, small-spacecraft programs came to be perceived as fast-paced, streamlined development activities. Dedicated project leaders with small teams were given full technical and budgetary responsibility with goals tailored around what could be done inexpensively on a short schedule. Fixed-price contracts became the norm, and requirement changes (and associated budgetary adjustments) were held to a minimum.

[Figure: Cost-model estimate as a percentage of actual cost for Microsat, LOSAT-X, ALEXIS, STEP 0, STEP 1, RADCAL, SAMPEX, Clementine, and NEAR.] A cost-percentage comparison that makes use of an older model and the updated dollars-per-kilogram relationships shown in previous graphs to estimate modern small-satellite costs. Each bar’s height represents the percentage difference between a satellite’s estimated cost and its actual cost. Thus for Clementine, with a percentage of 109%, the older model’s estimate was twice the actual cost, and for RADCAL, with a percentage of 801%, the older model’s estimated cost was nine times the actual cost. Because the estimates far outweighed the real cost in many cases, the chart illustrates the inadequacy of a traditional cost model for modern small satellites.
The Next Decade
With the advent of the 1990s came a movement toward realizing routine access to
space. The development of a broad array of
expendable launch vehicles provided increased access to orbit for many different
kinds of payloads. Satellite programs attempted to incorporate advanced technology and demonstrate that fast development
cycles, low acquisition costs, and small
flexible project teams could produce highly
useful smaller spacecraft. This different
paradigm opened up new classes of space
applications, notably in Earth science,
commercial mobile-communications, and
remote-sensing arenas.
[Figure: System-level cost-estimating relationships, plotting spacecraft cost (FY94 dollars in millions) against single cost drivers. Cost vs. pointing accuracy: y = 1.67 + 12.98x^-0.5 (16 data points; minimum 0.25 degree). Cost vs. satellite dry mass: y = 0.7 + 0.023x^1.26 (20 data points; range 20–210 kilograms; standard error 3.3 $M). Cost vs. end-of-life power: y = 0.507 + 1.55x^0.452 (14 data points; range 5–440 watts; standard error 6.2 $M). Labeled data points include HETE and MACSAT.] System-level cost-estimating relationships that were developed for early versions of the Small Satellite Cost Model. The first cost-estimating relationships related total spacecraft bus cost to individual parameters such as mass, power, or pointing accuracy. These were the early predecessors of today’s more sophisticated cost model that represents costs at the subsystem level utilizing a variety of cost drivers.
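A short sketch of how such single-parameter relationships are used follows: each equation maps one driver to spacecraft bus cost in FY94 $M. The equation forms are taken from the figure above; the example inputs are invented, and estimates outside each relationship’s stated data range would be extrapolation.

    # Single-parameter CERs from the figure (spacecraft bus cost, FY94 $M).
    def cost_vs_pointing_accuracy(degrees):   # 16 data points, min 0.25 deg
        return 1.67 + 12.98 * degrees ** -0.5
    def cost_vs_dry_mass(kilograms):          # 20 data points, 20-210 kg
        return 0.7 + 0.023 * kilograms ** 1.26
    def cost_vs_eol_power(watts):             # 14 data points, 5-440 W
        return 0.507 + 1.55 * watts ** 0.452

    # Illustrative inputs: 0.5-degree pointing, 100-kg bus, 150-W end-of-life power.
    print(cost_vs_pointing_accuracy(0.5), cost_vs_dry_mass(100), cost_vs_eol_power(150))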
[Diagram: The cost-modeling process]

1. Define problem
2. Examine existing cost models
   • Determine utility
   • Determine adaptability
3. Collect cost and technical data
   • Identify potential cost drivers
   • Normalize cost data
4. Do parametric analysis
   • Perform regression
   • Identify correlation
   • Consider multiple parameters
5. Develop cost model
   • Perform parametric weighting
   • Perform statistical analysis
6. Validate cost model
7. Compare with known costs
8. Apply model
   • Use in trade study
   • Use in cost analysis
   • Consider sensitivities
9. Deliver model to user community

This is an ongoing iterative process that involves collecting data and performing regression analysis to arrive at cost-estimating relationships. The data are validated against actual program costs. The model is delivered to the users for trade analyses.
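As an illustration of step 4, the sketch below fits a power-law cost-estimating relationship of the form cost = a * x^b by ordinary least squares in log-log space. The data points are invented for illustration and are not drawn from the Aerospace database.

    import math

    # Fit cost = a * x^b by linear regression on (log x, log cost).
    def fit_power_law(drivers, costs):
        xs = [math.log(x) for x in drivers]
        ys = [math.log(c) for c in costs]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        a = math.exp(my - b * mx)
        return a, b

    # Illustrative bus dry masses (kg) and costs (FY94 $M).
    a, b = fit_power_law([25, 60, 110, 180, 210], [1.9, 3.4, 6.1, 9.8, 11.5])
    print("cost = %.3f * mass^%.2f" % (a, b))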
Small-spacecraft designers, in their quest
to reduce costs through use of off-the-shelf
technology, in many cases pioneered the
use of microcircuitry and other miniaturized
devices in space. Whereas small satellites
had been unstabilized, battery-powered,
single-frequency, store-and-forward spacecraft with limited applicability to operational space endeavors, the level of functionality achievable in small spacecraft took
a dramatic leap forward in the early 1990s,
mainly because of the availability of increased space-compatible computational
power and memory. These advances led to
the current rich suite of spacecraft bus capabilities and the large array of missions using
small spacecraft.
The trend toward cost reduction and
streamlined management continued to gain
momentum with increased interest in small
spacecraft from NASA and DOD. A shift
in philosophy, where a greater tolerance for
risk was assumed, was evident in programs
like the NASA-sponsored Small and
Medium Explorer Programs, the Ballistic
Missile Defense Organization-sponsored
Clementine, DOD-sponsored Space Test
Experiment Platforms, and the Small
Satellite Technology Initiative’s Lewis and
Clark, among others. The end of the Cold
War (in 1991) and the drive toward reduced
development and launch costs created a political and budgetary imperative where
small satellites were viewed as one of the
few vehicles available for science and technology missions.
In response to budget pressures and in the
wake of several highly publicized lost or
impaired billion-dollar missions, NASA’s
administrator Dan Goldin in 1992 embraced small spacecraft and promoted the
notion of a “faster-better-cheaper” approach
for many of NASA’s missions. The programs implemented as a result of this tactic
dictated faster and cheaper by specifying
launch year and imposing a firm funding
cap. These constraints laid the groundwork
for what would become a decade of ongoing controversy about the definition and
success of faster-better-cheaper.
The Need for a New Model
It was against this backdrop that Aerospace
began collecting a large body of information concerning technologies and programmanagement techniques that affected
small-satellite cost-effectiveness. The programmatic aspects of traditional satellite
programs (e.g., long schedules, large
amounts of documentation, rigorous procedures, and risk abatement) were known to
dramatically affect cost. In particular, two
distinct but interrelated factors drove the
cost of the system: what was built and how
it was procured. In many cases, how the
system was procured appeared to be as important as what was procured because cost
and schedule were dependent on the acquisition environment.
A study that compared spacecraft mass
versus cost for traditional small spacecraft
of the 1960s and 1970s, traditional large
spacecraft of the 1970s and 1980s, and
modern (post-1990) small spacecraft revealed two important messages. First, the
modern small spacecraft differed dramatically from traditional large spacecraft as
well as their similarly sized cousins of the
past. It was postulated that the latter difference, as evidenced by cost reduction, was
the result of a combination of new business
approaches and advanced technology. Second, cost and spacecraft sizing models
based on systems or technologies for traditional spacecraft were inappropriate for assessing modern small satellites.
This was an understandable departure
from traditional-spacecraft cost trends.
New developments in technology are often
based on empirical models that characterize historical trends, with the assumption
that future missions will to some degree reflect these trends. However, in cases where
major technology advancements are realized or where fundamental paradigms
shift, assumptions based on traditional approaches may not apply. It became clear
that estimating small-system costs was one
such case.
Early small-satellite studies showed that
cost-reduction measures applied to small-satellite programs resulted in system costs
substantially lower than those estimated
by traditional (primarily mass-based)
parametric cost-estimating relationships
(equations that predict cost as a function
of one or more drivers). The studies analyzed the applicability of available cost
models such as the Air Force Unmanned
Spacecraft Cost Model and the Aerospace
Satellite Cost Model to predict costs of
small satellites.
These cost models—based on historical
costs and technical parameters of traditional large satellites developed primarily
for military space programs—were found
inappropriate for cost analyses of small-satellite programs. It became readily apparent in comparing actual costs against costs
estimated by these models that a new
model, dedicated to this new class of mission, was needed. Credible parametric cost
estimates for small-satellite systems required new cost-estimating relationships
derived from a cost and technical database
of modern small satellites.
[Figure: Three-dimensional cost-estimating relationships (CERs). Later versions of the Small Satellite Cost Model used multiparameter cost-estimating relationships derived at the subsystem level; emphasis was placed on a combination of mass- and performance-based cost drivers. The surfaces relate subsystem cost (FY97 dollars in thousands) to drivers such as structures mass (kilograms), beginning-of-life power (watts), and solar-array area (square meters); panels include a structures subsystem CER (composite) and a telemetry, tracking, communications/command and data handling (TT&C/C&DH) subsystem CER (S-band).]
The Making of a Model
Developing a small-satellite cost model
that related technical parameters and physical characteristics to cost soon became the
primary objective of small-satellite studies.
To accomplish this, a broad study of small
satellites was performed, with emphasis on
the following tasks:
• definition of small satellite and identification of small-satellite programs
• collection of small-satellite cost and
technical data from the Air Force,
NASA, and university, government laboratory, and industry sources
• examination of cost-reduction techniques used by small-satellite contractors and sponsors
• performance of parametric analysis to
determine which factors should be used
in the derivation of cost-estimating relationships by using best-fit regressions
on data where cost correlation is evident
• development and validation of a cost
model with parametrics and statistics;
evaluation of the cost model by performance of cost and cost-sensitivity
analyses on small-satellite systems under development
• creation of a corporate knowledge base
of ongoing small-satellite activities and
capabilities, technology-insertion opportunities, and project histories for lessons
learned, systems studies, etc.
• maintenance of a corporate presence in
the small-satellite community to advise
customers about relevant developments
• development of a cadre of people with
expertise and tools for continued studies
of the applicability of small satellites to
military, civil, and commercial missions
The cost-modeling process entailed aggressive data acquisition through collaboration with organizations responsible for
developing small satellites. One unanticipated challenge was actually gaining
access to cost data. Small-satellite contractors, in their quest to reduce costs, would
often not be contractually bound to deliver
detailed cost data, so in many cases costs
were not available. Despite this difficulty,
Aerospace collected data over a period of
two to three years for about 20 small-satellite programs at the system level (i.e.,
total spacecraft or spacecraft bus costs
only). From this initial database, analysts
derived parametric costing relationships as
a function of performance measures and
physical characteristics. The model estimated protoflight development costs and
cost sensitivities to individual parameters
at the system level.
The model was of great value in instances where evaluations needed to be performed on varying proposals with differing
degrees of detail or when limited information was available, as is often the case in an
early concept-development phase.
The Second-Generation Cost Model
While initial system-level small-satellite
studies were sponsored by DOD and internal Aerospace research and development,
in 1995, the need to respond to increasingly frequent questions about NASA-sponsored small-satellite architectures and
a need for refined small-satellite system
analysis at the subsystem level prompted
NASA to seek better cost-analysis methods
and models specifically tailored to small-satellite programs. Consequently, NASA
asked Aerospace to gather information
regarding capabilities and costs of small
satellites and to develop a set of subsystem-level small-satellite cost-estimating
relationships.
To allow assessment of a complete
spacecraft bus cost, Aerospace collected
more data in order to be able to derive cost-estimating relationships for each of the
spacecraft bus subsystems:
• attitude determination and control
• propulsion
• electrical power supply
• telemetry, tracking, and command
• command and data handling
• structure, adapter
• thermal control

Small-Satellite Database

NASA Small Planetary Satellites

Program           Sponsor     Spacecraft Contractor  Launch Mass (kg)  Launch Date  Launch Vehicle  Mission
Clementine        BMDO/NASA   NRL                    494               Jan 94       Titan II        Lunar mapping
NEAR              NASA        JHU/APL                805               Feb 96       Delta II        Asteroid mapping
MGS               NASA        Lockheed Martin        651               Nov 96       Delta II        Mars mapping
Mars Pathfinder   NASA        JPL                    890               Dec 96       Delta II        Mars lander and rover
ACE               NASA        JHU/APL                785               Aug 97       Delta II        Low energy particle
Lunar Prospector  NASA        Lockheed Martin        296               Jan 98       Athena II       Lunar science
DS1               NASA        JPL/Spectrum Astro     486               Oct 98       Delta II        Technology demo
MCO               NASA        JPL/Lockheed Martin    629               Dec 98       Delta II        Mars remote sensing
MPL               NASA        JPL/Lockheed Martin    576               Jan 99       Delta II        Mars science
Stardust          NASA        JPL/Lockheed Martin    385               Feb 99       Delta II        Comet sample return

[The table continues with the same columns for 19 NASA Earth-orbiting small satellites (SAMPEX, MICROLAB, METEOR, TOMS-EP, FAST, HETE, SAC-B, Seastar, SNOE, TRACE, Clark, Lewis, SWAS, WIRE, TERRIERS, FUSE, QuikSCAT, ACRIMSat, and IMAGE), launched between July 1992 and March 2000 (Clark was cancelled) on Scout, Pegasus, Pegasus XL, Conestoga, Athena I, Delta II, Titan II, and Taurus vehicles, and for 27 other U.S.-built small satellites (GLOMR I, PEGSAT, POGS/SSR, SCE, TEX, MACSAT, REX, LOSAT-X, MICROSAT, MSTI-1, ALEXIS, RADCAL, DARPASAT, STEP 0, MSTI-2, STEP 2, STEP 1, APEX, STEP 3, REX II, MSTI-3, FORTE, STEP 4, GFO, MIGHTYSAT, STEX, and TSX-5), sponsored by DARPA, STP, ONR, SDIO/BMDO, DOE, NRL, the NRO, and the U.S. Navy and launched between November 1985 and June 2000.]
Emphasis was placed on obtaining data on spacecraft bus subsystem characteristics. In addition to technical data, costs in the areas of spacecraft integration, assembly and test, program management and systems engineering, and launch and orbital operations were requested.

To gather information on the state of the industry as a whole, as well as specific data, analysts surveyed and interviewed contractors who build small satellites or provide small-satellite facilities (e.g., components, launchers). A cost and technical survey sheet was distributed to virtually every organization and contractor in the small-satellite industry. It was important to obtain information about mass, power, performance, and other technical characteristics because the development of credible subsystem-level cost analyses of small-satellite missions depends on the analyst’s ability to relate cost to those characteristics. Programs either already completed or awaiting launch in the next year were targeted.

Because Aerospace operates a federally funded research and development center, it was in a unique position to receive proprietary data from private companies and enter it into a special-purpose database to support government space-system acquisition goals and provide value added to the industry as a whole.
Proprietary information delivered to the corporation was treated in a restricted manner, used only for the purpose intended, and not released to organizations, agencies, or individuals not associated with the study team. The information was used exclusively for analysis purposes directly related to cost-model development. Only derived information depicted in a generalized manner was released, and the database itself has remained proprietary. In some cases, formal nondisclosure agreements between the companies and Aerospace were necessary to facilitate delivery of proprietary data.

[Screen shots: SSCM98, the Small Satellite Cost Model, developed for NASA Headquarters by D. A. Bearden, N. Y. Lao, T. J. Mosher, and J. J. Muhle, © 1998 The Aerospace Corporation. An input sheet collects cost drivers (number of major contractors, mission orbit, apogee, design life, satellite wet and dry mass, stabilization type, pointing accuracy and knowledge, subsystem masses, number of thrusters, fuel type, beginning- and end-of-life power, solar-array cell and mounting type, battery capacity and type, communications band, downlink data rate, transmitter power, and structures material) and flags entries outside the valid data range. An output sheet shows a cost-probability distribution (mean 38,864 FY97$K, standard error 6,131) with percentiles of cost from 31,401 FY97$K at the 10th percentile to 49,684 FY97$K at the 95th.] These selected screen shots from the Small Satellite Cost Model demonstrate the parametric cost-estimating model. The model is easy to work with and provides useful outputs such as cost probability distributions.
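A rough sketch of how a cost-probability distribution like the one in the screen shots can be generated: sample each subsystem estimate around its CER value with its standard error, sum the samples, and read percentiles from the sorted totals. The subsystem means and errors below are invented for illustration and are not SSCM values.

    import random

    def cost_percentiles(subsystem_estimates, n_trials=20000, seed=1):
        # subsystem_estimates: (mean, standard error) pairs in FY97$K.
        random.seed(seed)
        totals = sorted(
            sum(random.gauss(mean, err) for mean, err in subsystem_estimates)
            for _ in range(n_trials)
        )
        return {p: round(totals[int(p / 100 * n_trials)]) for p in (10, 50, 95)}

    # Illustrative subsystem (mean, standard error) estimates:
    print(cost_percentiles([(9000, 1800), (14000, 3500), (8000, 2100), (7800, 1500)]))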
After properly categorizing cost data,
adjusting it for inflation, and breaking it
out on a subsystem basis, analysts developed cost-estimating relationships for each
of the subsystems, using a subset of the
more than 70 technical parameters collected on each of the small satellites. The
effort to develop a cost-estimating relationship for a small-satellite subsystem took
full advantage of advanced developments
in regression techniques. Choosing the
proper drivers involved combining a
knowledge of statistics, sound engineering
judgment, and common sense. Graphics
software tools assisted in the development
of these cost-estimating relationships,
enabling the analyst to view the shape of a
function against its data points and to identify the function (whether linear, logarithmic, exponential, or some other form).
The end product was a set of subsystem-level bus-related cost-estimating relationships based entirely on actual cost, physical, and performance parameters of 15
modern small satellites. This was a major
advancement over available tools for estimating small-satellite costs. Analysts also
developed factors to use in estimating both
recurring and nonrecurring costs of bus
subsystems, to enable studies of multiple
builds—such as the ones that are needed
for constellations of small satellites. The
cost-estimating relationships enabled the
inclusion of cost as a variable in system design tools. They were also incorporated
into a stand-alone, menu-driven computerized model that could be distributed to government organizations and private companies that contributed data.
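For multiple-build studies of this kind, a common construction, sketched below with invented numbers, splits an estimate into nonrecurring and recurring parts and applies a learning-curve discount to the recurring cost of successive units; the 90 percent learning slope here is an assumption, not a published SSCM factor.

    import math

    # Multiple-build sketch: nonrecurring cost once, recurring cost per unit
    # discounted by a learning curve (illustrative numbers throughout).
    def constellation_cost(nonrecurring, recurring_first_unit, n_units, slope=0.90):
        b = math.log(slope, 2)  # learning-curve exponent for a 90% slope
        units_factor = sum(unit ** b for unit in range(1, n_units + 1))
        return nonrecurring + recurring_first_unit * units_factor

    # Example: $30M nonrecurring, $12M first unit, 8-satellite constellation.
    print(round(constellation_cost(30.0, 12.0, 8), 1))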
Cost Model Leaves Earth Orbit
In 1996, NASA was moving to smaller
platforms for planetary exploration. This
movement afforded an important application for the Small Satellite Cost Model.
Following well-publicized problems with
the Galileo and Mars Observer spacecraft,
there had emerged in the early 1990s a
growing apprehension in the NASA planetary science community that opportunities
for planetary science data return were
dwindling. After Galileo was launched in
1989, the next planetary mission scheduled
was Cassini, which would launch in October 1997 and begin returning data in 2003,
a full six years after Galileo had stopped
sending data. Since a steady stream of new
data is important to maintaining a vigorous
program of planetary and scientific investigation, the situation was naturally a cause
for concern. Out of this concern emerged a
new NASA small-spacecraft program
called Discovery.
Missions Evaluated by the Small Satellite Cost Model
Launched in February 1999, Stardust is journeying
to the comet Wild-2. It will arrive in 2004 and, during
a slow flyby, will collect samples of dust and gas in a
low-density material called aerogel. The samples will
be returned to Earth for analysis in 2006. Stardust
will also photograph the comet and do chemical
analysis of particles and gases. The JPL/Lockheed
Martin-built spacecraft is one of NASA’s Discovery
Program missions. It is the first NASA mission dedicated to exploring a comet and the first U.S. mission
launched to robotically obtain samples in deep space.
NEAR was launched in February 1996. Its objective
was to orbit the asteroid Eros for one year starting
in January 1999, collecting scientific data. Developed in 29 months at JHU/APL, NEAR was part of
the NASA Discovery Program. Its payload was
composed of a multispectral imager, a laser
rangefinder, an X-ray/gamma-ray spectrometer,
and a magnetometer. A software-error-induced
burn abort that occurred in December 1998
resulted in delaying the rendezvous and subsequent data acquisition until February 2000.
The Lunar Prospector, a NASA-sponsored lunar polar orbiting probe developed by Lockheed
Martin, was launched aboard Athena II in January 1998. Its primary mission was to map the
moon’s chemical, gravitational, and magnetic
properties. Data from instruments, including a
gamma-ray spectrometer, a neutron spectrometer, an alpha particle spectrometer, a magnetometer, and an electron reflectometer, were
used to construct a map of the surface composition of the moon.
Illustrations courtesy of NASA
The Discovery program’s primary goal
was to conduct frequent, highly focused,
cost-effective missions to answer critical
questions in solar-system science. Formally started under NASA’s fiscal-year
1994 budget, the Discovery program featured small planetary exploration spacecraft—with focused science goals—that
could be built in 36 months or less and
would cost less than $150 million (fiscal
year 1992), not including the cost of the
launch vehicle.
To apply its cost model to this new domain, Aerospace performed, in collaboration with Johns Hopkins University’s Applied Physics Laboratory (JHU/APL), a
cost-risk assessment of the Near Earth Asteroid Rendezvous (NEAR) mission. This
mission, one of NASA’s first two Discovery
missions, was designed to leave Earth orbit
on a trajectory to the near-Earth asteroid
Eros. The study identified a number of limitations in applying the Small Satellite Cost
Model to interplanetary missions. Out of
this information came a concerted effort to
gather data on small interplanetary missions
to enhance the model. Analysts collected
data on missions such as Mars Pathfinder,
Lunar Prospector, Clementine, and Stardust, developing cost-estimating relationships appropriate to a Discovery-class mission. Less than a year later the model was
again applied successfully to the Near Earth
Asteroid Rendezvous spacecraft, demonstrating cost estimates within a few percent
of the actual costs.
Once Aerospace demonstrated the ability to assess small interplanetary mission
costs, NASA’s Langley Research Center
Office of Space Science asked the corporation to participate in the Discovery mission
evaluation process. Aerospace evaluated 34
Discovery proposals submitted by government, industry, and university teams. These
proposals included a wide variety of payloads (including rovers, probes, and penetrators)—more than 120 in all. The goals
were to provide independent cost estimates
for each proposal, identify cost-risk areas,
determine cost-risk level (low, medium, or
high) for each proposal, and evaluate proposals in an efficient and equitable manner.
Five finalists were selected.
In 1997, as a follow-on to the successful
Discovery mission evaluation, the NASA
Office of Space Science asked Aerospace
to assist in the selection of Small Explorer
missions. This was a series of small, low-cost interplanetary and Earth-orbiting science missions designed to provide frequent investigative opportunities to the research community. Aerospace served on the Technical, Management, and Cost review panel. Fifty-two Small Explorer mission concepts were evaluated, from which two final missions were chosen.

Mars Pathfinder, the second launch in the Discovery Program, developed by JPL, consists of a cruise stage, entry vehicle, and lander. The mission of Mars Pathfinder was to test technologies in preparation for future Mars missions, as well as to collect data on the Martian atmosphere, meteorology, surface geology, and rock and soil composition. On July 4, 1997, Mars Pathfinder successfully landed on Mars and subsequently rolled out the Sojourner rover to analyze native rock composition.
NASA commended Aerospace for its
work on Discovery and Small Explorer missions. Because of the work it had done on
these programs, Aerospace was invited to
participate in a National Research Council
workshop, from which a report titled “Reducing the Costs of Space Science Research
Missions” was generated. Aerospace was
also invited to join the editorial board of a
new international peer-reviewed technical
journal (Reducing Space Mission Cost, published by Kluwer Academic Publishers) and
to become a member of the Low Cost Planetary Missions Subcommittee of the International Academy of Astronautics Committee on Small Satellite Missions.
The ACE (Advanced Composition Explorer)
spacecraft carried six high-resolution sensors,
mainly spectrometers, and three monitoring instruments. It collected samples of low-energy solar
and high-energy galactic particles and measured
conditions of solar wind flow and particle events.
An Explorer mission sponsored by the NASA
Office of Space Science and built by JHU/APL,
ACE orbits the L1 libration point, a location
900,000 miles from Earth where the gravitational
effects of Earth and the sun are balanced, to
provide near-real-time solar wind information.
SWAS (Submillimeter Wave Astronomy Satellite) was
the third NASA Small Explorer mission. It was
launched aboard a Pegasus XL rocket in December
1998. The overall goal of the mission was to understand star formation by using a passively cooled
Cassegrain telescope to determine the composition of
interstellar clouds and establish the means by which
these clouds cool as they collapse to form stars and
planets. SWAS observed water, molecular oxygen,
isotopic carbon monoxide, and atomic carbon.
Launched in October 1998, DS1 (Deep Space 1)
was a NASA New Millennium Program mission. Its
objective was to validate several technologies in
space including solar electric propulsion and autonomous navigation. Instruments on board included
a solar concentrator array and a miniature integrated
camera and imaging spectrometer. The spacecraft,
built by JPL and Spectrum Astro, was designed to
monitor solar wind and measure the interaction with
targets during flybys of an asteroid and a comet.
The primary mission objectives of the Clementine lunar
orbiter, launched in January 1994 aboard Titan IIG, were
to investigate long-term effects of the space environment
on sensors and spacecraft components and to take multispectral images of the moon and the near-Earth asteroid Geographos. The Naval Research Laboratory-built
spacecraft incorporated advanced technologies, including Lawrence Livermore National Laboratory lightweight
sensors. After Clementine completed lunar mapping, its onboard computer malfunctioned on departure from lunar orbit, resulting in depletion of its onboard fuel.
Examining the Faster-Better-Cheaper Experiment
Successful NASA programs such as Mars Pathfinder and the Near Earth Asteroid Rendezvous mission effectively debunked the myth that interplanetary missions could only be accomplished with billion-dollar budgets. They set a new standard that later missions were forced not merely to meet but to exceed. Designers were asked to meet unrelenting mission objectives within rigid cost
and schedule constraints in an environment
characterized by rapid technological improvements, immense budgetary pressure,
downsizing government, and distributed
acquisition authority.
As a result of these constraints, NASA
had greatly increased its utilization of
small spacecraft to conduct low-cost scientific investigations and technology demonstration missions. The original tenets of the
small-satellite paradigm, including low
cost, maximum use of existing components
and off-the-shelf technology, and reduced
program-management oversight and developmental effort, had been applied to increasingly ambitious endeavors with
increasingly demanding requirements. This
move had clearly benefited the scientific
community by greatly diversifying the number and frequency of science opportunities.
A number of failed small scientific
spacecraft, however, such as Small Satellite Technology Initiative’s Lewis and
Clark, and the Wide-field Infrared Experiment, fueled an ongoing debate on whether
NASA’s experiment with faster-bettercheaper missions was working. The loss of
the Mars Climate Orbiter and the Mars Polar Lander within a few months of each
other sent waves of anxiety throughout
government and industry that the recipe for
successful faster-better-cheaper missions
had been lost. Impaired missions or “near
misses,” such as the Mars Global Surveyor,
contributed to the debate as well, and many
wondered whether programs currently on
the books or late in development were too
ambitious for the time and money they had
been allotted.
At the heart of the matter was allocation
of cost and schedule. Priorities had
changed. During the last few years the traditional approach to spacecraft design,
driven by performance characteristics and
high reliability to meet mission objectives,
had completely given way to developments
dominated by cost- and schedule-related
concerns. While it was readily apparent
that the faster-better-cheaper strategy resulted in lower costs per mission and
shorter absolute development times, these
benefits may have been achieved at the expense of reduced probability of success.
Some questions lingered. When was a mission too fast and too cheap with the result
that it was prone to failure? Given a fixed
amount of time and money, what level of
performance and technology requirements
would cause a mission to stop short of failure due to unforeseen events?
Risks often do not manifest ahead of
time or in obvious ways. However, when
examined after the fact, mission failure or
impairment is often found to be the result
of mismanagement or miscommunication
in fatal combination with a series of low-probability events. These missteps, which
often occur when a program is operating
near the budget ceiling or under tremendous schedule pressure, result in failures
caused by lack of sufficient resources to
thoroughly test, simulate, or review work and processes.

Low-Complexity Spacecraft (complexity index 0–0.33)
• Small payload mass (~5–10 kilograms)
• One payload instrument
• Spin or gravity-gradient stabilized
• Body-fixed solar cells (silicon or gallium arsenide)
• Short design life (~6–12 months)
• Single-string design
• Aluminum structures
• Coarse pointing accuracy (~1–5 degrees)
• No propulsion or cold-gas system
• Low-frequency communications
• Simple helix or patch low-gain antennas
• Low data rate downlink (~1–10 kilobits per second)
• Low power requirements (~50–100 watts)
• No deployed or articulated mechanisms
• Little or no data storage
• No onboard processing (“bent-pipe”)
• Passive thermal control using coatings, insulation, etc.

High-Complexity Spacecraft (complexity index 0.67–1)
• Large payload mass (~200–500 kilograms)
• Many (5–10) payload instruments
• Three-axis stabilized using reaction wheels
• Deployed sun-tracking solar panels (multijunction cells or concentrator)
• Long design life (~3–6 years)
• Partially or fully redundant
• Composite structures
• Fine pointing accuracy (~0.01–0.1 degrees)
• Monopropellant or bipropellant system with thrusters (4–12)
• High-frequency communications
• Deployed high-gain parabolic antennas
• High data rate downlink (thousands of kilobits per second)
• High power requirements (~500–2000 watts)
• Deployed and/or articulated mechanisms
• Solid-state data recorders (up to 5 gigabytes)
• Onboard processing (up to 30 million instructions per second)
• Active thermal control using heat pipes, radiators, etc.
Having maintained an extensive historical database of programmatic information
on NASA faster-better-cheaper missions to
support the Small Satellite Cost Model development, Aerospace was well positioned
to examine the situation. With a decade of
experience and more than 40 scientific and
technology demonstration spacecraft
flown, sufficient information existed for
use in conducting an objective examination. To understand the relationship between risk, cost, and schedule, Aerospace
analyzed data for missions launched between 1990 and 2000, using technical
specifications, costs, development time,
and operational status.
The study examined the faster-better-cheaper strategy in terms of complexity
measured against development time and
cost for successful and failed missions. The
failures were categorized as partial, where
the mission was impaired in some way;
catastrophic, where the mission was lost
completely; or programmatic- or launch-related, where the mission was never realized because of cancellation or failure during launch.
A complexity index was derived from
performance, mass, power, and technology
choices, as a top-level representation of the
system for purposes of comparison. Complexity drivers (a total of 29) included subsystem technical parameters (such as mass,
power, performance, pointing accuracy,
downlink data rate, technology choices)
and a few general programmatic factors
such as heritage (reuse of a part that flew on
a previous mission) and redundancy policy.
The process used to estimate spacecraft complexity included the following steps; a brief sketch of the calculation follows the list.
• Identify parameters that drive or significantly influence spacecraft design.
• Quantify the parameters so that they can
be measured.
• Combine the parameters into an average
complexity index (expressed as a value
between zero and one).
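As a concrete illustration of these steps, the sketch below normalizes a handful of drivers to values between zero and one and averages them. It is a minimal sketch: the driver names, the min-max bounds (loosely patterned on the low- and high-complexity archetypes tabulated above), and the unweighted mean are assumptions for illustration, not the actual 29-driver formulation.

```python
# Minimal sketch of a complexity index: normalize each driver to [0, 1]
# over an assumed range, then average. Drivers and bounds are illustrative.
from typing import Dict, Tuple

DRIVER_RANGES: Dict[str, Tuple[float, float]] = {
    "payload_mass_kg": (5.0, 500.0),
    "design_life_months": (6.0, 72.0),
    "pointing_accuracy_deg": (5.0, 0.01),  # runs high-to-low: finer pointing is more complex
    "downlink_rate_kbps": (1.0, 5000.0),
    "power_w": (50.0, 2000.0),
}

def complexity_index(params: Dict[str, float]) -> float:
    """Average of each driver normalized to [0, 1] over its assumed range."""
    scores = []
    for name, value in params.items():
        lo, hi = DRIVER_RANGES[name]
        score = (value - lo) / (hi - lo)           # works for reversed ranges too
        scores.append(min(max(score, 0.0), 1.0))   # clamp to [0, 1]
    return sum(scores) / len(scores)

# Example: a hypothetical mid-sized spacecraft
print(complexity_index({
    "payload_mass_kg": 150.0,
    "design_life_months": 24.0,
    "pointing_accuracy_deg": 0.5,
    "downlink_rate_kbps": 100.0,
    "power_w": 300.0,
}))  # ~0.32
```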
To determine whether the faster-better-cheaper experiment was successful, analysts plotted a comparison of complexity
relative to development time and cost, noting failures. Some interesting trends
emerged. Correlation between complexity,
cost, and schedule was evident. A threshold, or “no-fly zone,” was apparent where
project resources (time, funds) were possibly insufficient relative to the complexity
of the undertaking. While it is unknown
whether allocation of additional resources
would have increased the probability of
success of a given mission, this much is
clear: When a mission fails or becomes impaired, it appears that it is too complex relative to its allocated resources.
Cost and schedule plotted against a complexity index derived from performance, mass, power, and technology choices, with successful, failed, and impaired missions marked (total spacecraft cost in FY00 dollars in millions and development time in months, each versus a complexity index from 0 to 1). The regression curves may be used to determine the level of complexity possible for a set budget or development time. Although the complexity index does not identify the manner or subsystem in which a failure is likely to occur, it does identify a regime by which an index calculated for a mission under consideration may be compared with missions of the recent past.

The observed correlation of cost and development time with complexity, based on actual program experience (i.e., actual costs incurred and development time required, as opposed to numbers used during the planning phase), is encouraging because the model can be applied to future systems. The index may
reveal a general risk of failure, but it won’t
necessarily specify which subsystem might
fail or how it will fail. Nevertheless, it does
identify when a new mission under consideration is in a regime occupied by failed or
successful missions of the recent past. This
process should allow more informed decisions to be made for new systems as they are conceived.
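A sketch of how such a comparison might be automated follows; the exponential regression form and its coefficients are placeholders standing in for the fitted curves, not values from the study.

```python
# Hedged sketch of a "no-fly zone" check against assumed regression curves
# of minimum historical cost and schedule versus complexity index.
import math

def min_resources(complexity: float) -> tuple[float, float]:
    """Illustrative fits: minimum cost (FY00 $M) and schedule (months)."""
    assert 0.0 <= complexity <= 1.0
    cost_musd = 10.0 * math.exp(2.5 * complexity)        # placeholder fit
    schedule_months = 20.0 * math.exp(1.5 * complexity)  # placeholder fit
    return cost_musd, schedule_months

def in_no_fly_zone(complexity: float, budget_musd: float,
                   schedule_months: float) -> bool:
    """True if proposed resources fall below the historical threshold."""
    min_cost, min_sched = min_resources(complexity)
    return budget_musd < min_cost or schedule_months < min_sched

print(in_no_fly_zone(0.6, budget_musd=40.0, schedule_months=30.0))  # True
```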
Conclusion
In summary, early small-satellite studies
showed that older cost-estimation models
based on historical costs and technical parameters of large satellite systems could
not be successfully applied to small systems. It was necessary to develop a model
that would be tailored specifically to this
new category of spacecraft. To this day,
there remains no formally agreed-upon
definition of “small spacecraft,” although
such spacecraft are typically considered to
be Discovery-class in size or less (i.e., for
interplanetary applications, they fit on a
Delta II launch vehicle; for Earth-orbiting
applications, they weigh less than 500 kilograms), and most are budgeted in the $50- to $250-million range.
Aerospace has been studying small
satellites since 1991, and the main product
of its ongoing work is the Small Satellite
Cost Model. Based on actual physical and
performance parameters of small Earth-orbiting and interplanetary spacecraft
flown during the last decade, this software
tool was developed to estimate the cost of a
small spacecraft. It has addressed many of
the questions that were originally raised
about cost estimation for small systems.
The model is used in assessment of small-satellite conceptual designs, technology
needs, and capabilities, and it is continually
updated to model state-of-the-art systems.
The Small Satellite Cost Model has developed through several generations, with
additions to the database and improvements to the cost-estimating relationships
serving as the primary drivers from version
to version. Currently, the small-satellite
database has evolved to include more than
40 programs. While initial small-satellite
studies were funded by DOD and Aerospace internal funds in the early 1990s, the
development of Small Satellite Cost Model
version 1.0 was funded by NASA in 1995.
The database and model were updated in
1996 (version 2.0), and interplanetary capabilities were added in 1998 (version 3.0
or Small Satellite Cost Model 98). The
fourth release is now in development.
A version of the model is supplied to
industry members who provide data that
can improve it. There is also a publicly
available version (see www.aero.org/software/sscm). The model is used extensively
by DOD, JPL, and virtually all of the
NASA field centers. It has become the
standard for parametric evaluation of small
missions worldwide and is used by the European Space Agency and the French Centre National d’Etudes Spatiales, among
other organizations. A number of foreign-built small spacecraft are included in the
database.
The most recent application of the
small-satellite study is the assessment of
NASA’s approach, under constrained
budgets and rigid schedules, to conduct
faster-better-cheaper scientific investigations. Recent instances of failed or impaired spacecraft have brought into question the faster-better-cheaper strategy,
especially for interplanetary applications.
While missions have been developed under
this strategy with lower costs and shorter
development times, these benefits have, in
some cases, been achieved at the expense
of increasing performance risk. Addressing
the question of when a mission becomes
“too fast and too cheap” with the result that
it is prone to failure, studies have found
that when a mission’s complexity is too
great relative to its allocated resources, it
fails or becomes impaired.
Aerospace’s Small Satellite Cost Model
has been highly successful, as evidenced by
NASA’s recognition for the model’s application in the Discovery and Small Explorer
programs. A critical component of many
small-satellite evaluation activities and a
major contribution to the space industry as
a whole, the model is a stellar example of
the value-added role Aerospace can play.
There is every reason to expect that more
success stories will be forthcoming.
The History Behind Small Satellites

The Soviet Union's October 1957 launch of Sputnik, the first satellite, stunned the world. It kicked off the “space race” between the Soviet Union and the United States, and in doing so, changed the course of history. Space would become an important setting in which nations could demonstrate political and scientific prowess. The United States responded to Sputnik in January 1958, launching Explorer I, a simple, inexpensive spacecraft built to answer basic questions about Earth and near space.

Explorer and its immediate descendants were small satellites, but only because of launch-vehicle limitations. Size and complexity of later spacecraft grew to match launch capability. Not surprisingly, the early years of the space race saw U.S. projects expand on many levels. The Cold War spurred the buildup of a massive space-based defense and communications infrastructure in the United States. The government and its contractors, essentially unchecked by budgetary restrictions, developed large, sophisticated, and expensive platforms to meet increasingly demanding mission requirements. NASA followed the lead of DOD, building complex scientific and interplanetary spacecraft to maximize research capabilities.

U.S. expertise in space science was escalating. Launch-vehicle capability continued to grow from the 1960s through the early 1980s, with large satellite platforms carrying more powerful payloads (and, often, multiple payloads). Engineers and scientists worked to perfect the technologies necessary for mission success and lengthier operations. Major research spacecraft took nearly a decade to develop, and they grew to cost more than $1 billion.

As years passed, however, several factors pointed to a need to scale back. With the end of the Cold War, government spending in science and technology received increased public scrutiny. Funding for large, complex flagship missions would no longer be available. Budget constraints forced program managers to look seriously at smaller platforms in an attempt to get payloads onto less-costly launch vehicles.

At the same time, the public voiced a growing concern over the potential for reduced research findings in the wake of several failures of large, high-profile, expensive NASA missions; for example, a crippling manufacturing defect was discovered on the Hubble Space Telescope. NASA came under fire for its perceived inability to deliver quality scientific research.

The scientific community expressed frustration about the lack of flight opportunities because only a few flagship missions, with decade-long development times, were being undertaken. After the limited-capability launch of Galileo in 1989 and the loss of Mars Observer in 1993, the next planetary mission to be launched was the Cassini mission to Saturn in 1997, which wouldn't start transmitting data to Earth until 2003, six years after Galileo stopped sending data from Jupiter.

All these issues—budgetary changes brought about by the end of the Cold War, mission failures, predicted gaps in scientific data return—meant that future space-science research and planetary exploration would require a different approach.

In the mid-1980s, with new developments in microelectronics and software, engineers could package more capability into smaller satellites. Funding from the DOD Advanced Research Projects Agency, the Air Force Space Test Program, and university laboratories allowed engineers to build low-profile, low-cost satellites with maximum use of existing components and off-the-shelf technology and minimal nonrecurring developmental effort. Research organizations, private businesses, and academic institutions—all weary of waiting years for their instruments to be piggybacked on large satellites—began to develop small satellites that could be launched as secondary payloads on the shuttle or large expendable boosters.

Small space systems were emerging that were affordable and easy to use, and thus attractive to a larger, more diverse customer base. A new trend had taken shape, and a whole new era in the history of space science was beginning.

Further Reading

R. L. Abramson and D. A. Bearden, “Cost Analysis Methodology for High-Performance Small Satellites,” SPIE International Symposium on Aerospace and Remote Sensing (Orlando, FL, April 1993).

R. L. Abramson, D. A. Bearden, and D. Glackin, “Small Satellites: Cost Methodologies and Remote Sensing Issues” (Paper 2583-62), European Symposium on Satellite Remote Sensing (Paris, France, September 25–28, 1995).

H. Apgar, D. A. Bearden, and R. Wong, “Cost Modeling” chapter in Space Mission Analysis and Design (SMAD), 3rd ed. (Microcosm Press, Torrance, CA, 1999).

D. A. Bearden, “A Complexity-Based Risk Assessment of Low-Cost Planetary Missions: When Is a Mission Too Fast and Too Cheap?” Fourth IAA International Conference on Low-Cost Planetary Missions, JHU/APL (Laurel, MD, May 2–5, 2000).

D. A. Bearden, R. Boudrough, and J. Wertz, “Cost Modeling” chapter in Reducing the Cost of Space Systems (Microcosm Press, Torrance, CA, 1998).

D. A. Bearden, N. Y. Lao, T. B. Coughlin, A. G. Santo, J. T. Hemmings, and W. L. Ebert, “Comparison of NEAR Costs with a Small-Spacecraft Cost Model,” AIAA/Utah State University Conference on Small Satellites (Logan, UT, September 1996).

D. A. Bearden and R. L. Abramson, “A Small Satellite Cost Study,” 6th Annual AIAA/Utah State University Conference on Small Satellites (Logan, UT, September 21–24, 1992).

E. L. Burgess, N. Y. Lao, and D. A. Bearden, “Small Satellite Cost Estimating Relationships,” AIAA/Utah State University Conference on Small Satellites (Logan, UT, September 18–21, 1995).

D. A. Bearden and R. L. Abramson, “Small Satellite Cost Study—Risk and Quality Assessment,” CNES/ESA 2nd International Symposium on Small Satellites Systems and Services (Biarritz, France, June 29, 1994).

M. Kicza and R. Vorder Bruegge, “NASA's Discovery Program,” Acta Astronautica: Proceedings of the IAA International Conference on Low-Cost Planetary Missions (April 12–15, 1994).

T. J. Mosher, N. Y. Lao, E. T. Davalos, and D. A. Bearden, “A Comparison of NEAR Actual Spacecraft Costs with Three Parametric Cost Models,” IAA Low Cost Planetary Missions Conference (Pasadena, CA, April 1998).

National Research Council report, Technology for Small Spacecraft (1994).

L. Sarsfield, “Federal Investments in Small Spacecraft,” Report for Office of Science and Technology Policy, RAND Critical Technologies Institute, DRU-1494-OSTP (September 1996).

L. Sarsfield, The Cosmos on a Shoestring: Small Spacecraft for Space and Earth Science (RAND Critical Technologies Institute, ISBN 0-8330-2528-7, MR-864-OSTP, 1998).
Space-Based Systems for Missile Surveillance

D. G. Lawrie and T. S. Lomheim

Aerospace uses sophisticated analysis and simulation tools to design systems, assess their performance, and link design and performance to system cost.

Early warning against missile attack is a key mission for military planners dealing with missile defense systems, the subject of considerable international debate. A space-based infrared surveillance system can provide such early warning. Enhancing its timeliness and usefulness depends on accurate appraisal of the characteristics of the background against which the target will be detected and of how these characteristics influence sensor design and performance. Optimum sensors must be deployed to ensure high system performance.
The cost of optimum system performance, however, is being
carefully scrutinized by decision-makers as part of the ongoing
process of Department of Defense (DOD) acquisition reform that
has distinguished the procurements of the past decade. Indeed,
system cost has become a critical factor in decisions regarding
DOD acquisitions. The inclusion of system costs in architecture
studies represents a significant and important extension of the traditional role of The Aerospace Corporation in support of the Air
Force Space and Missile Systems Center.
The costs associated with the deployment of a surveillance satellite are closely related to the payload mass, both through the cost of the payload itself and through that of the launch vehicle required to lift it and its satellite platform into, for example, a geostationary orbit. The best-performing sensor has the highest resolution and the best background clutter suppression, but it is also the heaviest and therefore the most expensive.
The study presented in this article
deals with a constellation of space-based infrared surveillance sensors as
an example to show the link between
system performance and system cost.
Simulation tools are used to facilitate
the trades required for optimizing
sensor designs to meet mission requirements and to indicate the best
approach for minimizing system
costs. Sensor resolution (the ability to
see detail) is varied over three fixed-parameter designs in order to examine
the impact of resolution on background clutter and, hence, system performance. The most valuable aspect
of this example is the quantitative link
between system-level performance
and payload mass over conditions of
variable clutter.
Aerospace Simulation
and Modeling Tools
Aerospace regularly provides quick-response assessments of a variety of space-based-sensor concepts. These assessments, underpinned by reasonably detailed sensor design constructs, accurately
determine system performance. The constructs are important for
estimating sensor (and ultimately space-segment) mass, power,
volume, and cost, and for evaluating associated technology risks
for sensor subsystems and their components.
A variety of analysis and simulation tools are used to assess
sensor performance. An analytical approach is usually adequate,
unless the background-scene structure interacts with the sensor to
create a significant component of the system noise (i.e., clutter). In these cases, the
approximations required for an analytic approach are often violated; for example,
cloud edges and land-and-sea interfaces
frequently distort the normal background
amplitude distribution. When this happens,
detailed simulations of the spatial structure
of the background scene, pixel by pixel in
the focal plane, must be incorporated into
the analysis for an accurate assessment of
the sensor’s performance. If the emphasis
is on the system performance of a constellation of sensors, such a level of detail has
generally been viewed as too costly and
time-consuming.

The constellation-level analysis tool TRADIX is used to assess the global missile-warning performance of space-based infrared sensors. The satellites are propagated over an entire day (or epoch) with, typically, five-minute time steps. At each time step, target missiles are launched in up to 36 different azimuthal directions from almost 5000 launch sites uniformly distributed across Earth's surface, resulting in more than 50 million booster launches per epoch. All relevant sensor parameters, target signatures and motions, atmospheric effects, and solar and Earth backgrounds are incorporated within the simulation. The effects of Earth background clutter on system performance, and hence sensor design, are a main topic of this article.
Measuring the Impact of
Background Spatial Structure
Aerospace has developed methodologies
for quantifying the effect of the background spatial structure on the performance of space-based infrared sensors.
The results are coupled with sensor-design
constraints and mission performance tools
to allow high-level systems-engineering
trades that provide insight into relationships between cost and performance. In effect, high-fidelity sensor and phenomenology (target and background) models
generate constraints and databases for use
within constellation-level simulations, enhancing their accuracy. This integrated
simulation capability supports sensor and
system trades for a number of space-based,
infrared surveillance system studies, including those dealing with theater-missile
warning.
An early version of this capability was
used in 1994 to support the Space-Based
Infrared System (SBIRS) Phenomenology
Impact Study, conducted by Aerospace in
collaboration with the Massachusetts Institute of Technology Lincoln Laboratory.
The study recommended collecting background characteristics, a strategy that ultimately involved the MSTI-3 (Miniature Sensor Technology Integration) and Midcourse
Space Experiment satellite experiments, as
well as background observations from a
high-altitude aircraft. The phenomenology
database that the study generated was later
made available to the SBIRS High and Low
components. SBIRS High refers to a constellation in high orbit for full Earth surveillance; SBIRS Low is a constellation in
low orbit for detection and precise tracking
of postboost objects, such as reentry vehicles. The simulation capability was also
used during the 1994 Surveillance Summer
Study as a tool for developing the system requirements for the SBIRS program.

Scan pattern of sensor for theater missile surveillance; the scanner field of regard typically covers most of the Eurasian landmass. The instantaneous field of view of the sensor is shown as FOV(focal plane), which depends on both the detector height and the array length perpendicular to the scan direction. The FOV(focal plane) is scanned a length FOV(in-scan) over three successive adjacent “scan patches” to cover a cross-scan extent, FOV(cross-scan).
Infrared Sensor Design
An evaluation of a space-based infrared
surveillance architecture for early missile
warning in two potential theaters of operation will illustrate the models and analysis
procedures for assessing sensor design and
performance. System performance is derived for three generic infrared scanningsensor designs with varying degrees of
spatial resolution of 1.8, 2.6, and 3.6 kilometers, with parameters specified for potential variations and uncertainties in the
background structure. The sensor designs
were analyzed in parallel to determine sensitivity, subsystem requirements, mass, and
power. The results provide insight into the
cost of the uncertainties in phenomenology, in terms of sensor mass.
In order to relate system performance
with system cost, the sensor performance
must be linked to specific system architectures with well-defined sensor payloads
and associated spacecraft bus, communication, ground, and launch systems. The
focus in this discussion is a key system element: the infrared-sensor payload configured for the detection and tracking of
theater missiles. The sensor is assumed to
be deployed in a geostationary orbit with a
field of regard to cover the anticipated
threat areas, for example, the Middle East
and Northeast Asia.
Typical mission requirements include
the minimum detectable target, time of first
report, size of the geographical area of interest, accuracy of the reported launch location, heading of the target, and accuracy
of the predicted impact point. These are
met with a set of properly sized infrared
sensors that are configured with an optimized satellite-constellation architecture.
The constellation is deployed in a geostationary orbit, which determines the number of satellites, data communication rates
and other elements of the infrastructure,
and space- and ground-based processing
systems. Sensor sizing is driven by the required sensor sensitivity, spatial resolution
at the target range, revisit time (time between looks) or target-update rate, and the
selection of appropriate spectral bands for
discriminating targets from backgrounds.
Once the system performance parameters are defined and the constellation architecture selected, an infrared sensor-system
design is synthesized and the sensor’s firstorder technical design parameters are
obtained:
• sensor type (scanner or starer)
• system field of regard and scan pattern
• telescope field of view, detector pixel
instantaneous field of view, aperture and
focal length
• system scan rate/staring duration
• focal-plane definition (single or dual
color), sensitivity, topology, and frame rate
• signal-processing data rates and functional definition
• overall system digital output data rates
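These parameters might be captured as a simple record for handoff to the subsystem-sizing tools, as in the sketch below; the field names and example values are hypothetical, not drawn from an actual design.

```python
# Hypothetical record of first-order sensor design parameters.
from dataclasses import dataclass

@dataclass
class FirstOrderSensorDesign:
    sensor_type: str            # "scanner" or "starer"
    field_of_regard_deg: float  # system field of regard
    telescope_fov_deg: float    # telescope field of view
    pixel_ifov_urad: float      # detector pixel instantaneous field of view
    aperture_m: float
    focal_length_m: float
    scan_rate_deg_per_s: float  # or staring duration for a starer
    dual_color: bool            # single- or dual-color focal plane
    frame_rate_hz: float
    downlink_rate_mbps: float   # overall system digital output data rate

design = FirstOrderSensorDesign("scanner", 20.0, 1.5, 45.0,
                                0.35, 1.4, 1.0, False, 10.0, 5.0)
print(design.sensor_type, design.aperture_m)
```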
The next level of synthesis fleshes out
the infrared sensor subsystems in enough
detail to allow reasonable estimates of the
mass, power, and volume of the orbital
components of the system. Meaningful
technology risk assessments can now be
formulated.
A variety of linked analysis software
tools and databases execute the payload design and sizing process. For example,
defining the focal plane includes selecting
the focal-plane detector material and optical cut-off wavelength, spatial layout, detector or pixel dimensions, and readout
rate(s). The sensor sensitivity requirement
is translated into a focal-plane sensitivity
constraint, which allows selection of the
focal-plane temperature, using a thermal-noise model specifically tailored for the
chosen detector material (e.g., mercury
cadmium telluride). The focal-plane topology (total detector count) and maximum
readout rate then allow determination of
the electrical power, which, with the focal-plane temperature, serve as inputs to a model for determining the technology, size, power, and volume of the cryogenic cooling system.

A state-of-the-art optical design and tolerancing program uses the design parameters to select and refine a specific telescope optical design for the optical system. The optical program data are used to estimate the mass and volume of the optical subsystem and the power required to scan and point the line-of-sight mirror. The subsystem masses, power dissipations, and volumes are “rolled-up” into an overall payload mass, power, and configuration, which is then used to size the spacecraft bus and to select the launch system. The detailed subsystem parameters, along with the subsystem masses, power-consumption levels, and configurations, are then passed to an appropriate cost-estimating tool(s).

In order to relate system-level performance with cost, the foregoing process is carried out as a function of sensor performance parameters by varying, for example, the sensor noise-equivalent target, revisit time, and ground-sample distance (resolution). For such parametric analyses, scaling relationships are often used to interpolate subsystem mass, power, and volume estimates between design points. This is appropriate once a detailed design is developed as a basis; excursions from this fiducial design then use the appropriate scaling relationships.
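A minimal sketch of such a scaling relationship follows, assuming a power-law dependence of optical-subsystem mass on aperture diameter about a fiducial design; the exponent and fiducial values are illustrative assumptions, not figures from the article.

```python
# Power-law excursion from a fiducial point design (assumed values).
def scaled_mass(fiducial_mass_kg: float, fiducial_aperture_m: float,
                new_aperture_m: float, exponent: float = 2.4) -> float:
    """Interpolate subsystem mass for an aperture excursion."""
    return fiducial_mass_kg * (new_aperture_m / fiducial_aperture_m) ** exponent

# Example: a hypothetical 0.5-meter, 120-kilogram optical subsystem
# grown to 0.7 meter for finer ground resolution.
print(scaled_mass(120.0, 0.5, 0.7))  # ~269 kg
```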
Steps, decisions, and trade-offs in the process for deriving the specifications of an infrared space-based sensor system for detecting missile launches against highly structured backgrounds: driving mission requirements (minimum detectable target, report time, area-of-interest size, tactical-parameter accuracy) and the constellation architecture (altitude, inclination, number of satellites, communications approach) set the driving sensor requirements (noise-equivalent target, ground sample distance, coverage areas, revisit time, spectral bands); a sensor concept is defined and first-order parameters are derived; the optical, focal-plane/processing, and thermal subsystems are sized; the spacecraft is designed and sized and the launch vehicle selected; and mission performance analysis determines whether system performance is acceptable, with dashed paths indicating potential sensor/payload/concept iterations. In most cases, intermediate and overall iterations are required.

Sensor Performance Simulation Tools

The remainder of the discussion focuses on the sensor, taken to be an infrared scanner; constellation system-level simulation tools are used to calculate end-to-end performance. An example is provided wherein the linear size of the sensor ground sample (spatial resolution or detector “footprint”) is systematically varied up to a factor of two to illustrate the impact of sensor susceptibility to background clutter. The level
of this clutter is also varied over a wide
range. Variable ground-sample size is used
to derive corresponding sensor system designs from which payload masses are determined. To more clearly illustrate this
sensitivity trade, cost is assumed to be related only to the infrared sensor mass, as a
rough approximation. The specific cost-benefit relationships developed by this example are illustrated in the next section.
Simulating the performance of a space-based surveillance system involves the use
of an ensemble of software models and
databases:
• SSGM—Synthetic Scene Generation
Model: encapsulates many phenomenology codes under one architecture;
developed by Photon Research Associates Inc. for the DOD Ballistic Missile
Defense Organization
• CLDSIM—cloud-scene simulation model incorporated in SSGM
• VISTAS—Visible and Infrared Sensor
Trades, Analyses, and Simulations
model: combines classical image-processing techniques with detailed sensor models to produce static and time-dependent simulations of a variety of
sensor systems, including imaging,
tracking, and point-target-detection scanners and starers; designed and coded by
Aerospace.
• TRADIX—constellation-level analysis
tool that combines electro-optical sensor models with target, background, and
atmospheric models to evaluate system
performance; developed by Aerospace.
The simulation begins with the generation by SSGM of a set of shortwave infrared
Earth-cloud background scenes. Scenes are
selected from the database of weather satellite imagery. The images are pixelized, and
the infrared properties of the scene elements are inserted into the image database.
Sensor requirements for a hypothetical theater-missile surveillance system. The flowchart illustrates the process that links the sensor requirements (noise-equivalent target, ground sample distance, coverage area, revisit time, spectral-band choice, and range to target) to steps that determine the infrared sensor design and finally the sizes of the individual infrared sensor subsystems: the thermal radiator, the focal-plane array and signal processor, the optical subsystem, and the pointing mirror mechanism. Designing a sensor for a single mission and a limited set of engagement geometries is straightforward. However, surveillance systems must operate against a wide variety of viewing geometries, target signatures, and backgrounds. They must also be able to perform many types of missions, often simultaneously.
A key step in this process is the use of
CLDSIM to simulate the solar scatter from
cloud tops at various altitudes. Scenes with
only terrain, sea surfaces, and low-altitude
clouds usually do not generate much clutter
in the chosen spectral band. Solar reflections from high-altitude clouds, on the other
hand, can cause a high degree of clutter,
which can, in turn, stress the sensor’s ability
to detect targets of interest. SSGM can generate selected atmospheric properties, a
specified spatial resolution, and a matrix of
scenes with a variety of viewing geometries
in the desired spectral band.
Mean frequency of occurrence of clouds above 10 kilometers (University of Wisconsin HIRS-2 data, August 1989–1994). Earth's surface is never completely covered with clouds on any given day. High-altitude clouds are more likely to occur at lower latitudes. The analysis in this study generates the probability of missile warning when clouds of a given type and altitude are present at the locations of interest. A global cloud statistical model developed at Aerospace indicates that, based on six years of data, clouds at 10 kilometers or above occur over Northeast Asia about 20 to 30 percent of the time during the summer, with a maximum cloud coverage of 40 percent. In light of these statistics, sensor performance must be evaluated against clouds ranging in altitude up to 10 kilometers or possibly higher.

The various simulation tools developed and used by Aerospace and their interconnectivity for the evaluation of system performance: a global cloud database supplies frequency-of-occurrence statistics and cloud cover, types, and altitudes to SSGM, the Synthetic Scene Generation Model, which produces background scenes; VISTAS, the high-fidelity sensor model, converts these scenes into clutter statistics for TRADIX, the constellation-level performance tool, which generates coverage, report times, and target-detection statistics. Payload parameters from sensor design and optimization (optical design and straylight analysis, focal-plane layout and noise analysis, signal processing and communications, line-of-sight and thermal control) feed both these tools and other system-level simulation tools (e.g., tracking, resource scheduling). Such simulation tools must be built to accurately model the interaction of the sensors with relevant target and background characteristics over all possible viewing areas. One of the main goals of the tool development is to incorporate the effects of “real” clutter phenomena, such as cloud edges and sun glints, within system-level analyses. A scene-based methodology generates appropriate clutter statistics, which are included within the constellation-level simulations.
For the evaluation of sensor performance, VISTAS models the imaging chain
of the electro-optical sensor, from the
background scene input to the signal-processor output. The sensor-system transfer function is applied to a high-resolution
input scene, such as those produced by
SSGM. The transfer function (output vs.
input) includes the effects of the optical
point spread function, that is, the blur, and
for a scanner, the temporal aperture caused
by the scan motion during the integration
time. The blurred scenes are resampled at
the system resolution, and clutter-rejection
filters are applied. The output is calibrated
to account for the sensor system’s response
to the target intensity. The output scenes
are then analyzed for figures of merit, such
as the standard deviation of the sensor response, which are statistical representations of the clutter of the background scene
and sensor noise for the sensor design under consideration.
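The sketch below walks through a simplified version of such an imaging chain, assuming a Gaussian blur for the optical point spread function, simple decimation for focal-plane sampling, and a local-mean filter for clutter rejection; it illustrates the processing steps, not the VISTAS implementation.

```python
# Simplified imaging-chain sketch: blur, downsample, clutter-reject, and
# report the standard deviation of the residual as a clutter statistic.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def clutter_figure_of_merit(scene: np.ndarray, blur_sigma_px: float,
                            downsample: int) -> float:
    blurred = gaussian_filter(scene, sigma=blur_sigma_px)   # optical blur
    sampled = blurred[::downsample, ::downsample]           # focal-plane sampling
    background = uniform_filter(sampled, size=5)            # local mean estimate
    residual = sampled - background                         # clutter rejection
    return float(residual.std())                            # figure of merit

rng = np.random.default_rng(0)
scene = rng.lognormal(mean=0.0, sigma=1.0, size=(512, 512))  # synthetic radiance
print(clutter_figure_of_merit(scene, blur_sigma_px=2.0, downsample=4))
```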
TRADIX models space sensors operating in both above- and below-the-horizon
modes, from the visible to the long-wave
infrared. It contains a model for the infrared signature of the missile body, plume
intensity and trajectory profiles, clutter statistics generated by SSGM/VISTAS, background models including straylight from
nonrejected earthshine and sunshine, and
the atmospheric path radiance and transmission. These models are integrated with
the Aerospace orbit propagation library,
ASTROLIB, to provide a dynamic simulation tool for studying the constellation-wide
performance of electro-optical sensors.
Critical inputs to the above tools include
focal-plane pixel topology, sensor noise
characteristics, details of the filters and signal processing, optical design and straylight rejection capability, platform drift and
jitter, constellation orbits and phasing, and
concepts of operation, that is, scan modes,
revisit times, and others.
Information on the frequency of occurrence of meteorological conditions is critical to the performance of a space-based infrared surveillance system against Earth
backgrounds. Aerospace uses a global
cloud statistical model that is based on the
University of Wisconsin HIRS-2 (High-Resolution Infrared Sounder) data from the
National Oceanographic and Atmospheric
Administration polar orbiters. The database incorporated in this model provides a
basis for assessing system performance
against clouds of various altitudes for a
given time of year at a specified geographical location. Typical displays of such data
would be shown on a world projection in
terms of the maximum, minimum, and
mean distributions of the probability of
clouds above a given altitude during a
specified month.
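A query against such a database might look like the sketch below; the schema and the sample values are invented placeholders, not HIRS-2 data.

```python
# Hypothetical lookup of high-cloud frequency by region and month.
from typing import Dict, Tuple

# (region, month) -> probability of clouds at or above 10 kilometers
CLOUD_FREQUENCY: Dict[Tuple[str, str], float] = {
    ("northeast_asia", "july"): 0.25,  # placeholder, cf. the 20-30% cited above
    ("middle_east", "july"): 0.05,     # placeholder
}

def prob_high_clouds(region: str, month: str) -> float:
    """Probability that clouds at or above 10 kilometers are present."""
    return CLOUD_FREQUENCY.get((region, month), 0.0)

print(prob_high_clouds("northeast_asia", "july"))
```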
Scene-Based Clutter Analysis
The performance of an Earth-viewing infrared sensor, designed to operate in the
shortwave infrared atmospheric-absorption
band to block signals from terrestrial
sources, is dominated by the structure in
the background scene. This structure is
caused predominantly by sunlight reflected
from clouds in the scene and depends on
the sensor-cloud sun-viewing geometry,
represented by the cloud “look-zenith angle” and the “solar-scatter angle.” The
solar-scatter angle, which determines the
intensity of solar scatter off the cloud tops,
is dominant; very small values of the solar-scatter angle result in very intense scattering. The other angle, the look-zenith angle,
defines the projection of the SSGM cloud
scene in the sensor line of sight, in reference to the range from the sensor to the
cloud tops. The look-zenith angle is also
directly related to the path length through
the atmosphere to a target at a given altitude, and hence to the target’s apparent irradiance for a given time after launch. At
high look-zenith angles, low-altitude targets
viewed within the Earth limb suffer from
the worst combination of range, atmospheric transmission loss, and solar-induced background clutter. Unfortunately, most of the Earth's surface viewed by a sensor in space lies at the larger look-zenith angles. However, the overlapping coverage of multiple sensors can be used to mitigate this problem.

Solar-scatter geometry: As the sensor line of sight shifts from nadir to the limb, both the range to target and the path length through the atmosphere increase, while the minimum possible solar-scatter angle (SCA) decreases. When the sensor is viewing targets at the nadir (look-zenith angle [LZA] equals zero degrees), the minimum possible SCA is 90 degrees, whereas when viewing low-altitude targets at the limb (LZA equals 90 degrees), the sun can be directly in the sensor line of sight (i.e., SCA can be zero degrees). Unfortunately, most of the surface area covered by a given space sensor viewing Earth lies at the larger LZAs. Overlapping coverage provided by a constellation of many sensors mitigates this problem.
The two SSGM scenes selected in this
study represent a “nominal” case containing low- to mid-altitude clouds, and a
“stressing” case with mid- to high-altitude
clouds. The terms nominal and stressing
refer to the level of the clutter generated
when the scene is passed through a typical
sensor simulation. Each scene covers an
area of 512 × 1170 kilometers at nadir with
a 200-meter spatial resolution, as seen
through a midlatitude summer atmosphere
at a look-zenith angle of 60 degrees and
solar-scatter angle of 90 degrees; geometric projection effects at a look-zenith angle
of 60 degrees shorten the apparent size of
these scenes to 510 × 570 kilometers. The
brighter clouds in these images are at
higher altitudes where there is less attenuation of sunlight both before and after it
scatters from the cloud tops.
The most significant caveat concerning
this analysis is the use of the CLDSIM
model within SSGM. A number of uncertainties are inherent in this model, one being the model for the solar scattering from
the clouds. Using detailed comparisons
with actual space sensor data from the
MSTI-3 (Miniature Sensor Technology Integration) sensor that collected information
on Earth and Earth-limb clutter, the resultant uncertainty in the apparent cloud
brightness has been estimated to be at most a factor of three. The impact of this uncertainty was addressed by
scaling the intensity of each scene, in the
nominal case by one-third and one-half and
in the stressing case by two and three, thus
providing a total of six cases. In this way
the effect on system performance of varying the cloud types and altitudes and the
additional impact of the SSGM modeling
uncertainties were quantified.
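In code, generating the six cases amounts to nothing more than scaling the two scene arrays, as in this minimal sketch (array contents assumed):

```python
# Six clutter cases bounding the estimated factor-of-three uncertainty.
import numpy as np

def clutter_cases(nominal: np.ndarray, stressing: np.ndarray) -> dict:
    return {
        "nominal/3": nominal / 3.0,      # nominal scaled by one-third
        "nominal/2": nominal / 2.0,      # nominal scaled by one-half
        "nominal": nominal,
        "stressing": stressing,
        "stressing*2": stressing * 2.0,  # stressing scaled by two
        "stressing*3": stressing * 3.0,  # stressing scaled by three
    }
```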
In viewing the Earth-cloud background
scenes, a scanning sensor typically responds only to changes in the signal, in effect performing a subtraction of the mean
background radiance, thus providing an indication of the target intensity above the
mean. The effectiveness of this suppression
of clutter caused by radiance differences in
the scene depends on the instantaneous
field of view of the detectors, the pixel footprint. In this study, scanning-sensor designs with footprints of 1.8, 2.6, and 3.6 kilometers were used at a nominal range of 40,000 kilometers (corresponding to a look-zenith angle of approximately 75 degrees for a geostationary satellite). The output scenes then exhibit a mean background level of zero, with positive and negative values apparent at the cloud edges. These simulation outputs were used to determine the probabilities of false exceedance (false indication of a target in the field of view) versus intensity threshold for the background clutter.

Nominal scene, left, used for clutter performance analysis: clouds at 2–8-kilometer altitude, 200-meter resolution, and midlatitude summer atmosphere. The database is referred to as nominal because it contains mostly low- to mid-altitude clouds and results in moderate clutter levels, depending on the sensor design and solar angle. Also, these clouds occur quite frequently over a wide range of latitudes. The brighter clouds are at higher altitudes where there is less attenuation of the sunlight. Stressing scene, right, used for clutter performance analysis: clouds at 4–10-kilometer altitude, 200-meter resolution, and midlatitude summer atmosphere. The database is referred to as stressing because it contains cirrus ice clouds at 10-kilometer altitude and can generate quite severe clutter levels. These clouds do occur less frequently than those contained in the nominal scene.
In order to set a threshold, an acceptable
false exceedance rate for each sensor design must be determined. For architectures
where the mission data are processed on
the ground, this involves several factors:
the number of detectors in the focal plane,
the sampling rate, the number of bytes per
sample, and the capacity of the downlink,
as well as some knowledge of the target
detection algorithm. For example, a scanner design with a 1.8-kilometer footprint
could indicate a threshold of 6 kilowatts per steradian for a false exceedance rate per pixel of 10⁻⁴, whereas a design for a 3.6-kilometer footprint viewing the same scene would indicate a false exceedance rate per pixel of 10⁻¹ for the same threshold
intensity. Of course the smaller footprint
design would require many more detectors
and a larger telescope, hence a heavier payload and greater cost. The ground footprint
of an infrared scanner is a key parameter in
determining sensor performance against
structured backgrounds.

Infrared-scanner outputs in the simulation for the stressing background scene. For a scanner, each background input scene results in a single simulated static output scene. The intensity of a target relative to the surrounding background is the principal method for detection. Grey represents the mean level of zero, while white and black represent positive and negative exceedances (false indications of a target in the field of view). In order to provide a visual guide to the clutter levels in these two output scenes, four “target” markers with an intensity of 5 kilowatts per steradian are embedded in each scene. All four are clearly visible in the 1.8-kilometer case, whereas only the central, isolated target is visible in the 3.6-kilometer scene. These output scenes are used to provide a statistical representation of the clutter in terms of the number of exceedances vs. threshold level for the background scene and sensor design.

Probability of false exceedance plotted as a function of threshold intensity for infrared scanners viewing the stressing background scene (stressing clutter: LZA = 60 degrees, SCA = 90 degrees). A threshold of approximately 6 kilowatts per steradian for the 1.8-kilometer-footprint scanner design corresponds to a false exceedance of 10⁻⁴. When that same threshold is applied to the 3.6-kilometer-footprint scanner design, the false exceedance is approximately three orders of magnitude higher (10⁻¹), whereas the reduction in the number of detector channels is only a factor of four.

Infrared-scanner simulation: outputs thresholded at 6 kilowatts per steradian for the stressing background scene. If the 6-kilowatts-per-steradian threshold is applied to the output scenes, the impact of sensor footprint on clutter response is immediately apparent: clutter is reduced in both cases, but there is a more pronounced effect for the 1.8-kilometer footprint. Threshold levels are usually set by the limitations of the onboard processor or the ground communications link. The need to limit the number of false alarms reported by the system can also be a significant constraint. False-exceedance levels of approximately 10⁻⁴ to 10⁻³ are typical.
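The exceedance statistics can be estimated directly from a mean-removed output scene, as in the hedged sketch below; the synthetic Gaussian clutter stands in for a simulated scene, and the 10⁻⁴ budget echoes the example above.

```python
# Empirical false-exceedance curve from a zero-mean output scene.
import numpy as np

def exceedance_probability(output_scene: np.ndarray,
                           thresholds: np.ndarray) -> np.ndarray:
    """Fraction of pixels whose magnitude exceeds each threshold."""
    mags = np.abs(output_scene).ravel()
    return np.array([(mags > t).mean() for t in thresholds])

rng = np.random.default_rng(1)
scene = rng.normal(0.0, 2.0, size=(1024, 1024))  # kW/sr, stand-in clutter
thresholds = np.linspace(0.0, 30.0, 61)          # kW/sr
probs = exceedance_probability(scene, thresholds)
# Lowest threshold meeting a 1e-4 per-pixel false-exceedance budget
idx = int(np.argmax(probs <= 1e-4))
print(thresholds[idx], probs[idx])
```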
In a more complete analysis, one must
account for the sensor noise arising from
the natural fluctuations in the scene and
from the electronics, usually expressed as a
noise-equivalent target intensity. This
accounting is expressed in a threshold-versus-exceedance distribution. If the clutter noise from the background is random in character, it can be expressed as a clutter-equivalent target intensity; then a system-equivalent target intensity can be determined simply as the square root of the sum
of the squares. However, clutter from
clouds in natural background scenes is
usually far from random, so that more
complex methods are required for combining the sensor noise with the background
clutter distribution.
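In equation form, the random-clutter case combines the two contributions by root-sum-square; the symbols here are shorthand introduced for this note, not notation from the article:

```latex
% Valid only when the background clutter is approximately random:
%   I_NET = noise-equivalent target intensity (scene and electronics noise)
%   I_CET = clutter-equivalent target intensity
%   I_SET = system-equivalent target intensity
\[
  I_{\mathrm{SET}} = \sqrt{I_{\mathrm{NET}}^{2} + I_{\mathrm{CET}}^{2}}
\]
```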
Target Response
Target response is a scale factor that describes the attenuation of a target through
the sensor system. The response of a scanning sensor to an unresolved target, that is,
a “point source,” depends on several factors. These are the blurring caused by the
optics, the temporal aperture caused by the
scan motion during the integration time,
the sampling of the blurred target by the focal plane, the target phasing (i.e., the location of the target relative to the center of a
pixel), and the electronic filtering.
For a fast-scanning sensor system, the
response does not depend on the temporal
characteristics of the target so the calculation is fairly straightforward. The target response can be evaluated by constructing a
scene in which many point sources (usually 1 kilowatt per steradian) are arranged in a grid, spaced far enough apart to avoid interference and each offset randomly by a small amount to make the grid nonuniform, thus ensuring many different target phasings. This target grid is then passed through the same simulation process as the background scenes, namely blurring, downsampling, and filtering. The peak response from each target is determined, and the average is taken as the mean target response. All simulated clutter scenes are divided by this target response so they can be referenced to apparent intensities at the sensor aperture.
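The grid construction lends itself to a short sketch. The version below is a toy stand-in, assuming a Gaussian blur for the optics and simple decimation for the focal-plane sampling, but it reproduces the essential steps: jittered point sources, the blur/downsample chain, and the average of the per-target peaks.

import numpy as np
from scipy.ndimage import gaussian_filter

def mean_target_response(n=16, spacing=32, sigma=1.5, decimate=2, seed=1):
    # Jittered grid of 1-kW/sr point sources: the random sub-pixel
    # offsets produce many different target phasings.
    rng = np.random.default_rng(seed)
    scene = np.zeros((n * spacing, n * spacing))
    for iy in range(n):
        for ix in range(n):
            y = iy * spacing + spacing // 2 + rng.integers(-3, 4)
            x = ix * spacing + spacing // 2 + rng.integers(-3, 4)
            scene[y, x] = 1.0
    # Blur (optics stand-in), then downsample (focal-plane sampling).
    sampled = gaussian_filter(scene, sigma)[::decimate, ::decimate]
    # Peak response of each target = maximum within its grid cell.
    cell = spacing // decimate
    peaks = sampled.reshape(n, cell, n, cell).max(axis=(1, 3))
    return peaks.mean()

# Clutter scenes would be divided by this factor to reference them to
# apparent intensity at the sensor aperture.
print(mean_target_response())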
Constellation Performance
To evaluate the performance of a space-based infrared surveillance architecture
against a variety of target and background
conditions, the sensor response to both
must be combined in a constellation-level
simulation. This is done by first choosing
one of the cloud databases contained
within the SSGM, then generating scenes
spanning the entire range of viewing
geometries and sun angles for the selected
sensor constellation. The scenes are then
processed through the detailed sensor simulation tool, VISTAS, to produce a set of
false-exceedance-threshold clutter distributions. These are combined with the sensor
noise-equivalent target intensity in TRADIX, and the threshold and minimum detectable target are calculated as functions
of the look-zenith and solar-scatter angles,
for the required probability of detection
and false exceedance.
Within TRADIX, the targets and satellites are propagated in Earth-central-inertial coordinates with an appropriate
sampling interval, typically 5 to 15 minutes. This is carried out over a period of a
day at various times during the year to explore the effects of seasonal variations in
the sun’s latitude. For a constellation in
geostationary orbits and target launch sites
in the northern hemisphere, the most
stressing solar-scattering angles are close
to the summer solstice, and the resultant
background clutter is the dominant effect
in the system performance. On the other
hand, the effect of solar straylight can be
more stressing close to the solar equinoxes.
Each sensor-target line of sight at each time results in a look-zenith-angle/solar-scatter-angle pair, which, with the clutter data, sets a minimum-detectable-target threshold for that sensor and hence a time of first detection. For example, three out of four detections produce a "3of4" report from that sensor, and two such reports by separate sensors result in a stereo report for the constellation. Target-detection and report-time statistics are thus generated for each sensor design against each structured background for the mission of interest. This procedure has been applied to missions of global- and theater-missile warning.
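A minimal sketch of this reporting logic, assuming a boolean hit sequence per sensor (True wherever the target exceeded that sensor's minimum-detectable-target threshold at that sample); the "3of4" sliding window and the two-sensor stereo rule follow the description above, though the function names are illustrative stand-ins.

def first_report_time(hits, m=3, n=4):
    # Index of the first sample at which m of the last n looks were hits.
    for t in range(n - 1, len(hits)):
        if sum(hits[t - n + 1:t + 1]) >= m:
            return t
    return None

def stereo_report_time(per_sensor_hits, m=3, n=4):
    # Earliest time by which two separate sensors have each issued an
    # m-of-n report -- a stereo report for the constellation.
    times = sorted(t for t in (first_report_time(h, m, n)
                               for h in per_sensor_hits) if t is not None)
    return times[1] if len(times) >= 2 else None

sensor_a = [False, True, True, False, True, True]
sensor_b = [False, False, True, True, True, True]
print(stereo_report_time([sensor_a, sensor_b]))  # -> 4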
For a typical global analysis, target
launch sites are uniformly spread over the
surface of Earth from −90- to +90-degree
latitude. A target spatial pattern with a resolution of 3 × 3 degrees (Earth central angle)
generates 4586 distinct target launch locations that represent equal-sized areas on the
surface of Earth. Missiles are usually launched in 12 to 36 azimuthal directions in order to allow for aspect-angle effects on the apparent booster signature. The point of such an analysis is to obtain a measure of system performance that is not scenario driven. As many as 50 million target launches may be run in order to determine global performance for a particular space-surveillance constellation.
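One common way to realize such an equal-area pattern, offered here only as a plausible reconstruction, is to divide Earth into 3-degree latitude bands and give each band a number of longitude cells proportional to the cosine of its central latitude; depending on the rounding rule, the count lands very close to the 4586 locations quoted above.

import math

def equal_area_launch_grid(step_deg=3.0):
    points = []
    nbands = int(180 / step_deg)
    for i in range(nbands):
        lat = -90.0 + (i + 0.5) * step_deg  # band center
        # Cells per band shrink with cos(latitude) to keep areas equal.
        ncells = max(1, round((360.0 / step_deg) * math.cos(math.radians(lat))))
        width = 360.0 / ncells
        points += [(lat, -180.0 + (j + 0.5) * width) for j in range(ncells)]
    return points

print(len(equal_area_launch_grid()))  # close to the 4586 quoted in the text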
For the theater-missile-warning mission,
a representative short-range missile was assumed with an infrared intensity profile increasing as the missile rose through the atmosphere to a maximum, then decreasing
as the afterburning of the exhaust diminished (as viewed from a broadside aspect at
the two extreme look-zenith angles of 0
and 90 degrees, corresponding to the nadir
and the limb).
The apparent shortwave-infrared intensity of a hypothetical theater missile vs. time after launch when viewed at nadir (LZA = 0 degrees) and the limb (LZA = 90 degrees); includes attenuation by the atmosphere. [Plot: intensity 0 to 120 kilowatts/steradian over 0 to 60 seconds after launch.]
The analysis was limited to
the worst-case epoch (date or day of year)
for clutter-limited detection and was focused on two theaters of operation, one in
the Middle East, the other in Northeast
Asia. A constellation of five satellites, with
four of the satellites “pinched” to cover the
Eurasian landmass, was selected to provide
excellent overlapping coverage of both areas simultaneously.
A short-wave-infrared line scanner was
selected as the sensor-design option. Although not the most effective choice for
clutter suppression, a scanner reduces the program risks, such as extreme line-of-sight stability requirements and focal-plane producibility, associated with infrared-starer
designs. On the other hand, for acceptable
performance against highly structured
backgrounds, it is necessary that a scanner
have a relatively small footprint.
Instantaneous line-of-sight coverage of two theaters by five (four pinched) geostationary satellites. The 3- × 3-degree target grid results in approximately 70 launch locations per theater. A representative type of theater missile has a burnout time of approximately 60 seconds after launch. The ability to optimize satellite locations and thus trade between theater performance and global coverage is most easily achieved with geostationary constellations. The longitudes of the five geostationary satellites are shown here on a cylindrical projection of Earth, along with the boundaries of the two theaters and the constellation's line-of-sight coverage. Essentially 100-percent triple coverage is provided over both theaters of operation.
Theater-missile-warning stereo performance of five (four pinched) satellites against highly stressing clutter. The stereo performance results across the entire Eurasian landmass are shown for all three sensor designs against the highly stressing clutter level. The performance for the Middle East is indeed very close to that for Northeast Asia. These designs would be even further stressed if the system were being asked to perform additional missions simultaneously.
In this
study, three designs with footprints of 1.8,
2.6, and 3.6 kilometers were considered.
The infrared-scanner optics consisted of
a triplet refractor and a two-axis entry flat
for scanning. For a low noise-equivalent-target intensity, sensor aperture was traded against time-delay-and-integration capability on the focal plane. The result was a 27-centimeter aperture with 12 stages of time delay and integration. A 2-second revisit time, taken as a mission requirement, necessitated a scan rate of 4 degrees per second
and resulted in a noise-equivalent-target
intensity of about 1 kilowatt per steradian
at 40,000 kilometers. A single-hit probability of detection of 95 percent was chosen;
this led to a cumulative probability of detection of 99 percent for the 3of4-hits algorithm. The resultant probability of detection
and false exceedance combination led to a
minimum-detectable target as a function of
viewing geometry for each sensor design,
background scene, and clutter scale factor.
The TRADIX constellation analysis tool
was then used to evaluate the performance
of the three sensor designs against the
nominal and stressing backgrounds.
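The cumulative probability of detection quoted above, 99 percent for the 3of4-hits algorithm at a 95-percent single-hit probability, can be checked directly: treating the four looks as independent, the chance of at least three hits is a binomial tail sum, as in this small sketch.

from math import comb

def cumulative_pd(p_single, m=3, n=4):
    # Probability of at least m detections in n independent looks.
    return sum(comb(n, k) * p_single**k * (1 - p_single)**(n - k)
               for k in range(m, n + 1))

print(round(cumulative_pd(0.95), 3))  # 0.986, i.e., about 99 percent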
Conclusion
The costs associated with the deployment of a surveillance satellite are closely related to the payload mass, both through the cost of the payload itself and through that of the launch vehicle required to lift it and its satellite platform into geostationary orbit. Accordingly, rather than attempting to carry out a
detailed cost analysis, this study was focused on estimations of payload mass,
which included the telescope, sensor housing, and scan mirror; the focal-plane assemblies and signal processors; power supplies; and other subsystem masses. The
highest-performing sensor with the smallest footprint and best clutter suppression
ends up also being the heaviest and most
expensive. The overall costs of this highest-performing system would have to be
weighed against the value of the mission to
the national interest; such considerations
are beyond the scope of this investigation.
The study described here illustrates the
application of analytical tools and databases
designed and assembled at Aerospace to
support the development of advanced
space-based surveillance systems. The example shown represents a hypothetical system for the specific purpose of missile
warning that covers two widely separated
geographical areas of concern.
[Plot: theater-missile-warning stereo detection probability (percent) vs. clutter level (nominal, stressing, stressing × 2, stressing × 3) for the 1.8-, 2.6-, and 3.6-kilometer footprints; five geostationary satellites (four pinched), Northeast Asia theater.]
Theater-missile-warning stereo performance sensitivity to sensor footprint and clutter. Stereo tracking is necessary if stressing requirements associated with launch-point determination and impact-point prediction are to be met. The impact of the larger footprints on the stereo performance of the three sensor designs is severe. A 95-percent probability of stereo detection before burnout is only achievable for the 2.6-kilometer or better design against the stressing clutter level. The 3.6-kilometer design can only meet a stereo probability of detection of 95 percent at the nominal clutter level or below. Ten-kilometer clouds can be expected over Northeast Asia approximately 20 to 30 percent of the time during the summer months, and six-kilometer clouds can be expected 40 to 50 percent of that time.
[Plot: stereo detection probability (percent) vs. payload mass (100 to 200 kilograms) for the 1.8-, 2.6-, and 3.6-kilometer footprints at nominal, stressing, stressing × 2, and stressing × 3 clutter levels; five geostationary satellites (four pinched), Northeast Asia theater.]
Performance sensitivity to sensor footprint and payload mass at the various shortwave-infrared clutter levels with line-of-sight stereo coverage. Essentially, the graph shows the cost of designing an infrared sensor with guaranteed performance against an uncertain level of background clutter. The larger-footprint designs are less robust against increasing clutter levels. The data show that while there are significant cost savings to be had in lower payload mass (and power), the performance penalty in stereo-track capability associated with a larger-footprint design may be severe.
Aerospace is currently applying comparable analyses
in its support to the Air Force Space and
Missile Systems Center and the Ballistic
Missile Defense Organization for the development of both SBIR High and Low
systems.
Further Reading
A. F. Pensa and J. R. Parsons, SBIR System Phenomenology Impact Study (SSPIS) (sponsored by the U.S. Air Force Space and Missile Systems Center, El Segundo, CA, December 1994).
D. G. Lawrie et al., "Electro-Optical Sensor Simulation for Theater Missile Warning," paper presented at the Fifth Symposium on Space Systems (Cannes, France, June 1996).
T. S. Lomheim et al., "Performance/Sizing Relationships for a Short-Wave/Mid-Wave Infrared Scanning Point-Source Detection Space Sensor," Proceedings of the 1999 IEEE Aerospace Conference (Aspen, CO, March 1999).
Infrared Systems and Technology
Infrared sensors detect reflected and
emitted radiation from sources in a
wavelength regime well beyond what
can be seen by the human eye. Such
sensors afford detection of hot missile
plumes as well as cold space targets.
The sensors also allow detection
through haze, at night, and during conditions of atmospheric radiance and
reflected solar radiation, among other
applications.
When placed on space-based platforms, these infrared sensor systems
can detect, identify, and track missiles
along most of their trajectory, from
launch to impact. Infrared space sensor systems, such as the High- and
Low-altitude segments of the Space
Based Infrared System, provide global
and theater-missile warning and have
applications in missile defense.
The Aerospace Institute offers a 30-hour course, Infrared Systems and
Technology, on infrared systems design, including the relationship between resolution, search rate, and elevation angle. The course also provides
instruction on focal-plane arrays,
space radiation effects, clutter, background phenomenology, passive cooling, infrared system simulation, system performance analysis, and
alternative technologies and trades for
single and multisensor systems.
Infrared Systems and Technology is
linked to the Space Systems Design
course in the Space Systems Engineering series of the Aerospace Systems Architecting and Engineering
Certificate Program. The Space Systems Design course describes the
space systems design process, how
design fits into the space-mission
timeline, and how requirements flow
between the vehicle payload, spacecraft, and ground system.
Links
Conferences, Workshops, and Symposia Sponsored or Hosted by The Aerospace Corporation
Space Power Workshop
Hosted by The Aerospace Corporation, the Air Force Research
Laboratory, and the Air Force Space and Missile Systems Center.
The workshop provides an informal, unclassified, international
forum to exchange ideas and information on space power. The
theme of this year’s workshop is “Commonality in Space and
Terrestrial Power.”
The sessions encompass the following areas of interest.
• Power systems architecture
• Power management and distribution (PMAD)
• Energy generation
• Energy storage
• Program experience
April 2–5, 2001
In addition to the sessions, workshop groups will discuss timely space-power issues. An announcement of the preliminary schedule of presentations, registration form, and final hotel information will be available in February 2001.
The workshop will be held at the Crowne Plaza, Redondo Beach, CA 90277. For hotel information call 800.368.9760 or 310.318.8888 or visit the hotel Web site at http://www.basshotels.com/crowneplaza?_franchisee=REDCP.
For the latest updates on the workshop, please visit the Web site at http://www.aero.org/conferences/power. For general information, contact Jackie Amazaki by phone at 310.336.4073 or by email at jacqueline.y.amazaki@aero.org.
Space Parts Working Group (SPWG)
Sponsored by The Aerospace Corporation and the Air Force Space and Missile Systems Center.
This joint government-industry working group provides an unclassified and international forum to disseminate information to the aerospace industry and resolve common problems with high-reliability electronic piece parts needed for space applications. The meeting will include presentations from piece-part leaders in the commercial and military space industry, parts suppliers, and government agencies, including the Air Force Space and Missile Systems Center, the National Aeronautics and Space Administration, and the Defense Supply Center Columbus.
May 1–2, 2001
The sessions will discuss the following subjects.
• Piece parts and related materials and processes issues
• Parts standardization
• Enhancement of the part procurement process
• Piece-part trends
• Radiation hardness
The meeting will be held at the Hilton Hotel, 21333 Hawthorne Blvd., Torrance, CA 90503. For hotel information call 310.540.0500 or visit the hotel Web site at http://www.hiltontorrance.com.
For more information, contact Mel Cohen at 310.336.0470 or Larry Harzstark at 310.336.5883. Fax 310.336.6914.
11th Annual International Symposium of INCOSE
The Aerospace Corporation is a patron of the symposium, which is hosted this year by the Systems Engineering Society of Australia. The theme of INCOSE (International Council on Systems Engineering) 2001, "Innovate, Integrate and Invigorate," calls on delegates to discuss and debate the challenges presented by continued use of existing products and systems as well as the way forward in an era of rapid change and technological progress. The symposium will provide a forum for addressing the demands arising from the innovation of novel systems and systems-of-systems, the integration of tomorrow's functionality with today's systems, and the invigoration of systems thinking and practice as the globalization of technology and business accelerates in the coming decade. The program will be published after papers, panels, and tutorial submissions have been evaluated and final content has been selected.
July 1–5, 2001
The Academic Forum Program will address the following topics.
• Involvement of academics in the INCOSE Plan
• Academic contributions needed to support systems engineering
• How academia can support stakeholders
• Existing academic activity in support of systems engineering
• What next?
The symposium will be held at the Carlton Crest Hotel, 65 Queens Road, Melbourne, Australia. For hotel information visit the hotel's Web site at http://asiatravel.com/australia/prepaidhotels/carton_crest/melbourne/.
For more information, visit the symposium's Web site at http://www.incose.org/symp2001/.
Bookmarks: Recent Publications and Patents by the Technical Staff
Publications
S. T. Amimoto, D. J. Chang, and A. Birkitt,
“Stress Measurements in Silicon Microstructures,” Proceedings SPIE, Vol.
3933, 113–121 (2000).
P. C. Anderson, D. L. McKenzie, et al.,
“Global Storm Time Auroral X-ray Morphology and Timing and Comparison With
UV Measurements,” Journal of Geophysical Research, Vol. 105, No. A7,
15,757–15,777 (July 1, 2000).
E. J. Beiting, “Measurements of Stratospheric
Plume Dispersion by Imagery of Solid
Rocket Motor Exhaust,” Journal of Geophysical Research, Vol. 105, No. D5,
6891–6901 (Mar. 16, 2000).
K. Bell et al., “A Joint NASA and DOD Deployable Optics Space Experiment,” UV,
Optical, and IR Space Telescopes and Instruments, Proceedings of the Conference
(Munich, Germany, Mar. 29–31, 2000),
Bellingham, WA, Society of Photo-Optical
Instrumentation Engineers, SPIE Proceedings, Vol. 4013, 568–579.
K. Bell et al., “Ultra-Lightweight Optics for
Space Applications,” UV, Optical, and IR
Space Telescopes and Instruments, Proceedings of the Conference (Munich, Germany, Mar. 29–31, 2000), Bellingham,
WA, Society of Photo-Optical Instrumentation Engineers, SPIE Proceedings, Vol.
4013, 687–698.
K. Bell, R. Moser, et al., “A Deployable Optical Telescope Ground Demonstration,” UV,
Optical, and IR Space Telescopes and Instruments, Proceedings of the Conference
(Munich, Germany, Mar. 29–31, 2000),
Bellingham, WA, Society of Photo-Optical
Instrumentation Engineers, SPIE Proceedings, Vol. 4013, 559–567.
J. F. Binkley, J. B. Clark, and C. E. Spiekermann, “Improved Procedure for Combining Day-of-Launch Atmospheric Flight
Loads,” Journal of Spacecraft and Rockets,
Vol. 37, No. 4, 459–462 (Aug. 2000).
J. B. Blake, J. Fennell, R. Selesnick, et al., “A
Multi-Spacecraft Synthesis of Relativistic
Electrons in the Inner Magnetosphere Using LANL, GOES, GPS, SAMPEX, HEO,
and POLAR,” Advances in Space Research,
Vol. 26, No. 1, 93–98 (July 2000).
J. B. Blake et al., “Magnetospheric Relativistic
Electron Response to Magnetic Cloud
Events of 1997,” Advances in Space Research, Vol. 25, Nos. 7–8, 1387–1392 (Apr.
2000).
J. B. Blake, M. Looper, et al., “SAMPEX Observations of Precipitation Bursts in the
Outer Radiation Belt,” Journal of Geophysical Research, Vol. 105, No. A7,
15,875–15,885 (July 1, 2000).
W. F. Buell, R. W. Farley, et al., “Bayesian
Spectrum Analysis for Laser Vibrometry
Processing,” Laser Radar Technology and
Applications V, Proceedings of the Conference (Orlando, FL, Apr. 26–28, 2000),
Bellingham, WA, Society of Photo-Optical
Instrumentation Engineers, SPIE Proceedings, Vol. 4096, No. 190, 444–455.
D. J. Chang, P. R. Valenzuela, and T. V. Albright, “Failure Simulation of Composite
Rocket Motor Cases,” Proceedings, 2000
National Space and Missile Materials Symposium (San Diego, CA, March 1, 2000),
pp. 1–14.
I-S. Chang, “Chinese Space Launch Failures,”
Proceedings 22nd International Symposium
on Space Technology and Science
(Morioka, Japan, May 28–June 4, 2000),
pp. 1–10.
I-S. Chang, “Overview of World Space
Launches,” Journal of Propulsion and
Power, Vol. 16, No. 5, 853–866 (Oct.
2000).
J. B. Clark, M. C. Kim, and A. M. Kabe, “Statistical Analysis of Atmospheric Flight
Gust Loads Analysis Data,” Journal of
Spacecraft and Rockets, Vol. 37, No. 4,
443–445 (Aug. 2000).
J. E. Clark et al., “Overview of the GPS M
Code Signal,” Navigating into the New Millennium; Proceedings of the Institute of
Navigation National Technical Meeting
(Anaheim, CA, Jan. 26–28, 2000), pp.
542–549.
J. H. Clemmons et al., “Observations of Traveling Pc5 Waves and Their Relation to the
Magnetic Cloud Event of January 1997,”
Journal of Geophysical Research, Vol. 105,
No. A3, 5441–5452 (Mar. 1, 2000).
S. V. Didziulis, P. P. Frantz, and G. Radhakrishnan, “The Frictional Properties of Titanium Carbide, Titanium Nitride and
Vanadium Carbide: Measurement of a
Compositional Dependence with Atomic
Force Microscopy,” Journal of Vacuum Science and Technology B, Vol. 18, No. 1,
69–75 (Jan/Feb 2000).
J. F. Fennell et al., “Comprehensive Particle
and Field Observations of Magnetic Storms
at Different Local Times from the CRRES
Spacecraft,” Journal of Geophysical Research, Vol. 105, No. A8, 18,729–18,740
(Aug. 1, 2000).
J. F. Fennell et al., “Energetic Magnetosheath
Ions Connected to the Earth’s Bow
Shock—Possible Source of Cusp Energetic
Ions,” Journal of Geophysical Research,
Vol. 105, No. A3, 5471–5488 (Mar. 1,
2000).
J. F. Fennell and J. L. Roeder, “Entry of
Plasma Sheet Particles into the Inner Magnetosphere as Observed by Polar/CAMMICE," Proceedings of the 1998 Cambridge Symposium/Workshop in Geoplasma Physics on "Multiscale Phenomena in Space Plasmas II," No. 15, edited by T. Chang (1998), pp. 388–393.
J. F. Fennell, J. L. Roeder, J. B. Blake, et al.,
“POLAR CEPPAD/IPS Energetic Neutral
Atom (ENA) Images of a Substorm Injection,” Advances in Space Research, Vol. 25,
No. 12, 2407–2416 (June 2000).
A. Gillam and S. P. Presley, “A Paradigm Shift
in Conceptual Design,” Proceedings, International CIRP Design Seminar (Haifa, Israel, May 16–18, 2000), pp. 41–46.
E. Harvie, M. Phenneger, et al., "GOES On-Orbit Storage Mode Attitude Dynamics and
Control,” Advances in the Astronautical
Sciences, Vol. 103, pt. 2, No. 190, 1095–
1114 (2000).
D. F. Hall et al., “Measurement of Long-Term
Outgassing from the Materials Used on the
MSX Spacecraft,” Optical Systems Contamination and Degradation II—Effects,
Measurements, and Control, Proceedings of
the Conference (San Diego, CA, Aug. 2–3,
2000), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers, SPIE
Proceedings, Vol. 4096, 28–40.
D. F. Hall et al., “Observations of the Particle
Environment Surrounding the MSX Spacecraft,” Optical Systems Contamination and
Degradation II—Effects, Measurements,
and Control, Proceedings of the Conference
(San Diego, CA, Aug. 2–3, 2000), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers, SPIE Proceedings,
Vol. 4096, 21–27.
D. F. Hall et al., “Outgassing of Optical Baffles
and Primary Mirror During Cryogen Depletion of a Space-Based Infrared Instrument,” Optical Systems Contamination and
Degradation II—Effects, Measurements,
and Control, Proceedings of the Conference
(San Diego, CA, Aug. 2–3, 2000), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers, SPIE Proceedings,
Vol. 4096, 11–20.
D. F. Hall et al., “Update of the Midcourse
Space Experiment (MSX) Satellite Measurements of Contaminant Films Using
QCMs,” Optical Systems Contamination
and Degradation II—Effects, Measurements, and Control, Proceedings of the
Conference (San Diego, CA, Aug. 2–3,
2000), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers, SPIE
Proceedings, Vol. 4096, No. 190, 1–10.
D. F. Hall, G. S. Arnold, T. R. Simpson, D. R.
Suess, and P. A. Nystrom, “Progress on
Spacecraft Contamination Model Development,” Optical Systems Contamination and
Degradation II—Effects, Measurements,
and Control, Proceedings of the Conference
(San Diego, CA, Aug. 2–3, 2000), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers, SPIE Proceedings,
Vol. 4096, No. 190, 138–156.
C. Hay and J. Wang, “Enhancing GPS—Tropospheric Delay Prediction at the Master
Control Station,” GPS World, Vol. 11, No.
1, 56–62 (Jan. 2000).
J. H. Hecht, A. B. Christensen, et al., “Thermospheric Disturbance Recorded by Photometers Onboard the ARIA II Rocket,”
Journal of Geophysical Research, Vol. 105,
No. A2, 2461–2475 (Feb. 2000).
K. C. Herr et al., “Spectral Anomalies in the 11
and 12 Micron Region from the Mariner
Mars 7 Infrared Spectrometer,” Journal of
Geophysical Research, Vol. 105, No. E9,
22507–22515 (Sept. 25, 2000).
A. A. Jhemi et al., “Optimization of Rotorcraft
Flight in Engine Failure,” Proceedings AHS
International, Annual Forum, 56th (Virginia Beach, VA, May 2–4, 2000), Vol. 1,
523–536 (2000).
A. M. Kabe, C. E. Spiekermann, M. C. Kim, S.
S. Lee, “Refined Day-of-Launch Atmospheric Flight Loads Analysis Approach,”
Journal of Spacecraft and Rockets, Vol. 37,
No. 4, 453–458 (Aug. 2000).
J. A. Kechichian, “Minimum-Time Constant
Acceleration Orbit Transfer With First-Order Oblateness Effect,” Journal of Guidance, Control, and Dynamics, Vol. 23, No.
4, 595–603 (Aug. 2000).
J. A. Kechichian, “Transfer Trajectories from
Low Earth Orbit to a Large L1-Centered
Class I Halo Orbit in the Sun-Earth Circular Problem,” The Richard H. Battin Astrodynamics Symposium (College Station, TX,
March 20–21, 2000).
M. C. Kim, A. M. Kabe, S. S. Lee, “Atmospheric Flight Gust Loads Analysis,” Journal of Spacecraft and Rockets, Vol. 37, No.
4, 446–452 (Aug. 2000).
H. C. Koons, “Application of the Statistics of
Extreme Values to Space Science," Proceedings of the American Geophysical
Union Spring 2000 Meeting (Washington,
D.C., May 30, 2000), pp. 1–22.
H. C. Koons and J. F. Fennell, “Space Environment—Effects on Space System,” Proceedings of the Chapman Conference on
Space Weather (Clearwater, FL, March
2000), pp. 1–13.
J. C. Latta, “Production Cost Improvement,”
Proceedings, ISPA 2000 (Noordwijk, The
Netherlands, May 8–10, 2000), pp. 1–30.
T. S. Lomheim, “Hands-On Science Instruction for Elementary Education Majors,”
Optical Engineering Reports: SPIE Education Column, No. 195, 11 (March 2000).
K. T. Luey, D. J. Coleman, and J. C. Uht, “Separation of Volatile and Nonvolatile Deposits on Real-Time SAW NVR Detectors
(Nonvolatile Residue Detectors for Spacecraft Contamination)," Optical Systems
Contamination and Degradation II—Effects, Measurements, and Control, Proceedings of the Conference (San Diego, CA,
Aug. 2–3, 2000), Bellingham, WA, Society
of Photo-Optical Instrumentation Engineers, SPIE Proceedings, Vol. 4096,
109–118.
D. C. Marvin et al., “Enabling Power Technologies for Small Satellites,” Proceedings
of Space 2000: The 7th International Conference and Exposition on Engineering,
Construction, Operations, and Business in
Space (Albuquerque, NM, Feb. 27–Mar. 2,
2000), pp. 530–536.
J. E. Mazur, “Interplanetary Magnetic Field
Line Mixing Deduced from Impulsive Solar-Flare Particles,” Astrophysical Journal,
Vol. 532, L79–L82 (March 20, 2000).
J. E. Mazur et al., “Characteristics of Energetic
(Not Less Than Approximately 30 keV/nucleon) Ions Observed by the Wind/STEP
Instrument Upstream of the Earth’s Bow
Shock,” Journal of Geophysical Research,
Vol. 105, No. A1, 61–78 (Jan. 2000).
J. E. Mazur, J. B. Blake, M. D. Looper, et al.,
“Anomalous Cosmic Ray Argon and Other
Rare Elements at 1-4 MeV/nucleon
Trapped Within the Earth’s Magnetosphere,” Journal of Geophysical Research,
Vol. 105, No. A9, 21,015–21,023 (Sept.
2000).
D. L. McKenzie et al., “Global X-ray Emission
During an Isolated Substorm—A Case
Study," Journal of Atmospheric and Solar-Terrestrial Physics, Vol. 62, No. 10,
889–900 (July 2000).
D. L. McKenzie et al., “Studies of X-ray Observations from PIXIE,” Journal of Atmospheric and Solar-Terrestrial Physics, Vol.
62, No. 10, 875–888 (July 2000).
M. C. McNab et al., "Mappings of Auroral X-ray Emissions to the Equatorial Magnetosphere—A Study of the 11 April 1997
Event,” Advances in Space Research, Vol.
25, Nos. 7–8, 1645–1650 (Apr. 2000).
K. W. Meyer and C. C. Chao, “Atmospheric
Reentry Disposal for Low-Altitude Spacecraft,” Journal of Spacecraft and Rockets,
Vol. 37, No. 5, 670–674 (Oct. 2000).
R. L. Moser, K. D. Bell, et al., “Experimental
Control of Microdynamic Events Observed
During the Testing of a Large Deployable
Optical Structure,” UV, Optical, and IR
Space Telescopes and Instruments Proceedings of the Conference (Munich, Germany,
Mar. 29–31, 2000), Bellingham, WA, Society of Photo-Optical Instrumentation Engineers, SPIE Proceedings, Vol. 4013,
715–726.
R. L. Moser, M. J. Stallard, et al., “Low Cost
Microsatellites—Innovative Approaches to
Breaking the Cost Paradigm,” AIAA Space
2000 Conference and Exhibition (Long
Beach, CA, Sept. 19–21, 2000).
T. M. Nguyen and J. M. Charroux, “Baseband
Carrier Phase Tracking Technique for
Gaussian Minimum Shift-Keying,” AIAA
Space 2000 Conference and Exhibition
(Long Beach, CA, Sept. 19–21, 2000).
T. M. Nguyen, C. C. Wang, and L. B. Jocic,
“Wideband CDMA for NavCom Systems,”
AIAA Space 2000 Conference and Exhibition (Long Beach, CA, Sept. 19–21, 2000).
A. V. Rao, “Minimum-Variance Estimation of
Reentry Debris Trajectories,” Journal of
Spacecraft and Rockets, Vol. 37, No. 3,
366–373 (June 2000).
A. F. Rivera, “The Impact of Power Architecture on Electromagnetic Interference Control,” Space Power Workshop-2000 (Torrance, CA, April 13, 2000), pp. 1–14.
B. H. Sako, M. C. Kim, A. M. Kabe, and W. K.
Yeung, "Derivation of Atmospheric Gust-Forcing Functions for Launch-Vehicle
Loads Analysis,” Journal of Spacecraft and
Rockets, Vol. 37, No. 4, 434–442 (Aug.
2000).
K. Siri, "Study of System Instability in Solar-Array-Based Power Systems," IEEE Transactions on Aerospace and Electronic Systems, Vol. 36, No. 3, 957–964 (July 2000).
C. E. Spiekermann, B. H. Sako, and A. M.
Kabe, “Identifying Slowly Varying and
Turbulent Wind Features for Flight Loads
Analyses,” Journal of Spacecraft and Rockets, Vol. 37, No. 4, 426–433 (Aug. 2000).
P. Thomas et al., “MightySat II—On-Orbit
Lab Bench for Air Force Research Laboratory,” 12th AIAA/USU Annual Conference
on Small Satellites Proceedings (Utah State
University, Logan, Aug. 21–24, 2000).
W. M. VanLerbergher et al., “Mixing of a
Sonic Transverse Jet Injected into a Supersonic Flow," AIAA Journal, Vol. 38, No. 3,
470–479 (Mar. 2000).
R. L. Walterscheid, J. H. Hecht, et al., “Evidence of Reflection of a Long-Period Gravity Wave in Observations of the Nightglow
Over Arecibo on May 8–9, 1989,” Journal
of Geophysical Research, Vol. 105, No. D5,
6927–6934 (Mar. 16, 2000).
H. T. Yura, L. Thrane, and P. E. Anderson,
“Analysis of Optical Coherence Tomography Systems Based on the Extended Huygens-Fresnel Principle,” Journal of the Optical Society of America A, Vol. 17, No. 3,
484–490 (March 2000).
A. H. Zimmerman, “Charge Management Issues for NiH2 Batteries,” Proceedings, 18th
Annual Space Power Workshop (Torrance,
CA, April 11, 2000), pp. 1–15.
Patents
C. J. Clark, A. A. Moulthrop, M. S. Muha, C. P.
Silva, “Frequency Translating Device
Transmission Response System,” U.S.
Patent No. 6,064,694, May 2000.
A three-pair measurement method determines the amplitude and phase transmission response of frequency translating devices, including a device under test and two test devices, using a vector network analyzer and a controller, where one of the devices has reciprocal frequency-response characteristics. The measurement method provides a low-pass equivalent transmission response of the devices.
J. T. Dickey, “Microelectronic Substrate Active
Thermal Cooling Wick,” U.S. Patent No.
6,070,656, June 2000.
This device relies on the latent heat of vaporization and on differences in the coefficient of thermal expansion of materials,
which will cause a flex in the wick structure
as it is heated or cooled. Current technology
removes energy at a consistent rate from all
areas of a device. Because high heat areas
change with time in most devices, this cooling wick actively modifies the fluid flow
paths in reaction to those time-dependent
changes. This allows the heat-dissipating device to maintain more uniform, cooler temperatures. The purpose is to aid in thermal
control of high heat flux components on the
micro scale.
L. K. Herman, C. M. Heatwole, G. M. Manke,
I. M. McCain, B. T. Hamada, “Pseudo
Gyro,” U.S. Patent No. 6,020,956, Feb.
2000.
By means of software processes, a pseudo
gyro emulates mechanical gyros. Within a satellite's reference and control systems, it processes appendage measurement data and reaction-wheel tachometer data, using conservation of momentum to compute bus angular-velocity data by accounting for the momentum transfer between the satellite bus and its appendages. This technique can be useful for extending hardware gyro lifetime and/or replacing failed gyros on satellites.
R. S. Jackson, G. M. Manke, “Control System
for Counter-Oscillating Masses,” U.S.
Patent No. 6,107,770, Aug. 2000.
This control system stabilizes the flexible
body bending modes of a space, airborne, or
ground-based system, while providing angular position control of an oscillating mass
connected to a counter-oscillating counterbalance. The actuating mechanism uses two
drive motors to exert torques on the mass
and counterbalance, under the control of a
feedback controller. The controller’s two
channels generate first and second torque
command signals for the two drives. The
second channel filters out input frequencies
in a predetermined bandwidth about the frequency of the first torque command signal.
The system reduces sensitivity to flexible
body system modes and allows maximum
pointing accuracy for the payload. Applications include lidar or other similar systems.
S. H. Raghavan, J. K. Holmes, “NRZ and
Biphase-L Formatted Hexaphase Modulated GPS Transmission Method,” U.S.
Patent No. 6,075,810, June 2000.
This transmission method provides for the
simultaneous modulation of the C/A code,
the P(Y) code, and a new code, all modulating a single carrier for Global Positioning
System (GPS) use in one or both L1 and L2
bands. This allows addition of a new military signal in the same frequency band and
C/A code deniability without affecting the
new signal.
C. P. Silva and A. M. Young, "High Frequency
Anharmonic Oscillator for the Generation
of Broadband Deterministic Noise,” U.S.
Patent No. 6,127,899, Oct. 2000.
This invention provides a means to generate
very-high-frequency, broadband, chaotic
electrical signals that exhibit noise-like characteristics. The signals can be used to carry
information in the same way a modulated sinusoidal carrier is used in a conventional
AM or FM system. The oscillator is based
on a forced second-order Duffing equation
and is quite robust against the deleterious effects that normally arise in high-frequency
operation. This design provides the fundamental enabling technology needed to develop and demonstrate an operational microwave chaos-based communications link
and thus determine the unique application
contexts and features of such a system.
E. J. Simburger, “Power Sphere,” U.S. Patent
No. 6,127,621, Oct. 2000.
This invention addresses the problem of
connecting solar cells mounted on a curved
surface such as a sphere by having the cells
connected in parallel to the array power bus
through individual DC-DC converters. This
allows each solar cell to deliver all of the
power that it can produce based upon the
amount of sunlight it is actually receiving.
The DC-DC converters provide the mechanism of boosting the voltage generated by
each individual cell to a level usable by the
load connected to the solar array. This invention makes practicable solar array shapes
other than flat panels or cylinders.
R. P. Welle, “Ultrasonic Power Sensory System,” U.S. Patent No. 6,127,942, Oct. 2000.
In a coupled transducer system, a power signal sent from an external controller energizes the first transducer, which converts the
signal into an acoustic wave that is commu-
nicated through the coupling medium to the
second transducer. The acoustic wave is
transformed by the second transducer into
an electrical signal that can be converted into
useful power. The primary advantage is the
transfer of power through a coupling
medium without the use of electrical power
wires.
A. D. Yarbrough, “Micromachined Rotating Integrated Switch,” U.S. Patent No.
6,072,686, June 2000.
This is a micromachined switch and filter
combination suitable for use in high-performance, millimeter-wave receivers. It consists of an electrostatically driven switch
mechanism with a bidirectionally rotating
member having two positions for integrated
circuit connection to the input feed for
nearby filter structures. Control traces carry
electrical signals generating electrical fields
to provide electrostatic force upon the rotating member and thus turn the switch clockwise or counterclockwise.
K. Siri, “Power Converters for Multiple Input
Power Supplies,” U.S. Patent No.
6,088,250, July 2000.
This improved power converter uses two
conventional charging and discharging
switches operating in a complementary
mode and driven by a switch driver controlled by a power factor correction controller. Use of a blocking diode for blocking
cross-coupled short-circuit paths and an absorption capacitor are key design features.
Applications include switching power converters used for power factor correction in
polyphase power systems and power sharing
among distributed power sources to a common load.
S. H. Raghavan, J. K. Holmes, “NRZ and
Biphase-L Formatted Quadriphase Modulated GPS Transmission Method,” U.S.
Patent No. 6,148,022, Nov. 2000.
This transmission method provides for the
simultaneous modulation of two codes such
as the GPS C/A code and an arbitrary new
code modulating a single carrier for GPS use
in one or both L1 and L2 bands. This would
allow addition of a new military signal in the
same frequency band and C/A code deniability without affecting the new military signal.
T. M. Nguyen, J. M. Charroux, “Precoded
Gaussian Minimum Shift Keying Carrier
Tracking Loop,” U.S. Patent No. 6,148,040,
Nov. 2000.
This invention provides carrier phase tracking using data precoded GMSK signals. It is
an improved timing recovery loop offering closed-loop generation of a data timing signal at a baseband frequency. The technique improves noise rejection and provides fast data acquisition by operating at baseband.
[Diagram: The Aerospace Systems Architecting and Engineering Certificate Program. Initial awareness: Aerospace Roles in Space Systems Architecting, Acquisition, and Engineering, a 30-hour "core" course. Knowledge and skills building: Teaming for Systems Architects and Systems Engineers, a 20-hour course, followed by either the Aerospace Systems Architecting Program or the Space Systems Engineering Program, 120-hour curricula covering system synthesis; front-end planning for systems and systems-of-systems; system analysis and evaluation; and the acquisition life cycle for a single system. Skills reinforcement: mentored one-year assignments and internship/on-the-job training programs, leading to the ASAE Certificate.] Space systems engineering involves processes associated with the conceptualization, design, development, and fielding of a system, including system-interface requirements management, interdisciplinary effectiveness analysis, and independent verification and validation activities. Systems architecting is the aspect of systems engineering concerned with determining a system's purpose, formulating the concept behind the system, structuring the system, and certifying its fitness for use.
Bruce E. Gardner, Principal Director of Learning Systems for The Aerospace Institute, is responsible for directing the overall planning, development, and delivery of education/training programs and multimedia learning support resources for the corporation's employees and customers.
The Aerospace Systems Architecting and Engineering Certificate Program has been the centerpiece of The Aerospace Corporation's training curriculum since 1995. The
program, which includes architecting and
engineering tracks as well as internships
and on-the-job training, has had a major
positive impact on corporate culture and
staff technical capabilities.
The Aerospace
Architect-Engineer Role
Aerospace has always maintained a strong
national reputation as an architect-engineer
of space systems, and the corporation’s
highly skilled technical staff accounts for
much of its success. Engineers and scientists with advanced degrees and years of
experience apply their expertise and crossprogram knowledge to complex tasks,
from initial concept development to deployment and operation.
The Aerospace architect-engineer role
has evolved significantly over the last
decade, affected by the ending of the Cold
War, changing defense priorities, increasing cost-consciousness, the commercialization of space, and numerous emerging technologies.
One major impact of this evolution has
been the increasing customer need for
technical staff with heightened systems
awareness and enhanced, multidisciplinary
skills in the development of complex missions and systems.
The Certificate Program
To address this need, in 1994 the corporation created The Aerospace Institute to facilitate the development of staff members’
space-systems engineering competencies.
In 1995, the Institute piloted the Aerospace
Systems Engineering Certificate Program,
which featured more than 100 hours of
instructor-led classroom training.
The program has since been expanded to
incorporate a systems-architecting curriculum plus on-the-job training and mentoring. Today the Aerospace Systems Architecting and Engineering Certificate
Program comprises more than 300 hours of
classroom training, involving more than 50
instructors from a broad cross-section of
Aerospace engineering and program office
organizations.
The program’s purpose is to develop the
next generation of Aerospace technical leaders and program managers by enhancing
• awareness of Aerospace roles and responsibilities in national space systems
architecting, acquisition, and engineering functions and processes, and how
those roles and responsibilities relate to
those of customers and their contractors
• fundamental understanding of the
Aerospace perspectives, concepts,
methodologies, and tools associated
with system-of-systems architecting,
space-systems life-cycle engineering,
space-systems engineering management, and technical-team leadership
• basic skills related to the use of those
perspectives, concepts, methodologies,
and tools
• application of those skills to real-world,
multidisciplinary technical (and related
nontechnical) issues
Program graduates receive the Aerospace
Systems Architect-Engineer certificate, a
measure of successful career progression
within the corporation.
To ensure the program’s relevance and
quality, the Institute has formed a Systems
Architecting and Engineering Mentor
Team comprising 10 senior corporate managers and technical leaders. The team provides general oversight of the program’s
curriculum structure, content, funding, and
participation.
Curriculum
The curriculum consists of an initial awareness segment, a knowledge and skills building segment, and a skills-reinforcement
segment.
1. Initial awareness
This segment is a 30-hour core course,
Aerospace Roles in Space Systems Architecting, Acquisition, and Engineering, targeted to Aerospace technical staff and customers. Content includes the corporation’s
technical mission, philosophy, roles, capabilities, and approaches in space-systems
architecting, acquisition, and engineering.
The course orients participants to evolving DOD and NRO customer and acquisition environments, surveys major Aerospace systems architecting-engineering
analysis and program-management support tools, and highlights lessons learned
from cross-program experience. Senior
technical experts and program managers
conduct panel sessions and deliver interactive lectures.
2. Knowledge and skills building
The next segment enables technical staff to
develop strong foundational knowledge
and competencies in systems architecting-engineering. The focus is on gaining in-depth understanding of state-of-the-art
methodologies and tools and how to apply
them in their jobs.
All participants take Teaming for Systems Architecting and Systems Engineering. This three-day course develops influencing and negotiating skills for effective
participation in multiorganizational technical teams and work groups for which Aerospace may not be the formal leader.
Participants progressing beyond Teaming choose the Space Systems Engineering
Program or the Aerospace Systems Architecting Program. Each curriculum track
comprises approximately 120 classroom
hours.
Space Systems Engineering Program.
The engineering curriculum consists of four
courses that cover Aerospace analysis
methodologies and lessons learned from
the space-systems-acquisition life cycle:
• Concept Development: translation of
the customer’s statement of mission
need into the quantitative engineering
requirements
• Space Systems Design: translation of
engineering requirements into preliminary designs of the physical elements of
the space-system solution
• Space Systems Development, Integration, and Test: evolution of product
toward detailed design, fabrication, and
integration; system test approaches and
analysis methods leading to “go-ahead”
decisions for deployment
• Space Systems Operations: launch preparation and integration, satellite deployment, day-to-day operations, and mission
effectiveness assessment
Students apply the Aerospace Concept Design Center’s concurrent engineering
methodology to a case study involving the
development of a space-based theater missile defense system, and they tour a
contractor spacecraft-manufacturing facility as well as the Vandenberg Air Force
Base launch and processing facility.
Aerospace Systems Architecting Program. The architecting curriculum was developed in response to growing customer
demand for support in initial planning and
synthesis of architectural designs for large-scale systems-of-systems in which the
space segment may be only one component. Aerospace staff must work effectively
with these customers to
• determine the real purpose for which an
architectural solution is sought
• develop measures of effectiveness and
utility for the architecture
• develop technically feasible architectural solutions that are satisfactory to all
stakeholders
To teach participants the skills needed to
accomplish these objectives, the architecting curriculum uses an interactive sequence
of lectures and integrated case studies.
Space Systems Engineering Program. [Diagram: course flow from mission need and critical parameters through Concept Development (system concept design, system requirements, architecture alternatives, mission effectiveness), Space Systems Design (ground systems, spacecraft bus, support subsystems, payload design), Space Systems Development, Integration, and Test (parts, materials, assemblies; environmental test; subsystem/software test; test and evaluation plan; test requirements; SRR, PDR, CDR, and ship milestones), and Space Systems Operations (operational effectiveness; track, telemetry, control; requirements generation; user services; operational acceptability).] The courses convey an ideal acquisition process from concept through on-orbit operations. The material is presented sequentially, but its application is cyclic and iterative: within any phase, movement is forward and backward as concepts, systems, and products are defined and refined.
The
architecting program also familiarizes participants with defense-community architectural description frameworks (such as
C4ISR) and develops participants’ skills in
identifying and using Aerospace corporate
systems-architecting resources.
3. Skills reinforcement
The final segment is an on-the-job-training
and internship program where participants
apply principles learned in the classroom.
They complete a mentored assignment that
consists of at least two major program systems architecting-engineering support
tasks. The assignment lasts a minimum of
one year. Afterward, participants provide a
formal briefing on accomplishments and
lessons learned to the mentor team and
other corporate executives.
Program Successes
Since its inception, the certificate program
has successfully strengthened the skills of
the Aerospace technical staff in systems
architecting and engineering. More than
30 percent of the technical staff have participated substantially in the program.
Customer personnel have also attended the
core course.
The program has had a clear beneficial
impact on Aerospace program-support capabilities and effectiveness. Graduates
successfully apply course concepts, lessons learned, and course materials in
working with customers.
The design and development of this curriculum have led to the creation and
enhancement of several major architecture-engineering support tools and methodologies. Over 10 volumes of well-organized, high-quality course lecture notes and case-study examples have been developed by
nationally known senior Aerospace technical experts.
Finally, graduates have enhanced their
networking and career-development opportunities. As a result of interactions with instructors and fellow students, they have
made significant improvements in their ability to network with colleagues in other organizations and to gain access to relevant corporate expertise. Some certificate recipients have reported that participation has led to a new position or expanded job function.
Aerospace Systems Architecting Program. [Diagram: the Aerospace Systems Architecting Methodology, a flexible, iterative method that moves from a complex problem, with its political and economic realities, diverse needs, existing processes, and raw needs and constraints, through problem structuring, purpose analysis, and system structuring to a candidate system and a satisfactory solution; illustrated by a multistatic air-vehicle surveillance example in which illuminating spacecraft, receiving spacecraft, and receiving UAVs provide global coverage and defeat stealth and low-observable cruise missiles.] The curriculum uses lectures and case studies to teach participants the Aerospace Systems Architecting Methodology in dealing with complex, unstructured scenarios. Mark Maier of the Aerospace Engineering and Technology Group developed the methodology.
What’s Next?
A third knowledge and skills building curriculum, the Space Systems Acquisition
Management Program, will be piloted during fiscal year 2001. Attendees will include
both Aerospace technical staff and Space
and Missile Systems Center personnel.
A new multimedia learning support tool,
SEEK (Systems Engineering Educational
Knowledge), will provide on-line access
by the entire Aerospace technical staff to
the full range of certificate-program course
materials, supporting technical reports, and
customer acquisition process information.
This tool, which features a user-friendly
database-search capability to obtain critical
information on specific systems architecting-engineering topics of interest, will be deployed in the near future.
The certificate program’s success has
helped create and promote a highly favorable climate for the development of improved systems architecting-engineering
awareness and competencies throughout
Aerospace. Strong corporationwide interest and participation in the program are expected to continue.
Crosslink
Winter 2000/2001 Vol. 2 No. 1
Editor in Chief
Donna J. Born
Editor
David A. Bearden
Managing Editor
Jon Jackoway
Art Director
Thomas C. Hamilton
Illustrator
John A. Hoyem
Photographers
Eric Hamburg
Mike Morales
Editorial Board
William C. Krenz, Chairman
David A. Bearden
Harlan F. Bittner
Donna J. Born
Linda F. Brill
David J. Evans
Isaac Ghozeil
David J. Gorney
Linda F. Halle
Michael R. Hilton
John P. Hurrell
Mark W. Maier
John W. Murdock
Mabel R. Oshiro
Frederic M. Pollack
Board of Trustees
Bradford W. Parkinson, Chair
Howell M. Estes III, Vice Chair
E. C. Aldridge, Jr.
William F. Ballhaus, Jr.
Richard E. Balzhiser
Guion S. Bluford, Jr.
Daniel E. Hastings
Jimmie D. Hill
John A. McLuckey
Thomas S. Moorman, Jr.
Ruth L. Novak
Ann C. Petersen
Robert R. Shannon
Donald W. Shepperd
Jeffrey H. Smith
K. Anne Street
John H. Tilelli, Jr.
Robert S. Walker
Corporate Officers
E. C. Aldridge, Jr., Chief Executive Officer
William F. Ballhaus, Jr., President
Michael J. Daugherty, Executive Vice President
Jon H. Bryson
Stephen E. Burrin
Marlene M. Dennis
Rodney C. Gibson
Lawrence T. Greenberg
Gordon J. Louttit
John R. Parsons
Joe M. Straus
Dale E. Wallis
John F. Willacker
The Aerospace Corporation
P.O. Box 92957
Los Angeles, CA 90009-2957
Copyright © 2001 The Aerospace Corporation. All rights reserved. Permission to copy or
reprint is not required, but appropriate credit must be given to The Aerospace Corporation.
Crosslink (ISSN 1527-5264) is published by The Aerospace Corporation, an independent,
nonprofit corporation dedicated to providing objective technical analyses and assessments
for military, civil, and commercial space programs. Founded in 1960, the corporation operates a federally funded research and development center specializing in space systems architecture, engineering, planning, analysis, and research, predominantly for programs managed
by the Air Force Space and Missile Systems Center and the National Reconnaissance Office.
For more information about Aerospace, visit www.aero.org or write to Corporate Communications, P.O. Box 92957, M1-447, Los Angeles, CA 90009-2957.
For questions about Crosslink, send email to crosslink@aero.org or write to The Aerospace Press, P.O. Box 92957, Los Angeles, CA 90009-2957. Visit Crosslink online at
www.aero.org/publications/.