“737-Cabriolet”: The Limits of Knowledge and the Sociology of Inevitable Failure1
John Downer
Stanford University
This article looks at the fateful 1988 fuselage failure of Aloha Airlines
Flight 243 to suggest and illustrate a new perspective on the sociology of technological accidents. Drawing on core insights from the
sociology of scientific knowledge, it highlights, and then challenges,
a fundamental principle underlying our understanding of technological risk: a realist epistemology that tacitly assumes that technological knowledge is objectively knowable and that “failures” always connote “errors” that are, in principle, foreseeable. From here,
it suggests a new conceptual tool by proposing a novel category of
man-made calamity: the “epistemic accident,” grounded in a constructivist understanding of knowledge. It concludes by exploring
the implications of epistemic accidents and a constructivist approach
to failure, sketching their relationship to broader issues concerning
technology and society, and reexamining conventional ideas about
technology, accountability, and governance.
If you can meet with Triumph and Disaster
And treat those two impostors just the same . . .
—Rudyard Kipling
INTRODUCTION
“The more human beings proceed by plan, the more effectively they may
be hit by accident.” Or so wrote Friedrich Dürrenmatt, the late Swiss
playwright. The line is memorable for its counterintuitiveness. Modern
societies invest heavily in the belief that good plans will protect them from accidents. Nowhere is this more true than with complex and potentially dangerous systems, such as nuclear reactors and civil aircraft.

1 The author would like to thank Charles Perrow, Trevor Pinch, Bridget Hutter, Rebecca Slayton, Anna Phillips, Jeanette Hoffman, David Demortian, Terry Drinkard, Carl Macrae, Pablo Schyfter, and the AJS reviewers for their many insights and helpful comments on this article, which has been a long time in its development. All its failings, of course, belong to the author alone. Direct correspondence to John Downer, Center for International Security and Cooperation, Stanford University, Stanford, California 94305. E-mail: jrd22@cornell.edu
Such technologies cannot be allowed to fail, and so we plan them meticulously and invest enormous effort in testing and analyzing those plans.
Then, when accidents come, as they invariably do, we revisit our drawing
boards to find the flaws in our blueprints. More significantly (at least from
a sociological perspective), we also look beyond the drawing boards to
reflect on the practices that surrounded (and constituted) the system that
failed. For, as Hutter and Power (2005, p. 1) put it, there is a widespread
public and academic understanding that accidents are, in an important
sense, organized. This is to say that accidents are, to some degree, allowed
to happen, albeit unintentionally: they slip through our defenses, evincing
deficiencies in our organizational practices.
Just as we believe that blueprints can be perfected, therefore, so we
hold that practices are themselves perfectible. If accidents are organized,
we imagine, then it should be possible, at least in principle, for us to
organize them out of existence. This view—that accidents are symptoms
of social and technical problems that are amenable to solutions—is central
to what Jasanoff (1994; 2005, p. 11) calls our “civic epistemology” of
technological risk. It limits the extent to which disasters shape public
attitudes toward specific technologies (and toward technologies in general)
by enabling a public ritual whereby experts find, and then (appear to)
remedy, the underlying problems that caused those disasters to occur:
continually reestablishing, in the aftermath of failures, our mastery of
practices and, hence, our ability to govern an ever-expanding domain of
increasingly consequential technologies.
Accidents, in other words, have become “sociological” as well as “engineering” problems. Few observers would hold that social theory might
one day “solve” disasters. Most would agree that the capriciousness of
human nature and vagaries of the sociological endeavor make “perfect”
organizational practices unrealizable. At the same time, however, most
suggest that organizational problems should be solvable in principle if
not in practice, such that accidents would be avoidable if it were, in fact,
possible to perfect our organizational practices. Only one group—the proponents of normal accident theory (NAT)—rejects this claim. They outline
an explanation for why some organizational flaws cannot be remedied,
even in theory, and in doing so offer an important counterpoint to the
perception of accidents as symptoms of solvable problems.
This article will illustrate this often misconstrued argument and then
go further: drawing on the “sociology of scientific knowledge” literature
to propose an alternative explanation for why some system failures are
fundamentally impossible to “organize away.” To illustrate this argument,
it will explore a well-known aviation accident—the 1988 near-crash of
Aloha Airlines Flight 243. This accident, it will conclude, exemplifies a
useful new sociological category of disaster: one that reflects a “constructivist” understanding of failure and has far-reaching implications for our
understanding of engineering and technological risk.
To break this down into sections: The article begins by outlining the
dramatic final flight of Aloha Airlines Flight 243 and its subsequent
investigation by the U.S. National Transportation Safety Board (NTSB).
It then draws on this accident to outline and illustrate the social study
of the causes of man-made accidents, focusing on NAT, which it argues
to be distinctive within this broader field. The article then introduces the
sociology of scientific knowledge and invokes it to contest a core premise
of previous accident studies. More specifically, it challenges their implicit
“rational-philosophical model” of engineering knowledge, which assumes
that engineering facts are, in principle, objectively “knowable.” This argument is then illustrated and elaborated by revisiting Aloha 243 in more
depth: examining the NTSB’s conclusions in light of the sociology of
knowledge and exploring the accident’s “engineering-level” causes in more
detail. The article then suggests a new category of inevitable disaster: the
“epistemic accident,” and compares its properties to those of normal accidents to illustrate how the former offers a new perspective with implications that are distinct from those of NAT. It concludes with a discussion
of how a constructivist understanding of failure offers new perspectives
on sociological narratives about technological risk, regulation, and disaster.
And so to 1988.
ALOHA 243
On April 28, 1988, Aloha Airlines Flight 243, a passenger-laden Boeing
737, left Hilo Airport on a short hop between Hawaiian islands, climbed
to an altitude of 24,000 feet, and failed spectacularly. The pilots would
later report a violent lurch followed by a tremendous “whooshing” of air
that ripped the cabin door from its hinges, affording the copilot a brief
(but surely memorable) glance of blue sky where she expected to see First
Class. A closer look, had there been time, would have revealed passengers
silhouetted absurdly against the emptiness, still strapped to their seats
but no longer surrounded by an airplane fuselage: all of them perched
on a nightmarish roller coaster, hurtling exposed through the sky at hundreds of miles per hour with the ocean far below.
Amidst the noise, terror, and chaos the pilots found they were still in
control of the aircraft. And so—unable to communicate with air traffic
control over the combined howling of wild winds and perturbed passengers—they set the emergency transponder and landed as abruptly as they dared at nearby Kahului Airport (NTSB 1989, p. 2).

Fig. 1.—Aloha Airlines Flight 243 (source: Hawaii State Archives)
The airplane’s condition when it arrived at Kahului astonished the
aviation community and immediately entered the annals of engineering
folklore. An 18-foot fuselage section, 35 square meters, had completely
torn away from the airframe, like the top from a sardine can, severing
major structural beams and control cables and exposing passengers to the
sky on all sides (see fig. 1). Fierce winds and flying debris had injured
many of those on board, but by grace and safety belts, only one person
was lost: senior flight attendant Clarabelle Lansing, who disappeared into
the void as the fuselage tore, never to be seen again.2 The airframe itself
was terminal, however: never before or since has a large civil aircraft
survived such a colossal insult to its structural integrity. Even a quarter
century later it would still be widely remembered by aviation engineers,
who, with the dark humor of any profession that works routinely in the
shadow of disaster, privately refer to it as the “737-cabriolet.”
The NTSB promptly launched an investigation to determine the failure’s cause. When its report appeared the following year, it inculpated a
fateful combination of stress fractures, corrosion, and metal fatigue.
2 Sixty-five of 90 passengers and crew were injured, eight seriously. The event would later be dramatized in a 1990 television movie, Miracle Landing, starring Wayne Rogers and Connie Sellecca as the pilots and Nancy Kwan as the star-crossed Clarabelle.
Boeing built the 737’s aluminum skin from layered sheets that had to be
bonded together. This was a difficult process. In the early 737s (of which
Aloha was one) this bonding relied on an epoxy glue that came on a
“scrim” tape. Assembly workers had to keep the tape refrigerated until it
was in place, to suspend the glue’s adhesive reaction, and then “cure” it
by letting it warm. If the tape was too cool when it was applied, then it
gathered condensation that impaired proper adhesion. If it warmed too
quickly, then it began to cure too early, again impairing adhesion. Even
under Goldilocks conditions, the epoxy would sometimes bind to oxide
on the surface of the aluminum rather than to the metal itself, another
difficult issue (Aubury 1992). It was relatively common for the bonding
to be less than perfect, therefore, and the NTSB concluded that this lay
at the heart of the failure. Imperfect bonding in Aloha’s fuselage, the
report claimed, had allowed salt water to creep between the sheets, corroding the metal panels and forcing them apart. This had stressed the
rivets binding the fuselage together and fostered fatigue cracks in the
sheets themselves, eventually causing the entire structure to fail.
But the causes of accidents are invariably nested, as Jasanoff (2005)
illustrates, and the “disbonding” explanation raised as many questions as
it answered. Structural failures of this magnitude were not supposed to
happen so suddenly, whatever the underlying cause. The aviation world
had long understood that some early 737 fuselages had bonding imperfections and that this induced skin cracking, but it also understood that
fatigue cracks progressed gradually enough for routine inspections to raise
alarms long before a fuselage could even come close to rupturing.
Aloha’s maintenance offers a viable level of explanation, therefore, and
this is where the NTSB eventually rested. The convention in U.S. accident
reports is to apportion blame (Galison 2000). Accordingly, the NTSB
blamed the accident on a “deficient” maintenance program (NTSB 1989,
sec. 2.3), which proved to be “messier” than its idealized public portrayal.
Yet sociologists have long noted that close examinations of technological
practice invariably reveal such discontinuities (Wynne 1988), even in
tightly controlled spheres such as aviation (Langewiesche 1998).
This is not to say, of course, that some organizations are not more
rigorous in their practices than others or that there is not a degree of
“messiness” at which an organization becomes culpable. It should, however, frame the airline’s insistence, in the wake of the report, that the
NTSB gave a false impression that its maintenance practices were untypical (Cushman 1989). Certainly nobody contested that the airplane was
compliant with all the relevant Federal Aviation Administration (FAA)
mandates, and it is perhaps telling that the report led to few sanctions
on the airline.
Even if the NTSB’s “maintenance as cause” explanation was compelling, however, there would be many observers who would want to look
beyond it—to treat it as a phenomenon with its own cause rather than
a cause in itself. For a more satisfying account of Aloha, therefore, we
might look beyond the NTSB report and search for further levels of
explanation by drawing on the sociology of disaster. This literature suggests that an ultimate degree of causation might be found by locating the
airplane’s failures and the airline’s failings in wider social and organizational contexts. It is to such arguments that I now turn.
THE SOCIAL CAUSES OF DISASTER
Until the late 1970s, the question of what caused technological failures
like Aloha belonged almost exclusively to engineers. By the turn of the
decade, however, social scientists had begun to recognize that such accidents had social and organizational dimensions. The British sociologist
Barry Turner surveyed a series of “man-made disasters” and noted that
they were invariably preceded by discrepancies and unheeded warnings
that, if recognized and acted on, would have averted any misadventure
(Turner 1976, 1978). This realization—that experts were overlooking what
appeared, in hindsight, to be clear and intelligible warning signals—posed
the question why? This question was revolutionary. It made accidents a
legitimate object of social inquiry by recasting them as “social” rather
than “technical” problems. Turner started thinking of accidents as “failures
of foresight” and began to explore the organizational underpinnings of
technological failures.
Turner’s core insight was compelling, and in his wake, social scientists
from a range of disciplines—organizational sociologists, psychologists, political scientists, and others—began scrutinizing system failures to identify
their underlying social causes. A group centered around the University
of California at Berkeley, for instance, pioneered an approach now known
as high reliability theory (HRT), which looked for the distinctive characteristics of systems—such as aircraft carriers and air traffic control
centers—that consistently performed reliably in demanding circumstances
(and, hence, for the hallmarks of systems likely to fail; e.g., La Porte 1982;
Rochlin, La Porte, and Roberts 1987; Weick 1987; La Porte and Consolini
1991; Roberts 1993). Similarly, psychologists such as Reason (1990) and
Dörner (1996) began searching for auguries of disaster by isolating the
conditions that foster human error. In a different vein, Vaughan (1996)
notably drew on the sociology of relationships to frame “failures of foresight” in terms of the incremental acceptance of warning signals: what
she called the “normalization of deviance.”
These scholars—together with a constellation of others (e.g., Gephart
1984; Snook 2000; Beamish 2002)3—have contributed, in varying ways,
to the search for structural shortcomings in the models and practices
through which organizations govern complex systems. Their insights have
thus helped to frame what Vaughan (2005, p. 63) calls our “technologies
of control.”
Although the above is a very brief overview of an extremely rich and
varied literature, most of this literature shares an underlying but foundational premise: Turner’s idea that the proximate causes of technological
accidents are a product of underlying social causes. This premise is significant in a way that, for the purposes of this article, transcends the
differences between these approaches. It implies a commitment to the idea
that accidents are fundamentally tractable problems at an organizational
level. Few, if any, of the scholars above would argue that we are likely
to expunge accidents altogether, if for no other reason than that this would
demand an unrealistic level of discipline; but their arguments do suggest
that—in an ideal (or “Brave New”) world, where we have perfected our
technologies of control—all accidents could, in principle, be avoided. They
suggest, in other words, that all accidents can ultimately be explained by
a “flaw,” at either an individual or (in most cases) a systematic level, which
is waiting to be found and, it is hoped, remedied.
One major approach breaks with Turner’s premise, however. It argues
that the causes of some accidents resist any search for socio-organizational
flaws, such that certain accidents are an inevitable property of some systems and would be even in a “sociologically ideal” world. This is NAT,
and although it is widely invoked and will be familiar to many readers,
it is worth examining in more detail for two reasons. First, its essential
properties and ramifications are often misconstrued. Second, a detailed
breakdown of those properties and ramifications will help frame and
contextualize the discussion of Aloha that follows.
Normal Accidents
NAT was first proposed by the Yale sociologist Charles Perrow and expounded in his book Normal Accidents (1984; 2d ed. 1999). He describes
it as a “first attempt at a ‘structural’ analysis of risky systems” and an
effort to “chart the world of organized systems, predicting which will be
prone to accidents and which will not” (1999, p. 62). It is a major line of
sociological inquiry into the causes of disaster, yet it is a signal exception
to the “Turnerian” tradition because it suggests that some accidents—
“normal accidents” (or “system accidents”)—result from failures that occur
without meaningful errors, such that their causes resist organizational explanations.4 Uniquely, NAT suggests that some failures arise from the very fabric of the systems themselves: less a flaw than an emergent property of systems.

3 This is, by necessity, an extremely brief overview of what is a rich and diverse project, with different approaches each offering unique perspectives.
The theory rests on the observation that no complex system can function
without small irregularities, that is, with absolutely no deviation from
platonically perfect operation. Each must tolerate minor aberrations and
small component failures: spilt milk, small sparks, blown fuses, misplaced
luggage, stuck valves, obscured warning lights, and so on. Engineers
might not like these events, but eliminating them entirely would be impracticable, and so engineers construe them as a fundamental design
premise. When engineers speak of “failure-free systems” (which they do),
they are not speaking of systems that never have faulty valves or warning
lights; they are referring to systems that have tolerances, redundancies,
and fail-safe elements that accommodate the many vagaries of normal
technological operation.
Perrow realized that these seemingly insignificant events and noncritical
failures sometimes interact in unexpected ways that circumvent even the
very best designs and practices to cause systemwide failures. These fateful
interactions are his normal accidents.
Perrow argues, for instance, that the 1979 near catastrophe at Three
Mile Island (TMI) exemplifies this phenomenon. By his account, the incident began when leaking moisture from a blocked filter inadvertently
tripped valves controlling the flow of cold water into the plant’s cooling
system. Redundant backup valves should have intervened but were inexplicably closed, which would have been clear from an indicator light,
but the light was obscured by a tag hanging from a switch above. A
tertiary line of technological defense—a relief valve—should have opened
but did not, while a malfunctioning indicator light erroneously indicated
that it had. None of these failures were particularly noteworthy in themselves, but complexity colluded with cruel coincidence and the reactor’s
controllers understandably struggled to comprehend its condition in time
to prevent a catastrophic meltdown (Perrow 1999).
The odds against such Hollywood-esque scenarios—where a succession
of small events in perfect synchrony cascade inexorably toward an unforeseeable disaster—are astronomical: on the order of billions to one and
far too insignificant,5 in isolation, to affect an engineering calculation. Crucially, however, where potentially dangerous systems have millions of interacting parts that allow for billions of unexpected events and interactions, “impossible” billion-to-one coincidences are only to be expected (i.e., they are “normal”). (As Churchill once wrote in respect to a naval mishap, the terrible “ifs” have a way of accumulating.)

4 It is true that many normal accidents do have social components, but—vitally—NAT is equally applicable without them. “Perhaps the most original aspect of the analysis,” as Perrow puts it, “is that it focuses on the properties of systems themselves, rather than on the errors that owners, designers, and operators make in running them” (1999, p. 63).
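The arithmetic behind this syllogism can be sketched in a line. The figures below are purely illustrative and assume, unrealistically, that the opportunities are independent; they are not drawn from Perrow or from any engineering analysis. If a system presents N independent opportunities for a coincidence, each with probability p, then the chance that at least one occurs is

\[
P(\text{at least one}) = 1 - (1 - p)^{N}; \qquad \text{with } p = 10^{-9},\ N = 10^{9}: \quad 1 - \bigl(1 - 10^{-9}\bigr)^{10^{9}} \approx 1 - e^{-1} \approx 0.63 .
\]

A “billion-to-one” event, in other words, becomes roughly an even bet once a system generates on the order of a billion chances for it to happen.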
This, in its most basic form, is NAT’s central insight: that trivial but
irrepressible irregularities—the background static of normal technological
practice—when taken together have inherent catastrophic potential. Underlying it is a deceptively simple syllogism: that fateful (and fatal) “one-in-a-billion” coincidences are statistically probable in systems where there
are billions of opportunities for them to occur. Every significant property
of normal accidents, and every wider implication of NAT, follows from
this. Among the most significant of these properties are the following.
1. Normal accidents are unforeseeable and unavoidable.—Perhaps the
most consequential property of normal accidents is that they are impossible to predict. As explained, the events that combine to trigger them
are not unusual enough, in themselves, to constitute a “signal” or “deviance” that is distinguishable from the innumerable other idiosyncrasies
of a typically functional technological system.6 Given that normal accidents are billion-to-one coincidences, in other words, it is impossible to
predict which anomalies and interactions will be involved. So even
though, with hindsight, operators might have controlled the specific factors that contributed to any specific normal accident, they cannot control
these accidents in advance, no matter how rigorous their practices. Normal
accidents, in other words, occupy a blind spot in our “technologies of
control,” and this makes them unavoidable. “The Normal Accident cannot
be prevented,” writes Perrow (1982, p. 176). Hopkins (1999) criticizes NAT
for offering little insight into accident prevention, but herein lies its central
contribution: it forces analysts to recognize, a priori, that not all accidents
can be organized away. The only absolute and inviolable defense against
normal accidents is to avoid building certain kinds of systems altogether.
2. Normal accidents are more likely in “tightly coupled,” “complex”
systems.—NAT offers two yardsticks by which to assess the vulnerability
of systems to normal accidents: their “complexity” and the degree to which
they are “coupled.” “Complexity” here is a measure of the number of
elements in a system and their relationship to each other. Complex systems,
by Perrow’s definition, have many interacting elements (so a sheet of
advanced composite material is “simple” even though it might be very
sophisticated), and those elements interact in ways that are nonlinear and difficult to map completely (so an assembly line is “simple” despite its many elements, because their interactions are straightforward and readily visible).7 “Coupling,” meanwhile, is a measure of the “slack” in a system. A system is “tightly coupled” if interactions between its elements happen rapidly, automatically, and inflexibly, with little room for human intervention. “Loosely coupled” systems, by contrast, interact more slowly and flexibly, offering both time and opportunity for intervention.

5 This is a purely rhetorical number and does not represent a calculation by the author.

6 Hindsight might explain the injury that foresight would have prevented, but hindsight, as the proverb goes, is “a comb the world gives to men only after they are bald.”
Both of these measures are derived from the logic of statistical coincidence and its relationship to the organization of a system. Where a
system has more elements and more connections between them, it follows
that more interactions are possible, and there will be more coincidences.
And where there is less slack in a system, it follows that coincidences are
more likely to be dangerous. Both high complexity and tight coupling
contribute to normal accidents, therefore, and a grid with each on different
axes offers a simple rubric for estimating their likelihood.
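One way to see why complexity compounds so quickly (a back-of-the-envelope illustration of the logic above, not a calculation Perrow himself offers) is to count the possible pairwise interactions among n components:

\[
\binom{n}{2} = \frac{n(n-1)}{2}; \qquad \binom{100}{2} = 4{,}950, \quad \binom{1000}{2} = 499{,}500 .
\]

Increasing the number of elements tenfold thus multiplies the possible pairings roughly a hundredfold, and this counts only pairs; chains involving three or more elements multiply the possibilities further.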
The remaining characteristics of normal accidents similarly follow from
understanding them as “perfect storms” of otherwise insignificant events
but are less explicit in Perrow’s writing and less explored in the literature
more generally. They can be outlined relatively simply.
3. Normal accidents are unlikely to reoccur.—That is to say, “specific”
normal accidents are highly unlikely to reoccur. Where there are a billion
possible “billion-to-one events” that can instigate an accident, it is logical
to anticipate an accident but not the same accident twice. The exact
confluence of faulty valves, miswired warning lights, and errant tags
implicated in TMI might never realign if the same plant ran, unaltered,
for 10,000 (“groundhog”) years.
4. Normal accidents rarely challenge established knowledge.—Normal
accidents do not challenge common engineering understandings and theories about the world. The reason is that the errors that combine to
produce normal accidents are rarely surprising in themselves. A stuck
valve usually reveals little about valves, for example, and would do almost
nothing to challenge the engineering underlying their design. Valves sometimes stick; this is why they come with warning lights and redundancies.
This leads to the final point.
5. Normal accidents are not edifying.—A close corollary of point 4—
that normal accidents do not challenge engineers’ understandings of the
world—is that they have minimal heuristic value: engineers have little to
learn from them. Insofar as normal accidents offer lessons at all, it is
always the same lesson: that unpredictable, but disastrous, confluences of errors are likely to occur in complex, tightly coupled systems.

7 The interactions in linear systems, Perrow explains, are “expected,” “familiar,” and “quite visible even if unplanned,” whereas those in complex systems are “unfamiliar,” “unexpected,” and “either not visible or not immediately comprehensible” (1984, p. 78).
The properties of normal accidents, outlined above, set them apart from
other accidents, which, as discussed, are construed as the product of socially embedded errors that are—in principle, if not always in practice—
predictable, preventable, and edifying. This disparity leads some theorists
(e.g., Sagan 1993) to argue that NAT is antithetical to other approaches.
The dichotomy is slightly misleading, however. NAT is ontologically different from other perspectives, as argued above, and drawing contrasts
can be instructive, as Sagan shows; yet there is room for the different
approaches to coexist. Perrow does not suggest that all accidents are
normal accidents. Indeed, he argues that normal accidents are not the
norm (the term “normal” connoting “inevitability” rather than “commonness”). He credits TMI as a normal accident, for instance, but suggests
that other signal disasters—Challenger, Bhopal, Chernobyl—were not.
NAT has limited scope, therefore, and need not detract from the efforts
of others to interrogate, theorize, and prevent nonnormal accidents.8
Rather than viewing NAT’s restricted scope as a “major limitation,”
as Hopkins (1999, p. 95) does, therefore, we might say that NAT delineates
a conceptually important, yet limited, category of accidents: a category
that cannot be traced back to organizational errors and that we cannot
ever hope to “organize away.”
Managerial Tension
Some accident theorists would bring NAT into the “Turnerian” fold of
“organized disasters” by construing normal accidents as the product of
an organizational irony. This reading is certainly congruent with Perrow’s
text. It rests on his argument that complex and tightly coupled systems
call for contradictory management styles. Complex systems require decentralized management control (because it takes local knowledge and
close expertise to monitor and understand the interactions of a diverse
network of subsystems), whereas tightly coupled systems require centralized control (because in systems in which failures propagate quickly
those failures must be controlled quickly, which requires strict coordination and “unquestioned obedience”; Perrow 1999, pp. 331–35). This
managerial “push-me-pull-you,” the argument goes, inevitably creates a
tension, and this tension is sometimes understood as the core of NAT.
8 In fact, few social theorists, when pushed, contest the logic of NAT or the existence of normal accidents. Instead, debates tend to revolve around the scope and prevalence of normal accidents, the utility of NAT, and its sociopolitical ramifications (e.g., La Porte 1994; Perrow 1994; Hopkins 1999).
By this construal, NAT is not an exception to the Turnerian fold. For,
if normal accidents ultimately stem from an organizational tension, then
sociologists can still paint them as organizational failures and invoke this
logic to contest their inevitability. Indeed, this understanding of NAT
practically challenges sociologists to identify an organizational framework
that would reconcile the competing demands of complexity and coupling—by accommodating both centralization and decentralization at different levels, for instance—and thereby “solve” normal accidents. (In fact,
some of the work of the high reliability theorists—outlined above—can be
construed in this light; e.g., Rochlin et al. 1987; Rochlin 1989; La Porte
and Consolini 1991.)9
This understanding of NAT is misleading, however. The “centralized-decentralized” management tension, although useful and insightful, is not
the underlying core of NAT. Perrow undeniably makes the argument (and
it is likely to have been the inspiration for NAT, as it follows logically
from his earlier work on “contingency theory”), but it is clear that it should
be subordinate to the main thesis, which ultimately rests on deeper foundations: the probabilistic syllogism outlined above.
To understand why this must be the case, it helps to understand that
even if sociologists could devise management schemata that reconcile the
organizational demands of complex, tightly coupled systems, these systems
would still be susceptible to unfortunate one-in-a-billion coincidences. It
may be true that the frequency of normal accidents would decline in these
circumstances, but no system could ever hope to be immune to every
theoretical combination of improbable contingencies. The core premise
and conclusion of NAT are both outlined in the opening pages of Normal
Accidents and are unequivocal. They are that billion-to-one events are
“likely” when there are billions of opportunities for them to occur. So
disasters are an inevitable property of certain technological systems, no
matter how well managed. It is, as Hopkins (2001, p. 65) attests, “an
unashamedly . . . determinist argument.”
Nonnormal Accidents
Normal accidents are inevitable, therefore, but limited. It is important to
understand that all social theorists who investigate the causes of disasters,
Perrow included, implicitly hold that the causes of all “nonnormal” accidents are rooted in social, psychological, and organizational flaws or
errors.
Significantly, for the purposes of this article, these nonnormal accidents
include those caused by single, catastrophic technological faults (rather than a combination of faults across a system).

9 Indeed, the HRT literature is often portrayed as antithetical to NAT (e.g., Sagan 1993).

Perrow calls these “Component Failure Accidents” (1999, pp. 70–71) and holds, as others do, that
they are ultimately caused by institutional shortcomings. If engineers are
perfectly rigorous with their tests, thorough with their inspections, and
assiduous with their measurements, the logic goes, then such accidents
should not happen. Just as these failures lead engineers to search for flaws
in their blueprints, therefore, so they lead social theorists to scrutinize
engineering practices in search of the organizational deficiencies that allowed them to occur (and of organizational remedies that might prevent
them from returning).
By this view, therefore, Aloha 243 should not have happened, because
Aloha, by any account, was not a normal accident. It was the failure of
a single major structural element, the fuselage, from a known cause,
fatigue, not a confluence of trivial errors compounding each other and
cascading confusingly toward disaster. In Perrovian terms, it was a component failure accident, and so it should fall outside NAT’s bounded
category of “inevitable” accidents. Simply put, all existing social perspectives on accidents would suggest that Aloha was a product of errors, which
ultimately had social or organizational roots, and which might, in principle, have been “organized out of existence.”
The remainder of this article will suggest otherwise. It will outline a
perspective on accidents that suggests that some fateful technological
faults are not “normal” in the Perrovian sense and, yet, are not an avoidable product of social or organizational shortcomings. In other words, it
will suggest that some nonnormal accidents are also inevitable. By drawing on insights from the sociology of knowledge, it will argue that all
current sociological models of accidents, NAT included, subscribe to a
conception of knowledge that is at odds with contemporary epistemology
and that distorts their perception of the causes and implications of Aloha
243.
THE EPISTEMOLOGY OF FAILURE
If an airplane fuselage ruptures because it responds to fatigue differently
than its designers anticipated, then it is easy to interpret it as an engineering error. The reason is that engineers, it is believed, work in an
empirical realm of measurable facts that are knowable, binary, true or
false, and ontologically distinct. So when the facts are wrong, this wrongness, and any disaster it contributes to, can be viewed as a methodological,
organizational, or even moral failing: one that proper engineering discipline should have avoided (hence the moral dimension) and one that social
theorists might understand (and one day prevent) by exploring its social
context.
This picture of facts is rooted in what the sociologist Harry Collins
(1985; 1992, p. 185) has called the “canonical rational-philosophical model”
of expert knowledge. It construes engineering as a process governed by
formal rules and objective algorithms that promise incontrovertible, reproducible, and value-free truths grounded in measurements and deduction rather than expert opinion. Porter (1995, p. 4) notes the prevalence
of this belief and calls it “the ideal of mechanical objectivity.” It is intuitive,
convincing, and entirely rejected by most epistemologists.
The prevalence of this model is reflected in the fact that centuries of
Western philosophy firmly believed erroneous knowledge claims to be
fundamentally and recognizably distinct from those that are true such
that perfect tests and calculations should be able to eliminate errors entirely. Logical philosophers—from Francis Bacon through William Whewell to Karl Popper and beyond—assiduously honed the “scientific method”
to ensure that it led ineluctably toward truth: fiercely debating the nature
of proof and the foundations of evidence.
Yet the several hundred years that separate Bacon from Popper speak
to the elusiveness of the “ideal experiment,” “perfect proof,” and “indubitable fact.” Indeed, beginning with Wittgenstein’s later work, logical
philosophers started rejecting the entire enterprise. Philosophers such as
David Bloor (1976) and philosophically minded historians such as Thomas
Kuhn (1962) began to argue that no facts—even scientific or technological
ones—are, or could be, completely and unambiguously determined by
logic or experiment alone. Their argument was not that there are no deep
ontological truths about the world, as many critics claimed, but simply
that actors can never access those truths directly because their understanding is filtered through interpretations, theories, and preconceptions
that undermine appeals to “objectivity” and add an indelible uncertainty
to all knowledge claims.10 Today, few doubt that the canonical rational-philosophical model of scientific knowledge is inadequate for explaining
science in action and the “facts” it produces.
Although still contested by some, the idea that all “truths” are inescapably unproven and contingent—albeit to limited and varying degrees—has become the new orthodoxy. Many refer to this as a “constructivist” (as opposed to “positivist,” “realist,” or “objectivist”) understanding
of knowledge. “Constructivism” is a loaded term, with varying interpretations, but at its most basic level, it holds that, from the perspective of
the actors who define them, “true” beliefs can be ontologically indistinguishable from beliefs that are “false.”

10 For a more detailed discussion of this point, see Bloor (1976) and Collins and Pinch (1993).

Very broadly speaking, this is because all knowledge claims contain within them fundamental ambiguities
that allow for different interpretations of the same evidence, so there can
be equally “logical” but fundamentally incompatible versions of any
“facts.” Contrary to common criticism, constructivists absolutely recognize
that knowledge claims are deeply constrained by logic, tests, and experiments,11 but they maintain that even the most rigorous claims still contain
fundamental ambiguities and, hence, must also embody judgments: an
irreducibly social component to every “fact.”
As Bloor (1976) intuited, this view of knowledge opened the door for
a “sociology” of scientific knowledge (as opposed to scientific practice).
Before constructivism, sociologists of knowledge had respected the positivist presumption that properly “scientific” knowledge claims required
(or, indeed, tolerated) no sociological explanation because they were
shaped entirely by logic and evidence. These sociologists restricted themselves to explaining “erroneous” beliefs, where the scientific method had
fallen short: evidence of a procedural flaw or bias that needed explanation
(e.g., Merton 1973). Constructivists like Bloor undermined the Mertonian
distinction between how sociologists approach “pure” and “impure”
knowledge claims by denying the existence of pure knowledge claims (also,
e.g., Barnes 1974; Quine 1975; Kusch 2002).
Heeding this insight, a series of sociologists set out to investigate how
scientists negotiated the inevitable ambiguities of their work in practice
(e.g., Collins 1985; Latour 1987). Over roughly the same period as some
sociologists were encroaching on disaster investigations, therefore, others
were encroaching on the laboratory: charting the gap between “science
in principle” and “science in practice” through a series of epistemologically
conscious ethnographies (e.g., Latour and Woolgar 1979). Through this
work they convincingly demonstrated that the seemingly abstract concerns of philosophers had tangible practical consequences. Successive
studies, for instance, illustrated the surprising degree to which credible
and informed experts often disagree over seemingly objective facts and
the frequency with which expert communities reverse their opinion on
well-established and apparently inviolable truths (e.g., Collins and Pinch
1993).
A subset of these sociologists eventually turned their gaze to engineering
knowledge (e.g., Pinch and Bijker 1984; Bijker, Hughes, and Pinch 1987;
Latour 1996; MacKenzie 1996; Collins and Pinch 1998). They drew a
direct analogy between engineering and scientific knowledge claims and
explored the epistemology of bench tests, just as they had laboratory
experiments.

11 The argument is not, as has been claimed (Gross and Levitt 1994), that we create facts from whole cloth or that one fact is as good as any other.

Through studies of how engineers extrapolate from tests to
the real world and from past to future performance, these sociologists
leveraged logical dilemmas, such as the “problem of relevance” or the
“experimenter’s regress” (Collins 1985; Pinch 1993), to argue against the
idea that tests can objectively interrogate a technology or reveal the absolute “truth” of its functioning (Pinch 1993; Downer 2007).12 There is
more to understanding technology, they argued, than can be captured in
standardized tests.
This work projected a new image of engineering and its knowledge
claims that was at odds with their public portrayal. It suggested that
engineering’s orderly public image belied a “messy reality” of real technological practice, where experts were operating with high levels of ambiguity and routinely making uncertain judgments. (Again, the argument
was not that there were better alternatives to existing engineering knowledge, but simply that technological “facts” are best understood as hypotheses that, although profoundly constrained by engineers’ interactions
with the world, were ultimately contingent and unprovable; see, e.g.,
Petroski 1992, p. 104.)
Over time, scholars (primarily those working in science and technology
studies; e.g., Hackett et al. 2008) began to connect these insights to broader
sociological debates. This has been important because sociological discourse largely continues to assume a positivist “rational-philosophical”
worldview, where scientists and engineers are thought to work with objective evidence and—assuming they do their work properly—provide
reliable facts about the material world. This quiet but pervasive deference
to scientific positivism is unfortunate because a constructivist understanding of knowledge has important ramifications in many contexts, but sociologists rarely consider how its implications might be relevant to their
wider endeavors.
Sociologists of disaster, in particular, need to “bite the bullet of technical
uncertainty,” as Pinch (1991, p. 155) puts it. Turner (1976, p. 379) recognizes that engineering knowledge can be compromised by simplifications and interpretations, as does Weick (1998, p. 73) and others. Yet none
of these scholars adequately consider either the extent or ramifications of
this insight. Much like the early sociologists of science, they approach it
from a positivist position that frames simplifications and interpretations—
when they lead to failures—as deviances from an “ideal” method, thereby
construing “erroneous” engineering beliefs as the product of “flawed” practices, which then become legitimate objects of sociological investigation.
12 Many of these findings are echoed in introspective studies by engineers. Petroski (1992, 1994), for instance, has written extensively about the inevitable limits of engineering knowledge.
A constructivist perspective, by contrast, suggests that treating simplifications and interpretations as “compromises” or “deviances” ignores how
fundamental and pervasive they are in all engineering facts and the extent
to which they are constitutive of “true,” as well as “false,” engineering
knowledge.
By showing that it is impossible to completely and objectively “know”
complex machines or their behaviors, in other words, constructivism suggests that the idea of “failing” technologies as recognizably deviant or
distinct from “functioning” technologies is an illusion of hindsight. By this
view, there need be no inherent pathology to failure because there can be
no perfect method of separating “flawed” from “functional” technologies
(or “true” from “false” engineering beliefs). To paraphrase Bloor (1976), it
suggests that sociologists should approach airplanes that fall in the same
way as those that soar.
To reiterate, the argument here is not that all knowledge claims are
equal. Constructivists might argue that perfect scientific (or technological)
certainty is unrealistic on any level, but they readily concede that some
knowledge claims are much more efficacious than others and that some
understandings do get close enough to “certain” for any doubt to be materially
inconsequential (unless a great many of our fundamental preconceptions
are wrong, for instance; then steel is “certainly” stronger than balsa wood,
and penicillin cures better than prayer). They do, however, hold that the
fundamental ambiguities of engineering knowledge quickly become more
consequential as we approach the frontiers of our understanding, and
perhaps their most important contribution has been to illustrate the tangible implications of this.
This insight, that there are always things that engineers do not know—
“unknown unknowns”—is hardly new, of course. Indeed it is a common
refrain among frontline aviation engineers, whose views were well summarized by the National Academy of Sciences in a 1980 study of FAA
regulatory procedures: “It is in the nature of every complex technological
system that all possible risks—even mechanical ones—cannot be anticipated and prevented by the design. Most safety standards have evolved
from the experience of previous errors and accidents” (National Academy
of Sciences 1980, p. 41). Although expert practitioners might understand
that their knowledge has fundamental limitations, however, such experts
have long struggled to fully articulate why these limitations must exist.
And, partly as a consequence of this, social scientists have been slow to
consider its implications. Sociologists might note that engineers consider
it a general truth that technical understandings are imperfect, but since
they lack an explanation for why it is a necessary truth, sociologists invariably construe it as a problem they might solve rather than a fundamental constraint that their accounts must accommodate. Quite simply,
the engineers’ refrain appears too much like an unconsidered aphorism
for it to merit deep sociological consideration.
This has created systematic lacunae in sociological accounts of disaster,
for, as we shall see, the fundamental limitations of proof have important
ramifications. Before exploring these ramifications, however, let us return
to Aloha with a constructivist eye.
ALOHA 243 REVISITED
The NTSB’s interpretation of Aloha as a maintenance failure—which
seemed to contradict sociological accounts of routine engineering practice—is further compromised by the fact, acknowledged by the report
itself, that engineers believed the 737’s fuselage to be safe even if maintenance inspections fell spectacularly short. This belief was premised on
the “fail-safe” design of its metal skin. The airplane’s architects had divided the fuselage into small rectangular panels, each bounded by “tear
straps” designed to constrain ruptures by channeling and redirecting
cracks (see fig. 2). In theory, therefore, a tear or rupture should have
caused the fuselage to “flap” open around a single panel, releasing internal
pressure in a way that was limited, controlled, and—importantly—not
threatening to the fuselage’s basic integrity (NTSB 1989, p. 34). For extra
security, the design allowed for cracks of up to 40 inches that encompassed
two panels simultaneously. Explaining Aloha, as the NTSB recognized,
requires an explanation of the failure of its fail-safe design. What the
NTSB failed to acknowledge, however, was that it is this explanation,
rather than the airline’s maintenance practices, that offers the most insight
into the accident’s cause.
When the fuselage failed in a way that completely circumvented its
“fail-safe” tear straps, it challenged basic aeronautical engineering beliefs
about the propagation of cracks and revealed shortcomings in fundamental assumptions about metal fatigue and its relationship to aircraft
manufacture. A range of intricate engineering theories—what Petroski
(1994) calls “design paradigms”—framed engineers’ understanding of the
737’s metal skin. Intricate formulas defining the tensile strength of aviation-grade aluminum, for instance, were a lens through which engineers
could interpret its design and predict its failure behavior. Aloha revealed
something that constructivists might have expected: that these formulas
held deep ambiguities and that these ambiguities hid important misconceptions.
These misconceptions primarily concerned the properties of what is
known as multiple site damage (MSD). MSD is the microscopic cracking
that appears around fastener holes as a fuselage fatigues. “The Aloha
accident stunned the industry,” as one expert put it, by demonstrating the unforeseen effects of multiple site damage.13

Fig. 2.—Boeing 737 fuselage (NTSB 1989)
At the time of the accident, neither the manufacturer nor the airlines
considered MSD a particularly significant safety issue. The reason was
that even though it was known to be virtually undetectable at an early
stage, experts universally believed that no MSD crack could grow from
an imperceptible level to 40 inches (the fuselage fail-safe design level) in
the period between maintenance inspections. So, in theory, it should always have been detectable long before it became dangerous. Fatefully,
however, this assumption failed to recognize that, in certain areas of the
fuselage and under certain conditions (specifically, when there was significant “disbonding” between fuselage sheets, a corrosive saltwater environment, and a significant passage of time), MSD had a tendency to
develop along a horizontal “plane” between a line of rivets (see fig. 3).
This turned out to be important, however, because such a string of almost
imperceptible, but aligned and adjacent, cracks could abruptly coalesce
into one big crack, longer than 40 inches, that would nullify the fail-safe tear straps and, in the event, almost down an entire aircraft (NTSB 1989, sec. 2.3).14

13 Former FAA administrator Allan McArtor, quoted in Hoffer (1989, p. 70).

Fig. 3.—MSD cracks (NTSB 1989)
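A rough arithmetic sketch conveys why this coalescence defeated the fail-safe logic. The figures are purely illustrative (the one-inch rivet spacing assumed here is a stand-in value, not one taken from the NTSB report): if each of n adjacent fastener holes along a joint carries a crack too small to detect, and the holes sit a distance s apart, then the cracks that link up form a single flaw of roughly

\[
L \approx n \cdot s; \qquad n = 50,\ s = 1\ \text{inch} \;\Rightarrow\; L \approx 50\ \text{inches} > 40\ \text{inches}.
\]

In effect, the 40-inch allowance presupposed a single crack growing progressively from a detectable site, not dozens of individually imperceptible cracks maturing in parallel and then joining at once.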
Predicting Fatigue
Why did the behavior of Aloha’s MSD come as a surprise? It is reasonable
to imagine that aeronautical engineers, given the decades of metallurgical
research available to them, should have understood the properties of MSD and been able to predict its behavior. After all, the prevailing orthodoxy was hardly idle. It was grounded in extensive laboratory experiments and decades of experience with aluminum airplanes.

14 This is, by necessity, a simplification of a complex and multifaceted engineering explanation, albeit, it is hoped, not a distorted one in respect to the central argument it is being used to illustrate. A fuller account (including a more recently proposed alternate, “fluid hammer,” hypothesis) would complicate the narrative without undermining the main premise.
Engineers who work closely with metal fatigue, however, are generally
unsurprised by its surprises. When fatigue felled a U.K. Royal Air Force
transport aircraft outside Baghdad in January 2005, for instance, a Ministry of Defense spokesperson echoed a long-standing engineering lament
when he said that “It’s hard to believe that there might be a structural
failure after all the close monitoring we do, but there is a history of
surprises about metal fatigue” (Air Safety Week 2005). These surprises
began in the mid-19th century, when it became apparent that seemingly
good machine parts were failing as they aged. By 1849, engineers were
researching metal fatigue to better understand the problem (in the hope
of avoiding surprises). Their work was fruitful, and by the 1870s researchers had carefully documented the failure behavior of various alloys,
even though the causes of this behavior remained opaque (Garrison 2005).
Yet despite such efforts, fatigue never lost its capacity to surprise. It shook
the nascent civil aviation industry almost a century later by felling two
airliners, both de Havilland Comets, in the first four months of 1954 (Faith
1996, pp. 158–65). (“Although much was known about metal fatigue, not
enough was known about it by anyone, anywhere,” as Geoffrey de Havilland would lament in his autobiography [in Marks 2009, p. 20].) Even
today, some 20 years since Aloha, aircraft fatigue management remains
an inexact science: one that Air Safety Week (2005) recently described as
“a dark black art . . . akin to necromancy.”
Such uncertainty might sound surprising, but as outlined above, sociologists of technology have long attested that uncertainty is a routine
(and inevitable) aspect of all technological practice (Wynne 1988; Downer
2009a). “Engineering,” as the civil engineer A. R. Dykes once put it, “is
the art of modeling materials we do not wholly understand into shapes
we cannot precisely analyze so as to withstand forces we cannot properly
assess, in such a way that the public has no reason to suspect the extent
of our ignorance” (speech to British Institution of Structural Engineers,
1976). The disjuncture he alludes to, between expert knowledge and its
public perception, is also well documented by sociologists, who have long
noted that “outsiders” tend to be less skeptical of facts and artifacts than
the experts who produce them, simply because outsiders are not privy to
the epistemological uncertainties of knowledge production (e.g., MacKenzie 1998). “Distance lends enchantment,” as Collins (1985) puts it, and
metal fatigue is no exception in this respect.15 From afar it looks straightforward, but from the perspective of the engineers closest to it, it is striking
in its formidable complexity and ambiguity: a reflection, as we will see,
of deep epistemic constraints.
The formal study of metal fatigue is known as “fracture mechanics.”
Its acolytes examine the properties of metal (principally, tensile strength
and elasticity)16 and their relationship to different kinds of stress. In the
laboratory—where minimally defective materials in straightforward
forms are subjected to known stresses—fracture mechanics is a complex
but broadly manageable science. It boasts carefully honed models that
work tolerably well in predicting where, when, and how a metal form
will fail (even if the most accurate of them must venture into the rarified
algorithms of quantum mechanics). In operational aircraft like Aloha 243,
however—where elaborate forms, replete with rivet holes and imperfections, experience variable and uncertain stresses, the vicissitudes of time,
and the insults of human carelessness—the calculations involved are
vastly more forbidding. Certain design choices, for instance, can create
unanticipated stress points in the fuselage that instigate fatigue (in the
case of the Comets in 1954, it was the shape of the windows), as can
slight imperfections such as scratch marks or defects in the metal itself.17
All these variables are a challenge to accurately model in advance, so
much so that engineers confess that even the most elaborate fatigue calculations in real airplane fuselages essentially amount to informed guesses
(Feeler 1991, pp. 1–2).
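To give a sense of what such laboratory models look like, one standard fracture-mechanics relation (a textbook formula offered here for illustration, not one cited in the NTSB report) describes crack growth under cyclic loading:

\[
\frac{da}{dN} = C\,(\Delta K)^{m},
\]

where a is the crack length, N the number of load cycles, \Delta K the range of the stress-intensity factor at the crack tip, and C and m empirically fitted constants. The difficulty described above lies less in the formula than in its inputs: \Delta K depends on the local geometry, loads, and flaws of a real riveted fuselage, which is precisely what cannot be specified with laboratory precision.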
The same issues that made Aloha’s fatigue difficult to predict also made
it problematic to monitor. Most ferrous metals have a “fatigue limit,”
which is to say that they do not fatigue at all until they reach a specific
stress threshold. Aluminum alloys like those used in the 737, however,
have no clear fatigue limit and are thought to fatigue, often imperceptibly,
at any stress level. Moreover, efforts to monitor and model this process
are further complicated by its nonlinearity. As metals fatigue they retain
their original strength until they reach another threshold: a tipping point
at which actual cracks begin. This initial “full-strength” stage is itself
complex. Precrack fatigue was originally thought to develop at the microscopic level of the metal's crystalline structure, but it is now (after Aloha) understood to accumulate at the atomic level, where it is imperceptible to maintenance engineers. (Modern "nondestructive evaluation" techniques enable inspectors to detect cracks as small as 0.04 inch, beyond which they are effectively blind; Maksel 2008.) Even today, as one engineer puts it, "Until cracking begins, it is for all practical purposes impossible to tell whether it will begin in 20 years, or tomorrow" (Garrison 2005).

15. MacKenzie (1998) speaks of an "uncertainty trough," based on this principle, which maps confidence in a technology against an actor's epistemic distance from it.

16. "Tensile strength" is the measure of a metal's resistance to being pulled apart, usually recorded in pounds per square inch. "Elasticity" is the measure of a metal's ability to resume its original form after being distorted, the point at which this no longer happens being the metal's "elastic limit."

17. Holes in the fuselage, such as those required by bolts and rivets, are particularly vulnerable because they tend to concentrate stress around their periphery (Feeler 1991, pp. 1–2).
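The contrast between ferrous and aluminum-alloy fatigue described above can be stated in the standard "stress-life" terms of the engineering literature. The relation below, Basquin's equation, is a textbook form included only as an illustration; it is not invoked by Boeing, the FAA, or the NTSB in the materials cited here.

\[
\sigma_a = \sigma'_f\,(2N_f)^{b}
\]

Here σ_a is the amplitude of the cyclic stress, N_f the number of cycles to failure, and σ'_f and b constants fitted to the material. For many steels the fitted curve flattens into a plateau (the "fatigue limit"), below which N_f is effectively infinite. For the aluminum alloys of the 737's skin no such plateau exists: the curve keeps sloping downward, so no stress amplitude, however modest, can be declared safe for an unlimited number of pressurization cycles. The airframe's life therefore has to be bounded by testing and inspection rather than by any intrinsic material threshold.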
The science of metal fatigue is inherently complex, therefore; but note
that much of its ambiguity in this instance revolved around the applicability of the science to the technology itself. Where the scientists modeled
simple shapes in the laboratory, for instance, engineers had to work with
complex forms. And, as we saw, such differences made it difficult to
extrapolate from “science” to “practice.” Constructivists have long argued
that there are uncertainties at the heart of all scientific knowledge (albeit
to varying degrees), but such uncertainties are exacerbated when engineers
try to apply that knowledge in new contexts.
It is worth noting, for example, that even a “platonically perfect” understanding of fatigue in aluminum alloys would have been insufficient
for understanding the failure behavior of Aloha’s airframe. The airplane’s
engineers would also have had to wrestle with the nuances of the adhesives
used to bind its aluminum sheets together, as well as an unknown—and
unknowable—number of other complicating variables. This unknowableness is the key here: Even if we take the “science” that engineers invoke
to be “true” and unproblematic (as might often be practical at this level
of analysis),18 there will always be uncertainties embedded in every application of scientific knowledge to technology. Aviation engineers will
keep having to judge the applicability of scientific models to functional
airplanes (as, indeed, will engineers working in all fields). Aloha was not
exceptional, in other words. Fatigue management might be a “dark black
art,” but if its ambiguities are special, then it is only in their degree.
Testing Fatigue
Given the complexities in theoretical fatigue models of aviation-grade
aluminum and the uncertainties inherent to applying those models to a
real fuselage, it is easy to understand why the 737’s designers might fail
to predict the failure behavior of their airframe, but it remains difficult
to intuit why their errors went undiscovered for so long. Engineers have
long understood that they cannot perfectly deduce technological performance from models, algorithms, and blueprints. This is why they perform tests. Yet Boeing, under the FAA's keen regulatory gaze, extensively tested the fatigue resilience and decompression behavior of the 737's fuselage.19 It even acquired and retested an old airframe as some 737s in service approached their original design life. And in each case the aircraft performed just as calculations predicted.

18. Constructivists, as discussed, assert that there are judgments embedded in every scientific "fact." They might concede, however, that such judgments could often be inconsequential at this level of analysis.
As sociologists of knowledge have again long argued, however, tests
and experiments are subject to their own fundamental epistemic limitations (Pinch 1993; Downer 2007). Hence, we might understand the disparity between the 737’s performance in the laboratory and its performance in service by reference to a well-documented tension between tests
and the phenomena they attempt to reproduce.
Tests simplify by necessity. They get engineers “closer to the real world,”
as Pinch (1993, p. 26) puts it, “but not all the way.” The reason is that
the world is infinitely complex, and so examining it requires what Scott
(1998, p. 11) calls “a narrowing of vision.” Engineers cannot perfectly
reproduce the “real world” in their laboratories, in other words, so they
must reduce its variables and make subjective judgments about which
of them are relevant.20 As Aloha exemplifies, however, such judgments
are always theory-laden and fallible.
Aloha’s fatigue tests were deeply theory-laden. The way engineers understood fatigue before the accident led the 737’s architects and regulators
to conclude, logically, that escaped engine blades, not fatigue, would cause
the largest possible fuselage ruptures (NTSB 1989, sec. 1.17.2). When
framing the tests of the fuselage, therefore, Boeing interrogated the fail-safe panels by "guillotining" a fully pressurized fuselage section with two
15-inch blades to simulate an escaped fan blade. This test created punctures smaller than 40 inches (the specified design limit) that traversed one
tear strap and, as anticipated, allowed the skin to “flap” safely (NTSB
1989, sec. 1.17.2). Unfortunately, however, it also left engineers blind to
the dangers of MSD.
This fateful understanding of fatigue was itself grounded in tests, but
these tests were similarly constrained. They were framed by the widely
held engineering belief that the key determinant of fatigue in an aircraft
fuselage was its number of “pressurization cycles” (usually one full cycle
for every takeoff and landing). To examine the 737's fatigue life, therefore, its designers pressurized and depressurized (cycled) a fuselage half section 150,000 times (representing twice the design life goal) in a test facility. This produced no major cracks and fulfilled all FAA certification requirements (NTSB 1989, sec. 1.17.2). Again, however, the engineers were misled by assumptions in a way that would be familiar to constructivists.

19. In general, the structural components of an airplane (such as the airframe and wings) are designed such that "an evaluation of the strength, detail design, and fabrication must show that catastrophic failure due to fatigue, corrosion, manufacturing defects, or accidental damage, will be avoided throughout the operational life of the airplane" (NTSB 2006, p. 37).

20. MacKenzie (1990), for instance, explores high-level doubts about whether successful test launches proved that America's nuclear missiles would hit their targets under wartime conditions.
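Some simple arithmetic, using only the figures given above plus one illustrative assumption (the flights-per-day figure is hypothetical, not a number from the NTSB report), conveys how quickly a short-haul operation consumes the fatigue life that the test program certified:

\[
\frac{150{,}000 \text{ test cycles}}{2} = 75{,}000 \text{ design-goal cycles},
\qquad
\frac{75{,}000 \text{ cycles}}{10 \text{ flights/day} \times 365 \text{ days/year}} \approx 20 \text{ years}
\]

An inter-island airframe flown since 1969 could thus plausibly approach its design-goal cycle count within roughly two decades of service, while the laboratory fuselage accumulated twice that count dry, properly bonded, and over a greatly compressed period.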
The NTSB suggests that the test was unrepresentative, not because it
was inexpertly executed but because its very theoretical foundations were
flawed. By isolating “cycles” as the limiting factor in fatigue, the engineers
unwittingly excluded a range of variables that, in the event, were highly
significant to understanding Aloha’s failure. Flight 243 was, to be sure,
a highly cycled aircraft (because of its short routes), but there were other
factors that contributed to its decay. One was its age: manufactured in
1969, it was one of the longest-serving 737s in operation. Another was its
operating setting: the warm, saltwater environment of Hawaii, a misery
for metal. A third was its flawed construction: as outlined above, imperfect
bonding in the airplane’s fuselage allowed salt water to creep between
its alloy sheets. These three factors—age, environment, and manufacture—set Aloha 243 apart from Boeing’s test fuselage. The disbonding
created gaps that allowed salt water to creep in and, over a long time,
to corrode the metal in a way that stressed the aircraft’s rivets and nurtured cracks in its fuselage. Unsurprisingly, therefore, it fatigued differently from the new, properly bonded fuselage that engineers repeatedly
pressurized in a dry laboratory over a compressed time (and differently
from the older fuselage used in later follow-up tests, which was properly bonded and had not been flying routes in salty Hawaii).
In a multitude of ways, then, the designers and testers of the 737’s
fuselage ran afoul of the logical problems highlighted by constructivists.
Unable to know everything (or to know if they knew everything) about
the materials they were using, they were forced to work with hypotheses;
and unable to test these hypotheses without relying on further hypotheses
to frame the tests, they were forced to make judgments and assumptions
about relevance and representativeness. On a deep epistemological level,
there lay an unavoidable but fateful uncertainty in their understanding
of the 737’s failure behavior.
In an ideal world, engineers would have been able to perform perfectly
representative fuselage tests on the 737—recreating every possible combination of environmental conditions and manufacturing imperfections.
In this ideal world, the tests would have run for decades to accurately
reproduce the vicissitudes of time (as some clinical trials do) and would
have maximized their statistical representativeness by involving hundreds, if not thousands, of aircraft. This ideal world was clearly impractical, however. All tests must simplify. The testers had to strike a balance
between “control” and “authenticity”: between making their tests accurate
and making them useful (Downer 2007). As we saw above, this balance
necessarily involved judgments and assumptions: that the airframes
would be properly bonded, for instance, that fatigue cracks grow slowly,
and that fuselage punctures would stretch no further than 40 inches. These
assumptions were based on the best information available, which came
from fracture mechanics: a laboratory science with unknowable applicability to operational aircraft, and where experts freely conceded that
even the most sophisticated models were little more than “informed
guesses.”
Inevitability
From the vantage of hindsight, Aloha 243, like most accidents, looks
replete with early warnings. Engineers might have spotted and acted on
its MSD cracks, for instance, had they understood better what to look
for and how to frame their tests to find it. Without this knowledge, however, there was no logical way to know where to look or what to look
for; the dominant understanding of metal fatigue made the implications
of MSD invisible.
Just as if Aloha 243 were a normal accident, therefore, its “warning
signals” were visible only in retrospect. Indeed, it is perhaps more accurate
to say that those signals existed only in retrospect. Relative to the contemporary paradigms of metal fatigue, most of Aloha’s warning signals
would have been unremarkable even if they were visible: they could hide
in plain sight. Put differently, “hindsight” could not have been “foresight,”
in this instance, because the former is epistemically privileged in a way
that the latter is not.
Aloha, in other words, was unavoidable. Or, at least, an accident of its
kind was unavoidable. This is to say that, although there was nothing
that made that specific failure inherently unknowable, there was also no
chance of an organizationally and logically perfect social system that could
have guaranteed that every critical engineering judgment underpinning
the 737’s design was correct (just as no idealized social system can guarantee against a fateful one-in-a-billion normal accident). Aloha’s fuselage
embodied a complex and detailed understanding of metal fatigue; this
understanding was based on theories; those theories were built on (and
reified by) tests; and those tests were themselves inescapably theory-laden.
Conventional sociological accounts might explain how engineers came
to understand fatigue in a way that left them blind to MSD (as opposed
to something else), therefore, but only an account based in the sociology
of knowledge demands that observers recognize that such oversights are
inevitable in a more general sense. All complex technologies are “real-life
experiments,” as Weingart (1991) puts it, and it is the nature of experiments
that their outcomes are uncertain.21
Hindsight to Foresight
Like a normal accident, therefore, Aloha could not have been “organized
away,” even though hindsight gives that impression. Unlike a normal
accident, however, hindsight from Aloha almost certainly did allow engineers to organize away a meaningful number of future accidents.
This is a subtle but important distinction. As noted above, normal
accidents offer relatively few engineering lessons because the incredibly
random but fateful coincidences that cause them are highly unlikely to
reoccur. Nuclear engineers might rectify all the small errors that combined
to instigate TMI, for instance, but with so many other irregularities that
might align to cause the next normal accident, this effort would be unlikely
to dramatically affect nuclear safety.
Accidents arising from epistemic limits, by contrast, are much more
likely to reoccur, and so the insights they offer are more useful. If Aloha
243 had disappeared over an ocean, cause undiscovered, then other aircraft would likely have failed in the same way, and for the same reason.
Hence its investigation offered valuable lessons: having belatedly discovered that MSD could create large fissures under certain conditions, there
was no reason for engineers to be surprised by this again.
Aloha, in other words, was edifying in a way that TMI was not. The
accident allowed engineers to revisit their understanding of fatigue and
its specific relationship to the 737’s fuselage, so that similar accidents
might be identified in the future. This is not a process that could have
worked in reverse, as we saw above, but it is significant nevertheless.
Since Aloha, aeronautical engineers have modified their understanding of
metal fatigue, tear panels, fuselage life, and much else besides,22 and because of these changes there has never been another incident quite like
it (although fuselage failures relating to fatigue continue to occur for
different reasons).23
The same hindsight-foresight relationship is evinced by the 1954 Comet
crashes, outlined above. When fatigue felled the first Comet, it was lost
at sea, and so investigators were unable to learn that the airplane’s square
windows were inducing fractures. The problem remained, therefore, and caused another disappearance just four months later. Only when flights were halted, wreckages recovered, and the causes determined did the problem disappear (Faith 1996, pp. 158–65). Engineers at the time had no reason to suspect that square windows would have this effect but were able to leverage their hindsight into foresight: the Comet investigation killed the United Kingdom's effort to dominate the nascent passenger jet market, but civil aircraft have had oval windows ever since.

21. Aloha, we might say, underlines Petroski's point that "even years of successful application [do] not prove an engineering hypothesis to be valid" (1992, pp. 104–5).

22. Among other things, Aloha led to a shift in maintenance paradigms, away from a "fatigue safe-life" approach to an approach with greater emphasis on "damage tolerance" and the physics of crack growth (NTSB 2006, pp. 37–38).

23. Fuselage breaches still happen, but not uncontained breaches like Aloha's.
The epistemology of Aloha’s eventual demise has complex ramifications, therefore. It reinforces Perrow’s claim that some accidents are inevitable but carries different implications about our relationship with
disaster. Sociologists lack a term for accidents like Aloha, which are inevitable but neither “normal” nor condemned to be unavoidable forever.
For this reason, a supplemental sociological category of disaster is called
for: the epistemic accident.
EPISTEMIC ACCIDENTS
If normal accidents are an emergent property of the structure of systems,
then epistemic accidents are an emergent property, a fundamental consequence, of the structure of engineering knowledge. They can be defined
as those accidents that occur because a scientific or technological assumption proves to be erroneous, even though there were reasonable and
logical reasons to hold that assumption before (although not after) the
event.
Epistemic accidents are analogous to normal accidents in several ways,
not least in that they both offer an irreducible explanation for why some
disasters are inevitable in some systems. As discussed above, however,
the two are not the same. They stem from different root causes and have
different implications (see table 1). Nevertheless, an economical way to
explore the properties and implications of epistemic accidents is to echo
the breakdown of normal accidents above.
1. Epistemic accidents are unpredictable and unavoidable.—As discussed, epistemic accidents, like normal accidents, are unavoidable. They
circumscribe another subset of unforeseeable accidents that will inevitably
elude our “technologies of control” and obdurately resist the best-intentioned prescriptions of sociologists. This unavoidability stems from a fundamentally different cause, however. Normal accidents are unpredictable
because engineers cannot wholly predict the multiplicity of possible (yet
unremarkable) interactions in complex, tightly coupled systems. Epistemic
accidents, by contrast, are unavoidable because engineers necessarily build
technologies around fallible theories, judgments, and assumptions. These
different mechanisms translate into very different consequences, as seen below.

TABLE 1
Epistemic and Normal Accidents

Normal Accidents                          Epistemic Accidents
Unforeseeable/unavoidable                 Unforeseeable/unavoidable
More likely in tightly coupled,           More likely in highly innovative
  complex systems                           systems
Unlikely to reoccur                       Likely to reoccur
Do not challenge design paradigms         Challenge design paradigms
Offer no insights/not enlightening        Offer insights/enlightening
2. Epistemic accidents are more likely in highly innovative systems.—
If normal accidents are more likely when technologies are complex and
tightly coupled, then epistemic accidents are more likely in systems that
stretch the boundaries of established theory and prior experience. This is
to say, they vary with “innovativeness.”24 An airframe panel made from
a new composite material, for instance, is neither complex nor tightly
coupled, yet from an epistemic perspective, its newness is inherently risky.
Perrow (1999, p. 128) cites the aviation industry’s “use of exotic new
materials” as a factor that directly contributes to the safety of modern
aircraft; but this ignores the fact that when engineers design airframes
with traditional aluminum panels, they draw on decades of metallurgical
research and service experience in a way that they cannot when working
with new materials (Downer 2009b).25
3. Epistemic accidents are likely to reoccur.—Unlike normal accidents,
which are—almost by definition—“one-of-a-kind” events, epistemic accidents are highly likely to reoccur. As discussed above, if 10,000 identical
reactors were run for 1,000 years, it is unlikely that one would fail for
the exact same reasons as TMI. But if Aloha 243 had not failed when it
did, then it is probable that—like the Comets in 1954—a similar airplane
in similar circumstances would have failed soon after in a similar fashion.
Epistemic accidents are not random, one-in-a-billion events.
4. Epistemic accidents challenge design paradigms.—Unlike normal accidents, which rarely challenge engineers’ understandings of the world,
epistemic accidents invariably reveal shortcomings in existing design paradigms. The fact that a freak combination of pipes, valves, and warning lights could lead to the near meltdown of TMI did not force engineers to rethink their understanding of pipes, valves, or warning lights. When a fatigue crack coalesced from nowhere to blow the roof off Aloha 243, however, this undermined a number of important engineering theories, facts, and understandings.

24. Of course, "innovativeness," much like "complexity" and "coupling," is difficult to measure exactly or quantify objectively, but this should not detract from its analytical value. Beauty might be in the eye of the beholder, but this hardly negates its existence, its value as an explanatory category, or its tangible consequences.

25. Aloha testifies to how service experience reveals unanticipated properties even in well-established materials.
5. Epistemic accidents are edifying.—Our socio-organizational “technologies of control” have little to learn from epistemic accidents,26 which,
like normal accidents, will arise in even the most perfect settings; yet these
accidents do offer vital engineering-level insights into the systems themselves. Epistemic accidents allow engineers to leverage hindsight in a way
that normal accidents do not. Maintenance crews, after Aloha, understand
MSD differently and are wise to its implications.
DISCUSSION
The Limits of Control
Aloha 243 is an exemplary epistemic accident, but it illustrates a syllogism
that should stand on its own logic. If we accept the premise that some
accidents result from erroneous beliefs and we further accept that even
the “best” knowledge claims necessarily contain uncertainties, then it follows that epistemic accidents must exist. To argue that Aloha was not an
epistemic accident, therefore, would not refute the idea itself, which should
hold irrespective of the interpretive flexibilities of individual examples.
(Sociologists, we should remember, have long been comfortable with the
existence of general phenomena that transcend the vagaries of specific
cases.)27
To say that epistemic accidents must exist is not, of course, to say that
all, or even many, accidents fit this description (just as to say that normal
accidents exist is not to say that every accident is normal). As a generation
of sociologists since Turner have compellingly illustrated, the social dimensions of technological systems are undeniably consequential and
demonstrably improvable, even if they are not perfectible. From a constructivist perspective, therefore, a wide range of perspectives on failure
absolutely remain valuable and viable. It should go without saying that
social scientists must continue to explore the social, psychological, and
organizational foundations of error.
At the same time, however, the understanding that some accidents are
inevitable, unavoidable, and unforeseeable has to frame the sociology of disaster and its relationship to policy discourse. As discussed above, sociologists increasingly participate alongside engineers in the public rituals through which states uncover, and then appear to remedy, the underlying problems that led to disasters (e.g., Vaughan 2003). These rituals (performed formally by investigative committees and informally by the media and the academy) serve a socially and politically important function in that they help restore public faith in a technology (and its governance) in the wake of accidents. Through this role, therefore, sociologists participate in the civic epistemology through which we frame important social choices pertaining to technological risk. "This nuclear plant may have failed," they reassure us, "but we have found the underlying problem, and the other plants are now safe."

26. Except about their own limitations.

27. Beginning, perhaps, at the very dawn of the discipline with Durkheim's ([1897] 1997) thoughts on suicide and anomie.
The fact that accidents happen—Murphy’s infamous law—might seem
obvious, but it is routinely neglected by social theorists, publics, and policy
makers alike, all of whom are prone to claim that "no," accidents "will not" happen if we work hard enough. We might say that modern societies,
together with most sociologists, are implicitly committed to a realist conception of technical knowledge and, through it, to a worldview that sees
technological systems as “objectively knowable.” Murphy’s law has no
place in this conception of the world. Accidents, we believe, need not
happen, and when they do happen we feel that someone (or, in the case
of most sociologists, some phenomena) should always be culpable. Accidents reveal flaws, we believe, and even though they are routine, we do
not see them as inevitable.
The result, as La Porte and Consolini (1991, p. 20) put it, “is an organizational process colored by efforts to engage in trials without errors,
lest the next error be the last trial.” It allows for a public discourse that
freely conflates “safety” with “regulatory compliance,” thus legitimatizing
systems that, unlike airplanes, absolutely cannot be allowed to fail:28 nuclear plants near major cities, for instance, or the complex apparatus
behind the strategic arsenal, with its sinister promise of “mutually assured
destruction.”29
If sociologists were to recognize that a certain fraction of accidents are
inevitable and were able to articulate why this must be the case, they might
meaningfully resist such discourses. Instead of always framing failures as
the product of sociostructural flaws or errors, they could view some as
evidence that technological knowledge (and, thus, formal assurances) is
inevitably imperfect and recognize that this insight has its own sociological
implications. An emphasis on inevitability may not be a heartening perspective, but it is an important one. It is why NAT is so significant: it compellingly explains why catastrophic potential is woven into the very fabric of some technological systems, and it offers a convincing (if qualitative) sense of which systems are most at risk (i.e., complex, tightly coupled systems) and how that risk might be mitigated.

28. In Lee Clarke's (1989) terms, it allows us to avoid thinking "possibilistically."

29. A system that has come closer to devastating failure than we often imagine (Sagan 1993).
A constructivist understanding of failure both complements and expands on NAT. By drawing on epistemology rather than probability, it
offers an alternative explanation for why some failures are inevitable and,
through it, a different perspective on which systems are vulnerable. It
suggests that unavoidable accidents are likely to vary with the “innovativeness” and “familiarity” of a system, as well as with its “complexity”
and “coupling.” Engineers might intuitively judge technologies along these
axes, but civil bureaucracies—wedded to an idea of formal assessments
underpinned by objective tests and formal measures—rarely do.
Such metrics, we might suppose, are often eschewed because of their
inherent subjectivity. Bureaucracies prefer to measure the measurable
(Scott 1998), yet measuring factors such as “familiarity” and “innovativeness” is a complex undertaking. Both metrics depend, to a large degree,
on the extent to which a system is similar to other systems (i.e., its peers
and predecessors; Downer 2011), but as Collins (1985, p. 65) eloquently
illustrates, “similarity” will always have a bounded and socially negotiated
meaning. To require that regulators assess systems in these terms, therefore
(or indeed, in terms of complexity and coupling), would be to require that
they more explicitly acknowledge the qualitative and contingent nature
of technology assessment. This would be institutionally difficult—given
the quantitative biases of modern “audit societies” and their “risk regimes”
(Power 1997, 2007)—but it would be an important sociological contribution to public discourse.
Learning from Disaster
A corollary to the inevitability of epistemic accidents, and a significant
way in which they differ from normal accidents, lies in the insights they
offer about the systems themselves. By portraying all new technologies
as “real-life experiments” with uncertain, but ultimately instructive, outcomes, a constructivist understanding of failure highlights the instructive
nature of accidents, as well as their inevitability.
Engineers have long understood and articulated this principle. Perhaps
most notably, Petroski (1992, 1994, 2006) stresses the necessity of “learning
from failure” in engineering. Yet its importance is underrecognized in
sociological and public discourse. If taken seriously by these spheres, however, it would offer a wide range of insights into the relationships between
engineering, reliability, and disaster.
For example, a constructivist view of failure suggests that the civil
aviation industry slowly achieved the reliability of modern aircraft as its
understandings of their designs were successively refined by a series of
“surprises” in the field30—many being tragic, but ultimately instructive,
accidents.31 A huge number of the design choices reflected in a modern
airliner have an admonitory story (an epistemic accident) behind them.
Oval windows hark back to the early Comet failures, for instance, while
stipulations about fuselage bonding reflect insights from Aloha. Boeing
diligently compiles such lessons into a confidential volume—Design Objectives and Criteria—a canonical text in the planning stages of its new
jetliners.
By this perspective, moreover, it is intuitive to view the legacy of accidents that characterize the early days of aviation engineering as (to some
extent) a necessary “rite of passage” rather than a product of early shortcomings in our “technologies of control,” which might now be circumvented. Aviation’s oft-cited reliability, by this view, is necessarily built on
highly specific insights gleaned from enlightening but unavoidable surprises. This, in turn, would explain why reliable high technologies invariably have long and inglorious legacies of failure behind them; why
“the true sin” in aviation engineering is “not losing an aircraft, but losing
a second aircraft due to the same cause” (Cobb and Primo 2003, p. 77);
and why “understanding aircraft,” as the director of the FAA testified to
Congress in 1996, "is not a year's process, but a fifty-, sixty-, or seventy-year process" (U.S. Congress 1997, p. 63).
The same logic also suggests a new mechanism of “path dependency”
in safety-critical systems. The insights that experts cull from epistemic
accidents are most applicable when the design of a system remains stable,
because design changes introduce new uncertainties and, with them, new
judgments and avenues for failure. (The many painful lessons learned
about metal fatigue are becoming less useful as manufacturers start to
abandon metal in favor of composite materials, for instance.) Certainly,
some of the insights gleaned from airplane wreckages are broadly applicable across a range of engineering endeavors, but many are extremely
specific to the technology in question. (As Aloha illustrates, epistemic
accidents can arise from subtle misunderstandings of very design-specific
interactions.)32 Hence, there may be a degree to which designs of safety-critical systems become "locked in" (David 1985), not because they are optimal in principle but because engineers have been with them long enough to have encountered (and engineered out) many of their unanticipated surprises. If critical technologies must perform reliably, we might say, and unreliability is the cost of innovation, then it follows that the designers of critical technologies must exercise a high degree of "innovative restraint."33

30. The improving reliability of aircraft is reflected in the number of accidents per million departures, which has been declining steadily since the dawn of the jet age (Research and Innovative Technology Administration 2005).

31. This is a necessary, but not sufficient, condition. Of course, many other factors contribute to the safety of modern aviation.
To say that the aviation industry has learned to make ultrareliable
airplanes, in other words, is not to say that it has mastered the social and
organizational practices of building and regulating high technologies in
general. Sociologists will never find an infallible shortcut to technological
performance by documenting and dissecting the practices of institutions
that have achieved it. “One cannot create experience,” Camus once wrote;
“one must undergo it.” This is a poignant truth in an age in which, as
La Porte and Consolini (1991, p. 19) put it, “perhaps for the first time in
history, the consequences and costs associated with major failures in some
technical operations are greater than the value of the lessons learned from
them.” Bureaucracies mislead us when they invoke achievements in aviation as evidence of their mastery of other complex systems.
Coda
This is not an argument that is specific to aviation engineering, or even
to engineering in general. All man-made systems, from oil refineries to
financial models and pharmaceuticals, are built on theory-laden knowledge claims that contain uncertainties and ambiguities. Constructing such
artifacts requires judgments: judgments about the relevance of “neat”
laboratory knowledge to “messy” environments, for instance, or about the
similarity between tests and the phenomena they purport to represent.
These judgments are usually buried deep in the systems themselves, invisible to most users and observers. Yet they are always present, and to
varying degrees, they always hold the potential to “surprise.”
Ominous uncertainty is a fundamental dimension of modernity, therefore, and one that sociologists and publics must accommodate. To refuse
to fly airplanes until all disagreements are resolved, all “deviances” are
identified, and all "facts" are established is, quite simply, to refuse to fly airplanes. It follows that a constructivist understanding of technical knowledge is essential to understanding the worlds we are creating. It should frame a wide range of debates about foresight and failure, be it in markets, medicines, or machines. Sociologies of all spheres are compromised if they implicitly defer to expert "certainties."

32. The dangers posed by Aloha's MSD, remember, were a product of a complex relationship between the airplane's irregular bonding, fuselage design, and flight conditions.

33. The streets of Edwards Air Force Base, where the U.S. Air Force performs the majority of its test flights, are named after lost pilots: a constant reminder of innovation's perils.
REFERENCES
Air Safety Week. 2005. “Hercules Crash in Baghdad Points to Metal Fatigue in C130’s
Wing Center.” February 21.
Aubury, Martin. 1992. “Lessons from Aloha.” BASI Journal (June), http://www.iasa
.com.au/folders/Safety_Issues/others/lessonsfromaloha.html.
Barnes, B. 1974. Scientific Knowledge and Sociological Theory. London: Routledge &
Kegan Paul.
Beamish, Thomas. 2002. Silent Spill: The Organization of an Industrial Crisis. Cambridge, Mass.: MIT Press.
Bijker, W., T. Hughes, and T. Pinch, eds. 1987. The Social Construction of Technological
Systems: New Directions in the Sociology and History of Technology. Cambridge,
Mass.: MIT Press.
Bloor, David. 1976. Knowledge and Social Imagery. London: Routledge & Kegan Paul.
Clarke, L. 1989. Acceptable Risk? Berkeley and Los Angeles: University of California
Press.
Cobb, Rodger, and David Primo. 2003. The Plane Truth: Airline Crashes, the Media,
and Transportation Policy. Washington, D.C.: Brookings Institution Press.
Collins, Harry. 1985. Changing Order. London: Sage.
———. 1992. Changing Order: Replication and Induction in Scientific Practice. Chicago: University of Chicago Press.
Collins, Harry, and Trevor Pinch. 1993. The Golem: What Everyone Should Know
about Science. Cambridge: Cambridge University Press.
———. 1998. The Golem at Large: What You Should Know about Technology. Cambridge: Cambridge University Press.
Cushman, John. 1989. “U.S. Investigators Fault Aloha Line in Fatal Accident.” New
York Times, May 24.
David, Paul A. 1985. “Clio and the Economics of QWERTY.” American Economic
Review Papers and Proceedings 75:332–37.
Dörner, Dietrich. 1996. The Logic of Failure: Recognizing and Avoiding Error in Complex Situations. New York: Metropolitan Books.
Downer, John. 2007. “When the Chick Hits the Fan: Representativeness and Reproducibility in Technological Testing.” Social Studies of Science 37 (1): 7–26.
———. 2009a. “Watching the Watchmaker: On Regulating the Social in Lieu of the
Technical.” Discussion paper no. 54, June. London School of Economics, Center for
Analysis of Risk and Regulation.
———. 2009b. “When Failure Is an Option: Redundancy, Reliability, and Risk.” Discussion paper no. 53, May. London School of Economics, Center for Analysis of
Risk and Regulation, http://www.lse.ac.uk/collections/CARR/pdf/DPs/Disspaper53
.pdf.
———. 2011. “On Audits and Airplanes: Redundancy and Reliability-Assessment in
High Technologies.” Accounting, Organizations and Society 36 (4), in press. Preprint:
http://dx.doi.org/10.1016/j.aos.2011.05.001.
Durkheim, Émile. (1897) 1997. On Suicide. New York: Free Press.
Faith, Nicholas. 1996. Black Box. London: Boxtree.
Feeler, Robert A. 1991. “The Mechanic and Metal Fatigue.” Aviation Mechanics Bulletin (March/April): 1–2.
Galison, P. 2000. “An Accident of History.” Pp. 3–43 in Atmospheric Flight in the
Twentieth Century, edited by P. Galison and A. Roland. Boston: Kluwer.
Garrison, Peter. 2005. “When Airplanes Feel Fatigued.” Flying, September, http://
www.flyingmag.com/article.asp?section_id=13&article_id=578&print_page=y.
Gephart, R. P., Jr. 1984. “Making Sense of Organizationally Based Environmental
Disasters.” Journal of Management 10 (2): 205–25.
Gross, Paul, and Norman Levitt. 1994. Higher Superstition: The Academic Left and
Its Quarrels with Science. Baltimore: Johns Hopkins University Press.
Hackett, Edward, Olga Amsterdamska, Michael Lynch, and Judy Wajcman, eds. 2008.
The Handbook of Science and Technology Studies, 3d ed. Cambridge, Mass.: MIT
Press.
Hoffer, W. 1989. “Horror in the Skies.” Popular Mechanics (June): 67–70.
Hopkins, Andrew. 1999. “The Limits of Normal Accident Theory.” Safety Science 32
(2): 93–102.
———. 2001. “Was Three Mile Island a ‘Normal Accident’?” Journal of Contingencies
and Crisis Management 9 (2): 65–72.
Hutter, Bridget, and Michael Power, eds. 2005. Organizational Encounters with Risk.
Cambridge: Cambridge University Press.
Jasanoff, Sheila, ed. 1994. Learning from Disaster: Risk Management after Bhopal.
Philadelphia: University of Pennsylvania Press.
———. 2005. “Restoring Reason: Causal Narratives and Political Culture.” Pp. 207–
32 in Organizational Encounters with Risk, edited by Bridget Hutter and Michael
Power. Cambridge: Cambridge University Press.
Kuhn, Thomas. 1962. The Structure of Scientific Revolutions. Chicago: University of
Chicago Press.
Kusch, Martin. 2002. Knowledge by Agreement: The Programme of Communitarian
Epistemology. Oxford: Oxford University Press.
Langewiesche, William. 1998. Inside the Sky: A Meditation on Flight. New York:
Pantheon.
La Porte, T. 1982. “On the Design and Management of Nearly Error-Free Organizational Control Systems.” Pp. 185–200 in Accident at Three Mile Island, edited by
D. Sills, V. Shelanski, and C. Wolf. Boulder, Colo.: Westview.
———. 1994. “A Strawman Speaks Up: Comments on the Limits of Safety.” Journal
of Contingencies and Crisis Management 2 (2): 207–12.
La Porte, T., and P. Consolini. 1991. “Working in Practice but Not in Theory: Theoretical Challenges of ‘High-Reliability Organizations.’” Journal of Public Administration Research and Theory 1 (1): 19–48.
Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers
through Society. Cambridge, Mass.: Harvard University Press.
———. 1996. Aramis, or The Love of Technology. Cambridge, Mass.: Harvard University Press.
Latour, Bruno, and Steve Woolgar. 1979. Laboratory Life: The Social Construction of
Scientific Facts. Beverly Hills, Calif.: Sage.
MacKenzie, Donald. 1990. Inventing Accuracy: A Historical Sociology of Nuclear
Weapon Guidance. Cambridge, Mass.: MIT Press.
———. 1996. Knowing Machines: Essays on Technical Change. Cambridge, Mass.:
MIT Press.
———. 1998. “The Certainty Trough.” Pp. 325–29 in Exploring Expertise: Issues and
Perspectives, edited by Robin Williams, Wendy Faulkner, and James Fleck. Basingstoke: Macmillan.
Maksel, Rebecca. 2008. "What Determines an Airplane's Lifespan?" Air and Space Magazine (March 1), http://www.airspace.mag.com/need-to-know/NEED-lifecycles.html.
Marks, Paul. 2009. “Why Large Carbon-Fibre Planes Are Still Grounded.” New Scientist (August 22): 20.
Merton, Robert. 1973. The Sociology of Science: Theoretical and Empirical Investigations. Chicago: University of Chicago Press.
National Academy of Sciences. 1980. “Improving Aircraft Safety FAA Certification of
Commercial Passenger Aircraft.” Committee on FAA Airworthiness Certification
Procedures Assembly of Engineering, National Research Council, Washington, D.C.
NTSB (National Transportation Safety Board). 1989. Bureau of Accident Investigation.
“Aircraft Accident Report: Aloha Airlines, Flight 243, Boeing 737-200, N73711, near
Maui, Hawaii, April 28, 1988.” Report no. NTSB/AAR-89/03, Acc. no. PB89-91C404.
NTSB, Washington, D.C., June 14.
———. 2006. “Safety Report on the Treatment of Safety-Critical Systems in Transport
Airplanes.” Safety report NTSB/SR-06/02. PB2006-917003. Notation 7752A. NTSB,
Washington, D.C.
Perrow, Charles. 1982. “The President’s Commission and the Normal Accident.” Pp.
173–84 in Accident at Three Mile Island: The Human Dimensions, edited by D.
Sills, C. Wolf, and V. Shelanski. Boulder, Colo.: Westview.
———. 1984. Normal Accidents: Living with High-Risk Technologies. New York: Basic
Books.
———. 1994. “The Limits of Safety: The Enhancement of a Theory of Accidents.”
Journal of Contingencies and Crisis Management 4 (2): 212–20.
———. 1999. Normal Accidents: Living with High-Risk Technologies, 2d ed. Princeton,
N.J.: Princeton University Press.
Petroski, Henry. 1992. To Engineer Is Human: The Role of Failure in Successful Design.
New York: Vintage Books.
———. 1994. Design Paradigms: Case Histories of Error and Judgment in Engineering.
Cambridge: Cambridge University Press.
———. 2006. Success through Failure: The Paradox of Design. Princeton, N.J.: Princeton University Press.
Pinch, Trevor. 1991. “How Do We Treat Technical Uncertainty in Systems Failure?
The Case of the Space Shuttle Challenger.” Pp. 143–58 in Social Responses to Large
Technical Systems: Control or Anticipation, edited by T. La Porte. NATO Science
Series. Boston: Kluwer.
———. 1993. “‘Testing—One, Two, Three . . . Testing!’ Toward a Sociology of Testing.” Science, Technology, and Human Values 18 (1): 25–41.
Pinch, Trevor, and W. Bijker. 1984. “The Social Construction of Facts and Artifacts,
or How the Sociology of Science and the Sociology of Technology Might Benefit
Each Other." Social Studies of Science 14:399–441.
Porter, T. 1995. Trust in Numbers: The Pursuit of Objectivity in Scientific and Public
Life. Princeton, N.J.: Princeton University Press.
Power, Michael. 1997. The Audit Society: Rituals of Verification. Oxford: Oxford University Press.
———. 2007. Organized Uncertainty: Designing a World of Risk Management. Oxford:
Oxford University Press.
Quine, W. V. O. 1975. “On Empirically Equivalent Systems of the World.” Erkenntnis
9 (3): 313–28.
Reason, James. 1990. Human Error. Cambridge: Cambridge University Press.
Research and Innovative Technology Administration. 2005. “Transportation and
Safety.” Volpe Journal, http://www.volpe.dot.gov/infosrc/journal/2005/graphs.html.
Roberts, K. H. 1993. “Introduction.” Pp. 1–10 in New Challenges to Understanding
Organizations, edited by K. H. Roberts. New York: Macmillan.
Rochlin, G. 1989. "Informal Organizational Networking as a Crisis-Avoidance Strategy: US Naval Flight Operations as a Case Study." Industrial Crisis Quarterly 3: 614–24.
Rochlin, G. I., T. R. La Porte, and K. H. Roberts. 1987. “The Self-Designing HighReliability Organization: Aircraft Carrier Flight Operations at Sea.” Naval War
College Review 40 (4): 76–90.
Sagan, Scott. 1993. The Limits of Safety. Princeton, N.J.: Princeton University Press.
Scott, James. 1998. Seeing Like a State: How Certain Schemes to Improve the Human
Condition Have Failed. New Haven, Conn.: Yale University Press.
Snook, S. 2000. Friendly Fire. Princeton, N.J.: Princeton University Press.
Turner, B. A. 1976. “The Organizational and Interorganizational Development of Disasters.” Administrative Science Quarterly 21 (3): 378–97.
———. 1978. Man-Made Disasters. London: Wykeham.
U.S. Congress. 1997. “Aviation Safety: Issues Raised by the Crash of ValuJet Flight
592.” In Hearing before the Subcommittee on Aviation, Committee on Transportation
and Infrastructure, House of Representatives, 104th Cong., 2d sess., June 25, 1996.
Washington, D.C.: Government Printing Office.
Vaughan, Diane. 1996. The Challenger Launch Decision. Chicago: University of Chicago Press.
———. 2003. “History as Cause: Challenger and Columbia.” Pp. 195–202 in Report,
vol. 1, Columbia Accident Investigation Board. http://anon.nasa-global.speedera.net/
anon.nasa-global/CAIB/CAIB_lowres_intro.pdf.
———. 2005. “Organizational Rituals of Risk and Error.” Pp. 33–66 in Organizational
Encounters with Risk, edited by Bridget Hutter and Michael Power. Cambridge:
Cambridge University Press.
Weick, K. E. 1987. “Organizational Culture as a Source of High Reliability.” California
Management Review 29:112–27.
———. 1998. “Foresights of Failure: An Appreciation of Barry Turner.” Journal of
Contingencies and Crisis Management 6 (2): 72–75.
Weingart, Peter. 1991. “Large Technical Systems, Real-Life Experiments, and the Legitimation Trap of Technology Assessment: The Contribution of Science and Technology Studies to Constituting Risk Perception.” Pp. 5–18 in Social Responses to
Large Technical Systems: Control or Anticipation, edited by T. La Porte. NATO
Science Series. Boston: Kluwer.
Wynne, Brian. 1988. “Unruly Technology: Practical Rules, Impractical Discourses and
Public Understanding.” Social Studies of Science 18:147–67.