The Journal of Advancing Technology
Volume 2 | Spring 2005

Table of Contents
Letter from the Editors
A Lateral Approach to Game Development
JONATHAN HARBOUR
Maltese Physician Takes on the Greek Gang of Three: A Review of Edward de Bono's Theories and Publications
MATT JOLLY
Intelligent Content Delivery and the Bottom Line: Smart Websites
JOE MCCORMACK
Cyborging: Rhetoric Beyond Donna Haraway and the Cyborg Manifesto
ROCHELLE RODRIGO
A Conversation with Chris LaMont
CRAIG BELANGER
Software Development Practices: The Good, the Bad and the Ugly
EVAN ROBINSON
Giving the Antagonist Strategic Smarts: Simple Concepts and Emergence Create Complex Strategic Intelligence
MARK BALDWIN
Call for Submissions
The JAT Editors: Craig Belanger, Matt Jolly, Shelley Keating
Advisory Board: Dave Bolman, Julia T. Elefson, Jason Pistillo, Rebecca Whitehead
A Publication of the University of Advancing Technology
Letter from the Editors
Spring 2005
Welcome to the second issue of the Journal of Advancing Technology. Our search for knowledge in the
universe continues… For this and future issues, we cast our net wide with a request that our contributors
simply address the confluence of society and technology. In this issue, we believe, they have succeeded.
Two articles in these pages examine disparate approaches to games and game programming, and the
benefits of these approaches to our ability to think creatively and deliberately. Jonathan Harbour
proposes a methodology for games programming based on the lateral thinking techniques created by
Edward de Bono and famously used by the television character, MacGyver. Mark Baldwin presents a
method for creating a strategic antagonist to challenge an electronic game player.
Two articles in this issue address special problems technology companies may encounter as high technology grows from a very large industry into the probable future of most commercial
enterprises. Evan Robinson draws on his extensive experience within the gaming and
programming industry to present a survey of best, middling and worst practices—the good,
the bad and the ugly—in software development (with a sly nod to the 1966 Clint Eastwood-Sergio Leone
“spaghetti western” of the same name, which itself was a survey of best and worst practices in industry,
albeit among thieves during the Civil War). Elsewhere, Joe McCormack sheds light on the creation of
an alternative to commercially available web-tracking software.
Rochelle Rodrigo dissects the essay “A Cyborg Manifesto,” by critical feminist theoretician Donna
Haraway (author of Simians, Cyborgs, and Women: The Reinvention of Nature, among other
critical works). A rich example of hypertextual writing, Rodrigo’s piece allows no easy access to her (or
Haraway’s) material, but instead invites readers to navigate their own path through her essay and
source materials, allowing for a presentation that is both organic and, ultimately, surprising.
We also include a literature review and a conversation, respectively, with two of the more
interesting people we have encountered lately, both of whom not only had an idea, but a fantastic idea,
and have spent much of their careers proclaiming that idea to great effect. Within our pages, we examine
the works of lateral thinking guru Edward de Bono, which we believe are crucial to understanding his
methods and thinking systems. We also present an interview with film director and producer Chris
LaMont in which he illuminates the financial and aesthetic factors involved in organizing the Phoenix
Film Festival and explores the place of digital filmmaking in a world where the video game market is
rapidly overtaking the motion picture industry as the most lucrative of all entertainments.
Enjoy these works and plan on many more to come as we expand our horizons, tear down more boundaries
between the reader and the text and continue that search for the elusive universal knowledge…
--The JAT Editors
A Lateral Approach to
Game Development
Jonathan Harbour
University of Advancing Technology
As a teenager, I was a huge fan of the TV show,
MacGyver, starring Richard Dean Anderson.
MacGyver was produced by Henry Winkler, John
Rich and Paramount Pictures and aired on ABC
for seven seasons between 1985 and 1992. The
character MacGyver was a former Special Forces
agent who worked for a think tank organization
called the Phoenix Foundation. MacGyver was
hired to solve problems that no one else was
capable of solving, and he used his vast knowledge
of science (mainly chemistry) to come up with
novel solutions to the problems he faced, most of
which were life threatening.
The real appeal of this show was not a
broad-shouldered hero figure, bearing weapons
of personal destruction, who rescued an
attractive woman in distress every week.
MacGyver promoted creative problem solving
not just for its lead character, but for the
production of the show itself, which was
unorthodox and a challenge to the
stereotypical shows of the 1980s. The show
tackled subjects that were rarely featured
on TV and made assumptions about the
viewer that did not insult one’s intelligence,
but instead fostered empathy for the
show’s characters and their challenges.
Possibly the most entertaining aspect of
MacGyver was the intrigue and mystery in each
episode: how will MacGyver solve the next
problem using little more than his trademark
Swiss Army knife, a paperclip and a candy
bar? In the pilot episode, MacGyver used a
chocolate bar to seal an acid leak and the lens
from a pair of binoculars to redirect a laser
beam. Although some of his solutions might
seem improbable, they do demonstrate the
value of creative problem solving.
MacGyver was a hero figure for me at an
impressionable age, who led me to believe that
there are always alternatives and always more than
one solution to a problem. One did not always
need to approach a problem from the front (or
attack an enemy’s front line). In a very real sense,
MacGyver taught me to think laterally as a young
man, and I wholly embraced the concept,
adopting this way of thinking. As an aspiring game
programmer, I found that this form of thinking—
which at the time I referred to as creative problem
solving—helped me to solve problems in ways that
surprised those around me.
I developed a bit of a reputation as a problem
solver among friends, relatives and fellow
students. I came up with interesting ways to study
for exams. For instance, rather than cramming
the night before an exam, I would cram two
nights before the exam, and then allow my mind
to relax. I found that this
method allowed me to bury
the information more deeply
in my mind and retain facts
more effectively. This was like
storing information rather than
dumping raw data into my
mind, as is the case with a
typical “exam cram,” the goal of
which is to pass a test rather than
to learn something.
The concept has deep roots in many disciplines
and is most often illustrated by historical military
scenarios wherein a military strategist is able to
overcome a numerically superior enemy. How do
you defeat someone who is much stronger than
you? Obviously, if you cannot beat an opponent
head on, you must come up with alternative
strategies, and many such life-and-death situations
throughout history have required lateral thinking.
According to Dr. de Bono, one practical way to
solve a problem using lateral thinking is to break
up a problem into elements—or to itemize the
factors in the problem—and then recombine
them in different ways. The recombination of
elements in a problem is the part of the process
that requires the most creativity, although random
combinations often do reveal new solutions that
one might not have otherwise considered. Edward
de Bono describes this type of thinking as “water
thinking,” as opposed to “rock thinking.”
What is Lateral Thinking?

The phrase "lateral thinking" was coined by Edward de Bono, a Rhodes scholar and the author of more than 60 books, including the bestsellers Six Thinking Hats, The Six Value Medals and Lateral Thinking: Creativity Step by Step. According to de Bono, lateral thinking "treats creativity as the behavior of information in a self-organizing information system, such as the neural networks in the brain." In other words, creativity is not a gift but a learnable skill that everyone can utilize when thinking—creativity is a novel feature of the human mind.

What are water thinking and rock thinking? Water is a malleable substance—a liquid—that has the ability to fill any space. Thinking in this manner allows us to adjust our perceptions and perceive things in different ways. "Rocky" concepts, on the other hand, cannot be used to solve a myriad of problems because rocks are not malleable. Imagine that you have been
encouraged to think a certain way throughout
your life. Given that, even if you are a very
creative person, are you truly able to think
creatively? According to de Bono, yes: lateral thinking is a skill, and creativity itself can be learned through practice. Water thinking allows us to capitalize on the innate problem-solving capabilities of our brains, while rock thinking is limited to pre-packaged, if you will, problem-solving methods.
How can you learn to think laterally? You can start
by changing your perceptions and assumptions
when you approach a problem. This is called
perception-based thinking: using your brain’s
innate skill at pattern recognition. In other words,
if you allow your mind to perceive things without
allowing your conscious effort at logical thought
to get in the way, you may find powerful and
elegant solutions to problems.
As you might imagine, even thinking about lateral thinking provides new boundaries for us. If you begin to see yourself as a "rock thinker," and work toward expanding your boundaries to become more of a "water thinker," then your inner thought processes are changed. What happens when you begin to think in a different way—such as laterally? You are causing a physical change to take place in your brain. I don't know about you, but I am absolutely fascinated by the prospect that, by thinking, I am capable of changing the neural connections in my brain. This gives the phrase "change of mind" a very tangible, literal meaning.

Applying Lateral Thinking to Game Programming

Game programming requires a whole new layer of thinking above and beyond the techniques taught by traditional computer science (CS) and computer information systems (CIS) curriculums. Unlike a database application or a word processor, a game is a dynamic, fast-paced, highly complex computer program that tickles three out of five of our senses (sight, hearing and touch, excluding taste and smell). Database programs simply do not need to render complex shapes, process user input, coordinate input with an object, check for collision with other objects on the screen, and so forth. The query statements used to retrieve and manipulate data are the most complex aspects of a database application. In other words, a game programmer must write code that is entirely asynchronous and maintain a large amount of state-based information, while a traditional application is written to operate synchronously. To make a truly good game, one must begin to think asynchronously.

Not only must a game programmer exhibit vivacious creativity, he must learn to apply creative problem solving to the very logical process of writing source code to construct a complete program. To understand the process that a game programmer must go through, imagine the work of a master architect such as Edward Larrabee Barnes, who is famous for having used the modular approach to architecture. Barnes preferred pre-formed concrete, stone and glass panels that not only reduced construction time, but also enforced modularity in his structures. Barnes was able to design complex and beautiful structures using simple building blocks, while also remaining flexible in his designs.

Game programmers—not to be confused with game designers (who are comparable to movie directors)—must follow the same coding standards and guidelines, and use the same tools, as CS and CIS programmers (who produce commercial and business application software), but are also required to solve very difficult problems on the fly; that is, while producing a steady flow of useful code for a game. The ability to solve problems without pausing for any length of time to work them out is a developed skill. Game programmers must use creative problem solving—lateral thinking—not just to get the job done, but to remain actively employed. If you approach a game programming problem head on and try to solve it with brute force methods, this is like attacking a numerically
superior army and counting on the skill of your soldiers to overcome the odds. The problem with this approach is that a numerically superior army will always win an engagement if attacked head on, all other things being equal. Therefore, it is essentially a suicidal tactic.
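The contrast drawn here between synchronous applications and asynchronous, state-driven game code can be sketched as a minimal fixed-timestep game loop. This is a hypothetical illustration in Python, not code from any particular engine; the Entity class and its fields are invented for the example.

```python
class Entity:
    """A game object that carries its own state from frame to frame."""
    def __init__(self, name, x, velocity):
        self.name = name
        self.x = x
        self.velocity = velocity

    def update(self, dt):
        # Each entity advances independently; nothing blocks waiting
        # for input the way a synchronous application would.
        self.x += self.velocity * dt

def run_game_loop(entities, frames, dt=1.0 / 60.0):
    """Fixed-timestep loop: poll input, update state, then render."""
    for _ in range(frames):
        # 1. Poll input (non-blocking; a real engine reads events here).
        # 2. Update every state-carrying object.
        for entity in entities:
            entity.update(dt)
        # 3. Render the scene (stubbed out in this sketch).

player = Entity("player", x=0.0, velocity=3.0)
run_game_loop([player], frames=60)   # one simulated second at 60 fps
print(round(player.x, 2))            # 3.0
```

The point is structural: state lives in the objects between frames, and the loop never stops to "finish" a problem, which is the asynchronous habit of mind described above.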
The best approach for a game programmer, the
only approach that the most successful game
programmers have learned, is to solve
programming problems laterally using a creative
approach. For instance, suppose you are a 3D
engine programmer who is working on the very
core engine of the game upon which an entire
project relies to succeed. To say that you are
under pressure to produce high quality code that
is fast and efficient is an understatement—in fact,
it is simply a given requirement of your job. You must produce an extremely fast 3D engine in a limited amount of time. Do you tackle the problem head on with the time you have (e.g., a limited number of soldiers), or do you find an alternative way to write the code?
One method for solving problems laterally is to divide a problem into elements and then recombine them in different ways in order to produce new and unexpected results. Your 3D engine must have the ability to load a 3ds max file containing a game level (i.e., "the world"), and it must render the level, containing mountains, rivers, lakes, trees, houses, rocks and many other natural objects and constructed buildings. You start by writing the code to draw textured faces—also known as polygons—on the screen. You take it to the next step by drawing simple 3D-textured shapes. Your third step is to load a 3ds max file describing an entire game level broken down into individual meshes (a collection of faces/polygons). From that point forward, the engine is refined, features are added and the rendering code is optimized.

What about the popular team technique called brainstorming? Brainstorming is a technique invented by the advertising industry to come up with seemingly random approaches to build product awareness. This technique may be useful for the recombining process, as I explain below, but not solely as a problem-solving tool. Brainstorming can, at best, give you a list of goals that you would like to achieve; it should not be used to come up with solutions.

A Lateral Thinking Exercise

How might you use lateral thinking to produce the game engine as quickly and efficiently as possible? More importantly, how might you write the engine with capabilities that the designer and other programmers might not have foreseen? In other words, you want the engine to be flexible, customizable and extendable. First, break down the problem into a list of elements such as the following (this is just a simplistic list):

List 1
1. Faces or polygons.
2. Meshes or models.
3. Textures or bitmaps.
4. Rendering the scene.
5. Game loop.

At this point, you want to come up with a list of unrelated words, perhaps at random, and this is where brainstorming may be helpful, but not in every instance. It is most helpful to write down a list of unrelated words or phrases such as the following (note that I include alternate versions of each word to help with the thought processes):

List 2
1. Tree or forest.
2. Freeway or road.
3. Book or library.
4. Phone or communication.
5. Steel or metallurgy.

Now apply each of the items in List 1 to each of the items in List 2 and note any interesting possibilities that the combinations reveal. This is where a cultivated skill in creativity may help, because associations are not always obvious:
1. Faces/polygons AND tree/forest.
2. Meshes/models AND tree/forest.
3. Textures/bitmaps AND tree/forest.
4. Rendering the scene AND tree/forest.
5. Game loop AND tree/forest.
Examine these combinations: what associations
can you come up with? Remember, this is an
exercise in pattern recognition, not in logical
step-by-step problem solving. You want to allow your mind to come up with these water associations without allowing your rocky assumptions to affect the result. While everyone's
results will be different, a real benefit to this type
of exercise arises when several members of a team
are able to find some shared associations. Here is
what I have come up with:
1. Can new polygons be grafted onto existing
polygons like leaves on a branch?
2. What if the game engine allowed new models to grow or spring up without being loaded from a file, and what if we use #1 above to help facilitate this? We might then need only one tree model, with the engine creating thousands of variations of that tree automatically.
3. A forest is home to many animals as well as
trees. What if the engine has the ability to
construct a more compelling and realistic
game world that is not entirely designed by
using a large store of textures and models?
4. Perhaps the rendering engine will have a
main “trunk” that does most of the work,
and any special effects or features can be
added on later using add-on modules, which
are like branches.
5. The game loop needs to be as fast as possible
at rendering, but timing must be maintained
for the animated objects in the game. What
if each object in the game is spawned off
as a separate thread, like a seed falling
from a tree in the forest, and that thread
is self-contained with its own behaviors
in the game?
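The mechanical half of this exercise, pairing every item in List 1 with every stimulus in List 2 so that no combination is skipped, can be sketched in a few lines of Python (a hypothetical helper using itertools; the creative association for each pair is still left to the reader):

```python
from itertools import product

elements = ["faces/polygons", "meshes/models", "textures/bitmaps",
            "rendering the scene", "game loop"]          # List 1
stimuli = ["tree/forest", "freeway/road", "book/library",
           "phone/communication", "steel/metallurgy"]    # List 2

# Cartesian product: every element paired with every stimulus word.
prompts = [f"{element} AND {stimulus}"
           for element, stimulus in product(elements, stimuli)]

print(len(prompts))   # 25 pairings
print(prompts[0])     # faces/polygons AND tree/forest
```

Generating the pairings mechanically guarantees coverage; the lateral-thinking step is what you do when you stare at each prompt.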
Final Thoughts
As you can see, some of these associated results
are interesting, and some are probably not useful
at all, but the important thing is to follow this
technique and do the associations with the other
words and word phrases in the list. After coming
up with associations for tree/forest, you would
move on to the second item in List 2 (freeway or
road) and see what associations you can make.
This process need not have a finite list of words
either. You might grab a random word out of a
dictionary and produce a hundred or more
interesting associations, any one of which may help
you to solve an otherwise intractable problem.
As applied to game programming, you can see
that even a trivial example produces some
fascinating possibilities. And that, as far as lateral
thinking goes, is the whole point of the exercise.
MacGyver used intelligence, a vast knowledge of
science and lateral thinking to earn himself a
reputation as the man to call when a difficult
problem arises. Likewise, as a game programmer,
if you learn to use your intelligence, your vast
knowledge of computer systems and a heavy dose
of lateral thinking, you may earn your own
reputation and be surprised by the difficult
problems you are able to overcome.
References

de Bono, E. (1967). New Think: The Use of Lateral Thinking in the Generation of New Ideas. New York: Basic Books.
de Bono, E. (1973). Lateral Thinking: Creativity Step by Step. London: Perennial.
de Bono, E. (1992). I Am Right—You Are Wrong: From This to the New Renaissance: From Rock Logic to Water Logic. London: Penguin Books.
de Bono, E. (1999). Six Thinking Hats. Boston, MA: Back Bay Books.
de Bono, E. (2005). The Six Value Medals. London: Vermilion.
The Great Buildings Collection. Great Architects: Edward Larrabee Barnes, United States. http://www.greatbuildings.com/architects/Edward_Larrabee_Barnes.html.
Winkler, H., & Rich, J. (1985-1992). MacGyver [Television series]. Paramount Pictures: ABC.
Biography
Jonathan has been an avid gamer and programmer for 18
years. He earned a BS in CIS in 1997 and pursued a software
development career before being brought onto the faculty
of UAT. He worked on two commercial games in the early
1990s, and has recently released two retail Pocket PC games,
“Pocket Trivia” and “Perfect Match.” Jonathan has authored
numerous books on game development covering several
languages, such as C, C++, Visual Basic and DarkBasic. Visit
his web site at www.jharbour.com.
Maltese Physician Takes on the Greek Gang of Three: A Review of Edward de Bono's Theories and Publications

Matt Jolly
University of Advancing Technology
In the introduction to his text I Am Right—You
Are Wrong, Dr. Edward de Bono announces the
impending arrival of a new Renaissance. “The
last Renaissance,” he explains, “was clearly
based on the re-discovery of ancient Greek
(about 400 BC) thinking habits” (1990, p. 3).
The next wave, accordingly, will place greater
emphasis on creative, perception-based
thinking, exploration, construction, design and
the future (1990, pp. 26-27). In his first book,
New Think (also published as The Use of Lateral
Thinking), Edward de Bono coined the term
lateral thinking, a concept that is central to his
“new Renaissance,” and one that permeates
almost all of his 62 publications (1968). By way
of defining the term, de Bono contrasts lateral
thinking with traditional (vertical, classical)
thinking. He suggests that at the root of
vertical thinking is the rigid system of thinking
that was defined and employed by Socrates,
Plato and Aristotle (whom he routinely refers
to as the “Greek gang of three”). De Bono
rejects many of the assumptions inherent in
Cerebral Space, by Dawn Lee
classical thought, including the notion of
universally accepted objective truth,
argument as a tool for discovering truth (i.e.,
dialectic or the Socratic method, and
rhetoric) and the binary structures that are
foundational to logic (winner/loser,
right/wrong, true/false) and which posit
argument as adversarial. De Bono sees these
ancient structures inelegantly practiced in
many contemporary forums for social
discourse: in courtrooms and legislative bodies,
in classrooms (as teaching methods), in homes
(as parenting methods), in the pages of
scientific journals and in the op-ed sections of
newspapers the world over.
Classical Thought
The structures of many contemporary means of
discovering knowledge have their roots in the
dialectical approach to philosophy. Bertrand
Russell defines dialectic as “the method of
seeking knowledge by question and answer”
(p. 92). For example, a lawyer in a trial engages
in a dialectic with a witness when he examines
the witness on the stand. The aim of the
dialectic is to expose—or reason toward—the
truth. Similarly, a teacher might direct a
discussion by asking a series of carefully
tailored leading questions that result in the
students’ “discovery” of the subject matter.
Dialectic is the primary mode of knowing that
is reflected in Plato’s record of the Socratic
dialogues. While the dialectical approach
certainly existed before Socrates, he was
the first to hone it as the sole means for
discovering truth.
The self-avowed myth of Socrates (as it is
recorded by Plato) was that he received from
the Oracle of Delphi the instruction that there
was no one wiser than himself. Considering
himself completely lacking in wisdom, he spent
his life in pursuit of someone who was wiser.
Eventually Socrates had offended so many
powerful men—by reducing their “wisdom”
to logical inconsistencies—that they tried
him for godlessness and the corruption of
youth. Plato’s Apology documents Socrates’
explanation of his dialectical approach to the
jury that sentenced him:
I go about the world… and search, and
make enquiry into the wisdom of any one,
whether citizen or stranger, who appears
to be wise; and if he is not wise, then in
vindication of the oracle I show him that
he is not wise; and my occupation quite
consumes me (1999a, p. 17).
In each of the dialogues recorded by Plato, the
process is fairly parallel: Socrates interrogates
each of his adversaries (including politicians,
poets and artisans) until he bumps up against
some inconsistency in their position. He then
deconstructs that position in such a way that
the audience is convinced that the other
speaker lacks sufficient wisdom.
While the dialectical method of Plato and
Socrates relied on reason and questioning,
Plato’s pupil, Aristotle, forged a concrete
connection between “the true” and another
form of discourse: rhetoric. In On Rhetoric,
Aristotle provides perhaps the clearest
articulation (by an ancient Greek) of those
foundational assumptions of classical thinking
that de Bono resists: “[R]hetoric is useful [first]
because the true and the just are by nature
stronger than their opposites… None of the
other arts reasons in opposite directions;
dialectic and rhetoric alone do this, for both are
equally concerned with opposites” (pp. 33-34).
As opposed to the dialectical approach of
Socrates, where an admittedly pompous
person was badgered by a series of questions
designed to expose or deconstruct that
victim’s flawed logic, the Aristotelian approach
to truth presents us with a veritable battle
between opposing positions, where truth and
justice are the likeliest victors because they are
naturally easier to support and defend. In our
courtroom example, dialectic exists in the
examination and cross-examination of the
witnesses, and rhetoric exists where the
plaintiff’s lawyer and the defendant’s lawyer
are engaged in the larger debate over the
suit, the outcome—the truth—of which is
ultimately determined by the strength of
their arguments.
Aristotle is also generally regarded as the
founder of formal logic. His greatest
contribution to this field was his development
of the syllogism, a form of logical argument
that presents two truths (premises) from which
a third truth can be derived. The most famous
example of a syllogism is as follows:
Premise 1: All men are mortal.
Premise 2: Socrates is a man.
Conclusion: Therefore, Socrates is mortal.
This approach to truth-making is almost
algebraic in its construction. De Bono aptly
describes this logic as the process of managing
the tools “yes” and “no.” So long as the syllogism
is sound and we say “yes” to the first two
premises, then we are required to agree with
the conclusion.
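De Bono's "yes/no" description of the syllogism can be made concrete in code: once both premises are granted, the conclusion is forced. The sketch below is illustrative only, modeling "all men are mortal" as a subset relation between sets:

```python
men = {"Socrates", "Plato", "Aristotle"}
mortals = men | {"Bucephalus"}       # everything in `men`, and then some

premise1 = men <= mortals            # "yes": all men are mortal
premise2 = "Socrates" in men         # "yes": Socrates is a man
conclusion = "Socrates" in mortals   # forced: Socrates is mortal

# Saying "yes" to both premises leaves no room to deny the conclusion.
print(premise1 and premise2 and conclusion)   # True
```

The construction is, as the article says, almost algebraic: the conclusion is not discovered, it is entailed.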
The paragraphs above have briefly introduced
dialectic, rhetoric and the binary, adversarial
system that underpins each. De Bono’s third
concern in relation to these classical models
for thinking is their dependence on
objective truth:
… humans have a natural disposition to the
true and to a large extent hit on the truth;
thus an ability to aim at commonly held
opinions is a characteristic of one who has
a similar ability to regard the truth…
(Aristotle, pp. 33-34).
Embedded in this passage is the conviction that
“the true” is a fixed, shared and innate quality of
humankind. As a pupil of Plato, it’s likely that
Aristotle received this notion of the good from
his teacher. In The Republic, Plato makes a clear
delineation between the subjective truth and
objective truth:
… those who see the many beautiful, and
who yet neither see absolute beauty, nor
can follow any guide who points the way
thither; who see the many just, and not
absolute justice, and the like—such
persons can be said to have opinion, but
not knowledge… But those who see the
absolute and eternal and immutable may
be said to know, and not to have opinion
only (1999b, p. 135).
He goes on to argue, in “Book IX,” that the
world of the senses exists only as an imitation
of the singular, true form that was created by
God. For this section, Plato uses bed-making as
an example, demonstrating that there is a
singular form, made by God, that possesses
what we might call bedness, and all other beds
are created in imitation of this singular and
perfect form. To possess knowledge is to have
knowledge of that ultimate form, and not the
imitations of it which belong to the false world
of our senses (1999b, pp. 219-24).
Edward de Bono’s Response to
Classical Thinking
Throughout Edward de Bono’s prolific career,
he has repeatedly distinguished his approach
to thinking from those classical structures
outlined above—what he alternately refers
to as “vertical thinking” (1970), “old think”
(1972), “table-top logic” (1990a), “rock logic”
(1990b) and “judgment-based thinking”
(2000). In New Think, de Bono announces his
departure from traditional logical forms:
Vertical thinking has always been the only
respectable type of thinking. In its ultimate
form as logic it is the recommended ideal
towards which all minds are urged to
strive… If you were to take a set of toy
blocks and build them upwards, each
block resting firmly and squarely on
the block below it, you would have
an illustration of vertical thinking.
With lateral thinking the blocks are
12
process of rain falling on a field: as it falls,
the rain, in conjunction with gravity, erodes
scattered around… the pattern that may eventually emerge can be as useful as the vertical structure (1968, pp. 13-14).

To a large extent, de Bono sees the logical structure as a manifestation of the electrical and chemical functioning of the brain. In his The Mechanism of Mind (1969), de Bono introduces a model of the brain that explains this connection. The book relies largely on inductive reasoning, examples and metaphors to demonstrate how the human brain functions as a self-organizing information system. As opposed to a passive system, where the organizational surface simply receives information (a metal-framed filing cabinet is a good example), a self-organizing system is one where both data and the surface that stores it exist in a reinforcing, fluctuating relationship. He compares the functioning of the brain to the landscape, resulting in the organization of the runoff into streams and pools (1969, p. 60). Similarly, as we encounter new information, our brain directs and organizes the data into familiar paths. The effect of this system is that humans come to rely on established, reinforced beliefs and assumptions. Logic, then, is capable of nothing more than the directing of data into pre-established and accepted channels of knowing. If we compare this cognitive model to Aristotle's rigid logic of the syllogism, we find that in both cases new ideas can only develop out of old ones: in Aristotle's logical system, we must present two accepted truths (all men are mortal; Socrates is a man) in order to arrive at a new truth (Socrates is mortal). At the same time, the very chemistry of our brains reinforces this thinking by relying on pre-existing patterns in order to make sense of new data. De Bono recognizes that the self-organizing nature of the brain is a powerful tool for defining and understanding experience; however, he argues that it is also a significant barrier to the discovery of new ideas.
In his 1945 History of Western Philosophy, Bertrand Russell laments that “through Plato's influence, most subsequent philosophy has been bound by the limitations resulting from this [dialectic] method,” concluding that “the dialectic method… tends to promote logical consistency, and is in this way useful. But it is quite unavailing when the object is to discover new facts” (pp. 92-93). In I Am Right—You Are Wrong, de Bono describes argument as “the basis of our search for truth and the basis of our adversarial system in science, law and politics” (1990a, p. 5). It is the binary logic of classical argumentation that de Bono most sweepingly rejects as counter-productive:

Dichotomies impose a false and sharp (knife edge discrimination) polarization on the world and allow no middle ground or spectrum… It is not difficult to see how this tradition in thinking has led to persecutions, wars, conflicts etc. When we add this to our beliefs in dialectic, argument and evolutionary clash we end up with a thinking system that is almost designed to create problems (1990a, p. 197).

As our natural system of thinking (the system that is routine and is reinforced through the self-organizing system of the brain) relies on clear and distinct binary relationships, we approach discourse as a conflict where there will inevitably be a winner and a loser, a right position and a wrong one.

De Bono further disagrees with those classical notions of Platonic truth. “Where [argument] breaks down,” he asserts, “is in the assumption that perceptions and values are common, universal, permanent or even agreed” (1990a, p. 5). De Bono admits that “[t]o challenge the all-embracing sufficiency of truth is sheer cheek… But to challenge our treatment of truth,” he argues, “is not to recommend untruth but to explore that treatment” (2000, p. 74). He goes on to distinguish different kinds of truth: game truth, experience truth and belief truth. The first of these is concerned with the rules that govern systems; he includes in this category mathematics and logic. The second of these is concerned with truths that are discovered through living, and these most directly correlate to the sensory world that Plato treated as an illusion or imitation of ideal forms. De Bono describes belief truth (the one most closely related to Plato's ideal forms) as the truth “held most strongly even though the basis for it is the weakest.” He correlates “stereotypes, prejudices, discriminations, persecutions, and so on” with this kind of truth. “Whatever the actual situation,” he argues, “belief truth interprets [the data] to support that belief system” (2000, p. 74).

Edward de Bono's Applied Theories

Throughout his publishing and speaking career, Edward de Bono has always chosen the path of future-oriented, solutions-based thinking over the scholarly models of analysis, deconstruction, definition and problematization. His rejection of those academic modes manifests in subtle and overt ways. In the conclusion to Six Thinking Hats, he writes that “the biggest enemy of thinking is complexity, for that leads to confusion.” Practical Thinking suggests that successful communicators should engage only as much detail and complexity as their needs and outcomes require when confronting a problem: a scientific understanding of the microwave, for example, is hardly required in order to warm up a frozen burrito. On a more fundamental level, he sees the scholarly forum as having broken down in the Renaissance: “Scholarship was perfectly acceptable at the time [of the Renaissance],” he argues, but “[t]oday it is much less appropriate, because we can get much more by looking forward than by looking backward” (1990a, p. 24). De Bono sees much more purpose in creating new ideas, the logic of which can be reconstituted once the solution is complete. He argues that many significant advances in science, medicine and technology have been arrived at through accident, intuition and association.

Most of de Bono's work, by extension, focuses on developing the applications for his ideas. These applications include detailed techniques for lateral thinking outlined in Lateral Thinking: Creativity Step by Step (1970), techniques for use of his “provocation operation” (Po) in Po: A Device for Successful Thinking (1972), discourse strategies outlined in Six Thinking Hats (1985), design thinking strategies presented in New Thinking for the New Millennium (2000) and lesson plans for his Cognitive Research Trust, or CoRT. Each of these texts works in some way to overcome the obstacles that de Bono sees inherent in the thinking systems that we inherited from the ancient Greeks.

For instance, in Lateral Thinking, de Bono identifies the steps we need to take in order to overcome the roadblock of judgment thinking, that mode of thought that is reinforced through the functioning of our brain as a self-organizing system. Lateral thinking, then, is an intentional mess. It is a technique that requires a purposeful willingness to disorder the subject matter, to interrupt and challenge those logical assumptions that will likely only create ordered systems of analysis rather than new ideas. These steps include (among others) challenging assumptions, suspending judgment, inviting random stimulation and allowing for fractionation (the reorganization of a problem's structure). Some techniques that he presents include the reversal method (where a thinker starts with some small known aspect of the problem and follows that small part out until it encompasses the whole), brainstorming, using analogies as a way of seeing the problem anew, using singular entry points as a way of refocusing energy and using random word generation in order to discover new aspects of the problem (1970).

Each of the techniques listed above functions to direct the mind away from the primary direction of thought and onto side paths; they force the mind to move laterally away from logical, obvious conclusions. De Bono classifies these techniques into three categories: challenges, where the path to the natural conclusion is purposefully blocked; concepts, where a thinker deliberately stops forward movement in order to explore possible paths; and Po, which introduces some concept for consideration that falls outside of the realm of the question. This could come in the form of a random word, an inversion of the idea, etc.

Detail from Got Bono on the Brain, by Lisa Stefani

De Bono explains that, because our thinking naturally falls into routine paths, we need some strategies for forcing our thinking to jump paths, to depart from the routine motions and therefore encounter problems in ways that we haven't previously imagined. This leaping of the mind from a known path to an unknown path requires an interruption of the velocity of our ideas. By provoking new conceptualizations of the problems, we can then conceive new, unanticipated solutions. Where vertical thinking rests squarely on logic, de Bono's lateral thinking rests on provocation.
While the above approaches to thinking
function to counteract the force of traditional
thought structures, they have only tangentially
approached De Bono’s concern for the
adversarial nature of most contemporary
discourse. De Bono identifies the dualistic
underpinnings of logic as the cause of our
adversarial approach. One solution to this
binary system is to apply de Bono’s Six Thinking
Hats to problem solving. Instead of falling into
the trap of embodying the judgment-driven
assumptions that argument necessitates (right and wrong, good and bad, winner and loser), toward which we are programmatically inclined, de Bono urges any person involved in
a dialogue to participate in each possible
position as it relates to the subject being
discussed. De Bono theorizes six distinct
approaches to problem-solving and assigns
them a representative color: information
(white), feelings (red), caution (black),
optimism (yellow), creativity (green) and
overview (blue). He suggests that, in most
cases, we confuse emotion with optimism, or
caution with overview—it all gets jumbled up
so that the group can’t identify what sort of
information it is receiving. The six hats
approach forces all group members to
participate in each stage simultaneously. Not
only does this approach necessitate the
inclusion of all perspectives, it also validates
each approach and encourages empathy among
participants as each is forced to examine the
problem from the others’ perspectives.
In the opening pages of Po, de Bono observed
that “[t]here are few things which unite hippies
and big-business corporations, painters and
mathematicians. The need for new ideas does
just this” (p. 4). By that time, de Bono had
already begun to attract a fair amount of
attention to himself and his “new ideas,” having
published three well-received texts that would
prove foundational to the further development
of his theories on thinking and creativity: The
Mechanism of Mind, Lateral Thinking, and Practical
Thinking. Over thirty years later, his
observation proves to have been an apt
prediction of the sort of career that the Maltese
physician would enjoy. A search through a
global news database revealed de Bono’s
increasing (and increasingly varied) involvement
with government leaders, commercial
businesses, educational institutions, student
groups and cricket teams—Australian coach
John Buchanan has given de Bono a certain level
of credit for the team’s 2003 World Cup
victory (Independent, p. 41). The de Bono
Group’s official webpage lists over seventy
companies and institutions that are currently
using his course materials, including companies
as large as Microsoft and AT&T, the New York
Times and the US Marine Corps, and the
Defense Intelligence Agency (2005). While this
list is impressive, it is more impressive to
consider the impact that Edward de Bono might
have on what he refers to as the software of the
brain, that foundational, unimpeachable notion
of human logic.
References
Aristotle (1991). On Rhetoric: A Theory of Civic Discourse
(G. A. Kennedy, Trans.). New York: Oxford University Press.
de Bono, E. (1968). New Think. New York: Harper and Row.
de Bono, E. (1969). The Mechanism of Mind. New York:
Harper and Row.
de Bono, E. (1970). Lateral Thinking: Creativity Step by
Step. New York: Harper and Row.
de Bono, E. (1972). Po: A Device for Successful Thinking.
New York: Harper and Row.
de Bono, E. (1985). Six Thinking Hats. New York: Harper and Row.
de Bono, E. (1990a). I Am Right—You Are Wrong: From
Here to the New Renaissance: From Rock Logic to Water
Logic. New York: Penguin Books.
de Bono, E. (1992a). Serious Creativity: Using the Power of
Lateral Thinking to Create New Ideas. New York: Harper
and Row.
de Bono, E. (2000). New Thinking for the New Millennium.
Beverly Hills, CA: New Millennium Press.
de Bono Group, The (2005). What We Do. Retrieved April
28, 2005, from The de Bono Group website:
http://www.debonogroup.com/what_we_do.htm.
Living review life etc: Mind think yourself gorgeous; the man who gave us lateral thinking now says he can make you beautiful (2004, May 30). Independent on Sunday, First Edition, Features, 41.
Plato (1999a). Apology (B. Jowett, Trans.). Retrieved April
25, 2005, from Project Gutenberg Web site:
http://www.gutenberg.org/etext/1656.
Plato (1999b). The Republic (B. Jowett, Trans.). Retrieved
April 25, 2005, from Project Gutenberg Web site:
http://www.gutenberg.org/etext/1497.
Russell, B. (1945). A History of Western Philosophy. New
York: Simon and Schuster.
Intelligent Content
Delivery and the
Bottom Line:
Smart Websites
Joe McCormack
University of Advancing Technology
Project Vtracker was
developed to meet the
requirements of
intelligent content
delivery and also serve
as a traditional statistical
reporting application.
Development of Vtracker
A successful website is one that delivers the
content a user expects to see without pushing
content, such as the dozens of links, tickers and
advertisements that most likely will
overwhelm the user and may detract from the
communication of the desired content. Large
websites and website conglomerations
(a network of websites owned by a single
entity) can easily lose users through gigantic
navigation themes, search results and a lack of
structural uniformity between websites. I've found that every option you give a user multiplies the ways you can lose that user.
As an active participant in rebuilding a portal
website, I found that one of the most
important objectives in this task was to be
able to tie intelligent content delivery
(i.e., delivering relevant content) to users,
while incorporating many additional features,
such as these:
1. Remembering user surf patterns each
time a website is visited, and maintaining
surf histories for each user. This facilitates
being able to tailor content for each
individual user in real time.
2. Remembering the user even if that user's
IP address changes or if the user switches
web browsers without heavy dependence
on cookies throughout multiple sessions.
3. Being able to cast a user to a “group” or
“genre” in order to readily determine
content rolls and relevance to the
website's purpose. This also allows for
the delivery of focused navigation themes
and content, thereby reducing site
navigation complexity for the user.
4. Being able to accurately determine how
many visitors visit a website, whether as a
new visitor who has never been to the
website before, or as a visitor who has
returned to the website several weeks later.
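The article does not publish Vtracker's source, but the four requirements above can be sketched as a small server-side tracker. Everything here (the class name, the choice of headers, the fingerprinting scheme) is a hypothetical illustration, not Vtracker's actual design; note that a header fingerprint survives an IP change (requirement 2) but not a browser switch, which would require a stronger identification strategy.

```python
import hashlib
import time
from collections import defaultdict

class VisitorTracker:
    """Minimal sketch of a server-side visitor tracker in the spirit of
    the requirements above (hypothetical API, not Vtracker's code)."""

    def __init__(self):
        # user_id -> list of (timestamp, page): the per-user surf history
        self.histories = defaultdict(list)
        self.known_users = set()

    def fingerprint(self, headers):
        # Combine several relatively stable request headers; the IP address
        # is deliberately excluded so the ID survives an IP change.
        raw = "|".join(headers.get(h, "") for h in
                       ("User-Agent", "Accept-Language", "Accept-Encoding"))
        return hashlib.sha1(raw.encode()).hexdigest()[:12]

    def record_hit(self, headers, page, ts=None):
        uid = self.fingerprint(headers)
        is_new = uid not in self.known_users   # requirement 4: new vs. returning
        self.known_users.add(uid)
        self.histories[uid].append((ts or time.time(), page))  # requirement 1
        return uid, is_new
```

A real deployment would persist the histories to a database rather than hold them in memory; this sketch only shows the shape of the bookkeeping.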
Project Vtracker was developed to meet the
requirements of intelligent content delivery
and also serve as a traditional statistical
reporting application. The use of existing statistical tracking software (i.e., a server-based application that tracks hits to web pages, browsers being used, etc.) seemed, at the inception of the project, to be a logical solution to the situation described above. However, one of the problems with commercial tracking software, such as WebTrends, is that it has no mechanism in place to dynamically influence website content delivery for individual users in real time based on each user's previous visit data (this is one problem; I will discuss other significant and related issues as well).
The most challenging aspect of Vtracker, both
during its inception and its later development,
was envisioning how the four points above
could be seamlessly integrated into a robust
architecture by developing new, imaginative
methods using current technology to derive a
real-world solution to the deficiencies of other
applications. In this case, I was able to rely on
server-side technology only, without having to
resort to installing any software on individual
users' computers or becoming reliant on
specific features of a handful of web browsers.
Despite this, it took less than a month to move
from inception to development and then
completion of an application that would
support the requirements.
Vtracker Features
Vtracker–the “brain” of the portal site–can not
only gather statistical data that is available with
common tracking software, but can also
directly control how a website serves content
to each individual user; it also has the capacity
to tailor such content delivery over time for
each user as their habits change. Because
Vtracker is able to remember users across
multiple sessions and maintain histories for
each user (see Figure 1), website traffic and
marketing effectiveness can be tracked with a
greater degree of precision compared to
Figure 1: Example of a user's history across multiple sessions.
Figure 2: Tracking where the user has come to this web page from. Also, this captures an IP Usage History.
traditional tracking applications.
One of the reasons why Vtracker is more
precise at delivering statistical tracking
information (such as web page hits) is that it
can distinguish between new and repeat users.
Vtracker can remember exactly where and
when a user does something on the website;
this data is available under the section labeled
“User Surfing Pattern On Site” (Figure 1).
Vtracker easily remembers date of access, time
and what page the user has hit, even across
multiple sessions.
Maintaining histories is vital, particularly
from a marketing perspective and from the
perspective of those who monitor a company's
bottom line: the effectiveness of an advertising
campaign can be more accurately determined
by distinguishing new visitors (who may be
visiting the website after seeing an ad) from
visitors who are simply revisiting the site and
may never have seen the ad.
Vtracker also categorizes visitors into interest
groups automatically and can dynamically
update categorizations in real-time as user
interests change, as seen under “User ‘Genre’”
(Figure 1). Because many advanced web
development and maintenance issues are
sensitive to the type of web browser the user
employs, Vtracker remembers the web
browser type. This, aside from assisting a
web development team with development
concerns, can also show web browser trends
and market shifts. The same also holds true for
the “Language” column (Figure 1); in this case,
Vtracker has identified the user as an English
speaker by “en-us.”
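The browser and language columns described above derive from standard request headers. A minimal sketch of how a tracker might extract them (the function name and the browser heuristic are illustrative assumptions, not Vtracker's code):

```python
def parse_client_info(headers):
    """Extract a browser family and the user's primary language tag
    from HTTP request headers (rough illustrative heuristic)."""
    ua = headers.get("User-Agent", "")
    if "Firefox" in ua:
        browser = "Firefox"
    elif "MSIE" in ua or "Trident" in ua:
        browser = "Internet Explorer"
    elif "Chrome" in ua:
        browser = "Chrome"
    else:
        browser = "Other"
    # Accept-Language is a comma-separated, quality-weighted list;
    # the first entry is the preferred language tag (e.g. "en-us").
    lang = headers.get("Accept-Language", "").split(",")[0].strip().lower() or "unknown"
    return browser, lang
```

Logging these two values per hit is enough to produce the browser-trend and language summaries the article describes.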
A side benefit of being able to remember and
track users across multiple sessions is that a
history of IP addresses the user has used can be
referenced in the event of security issues that
may arise (Figure 2).
Vtracker also has the capacity to remember
where a user comes from and can even track the
time at which the user came from another website. This is illustrated by “Sites User Has Been
Referred From” (Figure 2). This can impact
marketing campaigns and the bottom line by
showing which websites are referring the most
visits. A large number of links originating from a particular website may indicate that the website is generating a lot of interest; this may contribute to marketing decisions made in relation to such a website.
Figure 3: Example of user-agent gathering.
Figure 4: Separation of users from web bots.
Another statistical feature of Vtracker is that it
tracks browser, user languages and web bot activity
over time. This is a collective reporting feature
available in a different section of the program
(Figures 3 and 4).
Vtracker is more precise at delivering statistical tracking information (such as web page hits) because it is able to differentiate real users from web bots–programs that may be used to artificially inflate a website's reported traffic–by separating them and reporting their share of activity (both as a number of requests and as a percentage). This function allows quick
identification of such activities so that unbiased
traffic numbers can still be gathered. As
indicated earlier, this summarization shows the
popularity of various web browsers among the
user audience and also shows the access rates of
different web bots.
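Separating human visitors from web bots typically starts with the User-Agent string. The following sketch of the request-count and percentage summary described above assumes a record format and a marker list of my own invention; production bot detection would also consider robots.txt fetches, request rates and known bot IP ranges.

```python
KNOWN_BOT_MARKERS = ("bot", "crawler", "spider", "slurp")  # illustrative list

def split_bot_traffic(requests):
    """Partition request records into human and bot traffic and report
    the bot share of total requests (field names are assumptions)."""
    humans, bots = [], []
    for req in requests:
        ua = req.get("user_agent", "").lower()
        # Route the record by whether the user-agent matches a bot marker.
        (bots if any(m in ua for m in KNOWN_BOT_MARKERS) else humans).append(req)
    total = len(requests) or 1  # avoid division by zero on empty input
    return {
        "human_requests": len(humans),
        "bot_requests": len(bots),
        "bot_percent": round(100.0 * len(bots) / total, 1),
    }
```

Reporting the human-only counts alongside the bot percentage is what lets unbiased traffic numbers survive inflated crawler activity.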
Being able to maintain surfing histories on each
user can not only identify which web pages a
user prefers the most (Figure 5), but also allows
the fundamental process of delivering custom
content and navigation, allowing the website
architects and webmasters to do the following:
1. Identify the most visited web pages.
2. Identify content pages that are likely to confuse users.
3. Identify how long a user stays on the
website per session each time they visit.
4. Identify how users are going through
the website per session each time
they visit.
Since Vtracker was developed around the core
capability of being able to cast users into “groups”
or “genres” by remembering users and their
activities, areas of the website can be associated as
belonging to a “group.” From a statistical
standpoint, this allows Vtracker to report where a
user goes after entering a key page associated to a
“group” (Figure 6).
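Reporting where users go after entering a key page reduces to counting next-page transitions per user. A compact sketch under assumed data shapes (the article does not describe Vtracker's storage model):

```python
from collections import Counter

def pages_after_key(hits, key_page):
    """Count where users go immediately after hitting a key page.

    `hits` is an ordered list of (user_id, page) records; the result is a
    Counter of the next page each user visited after `key_page`."""
    last_page = {}   # user_id -> previous page seen for that user
    counts = Counter()
    for uid, page in hits:
        if last_page.get(uid) == key_page:
            counts[page] += 1
        last_page[uid] = page
    return counts
```

Aggregating these counters per key page yields the kind of "top pages after key pages" report shown in Figure 6.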
The Future of Vtracker
The inherent abilities of Vtracker, i.e., being able to remember users across multiple sessions, as well as what they do during those sessions, open the door to web-based, persistent artificial intelligence systems scalable to individual users. Imagine
a website-based AI system that could
communicate in some manner with the user,
then learn from that user and provide help to
locate content for the user.
As a practical reference, which could be
developed with technology available currently,
let me put forth the following concept: let's say
that a visitor comes to a website and starts
surfing. If no patterns can be discerned as to
what they are doing, it may indicate that the
visitor is curious, bored or simply confused.
Vtracker could be programmed to invoke a
trigger that would spawn a “virtual person” in
the user's web browser. After the virtual
person is active for that specific user, the
virtual person might initiate dialogue via text
or voice synthesis (if the user's connection can
support it). Because Vtracker can already
associate the user to a history, Vtracker's “brain” could be easily expanded to support storing more information about the user, such as dialogue transactions between the virtual person and the user. Through the
dialogue transactions, the virtual person would
Figure 5: Ranking web page user interest.
Figure 6: Reporting the top web pages users go to after hitting key pages.
be able to classify the user as curious, bored or confused, and then could offer assistance accordingly.
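The curious/bored/confused trigger could rest on a simple heuristic over a visitor's recent page history. The thresholds below are arbitrary illustrations; the article does not specify what signal Vtracker would actually use.

```python
def classify_visitor(pages):
    """Toy heuristic for the curious/bored/confused distinction,
    given the ordered list of pages a visitor has hit this session."""
    if not pages:
        return "bored"
    distinct = len(set(pages))
    revisits = len(pages) - distinct
    if revisits >= 2 and distinct <= 2:
        return "confused"   # circling the same few pages repeatedly
    if distinct == len(pages) and distinct >= 4:
        return "curious"    # steadily exploring new pages
    return "bored"          # little movement, no clear pattern
```

A trigger wired to this function could spawn the "virtual person" only when the label is "confused", for example, rather than interrupting every visitor.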
More “building blocks” could be applied to
Vtracker's foundation to push the envelope of a
persistent artificial intelligence system. One
block at a time, Vtracker and the virtual person that visually represents it could evolve
into sophisticated action and reaction to users
on an individual basis. Over time, Vtracker
could also have in-depth knowledge of a user
and be able to interact with the user on
multiple levels that may not be directly related
to the website's purpose. For example, Vtracker
may learn that the user really likes a certain
baseball team; using simple methods, Vtracker
could access data on the baseball team, such as
current standings, and communicate relevant
data to the user the next time the visitor goes to
the website. Impossibility only exists if you can't
change perception.
Biography
Joe McCormack, an alumnus of UAT and an instructor, has
been involved with web development using a wide range of
programming languages on different server platforms
since the mid 1990s. Joe has published two books on the
subject of server/web-based programming and actively
develops new methods and processes to automate and
streamline functions to improve performance and
task-handling abilities of web-based applications on
different server platforms. Joe is also the author of two
books on Perl/CGI programming.
Cyborging: Rhetoric
Beyond Donna
Haraway and the
Cyborg Manifesto
Rochelle Rodrigo
Mesa Community College
with the JAT Editors
Socialist feminism
should use the
concept of the cyborg
as a metaphor to
theoretically describe
and enact a
feminist agenda.
To the reader
Rochelle Rodrigo’s “Cyborging: Rhetoric Beyond
Donna Haraway and the Cyborg Manifesto” was not
written for the static page. It is a hypertext, a
collection of sixteen distinct, interrelated vignettes
that orbit a single theme. Each page of the original
online version of this document includes layers of
images and language so that a reader positioning the
mouse over a section of text or clicking on an image
might reveal complementary text beneath; we indicate these in our version by the use of alternating text colors, which also serve to distinguish between the different authors whose works are cited throughout the essay. Furthermore, each page of the online version includes hyperlinks to other pages within the work, allowing the reader to choose the direction of their reading based on their reactions to the linking words. Hyperlinks, in our version, are indicated by the arrows which traverse these pages; follow them to see where a mouse-click would have taken you on the web page.
The form of this essay contains significant meaning
for the Journal of Advancing Technology. The
hypertext medium challenges readers’ expectations for
linear order, authorial control and completion. Where a traditional, printed work moves from the first sentence to the next, from the first page to the last, the
hypertext essay has no distinct beginning or end, a
condition which is reflexive of most content in the
contemporary online world. Instead, the reader
participates in the ordering of this text in much the
same way that someone might organize sixteen
separate pictures into a collage. Where the
traditional text depends on an author to control
the movement of the work, the hypertext shares
the responsibility of authorship with the reader.
Meaning, in this essay, is found through
making connections more than it can be
found in a single thesis. In this way, the
hypertext medium is one that places discovery
over order, description over prescription, and
meaning-making over comprehension.
What you will read in the following pages is a
version of the essay—one of many possible
versions—that has been decided for you by the
editors of this journal. To read this essay in its
original form, visit http://www.mc.maricopa.edu/~rrodrigo/cyborg_theory/Cyborg_Theory.htm.
Note:
If citing this work from this print journal, the
author requests that the editors of the journal be
acknowledged as secondary authors. If citing it
from the online version, please include yourself
as the secondary author.
Introduction
In 1985, Donna Haraway published
“Manifesto for cyborgs, science,
technology, and socialist feminism in the
1980s” in the Socialist Review, volume 80.
In this piece, Haraway theorizes that
socialist feminism should use the concept
of the cyborg as a metaphor to
theoretically describe and enact a feminist
agenda. Haraway argues that the innate
duality of the cyborg can be a positive
role model for socialist feminist women
to combat hierarchized dichotomies that
inevitably discredit their subject
positions. However, until the readers of the “Cyborg Manifesto” clearly see, and better yet understand, the physical connection between the parts of Haraway's cyborg, her myth remains
trapped in the metaphorical image of
hard technology next to organic flesh.
Without understanding the connection,
the interface she places so much emphasis
on, individuals cannot emulate the
existence of the cyborg nor act out the
socialist-feminist goals Haraway calls for.
She does make a call to action; however, a
lack of understanding about how to be a
cyborg slows the reader's participation in
answering Haraway's hailing. A more
complex understanding of that connection
will allow for a different, possibly more
actively engaged, understanding of and
participation in Haraway's manifesto.
In this essay, I develop a better picture of
how to imagine Haraway's physical
connection between technology and the
flesh, that physical connection that is so
important to her political metaphor. I use
Elizabeth Grosz's theory of queer desire,
and corporeal feminism to help explore
Haraway's language of connection–the
terms, images and metaphors she uses
to describe the interface between
technology and organic life–and her
emphasis on the cyborg's role in pleasure
and survival. Haraway calls for a "cyborg
theory of wholes and parts" that
addresses issues of race, gender and
capital (181). Answering her call, I argue
Haraway's “cyborg” needs to become a
verb; a verb that describes the action,
interfacing, that occurs between her
cyborg subjects. I also use Mark Taylor's
theory of complexity and network
culture to help expand the concept to
cyborg as a form of rhetorical invention.
Cyborg Legacy
Haraway's “Manifesto” has started its own
line of cyborg studies that include both
texts that further theorize the cyborg as a
metaphor for various postmodern ways
of being and studies of actual cyborgs in
fiction and the “real” world, along with
studies that juggle both; for example
some of these texts include:
• Technoculture (1991)
• Feminism and the Technological
Fix (1994)
• The Cyborg Handbook (1995)
• The War of Desire and Technology at
the Close of the Machine Age (1995)
• Electronic Eros: Bodies and Desire in the Postindustrial Age (1996)
• Technologies of the Gendered Body: Reading Cyborg Women (1997)
• Virtual Culture: Identity and Community in Cybersociety (1997)
• Cyborg Babies: From Techno-Sex to Techno-Tots (1998)
• Growing up Digital: the Rise of the Net Generation (1998)
• Cyborgs@Cyberspace: An Ethnographer Looks to the Future (1999)
• How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999)
• Cyborg Citizen: Politics in the Posthuman Age (2001)
• Natural Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence (2003)

Haraway uses the union of flesh and machine found in the cyborg as a metaphor to theorize a way to overcome binaries in an attempt to achieve various socio-political goals. Simply, the cyborg is a metaphor for the hope, desire and possibility of overcoming binary differences to complete some action. Haraway does not necessarily give detailed directions on how this interfacing is possible. In other words, how might someone be Haraway's cyborg?
Layered Readings
This layered reading of Haraway begins a
cyborg rhetorical theory of invention. To
become Haraway's cyborg we must
interface; to interface, we must desire to
connect with others that rub us the
right, wrong, and every which way.
Through that interface, we change
ourselves and the world with which
we interface.
Grosz theorizes lesbian sexuality as
interfacing between different physical
entities. Grosz's description of the physical
interface gives a more detailed description
of how Haraway's cyborg actually starts to
interface. Grosz's theory also emphasizes
how the cyborg is made up of parts that
do not merge into one; in other words,
the interface is not a melting pot. The
interface is a temporary connection to
achieve some goal.
To move beyond the theories of Foucault,
Derrida and Baudrillard, Taylor theorizes
how individuals at the dawn of the
21st century must interface with the
world around them, a world of
“unprecedented complexity” where
“speed becomes an end in itself ” (3).
Although Taylor's theory is not
specifically about technology, he realizes
that the changes he theorizes are
derived from “the development of
cybernetic, information, and telematic
technologies” (4). Taylor theorizes how
individuals make meaning and participate
through the interface that is the current
techno-culture.
Beyond Binaries
Throughout the essay, Haraway describes
how the cyborg acts as a metaphor for
other “couplings” trapped as binary
opposites. The cyborg's connection
between man and machine represents the
possible connections between all other
binaries:
Self / Other
Mind / Body
Culture / Nature
Male / Female
Civilized / Primitive
Reality / Appearance
Whole / Part
Agent / Resource
(Haraway, p. 177)
conceal and to show. Enacting what it
designates, screen implies that concealing
is showing and showing is concealing.
Screen, screening, screenings:
Verb
Show
Reveal
Presence
Purity
Light
Noun
Hide
Conceal
Absence
Pollution
Darkness
Forever oscillating between differences it
joins without uniting, the meaning of
screen remains undecidable. Far from a
limitation, this undecidability is the
source of rich insight for understanding
what we are and how we know. In network
culture, subjects are screens and knowing
is screening” (Taylor, p. 200).
Cyborging
cy•borg (sï'bôrg') v. cyborged, cyborging,
cyborgs. To willingly unite two, or
more, different agents for a desirable
purpose and action.
Since Haraway argues that there is no
origin or center to the cyborg myth, the
definition of “cyborg” replicated from
Haraway's myth has no essential qualities
to ground the definition of cyborg as a
noun. There is no common essence, piece
of “T”ruth, to the cyborg image allowing
for a solid definition. Instead, Haraway's
cyborg represents a call for action; in that
light, “cyborg” should define the action
that Haraway calls for in the “Manifesto.”
“Cyborg,” as a verb, demonstrates the
connection, affinity, coupling, marriage,
coalition that is made between cyborgs.
The cyborg act is rhetorical; it is
situated. Individuals or parties
cyborg only when desired. Cyborging
is dependent on space and time;
therefore, the picture of the cyborg
changes. Haraway calls for a
cyborg-action between women of
various races, ethnicities, classes,
genders and sexualities with the “task of
Cyborging: Rhetoric Beyond Donna Haraway and the Cyborg Manifesto
recoding communication and intelligence
to subvert command and control” (p. 175).
Pleasurable Encounters
From the start, Haraway's cyborg
is sexualized, described in terms of
desire. She continues, arguing that the
"Manifesto" is "for pleasure in the
confusion of boundaries and for
responsibility in their construction"
(150, her emphasis). Emphasizing the
existence, even necessity, of "pleasure" in
the space between the "boundaries"
–the cyborg interface between the
technological and the organic–gives this
reader the freedom to describe the
cyborg-body's connections in terms of
the sexual act and desire. By emphasizing
"pleasure," Haraway also eradicates
images and ideas of a forced, violent
sexual connection. These cyborg
connections are made willingly; all
(leaving the possibility for more than
two) the participants anxiously desire and
await, possibly anticipate, the interface
between the meeting pieces.
Grosz concludes that erotic desire is: a mode
of surface contact with things, with a world
that engenders and induces transformations,
intensifications, a becoming something
other. Not simply a rise and fall, a waxing
and waning, but movement, processes,
transmutations (p. 204).
If thinking is not to be the eternal return
of the same, it must recast in ways
that refigure nature and culture as
well as body and mind so that
neither remains what it was believed
to be when it appeared the opposite
of the other. Instead of promoting
apocalyptic other worldliness through
new technologies, what we need is a
more radical style of incarnational
thinking and practice (Taylor, p. 224).
"In the midst of these webs, networks,
and screens, I can no more be certain
where I am than I can know when
and where the I begins and ends. I am
plugged into other objects and subjects
in such a way that I become myself in
and through them, even as they
become themselves in and through me"
(Taylor, p. 231).
If there is a challenge here for
cultural critics, then it might be
presented as the obligation to make our
knowledge about technoculture into
something like:
A hacker's knowledge, capable of
penetrating existing systems of rationality that
might otherwise be seen as infallible;
A hacker's knowledge, capable of
reskilling, and therefore rewriting the cultural
programs and reprogramming the social values
that make room for new technologies;
A hacker's knowledge, capable also of
generating new popular romances around the
alternative uses of human ingenuity (Ross, p. 132).
Boundaries
Although both Grosz and Haraway
emphasize that the two interfacing zones
remain separate functioning identities,
both also agree that the boundaries
between these separate spheres do not
hold. Grosz states that during an erotic
contact the "borders blur, seep, so that,
for a while at least, it is no longer clear
where one organ, body, or subject stops
and another begins" (p. 198).
The strange logic of the parasite makes it
impossible to be sure who or what is
parasite and who or what is host. Caught
in circuits that are recursive and reflexive
yet not closed, each lives in and through
the other. In these strange loops,
nothing is ever clear and precise;
everything is always ambiguous and
obscure (Taylor, p. 98).
Haraway similarly describes the cyborg
as "fluid," "ether," "quintessence," both
"material and opaque" (p. 153). While
describing the specific interface between
women's bodies and technology, Haraway
claims the bodies are "permeable" (p. 169)
and claims that same technology is
"polymorphous" (p. 161). She also describes
another binary boundary–“between tool
and myth, instrument and concept,
historical systems of social relations and
historical anatomies of possible bodies"
–as "permeable" (p. 164).
In a world where signs are signs of signs,
and images are images of images, all reality
is in some sense screened. The strange loops
of information and media networks create
complex self-organizing systems whose
structures do not conform to the
intrinsically stable systems that have
governed thought and guided practice for
more than three centuries (Taylor, p. 78).
Although the individuals remain true to
self, the interface merges, melds, blurs
and rubs. The interface is not about
penetration or acquisition or occupation,
it is about peaceful "rearrangements"
of participating factions.
According to complexity theorists, all
significant change takes place between
too much and too little order. When there
is too much order, systems are frozen and
cannot change, and when there is too
little order, systems disintegrate and can
no longer function. Far from equilibrium,
systems change in surprising but not
necessarily random ways (Taylor, p. 14).
Never Complete
Although Haraway imagines the cyborg
transforming “wholes from parts,” she
also emphasizes that the cyborg,
unlike Frankenstein's monster, does not
seek completion (p. 151), nor “rebirth”;
instead, “cyborgs have more to
do with regeneration.” According to
Haraway, cyborgs both build and
destroy “machines, identities, categories,
relationships, space stories” (p. 181).
Grosz's lesbian desire is also not about
heterosexist "complementarity."
As boundaries become permeable, it is
impossible to know when and where
this book began or when and where
it will end. Since origins as well as
conclusions forever recede, beginnings
are inevitably arbitrary and endings
repeatedly deferred (Taylor, p. 196).
That is what constitutes the appeal and
power of desire, its capacity to shake up,
rearrange, reorganize the body's forms
and sensations, to make the subject and
body as such dissolve into something
else, something other than what they
are habitually (Grosz, p. 204).
If we are to study cyborgs, the
technoscience that makes them possible
and the phenomenon around them, we
must reexamine our romanticism for
the “Whole,” our desire for the
transcendent, and our notions of the
human (Hogle, p. 214).
While exploring the work of Alphonso
Lingis on erotic sensibility, Grosz states
that desire's temporality “is neither
that of development… nor that of
investment… Nor is it a system of
recording memory… In this case, there is
not recollection but recreation, or rather,
creation, production” (p. 195). In both
cases, the cyborg and desire, the
connection is never stable, always
remade for a specific time and space, a
specific purpose.
Invention
Haraway's cyborg seeks otherness, seeks
difference, seeks "illegitimate fusions"
(p. 176), seeks "potent fusions, and
dangerous possibilities which progressive
people might explore as one part of
needed political work" (p. 154).
Contemporary interfaces involve mutual
reshaping and resignification of the body.
Cyborgs show little respect for crass
dualisms (e.g., technologies/bodies), but
prefer to mobilize the polymorphous
effects which emerge from fields of
techno-social dynamics. It is the
(in)ability to maintain distinct boundaries
which facilitates new readings of cyborgs.
We have attempted to advance retro- and
prospective interpretations of the
textualities in which hybrid figures appear
(Macauley & Gordo-Lopez, p. 442).
Modes of greatest intensification of
bodily zones occur, not through the
operations of habitual activities, but
through the unexpected, through the
connection, conjunction and construction
of unusual interfaces which re-mark
orifices, glands, sinews, muscles
differently, giving organs and bodily
organization up to the intensities that
threaten to overtake them, seeking the
alien, otherness, the disparate in its
extremes, to bring into play these
intensities (Grosz, p. 198).
If information is “a difference which
makes a difference,” then the domain
of information lies between too little
and too much difference. On the one hand,
information is a difference and, therefore,
in the absence of difference, there is no
information. On the other hand, information is a difference that makes a difference.
Not all differences make a difference
because some differences are indifferent
and hence inconsequential. Both too little
and too much difference creates chaos.
Always articulated between a condition
of undifferentiation and indifferent
differentiation, information emerges at the
two-sided edge of chaos (Taylor, p. 110).
Haraway's cyborgs, Grosz's desires and
Taylor's subjects do not do the expected;
they revel in the unexpected. Haraway
goes further, claiming that "no objects,
spaces, or bodies are sacred in
themselves; any component can be
interfaced with any other" (p. 163). All
three theories recognize that potent and
powerful things emerge from interfacing
with the known and the unknown,
cyborging with the new and the old.
Changed Separately
To cyborg is to change and transform. In
coming together to act, for pleasure
and survival, the individuals that
cyborg cannot help but change.
Haraway's cyborg, the ideal member
of a cyborging, "is a kind of disassembled
and reassembled, postmodern collective
and personal self" (p. 163). The
cyborging cyborg is one and many,
individual and collective, fragmented
and whole, never returning to one
subjectivity as the early individual before,
always having changed during the interface.
Haraway argues the cyborg skipped “the
step of original unity,” and therefore
does not “expect” its “completion”
(p. 151). Later she claims we, cyborgs,
have “the confusing task of making
partial, real connections” (p. 161).
Their relations cannot be understood in
terms of complementarity, the one
completing the other (a pervasive model
of the heterosexual relation since
Aristophanes), for there can be no
constitution of a totality, union, or merger
of the two. Each remains in its site,
functioning in its own ways (Grosz, p. 197).
Noise is not absolute but is relative to
the systems it disrupts and reconfigures,
and, conversely, information is not
fixed and stable but is always forming
and reforming in relation to noise.
Forever parasitic, noise is the static that
prevents the systems it haunts from
becoming static (Taylor, p. 123).
Conclusion
While discussing epistemology and
rhetoric, Gage claims there are two
primary relationships between rhetoric
and knowledge:
One is that rhetoric consists of techniques
for successfully communicating ideas which
are either unknowable or are discovered
and tested by means which are prior to or
beyond rhetoric itself; and . . .
The other way of regarding this
relationship is to view rhetoric itself as a
means of discovering and validating
knowledge (Gage, p. 203).
A cyborg rhetoric is definitely an example
of the latter. The cyborg's emphasis on
invention through interface constructs
rhetoric as both a psychological and
physical connection to others, whether
they are a person, place or thing. Cyborg
rhetoric also acknowledges that cyborgs are
always communicating, whether they
are explicitly aware of sending and receiving
or not. Most importantly, a cyborg rhetoric
acknowledges that interfaces are always
different–even if two individuals rub
against the same texts, those experiences
will not be the same. Part of the pleasure of
cyborg rhetoric is comparing experiences,
interfaces and interpretations. A cyborg
rhetoric is never done; it is only waiting for
the next interface so it may transform into
something new and different.
Haraway argues that the cyborg myth is
"for pleasure in the confusion of
boundaries and for responsibility in their
construction" (p. 150).
The zone between the physical
manifestation of the body and the virtual
has perhaps permanently altered the
way we gather, process and understand
knowledge (Flanagan, p. 449).
What constitutes the appeal and power of
desire, its capacity to shake up, rearrange,
reorganize the body's forms and
sensations, to make the subject and body
as such dissolve into something else
(Grosz, p. 204).
The moment of writing is a moment of
complexity in which multiple networks
are cultured (Taylor, p. 198).
Cyborg ReSources
On the one hand, Taylor claims “any
adequate interpretive framework must
make it possible to move beyond the
struggle to undo what cannot be
undone as well as the interminable
mourning of what can never be changed”
(p. 72). However, he also outlines the
characteristics of complex systems which
appear to leave space for Grosz's
rewriting and reclaiming.
• Though generated by local interactions,
emergent properties tend to be global.
• Inasmuch as self-organizing structures
emerge spontaneously, complex systems
are neither fixed nor static but develop
or evolve. Such evolution presupposes
that complex systems are both open
and adaptive.
• Emergence occurs in a narrow
possibility of space lying between
conditions that are too ordered and
too disordered.
• This boundary or margin is “the edge
of chaos,” which is always far from
equilibrium.
• Complex systems are comprised of
many different parts, which are
connected in multiple ways.
• Diverse components can interact
both serially and in parallel to
generate sequential as well as
simultaneous effects and events.
• Complex systems display spontaneous
self-organization, which complicates
interiority and exteriority in such a
way that the line that is supposed to
separate them becomes undecidable.
• The structures resulting from
spontaneous self-organization emerge
from but are not necessarily reducible
to the interactivity of the components
or elements in the system (pp. 142-143).
Haraway and Grosz want to reclaim images
and ideas that appear to be finished.
Grosz, in "Experimental Desire," wants to
use "a set of rather old-fashioned concepts
and issues that [she] believes remain
useful and can be revitalized if they are
reconsidered in terms of the politics and
theory of lesbian and gay sexualities”
(p. 207). She wants to “rewrite,” “reclaim”
and “clarify some of the issues that [she]
believes are crucial to the area now known
as 'queer theory'” (p. 207). She refuses “to
give up terms, ideas, strategies that still
work, whose potentialities have still not
been explored, and which are not quite
ready to be junked just yet” (p. 208).
No medium today, and certainly no single
media event, seems to do its cultural
work in isolation from other media, any
more than it works in isolation from
other social and economic forces. What
is new about new media comes from the
particular ways in which they refashion
older media and the ways in which older
media refashion themselves to
answer to the challenges of new
media (Bolter and Grusin, p. 15).
Significant inventions are like omelets:
you have to break some eggs to make
them happen. But we would do well to
spend some time contemplating those
broken shells, to learn more about the
discards and miscarriages, the “creative
destruction” that propels all high-tech
advancement (Johnson, pp. 207-208).
Haraway's use of the cyborg idea is a
similar attempt to rewrite, reclaim and
clarify a term she refuses to junk just
yet. Although The Terminator (1984)–one
of American culture's predominant
images of the "bad" cyborg–was just
released as Haraway published the
"Cyborg Manifesto,” there were plenty of
both good and bad cyborg images around,
both in science fiction literary texts and
films. Some of the most notable bad
images of cyborgs, prior to both
The Terminator and the “Cyborg Manifesto,”
are Mary Shelley's Frankenstein, Star Wars
(1977), Blade Runner (1982) and Videodrome
(1983). Haraway's choice to use Blade
Runner, Anne McCaffrey's The Ship Who
Sang (1969), Joanna Russ's The Female Man
(1975) and Vonda McIntyre's Superluminal
(1983) demonstrates her ability to rewrite,
reclaim and clarify science fiction,
predominantly feminist science fiction.
When I think of cyborgs and Blade
Runner, I always remember the Rutger
Hauer character, the "bad" cyborg.
Haraway, on the other hand, reminds the
reader that the character Rachel "stands
as the image of a cyborg culture's
fear, love, and confusion" (p.178).
Goals
Haraway, Grosz and Taylor share similar
goals. The subtitle to Grosz's essay
"Experimental Desire" is "Rethinking
Queer Subjectivity." Similarly, Haraway
begins the "Cyborg Manifesto" with,
"This chapter is an effort to build an
ironic political myth faithful to feminism,
socialism, and materialism.” Later in that
same introductory paragraph, Haraway
further defines irony, an important aspect of
her "political myth," as a "rhetorical
strategy" (p. 149). Taylor attempts “to show
how complex adaptive systems can help us
to understand the interplay of self and
world in contemporary network culture.”
All three want to theorize a new
rhetorical strategy, a new subjectivity
that gives agency and power to their
intended material audiences, socialist
feminists, queers, and participants of the
emerging network culture, respectively
(Haraway, p. 177).
I had hoped to show both that there is,
and must be, a place for the transgression
itself; that there must be a space,
both conceptual and material, for
(perpetually) rethinking and questioning
the presumptions of radicality, not from a
position hostile to radicalism or
transgression (as the majority of attacks
are) but from within (Grosz, p. 5).
I argue that the relationship between
information and noise can clarify recent
philosophical and critical debates about
interplay between system and structure,
on the one hand, and, on the other,
otherness and difference (Taylor, p. 15).
I am making an argument for the
cyborg as a fiction mapping our social and
bodily reality as an imaginative resource
suggesting some very fruitful couplings
(Haraway, p. 150).
Exigency
Grosz, in "Animal Sex," hypothesizes
new ways of understanding how desire
functions both within the individual and
within society. She theorizes about a new
way to understand desire, especially
lesbian desire, so it is not explained in
terms of "(heterosexual) norms of sexual
complementarity or opposition, and
reducing female sexuality and pleasure
to models, goals, and orientations
appropriated for men and not women"
(p. 188). In other words, Grosz is
theorizing a new way to make desirable
connections between subjects, surfaces,
fragments and/or pieces. This is exactly
the same type of connection Haraway's
manifesto and Taylor's theory rely on.
Using Grosz to further explain Haraway's
and Taylor's connections works since all
three are trying to theorize a way of
bonding, for both pleasure and agency.
Survival is the stakes in this play of
readings (Haraway, p. 177).
The task we now face is not to reject or
turn away from complexity but to learn
to live with it creatively (Taylor, p. 4).
… a desire to show the complex interplay
between embodied forms of subjectivity
and arguments for disembodiment
throughout the cybernetic tradition
(Hayles, p. 7).
References
Bolter, J. D. & Grusin, R. (1999). Remediation:
Understanding New Media. Cambridge: MIT Press.
Cameron, J. (Director). (1984). The Terminator
[Motion picture]. United States: Orion Pictures.
Clark, A. (2003). Natural Born Cyborgs: Minds,
Technologies, and the Future of Human Intelligence.
Oxford: Oxford University Press.
Croissant, J. L. (1998). “Growing Up Cyborg:
Development Stories for Postmodern Children.”
Cyborg Babies: From Techno-Sex to Techno-Tots (pp.
285-300). New York: Routledge.
Cronenberg, D. (Director). (1983). Videodrome
[Motion picture]. United States: Universal Pictures.
Davis-Floyd, R. & Dumit, J. (1998). Cyborg Babies:
From Techno-Sex to Techno-Tots. New York:
Routledge.
Flanagan, M. (2002). “Hyperbodies, Hyperknowledge:
Women in Games, Women Cyberpunk, and Strategies
of Resistance.” Reload: Rethinking Women +
Cyberculture (pp. 425-454). Cambridge: MIT Press.
Gage, J. T. (1994). “An Adequate Epistemology for
Composition: Classical and Modern Perspectives.”
Landmark Essays on Rhetorical Invention in Writing
(pp. 203-219). Davis, CA.: Hermagoras Press.
Gray, C. H. (1995). The Cyborg Handbook. New York:
Routledge.
Gray, C. H. (2001). Cyborg Citizen: Politics in the
Posthuman Age. New York: Routledge.
Grosz, E. (1995). Space, Time, and Perversion: Essays
on the Politics of Bodies. New York: Routledge.
Hakken, D. (1999). Cyborgs@Cyberspace?: An
Ethnographer Looks to the Future. New York:
Routledge.
Haraway, D. (1991). “A Cyborg Manifesto: Science,
Technology, and Socialist-Feminism in the Late
Twentieth Century.” Simians, Cyborgs, and Women:
The Reinvention of Nature (pp. 149-181). New York:
Routledge.
Haraway, D. (1997). Modest_Witness@Second_
Millennium.FemaleMan©_Meets_OncoMouse™:
Feminism and Technoscience. New York: Routledge.
Hayles, N. K. (1999). How We Became Posthuman:
Virtual Bodies in Cybernetics, Literature, and
Informatics. Chicago: University of Chicago Press.
Hogle, L. F. (1995). “Tales from the Cryptic:
Technology Meets Organism in the Living Cadaver.”
The Cyborg Handbook (pp. 213-216). New York:
Routledge.
Johnson, S. (1997). Interface Culture: How New
Technology Transforms the Way We Create and
Communicate. New York: HarperEdge.
Jones, S. (1997). Virtual Culture: Identity and
Community in Cybersociety. London, England: Sage
Publications.
Lucas, G. (Director). (1977). Star Wars [Motion
picture]. United States: 20th Century Fox.
Macauley, W. R. & Gordo-Lopez, A. J. (1995). “From
Cognitive Psychologies to Mythologies: Advancing
Cyborg Textualities for a Narrative of Resistance.” The
Cyborg Handbook (pp. 433-444). New York:
Routledge.
McCaffrey, A. (1969). The Ship Who Sang. New York:
Ballantine Books.
McIntyre, V. N. (1983). Superluminal. New York:
Pocket Books.
Penley, C. & Ross, A. (1991). Technoculture.
Minneapolis: University of Minnesota Press.
Ross, A. (1991). “Hacking Away at the
Counterculture.” Technoculture (pp. 107-134).
Minneapolis: University of Minnesota Press.
Russ, J. (1975). The Female Man. New York: Bantam
Books.
Scott, R. (Director). (1982). Blade Runner [Motion
picture]. United States: Columbia TriStar.
Shelley, M. (1985). Frankenstein: or the Modern
Prometheus. London: Penguin Books.
Springer, C. (1996). Electronic Eros: Bodies and
Desire in the Postindustrial Age. Austin: University of
Texas Press.
Stabile, C. A. (1994). Feminism and the Technological
Fix. Manchester: Manchester University Press.
Stone, A. R. (1995). The War of Desire and
Technology at the Close of the Machine Age.
Cambridge: MIT Press.
Tapscott, D. (1998). Growing Up Digital: The Rise of
the Net Generation. New York: McGraw-Hill.
Taylor, M. (2001). The Moment of Complexity:
Emerging Network Culture. Chicago: The University
of Chicago Press.
Biography
Rochelle (Shelley) Rodrigo teaches a variety of writing
and film courses, presented in a myriad of methods and
modes, at Mesa Community College, Mesa, Arizona.
She is also completing her doctoral studies at Arizona
State University with a dissertation about how faculty
cope with technological change. Shelley's research
includes studies about writing instruction, distance
learning, teaching with technology, and professional
development. Almost all of her studies involve
issues of how people interact with different
technologies. Shelley has published chapters in
Strategies for Teaching First-Year Composition, Composition
Pedagogy & the Scholarship of Teaching and an article in
Teaching English in the Two-Year College.
A Conversation
with Chris LaMont
Interviewed by Craig Belanger
Chris LaMont is a filmmaker, producer and
screenwriter who has been making films since the 4th
grade. He has been instrumental in the growth of the
local film community in Phoenix, Arizona, in his roles
as the founder and President of the Phoenix Film
Foundation, founder of the Phoenix Film Project
(www.phxfilmproject.com), Advisory Board member of
the Project (its filmmaker community support
program), and member of the Phoenix Film Society, a
membership-based film appreciation organization
that screens independent films and hosts film events in
the Valley of the Sun. Chris is an Associate Faculty
member at Arizona State University, teaching
independent film appreciation and production
courses, and is a member of the Scottsdale Community
College Film and Television Program. He is also the
co-founder of the International Horror & Sci-Fi Film
Festival, which is scheduled to debut in October 2005.
He and his business partner, Golan Ramras, as well
as an army of volunteer staff, initiated the first
Phoenix Film Festival in 2000. That first year, as he
explains in our conversation, festival attendance
reached 3,000 people; in 2005, attendance
reached 15,000, a five-fold increase in
just five years.
In this conversation, Chris LaMont discusses the
inherent difficulties in creating a successful film
festival, the community aspect of presenting films to
festival audiences, and the effects of the video game
industry and the digital film revolution on the
motion picture industry.
Journal of Advancing Technology: Why
did you start the festival in Phoenix? What
were some of the funding and sponsorship
issues that you encountered?
Chris LaMont: We started in Phoenix
simply because I grew up in Phoenix. I was a
filmmaker when I was in 4th, 5th, 6th
grades, all the way into high school, and I was
sort of an island unto myself. Filmmaking is a
collaborative art, and there was no one really
around me who was doing film. I had to
learn the way on my own, and I
basically went through the schools–Phoenix
College, Scottsdale Community College,
ASU–but there was no formal film program
here in Phoenix at all. Then I went to
Los Angeles and came back because I hated
Hollywood. What I discovered after I came
back was that I wanted to be John Sayles,
I wanted to make my own movies, I wanted
to chart my own course. And so I started
doing direct-to-video movies, nothing of
significance, but Phoenix wasn't really a place
where you could shoot a film. You couldn't
buy film here, you couldn't develop film here;
everything happened out of L.A., even though
Phoenix had some great locations.
Fast forward to the year 2000: I did a short
film parody of the movie Fight Club, directed
by David Fincher. My partner, Golan Ramras,
and I had done this movie, and we'd sent it to
a website called MediaTrip.com. They bought
the film the day they got it. It's a very good
parody, shot like a movie trailer for this movie
called Film Club, and it's basically [about]
frustrated independent filmmakers trying to
make movies. And so they're sweating in a
basement, you know, shooting film. Film Club
spoke to me as: “If you're frustrated with
something, do something about it rather than
let the frustration eat you apart.” So we did
this film and it got sold and we were on DVD
and on the Internet and were wondering one
night why there wasn't a good film festival in
Phoenix to show the film. And it sort of
dawned on us that–at the time Phoenix was
the sixth largest city in the country–every
large city deserves a film festival. And so Golan
and I just sort of looked at each other
and said, “Why don't we be the guys to
start it?” So we put aside our personal
production calendar and spent the next six
months getting the first Phoenix Film Festival
up and running.
JAT: What was the initial reaction from
people in Phoenix that you were starting a
film festival here?
CL: A lot of people didn't really know what a
film festival was. They'd heard of Sundance
(everybody's heard of Sundance) but if you ask
someone, “Have you been to a film festival?”,
nine times out of ten they'll say “No,” which
made it very hard to market the festival,
because we weren't just marketing a festival,
we were also educating the population about
what a film festival is–what happens and the
kind of product you see at the film festival. It's
been tough, but as we have grown–we're in
the fifth year now–knowledge of, and the
marketing impact of, the festival is at its
highest point ever. We have several marketing
partners, like the largest newspaper in
Arizona, the biggest news talk radio station in
Arizona, the biggest alternative rock
station...These are all partners of the festival,
so they're helping us to market the event. We
have the art film theatre chain, Harkins
Theatres; that's actually where we hold the
festival. We're slowly gaining ground through
that marketing, but as an educational tool as well
as a marketing tool, the brand is out there. But
with that has to be: “Here's what the festival's all
about.” So it's almost a two-tiered approach.
You asked earlier about funding. In year one,
we had no money. We literally had no money,
because to support anything, people have to
see it, people have to know what it is, and
what we found was people were willing to
trade out and give us support but not actually
write a check or give us dollars. I wrote the
first check for the festival for about $1,500 to
create our marketing and sponsor package so
we could go and try to solicit funds and
partners. The first partner we ended up
getting was the City of Phoenix Film Office,
A Conversation with Chris LaMont
(from L-R) Executive Festival Director Chris LaMont, Festival mascot Camerahead, actor Tom Arnold and Senior
Festival Director Golan Ramras at the 2005 Phoenix Film Festival. Arnold presented an award to
Lion’s Gate Films executive Tom Ortenberg and appeared at the Arizona premiere of the film, Happy Endings.
Photo by Charles Gabrean
and although they had no money–like every
municipality in the world, it seems, they had
no money–they said, “We'll give you a desk
and a telephone and a computer to help you get
things started,” so we had an office and an idea
to start a festival, and we were able to move
from there.
JAT: What films were on that first Phoenix
Film Festival program?
CL: It was pretty thin. We did get about 200
films submitted to the festival that year.
Compared to now: we're at 800 this year. To
quadruple that amount in five years is pretty
impressive. We called a lot of people who we
knew already had independent films and
asked them, “Can we show your movie?” And
they let us show their films for free. We also
slated a retrospective of Clerks, El Mariachi and
Reservoir Dogs and showed those as Midnight
Movies to further add to the slate. But we
did have some short film blocks, and we
actually got a couple of films from local
Arizona filmmakers, but we didn't actually have
our Arizona short filmmaking competition
until year two.
We were lucky, though–we actually got
Allison Anders, the director of Gas Food
Lodging. She came out because she had a friend
she wanted to see in Phoenix. She came for
the weekend and got a free hotel room. The
comedian Tommy Davidson, who was in Spike
Lee's Bamboozled, came out because he could
book a comedy gig the night before; he did
our awards ceremony and did a celebrity
conversation with us. So we had some stars
and some pretty good movies: a local feature
called How to Be a Hollywood Player or How to Be
a Star, something like that. We worked really
hard trying to spread the word locally. Our
hope was to have 500 people come out.
JAT: How many showed up?
CL: 3,000. We were able to break even on the
first festival year. It made us feel great. It was
like, “Alright, we've done our job, we created
a festival, we're done.”
JAT: The festival ends after year one and you
decide what? That it's a success, more than a
success?
CL: Oh, we were happy to break even, lots of people came out, it was a good thing, so we went back to making movies for six months. And then we started wondering, “Hey, are we going to do this again?” (laughter). We still had the office over at the City of Phoenix. I can't say “office”–we had a cubicle at the City of Phoenix. So we said, “Well, sure, let's give it one more shot. We'll see what happens with this, but you know, we gotta raise some money.” It was really hard trying to raise money, so we went out and tried to do that with sponsor packets, and I think we ended up raising about $8,000, maybe, in total cash that year from a couple of different sponsors. The thing was, the people that had come that first year had seen what it was like and they were willing to provide more resources in the second year.

And that's kind of the building process for all things: first, you have to prove you can make it, then you have to prove that not only can you make it, but you can make it successfully, and then continue to build on it and make it grow. Every year we've had more films submitted. Attendance has grown. If we ever start backtracking, then it tells me that something's wrong with the business model and what we're doing. If we don't sell enough tickets, if we don't have enough movies coming in, we need to fix that, but right now, the growth has continued exponentially since year one.

JAT: Why do you think this film festival is so successful?

CL: We have a 99% volunteer staff. They have a passion for film and a passion for the event, and they work hard, although they do not get paid monetarily for this. But they love the event and the event is important to the community. It's important to a metropolitan community which is now the fifth largest in the country to have an arts and entertainment venue like this that they can experience. The more that community funding is cut–in theatre programs, music programs–the more important it is to actually hold on and support things like the film festival within the community, and we've been very, very blessed to have such great supporters, great volunteers, great staff in order to maintain the momentum of the festival. And without that core, the festival would not be where it is today.

I suggest to anyone who ever wants to make something successful happen in this world, you can't do it by yourself. You've got to build a team around you of diverse opinions, perceptions and ideas and work together towards a common goal. They say that it's a lot easier for a million people to move a mountain than just one. I think that's how they built the pyramids; it wasn't just one guy pushing blocks in Egypt. That's the mentality that we've taken with the festival, and it's been very successful. The success that we've had in the country and internationally is unparalleled. There's only one other festival that's more successful than us which started within the last five years, and that's the Tribeca Film Festival, because they had a million dollar check from American Express, and they have Robert De Niro and Tribeca [Entertainment] and Martin Scorsese. Everything we've done has been very grassroots, but we're now changing to a business model–a year-round business model–and the volunteers and staff had a large, important role in that development.

Kevin Bacon and Kyra Sedgwick at the 2005 Phoenix Film Festival. Bacon and Sedgwick were both honored at the festival and attended the Arizona premiere of the film, Loverman. Photo by Charles Gabrean

JAT: The Phoenix Film Festival also has an educational outreach program, doesn't it?

CL: Educationally, the festival is very important to this community. I recognized that as far back as my childhood: I was this 4th grader trying to make movies with my best friend with a Super 8 camera, not knowing that anybody else in the world had a love for this art the way that I did. And so what we've done with the festival is to create an educational outreach program where grade school students and high school students and college students are getting involved and are understanding that there's a collaborative community out there that wants them to make movies, wants them to be creative, to put their vision out there and find that special thing that they're about, that unique perspective of the individual. Our educational outreach is growing year after year with something that I personally wanted to initiate because of my experience growing up. I didn't want to see the kids of Phoenix struggle like I did to understand my gifts and talents, and to encourage them, to put a camera in their hands, to write a script, go get some software and start editing and make a movie.

JAT: And now it's become a standard film festival that filmmakers consider when submitting work. Filmmakers who are looking to see where Phoenix itself might fit in with the film world at large...

CL: There [are] two reasons for that. One is this: when we first started the festival, we realized that we had to have a niche to get films submitted to us. You can't just have a festival. You can't just say, “Hey, we're the Phoenix Film Festival!” and “Hey, we're showing movies here!” There's got to be a reason, an incentive, why a filmmaker would send us their film. For the first four years, we actually focused on low-budget independent filmmaking. The way we did that was to say that all of the features in competition must have been made for under a million dollar budget. So we got some very eclectic films that people really didn't see on the big screen, films that ultimately ended up going straight to video and DVD, but the audience members were appreciative of the filmmakers coming out. We make sure that there's a filmmaker representative for every competitive feature at the festival, because there's a Q&A afterwards. That's the difference between going to a festival and the cinema multiplex: there's a filmmaker there who's going to introduce the work and afterwards will be there to talk about what you've seen and answer your questions about the filmmaking process, what inspired them and how they made the movie. Establishing that was important–it helped us to really break ground, really get accelerated. We actually took off the million dollar cap this year because we found our feature submission amounts were not increasing. Our shorts were increasing (in the amount we were getting every year) but the features weren't. We discovered that the producers don't want anyone to know what their budgets are–if you know they're in our festival, then you know that their budget was under a million. Because everybody wants to make more money from the distributors. So they were not willing to send their movie to our festival. This year, we went from 125 features submitted [for the 2004 festival] to 175. Because we do ask for the budget amount, [we found that] some of those films were made for less than a million dollars, but it doesn't matter because we're not printing that. The stigma is gone, so our volume of films has increased greatly.

JAT: What are your plans for future festivals?

CL: What we realized was that the festival can't be a once-a-year event. After year one, we took six months off and then said, “Hey, let's do that again.” Year two, we went up to 4,000 people, we had more entries. And then year three was our big breakout year: we had 7,000 people, John Waters came out, Edward Burns, James Foley. It was really big. We realized from a financial standpoint that we couldn't just have a once-a-year event. It was impossible, especially after we got kicked out of City Hall after three years. So we had to rent an office and put a roof over our heads. The model that I had switched to was one of adding revenue sources and adding ongoing activities on a monthly basis to turn us from a once-a-year event to a continuing, ongoing organization. One of the things we've seen is that, as the film industry here in Phoenix grows–it is growing, by the way. That time I talked about when you couldn't buy film, when you couldn't develop a film, that no longer makes a difference here in Phoenix because the digital revolution is taking care of that. Now, you can shoot on HD, edit and film in, you know, closets and bedrooms, and then send your movie to Hollywood. Literally, you can do that. The landscape, then, has changed in the last five years here in Phoenix, and it's making things a lot easier from a filmmaking perspective. And also, as the filmmaking has grown here (and the quality has grown greatly since we started the festival), there's a greater interest in film overall. We decided, then, to launch–near Halloween–the Phoenix Horror and Sci-Fi Film Festival. [It's] not like your typical science fiction and horror convention: this is showing movies with an expo, a costume party on Saturday night, an opportunity for Horror and Sci-Fi film lovers to get together and enjoy some movies that maybe they haven't seen on the big screen in a while.

JAT: Such as…

CL: Such as the Evil Dead movies. We're hoping that Bruce Campbell might be involved in the festival. We've invited him to be our Honorary Director. How great would it be, as a film fan of the Evil Dead series, to go see The Evil Dead or Evil Dead 2, and then Bruce Campbell comes out and does a Q&A? There's a specific and very, very excited audience for genre films, especially the horror and sci-fi genre, and they will support an event like that. We're kind of going back to our roots for how we'll be modeling the festival–it's only going to have three screens, it's going to be a smaller festival than we normally do. It's not going to have the prestige of getting on the back of DVD boxes, like the Phoenix Film Festival; there are certain films where you can see the palm leaves saying “Official Selection” and “Award Winner” for a number of movies from the Phoenix Film Festival. We're looking at this as more of a fun, fan-based festival.

Then, to specifically address the population of Phoenix, we're going to start a Hispanic film festival sometime in early 2006. We've already had some interest from Univision and a number of places. The idea is that, because of who we are and what we've done in the past, instead of going out blindly like we did in year
one and hoping for support, we have a reputation and people know we can accomplish events, they're more willing to partner with us and provide us with resources and marketing to actually bring these things to life.

A guest at the 2005 Phoenix Film Festival poses with the festival mascot, Camerahead. Photo by Charles Gabrean

JAT: Let's move away from business and talk more about the digital film revolution. How has that changed films? Not just making them, but watching them. What is the effect of digital video (DV) on cinema?

CL: Digital has done, in my opinion, two things. One is that it's made the tools and resources available to filmmakers at a much lower price, and, two, it's made film audiences aware that you can create film or tell stories with a not-necessarily huge budget, where the story and character are more important–when you see one of these character-driven DV films, as compared to your big blockbuster Spider-Man 3 or whatever it is that gets on the big screen at the multiplex next, filmmakers for the most part… When I took my 16mm live action class at ASU, just to do a one minute short film without sound, it cost me $400. I can go get a DV camcorder, a $5 tape and make that same exact movie for $5. If I had access to a camera and some lights, now, as a filmmaker, the difference is that I can practice the language and understand how to tell a story before I have to invest a lot of money in film.

A lot of people, especially in Hollywood, will still look down on digital: it doesn't matter that 28 Days Later or Open Water or Celebration or any of the Robert Rodriguez films that are shot on DV–even Star Wars: Episode Two and Episode Three, which were shot on digital or HD–people still want to see 35mm film. In the past, it's always been, “Well, unless you can get the money, you're not really a filmmaker.” It's a Darwinian system: you've got to be able, with your talents, to raise enough money to be at this particular level, to shoot on 35mm, for example. Now, you don't have to do that, necessarily, to get a good movie. With Film Club, the total budget of that film was less than $5,000. We took it to Los Angeles and showed it to managers, agents, producers; they all thought it was a $50,000 short, because, in reality, nobody knows how much it cost unless you tell them, and if you can tell the story well, to a point where the audience is engaged, cost doesn't matter. It doesn't matter how much you can spend as long as you are able to engage the audience emotionally or intellectually.

So DV is giving these resources and tools to filmmakers so they can do that. They can hone their skills, and with filmgoers, all they appreciate is a good story. It doesn't matter how bad the film looks: Open Water looks like a camcorder movie, but, because the story is engaging, it's thrilling because you care about the characters and the story. That's why you watch the movie, and that's why it made a few million dollars at the box office.

JAT: Let's discuss the divide between the people in the media industry who hold the key to film technologies–the large, expensive equipment that makes a $5,000 film become a $50,000 film–and the creative teams behind them. Are you seeing an evolution in filmmaking, with these less cumbersome technologies and lower expenses being introduced into the film system to make it cheaper? Where does the big money go? And what power does it hold over the filmmakers?

CL: Big money in Hollywood, generally, is going into two places: special effects and actors' salaries. That's where a large portion of budgets go. That's how it affects filmmakers, because now you're also at the mercy of the special effects designers and the actors, the bankable actors who you sign for $20 million so that you can open your movie. It doesn't matter how much money you spend–it comes down to the idea of the story and the screenplay being the really important thing. Sometimes, you go see these Hollywood movies and they have six or eight screenwriters who have worked on the project, and generally when you see that, you're seeing a film that isn't genuinely good because there was one vision that was piled onto another vision that was piled onto another vision and so on. Film is collaborative, but it gets to a point where it's ridiculous.

Film, especially Hollywood film, gets stuck in the games of accounting and lawyering, rather than worrying about making movies. They call it “show business” for a reason, not “show fun.” The business side of the motion picture industry is so much more important–I use the word “product” a lot when I'm talking to studios because to them, it's not a movie, but a product. They have invested a lot of money into putting this product in the marketplace for consumers to purchase, and that's how it is seen in Hollywood. It's a tremendous investment–the average budget of a Hollywood film is $72 million. That's the average budget! So, for every $180 million Troy, you've got a $30 million The Life Aquatic With Steve Zissou.
JAT: Which, depending on your artistic taste,
may or may not be worth it (laughter). What
about narrative? It seems to be both evolving
and devolving at the same time. You talked
about films that have eight screenwriters and
half of the budget went to the special effects
team and one actor. We know what to expect
when we see something like that–a Terminator
7 being, to us, representative of narrative
devolution or degeneration; that is, it's the
same story as the first six episodes, only the
pyrotechnics are bigger. But what about the
types of narrative changes which have been
popularized by, say, Pulp Fiction...
CL: Pulp Fiction, Memento, Adaptation…
Narrative has become very avant-garde in that
respect. In the past, you've dealt with a system
that's been so rigorously set in place–the
mentality of Hollywood is to say, “No,”
because so many jobs are riding on a project–
sometimes, if one film doesn't do well
financially, the entire studio management slate
could be removed.
this environment are screened by multiple
committees. Now, these types of films–Pulp
Fiction and the others that have non-traditional
narratives–have changed the way that stories
are told, but they're all smaller films.
Nobody's taking a huge gamble of $150
million on a movie that they don't think
is going to play well to an audience.
These kinds of films require a different
audience altogether to appreciate them. Your
popcorn-munching, lock-your-brain-in-the-car-before-you-enter-the-multiplex audience
isn't necessarily going to appreciate a movie like
The Usual Suspects or Memento or The Others
because it requires them to actively engage and
pay attention. It's easier to force feed them
everything, from a studio's standpoint, because
you don't want people thinking too hard about
your film, which is why you see these
experimental narratives taking place on the
indie film side. This brings us back to why the
film festival exists in the first place: to be able
to give the filmmakers the opportunity to tell
those kinds of stories and show them to
audiences and see what the reaction is.
Bryan Singer, the director of The Usual
Suspects, did a film called Public Access, which I
actually saw when it made the festival rounds.
It's a very dark film about a stranger who takes
over a public access TV show and starts
revealing all of the dark secrets of the town, a
film that never would play. It was buried and
gone–it barely got any distribution at all–but
he knew how to tell a story, and the festival
circuit gave him an opportunity to do that.
The Charlie Kaufmans of the world are
anomalies because, now, people know a
Charlie Kaufman script and he's a name unto
himself. He could get any script made that he
wants to, now, because of who he is. But how
long was he working in obscurity, trying to
make Being John Malkovich or Adaptation before
someone finally said, “That's a good idea and I'm
willing to take a chance with you?” But pushing
the envelope has always been part of the
collaborative process, and it should always be
encouraged and it always will be encouraged.
But certainly not by Hollywood.
JAT: In our view, a film festival actively
engages the audience. Yes, there's the side
where the festival presenters decide what the
audience will watch, but that decision is made
based on “This is good, this is artistic, this is
beautiful, etc. and it's something special for an
audience.” Now, in that world, directors like
Bryan Singer, John Sayles, Yasujiro Ozu,
Andrei Tarkovsky or Ingmar Bergman have a
place in our society. It seems, though, that
their only place is at a film festival, a place
where things are carefully considered and
delivered to an audience. Is there a way to move
that kind of art beyond one specific audience?
CL: There’s always going to be specific
audiences for everything. What the web and
the Internet have really done is to fracturize
audiences. Technology today has changed so
many things: when you think about things,
say, 25 years ago, when there were four TV
networks and everyone got their information
from those four sources, maybe you also had a
newspaper and a radio station in there
somewhere. But now, there are so many
niches. In Phoenix for example, there are
several film festivals: the Jewish Film Festival,
the Gay and Lesbian film festival, the Black
film festival. We're trying to do one that invites
all audiences. Now, how do you move these
works outside of the festival audience? The
most important marketing for any product or
piece of art or anything comes from word-of-mouth and audience awareness. Bringing
people to a festival and getting them to see a
movie they've never experienced before,
going back and telling their friends, “I saw this
great movie at the festival”–that movie may
then later get a chance to come back
theatrically. That's what happened with a
movie called What the Bleep Do We Know!? It
was shown at the Sedona film festival. Dan
Harkins, who owns the Harkins theatre chain,
saw it and said, “I'd like to play that movie.” It's
been playing for 13 straight months now at
the Valley Art theatre. The goal of most film
festivals is to get distributors and appreciative
audiences to these festivals to see these films,
to show that there is an audience for that kind
of picture and see if it can platform out from
there. Worst case scenario, you're talking
about companies like Blockbuster and
Netflix, who are pushing indie film as hard as
they push the big Hollywood blockbusters
because people are tired of brainless
entertainment. That's where I see everything really working together: the more indie film is shown at these events, not just ours, but at ongoing events, through contests and exhibitions, through the Independent Film Channel–the more there's an appreciation for these kinds of stories, on film or TV or whatever means, the more people are engaged and discuss them.

There's a communal aspect, and that's probably the biggest thing about the film festival experience. It's not just [about] seeing a bunch of movies, but rather a bunch of movies you've never heard of before with an audience who are all there to experience the same thing. In fact, the community aspect of ours in particular is one of the most important things about it because, unlike some film festivals like Sundance, where there are three or four different venues (where you have to take a bus or tram all over the place) and the sense of community gets a little bit lost, we actually take over a wing and six screens of the Cine Capri, and we have a patio tent and give seminars. Everything is concentrated in one area, and what happens is it's a synergy of community where people are talking about what they've seen (“Did you see that one?”, “We have to go see that one,” etc.). It's not just seeing the films that's attractive, but finding other people and talking–people you've never met before whom you meet in line, sharing an experience. It's pretty great. Because of the Internet, it's all about the individual experience; the communal experiences are what we still hold sacred in this world. Getting art to these people, that's where the film festival holds a special place in the world.

JAT: Recently, we sat through a presentation by Habib Zargarpour, an art director for Electronic Arts (EA). One of the points he made was that the video games EA produces have tended to usurp traditional film narrative as an entertainment for certain audiences, and are in danger of doing so. The example he gave was the video game, 007: Everything or Nothing. He mentioned that the reason no James Bond film was released last year was that all of the resources, including the actors and probably a good portion of the budget for what would have been that year's Bond film, went into making the video game instead, essentially producing the film experience in an interactive setting.
CL: The gaming industry scares the crap out of me and I'll tell you why: right now, the video game industry is a $6 billion a year industry. Hollywood is a $5 billion a year industry. Video games are making more money than movies, and it's because of the interactive experience and also the high price–you have to move less product to make more revenue from a video game release. They're $50 apiece now, compared to your $10 movie ticket, plus the shipping cost of the print, plus the piece of the pie for the theatre owner. This will continue. Hollywood recognizes that the revenue being taken by video games needs to come back to Hollywood. I see a lot of the synergy getting closer and tighter: the film company contracting with the video game manufacturer to make a game, but with a good portion of the money going back to the movie studio rather than staying at the video game company; the idea that you can actually have interactive video games where you control the adventure, basically. Interactivity is something that everyone loves, but there will always be a need to watch a good story, to see characters, to see actors and for you not to control them. I think you want to see stories, you want to find out how things are going to go. There's a place for both in the world. Film lovers will always love film, video gamers will always love video games, and they will cross-pollinate. I've seen some video games where you can be a filmmaker, if you will, by choosing among scenes and can cut them into your own little movie, but that's not actually making a movie.

JAT: It's not as meaningful an experience as creating something…

CL: It's regurgitating. It's taking someone else's creativity and adopting it and putting your own spin on it, but it's still imitation rather than true creativity. There should always be a place in society for that creativity, that personal vision, or else we become mindless robots. We stop thinking, and we're just taking in data without processing any of the knowledge we have. I'm curious to see how things will progress. Certainly, if you love film, continue to go out and watch movies, but understand that video games are highly entertaining and can provide you with a great interactive experience–and it's a different experience–but there has to be respect on both sides.

JAT: So we can agree that 007: Everything or Nothing is not going to sit alongside the Bergman box set at retail?

CL: No, and there are people who have every Bond movie on DVD who don't play video games, and there are people who have every hot video game and no Bond films. Again, these are specific audiences because of the way that society is becoming fracturized. You're appealing to very specific people with these products, and being able to educate an audience and market appropriately–these are the kinds of things these companies need to do to make it work well. An older generation that is weaned on films is not the generation that is now weaned on video games. Understanding who your audience is and seeing how film fits in with that specific audience, that's the trick.
Software
Development
Practices: The
Good, the Bad
and the Ugly
Evan Robinson
TheGameManager.com
with Sara Robinson
You kids have no idea. You sit there with your fancy 3GHz machines, with a gig of RAM and a terabyte of drive space at the ready.
I'm a geezer–been in the business of building
games and shrink-wrapped software for over
two decades. Seen a lot of change, I have. Back
in the day, we programmed for target
machines with 1MHz CPUs, 48K of RAM,
and 88K disk drives–barely enough juice to
run an electric toothbrush today. The best
screens had 240 x 192 resolution and gave you
a palette of 16 beautiful colors to choose
from. Support libraries? Hah. We were lucky
to have compilers: most of us just did it all in
assembly language. The runtime code on most
of our projects was in the 8K and 16K range.
There was no such thing as Jolt Cola. We had
to write code day and night, without
lowercase letters, three miles through the
snow with no shoes, uphill both ways.
You kids have no idea. You sit there with your
fancy 3GHz machines, with a gig of RAM and
a terabyte of drive space at the ready. Your 3
megapixel displays put up millions of distinct
colors. You compile your code using third and
fourth generation languages with runtimes
that support thousands of library functions,
generating megabytes of code and gigabytes of
supporting content. You've got interpreted
scripting languages for describing in-game
behavior. You've got iPods and Red Bull and
Aeron chairs. It's a hell of a long way from the
cold garages we started off in.
But some things never change, even when they
should. The projects you're building today are
orders of magnitude larger than the
ones I built in the early 1980s. You've got
more power, more money, more content and
more profit riding on the outcome. That's a
hell of a lot more to keep track of. So why
are so many of us still managing software
projects using essentially the same concepts
and techniques that we used 20 years ago?
Last November, a post by a user known as
ea_spouse on LiveJournal (http://www.livejournal.com/users/ea_spouse/) struck a
chord of sympathy (and horror, as well as
some derision) so strong that over 3,000
people responded to it in just the first few
weeks1. It launched an industry-wide debate
on how we manage the process of building
software. All of us have been there: endless
months of required overtime, 80+ hour-per-week super-crunches, upset spouses, kids we
never met. All of this only proves that the
management techniques we use to build game
software are failing us in some very important ways.
This article examines the good, the bad and
the ugly among today's common software
development practices. I'll talk about why the
good is good, and what we can do to turn the
bad and the ugly into something at least a bit
more endurable. The discussion will focus on
several clusters of issues in which the biggest
gains stand to be made. The information is out
there. We just have to put it to work.
DEFINITIONS
First, I'd like to clarify a couple of terms. Most
people use the terms “software development,”
“software engineering” and “programming”
more or less interchangeably. However,
I'll be using them specifically. “Software
engineering” refers to best practices and
techniques used by programmers and
software engineers in the creation of source
and object code. “Software development” is a
more inclusive term, covering the full range
of disciplines used to coordinate the efforts of
programmers and software engineers (as well
as other creative, professional and supporting
disciplines) to finish a shippable product.
“Programming” refers to the actual act of
creating code. Thus, “software development”
is largely a management discipline, while
“software engineering” and “programming”
are programming or engineering disciplines.
LIFE CYCLE ISSUES
Game development culture has a historical
bias toward chaotic development. In the old
days, we used to mess around with our
computers until we found some cool idea,
then wrapped other things around that core
until we had something we thought we could
sell. But as games got bigger–beyond the
scope of a single lone wolf–things became
more formal. We needed a life cycle process
that would coordinate all the activities and
work products involved in creating a game:
requirements specification, product design,
software design, implementation, testing,
release and maintenance, not to mention the
coordination with Marketing on stuff like
boxes, ads and manuals.
Enter the Waterfall.
Waterfall Life Cycle: Bad
Back when I was a Technical Director (TD) at
Electronic Arts (EA), my fellow TDs and I
realized that many, if not most, of the problems
we were seeing came about because the
company had no hard and fast definitions of
“alpha” and “beta.” So we wrote some, and they
looked roughly like this:
Alpha–First version of the software
containing all required features. Features
may be buggy or require tuning. For
Position Only (FPO) art may be present.
All further programming work will involve
1 Editor’s note: This post, entitled “EA: The Human Story,” appeared on LiveJournal in November 2004. In this
post, ea_spouse–whose significant other is (or was at the time of the post) an Electronic Arts game
programmer—considers the physical, emotional and financial effects of stressful and possibly illegal working
conditions on programmers working to complete a project, as well as on their families, and questions whether
a change in policy might not, in the end, benefit both employers and workers.
debugging, adding art or sound resources,
and/or tuning existing features.
Beta–First version of the software with
no further resources or additional
feature tuning required and no
known blocking bugs. All further
programming will be debugging.
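Definitions like these are most useful when they can be checked mechanically against your tracking data. As an illustrative sketch only (the BuildStatus fields are hypothetical tracking attributes, not anything EA actually used), the alpha and beta gates above might be encoded like this:

```python
from dataclasses import dataclass

@dataclass
class BuildStatus:
    """Snapshot of a build, with hypothetical tracking fields."""
    features_complete: bool   # all required features present
    fpo_art_remaining: int    # "For Position Only" placeholder assets
    tuning_tasks_open: int    # features still awaiting tuning
    blocking_bugs: int        # known bugs that block shipping

def passes_alpha(b: BuildStatus) -> bool:
    # Alpha: every required feature is in, even if buggy or untuned;
    # FPO art and open tuning work are still allowed.
    return b.features_complete

def passes_beta(b: BuildStatus) -> bool:
    # Beta: alpha criteria plus final assets, no tuning left and no
    # known blocking bugs -- all further work is debugging.
    return (passes_alpha(b)
            and b.fpo_art_remaining == 0
            and b.tuning_tasks_open == 0
            and b.blocking_bugs == 0)

build = BuildStatus(features_complete=True, fpo_art_remaining=3,
                    tuning_tasks_open=5, blocking_bugs=1)
print(passes_alpha(build), passes_beta(build))  # True False
```

The point of writing the gate down this explicitly is that "are we at alpha?" stops being a matter of opinion in the Product Status Meeting.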
These definitions reflected the fact that, back
around 1984, EA started building its
business structures around a waterfall
development process. The above definitions,
along with other pieces of the process we
defined, grew legs and have since become
widely adopted throughout the game industry.
Fifteen years later, they are still codified in
various development contracts and embedded
in management practices such as the Technical
Design Review (TDR), Go To Alpha decision
and the weekly Product Status Meeting (PSM).
The odd thing about this is that, even back in
1984, everybody knew that a pure waterfall
process–full design, full implementation,
debug without redesign, ship–had serious
flaws. For one thing, it's unlikely to be strictly
followed. For another, it's statistically unlikely to
produce a valuable product. But, even so, many
of today's game products are still developed
using something approximating the EA
waterfall model.
Now, however, we've got another option that's
much more appropriate for the vast size and
complexity of today’s projects.
Iterative Development: Good
Many game teams are now experimenting
with “iterative” or “evolutionary” life cycle
planning techniques, which bring the product
to release quality several times during
development. This kind of development is
sometimes also called “feature-based” because
each iteration adds one or more quantum
“features” to the software. Sometimes a
feature is added over several successive
iterations, with each bringing more
functionality to the feature.
Iterative life cycles offer some
strong advantages:
• You can cut off development and ship
on relatively short notice, since most
iterative cycles run no more than
four to eight weeks;
• You have greater insight into the actual
state of the project at any given moment;
• There's more opportunity to test the
product on stakeholders or focus groups
early and often; and
• It works better with “agile” development
methodologies like eXtreme Programming
(XP) or Scrum.
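The first advantage above is easy to make concrete. Here is a small sketch, using a hypothetical project and a six-week cycle, of how short iterations give you a steady supply of potential ship points all the way to a hard deadline:

```python
from datetime import date, timedelta

def iteration_ends(start: date, ship_deadline: date,
                   cycle_weeks: int = 6) -> list[date]:
    """List every iteration boundary between start and the deadline.

    Each boundary is a potential release point: with four-to-eight
    week cycles you are never more than one cycle away from a
    release-quality build.
    """
    ends = []
    cursor = start + timedelta(weeks=cycle_weeks)
    while cursor <= ship_deadline:
        ends.append(cursor)
        cursor += timedelta(weeks=cycle_weeks)
    return ends

# Hypothetical project: start in January, hard deadline in November.
points = iteration_ends(date(2005, 1, 3), date(2005, 11, 1))
print(len(points), "release points; last one:", points[-1])
```

Compare that with a waterfall plan, which typically offers exactly one release point, at the very end.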
Agile Methodologies: Good
If you know who your customers are, XP and
other agile methodologies make it easier to
match your project to their actual wants and
needs. From the beginning, you can hand
them a prototype, and you can revise it just as
quickly in response to their feedback. Agile
development systems can shorten development
times, improve reliability (if only by avoiding the
creation of features that nobody wants) and
vastly improve the responsiveness of a
development organization.
Mutant Life Cycle Forms: Ugly
As we've seen, the waterfall process is
embedded in the game industry's cultural
DNA, to the point where all of our
conversations about scheduling, resource
allocation and accountability occur within
its frame. As we move toward iterative
development, however, we're going to need to
consciously re-think all of those foundational
assumptions in order to make room for these
new processes and technologies and allow them
to change the way we manage our projects.
Some forward-looking companies (usually
younger firms with less investment in old
ways of doing things) are already onto this.
They're openly embracing iterative
development and building their business
structures to support it fully. But other groups
are having a harder time. They've done well
with the old waterfall model. Sure, there are
pitfalls. It never worked all that well, and its
flaws are becoming more obvious as projects
grow exponentially bigger. But it's the devil
they know, and they're reluctant to let go of it.
But these methods are not a silver bullet that
will solve all development problems. Nor are
they suitable for all development styles.
Among the common pitfalls are these:
• Strictly limiting design and development
work to 30-day cycles (as an unenlightened
Scrum-master might) makes it difficult or
impossible to design and implement more
elaborate features. Obviously, one must be
able to plan and execute longer-term work.
• One of the “rules” of XP–“Don't write
code you don't need”–can also be
translated as “Don't plan for the future.”
This may be a good rule for vacations, but
it won't get the product out the door.
• The short-cycle development style can
tempt you into extensive, unnecessary
rework and thrashing.
• Implementing these techniques properly
requires more investment and training
than most organizations are willing to
commit to. It's not enough to buy your
developers a couple of books and tell them
to start using XP. They're the next hot
buzzword–but no revolution comes easy
or cheap.
Unfortunately, this ambivalence can sabotage
a company's attempts to integrate iterative
approaches and agile development
technologies. Half-heartedly adopted, poorly
thought through, under-supported by upper
management, timidly implemented by middle
management and often not well understood
by the programmers themselves, it's not
surprising that the results are usually less
successful and less repeatable than a more
carefully-envisioned, committed and deliberate
adoption process might have been.
At the Game Developers Conference (GDC)
2005, one speaker discussed his team's Scrum
process, or, I should say, partial process: he
kept saying things like, “We didn't do [project
phase] using Scrum because we'd already done
it in waterfall. I'm looking forward to seeing
what it's like in Scrum.” Sadly, this account
couldn't accurately describe the benefits (or
the pitfalls) of Scrum-based development,
because it wasn't fully Scrum-based. And the
lessons that this team may have learned from
the experience may not be useful in the future
because every half-thought-out unholy hybrid
of waterfall and iteration is going to be
different. This way lies madness.
Sane, well-designed hybrids may indeed meet
your needs. Mutant ones cobbled together by
necessity and brute force will probably not.
Choose your life cycle with your eyes wide
open, and change it only when you fully
understand a) why you need to, and b) how
you plan to go about it.
WORK ENVIRONMENT ISSUES
Elementary management books like Fred
Brooks' The Mythical Man-Month, Tom
DeMarco and Timothy Lister's Peopleware and
Edward Yourdon's Death March all make it
plain that the working environment is a
critical factor in programmer productivity.
Simple things like enclosed offices, phones
that turn off, quiet periods, adequate
development equipment and sufficient time to
think and recharge are not luxuries; rather,
they're absolutely essential to getting the most
productivity out of your people. If you don't
provide a supportive work environment, you
can expect huge amounts of time leakage.
Appropriate Development Tools: Good
I've worked with managers (and, no doubt,
so have you) who refused to invest
$4000 in a faster computer for a
$70K/year programmer, even though that
small outlay would have added an hour or two
each day–about 15%–to his
productivity. There is no excuse for not
providing adequate equipment for your
developers, unless you're running an
under-funded startup. The Return On
Investment for high-end tools and computers
beats any dot-com investment you could have
made in 1998.
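The arithmetic is worth spelling out. Here is a sketch of the payback calculation; the 1.5x overhead multiplier is my own conservative assumption (salary alone understates what a developer-hour costs), not a figure from the text:

```python
def payback_months(tool_cost: float, salary: float,
                   productivity_gain: float, overhead: float = 1.5) -> float:
    """Months until a tool or hardware purchase pays for itself.

    A developer's fully loaded cost is salary * overhead; a
    productivity gain of g recovers g * loaded_cost of value per year.
    """
    loaded_cost = salary * overhead
    yearly_value = loaded_cost * productivity_gain
    return tool_cost / (yearly_value / 12)

# The $4000 machine for the $70K/year programmer, at a 15% gain:
months = payback_months(4000, 70_000, 0.15)
print(f"pays for itself in about {months:.1f} months")
```

On those assumptions the machine pays for itself in about a quarter, and everything after that is profit.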
Use the old machines for testing. Give them to
managers who aren't front-line developers any
more (no matter what they think they are) and
whose needs are limited to testing, word
processing, project management and e-mail.
Assign them to the administrative assistants.
Keep them around as spares. But don't make
your primary developers use them.
The same goes for software tools. Back when
I started, good game development tools were
so hard to come by that even reputable
companies begged, borrowed and bootlegged
them from other developers. These days,
programmers creating PC games have access
to a strong selection of modern source
control management, project management,
compiling, debugging, content development
and other tools. Game console programmers
don't have quite the same embarrassment of
riches, but even they've got stuff we'd have
bogarted in a minute in the old days. If you want
a top-end game, start with top-end tools.
It's harder to find good asset-management
tools. Best-of-breed source control systems
are expensive to buy and maintain–but plenty
of companies that make the investment don't
follow through by training their managers
and developers to use them fully and well.
In particular, I'm often amazed at how
many managers ignore the extremely
useful branching and merging features.
More stunningly, many teams don't use
asset-management tools at all, even though they
can get high quality packages online for free.
Interruptions: Bad and Ugly
DeMarco and Lister's book, Peopleware, is a
must-read for anyone trying to improve their
development process. One of their key
insights is that knowledge work requires a
certain state of mind. The best work is done
within a particular state which psychologists
call “flow.” During flow, you tend to lose
track of time as the work output just seems to,
well, flow out of you:
Not all work roles require that you attain a
state of flow in order to be productive, but
for anyone involved in engineering, design,
development, writing, or like tasks, flow is
a must. These are high momentum tasks.
It's only when you're in flow that the work
goes well.
Unfortunately, you can't turn on flow like a
switch. It takes a slow descent into the
subject, requiring fifteen minutes or more
of concentration before the state is locked
in. During this immersion period, you are
particularly sensitive to noise and
interruption. A disruptive environment can
make it difficult or impossible to attain flow.
Once locked in, the state can be broken by
an interruption that is focused on you (your
phone, for instance) or by insistent noise...
Each time you're interrupted, you require
an additional immersion period to get back
into flow. During this immersion, you're not
really doing work (DeMarco and Lister,
1987, p.63).
I won't go so far as to agree that you're not
doing work, but you're definitely not doing
your best work. Every time you break out of
the flow state, you lose at least fifteen minutes
of your most productive time. That means
your environment must support you being
uninterrupted. This is why most industries
(and most non-game software companies, as
well) prefer offices with doors. They're not
popular in the games industry because many
of us prefer the “creative ferment” of cubes
and bullpens. But that ferment comes with a
large productivity cost that needs to be
accounted for.
If you must put people in cubes, encourage
them to use headphones and arrange their
desks so that they face the cube's opening–few
people feel entirely safe sitting with
headphones on at a computer screen with
their backs to an open doorway. And, as a side
note: all doors need windows.
Legend has it that, in the early days of EA, the
company's founder escorted a group of
investors to the boardroom for a presentation
only to find the windowless door locked,
curtains drawn and the room apparently in
active use for a, uh, non-work-related private
meeting. Since that episode, EA has built
windows into every door and conference
room, a feature that spread across the industry
within a few years.
Refuse-able Communication: Good
It must be possible to mute phone alarms.
Most modern phone systems come with signal
lights that can be used instead of audible
alarms, allowing employees to ignore them
until they're ready to be interrupted. E-mail
is, on the one hand, ignorable communication
and, on the other hand, a constant stream of
requests for your energy. For most of us, it's
better than phones or instant messages, as
long as you only read your e-mail a couple of
times a day or when you've already been
interrupted. If you answer your e-mail when
you arrive at work, when you return from
lunch and just before leaving the office, you'll
have two long uninterrupted periods in which
to do real work.
This advice does not apply to managers. Their
job requires constant monitoring and
attention to others, so they need to check
their e-mail and instant messages frequently.
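The arithmetic behind batching is easy to sketch. Assuming Peopleware's fifteen-minute re-immersion figure and a hypothetical interruption rate, compare a developer who answers every e-mail alert with one who reads mail three times a day:

```python
def flow_time_lost(interruptions_per_day: int,
                   reimmersion_minutes: int = 15) -> float:
    """Hours of peak productivity lost to re-immersion each day.

    Each interruption costs at least one re-immersion period before
    the developer is back in flow.
    """
    return interruptions_per_day * reimmersion_minutes / 60

# Interrupted roughly every half hour across an 8-hour day, versus
# batching e-mail into three daily readings:
constant = flow_time_lost(16)
batched = flow_time_lost(3)
print(constant, batched)  # 4.0 0.75
```

Four hours a day is a conservative lower bound, since it counts only the re-immersion time and not the interruptions themselves.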
TRAINING ISSUES
One of the most striking things about the
games business is the huge number of people
running development groups who have
little or no formal management training. This
long tradition has left us with some
dysfunctional attitudes toward management that
bear examination.
Management by Default: Ugly
Often, people become managers in this
business not because they had any burning
desire to manage, but because they were the
best or most senior programmers, testers or
artists in a company where only the managers
were eligible for top salaries and bonuses.
Once they hit the top of their pay grades, they
had to move into management or stop
getting raises.
Of course, this makes about as much sense as
giving the lead singer's job to the head
roadie just because he's been hanging around
the band a long time and knows where the
stuff goes on stage. There are two problems
here. First, being the company's most valuable
programmer or artist should, on its own, be
enough to put you in the top salary tier and
qualify you for any perk the company offers.
Second, those who do make the transition
from production into management (as I did)
don't often realize how much they need to
learn if they're going to play the game as a pro.
If you want to get into management, then it's
time to go back to school.
Lack of Professional Training: Bad
There's a great deal of mischief that can be
easily avoided by simply recognizing
management as a separate discipline from
programming or art or testing, and treating it
as such.
Any company lawyer worth his Gucci loafers
will warn you that a manager who doesn't
understand basic hiring, promotion,
contracting and firing regulations is a lawsuit
waiting to happen, so the first thing all new
managers need is some coursework on the
legal aspects of their jobs (at Adobe Systems,
we called this the “Don't Get the Company
Sued Because You're a Jerk” class sequence).
This is a good start, but it shouldn't be the
end. Rising leaders also need to master
day-to-day people and process management
techniques in order to do their jobs
competently. They need to know how to use
the company's planning tools and how to make
sound estimates of time, resources and
features, and they should understand at least
the fundamental principles of software
engineering. Until they've gotten all of this,
they're not ready for prime time.
Management salaries are expensive and
management mistakes can be catastrophic.
Short-changing your managers on the
training they need to do their jobs properly
can lead to a company-wide disaster.
Formal Management Education and
Training: Good
Fortunately, you can get professional
management training almost anywhere. The
Software Engineering Institute (SEI) at
Carnegie Mellon has software evaluation and
training programs. So does your nearest
college with a Computer Science degree
program. For more general management
education, find out who's minting MBAs in
your area and go see what they can offer.
However, don't make the
fatal mistake of confusing
management training with
management experience.
An MBA fresh out of
college is as well-qualified
for his job as a
programmer fresh out of
college is for hers. You
wouldn't hand a $10
million project budget over
to a lead programmer with
no practical experience at
her craft, so why should
that project lead answer to a manager with no
practical experience at his? Fresh managers
need to be seasoned the same way fresh
engineers are–by gaining experience in
smaller projects and by careful mentoring
and supervision.
Whatever you do, make sure that the people
you promote to management a) are there
because they truly want to manage, and have
the essential organizational and interpersonal
skills to handle the job, and b) are supported
with a strong investment in ongoing
management training that will sharpen their
existing skills and help them acquire new ones.
Knowing the Literature: Good
A general rule of thumb is that all knowledge
workers should spend 10% of their time
keeping current with new information and
techniques in their field. That means
committing half a day a week just to taking
classes, attending conferences and reading the
literature. I've never worked anywhere that
supported this rule adequately, although
several employers have at least been willing to
purchase technical books and periodicals that
would help me stay current.
The good news is that the past fifty years have
produced a vast body of literature on software
development for project managers to draw on.
The bad news is that the lessons discussed in
these classic works are obviously still unlearned
and unheeded in some mainstream software
development houses. This isn't surprising: at his
presentation during GDC 2005, Steve
McConnell of Construx Software asserted that
the average programmer reads less than one
book on software development per year2.
If names like DeMarco, Boehm, Jones,
Yourdon and McConnell are unfamiliar to
you, get familiar with them. They are the
institutional memory of the software business.
Want to know when a design is precise
enough? It's in DeMarco's Controlling Software
Projects3. Want to know whether your
people should be in cubicles or offices? It's in
1987's Peopleware, by DeMarco and Lister.
How about a survey of best practices? Try
McConnell's Code Complete and Rapid
Development. Curious about the ways that
software projects go bad? Check out T. Capers
Jones' 1994 book, Assessment and Control of
Software Risks, modeled on the American
Public Health Association's Control of
Communicable Diseases in Man.
2 “The average software developer reads less than one professional book per year (not including manuals) and
subscribes to no professional magazines” (Construx, n.d.).
3 “The human brain seems to make use of a number of different working buffers, each with limited capacity.
Some applications fit well into the brain without overloading any one of its limits. I shall characterize such
applications as tiny. Anything other than a tiny system requires a qualitatively different approach. Long before
my time, engineers had discovered the essence of this different approach:
First Modeling Guideline: When a system is larger than tiny, partition it into pieces so that each of the pieces is
tiny. Then apply traditional methods to the pieces” (DeMarco, 1982, p.43).
Then there's the simple practicality of popular
management books. The One Minute Manager
by Ken Blanchard and Spencer Johnson distills
an amazing amount of information into an
easily digestible 60-minute airport read. Not
every popular work is also a good one, but you
can quickly pick up some very useful skills
with a small investment at a good bookstore.
Plenty of websites offer management
advice and experience. Some provide training.
I highly recommend Johanna Rothman's
sites, Managing Product Development
(www.jrothman.com/weblog/blogger.html)
and Hiring Technical People (www.jrothman.
com/weblog/htpblogger.html). The SEI at
Carnegie Mellon (www.sei.cmu.edu) hosts
documents and offers degree programs
and other training. Joel on Software (joelon
software.com) provides insight into a variety
of software and management related topics.
Construx Software (www.construx.com) is
Steve McConnell's development management
consulting company. For more of my own
writings, check out my software development
management blog at The Game Manager
(www.thegamemanager.com).
PLANNING ISSUES
Planning, estimation and the measurement of
actual productivity are some of the hardest
issues most project managers face. It's natural
to want to avoid them in favor of more
exciting aspects of the job, but, at the end of
the day, these activities are the job. As such,
they deserve your full attention.
Tracking Tools: Good and Ugly
The industry's embrace of basic project
management software like Microsoft Project
is an encouraging trend. The way this software
intrudes on developer workdays is not. To do
the job better, you need an improved tracking
and reporting tool, a source control system
that integrates with your scheduling program
and decent bug tracking tools as well.
I'm confused and bothered by the widespread
failure to use automated estimation tools.
Estimation (and the related disciplines of
modeling, specification and scheduling) is dry,
difficult work, but it lies at the very core
of effective software management. If half the
energy expended on avoiding specification
and estimation were instead spent on doing it
well, we'd all be a lot better off.
Failure to Use Productivity Metrics:
Bad and Ugly
Humans have an almost infinite genius for
misinterpretation and rationalization. We
often convince ourselves that our opinions
and/or prejudices have the force of natural
law. Careful measurement cuts through
such self-delusions and tells us objectively
whether one process is better, faster or
cheaper than another.
Most organizations take pains to measure
money flowing out. Far fewer measure
output from a software development team.
Some companies give up on this entirely and
instead rely on the default metric: “The
product shipped and made X amount of
money.” While this is indeed the metric
that matters most to your investors, it's not
very useful if you're trying to improve your
development process.
These companies duck the issue because
measuring the output of individual
programmers and teams is hard. Deciding
what constitutes a unit of output from
a programmer is much tougher than
establishing a unit of output for an
automobile assembly line. There are a lot of
obvious things you can measure–number of
lines of code, number of features completed,
etc.–but every one of them has its limitations
and can easily deliver wrong information if
you're not aware of the pitfalls.
Besides the sheer difficulty, programmer
output tracking can be expensive, not to
mention tough on a team's morale. And,
deep down, it's possible that you don't
really want to know the answer anyway.
Good measurements may shake up your
assumptions regarding what's working and
what's not. Nobody likes finding out that
they've been doing things the wrong way for
years, no matter how useful it may be to find
a better way.
If you want your software development
organization to improve, a metrics program is
essential. If you need resources, the SEI
(http://www.sei.cmu.edu/) is a good place to start.
RESOURCE ALLOCATION ISSUES
In Project Management 101, you learn that
scheduling and construction (of anything) can
be plotted on three axes: Features/Quality
(Good), Time/Length of Schedule (Fast) and
Cost/Resources (Cheap).
The relationship between these axes is simple.
You can pin down any one of them within a
reasonably large range, and still have a wide
variety of options open to you. Once you pin
down the second one, however, you are left
with no choices at all because the third one must
be left to vary as it will, or, as the popular phrase
sums it up, “Cheap, fast, good: pick any two!”
The key to managing this Holy Triangle is to
consciously decide up front where you want
your trade-offs to be.
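The trade-off can be written as a single constraint: pin any two axes and the third is determined. The linear effort model below is a deliberate oversimplification (real projects are nonlinear, as Brooks showed, so adding people late makes projects later), and the units are hypothetical, but it makes the point:

```python
def solve_triangle(features=None, team_size=None, months=None):
    """Solve effort = team_size * months for whichever axis is free.

    `features` is scope measured in person-months of work. Exactly
    one argument must be None; the function returns its value.
    """
    if features is None:
        return team_size * months      # scope you can afford
    if months is None:
        return features / team_size    # schedule that results
    if team_size is None:
        return features / months       # staff you must find
    raise ValueError("leave exactly one axis unspecified")

# Pin scope (60 person-months) and team size (5): the schedule
# is no longer yours to choose.
print(solve_triangle(features=60, team_size=5))  # 12.0
```

Even this crude model shows why "pick any two" is not a slogan but a constraint: once two axes are fixed, the third is an output, not a decision.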
Working All Three Axes of the
Triangle: Good
In game development, the Holy Triangle
generally plays out in one of two ways.
In some projects, we lock down the features
list when we choose a specific design. Team
size and budget put a cap on the resources
available. With those two axes pinned, the
remaining axis, the schedule, is now
uncontrollable. This is often the reality (if not
the plan) for brand new products that have
never been attempted before.
Much more often, however, we start by
locking down the ship date (schedule) and
establishing the team size (resources). Once
that's done, the only flex point left in the
system is in the product design (features).
If the schedule is not long enough, or if you
outrun your resources, features will eventually
get dropped.
We tend to go into a new project assuming
that we're locked into one of these two
scenarios, but I'm suggesting that you
examine that assumption very closely and
don't lock yourself down to any point until
you absolutely have to. For example, some of
the most successful products are those with
a narrow focus, not a huge number of new
features. You may find that you'll come up
with a better game design and have more
flexibility in deploying your resources if you
can at least take the time to scope the project
before trying to pin down a ship date.
Learn to accurately evaluate all three axes of
the Holy Triangle, and choose your limitations
instead of letting them choose you.
Hard Ship Date: Bad and Ugly (but also
unavoidable)
As stated above, most game projects start with
a hard ship date. Usually, there's no way
around this: the game has to be in stores by the
start of the season, the movie release date or
Christmas. But hard ship dates inevitably
create schedule pressures. In turn, these
pressures just as inevitably lead to a
cascade of desperation-driven management
decisions that may, in the end, put the entire
project at risk.
Capers Jones' Assessment and Control of Software
Risks cites excessive schedule pressure as
“the most common of all serious software
engineering problems” (1994, p.118). Watts
Humphrey (the man behind the SEI's
Capability Maturity Model and the Personal
Software Process) puts “unrealistic schedules”
first in his list of five major reasons software
projects fail (2001). Unrealistic schedules lead
to failure largely because one of the usual
solutions to them–long-term crunch mode–is the
ugliest common practice there is. It's so
common and so bad, in fact, that it gets a
special section all its own below.
It would be wonderful if we could all sit down
hand-in-hand and agree that we shall “ship no
software before its time.” Realistically, that's
not going to happen. Still, we can recognize
that some software projects do not require
hard deadlines. We can recognize that we must
tailor either team size or feature requirements
to our deadlines (instead of assuming that two
years worth of work can be done in a single
year if we all just “work harder”), and we
can recognize that short development
times require additional support from
other departments (marketing, IT, upper
management) instead of simply demanding
that development people take up all the slack
on their own.
TIME ALLOCATION ISSUES
And here we come down to the issue that
sparked the whole discussion: crunch mode.
As we've seen, crunch mode is a natural
by-product of hard ship dates. Since fixed ship
dates are usually unavoidable, given the nature
of our business, many managers just assume
that crunch mode is a necessary way of life for
game developers.
But, like many articles of faith, this one
doesn't stand up to the cold light of reason.
Many studies, dating back more than one
hundred years4, have proven conclusively that
long-term crunch mode is destructively
counterproductive. If you're interested in
4 Among the studies from which I draw this conclusion are the following: Psychophysics in Cyberia
(http://worklessparty.org/wlitblog/archives/000653.html), Work Less Institute of Technology, November 18, 2004;
Psychology and Industrial Efficiency (http://psychclassics.yorku.ca/Munster/Industrial/chap17.htm), Hugo
Münsterberg, 1913, available at Classics in the History of Psychology (http://psychclassics.yorku.ca/), maintained
by Christopher D. Green, York University, Toronto, Canada; Samuel Crowther’s interview with Henry Ford
(http://www.worklessparty.org/timework/ford.htm), World’s Work, 1926, pp. 613-616; Scheduled Overtime
Effect on Construction Projects (http://www.curt.org/pdf/156.pdf), Business Roundtable, November 1980. For
a summary of construction overtime studies dating back to the 1940s, see The Revay Report, vol. 20, number 3,
November 2001 (http://www.revay.com/english/v20n3eng.pdf).
seeing this research, my white paper
reviewing some of the highlights can be
found at the websites for the International
Game Developers Association (www.igda.org/articles/erobinson_crunch.php)
and The Game Manager
(www.thegamemanager.com/why-crunch-mode-doesnt-work). Here, I'll
just briefly critique current management
practices, based on the conclusions I’ve
drawn from these studies.
Short-Term Crunch Mode:
Good but Ugly
When used in short bursts (typically under
four weeks), crunch mode can be an effective
tool for increasing total output. (The extant
literature includes discussions of how to
calculate the break-even point beyond which a
short-term crunch becomes futile.)
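One way to see why such a break-even point must exist is to model hourly productivity as decaying a little each crunch week, then compare cumulative output against a steady 40-hour pace. The 10% weekly decay below is a hypothetical illustration, not a figure from the studies:

```python
def cumulative_output(weeks: int, hours_per_week: float,
                      decay_per_week: float) -> float:
    """Total output when productivity falls each week of crunch."""
    total, productivity = 0.0, 1.0
    for _ in range(weeks):
        total += hours_per_week * productivity
        productivity *= 1.0 - decay_per_week
    return total

# 60-hour crunch weeks with a 10% weekly productivity decay,
# versus steady 40-hour weeks at full productivity:
for week in range(1, 13):
    crunch = cumulative_output(week, 60, 0.10)
    steady = cumulative_output(week, 40, 0.0)
    if crunch <= steady:
        print("crunch falls behind in week", week)
        break
```

The exact crossover week depends entirely on the decay rate you assume; the structural lesson is that any sustained decay guarantees a crossover, which is why short bursts can pay off while long-term crunch cannot.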
Even when used judiciously, though, short-term crunch mode (STCM) should always be regarded as a management lapse. If you're paying close attention to all the other factors I've discussed above, crunch mode–either long- or short-term–should never be necessary. Those responsible should not be rewarded because a lot of people worked really hard. Sun Tzu said: “Those skilled in war subdue the enemy's army without battle.” I say: “Those skilled in scheduling and management get the project done without crunch mode.”

Still, despite the morale-crushing nature of the beast, it's a sad fact that any game company getting by with only STCM is probably already way ahead in the quality-of-life sweepstakes. A company that compensates employees for the extra hours (by paying overtime, granting some legal version of compensatory time or giving bonuses close to what overtime would cost) is in the running for the grand prize.

Long-Term Crunch Mode: Ugliest of All

Long-Term Crunch Mode (LTCM–also known as a “death march”) is any crunch mode that lasts longer than a few weeks. It is a practice that is completely without economic justification, and it deserves to be abolished. This is heresy, but it's a heresy I've already defended at length. LTCM is the single most expensive way to get the work done. Studies over the past hundred years5 conclude that it creates an almost immediate drop in hourly productivity and that this drop accelerates until, within just a few weeks, the lost productivity more than outpaces any gains achieved during the extra hours worked. It's a law of nature: the hurrier you go, the behinder you get.

Other studies show that mental function declines even more rapidly than physical function under the stress of long hours and lost sleep, which means that knowledge workers are significantly more sensitive to exhaustion than the other types of workers studied. Most critically, LTCM exposes the entire company to greatly increased risk of catastrophic failure, key employee loss due to health and family problems, a missed ship date or shipment of an unusable or unmarketable product.

Good managers respond to schedule pressure by improving processes, information flow, equipment and tools, training and the work environment. LTCM does not increase worker output. In fact, it inevitably sends productivity into a tailspin from which the project may not recover. Given the risks LTCM poses to programmers, projects, company assets and the bottom line, we need to join the many other capital-intensive industries that stigmatize it as the hallmark of incompetent management.

5 Ibid. Especially note The Revay Report (http://www.revay.com/english/v20n3eng.pdf).
Summary

The current state of game software development is not dire, nor is it healthy. The techniques we use to manage people, resources, time and processes are largely out of date, or known to be suboptimal, although many of them have staunch defenders. Our casual attitudes toward skills training and management education have also taken a toll. In particular, we do not pay enough attention to measuring productivity, and we rely on dysfunctional tactics like crunch mode rather than invest in other measures that will enhance output.

There are bright spots. These include the adoption of incremental or evolutionary life cycles and the recognition that effective tools are a worthwhile expense. Another bright spot is the promise of agile development methodologies, especially the increased visibility they may provide into what has traditionally been a rather opaque process. One final bright spot is the tremendous potential for improvement. With a new generation of game consoles and multiprocessor workstations around the corner, we have both great incentive for improvements and great opportunities for our own benefit. Relatively simple changes in the way we plan our processes, train our managers and allocate our resources can bring us great returns and prepare us to manage software development more intelligently in the future.

References

Benenson, A. S. (1995). Control of Communicable Diseases in Man. Washington, DC: American Public Health Association.

Blanchard, K. & Johnson, S. (1983). The One Minute Manager. New York: Penguin Putnam.

Brooks, F. P., Jr. (1995). The Mythical Man-Month: Essays on Software Engineering (Twentieth Anniversary Edition). Reading, MA: Addison-Wesley.

Brunies, R. & Emir, Z. (2001, November). Calculating Loss of Productivity Due to Overtime - Using Published Charts - Fact or Fiction. The Revay Report (vol. 20, number 3). http://www.revay.com/english/v20n3eng.pdf.

Carnegie Mellon Software Engineering Institute. (n.d.). http://www.sei.cmu.edu/.

Construction Industry Cost Effectiveness Task Force. (1980, November). Scheduled Overtime Effect on Construction Projects. The Business Roundtable. http://www.curt.org/pdf/156.pdf.

Construx. (n.d.). Individual Professionalism. Retrieved March 25, 2005 from http://www.construx.com/professionaldev/individual/.

Crowther, S. (1926). Henry Ford: Why I Favor Five Days' Work With Six Days' Pay. An excerpt from World's Work, 1926, pp. 613-616. http://www.worklessparty.org/timework/ford.htm.

DeMarco, T. (1982). Controlling Software Projects. New York: Yourdon Press.

DeMarco, T. & Lister, T. (1987). Peopleware: Productive Projects and Teams. New York: Dorset House Publishing.

Ea_spouse. (2004, November 10). EA: The Human Story [Msg 1]. Message posted to http://www.livejournal.com/users/ea_spouse/274.html.

Game Developers Conference. (2005, March 7-11). http://www.gdconf.com/.

Humphrey, W. (2001). Winning With Software. Boston, MA: Addison-Wesley.

International Game Developers Association. (n.d.). Why Crunch Mode Doesn't Work: 6 Lessons. http://www.igda.org/articles/erobinson_crunch.php.

Jones, T. C. (1994). Assessment and Control of Software Risks. Upper Saddle River, NJ: Yourdon Press.

McConnell, S. (1996). Rapid Development. Redmond, WA: Microsoft Press.

McConnell, S. (2004). Code Complete (2nd ed.). Redmond, WA: Microsoft Press.

Münsterberg, H. (1913). Psychology and Industrial Efficiency. Classics in the History of Psychology. http://psychclassics.yorku.ca/Munster/Industrial/chap17.htm.

Robinson, E. (n.d.). The Game Manager. http://www.thegamemanager.com.

Robinson, E. (n.d.). Why Crunch Mode Doesn't Work: 6 Lessons. The Game Manager. http://www.thegamemanager.com.

Rothman, J. (n.d.). Managing Product Development. http://www.jrothman.com/weblog/blogger.html.

Rothman, J. (n.d.). Hiring Technical People. http://www.jrothman.com/weblog/htpblogger.html.

Spolsky, J. (n.d.). Painless Software Management. http://joelonsoftware.com.

Work Less Institute of Technology. (2004, November 18). Psychophysics in Cyberia. http://worklessparty.org/wlitblog/archives/000653.html.

Yourdon, E. (2003). Death March (2nd ed.). New York: Prentice Hall.
Biography
Evan Robinson (TheGameManager.com) started making
games professionally in 1980 and moved to computer games
in 1983. Following many successful years as an
independent developer, he served as a Technical Director at
EA and the Director of Games Engineering at Rocket
Science Games. He also worked as an Engineering Manager
and Senior Software Engineer at Adobe Systems. A frequent
presenter at early Game Developers Conferences, he writes
and consults on game programming, project management
and development management.
Sara Robinson entered the games business in 1986 as a
writer for EA. She has contributed to over 100 games for
LucasArts, Disney, Sega and dozens of other
companies. A former contributing editor and columnist for
Computer Gaming World, she was among the early
creators of the GDC.
The Robinsons live in Vancouver, B.C.
Giving the Antagonist
Strategic Smarts:
Simple Concepts and
Emergence Create
Complex Strategic
Intelligence
Mark Baldwin
Baldwin Consulting
University of Advancing Technology
Introduction
In order for a game to provide entertainment,
there must be barriers or challenges to be
overcome by the game player. The act of
succeeding in overcoming challenges is where
much of the entertainment value comes in for
a player. These barriers can be passive, such as
figuring out which gem to choose in a simple
puzzle, or active, such as that Big Dragon
sitting on top of the treasure that you so
desperately want, glaring at you as he
prepares to burn you to cinders. Of course,
that dragon will probably have an artificial
intelligence (AI) that responds when you
attack it, but for much of the game it just sits
there in a vegetative state brooding over its
treasure, or, at most, it is concerned with its
own personal goals within the game world,
independent of your own goals.
However, there can be a third type of barrier:
an intelligent antagonist that is reacting to
every decision you make and that is actively
putting barriers in your way, an intelligence
that is controlling its own units, resources or
characters striving for the same goals that you
are, but trying to accomplish them first. This intelligent antagonist is your equal in the game, normally working under the same rules and resources as you are. It might be your opponent in a game of chess, or the coach of the New England Patriots as you try to beat him in the Super Bowl. For the purposes of this argument, however, the example I will use is an opponent in a simple game of Capture the Flag.

Figure 1: The game of Capture the Flag.
While not all games have active challengers
like an opponent in a chess game or an
opposing coach, many do. To control these
challenges, one needs “intelligence,” either in
the form of another human–as in an online
chess game–or in the form of AI. Decision
making in this form of challenge is typically
strategic in nature and quite complex. The
computer as the antagonist must manage and
coordinate a complex set of resources
towards a goal that is in conflict with the
protagonist, i.e., the game player.
Strategic Smarts
Over the years, I have developed a number of
methodologies to solve this problem. For
example, I spent a great deal of time trying to
find practical ways to implement neural
networks for an antagonist AI. One that I find
to be both practical and effective, and on
which I have given a number of lectures over
the years, is a methodology I describe as
Project AI. The reason for the label will
become apparent later in this paper, but I
would like to build up to it piece-by-piece.
First, let us look at the problem and some of
its components, the most important of which
is the goal. In theory at least, the goal of a
good antagonist AI is to win the game and in
so doing defeat his opponent (again, the game
player). In practice, this is not quite true.
Instead of trying to win the game, the goal of
the antagonist is to entertain the game player.
Trying to win is a means to that end.
Normally, winning the game is accomplished
through victory conditions, perhaps to
destroy all of the enemy or to gain the highest
score. The player achieves this victory by
controlling a large set of resources (units, people, money, etc.) in a coordinated and complex manner.

Although I could select a number of models to describe this process, let us use a simple game, Capture the Flag. Consider a computer game of Capture the Flag, where each player controls five members on his own team. The object of the game is to capture the opponent's flag by moving a player onto it, but he may also eliminate opponent players by shooting at them (see Figure 1). In this example, the player (be it human or computer) must control units (resources) in a coordinated manner. There is a decision cycle in which the player issues orders to his resources, the game progresses, the results are observed and new orders are then issued. There is an objective or victory condition that is mutually exclusive for all players of the game: there can only be one winner. And the decisions can both move the player towards this victory condition and/or prevent the opponents from doing so. Although I will be looking at this Capture the Flag example, this can be applied to any complex strategic game where there are a large number of strategic decisions being made to control a large number of unique resources.

To develop our ideas, we need to start with the simple and build to the complex. I will do this by looking at levels of complexity from which the decisions of the AI will be made. Each level will be explored based on the previous level, and I will examine both how it would be implemented and some problems that might occur.

Level 1–Brownian Motion

For our first level, let's keep things simple. In each decision cycle, let us examine each unit, build a list of possible orders that can be given to the unit and then pick one randomly. In other words, look around and do something (see Figure 2).

Figure 2: Level 1 – Random decision making (Brownian motion).
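A minimal sketch of this level in Python (the unit and order types are hypothetical stand-ins, not from the games described):

```python
import random

# Level 1: for each unit, enumerate the orders it could legally be
# given and pick one at random (Brownian motion).

def legal_orders(unit):
    """Hypothetical order list; a real game derives this from its rules."""
    orders = [("move", direction) for direction in ("N", "S", "E", "W")]
    orders.append(("hold", None))
    return orders

def level1_orders(units, rng=random):
    """Assign every unit a randomly chosen legal order."""
    return {unit: rng.choice(legal_orders(unit)) for unit in units}
```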
The problem with this level of decision
making is that the units are not directed in any
way towards achieving the computer player's
goals, i.e., winning the game or preventing
the opponent from achieving his goals. It's just
Brownian motion (the random motion of atoms), but it might have the advantage that it would confuse the heck out of any opponent.

Figure 3: Level 2 – Taking an Action that wins a game.

Level 2–Grab the Win
Let's address the problem of Brownian
motion. Instead of random decisions, for each
unit let us pick an action that achieves the
game's victory conditions. To be specific, look
around for each unit and try to determine if
there are any victory goals achievable by the
unit. If so, implement it. In Figure 3, I moved
a unit near an opponent's flag to show this.
When there are alternate actions that achieve the same goal, there is no filter for differentiating equal or nearly equal actions. Also, in all but the most simplistic games, no single decision or action can achieve a victory condition by itself. This puts our decisions back to random choices, as in Level 1.
Level 3–Head Towards the Goal
Expanding on the ideas of the Level 2 process,
we can evaluate each alternate action by how
well it moves us towards the victory
conditions. The actions are not evaluated in a
true/false analysis like the last level but,
instead, by creating an evaluation function and
giving each decision a numerical value. The
action with the best value would be the one
chosen. For example, a decision that would
move a unit to a victory location in two turns
would be worth more than a decision that
would move the unit to a victory location in
ten turns (see Figure 4). Note that no unit is
shooting at any other unit because that doesn't
move us towards our victory conditions.
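As a sketch, the evaluation function might simply score each candidate destination by the estimated number of turns left to the victory location (the grid distance below is an illustrative stand-in for a real movement estimate):

```python
# Level 3: score each candidate action by how far it leaves the unit
# from the victory location; choose the action with the best score.

def turns_to_goal(position, goal):
    """Chebyshev distance as a stand-in for a real movement estimate."""
    return max(abs(position[0] - goal[0]), abs(position[1] - goal[1]))

def evaluate(destination, goal):
    """Higher is better: fewer turns to the victory location scores more."""
    return -turns_to_goal(destination, goal)

def best_action(destinations, goal):
    """Pick the candidate destination with the highest evaluation."""
    return max(destinations, key=lambda dest: evaluate(dest, goal))
```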
So what problems do we see with this level? For
one thing, we attribute no value to decisions that
support reaching the conditions but that, in and
of themselves, do not achieve victory; an
example of this might be killing an opposing unit.
Level 4–Using Sub-goals
Many actions that units can engage in cannot
be directly attributed to the victory goals of a
game. The solution is to develop sub-goals
which assist the AI in achieving the victory
conditions, but which are not victory
conditions per se. These sub-goals are generally
easier to accomplish in the short term than the
primary game goals may be. When making a
decision for a unit, the AI evaluates the
possibility of achieving these sub-goals. Such
sub-goals might include killing enemy units,
protecting the flag, maintaining a defensive
line or achieving a strategic position. If the
sub-goal creation and evaluation process is
done judiciously, each resource will then be
taking some action that moves the player
towards the final victory goals. Because of
this, using sub-goals can actually produce a
semi-intelligent game (see Figure 5).
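One hypothetical way to sketch the sub-goal evaluation: give each sub-goal type a value relative to final victory and discount it by how long the unit would need to achieve it. The names and values below are invented for illustration:

```python
# Level 4: each unit considers a menu of sub-goals, each worth some
# fraction of the final victory value, weighed against the estimated
# cost (in turns) of achieving it.

SUBGOAL_VALUES = {
    "capture_flag": 100,   # the actual victory condition
    "kill_enemy_unit": 40,
    "protect_flag": 30,
    "take_strategic_position": 15,
}

def score_subgoal(name, turns_to_achieve):
    """Discount a sub-goal's value by how long it takes to achieve."""
    return SUBGOAL_VALUES[name] / (1 + turns_to_achieve)

def choose_subgoal(candidates):
    """candidates: list of (name, turns). Return the best-scoring name."""
    return max(candidates, key=lambda c: score_subgoal(c[0], c[1]))[0]
```

With this discounting, a distant flag capture can lose out to a nearby kill, which is exactly the short-term behavior sub-goals are meant to provide.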
However, even though we are starting to get a
playable AI, there are still problems: each unit
makes its decisions independent of all
others, but it is playing against a human who
does coordinate his resources. It's like
pre-Napoleonic warfare: it works well until
the opponent starts coordinating their forces.
Level 5–Coordination
How, then, do we allow for this coordination?
Let us allow the AI making a decision for a unit
to examine the decisions being made for other
friendly units. Weigh the possible outcomes
of the other units’ planned actions and then
balance those results with the current unit's
action tree. Now we are balancing our
resources with the goals that need to be
accomplished and are not engaged in overkill
or underkill.
This allows for coordination, but not strategic control of the resources. However, this level is actually beyond what many computer game AIs do today. One of the big problems is that it can lead to iterative cycling: changing a unit's decision based on other units' decisions requires those units to re-evaluate in turn, and their new decisions can create a circular, continuous process that either is never resolved or consumes a great deal of resources to resolve.
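One common way to keep that circular process in check is to bound the number of re-evaluation passes and stop early once no unit changes its plan. A sketch, with the per-unit decision logic left to the caller:

```python
# Level 5: re-evaluate each unit's decision in light of the others'
# current plans, but bound the number of passes so circular
# re-planning cannot loop forever. `decide` is any function mapping
# (unit, plans-of-everyone-else) to an order.

def coordinate(units, decide, max_passes=5):
    plans = {unit: None for unit in units}
    for _ in range(max_passes):
        changed = False
        for unit in units:
            others = {u: p for u, p in plans.items() if u != unit}
            order = decide(unit, others)
            if order != plans[unit]:
                plans[unit] = order
                changed = True
        if not changed:  # decisions have stabilized
            break
    return plans
```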
Level 6–A Command Hierarchy
This is where it gets interesting. Let us
create a strategic (or grand tactical) decision-making structure that can control units in a coordinated manner.

Figure 4: Level 3 – Making decisions that move your resources towards the goal.

Figure 5: Level 4 – Using sub-goals to make decisions.
This leads us to the problem of how one coordinates diverse resources to reach a number of sub-victory goals. This question
may be described as strategic level decision
making. One solution would be to look at how
the problem is solved in reality, i.e., on the
battlefield or in business.
The solution on the battlefield is several layers
of a hierarchical command and control. For
example, squads 1, 2 and 3 are controlled by
Company A, which is controlled by a higher
layer of the hierarchy. Communications of
information mostly go up the hierarchy (e.g.,
information about a squad 20 miles away is
not relayed down to the local squad), while
control mostly goes down the hierarchy. On
occasion, information and control can cross
the hierarchy, and, although it's happening
more now than 50 years ago, it is still
relatively infrequent.
As a result, the lowest level unit must depend
on its hierarchical commander to make
strategic decisions. It cannot do this itself
because a) it doesn't have as much information
with which to make the decision as its
commander, and b) it is capable of making a
decision based on known data different than
others with the same data, causing chaos
instead of coordination.
First cut solution, then: we build our own
hierarchical control system, assigning units to
theoretical larger units or, in the case where
the actual command/control system is
modeled (V for Victory), the actual larger
units. Allow these headquarters to control
their commands and to work with other
headquarters as some type of “mega-unit.”
These, in turn, could report to and be
controlled by some larger unit (see Figure 6).
However, there seem to be some problems
here: the hierarchical command system
modeled on the real world does not make
optimal use of the resources. Because of the
hierarchical structure, too many resources
may be assigned to a specific task, or resources
in parallel hierarchies will not cooperate. For
example, two units might be able to easily
capture a victory location near them, but,
because they each belong to a separate
command hierarchy (mega-unit), they will not coordinate to do so. Note that if, by chance, they did belong to the same hierarchy, they would be able to accomplish the task. In other words, this artificial structure can be too constraining and might produce sub-optimal results.

Figure 6: Level 6 – Using a command hierarchy to make decisions.
Additionally, a single human opponent does
not have these constraints. All of the
information exists in one place (the player's
mind) and the barriers of communication are
therefore removed.
First, we must ask ourselves this: if the
hierarchical command and control structure is
not the best solution, why do businesses and
the military use it? The difference is in the
realities of the situations. As previously noted,
on the battlefield, information known at one
point in the decision-making structure may
not be known at another point in
the hierarchy. In addition, even if all
information was known everywhere, identical
decisions might not be made from the same
data. However, in game play, there is only one
decision-maker (either the human or the AI)
and all information known is known by
that decision-maker. This gives the decision-maker much more flexibility in controlling
and coordinating resources than does the
military hierarchy.
In other words, the military and business system of strategic decision making is not our best model. It exists because of constraints on communication, but those constraints do not usually exist in strategy games–command and control are considered perfect in this context–so modeling military command and control decision making will not solve the problem in game-play AI.
We want the best technique of decision
making that we can construct for our AI.
Below, then, is an alternative Sixth Level
attack on the problem.
Project AI–A Level 6 Alternative
This leads us to a technique I call Project AI.
Project AI is a methodology that extrapolates
the military hierarchical control system into
something much more flexible.
The basic idea behind Project AI is to create a
temporary mega-unit control structure
(called a Project) designed to accomplish a
specific task. Units (resources) are assigned to
the Project on an as-needed basis, used to
accomplish the project and then released
when not required. Projects exist temporarily
to accomplish a specific task and then
are released.
Therefore, as we cycle through the decision-making process of each unit, we examine the
project the unit is assigned to (if it is assigned
to one). The project then contains the
information needed for the unit to accomplish
its specific goal within the project structure.
Note that these goals are not the final victory
conditions of the game but are very specific
sub-goals that can lead to game victory.
Capturing a victory location is an obvious goal
here, but placing a unit in a location with a
good line of sight could also be a goal,
although less valuable (see Figure 7 for an
example of this).
Let's get a little more into the nitty gritty of
the structure of such projects and how they
would interact. What are some possible
characteristics of a project?
• Type of project–What is the project
trying to accomplish? This is basically our
goal for the project. Project type examples:
Kill an enemy unit.
Capture a location.
Protect a location.
Protect another unit.
Invade a region.
• Specifics of the project–Exactly what are
the specifics for the project? Examples are
“kill enemy unit 2,” “capture location 4,” etc.
• Priority of the project–How important is
the project compared to other ongoing
projects toward the final victory? This
priority is used in general prioritizing and
killing off low priority projects should
there be memory constraints.
• Formula for calculating the incremental
value of assigning a unit to a project–in
other words, given a unit and a large
number of projects, how do we discern
what project to assign the unit to? This
formula might take into account many
different factors including how effective
the unit might be on this project, how
quickly the unit can be brought in to
support the project, what other resources have already been allocated to the project or what is the value of the project, among others. In practice, I normally associate a different formula with each project type and then each project carries specific constants that are plugged into the formula. Such constants might include enemy forces opposing the project, minimum forces required to accomplish the project and probability of success.

• A list of units assigned to the project.

• Other secondary data.

Figure 7: Project AI – Using the project structure to make decisions.
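The characteristics above might be gathered into a structure like the following sketch; the field names and the example value formula are my own illustration, not taken from the games described:

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    kind: str          # type of project, e.g. "capture_location"
    target: str        # specifics, e.g. "location 4"
    priority: float    # importance toward the final victory
    min_force: int = 1 # an example project-specific constant
    assigned: list = field(default_factory=list)  # units on the project

    def incremental_value(self, unit_effectiveness, turns_to_join):
        """Example formula: a unit is worth more to this project if the
        project is important, the unit would be effective, it can join
        quickly, and the project is still short of its minimum force."""
        need = 1.0 if len(self.assigned) < self.min_force else 0.25
        return self.priority * unit_effectiveness * need / (1 + turns_to_join)
```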
How do we actually use these projects? Here
is one approach:
1. For every turn, examine the domain for
possible projects, updating the data on
current projects, deleting old projects
that no longer apply or have too low a
priority to be of value, and initializing
new projects that present themselves.
For example, we have just spotted a unit
threatening our flag, so we create a new
project which is to destroy the unit, or,
if the project already existed, we might
have to reanalyze both value and resources
required considering the new threat.
2. Walk through all units one at a time,
assigning each unit to that project which
gives the best incremental value for the
unit. Note that this actually may take an
iterative process since assigning/
releasing units to a project can change
the value of assigning other units
to a project, which means that we may
have to reassess this multiple times.
Also, some projects may not receive
enough resources to accomplish their
goal and may then release those
resources previously assigned.
3. Reprocess all units, designing their
specific move orders taking into account
what project they are assigned to and
what other units also assigned to the
project are planning to do. Again, this
may be an iterative process.
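The three steps can be sketched as a per-turn driver loop. Everything here is illustrative: the project survey (step 1) is assumed done by the caller, and the order design (step 3) is a function the caller supplies:

```python
# One turn of the Project AI cycle: (2) assign each unit to whichever
# project values it most, iterating a few bounded passes since
# assignments change values, then (3) design orders per project.

def assign_units(units, projects, value, max_passes=5):
    """Step 2: greedily assign each unit to its best-value project,
    re-checking for a few passes because assignments change values."""
    assignment = {}
    for _ in range(max_passes):
        changed = False
        for unit in units:
            best = max(projects, key=lambda p: value(unit, p, assignment))
            if assignment.get(unit) != best:
                assignment[unit] = best
                changed = True
        if not changed:
            break
    return assignment

def run_turn(units, projects, value, design_orders):
    """Steps 2 and 3 for one turn, given this turn's surveyed projects."""
    assignment = assign_units(units, projects, value)
    return {unit: design_orders(unit, assignment[unit], assignment)
            for unit in units}
```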
The result is a very flexible, floating structure that allows units to coordinate among themselves to meet
specific goals. Once the goals have been met,
the resources can be reconfigured to meet
other goals as they appear during the game.
One of the implementation problems that Project AI can generate is oscillation of units between projects. In other words, a unit gets assigned to one project in one turn, which makes a competing project more important, and that project grabs the unit back the next turn. This can result in a unit wandering between two goals and never reaching either because, as it changes its location, the decision-making algorithms keep reassigning it. The designer needs to be aware of this possibility and protect against it.
Although there can be several specific solutions to the problem, there is at least one generic solution. Previously, we mentioned a formula for calculating the incremental value of adding a unit to a project–the solution lies in this formula. To be specific, a weight should be added to the formula when a unit is evaluating a project it is already assigned to (i.e., a preference is given to remaining with a project instead of jumping projects). The key problem here is assigning a weight large enough that it stops the oscillation but small enough that it doesn't prevent necessary jumps between projects. One may have to massage the weights several times before a satisfactory value is achieved. This is a trial-and-error process, and developing the proper values can almost be an art.
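The stickiness weight might be sketched like this; the size of the bonus is exactly the kind of constant the text says must be tuned by trial and error:

```python
# Add a "stay put" bonus to the incremental-value formula: a unit
# evaluating the project it already belongs to gets extra weight,
# which damps oscillation between two nearly equal projects.

STICKINESS = 1.5  # illustrative; tuned per game by trial and error

def sticky_value(unit, project, current_project, base_value):
    """base_value(unit, project) is the normal incremental value."""
    bonus = STICKINESS if project == current_project else 0.0
    return base_value(unit, project) + bonus

def choose_project(unit, projects, current_project, base_value):
    """Pick the best project, preferring the one already assigned."""
    return max(projects,
               key=lambda p: sticky_value(unit, p, current_project, base_value))
```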
And onward…
One can extrapolate past the “Project” structure just as we built up to it. One extrapolation might be a multilayer hierarchy of projects and sub-projects. Much could also be done in developing methods to resolve the iteration cycles and improve performance. There are other possibilities to explore as well.
This methodology has been used and
improved in a number of my games over
the years, including Empire, The Perfect General and Metal Fatigue. It is an effective technique
for giving the computer opponent
strategic smarts.
Biography
After earning a master's degree in Engineering, Mark
Baldwin initially had a successful career working on the
Space Shuttle as a flight designer. Jumping careers during
the mid 1980s, he moved from being a rocket scientist to a
game designer. Since then, he has written, programmed,
designed, directed and/or produced over 30 commercial
computer games and has won numerous awards for his
games, including “Game of the Year.” He is currently a
consultant in the game industry as well as a computer games
teacher. One of his current related interests is virtual railroading and railroading history, which can be found at his website, http://gilpintram.com.
c a l l  f o r  p a p e r s
Call for Submissions and Publication Guidelines
Works published in the Journal of Advancing Technology (JAT) focus on all areas of
technology, including theory, innovation, development and application. Topics might
include, but are not limited to, the following:
• Network Architecture
• System Security
• Biometrics
• Computer Forensics
• Video Game Theory and Design
• Digital Video and Animation
• Digital Art and Design
• Web Design and Programming
• Software and Database Development
• Entrepreneurship through Technology
• Society and Technology
• Artificial Life Programming
We also welcome a wide range of forms and approaches: narrative discussions,
observations of trends and developments, quantitative and qualitative research,
technological application, theoretical analysis and applied theory, interviews with or
profiles of individuals involved in technology-related fields, literature and product
reviews, letters to the editor, updates to research previously published in this journal, etc.
Themes we would like to explore in future issues include the development of
emerging technologies as they are tied to social need; examinations of developing
industry practices in relation to workflow, project development or human resources;
intelligence systems; simulation or entertainment technology in relation to popular
culture; stealth technologies; and the development and application of environmental
simulation software. However, we routinely read and consider all submissions relating
to advancing technology, regardless of theme.
We want to publish profound works that display creativity, and we will actively
participate in the development of new and original voices in technology writing.
The JAT editors encourage works from beyond the boundaries of traditional
scholarship. Not every Nobel Prize winner attended college or dedicated their lives to
academic research, a fact which should never discount the authenticity of their work.
Authors will be paid in copies of the Journal.
Format criteria
• Submitted work should be 2,000 to 10,000 words in length, or five to
twenty pages in length (based on a manuscript submitted in Times New Roman
10-pt font).
• JAT publication style is based on APA style and formatting guidelines.
• Articles may be accompanied by illustrations, graphics, charts, tables or other
images. Please submit these in the highest resolution possible. If images are
unavailable, please provide a concise explanation of what images will suffice.
• Alternative submission formats will be considered on an individual basis.
• Articles should include a short abstract of no more than a half-page in length that
describes the paper's focus.
• Articles should include a short biography of 25-50 words.
• Authors must indicate their by-line affiliation (school, company, etc.) in
their manuscripts.
Contact Information
Letters to the Editor
Letters to the editor on any matter related to JAT content will be considered. JAT
reserves the right to edit these for clarity. Please send messages to journal@uat.edu or
to our offices at 2625 W. Baseline Rd., Tempe, AZ, 85283-1056.
Submissions
JAT reads submissions year-round. Articles submitted before March 15th may be
considered for the Spring/Summer issue; articles submitted before August 15th may
be considered for the Fall/Winter issue. Unsolicited article submissions are always
welcome. Please send submissions to the editors at journal@uat.edu or to our offices
at 2625 W. Baseline Rd., Tempe, AZ, 85283-1056.
Subscriptions
To subscribe to the Journal of Advancing Technology, visit www.uat.edu/JAT or call us
at 602.283.8271 and let us know if you prefer an email copy or a printed copy. If you
prefer a printed copy, please include a full address and telephone number.
2625 W. Baseline Rd. > Tempe, AZ 85283-1056
Phone 800.658.5744 > Fax 602.383.8222
www.uat.edu