Techno Savvy - NYU Stern School of Business

See page 115 for Analyst Certification and Important Disclosures
Equity Research: United States
Thematic Investing
Edward M. Kerschner, CFA
212-816-3532
edward.kerschner@citigroup.com
Michael Geraghty
212-816-3534
michael.j.geraghty@citigroup.com
Techno Savvy
Profiting from Adapting, Exploiting, and
Innovating Technologies
¤ Tech Adapters: A significant competitive advantage can
be gained by adapting existing technologies.
¤ Tech Exploiters: Identifying the optimal application of a
relatively new technology is often as important as the
technology itself.
¤ Tech Innovators: By contrast, tech innovators actually
create something new, often by taking science from the
laboratory to the marketplace.
¤ Key technologies today that may be utilized by techno-savvy companies include: dual clutch transmission,
e-money, fuel cells, health savings account (HSA)
“smart cards,” oil and gas drilling and completion
technology, “phood,” radio frequency identification
(RFID), real-time patient monitoring, and voice over
Internet protocol (VoIP).
¤ Among companies well positioned to use those
technologies are BJ’s Wholesale Club, BorgWarner, Cisco
Systems, eBay, EOG Resources, Euronet Worldwide,
Martek Biosciences, Mechanical Technology, Medtronic,
Plug Power, Senomyx, and UnitedHealth Group.
Smith Barney is a division of Citigroup Global Markets Inc. (the “Firm”), which does and seeks to do business with companies covered in
its research reports. As a result, investors should be aware that the Firm may have a conflict of interest that could affect the objectivity
of this report. Investors should consider this report as only a single factor in making their investment decision.
United States
April 8, 2005
Techno Savvy – April 8, 2005
Table of Contents
Summary .................................................................................................................................................... 3
Tech Adapters, Exploiters, and Innovators.................................................................................................. 3
Lessons from History .................................................................................................................................. 3
Identifying Key Technologies ..................................................................................................................... 4
12 Case Studies ........................................................................................................................................... 5
Risks to Our Thematic Outlook................................................................................................................... 6
Techno Savvy ........................................................................................................................................... 10
Identifying Key Technologies ................................................................................................................... 12
Techno-Savvy Companies......................................................................................................................... 14
Tech Adapters: A Historical Perspective ............................................................................................ 15
Companies That Adapted Existing Technologies...................................................................................... 15
Thomas Edison: “Study and Read Everything You Can on the Subject”................................................. 15
Henry Ford: “I Invented Nothing New” ................................................................................................... 17
David Sarnoff: “A Plan of Development” ................................................................................................ 20
Tech Exploiters: A Historical Perspective............................................................................................. 22
Companies That Identified the Optimal Application of a Technology...................................................... 22
Columbia Graphophone: Exploiting the Phonograph............................................................................... 22
Sony: Exploiting the Tape Recorder ........................................................................................................ 23
Sony: Exploiting the Transistor................................................................................................................ 25
RCA: Exploiting Television ..................................................................................................................... 27
Tech Innovators: A Historical Perspective............................................................................................ 29
Companies That Innovated by Using New and/or Existing Technologies ................................................ 29
IBM: Innovations in Hardware................................................................................................................. 29
Texas Instruments: Innovations in Chips ................................................................................................. 32
Microsoft: Innovations in Software.......................................................................................................... 33
Genentech: Innovations in Biotechnology................................................................................................ 35
Lessons from History; Implications for the Future .............................................................................. 36
Four Key Implications............................................................................................................................... 36
Tech Adapters: Three Case Studies....................................................................................................... 40
BorgWarner’s Dual Clutch Transmission ................................................................................................. 40
EOG Resources’ Oil and Gas Drilling and Completion Technology ........................................................ 43
UnitedHealth Group’s Health Savings Account (HSA) “Smart Card”...................................................... 47
Tech Exploiters: Three Case Studies ..................................................................................................... 53
BJ’s Wholesale Club and Radio Frequency Identification ........................................................................ 53
Cisco and Voice over Internet Protocol (VoIP)......................................................................................... 58
Medtronic’s Real-Time Patient Monitoring .............................................................................................. 66
Tech Innovators: Six Case Studies......................................................................................................... 74
Fuel Cells: Mechanical Technology and Plug Power ............................................................................... 74
E-Money: PayPal and Euronet Worldwide............................................................................................... 82
Phood: Senomyx and Martek ................................................................................................................... 95
Appendix A ............................................................................................................................................ 104
Technology Candidates ........................................................................................................................... 104
The following analysts contributed to the case studies in this report: Charles Boorady,
Matthew Dodds, B. Alex Henderson, Jon V. Rogers, David B. Smith, Elise Wang, Deborah
Weinswig, Tony Wible, and Gil Yang.
Summary
History shows that the “users” of technology are often more successful than the
“makers” of technology. Techno-savvy companies use technologies — both new
and existing — to gain a competitive advantage.
Tech Adapters, Exploiters, and Innovators
Tech Adapters: A significant competitive advantage can be gained by adapting
existing technologies (e.g., Ford’s use of mass production and Edison’s use of
electrical lighting).
Tech Exploiters: Identifying the optimal application of a relatively new technology
is often as important as the technology itself (e.g., using the phonograph for music
reproduction, not dictation, and transistors for radios, not hearing aids).
Tech Innovators: By contrast, tech innovators actually create something new (e.g.,
Texas Instruments’ integrated circuit and Genentech’s human insulin), often by
taking science from the laboratory to the marketplace.
Lessons from History
A review of how companies have exploited technologies to their advantage yields a
number of lessons.
A Significant Competitive Advantage Can Be Gained by Adapting
Existing Technologies
Ford adapted three manufacturing technologies that had evolved over the course of
100 years, and which were employed by other companies in disparate industries:
interchangeable parts (Singer Manufacturing Company), continuous-flow production
(Campbell Soup), and assembly-line production (Swift). Similarly, Thomas Edison
combined the existing technologies of electricity and the incandescent bulb with the
proven business model of the gas industry in order to establish his electrical
illumination system, which provided power to entire cities.
Identifying the Optimal Application of a Relatively New Technology
Is Often as Important as the Technology Itself
Thomas Edison insisted for years that the primary use of his talking machine should
be for taking dictation in offices. In the 1950s, it was thought that the tape recorder
would lead to “talking magazines.” The engineers at Western Electric suggested
that the transistor be used to satisfy burgeoning demand for hearing aids. As it
turned out, significant mass markets never developed for office dictating machines,
“talking magazines,” or hearing aids. But significant markets did develop for the
phonograph, the tape recorder, and the transistor radio.
The Company That Becomes Most Associated with a Technology Is
Not Always Its Inventor
It’s said that Picasso once observed that “mediocre artists borrow, great artists
steal.” “Stealing” is a strong word when it comes to exploiting technology. But it
certainly is the case that a company that first develops a new technology is not
always the ultimate beneficiary, given that other companies may leverage that
technology more successfully. It’s said that Steve Jobs of Apple not only admitted,
but even boasted, of having stolen the graphical user interface (GUI) from Xerox
PARC (Palo Alto Research Center). When Mr. Jobs accused Bill Gates of Microsoft
of stealing the GUI from Apple and using it in Windows 1.0, Mr. Gates fired back:
“No, Steve, I think it’s more like we both have a rich neighbor named Xerox, and
you broke in to steal the TV set, and you found out I’d been there first, and you
said, ‘Hey, that’s no fair! I wanted to steal the TV set!’”
Innovation Is Not Always Associated with Commercial Success
The recurring theme of this report is that the “users” of a technology are often more
successful than its “makers.” An excellent example of this is Xerox, which is
credited with developing one of the first personal computers (the Alto), the concept
of personal distributed computing, the GUI, the first commercial mouse, Ethernet,
client/server architecture, laser printing, and many of the basic protocols of the
Internet. Yet Xerox has become famous for “fumbling the future” by failing to
commercially exploit any of those ideas. Likewise, in 1942, John Mauchly, an
engineering professor at the University of Pennsylvania, proposed and built an
electronic calculator. Professor Mauchly was unable (or, some say, unwilling) to
exploit the commercial possibilities of computing, and his company quickly
disappeared inside Remington Rand. Subsequently, that (UNIVAC) division was
neglected by Remington’s top executives and it fell far behind its key competitor,
International Business Machines (IBM).
Identifying Key Technologies
Smith Barney retained Gartner Consulting, one of the leading providers of research
and analysis on the global information technology industry, to assist in identifying
dozens of candidate technologies. Of these, many were rejected, not because of
doubts about their feasibility but, rather, because they did not have meaningful profit
potential for identifiable publicly traded companies in the next three to five years.
We whittled the original list of more than 50 candidates (see Appendix A) down to
nine technologies:
¤ dual clutch transmission,
¤ e-money,
¤ fuel cells,
¤ health savings account (HSA) “smart cards,”
¤ oil and gas drilling and completion technology,
¤ “phood” (i.e., food that offers pharmaceutical benefits),
¤ radio frequency identification (RFID),
¤ real-time patient monitoring, and
¤ voice over Internet protocol (VoIP).
12 Case Studies
We then did case studies of 12 techno-savvy companies that seem well positioned to
use these technologies:
¤ BJ’s Wholesale Club,
¤ BorgWarner,
¤ Cisco Systems,
¤ eBay,
¤ EOG Resources,
¤ Euronet Worldwide,
¤ Martek Biosciences,
¤ Mechanical Technology,
¤ Medtronic,
¤ Plug Power,
¤ Senomyx, and
¤ UnitedHealth Group.
In order to understand how these companies are seeking to gain a competitive
advantage, we interviewed managers responsible for the development and
implementation of the respective technologies.
Techno-Savvy Companies — Technology Adapters
BorgWarner’s dual clutch transmission is based on concepts that go back to the
1940s, but only recent advances in electronics and hydraulic controls have made the
technology feasible.
EOG Resources is achieving superior results by applying some well-known drilling
and completion technologies in a more effective way than its competitors.
UnitedHealth Group is combining the existing technology of “smart cards” with
the proven business model of the managed care industry in order to establish a
strong competitive position in the new environment of health savings accounts
(HSAs).
Techno-Savvy Companies — Technology Exploiters
An excellent use of radio frequency identification (RFID) — first introduced for
aircraft identification during World War II — is tracking items through a supply
chain; BJ’s Wholesale Club is well positioned to use RFID to lower its labor and
distribution costs.
Cisco Systems is exploiting the adoption of voice over Internet protocol (VoIP),
because enterprises tend to spend three to five times as much on security and
network switching equipment as they do on VoIP itself.
Medtronic is exploiting the latest developments in networking technology to
develop real-time patient monitoring devices.
Techno-Savvy Companies — Technology Innovators
The concept of fuel cells has been around for almost 200 years, but Mechanical
Technology just recently introduced a commercial micro fuel cell, while Plug
Power is the first company to bring to market a reliable, economically viable fuel
cell for on-site power generation.
With regard to emerging e-money, eBay’s Internet-based PayPal payment system is
facilitating global peer-to-peer e-commerce, while Euronet Worldwide’s eTop Up
service is facilitating local day-to-day e-payments.
As for “phood,” Senomyx has a unique approach to developing flavor enhancers by
applying the same innovative technologies used by biotechnology and
pharmaceutical companies; Martek Biosciences has a library of over 3,500 species
of microalgae from which it derives a food additive that may have cardiovascular
benefits, and may also decrease the risk of diseases such as dementia and
Alzheimer’s in older adults.
Risks to Our Thematic Outlook
The key risks to our techno savvy theme are that some of the technologies that we
discuss do not succeed as expected or, alternatively, that some of the companies we
have identified as techno savvy fail to execute.
We further note that our analysis does not consider stock-specific metrics such as
valuation, EPS, and P/E ratios, or balance sheets, market capitalization, and
liquidity. Accordingly, when making investment decisions, investors should view
thematic analysis as only one input to their investment decision. Since thematic
analysis employs a longer-term methodology, its results may differ from the
conclusions of fundamental analysis.
Figure 1. Summary of Nine Key Technologies and 12 Techno Savvy Companies
Dual Clutch Transmission: Using this transmission, which is based on a manual gearbox, the driver can
initiate the gear change manually, or can leave the shift lever in fully automatic mode. In other words, a car can
meet the demand for a sporty, responsive driving experience or, alternatively, for the convenience of an
automatic that offers good fuel economy.
¤ BorgWarner is the only supplier on the market with a dual clutch transmission (DCT). The technology is
particularly attractive in Europe, with its small car engines and highly taxed gasoline, because inherent in
the DCT design is the ability to take advantage of the investments that automakers have already made in
manual transmission production facilities. Asian markets also have potential: Manual transmission
penetration is nearly 100% in India, around 75% in China, and more than 50% in South Korea.
Oil and Gas Drilling and Completion Technology: Horizontal wells improve production volumes. 3-D seismic
data (geologic imaging) can help avoid geologic pitfalls. Successful fracturing creates controlled fissures (i.e.,
not too big or small) so that gas can flow around the rock and into the pipe.
¤ EOG Resources is achieving superior results in the Barnett Shale — a geologic formation in the southern
U.S. containing vast amounts of natural gas — by applying some well-known drilling and completion
technologies in a more effective way than its competitors. Improved drilling and completion technology is
most beneficial when it is novel. But once the technology is widely employed in the industry, leasehold
prices escalate. EOG’s average acquisition prices in the Barnett Shale are around $200 per acre, but now
that EOG’s E&P technology is more widely accepted, recent deals in the area average $11,000–$17,000
per acre.
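The escalation implied by those per-acre figures can be confirmed with quick arithmetic (a sketch; the variable names are ours, not the report’s):

```python
# Implied leasehold price escalation in the Barnett Shale,
# using the per-acre figures quoted above.
eog_avg_price = 200                        # EOG's average acquisition cost per acre
recent_low, recent_high = 11_000, 17_000   # recent deals per acre

multiple_low = recent_low / eog_avg_price    # 55x
multiple_high = recent_high / eog_avg_price  # 85x
print(f"Recent deals imply {multiple_low:.0f}x-{multiple_high:.0f}x "
      f"EOG's average entry cost per acre.")
```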
Health Savings Account (HSA) “Smart Card”: Just as banks must have real-time data for an ATM to work, in
the forthcoming HSA environment, health plans and their members will have to operate in a real-time
environment too.
¤ UnitedHealth Group is approaching completion of an HSA “smart card.” United’s technology is superior in
large part because its technology spending — funded by more than $3 billion in annual free cash
generation — significantly outstrips its competitors. United is the only managed care organization with all
four HSA pieces (i.e., the company can offer the plan, administer it, act as an HSA custodian, and offer an
HSA debit card). Moreover, United’s position as one of the largest benefit providers in the country enables
it to roll out a new technology that quickly enjoys widespread adoption, thereby spreading development
costs over a large membership base. Further, because United’s technology group must compete for the
business of other United units, it is under constant pressure to stay at the leading edge. Reflecting the
success of this approach, the in-house technology group today competes against companies (e.g.,
Accenture, EDS, and Perot) that provide services to United’s competitors.
Radio Frequency Identification (RFID): RFID is a generic term for technologies that use radio waves to
automatically identify individual items. RFID offers retailers decreased distribution and labor costs.
¤ BJ’s Wholesale Club seems well positioned to benefit from the introduction of RFID. Warehouse format
retailers are likely to be early adopters, given that most large suppliers are currently including RFID tags at
the pallet level. BJ’s is a relatively small company with only three distribution centers (versus 26–110 for
its closest competitors), so it would be relatively simple for BJ’s to upgrade its distribution system to RFID.
It would then be easier to track BJ’s large number of SKUs, which have been adding to its labor
and distribution costs. Deborah Weinswig, Smith Barney’s broadlines retailing analyst, estimates that RFID
adoption could boost BJ’s 2007 EPS by 27%, versus 22% for Wal-Mart and 15% for Costco.
Voice over Internet Protocol (VoIP): VoIP involves sending voice communications as digital packets over data
networks, such as the Internet, alongside e-mails and Web traffic. In contrast to traditional phone calls, which
require a dedicated circuit to connect the callers at each end, VoIP service relies on software. Without VoIP,
some companies need to maintain four separate communications infrastructures: the data network; a “private
branch exchange” (PBX) for external phone calls; an “automatic call distributor” to route calls internally; and a
voice-mail system.
¤ Cisco Systems is exploiting the adoption of VoIP because the conversion tends to cost three to five times
as much in security and network switching equipment as VoIP itself costs. Given Cisco’s dominant (80%)
share of the networking equipment market (in contrast to its one-third share of the VoIP market), the
company is particularly well positioned to grab the lion’s share of those revenues. A key driver of VoIP
adoption is that, for the majority of enterprises, the telecommunications equipment in place today is much
older than it should be, and so it is increasingly costly to maintain.
Real-Time Patient Monitoring: The evolution of wireless networking technology is facilitating real-time
monitoring of patients’ conditions.
¤ Medtronic is developing a wireless system for real-time monitoring of congestive heart failure, a muscular
problem whereby the heart enlarges and loses pumping function, causing fluid to back up in the lungs.
Medtronic appears well in front of the competition in terms of fluid management, the next “game
changing” technology in the CRM (cardiac rhythm management) space. Its Chronicle ICD (Implantable
Cardioverter Defibrillator) will offer physicians access to changes in fluid levels on a real-time basis.
Medtronic is also developing a real-time monitor of blood sugar levels for those suffering from diabetes. In
2007, the company is expected to combine the glucose sensor with an insulin pump to create a single
device. Medtronic appears to hold a major advantage over its competition in the development of an
artificial pancreas.
Fuel Cells. Like batteries, fuel cells generate electricity through a chemical reaction. However, unlike batteries,
fuel cells are recharged with a fuel source (e.g., methanol).
¤ Mechanical Technology just recently introduced a commercial micro fuel cell. A new technology —
direct methanol fuel cells (DMFC) — seems likely to replace lithium ion batteries in the plethora of portable
electronic devices (PCs, cell phones, PDAs, cameras, etc.) because of its advantages of lighter
weight/greater portability (a few drops of neat methanol can provide the same power as a lithium ion
battery) and improved energy density (the power produced by a fuel cell is approximately two to three
times that of an equivalent lithium ion power pack). Mechanical Technology has entered into a strategic
alliance agreement with Gillette, whereby Mechanical Technology, Gillette, and Gillette’s Duracell business
unit intend to develop and commercialize micro fuel cell products to power handheld, mass-market, high-volume, portable consumer devices.
¤ Plug Power is the first company to bring to market a reliable, economically viable fuel cell for on-site
power generation. Plug Power’s GenCore system targets telecom and broadband backup systems. At
$15,000 for a 5-kilowatt system that lasts ten years or more, the GenCore unit compares favorably to lead
acid batteries that cost $18,000 for the same output, but last only three to five years and cost $12,000 to
replace each time. Plug Power will subsequently introduce fuel cells into markets such as forklifts and
auxiliary power on heavy trucks and marine applications. Thereafter, the GenSys platform is expected to
roll out on-site prime power electricity generation for residential uses.
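The cost comparison above can be laid out as a rough ten-year cost of ownership (a sketch using the report’s figures; the four-year battery life is our assumption within the stated three-to-five-year range):

```python
# Rough 10-year cost-of-ownership comparison for a 5-kilowatt backup
# power installation, using the figures quoted above.
HORIZON_YEARS = 10

gencore_cost = 15_000          # one GenCore unit lasts ten years or more

battery_initial = 18_000       # first lead-acid installation
battery_life = 4               # assumed years per battery set (3-5 year range)
battery_replacement = 12_000   # cost of each replacement set

# Replacement sets needed after the first set expires within the horizon
replacements = (HORIZON_YEARS - 1) // battery_life   # e.g., at years 4 and 8
battery_cost = battery_initial + replacements * battery_replacement

print(f"GenCore 10-yr cost:   ${gencore_cost:,}")
print(f"Lead-acid 10-yr cost: ${battery_cost:,}")
```

On these assumptions the lead-acid route costs roughly $42,000 over ten years, versus $15,000 for the fuel cell.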
E-Money: Various types of e-money are emerging to facilitate global peer-to-peer e-commerce and local day-to-day e-payments.
¤ eBay’s PayPal payment system (comprising approximately 25% of eBay’s revenues) is effectively the
global e-money that facilitates global peer-to-peer e-commerce. The PayPal service uses the existing
financial infrastructure to enable its account holders to send and receive payments via e-mail in real time.
PayPal currently has users in 45 countries; yet it is being used on only about 23% of eBay’s
international transactions, compared to 50% of U.S. transactions. Furthermore, while most PayPal
customers use the service for trading on eBay, other forms of “off eBay” use are growing. This is a big
market that PayPal has just begun to tap — only 0.5% of international “off eBay” e-commerce
transactions are processed across its platform. In 2004, the company had around $5.7 billion in these
“Merchant Services payments” (25% of PayPal’s total payment volumes), representing an 80% compound
annual growth rate over the past three years.
¤ Euronet Worldwide’s eTop Up service is emerging as a form of e-money that facilitates local day-to-day
e-payments. Given the high penetration of cell phones in many countries overseas, prepaid mobile phone
service was an obvious market for eTop Up to address first. However, any transaction that is recurring and
cash based could potentially be moved to the eTop Up model. So Euronet recently expanded into other
segments, including payment methods for music and gambling. This e-money format is particularly
applicable in the developing economies of Eastern Europe, Latin America, and Asia, as well as the
developed economies of Western Europe, where the absence of a “credit culture” and the lack of a credit
history necessitate a consumer prepaid/debit model.
“Phood”: “Phood,” or food that offers pharmaceutical benefits (such as milk with vitamin D or orange juice
with calcium), can also involve using biotechnology for more sophisticated health benefits.
¤ Senomyx is a biotechnology company focused on sensory and taste-receptor-based technology. One
aspect of “phood” is flavor enhancers that improve the health aspects of packaged food and beverage
products by reducing additives such as monosodium glutamate (MSG), salt, and sugar, while maintaining
or enhancing the taste of the product. Senomyx has collaboration agreements with Campbell Soup, Coca-Cola, Kraft Foods, and Nestlé. Following Senomyx’s recent receipt of Generally Recognized as Safe (GRAS)
designation for its savory flavor enhancers, Nestlé will be able to begin consumer acceptance testing of
food products containing Senomyx’s savory enhancers. The first commercial sale of such products could
occur during the first half of 2006, which would result in royalty payments to Senomyx. Based on the
existing agreements with its four collaborators, Senomyx’s immediate addressable market opportunity is
approximately $36 billion in sales. The company could receive royalties on product sales in a range of
1%–4%, suggesting approximately $360 million–$1.44 billion in revenues.
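The royalty range above follows directly from the stated figures (a sketch; the variable names are ours):

```python
# Royalty arithmetic on Senomyx's immediate addressable market,
# using the figures quoted above.
addressable_sales = 36e9                  # ~$36 billion in collaborator sales
royalty_low_rate, royalty_high_rate = 0.01, 0.04

royalty_low = addressable_sales * royalty_low_rate    # ~$360 million
royalty_high = addressable_sales * royalty_high_rate  # ~$1.44 billion
print(f"Potential royalties: ${royalty_low / 1e6:,.0f}M "
      f"to ${royalty_high / 1e9:.2f}B")
```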
¤ Martek Biosciences has a library of over 3,500 species of microalgae from which it derives a food
additive (DHA) that may have cardiovascular benefits, and that may also decrease the risk of diseases such
as dementia and Alzheimer’s in older adults. Martek manufactures the only source of DHA approved by
the FDA for use in infant formula. The company recently announced it has entered into a 15-year,
nonexclusive license and supply agreement with Kellogg. Under the terms of the agreement, Kellogg will
develop food products containing Martek’s DHA. Martek’s oils contain minimal fatty acids; are derived
from all-natural, vegetarian sources; have minimal taste and odor; and have a high oxidative stability and a
long shelf life.
Source: Smith Barney
Techno Savvy
Sir Francis Bacon, the 17th-century English philosopher, argued that European
scientific progress was built upon a foundation of three key technological
discoveries — printing, gunpowder, and the magnetic compass. As he noted in his
book Novum Organum, published in 1620, this triumvirate was responsible for
revolutionizing literature, warfare, and navigation:
Again, it is well to observe the force and virtue and consequences of discoveries,
and these are to be seen nowhere more conspicuously than in those three which
were unknown to the ancients, and of which the origin, though recent, is obscure
and inglorious; namely, printing, gunpowder, and the magnet. For these three
have changed the whole face and state of things throughout the world; the first
in literature, the second in warfare, the third in navigation; whence have
followed innumerable changes, insomuch that no empire, no sect, no star seems
to have exerted greater power and influence in human affairs than these
mechanical discoveries.
While Bacon wrote that the origin of these discoveries was “obscure,” all three
inventions were the products of Chinese, not European, civilization. It is arguable,
however, that the Europeans exploited these technologies more successfully than the
Chinese.
Exploiting technologies is the focus of this report. To begin with, we examine three
types of “technology” companies from a historical perspective:
¤ Those that adapted existing technologies — as The Ford Motor Company did in
the early 1900s when it combined three manufacturing technologies that had
evolved over the course of 100 years, namely interchangeable parts, continuous-flow production, and assembly-line production.
¤ Those that identified the optimal application of a technology — as Tokyo
Telecommunications Engineering Corp., later renamed Sony, did with the tape
recorder in the 1950s.
¤ Those that innovated by exploiting new and/or existing technologies — as
Genentech did when it used the laboratory science of DNA technology to
produce human insulin.
Figure 2 provides a summary of this historical analysis. We then employ this
template to undertake case studies of 12 techno-savvy companies that seem well
positioned to utilize key technologies today.
Figure 2. Technology Adapters, Exploiters, and Innovators

Adapters
Technology              Inventor/Discoverer         Originally Used for      Adapter              Product
Cotton Gin              Ancient Indians             Homespun cotton          Eli Whitney          Gin for Short Staple Cotton
Electricity             Michael Faraday and others  Lighthouse illumination  Thomas Edison        Electric Lighting System
Interchangeable Parts   Eli Whitney                 Rifle production         Henry Ford           Mass-Produced Cars
Wireless Broadcasting   Guglielmo Marconi           Ship to shore messages   David Sarnoff/RCA    Radio

Exploiters
Technology              Inventor/Discoverer         Originally Used for      Exploiter                  Product
Phonograph              Thomas Edison               Dictation                A.G. Bell, Emile Berliner  Music Reproduction
Magnetic Tape           AEG Co.                     Military applications    Totsuko (Sony)             Sony Tape Recorder
Transistor              Bell Labs                   Long distance calling    Totsuko (Sony)             Transistor Radio
Television System       Philo T. Farnsworth         Laboratory               David Sarnoff/RCA          Broadcast Television

Innovators
Technology              Inventor/Discoverer         Originally Used for      Innovator            Product
Electronic Calculator   Mauchly & Eckert            Gun firing tables        IBM                  Business Computer
Transistor              Bell Labs                   Electronic circuits      Texas Instruments    Integrated Circuit
BASIC                   Dartmouth Professors        College computing        Microsoft            PC Software
DNA                     Watson & Crick              Cancer research          Genentech            Human Insulin

Source: Smith Barney
Identifying Key Technologies
In order to identify technologies that could be utilized by techno-savvy companies,
Smith Barney retained Gartner Consulting, one of the leading providers of research
and analysis on the global information technology (IT) industry. As a first step,
Gartner Consulting highlighted 48 candidate technologies. A separate survey of
Smith Barney industry analysts identified many of the same technologies, as well as
five other non-IT candidates (dual clutch transmission, Health Spending Account
[HSA] “smart cards,” oil and gas drilling and completion technology, “phood,” and
real-time patient monitoring).
After careful analysis, we whittled down this list of over 50 candidates to nine
technologies that, in our opinion, have significant profit potential over the next three
to five years:
¤ Dual Clutch Transmission. While Gartner Consulting identified telematics as a
key technology, Smith Barney auto analyst Jon Rogers believes the benefits of
telematics will likely accrue to a wide range of companies in this industry, so it
will be difficult for any one company to gain a competitive edge. Instead, Jon
makes a compelling case for BorgWarner’s dual clutch transmission.
¤ E-Money. Gartner Consulting highlighted the potential of micropayments, but
Smith Barney analyst Tony Wible recommended extending the analysis to
include other forms of e-money.
¤ Fuel Cells. While Gartner Consulting highlighted the potential of micro fuel
cells, Smith Barney analyst David B. Smith strongly advocated broadening the
analysis to cover the entire range of fuel cells.
¤ Health Spending Account (HSA) “Smart Cards.” These combine the existing
technology of “smart cards” with the new concept of HSAs.
¤ Oil and Gas Drilling and Completion Technology. Gartner Consulting has
limited mindshare when it comes to non-IT technologies that are specific to the
exploration and production (E&P) sector, which is increasingly dependent on
technology. Accordingly, Smith Barney analyst Gil Yang highlights the
leading-edge oil and gas drilling and completion technology of EOG Resources.
¤ Phood. Biotechnology is, of course, a very diverse sector, and while Gartner
Consulting highlighted some key technologies in that industry (e.g.,
bioinformatics) Smith Barney analyst Elise Wang believes strongly that
“phood,” which refers to food that offers pharmaceutical benefits, has significant
profit potential over the next three to five years.
¤ Radio Frequency Identification (RFID). This will be particularly beneficial in
helping to track items through the retail sector supply chain.
¤ Real-Time Patient Monitoring. The latest developments in networking
technology are facilitating real-time patient monitoring devices.
¤ Voice Over Internet Protocol (VoIP). While this technology has implications
for many communications markets, our focus is on VoIP for enterprises.
Importantly, of the more than 50 candidate technologies, many were rejected not
because of doubts about their feasibility but, rather, because they did not have
meaningful profit potential for identifiable publicly traded companies in the next
three to five years. Some examples of technologies that, in the opinion of Smith
Barney analysts, did not satisfy this criterion are the following (see Appendix A for
explanations of these technologies):
¤ Bioinformatics. Smith Barney biotechnology analyst Elise Wang believes that,
with many pharmaceutical and biotechnology companies utilizing this
technology, it is difficult to identify one or two clear beneficiaries.
¤ Biometrics. Smith Barney aerospace and defense analyst George Shapiro points
out that many of the large companies in his sector have a small presence in
biometrics.
¤ Blu-Ray. Smith Barney entertainment analyst Elizabeth Osur observes that there
is still considerable speculation in the industry as to whether the next standard
for digital video will be Blu-Ray or HD-DVD.
¤ Computer-Brain Interface. Neuromarketing studies can use neuroimaging to
gain insight into drivers of consumer behavior. However, in the opinion of
Smith Barney analysts, it is likely that many consumer products companies will
use this technology, and that no one company will have a strong competitive
advantage.
¤ Driver-Load Matching. Smith Barney airfreight and surface transportation
analyst Scott Flower notes that freight brokers will be able to offer this
technology, thereby allowing smaller truckers to remain competitive with larger
companies.
¤ Natural Language Processing. Artificial intelligence will likely be employed
for the analysis of words and phrases entered in natural language by the
customer service departments in numerous industries, ranging from airlines to
banks.
¤ Powerline Broadband. Smith Barney electric utilities analyst Greg Gordon
points out that, because utilities are regulated entities whose returns are
generally capped, they have historically had little incentive to innovate in
ways that enhance profitability. Therefore, they are unlikely to exploit the
full potential of this technology.
¤ Virtual Prototyping. This technology facilitates the highly detailed modeling of
products. Once again, however, it is likely that the technology will be employed
by all the leading players in a given industry (e.g., General Motors and Ford,
Boeing and Airbus, etc.).
Techno-Savvy Companies
After determining our list of nine technologies, we worked closely with the Smith
Barney industry analysts to identify companies that seem well positioned to utilize
them:
➤ Technology Adapters: BorgWarner’s dual clutch transmission is based on
concepts that date back to the 1940s, but only recent advances in electronics
and hydraulic controls have made the technology feasible.
EOG Resources is achieving superior results by applying some well-known
drilling and completion technologies in a more effective way than its
competitors. UnitedHealth Group is combining the existing technology of
“smart cards” with the proven business model of the managed care industry in
order to establish a strong competitive position in the new environment of health
savings accounts (HSAs).
➤ Technology Exploiters: An excellent use of radio frequency identification
(RFID), first introduced during World War II for aircraft identification, is
tracking items through a supply chain; BJ’s Wholesale Club is well positioned
to use RFID to lower its labor and distribution costs. Cisco Systems is
exploiting the adoption of voice over Internet protocol (VoIP) because
enterprises tend to spend three to five times as much on security and network
switching equipment as they do on VoIP itself. Medtronic is exploiting the
latest developments in networking technology to develop real-time patient
monitoring devices.
➤ Technology Innovators: The concept of fuel cells has been around for almost
200 years, but Mechanical Technology just recently introduced a commercial
micro fuel cell, while Plug Power is the first company to bring to market a
reliable, economically viable fuel cell for on-site power generation. With regard
to emerging e-money, eBay’s Internet-based PayPal payment system is
facilitating global peer-to-peer e-commerce, while Euronet Worldwide’s eTop
Up service is facilitating local day-to-day e-payments. As for “phood,”
Senomyx has a unique approach to developing flavor enhancers by applying the
same innovative technologies used by biotechnology and pharmaceutical
companies; Martek Biosciences has a library of over 3,500 species of
microalgae from which it derives a food additive that may have cardiovascular
benefits, and that may also decrease the risk of diseases such as dementia and
Alzheimer’s in older adults.
As part of the case study of these 12 companies, Smith Barney strategists and the
relevant fundamental industry analysts interviewed key personnel at the companies
in order to gain a deep understanding of how they are exploiting the technologies.
We note that the Smith Barney analysts sought out the managers responsible for the
development and implementation of the technologies rather than Investor Relations
contacts.
Tech Adapters:
A Historical Perspective
Companies That Adapted Existing Technologies
As we discuss in detail below, Henry Ford once acknowledged, “I invented nothing
new.” He was not alone:
¤ The first cotton gin, used to produce cotton cloth in India centuries before the
Christian era, was a variant of a sugar cane press. This gin was used wherever
long staple cotton was grown and processed. Eli Whitney became famous for
inventing a gin to clean short staple cotton, which could be grown in most of the
southern United States.
¤ Thomas Edison’s electrical lighting system, which used existing technologies,
including electricity (discovered in the 18th century and subsequently used to
power lighthouses) and the incandescent light bulb, appears today to be obvious.
Mr. Edison’s system was certainly not obvious to his contemporaries, many of
whom thought that, at best, he was taking the wrong approach and that, at worst,
he was either a fool or a fraud.
¤ The concept of interchangeable parts has been around since the 1800s, when Eli
Whitney fulfilled a huge government order for 10,000 muskets primarily by
making all of the parts of his rifles so nearly identical that they could be
interchangeable from one gun to another. Henry Ford employed the concept of
interchangeable parts as a key element in the mass production of automobiles.
¤ The first commercial use of radio was for the sending of coded wireless
messages between ships at sea and from ships to shore. David Sarnoff of RCA
imagined the possibilities of a mass market for wireless telephonic broadcasting
— what we now call radio.
Eli Whitney won fame, though not fortune, as the inventor of the cotton gin.
Whitney failed to profit from his invention because imitations of the machine
appeared quickly, and his 1794 patent was not upheld for many years. By contrast,
Thomas Edison, Henry Ford, and David Sarnoff established dominant franchises,
and profited handsomely, from their adaptation of existing technologies. As we
discuss in detail below, a key commonality that these men and their companies
shared was that they each took technologies that, at the time, had only a relatively
narrow application and used them to create mass-market products.
Thomas Edison: “Study and Read Everything You Can on
the Subject”
Scientists, who had been experimenting with electricity since the middle of the 18th
century, knew that under certain conditions electricity could produce light. In the
1830s, an Englishman named Michael Faraday conducted research that enabled him
to build an electric generator, which created a continuous flow of current strong
enough to be used for lighting. By the 1860s lighthouses were using steam engines
to generate electricity that powered arc lamps.
The light in arc lamps was produced when two carbon rods connected in an
electrical circuit were brought in close proximity of one another, resulting in a kind
of continuous spark in the gap. Although this produced an intense white light of
1,000 candlepower, it was a dangerous process, which meant that arc lamps had to
be placed away from anything that might be ignited by sparks. This created a
challenge for scientists: the need to design an incandescent lamp in which light
would be derived from a glowing, yet highly flame-resistant filament.
Another challenge in the creation of a workable electric lighting system was that the
generating plant that produced the electricity for arc lights was located on the
premises and was owned and operated by the consumer of the light. Moreover,
individual arc lights were series-connected, which meant that they all had to be
switched on or off simultaneously.
By borrowing heavily from a variety of sources, Thomas Edison solved these and
other problems. Indeed, Edison himself gave the following advice on innovation:
1st Study (the) present construction. (2nd) Ask for all past experience…study
and read everything you can on the subject.1
The incandescent light bulb had been around for decades before Edison made his
fame by “inventing” it. Indeed, Edison’s original patent application for the electric
light was rejected, it was reported in the January 18, 1879, issue of Scientific
American, because “Edison’s invention was an infringement upon that of John W.
Starr of Cincinnati, who filed a caveat for a divisible light in 1845.”2
Figure 3. Edison Electric Light
“Do not attempt to light with match”
Source: National Science Foundation
1 Andre Millard, Edison and The Business of Innovation (Baltimore: The Johns Hopkins University Press, 1990).
2 Ibid.
However, Edison’s eventual achievement was inventing not just an incandescent
electric light, but also an electric lighting system that contained all the elements
necessary to make the incandescent light practical, safe, and economical.
As for the model for his electrical illumination system, Edison chose the
contemporary gas industry, which derived 90% of its revenues from the lighting of
interior spaces. Illuminating gas was generated at a central gas works and then
piped beneath the city streets to homes and other buildings, where tubing carried the
gas to individually controlled lighting fixtures. The hub of Edison’s electrical
system was a central station, remote from the consumers of light, which provided
power to homes and offices over wires. Just as gas meters were installed at each
residence, so too were electric meters, which were integral to Edison’s system.
While the design of this system appears obvious today, Edison’s system was
certainly not obvious to his contemporaries, many of whom thought that, at best, he
was taking the wrong approach and that, at worst, he was a fool or a fraud. After a
lengthy set of consultations with Britain’s leading scientists and physicists, a British
parliamentary committee of inquiry concluded in 1878, several months after Edison
publicly announced his intentions, that the commercial production of incandescent
lighting was utterly impossible and that Edison demonstrated “the most airy
ignorance of the fundamental principles both of electricity and dynamics.”3
Henry Ford: “I Invented Nothing New”
At the turn of the 20th century, dozens of firms were devoted to building cars in the
United States. The approach they took was to hire skilled mechanics and carpenters
(much of the car body in those years was made out of wood, just as carriages had
been), who would, as a team, assemble one vehicle at a time. The cars that resulted
from this form of manufacture were high-priced and fairly heavy touring cars, and
they were largely the playthings of the rich.
In sharp contrast to this largely built-to-order model, in 1906 Henry Ford considered
the possibilities of a mass market for automobiles:
[The] greatest need today is a light, low-priced car with an up-to-date engine of
ample horsepower, and built of the very best material….It must be powerful
enough for American roads and capable of carrying its passengers anywhere that
a horse-drawn vehicle will go without the driver being afraid of ruining his car.4
While Ford is generally credited with developing the first high-quality
mass-produced car intended for the mass market, what Ford pioneered was not the
car itself, but a new way to make cars. Indeed, during a patent dispute over the true
inventor of the automobile, he once testified:
I invented nothing new. I simply assembled into a car the discoveries of other
men behind whom were centuries of work….Had I worked 50 or ten or even
five years before, I would have failed. So it is with every new thing. Progress
3 Robert Conot, Thomas A. Edison: A Streak of Luck (New York: Da Capo Press, 1979).
4 David A. Hounshell, From the American System to Mass Production, 1800–1932 (Baltimore: Johns Hopkins University Press, 1984).
happens when all the factors that make for it are ready, and then it is inevitable.
To teach that a comparatively few men are responsible for the greatest forward
steps of mankind is the worst sort of nonsense.5
Ford’s new method of production resulted in substantial cost savings that were
partially passed on to customers in the form of lower prices, which, in turn,
stimulated further demand. Having come on the market at about $850 in 1908 (see
Figure 4), the price of a Model T dropped to $360 by 1916 (despite inflation caused
by the outbreak of World War I in Europe) and to $290 by 1927 (also an inflationary
period). Not surprisingly, demand soared. In 1908, the Ford Motor Company sold
close to 6,000 Model Ts, and by 1916 sales were in excess of 500,000.
As we noted already, the idea of interchangeable parts had been in the public domain
since the 1800s when Eli Whitney fulfilled a huge government order for 10,000
muskets, primarily by making all the parts of his rifles so nearly identical that the
parts could be interchangeable. Ford first learned about interchangeable parts when
he started to ramp up for production of the Model T’s predecessor, the Model N. It
was then he met Walter Flanders, a machine tool salesman, who had worked
previously at the Singer Manufacturing Company, the maker of Singer sewing
machines and an early pioneer of interchangeability. Ford hired Flanders as overall
production manager at the Ford Motor Company.
Even so, the approach taken to the production of the Model N was the same as that
employed by all the other car manufacturers at the time. Supplies of parts and
equipment would arrive at the Piquette Avenue factory and the cars would be
constructed individually by a team of assemblers, who pieced together the parts
scattered around them.
The production process of the Model T in the Highland Park factory was to be
radically different. As noted already, Walter Flanders introduced Ford to the
concept of interchangeable parts. Another employee, Oscar Bornholdt, who had the
job of tooling up the Highland Park factory for the Model T, had seen
continuous-flow production techniques employed in other sectors, including the canning
industry. Using these techniques, firms such as Campbell Soup canned products for
shipment across the United States.
Ford adopted the concept too, so that the production process was reorganized around
the flow of work (for example, a machine tool was dedicated to just one operation),
and there was a continuous flow of materials from inputs to finished automobiles.
Indeed Bornholdt admitted that “at the Ford Plant, the machines are arranged
[sequentially] very much like the tin-can machines.”6
5 John Steele Gordon, The Business of America (New York: Walker Publishing Company, Inc., 2001).
6 David A. Hounshell, From the American System to Mass Production, 1800–1932 (Baltimore: Johns Hopkins University Press, 1984).
Figure 4. The Model T
“Ford: High-Priced Quality in a Low-Priced Car”
Source: Courtesy of Ford Motor Company
Whereas Ford and his colleagues borrowed the concepts of interchangeable parts
and continuous-flow production from other industries, Ford came up with the idea of
the assembly line by himself. He did, however, give credit for the original idea to
the “disassembly” lines of the Chicago meatpackers. The 1906 publication of The
Jungle by Upton Sinclair revealed the gory details of work in slaughterhouses, as
whole pigs and cows came in one end and cuts of meat went out the other. The
workers stayed in place while the carcasses moved past them on a chain. William
Klann, head of the engine department at Ford, recalled touring Swift’s Chicago plant
thinking, “If they can kill pigs and cows that way, we can build cars that way.”7
The final piece of the puzzle in making these technologies work together seamlessly
was a relatively new invention, the electric motor. In the 19th century, factories had
been built around the steam engine, which powered machinery throughout the
factory by way of a complex system of shafts and belts. Ford was the first
automobile manufacturer to grasp the potential of the electric motor and to use it to
its fullest. (Indeed, in 1919, 50% of all automobiles were manufactured using
electric motors, all of them by the Ford Motor Company.) Ford’s engineers could
move machines without having to redesign the shafts that powered them so that the
factory could be built around the flow of work rather than around a central power
plant. The era of mass production had begun.
7 David A. Hounshell, From the American System to Mass Production, 1800–1932 (Baltimore: Johns Hopkins University Press, 1984).
David Sarnoff: “A Plan of Development”
Prior to the U.S. entry into World War I, all privately owned wireless transmitting
stations on the coasts — many of which belonged to American Marconi — were
commandeered for government use. Given that American Marconi was a subsidiary
of a British firm, the U.S. government did not want American wireless capacity
owned and, therefore, controlled by a foreign entity during wartime.
After the war ended, something had to be done with the Marconi facilities that had
been commandeered. To that end, the government decided that a new company
would be chartered, which could only have U.S. citizens as its directors and officers.
In addition, only 20% of its stock could belong to foreigners, and a representative of
the U.S. government would sit on the board of directors. American Marconi would
transfer all of its assets to this new company, individual investors in American
Marconi would receive shares of the new company’s stock, and General Electric
would purchase the shares owned by British Marconi.
It was in this way that, in February 1919, the Radio Corporation of America (RCA)
was created. To further solidify its position, within months of its formation, RCA
entered into cross-licensing agreements (in exchange for shares of its stock) with
almost all the large companies that held key patents for wireless telegraphy and
telephony (e.g., AT&T, General Electric, and Western Electric). In effect, this move
created a trust that monopolized the radio manufacturing industry.
In 1917, David Sarnoff, a Russian immigrant, had worked his way up through the
ranks to become the commercial manager of American Marconi, and was
responsible for the maintenance and expansion of its services to businesses. Two
years later he was the commercial manager of RCA, and two years after that, in
1921, he was promoted to general manager. Like many others, Sarnoff had
imagined the possibilities of a mass market for wireless telephonic broadcasting —
what we now call radio.
In the fall of 1916, Sarnoff had drafted a memorandum for the president of
American Marconi in which he envisioned “a plan of development which would
make radio a ‘household utility’ in the same sense as the piano or phonograph.”8
Sarnoff argued that recent improvements in radio equipment could make such a
scheme entirely feasible:
[A] radio telephone transmitter having a range of, say, 25 to 50 miles can be
installed at a fixed point where the instrumental or vocal music or both are
produced….The receiver can be designed in the form of a simple ‘Radio Music
Box’ and arranged for several different wave lengths, which should be
changeable with the throwing of a single switch or pressing of a single button.9
8 John Tebbel, David Sarnoff: Putting Electrons to Work (Chicago: Encyclopedia Britannica Press, 1963).
9 Ibid.
Figure 5. RCA Radiola
“Great music — at home, as you never could get it before.”
Source: Courtesy of Thomson
Sarnoff anticipated that most of the profits from developing radio as a “household
utility” would come from manufacturing and selling the “Radio Music Box.” But,
he speculated, if the broadcasts carried something more than just music (see Figure
5), the potential market would be even greater:
Events of national importance can be simultaneously announced and received.
Baseball scores can be transmitted in the air by the use of one set installed at the
Polo Grounds [where the New York Giants baseball team played]. The same
would be true in other cities. This proposition would be especially interesting to
farmers and others living in outlying districts removed from cities. By the
purchase of a “radio Music Box” they could enjoy concerts, lectures, music,
recitals, etc.10
Significantly, Sarnoff understood the importance of patents, and RCA managed to
gain control of all the important patents covering radio technology by either buying
the patents themselves or the companies holding them. This included the patent for
the all-important vacuum tube, the predecessor of the transistor. Moreover,
whenever a patent was infringed upon, Sarnoff chose not to swallow the losses but
to go after the offenders instead — lawsuits were a common occurrence in RCA’s
business dealings. By aggressively defending its franchise, RCA firmly established
itself as a broadcasting giant by the time that Sarnoff turned his attention to the new
technology of television, which we discuss in detail below.
10 John Tebbel, David Sarnoff: Putting Electrons to Work (Chicago: Encyclopedia Britannica Press, 1963).
Tech Exploiters:
A Historical Perspective
Companies That Identified the Optimal Application of a
Technology
In reviewing the history of new technologies, one can make two broad
generalizations:
¤ The person, or company, that becomes most associated with a technology is not
always the inventor of the technology.
¤ The first use of a new technology is not always the one for which the invention
eventually becomes best known.
In fact, these two observations are interrelated because the best commercial use of a
new technology is not always obvious, even when the technology is clearly
revolutionary. That was the problem Edison faced with the phonograph in 1877.
Columbia Graphophone: Exploiting the Phonograph
Edison’s talking machine, with its tinfoil recording surface and hand crank, was just
about able to reproduce two minutes of speech shouted into the mouthpiece. But
what could such a machine be used for? In a pessimistic moment, Edison told his
assistant, Samuel Insull, that the phonograph did not have “any commercial value.”11
In June 1878, Edison suggested possible future uses for the phonograph in an article
for North American Review (see Figure 6). Edison ranked music reproduction fourth
because he felt that it was not a primary use of his invention.
Figure 6. Possible Future Uses for the Phonograph
1. Letter writing and all kinds of dictation without the aid of a stenographer.
2. Phonographic books, which will speak to blind people without effort on their part.
3. The teaching of elocution.
4. Reproduction of music.
5. The “Family Record” — a registry of sayings, reminiscences, etc., by members of a family in
their own voices, and of the last words of dying persons.
6. Music-boxes and toys.
7. Clocks that should announce in articulate speech the time for going home, going to meals, etc.
8. The preservation of languages by exact reproduction of the manner of pronouncing.
9. Educational purposes; such as preserving the explanations made by a teacher, so that the pupil
can refer to them at any moment, and spelling or other lessons placed upon the phonograph for
convenience in committing to memory.
10. Connection with the telephone, so as to make that instrument an auxiliary in the transmission
of permanent and invaluable records, instead of being the recipient of momentary and fleeting
communication.
Source: North American Review, June 1878
11 Robert Conot, Thomas A. Edison: A Streak of Luck (New York: Da Capo Press, 1979).
While Edison puzzled over the best use of his invention, Alexander Graham Bell,
Emile Berliner, and others were working on improving the phonograph. Wax
recording cylinders were introduced, as well as a better holder for the recording
stylus and a constant-speed electric motor. A reliable talking machine was finally
made available to the general public in the 1890s (see Figure 7).
Figure 7. The Gramophone
“Reproduces Songs, Speeches, Instrumental Music”
Source: Scientific American, 1896
Edison continued to believe the primary use of his talking machine would be in
offices. But more farsighted individuals at other companies saw the entertainment
possibilities of the invention and presented it to the public as an instrument for the
reproduction of music. (One of these companies, Columbia Graphophone, which
developed a competing device called the “graphophone,” became the basis of
Columbia Records). Devices like these ultimately led to the establishment of the
lucrative phonographic record business that supplied consumers around the world
with recorded music and reproducing equipment.
Sony: Exploiting the Tape Recorder
About 50 years later, as Yogi Berra would say, it was déjà vu all over again. A new
recording device (the tape recorder) had been invented, although its commercial
value was unclear. A list of possible uses was published. Eventually the new device
became a major mass-market consumer product.
The tape recorder was developed in Germany before World War II. In 1936, AEG
Co. invented a recording machine that used plastic tape coated with magnetic
material. By 1950, Tokyo Telecommunications Engineering Corp. (Totsuko), later
renamed Sony, was ready to market its own version of the machine. This was a
heavy, bulky, and expensive model labeled the G-type tape recorder, which was
designed for institutional use and had a recording time of one hour. Totsuko
registered the G-type under the trademark name of “Tapecorder,” while the tape
itself was named “SONI-TAPE.”
The March 15, 1950, edition of the Mainichi Graph magazine carried an article with
a photograph of Totsuko’s “Tapecorder.” According to the article:
This is a tape recording machine soon to be mass-produced in Japan. It may
well be called “Talking Paper”…. According to the manufacturer, it will prove
very handy wherever it is used, and “talking magazines” and “talking
newspapers” will become a reality in the future.
Clearly, the best commercial use for the new machine had yet to be determined. In
addition to “talking magazines” and “talking newspapers,” another possibility
suggested by a Totsuko employee was that “tape recorders should be actively
employed by music schools. Musicians must train themselves with a tape recorder
just as ballerinas study dancing by looking at a mirror.”
The first substantial order for the G-type tape recorder came from the Japanese
Supreme Court. In the postwar period, the Court was unable to train a sufficient
number of stenographers, and Totsuko convinced Supreme Court officials that the
tape recorder could take their place.
When sales of the product remained sluggish, Totsuko management came to the
conclusion that, despite the technical merits of the product, customers would not buy
tape recorders unless they knew how best to use them. So Totsuko started to study
how tape recorders could best be used and came across an American pamphlet
entitled “999 Uses of the Tape Recorder,” which conveniently listed possible uses in
alphabetical order.
At the same time, Totsuko stepped up engineering efforts to improve the machine
itself. It was obvious that the G-type tape recorder was too heavy, bulky, and
expensive for the consumer market. Consequently, Totsuko developed its H-type
tape recorder, which was introduced in March 1951. It weighed 28 pounds, less than
one-third of the original G-type tape recorder, came in a case with steel handles
attached to its sides (to enhance portability), and boasted a chic design.
The introduction of the H-type tape recorder generated an increase in orders, but
from schools rather than consumers. Schools were attracted to the educational
possibilities of the machine, given that audiovisual aids had just begun to
be accepted in Japan (their use having been incorporated into Occupation policy). The idea was
to present 16 mm educational films as a visual aid while the tape recorder generated
the audio component.
It was not until the 1960s, and after much advertising (see Figure 8), that the tape
recorder’s abilities to record and play music were fully appreciated by consumers, at
which point sales began to soar.
Figure 8. Sony Model 250 Solid State Stereo Tape Recorder
“An exciting new dimension to home entertainment”
Source: Courtesy of Sony Electronics Inc.
Sony: Exploiting the Transistor
As noted, it took many years after its development for the tape recorder to become
established as a consumer device. In that regard, in March 1952, Masaru Ibuka, one
of the founders of Totsuko, left Japan on a three-month tour of the United States.
The purpose of the trip was to explore how Totsuko could expand the market for
tape recorders by seeing how American consumers used the devices. Ibuka also
wanted to observe how U.S. companies manufactured tape recorders.
While in the U.S., Ibuka heard from a friend about the newly invented transistor. He
learned that Western Electric, the parent company of Bell Laboratories, held the
patent rights for manufacturing the transistor and would make them available to
anyone who would pay royalties. It was thanks to this fortuitous encounter that the
new invention came to be used in the creation of another mass-market consumer
product — the transistor radio.
So what was this new invention? We noted above that RCA managed to gain
control of all the important patents covering radio technology, including the patent
for the vacuum tube, the predecessor of the transistor. Vacuum tubes were, in effect,
specially modified light bulbs that amplified signals. It was thanks to vacuum tubes
that AT&T could offer transcontinental phone service, because signals on telephone
lines could be amplified regularly along the line as they were transferred from one
switch box to another. But the vacuum tubes that made amplification possible were
extremely unreliable, used a lot of power, and produced too much heat.
At the end of World War II, the director of research at Bell Labs thought a solution
to these problems might lie in new types of materials called semiconductors, which
scientists had been experimenting with since before the war. In simple terms,
semiconductors are substances, such as germanium or silicon, whose electrical
conductivity is intermediate between that of a metal and an insulator.
In December 1947, a team at Bell Labs created the first point-contact transistor. It
was about half an inch tall and consisted of a strip of gold foil folded over the point
of a plastic triangle, with the assembly then held over a crystal of germanium. The
device was the world’s first semiconductor amplifier: When a small current went
through one of the gold contacts, a larger current came out the other.
Masaru Ibuka knew very little about the transistor but decided that it might be the
invention his company needed to expand its consumer product lines and keep its
technical staff busy. As the tape recorder business started to gain traction, Ibuka
began thinking about a new project that would best utilize the diverse strengths of
his engineering and specialist talent.
Once again, however, the best use of this new technology was unclear. The
engineers at Western Electric suggested that Totsuko use the transistor to make
hearing aids. Totsuko’s management chose to ignore that advice and focus instead
on making radios, and, specifically, small radios that were reliable, portable, and
battery-powered. Note that about the time their miniature radio was ready to be
marketed in 1955 (see Figure 9), Totsuko decided to label all its products with the
Sony brand name.
Figure 9. Sony TR 610 Transistor radio
“World’s smallest, most powerful pocket-portable 6-transistor radio”
Source: Courtesy of Sony Electronics Inc.
The Sony radio was, however, not to be the first small transistor radio — the world’s
first transistor radio had been launched on the U.S. market in December 1954, just in
time for the Christmas season, by an American company called Regency.
Nevertheless, as Totsuko focused on developing the Sony brand, the gamble on the
transistor radio paid off. Amazingly, the U.S. electronic consumer products industry
was reluctant to manufacture and market transistor devices that would compete with
its vacuum tube models, and it only slowly adopted the new technology.
RCA: Exploiting Television
Always looking toward the future, David Sarnoff of RCA saw the possibilities
offered by the new technology called television. He made up his mind to do in
television what he had already accomplished in radio and, to that end, he began
studying the new technology very carefully.
Sarnoff discovered that John Logie Baird, in England, and Charles Francis Jenkins,
in the U.S., were both putting together television systems that they were hoping to
market. Both systems used mechanical equipment in the transmitters and, although
the results still left much to be desired, the idea was exciting, and the demonstrations
amazed viewers. But Sarnoff wondered if there was a way to dispense with the
mechanical equipment and do the whole job electronically.
In 1921, Philo T. Farnsworth (then age 15) had first come up with the idea for a
totally electronic television system. After developing the idea to some extent, he
filed his first patent for an all-electronic system in 1927. His basic system could
transmit the image of a line, which progressed to the image of a triangle, a dollar
sign, and by 1929, photographic images.
1929 was also the year that Sarnoff hired a fellow Russian immigrant, Vladimir
Zworykin. Zworykin’s ideas about the development of a totally electronic television
system were very similar to those of Farnsworth, and in 1923 he had filed a patent in
the U.S. for an electronic scanning system. In April 1930, Zworykin spent three full
days at Farnsworth’s laboratory, which was then in San Francisco. Farnsworth and
his team had heard of Zworykin’s work and had high respect for him as a scientist.
Another likely reason for Farnsworth openly welcoming Zworykin was that
Farnsworth had filed more than a dozen patent applications covering various aspects
of his work, and he probably felt that his intellectual property was well protected.
While it is not known exactly how much technical benefit Zworykin gained from his
visit to Farnsworth, just a few weeks after he returned to his RCA laboratory,
Zworykin applied for a patent on an improved camera tube.
Farnsworth was also granted an important patent in 1930, one that covered his
electronic television system. While Farnsworth was making steady progress,
Sarnoff grew impatient with the RCA laboratory’s progress, and decided to visit
Farnsworth’s laboratory. Although Farnsworth himself was not there (as he was
testifying in court about a lawsuit), Sarnoff must have been impressed by what he
saw because, subsequent to his visit, he made an offer: $100,000 for Farnsworth’s
patents and his services as well. But Farnsworth turned the offer down. He had
been hoping for either an investment or the payment of royalties for the use of his
patents.
After Sarnoff’s visit to his laboratory, Farnsworth worked out a deal with one of
RCA’s few significant rivals, the Philco Corporation in Philadelphia. Philco set the
Farnsworth group up in their own laboratory and provided a solid financial base for
their experiments. At Philco, Farnsworth made further progress, including
establishing an experimental television transmitting station.
But just as things were looking promising, a split occurred between Farnsworth and
Philco. In the words of Farnsworth’s wife, Elma: “An ultimatum was delivered to
Philco [by Sarnoff]; either it dumped the Farnsworth Corporation forthwith, or its
license to use RCA’s radio patents would not be renewed.”12 Farnsworth’s contract
with Philco was not renewed.
Not only did RCA issue the ultimatum to Philco in 1932, it also instituted a patent
suit against Farnsworth. The basic claim was that Farnsworth’s patent on a
“Television System,” filed in 1927 and issued three years later, was actually an
infringement on the one filed in 1923 by Zworykin. Sarnoff and RCA continued to
tie up Farnsworth’s patents for years, filing appeals whenever a ruling went in
Farnsworth’s favor. But in 1939 RCA finally agreed to a settlement of $1 million
paid over a period of ten years for the right to use Farnsworth’s patents. It was the
only time that RCA, in all its dealings, had paid for licensing rights.
For RCA it was the right move. RCA and Sarnoff were to become associated with
the “birth” of television following their display at the 1939 World’s Fair in New
York, a showcase for new technologies (see Figure 10).
Figure 10. RCA Television
“Newest scientific wonder”
Source: Courtesy of Thomson
12 Elma Farnsworth, Distant Vision: Romance and Discovery of an Invisible Frontier (Salt Lake City: Pemberly Kent Publishers, 1989).
Tech Innovators:
A Historical Perspective
Companies That Innovated by Using New and/or Existing
Technologies
Thus far we have discussed two types of companies:
¤ those that adapted existing technologies, and
¤ those that identified the optimal application of a technology.
As we outlined above, neither type of company actually produced anything new. In
contrast, the companies in the third category that we discuss — “tech innovators” —
created something new by utilizing new and/or existing technologies. As Joseph
Schumpeter, the renowned economist, pointed out, most innovations are the result of
the “recombinations” of existing ideas:
To produce other things, or the same things by a different method, means to
combine these materials and forces differently.13
We focus primarily on traditional technology companies that have used both
hardware and software to create new technologies (i.e., new computer applications),
but there are plenty of examples of other companies that have used technology to
innovate, most notably in the biotechnology sector (and we discuss the innovations
of Genentech later in this report).
IBM: Innovations in Hardware
The first electronic digital computer was designed to calculate firing tables for U.S.
guns in World War II. To use them properly, gunners in battleships, tanks, and
airplanes had to aim in the right direction and then raise the barrel to the right
trajectory, taking account not only of the location of the target, but also the weight of
the shells, the temperature of the air and the direction of the wind, among other
factors. This meant that every gun had to have a firing table showing all the relevant
variables. These tables were exceedingly difficult to compute, with each table
taking months to calculate by hand.
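The arithmetic behind a single firing-table entry can be made concrete with a toy sketch. The numbers and function names below are purely illustrative, and the physics is deliberately idealized (no air resistance), whereas the wartime tables had to model drag, shell weight, air temperature, and wind, which is precisely why each table took months to compute by hand.

```python
import math

# Toy firing-table entry. Ignoring air resistance, a shell fired at speed v
# (m/s) with barrel elevation theta lands at range = v^2 * sin(2*theta) / g.
# Real firing tables layered corrections for drag, shell weight, temperature,
# and wind on top of this -- the computation ENIAC was built to automate.

G = 9.81  # gravitational acceleration, m/s^2


def ideal_range(muzzle_velocity_mps, elevation_deg):
    """Range of a projectile over flat ground, no drag."""
    theta = math.radians(elevation_deg)
    return muzzle_velocity_mps ** 2 * math.sin(2 * theta) / G


def firing_table(muzzle_velocity_mps, elevations_deg):
    """One column of a toy firing table: elevation (deg) -> range (m)."""
    return {e: round(ideal_range(muzzle_velocity_mps, e)) for e in elevations_deg}


if __name__ == "__main__":
    for elev, rng in firing_table(820, [15, 30, 45]).items():
        print(f"elevation {elev:2d} deg -> range {rng} m")
```

Even in this drastically simplified form, producing a full table means evaluating the formula for every elevation and every correction factor, which conveys why a machine that solved trajectory equations in seconds was so valuable.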
In 1942, John Mauchly, an engineering professor at the University of Pennsylvania,
proposed to the Ordnance Department that he could build an electronic calculator
that would solve the trajectory equations in a matter of seconds. In 1943, the
Ordnance Department awarded a contract to Professor Mauchly and one of his
colleagues, J. Presper Eckert.
By May 1944, the ENIAC (Electronic Numerical Integrator and Computer) could do
simple calculations, but it was not fully operational until late 1945. Containing more
than 17,000 vacuum tubes, weighing 30 tons, and occupying 1,800 sq. ft. of floor
space, the ENIAC, as Professor Mauchly promised, could perform calculations very
quickly.
13 Alfred D. Chandler Jr., Scale and Scope: The Dynamics of Industrial Capitalism (Cambridge, MA: Harvard University Press, 1990).
With World War II over by the time the ENIAC was fully operational, the military
asked if the machine could be reprogrammed to help design the hydrogen bomb.
But following a dispute with the University of Pennsylvania about patent rights,
Eckert and Mauchly resigned their academic positions and formed a company of
their own. They then worked on developing an improved version of the ENIAC
called the UNIVAC (UNIVersal Automatic Computer). Eckert-Mauchly ran out of
money shortly before it could finish development of the UNIVAC, and the company
had to sell out to Remington Rand, becoming its UNIVAC division.
Remington Rand salesmen were, however, much more interested in selling
typewriters and adding machines than the exotic and expensive “electronic brain.”
Moreover, demand for the machine seemed limited to 1) scientific organizations
(including the military) that had a few gigantic equations to solve and 2)
bureaucratic customers (such as the Bureau of the Census), who had to perform
simple operations on a large number of cases. As for other possible uses, Peter
Drucker, the business writer, has pointed out that, “UNIVAC, which had the most
advanced computer and the one most suitable for business uses, did not really want
to ‘demean’ its scientific miracle by supplying business.”14 Therefore, the UNIVAC
division was neglected by Remington Rand’s top executives, fell far behind its key
competitor, International Business Machines (IBM), and did not become profitable
for many years.
IBM experimented in the field of computing in the 1940s, cooperating with Harvard
Professor Howard Aiken to build the Harvard Mark I, an electromechanical
computer. Much like Remington Rand, Professor Aiken did not believe a
commercial market for electronic computers would ever develop, because he could
not imagine that “the basic logics of a machine designed for the numerical solution
of differential equations [could] coincide with the logics of a machine intended to
make bills for a department store.”15
As Peter Drucker pointed out, Professor Aiken was soon proven spectacularly wrong
about the business demand for computers:
…businesses began to buy this “scientific marvel” for the most mundane of
purposes, such as payroll…IBM, though equally surprised [as Remington Rand]
by the business demand for computers, responded immediately.16
Although IBM’s first electronic computer (the 701) was crude and inflexible
compared to the UNIVAC, IBM did have one critical advantage over its key
competitor: unlike Remington Rand, it was firmly focused on the market for large
office equipment. Not only were IBM salesmen generally superior to Remington’s,
but they were particularly expert at understanding the needs of data processing
managers in large corporations (see Figure 11).
14 Peter F. Drucker, Innovation and Entrepreneurship (New York: HarperBusiness, 1993).
15 Howard H. Aiken, The Future of Automatic Computing Machinery (Germany, 1956).
16 Ibid.
Figure 11. IBM 604 Electronic Calculator
Meeting “the most exacting accounting and calculating requirements of business, industry and engineering.”
Source: Courtesy of IBM
A key reason for the success of the IBM salesmen was that, because computers, like
the electromechanical equipment of prewar days, were rented rather than sold,
salesmen enjoyed close long-term relationships with their customers. IBM had long
excelled at meeting the needs of its clients and so, in the 1950s, it spent huge sums
developing software for customers, offering seminars that explained the diverse uses
of computers, and doing whatever was necessary to make sure that the users found
the machines valuable. Furthermore, new applications for the “robot brains” (e.g.,
tracking inventory, doing payrolls, regulating factory operations, and handling
billing) kept emerging. It was a long way from calculating trajectories for guns.
Just as IBM came from behind to eventually shape and dominate the market for “big
iron,” much the same thing happened with the advent of the personal computer.
When IBM began work on the personal computer, it was in a rush to enter a market
that it had previously dismissed as irrelevant to its core business. But needing to
compete with the success of machines like the Apple II and the Commodore PET, IBM assembled a
system that used an off-the-shelf microprocessor, the 8088 from Intel, and
outsourced the development of its operating system to a small software firm in
Seattle called Microsoft.
The “PC,” as IBM called it, made no great technological leaps over what Apple
already offered. Indeed, when Apple saw the PC, it welcomed the competition,
because it thought that IBM’s presence gave the personal computer market
legitimacy among business customers. (Apple even took out a full-page ad in the
Wall Street Journal saying “Welcome IBM. Seriously.”)
But IBM’s entry into the market did much more than instill legitimacy: It created a
true “personal computer.” While Apple had built its machine around a closed,
proprietary system (and one that appealed primarily to “techies” and hobbyists),
IBM created an open system that attracted a wide number of parts suppliers,
software programmers, and computer manufacturers. It was the combined efforts of
these companies that made the PC both necessary and affordable, first in the office
and later in the home.
Texas Instruments: Innovations in Chips
We discussed above how Sony exploited the transistor to develop the transistor
radio. Jack Kilby of Texas Instruments was one of the first to use transistors
innovatively to develop the integrated circuit, the forerunner of modern computer
chips.
In 1948, the transistor was nothing more than a smaller and more reliable —
although more expensive — replacement for the vacuum tubes that engineers used
to design electronic circuits. These circuits required many other components too,
which limited how small a circuit could be made — although transistors got smaller,
the wires needed to connect them to the other components did not.
problem would be to make an entire circuit — the transistors and all the other
components — on a single semiconductor chip (i.e., an integrated circuit). The key
to doing this, as Jack Kilby came to realize, was to make all the parts of the circuit
out of silicon.
In 1959, Jack Kilby and Robert Noyce (who would later co-found Intel) filed patents
for slightly different forms of the integrated circuit (IC). The IC was not only
smaller than the transistor, it was also much more powerful because many different
kinds of circuits performing several different tasks could be built onto the same chip
(see Figure 12). The manufacturing process facilitated these qualities of size and
power: lines of chemicals were etched into sheets of semiconductor materials, and
these sheets could then be laminated onto each other, forming even more complex
circuits.
Figure 12. Texas Instruments’ Integrated Circuit
Source: Wall Street Journal, March 25, 1959
Subsequent innovations on the original IC design included a memory chip (which
contained special kinds of circuits that could store electronic information), a logic
chip (which contained circuits that could manipulate that stored information in
certain defined ways), and a microprocessor chip (which was programmable so that
it could manipulate information in a variety of ways).
Microsoft: Innovations in Software
The first personal computer, the Altair, appeared on the cover of the January 1975
edition of Popular Electronics. The Altair was a capable, inexpensive computer
designed around the Intel 8080 microprocessor. An accompanying article in the
magazine described how readers could obtain the minicomputer from MITS (Micro
Instrumentation and Telemetry Systems) for less than $400. The first customers to
purchase the Altair were hobbyists, and among the first things they did with these
machines was play games.
Following the introduction of the Altair, two barriers blocked the spread of personal
computing: the lack of a practical mass storage device (the Altair lost its data when
the power was shut off) and the lack of a way to write applications software. With
regard to the second barrier, according to a biography17 of Bill Gates, when his
friend Paul Allen showed the Popular Electronics cover to Gates, then a student at
Harvard, the two immediately decided they would write a programming language for
the Altair. For that they turned to BASIC.
The origins of BASIC go back to the early 1960s, when John Kemeny, chairman of
the mathematics department at Dartmouth, believed that “next to the original
development of general-purpose high speed computers, the most important event
was the coming of man-machine interaction.”18 To that end — and in order to teach
interactive computing to all of Dartmouth’s students, not just those studying
computer science — Professor Kemeny and his colleague Thomas Kurtz decided to
build a simple computer system around a programming language designed for the
needs of students. They called that language BASIC, and it quickly became one of
the most widely used computer programming languages thanks, in large part, to its
ease of use.
In the early 1970s, the Digital Equipment Corporation (DEC) worked to modify
BASIC so that it could be implemented without taking up much memory. This was
important because the DEC minicomputer that would run the modified BASIC had a
fraction of the power of the mainframe at Dartmouth, and only 56K in core memory.
Mr. Gates and Mr. Allen would lower the memory requirement further still. In a
newsletter sent out to Altair customers, they stated that a version of BASIC that
required only 4 kilobytes of memory would be available in June 1975. As it turned
out, Gates and Allen wrote a version of BASIC that not only fit into very little
memory, but also added a lot of programming features, in addition to the
fundamental BASIC advantage of ease of use. As Paul Ceruzzi writes in A History
of Modern Computing, “with its skillful combination of features taken from
Dartmouth and from the Digital Equipment Corporation, (BASIC) was the key to
Gates’ and Allen’s success in establishing a personal computer software industry.”19
17 Stephen Manes and Paul Andrews, Gates: How Microsoft’s Mogul Reinvented an Industry, and Made Himself the Richest Man in America (New York: Touchstone, 1993).
18 John G. Kemeny, Man and the Computer (New York: Scribner, 1972).
As is well known, Mr. Gates and his company went on to successfully innovate
using many other existing technologies:
¤ MS-DOS, Microsoft’s operating system for the IBM PC, was based on 86-DOS,
an operating system written by Seattle Computer Products. Microsoft initially
paid about $15,000 for the right to use Seattle Computer Products’ work, and
later paid a larger sum for the complete rights.
¤ Microsoft Word was based on a word processor developed by Xerox PARC
engineers and labeled “Bravo.” Microsoft hired one of its original authors away
from PARC.
¤ Excel was based on Lotus 1-2-3, which was, in turn, based on VisiCalc by
Software Arts.
¤ In December 1994, Microsoft paid Spyglass for a license to use its work as the
basis for a Web browser, which Microsoft renamed Internet Explorer.
¤ The graphical user environment that is Windows first appeared at PARC in the
Alto computer, and then in the Apple Macintosh, before becoming Microsoft’s
flagship product (see Figure 13).
Figure 13. Microsoft Windows 3.0
“…GUI environment on an MS-DOS PC, and subsequent demise of the ‘C’ prompt, is a reality today.”
Source: Used with permission from Microsoft Corporation
19 Paul E. Ceruzzi, A History of Modern Computing (Cambridge, MA: The MIT Press, 2003).
Genentech: Innovations in Biotechnology
While the birth of modern computing began with John Mauchly’s proposal to the
Ordnance Department in 1942 that he build an electronic calculator, it is arguable
that the origins of modern biotechnology go back to Watson and Crick’s 1953
discovery of the structure of DNA, the chemical that directs the workings of the cell
and serves as
the hereditary blueprint of all life. Prior to this discovery, scientists could not
synthetically replicate the body’s own anti-disease mechanisms to restore health and
prolong life because they did not understand the chemical mechanism by which the
body produces exceedingly complex organic molecules.
Genentech (short for GENetic ENgineering TECHnology) was founded in 1976 to
exploit the commercial possibilities of recombinant DNA technology (or “gene
splicing”). The U.S. had long been the leader in biochemistry, thanks to generous
funding of university research by the National Institutes of Health. It was amid a
government-funded search for a cure for cancer that the mysteries of DNA were
unlocked (since cancer results in uncontrolled cell growth, its treatment requires an
understanding of how DNA regulates cellular reproduction and related functions).
Cells, essential building blocks for all tissues, can be thought of as protein factories.
A protein is essentially a chain of amino acids. What sets one protein apart from
another is the specific order of the amino acids in the chain; DNA is the chemical
that determines the order of those amino acids. DNA, therefore, controls the
functioning of the cell itself.
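The point above, that the DNA sequence fixes the order of amino acids and hence the identity of the protein, can be sketched in a few lines of code. The snippet below is a toy illustration using a handful of entries from the standard genetic code; it is not drawn from the report or from Genentech's actual methods.

```python
# Toy illustration: a DNA sequence, read three letters (one codon) at a time,
# determines the order of amino acids in a protein chain.
# Only a few codons of the standard genetic code are included here.
CODON_TABLE = {
    "ATG": "Met",  # methionine (start)
    "TTT": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "AAA": "Lys",  # lysine
    "TGA": None,   # stop codon
}


def translate(dna):
    """Map a DNA string to an amino acid chain, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3])
        if amino is None:  # stop codon (or a codon missing from this toy table)
            break
        protein.append(amino)
    return protein


# Changing a single codon changes a single amino acid -- the kind of
# systematic protein alteration the report describes.
print(translate("ATGTTTGGCTGA"))  # ['Met', 'Phe', 'Gly']
print(translate("ATGAAAGGCTGA"))  # ['Met', 'Lys', 'Gly']
```

The two example sequences differ in only one codon, and the resulting chains differ in only one amino acid, which is the mechanism that let biologists make deliberate, systematic changes to proteins rather than searching for better compounds at random.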
Once scientists, including those at Genentech, understood these chemical reactions,
they learned how to manipulate them. By slightly altering the DNA blueprint,
biologists could systematically make changes in proteins as they searched for a
better compound, instead of more or less randomly searching for improved
medicines (which has been the approach of the pharmaceutical industry).
So, for example, instead of having to harvest insulin by grinding up animal
pancreases and extracting the hormone in a laborious and expensive process,
Genentech scientists discovered in 1978 that they could snip out the human gene
(i.e., a DNA segment) that contained the blueprint for insulin, put it in a different
and simpler cell, and induce this cell to produce insulin in large
and extremely pure quantities. Human insulin was Genentech’s second
breakthrough product (see Figure 14). The first, which was also the first useful
product made by genetic engineering, was the development in 1977 of bacterial
production of somatostatin, a growth hormone-release inhibiting factor.
Figure 14. Genentech’s Human Insulin
Source: New York Times, December 2, 1981
Lessons from History;
Implications for the Future
Four Key Implications
Our review of how companies have exploited technologies to their advantage yields a
number of lessons. Below we discuss four that have key implications for the future.
No. 1: A Significant Competitive Advantage Can Be Gained by
Adapting Existing Technologies
We noted that Ford adapted three manufacturing technologies that had evolved over
the course of 100 years, and that were employed by other companies in disparate
industries: interchangeable parts (Singer Manufacturing Company), continuous-flow production (Campbell Soup), and assembly line production (Swift).
Importantly, once Ford had established his system of mass production using these
widely available technologies, he was able to significantly lower production costs.
Similarly, Thomas Edison combined the existing technologies of electricity and the
incandescent bulb with the proven business model of the gas industry in order to
establish his electrical illumination system, which provided power to entire cities.
As discussed in detail below, it seems likely that companies such as BorgWarner
and UnitedHealth Group will also be able to use technologies to gain a significant
competitive advantage. So, for example, by being the only supplier on the market
with a dual clutch transmission system, BorgWarner has a head start over its
competitors. In addition, UnitedHealth Group is combining the existing technology
of “smart cards” with the proven business model of the managed care industry in
order to establish a strong competitive position in the new environment of Health
Savings Accounts (HSAs).
No. 2: Identifying the Optimal Application of a Relatively New
Technology Is Often as Important as the Technology Itself
Thomas Edison insisted for years that the primary use of his talking machine should
be for taking dictation in offices. In the 1950s, it was thought that the tape recorder
would lead to “talking magazines.” The engineers at Western Electric suggested
that the transistor be used to satisfy burgeoning demand for hearing aids. As it
turned out, significant mass markets never developed for office dictating machines,
“talking magazines,” or hearing aids. But significant mass markets did develop for
the phonograph, the tape recorder, and the transistor radio.
Reflecting the critical importance of commercial appeal, Baron S.M. Rothschild, a
potential investor in the electric light, once observed testily that:
It would greatly interest me to learn whether really there is something serious
and practical in the new idea of Mr. Edison, whose last inventions, the
microphone, phonograph, etc., however interesting, have finally proved to be
only trifles.20
20 Robert Conot, Thomas A. Edison: A Streak of Luck (New York: Da Capo Press, 1979).
Appreciating the full value of a technology is not easy, and it is this skill that can
often bring as much reward (if not more) as the invention itself. In that regard, a
historian made the following comments about the development of a market for
automobiles:
The automobile was not developed in response to some grave international horse
crisis or horse shortage. National leaders, influential thinkers, and editorial
writers were not calling for the replacement of the horse, nor were ordinary
citizens anxiously hoping that some inventors would soon fill a societal and
personal need for motor transportation. In fact, during the first decade of
existence, 1895–1905, the automobile was a toy, a plaything for those who
could afford to buy one.21
So, in its first decade of existence, the automobile was largely a luxury for the very
wealthy. Ford’s genius was to understand the possibilities of a mass market for
automobiles. Similarly, a biographer of RCA’s David Sarnoff noted that:
Although Sarnoff was not an inventor, he was the most successful innovator of
his era, with the ability to pinpoint the need for an invention and then flog it
through developmental stages to the marketplace.22
An excellent use of radio frequency identification (RFID), first introduced during
World War II for aircraft identification, is tracking items through a supply chain;
BJ’s Wholesale is well positioned to use RFID to lower its labor and distribution
costs. Meanwhile, Cisco Systems is exploiting the adoption of voice over Internet
protocol (VoIP) because enterprises tend to spend 3x–5x as much on security and
network switching equipment as they do on VoIP itself.
No. 3: The Company That Becomes Most Associated with a
Technology Is Not Always Its Inventor
It is said that Picasso once observed that “mediocre artists borrow, great artists
steal.” “Stealing” is a strong word when it comes to exploiting technology. (After
all, the Europeans didn’t “steal” printing, gunpowder, and the magnetic compass
from the Chinese). But it certainly is the case that the company that first develops a
new technology is not always the ultimate beneficiary, given that other companies
may leverage that technology more successfully.
In A History of Modern Computing, Paul Ceruzzi writes that Steve Jobs of Apple
“not only admitted, but even boasted, of having stolen the graphical user interface
(GUI) from Xerox PARC.”23 According to another anecdote, when Jobs accused
Bill Gates of Microsoft of stealing the GUI from Apple and using it in Windows 1.0,
Gates fired back:
21 George Basalla, The Evolution of Technology (Cambridge, United Kingdom: Cambridge University Press, 1988).
22 Kenneth Bilby, The General: David Sarnoff and the Rise of the Communications Industry (New York: Harper & Row, 1986).
23 Paul E. Ceruzzi, A History of Modern Computing (Cambridge, MA: The MIT Press, 2003).
No, Steve, I think it’s more like we both have a rich neighbor named Xerox, and
you broke in to steal the TV set, and you found out I’d been there first, and you
said, “Hey, that’s no fair! I wanted to steal the TV set!”24
We also discussed in detail the development of television and the success of David
Sarnoff at the expense of Philo T. Farnsworth and, to some extent, Vladimir
Zworykin. Regardless of the original source of their technologies, Apple, Microsoft
and RCA were ruthless in exploiting what they clearly recognized as key
innovations.
Mechanical Technology and Plug Power are both building on the concept of the
fuel cell, which has been around for almost 200 years. Mechanical Technology just
recently introduced the first commercial micro fuel cell, while Plug Power is the first
company to bring to market a reliable, economically viable fuel cell for on-site
power generation.
No. 4: Innovation Is Not Always Associated with Commercial
Success
Our fourth point follows from the third. Indeed, the recurring theme of this report is
that the “users” of a technology are often more successful than its “makers.” An
excellent example of this is the aforementioned Xerox PARC, which has been
credited with developing one of the first personal computers (the Alto), the concept
of personal distributed computing, the GUI, the first commercial mouse, Ethernet,
client/server architecture, laser printing, and many of the basic protocols of the
Internet. Yet Xerox has become famous for “fumbling the future” by failing to
commercially exploit any of PARC’s ideas. Likewise, Eckert-Mauchly was unable
(or, some say, unwilling) to exploit the commercial possibilities of computing, and
the company quickly disappeared inside Remington Rand.
Clearly, mismanagement of a new technology can be just as devastating as the
misappropriation of that technology (as discussed in point three). In addition to the
Xerox PARC examples, another cautionary tale with regard to mismanagement is
the case of Prodigy, the first online service aimed at the casual home user.
Sears and IBM invested $600 million to create Prodigy, which debuted in November
1989, offering much the same functionality that America Online would subsequently
provide, including electronic bulletin boards and e-mail. However, Prodigy
management regarded electronic bulletin boards and e-mail as secondary to the real
purpose of the online service, which was to sell merchandise and information to
subscribers, and advertising to merchants.
In a well-publicized incident, Prodigy users banded together and protested the
surcharge that Prodigy imposed on e-mails via mass e-mailings and messages posted
to an electronic forum. Prodigy management promptly canceled the subscriptions of
those users and issued restrictions on the nature of e-mails that remaining
subscribers could send going forward. In protest, 3,000 members changed their
online service to GEnie. That was the beginning of the end of Prodigy, even though
it would take America Online another five years to match Prodigy’s service offering.
24 This anecdote has been widely retold in the media and on the Internet, but its precise origin is obscure.
These examples of the misfortunes of innovators such as Xerox, Eckert-Mauchly,
and Prodigy should serve as cautionary tales for the companies discussed in the
sections that follow (summarized in Figure 15). Specifically, it could well be that
some of the technologies that we discuss do not succeed as expected or,
alternatively, that some of the companies we have identified as techno savvy fail to
execute. It is for this reason that, in most cases, we list several possible beneficiaries
of the technologies, in addition to conducting case studies on specific companies that
seem well positioned.
Figure 15. Today’s Technology Adapters, Exploiters, and Innovators

Adapters
Technology                    Origin/Source                    Company               Product/Competitive Advantage
Dual Clutch Transmission      Germany, 1940                    BorgWarner            Mass-produced dual clutch transmission
Drilling and Completion       Oilfield Equipment & Services    EOG Resources         More productive wells
HSA “Smart Card”              Card & ATM Technology            UnitedHealth Group    Combination medical, drug benefit, HSA, debit/credit card

Exploiters
Technology                    Origin/Source                    Company               Product/Competitive Advantage
RFID                          W.W. II aircraft identification  BJ’s Wholesale Club   Reduced store labor and distribution costs
VoIP                          Vocaltec (Israel)                Cisco Systems         Significant incremental spending on security & network switching
Real-Time Patient Monitoring  Internet                         Medtronic             Monitoring for congestive heart failure, low blood sugar levels

Innovators
Technology                    Origin/Source                    Company                              Product/Competitive Advantage
Fuel Cells                    19th Century England             Mechanical Technology / Plug Power   Commercial fuel cells
E-Money                       Late 1990s                       (PayPal) eBay / Euronet Worldwide    PayPal / eTop-Up
“Phood”                       Ancient China                    Senomyx / Martek Biosciences         Flavor enhancers / Food additives

Source: Smith Barney
Tech Adapters:
Three Case Studies
BorgWarner’s Dual Clutch Transmission
The earliest automobiles offered only manual transmissions. Similar in principle to
today’s manual transmission vehicles, these cars, such as the Ford Model T, had two
forward gears and one reverse. We note that, without a transmission, cars would be
limited to one gear ratio. So, for example, a gear ratio similar to third gear in a
manual transmission would have almost no acceleration when starting and, at high
speeds, the engine would race at a dangerous number of revolutions per minute.
Instead, the transmission uses the gears to make effective use of an engine’s torque
(or turning force) and to keep it operating at an appropriate speed.
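The single-ratio problem can be illustrated with a toy calculation. The wheel size, final drive, and "third gear" ratio below are assumed round numbers chosen for the arithmetic, not figures from the report; the point is simply that one fixed ratio leaves the engine near stall at walking pace and racing dangerously at highway speed.

```python
import math

# Illustrative numbers only: a 0.6 m wheel, 3.5:1 final drive, and a 1.3:1
# "third gear" ratio are assumed for the sake of the arithmetic.
def engine_rpm(speed_kmh, gear_ratio, final_drive=3.5, wheel_diameter_m=0.6):
    """Engine speed implied by road speed through a fixed overall gear ratio."""
    wheel_rpm = (speed_kmh * 1000 / 60) / (math.pi * wheel_diameter_m)
    return wheel_rpm * gear_ratio * final_drive

for speed in (10, 50, 200):
    print(f"{speed:3d} km/h -> {engine_rpm(speed, 1.3):5.0f} rpm")
# At 10 km/h the engine would turn near 400 rpm (below idle, so no usable
# torque); at 200 km/h it would race past 8,000 rpm.
```

A gearbox exists precisely to decouple these two regimes, keeping engine speed in its useful band across the whole range of road speeds.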
As the popularity of cars increased, engineers began searching for a way to have the
car “automatically” shift from one gear to another. Although the groundwork had
been laid in the early 1900s, the first automatic car transmission was not
manufactured until 1938, when General Motors invented “Hydra-matic Drive”
(reflecting the fact that hydraulics played a key role). The first mass-produced
automobile to offer an automatic transmission was a 1940 Oldsmobile.
Automatic transmissions with a hydraulic converter quickly became the standard in
the U.S., while in a crowded Europe, with its small car engines and highly taxed
gasoline, there were many proposals on how to automate transmissions, but none
succeeded. The manual transmission, with its clutch pedal, always came out on top,
and a key factor was fuel efficiency — automatic transmissions, despite all the
efforts of designers, still lead to higher fuel consumption than manual transmissions.
Today, however, many European drivers find themselves in a predicament: On the
one hand they like the sporty feel and fuel economy of a manual gearbox; on the
other hand, they increasingly find themselves stuck in traffic in congested cities, an
environment where automatic transmissions are much less tiresome. Moreover,
reflecting the fact that more than 80% of the cars manufactured in Europe are
manual (and the majority of the assembly lines are configured to produce manual
transmissions), automatic transmissions are an expensive option.
The dual clutch transmission (DCT), which is based on a manual gearbox, offers a
solution to these problems. The driver can initiate the gear change manually, or can
leave the shift lever in fully automatic mode. In other words, a car can meet the
demand for a sporty, responsive driving experience or, alternatively, for the
convenience of an automatic that offers good fuel economy.
As its name suggests, a dual clutch transmission uses two clutches (although, as in
an automatic, there is no clutch pedal). One of these clutches controls the shaft for
first, third, and fifth gears (along with reverse), and a different clutch and shaft
control second, fourth, and sixth gears (see Figure 16). The benefit of the dual-clutch approach lies in its ability to shift from one ratio to the next without
disconnecting the engine from the drive wheels, which normally takes place when a
driver has to lift off the accelerator and depress the clutch pedal.
Figure 16. The Evolution of the Automobile Transmission
Ford Model T: two forward gears and one reverse
P-R-N-D-3-2-1 Automatic Transmission
Dual Clutch Transmission
Source: Ford Motor Company, Smith Barney, and Volkswagen
So, for example, when the engine is started, DCT electronics engage the clutch for
first gear. As the vehicle is accelerating, the electronics position second gear into
place. Then, when the computer decides it is time to shift, it disengages the clutch
that is transmitting torque for first gear at the same time that it engages the second
gear clutch. This greatly reduces the “torque interrupt” sensation that is typical of a
manual (i.e., when the transmission shifts gears, causing a lurch or jitter as the
engine power is reduced, the clutch disengages, the gear is shifted, and the clutch is
reengaged).
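The pre-selection logic described above can be sketched as a small state machine. This is an illustrative model only (the class and method names are invented), showing how one clutch drives the odd-gear shaft while the next gear is staged on the even-gear shaft, so that an upshift amounts to swapping which clutch is engaged.

```python
class DualClutchTransmission:
    """Minimal sketch of DCT pre-selection logic (illustrative, not a vehicle model)."""

    def __init__(self, gears=6):
        self.gears = gears
        self.current = 1        # gear currently transmitting torque
        self.preselected = 2    # gear staged on the idle shaft
        self.engaged_clutch = "A"  # clutch A drives odd gears, clutch B even gears

    @staticmethod
    def clutch_for(gear):
        return "A" if gear % 2 else "B"

    def upshift(self):
        # Swap clutches: torque moves to the pre-selected gear with no
        # interruption, then the next gear is staged on the now-idle shaft.
        self.current = self.preselected
        self.engaged_clutch = self.clutch_for(self.current)
        if self.current < self.gears:
            self.preselected = self.current + 1

dct = DualClutchTransmission()
dct.upshift()
print(dct.current, dct.engaged_clutch)   # 2 B, with third gear already staged
```

Because the outgoing and incoming gears live on separate shafts with separate clutches, the engine never has to be disconnected from the drive wheels during the swap.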
The first DCT was patented in 1940 and was tested in a truck, but it never went into
production. In the 1980s, Porsche successfully developed its own DCT for racecar
applications. At the time, however, it was not feasible to transfer the DCT
technology into production cars, in large part because the electronics and hydraulic
controls were so expensive that the transmission would not be commercially viable.
But following advances in electronics and hydraulic controls, Volkswagen partnered
with BorgWarner in the late 1990s in the co-development of a DCT for mainstream
car applications. (In a similar vein, recall that RCA’s David Sarnoff also had to wait
until improvements in radio equipment made wireless broadcasting feasible.) The
first commercial application of the BorgWarner DualTronic™ appeared in 2003 in
the Volkswagen Golf R32 DSG and the Audi TT 3.2.
As noted, in addition to its fuel economy, a key attraction of DCT is its performance
features. A reviewer of a Golf equipped with a wet clutch DCT transmission noted
that “the acceleration times of a (Golf) R32…are slightly better than those for a
manual gearbox, sprinting to 62 mph in 6.4 seconds compared to the 6.6 seconds it
takes in a manual. While both versions have the same top speed of 154 mph, the
(Golf R32) has a better fuel consumption of 27.6 mpg, [13%] better than the manual
version” 25 (see Figure 17).
Figure 17. Performance Improvements of DCT Transmission vs. 6-Speed Manual
VW Golf R32 equipped with a wet clutch DCT transmission
Top Speed:        DCT 247 km/h (154 mph)    Manual 247 km/h (154 mph)
Fuel Consumption: DCT 10.3 liters/100 km    Manual 11.5 liters/100 km
Source: Volkswagen
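As a sanity check, the liter figures can be converted to imperial miles per gallon, evidently the unit behind the reviewer's 27.6 mpg. Only standard unit constants are used; the 10.3 and 11.5 liters/100 km values are the ones quoted for the Golf R32.

```python
# Unit constants only; the 10.3 and 11.5 liters/100 km figures are from Figure 17.
LITERS_PER_IMPERIAL_GALLON = 4.54609
KM_PER_MILE = 1.609344

def l_per_100km_to_mpg(l_per_100km):
    """Convert liters/100 km to imperial miles per gallon."""
    return (100 * LITERS_PER_IMPERIAL_GALLON) / (l_per_100km * KM_PER_MILE)

mpg_dct = l_per_100km_to_mpg(10.3)     # about 27.4 mpg, near the quoted 27.6 mpg
mpg_manual = l_per_100km_to_mpg(11.5)  # about 24.6 mpg
gain = mpg_dct / mpg_manual - 1        # roughly 12%, close to the article's [13%]
print(f"{mpg_dct:.1f} mpg vs {mpg_manual:.1f} mpg")
```

The small gap between the computed value and the quoted figures is consistent with rounding in the published consumption numbers.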
25 Automotive Engineer, April 2003.
Competing suppliers, including ZF Sachs, LuK, and Valeo, are also developing
dual-clutch transmissions. However, BorgWarner seems well positioned to maintain
a dominant share of the DCT market given its 1) head start, 2) technological edge,
and 3) growing customer base. Specifically, by being the only supplier on the
market with such a system, BorgWarner has a head start over its competitors on
improvements to the system (i.e., it is “further up the learning curve”). With regard
to the technology, Bill Kelley, BorgWarner’s director of drivetrain technology,
believes the company’s expertise in friction materials, high-tech lubricants, and
advanced electronic controls combine to give it a strong competitive edge.
For several reasons, Mr. Kelley thinks the majority of DCT customers will likely be
in Europe for the foreseeable future. For a start, the technology is particularly
attractive to European automakers because the DCT design inherently takes
advantage of the investments that Europeans have made in manual
transmissions. No expensive retooling is required for a move to DCT designs, while
the finished assembly line is more compact than that required for a conventional
automatic transmission.
Another driver of DCT adoption should be Europeans’ love of diesel engines
(almost 40% of new cars sold in Europe are fueled by diesel). However,
conventional automatic transmissions experience a 20% fuel-efficiency loss when
coupled with diesel engines. While DCT offers at least 5% better fuel economy than
conventional automatics with a gasoline engine, the gain is up to 15% with diesel.
BorgWarner has said it expects the demand for DCT to grow from virtually nothing
in 2003 to 18% of the European passenger-car market by 2009. As for other DCT
customers, Mr. Kelley thinks that some Asian markets have potential. He points out
that manual transmission penetration is nearly 100% in India, around 75% in China,
and more than 50% in South Korea. In addition, although manual transmission
penetration is not particularly high in Japan, he believes there are opportunities for
DCT in that market, in large part because of the fuel economy it offers. However,
Mr. Kelley believes that, given the large investments that U.S. automakers have
made in automatic transmissions, the U.S. will not be a particularly large market for
DCT for many years to come.
EOG Resources’ Oil and Gas Drilling and Completion
Technology
EOG Resources is achieving superior results by applying some well-known drilling
and completion technologies in a more effective way than its competitors. The
company uses tools and technologies that are widely available in the exploration and
production (E&P) industry, but it customizes them in innovative and creative ways,
with the result being that it often enjoys success where others have failed.
Loren Leiker, EOG’s executive vice president, exploration and development,
attributes the company’s successful track record in large part to its decentralized
structure. The company is split into 11 divisions (nine of which are in the U.S.),
with each of these divisions effectively being run as a “micro-cap” company. The
divisions are managed by a small core of staff that have worked together for many
years and that have vast experience in the relevant geologic basins.
We noted above that Henry Ford combined the best ideas of interchangeable parts,
continuous-flow production, and assembly-line production in order to mass produce
automobiles. In a similar fashion, EOG’s divisions combine the best ideas of their
geologists, geochemists, and engineers in order to overcome a challenge and
successfully extract energy from the ground.
Indeed, Mr. Leiker observes that EOG’s exploration strategy involves hiring a lot of
creative people to generate ideas about E&P, and then filtering those ideas for the
very best concepts. So, rather than investing, say, $500 million on exploring
unproved acreage (as some of its competitors do), EOG prefers to invest a
considerably smaller amount (anywhere from a few thousand to a few million
dollars) in one of the ideas generated by its team, and then invest additional funds
only if the initial results seem promising.
The Barnett Shale
A good example of EOG’s approach is offered by its exploration and development
of the Barnett Shale, which is a geologic formation in the southern U.S. containing
vast amounts of natural gas. According to a mid-2004 U.S. Geological Survey
assessment, the Barnett Shale in East Texas holds more than 26.6 trillion
cubic feet of expected, ultimately recoverable reserves.
While a number of companies have been successful in a region of the Barnett Shale
in East Texas known as the core, the difficult geology of the non-core region
presented a challenge. Companies drilling in the non-core region frequently found
that their pipes filled with water rather than natural gas. Because of these
challenges, the non-core region was an untapped frontier. However, EOG began to
employ the strategy outlined above to its first Barnett leasehold in order to exploit
the full potential of the non-core region.
The Technologies
In 2000, EOG drilled its first vertical well in the region, gathering rock data and
characterizing the geology. Later that year, EOG drilled three other vertical wells,
but the watery strata that rest under the non-core Barnett plagued all these wells. So,
in 2003, EOG began applying newer technologies in innovative ways, including
drilling horizontal wells that skirted the underlying strata. Gas production results
showed that this tactic was one part of optimizing performance; for example, a
typical horizontal well in the non-core Barnett Shale has been found to produce
300% more gas than a vertical well in the first 16 months.
Emboldened by the success of these test wells, EOG made a relatively small up-front investment of $30 million to buy 175,000 acres of non-core Barnett Shale (i.e.,
at a price of under $200 per acre). The company then eventually customized two
other well-known oil and gas production techniques to fully exploit these previously
neglected gas reserves. All three of EOG’s technologies now work in concert to
promote the highest gas output possible from this tricky resource play.
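The per-acre arithmetic behind "under $200 per acre" is easy to verify, and it matches the $172 entry shown later in Figure 19. The only inputs are figures from the text:

```python
# Figures from the text: a $30 million up-front purchase of 175,000 acres,
# versus later deals at $11,000-$17,000 per acre (Figure 19).
investment = 30_000_000
acres = 175_000
price_per_acre = investment / acres
print(f"EOG paid about ${price_per_acre:,.0f} per acre")

recent_low, recent_high = 11_000, 17_000
print(f"subsequent deals: {recent_low / price_per_acre:.0f}x to "
      f"{recent_high / price_per_acre:.0f}x EOG's entry price")
```

In other words, leasehold in the non-core Barnett repriced by roughly two orders of magnitude once EOG's approach was validated.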
Figure 18. The Evolution of Energy Production
Do-It-Yourself Energy Production
The Early Vertical Oil Rig
Modern Drilling and Completion Technologies: Horizontal wells, 3-D Seismic, Fracturing
Source: Smith Barney and Environmental Protection Agency
¤ Initially, EOG employed horizontal wells to improve production volumes and
avoid the watery strata, as outlined above.
¤ EOG then recognized that 3-D seismic data (i.e., geologic imaging) would help
it avoid the geologic pitfalls of the non-core region that sabotaged other
companies’ results. EOG regularly gathers these data and uses them to choose
well locations by visualizing the shale, thereby avoiding sink holes that let
valuable gas escape before the pipe can collect it.
¤ Finally, successfully fracturing the rock had been a challenge to other
companies. (Fracturing technology creates controlled fissures — not too big or
small — so that gas can flow around the rock and into the pipe.) EOG
successfully fractured the Barnett by further customizing a known technology.
This involved choosing the right combination of proppant (a medium used to
keep the rock separated, such as sand, gel, or beads) and the best water volume,
pumping rates, and proppant concentration.
The Payoff
EOG’s Barnett Shale success has created a virtuous cycle for the company:
Landowners want top dollar for the mineral development rights, and EOG’s
competitors may offer more up-front cash for those leases. However, EOG offers a
track record of success in extracting gas, bringing the landowner higher gas volume
royalties over the long term, and thereby making EOG an attractive partner.
In addition, EOG’s reputation for environmentally rigorous standards also works in
its favor in the eyes of landowners. The result is that EOG had 321,000 acres of
non-core property in the Barnett Shale as of the fourth quarter of 2004.
Improved drilling and completion technology is most beneficial when it is novel.
But once the technology is widely employed in the industry, leasehold prices
escalate. As noted above, EOG’s average acquisition prices in the Barnett Shale
have been around $200 per acre, but now that EOG’s E&P technology is more
widely accepted, recent deals in the area average $11,000–$17,000 per acre (see
Figure 19).
Figure 19. EOG’s Technological Advantage Is Now Being Priced into the Region
Date        Price/Acre   Acres     Buyer           Location
11-Jan-05   $11,200      61,000    XTO Energy      Core, Parker & Johnson County
30-Nov-04   $17,000      16,000    Chesapeake      Johnson County, TX
23-Nov-04   $11,500      22,000    Encana          Fort Worth Basin
3-Mar-04    $5,466       1,500     Carizzo         Denton County
Early 2004  $172         175,000   EOG Resources   Barnett Shale, TX
Source: Smith Barney and company data
UnitedHealth Group’s Health Savings Account (HSA)
“Smart Card”
Managed care organizations (MCOs) are in the vortex of the $2 trillion per year
health care industry in the U.S., which represents approximately 15% of GDP, and is
growing more than 5% annually. The MCO industry is fragmented and rapidly
consolidating, with the two largest MCOs, UnitedHealth Group and WellPoint, each
enrolling 10% of the U.S. population in their medical plans.
It is in this environment that employers and individuals are poised to switch products
from the now-prevalent HMO, PPO, and POS options to consumer-directed health
plans/high-deductible health plans (CDHP/HDHP) linked to tax-advantaged savings
accounts, such as the Health Savings Account (HSA).
Recall that the Medicare Prescription Drug, Improvement, and Modernization Act of
2003 went beyond assisting seniors with prescription drugs and contained many
other provisions with wide-ranging implications. One key provision of the Act —
the establishment of HSAs — has nothing to do with Medicare. In effect, an HSA is
to health care what a 401(k) is to retirement savings.
Recent data suggest that HSAs are starting to take off. According to a March 2004
Mercer Human Resource Consulting survey of 991 employers, more than two-fifths
say that it is either very likely (8%) or somewhat likely (35%) that they will offer an
HSA in 2005 (see Figure 20). These figures jump significantly for 2006, as 19% of
employers believe it is very likely they will offer an HSA by 2006, and another 54%
believe it is at least somewhat likely.
Figure 20. Employer Interest in HSAs
likelihood of offering a high-deductible health plan with an HSA
                  By 2005   By 2006
Very likely            8%       19%
Somewhat likely       35%       54%
Not likely            56%       28%
Source: Mercer Human Resource Consulting
Figure 21. The Evolution of Medical Payment Processes
Forms
The Web
The Smart Card
Source: Smith Barney
Importantly, because of their complexity, consumer-directed health plan (CDHP)
products will increase the processing burden on health plans and members. The end
result will be that technology will become more critical to health plans, members,
and providers/suppliers, such as physicians, hospitals, pharmacies, and labs. In
other words, just as a bank must have real-time data for an ATM to work, and an
airline must have real-time data for automated ticket kiosks, health plans will also
have to operate in a real-time environment (e.g., providing data related to claims,
providers, eligibility, account balances, and deductible balances) in order for new
CDHP products to work well.
Stephen Hemsley, UnitedHealth Group’s president and chief operating officer, notes
that “the health services industry needs to function much more like the banking
industry relative to the use of technology. We should expect nearly perfect
execution across a wide range of standard transactions, should see funds flow
between participants electronically at the push of a button, and should develop
simple, intuitive customer reporting designed more like a bank statement than an
old-fashioned explanation of benefits form.”
Technology Is the Backbone of Managed Care
Technology is the backbone of a successful MCO; indeed, almost all MCO capital
expenditures have tended to be on technology. The industry is transaction intensive
and was among the first to adopt computer technology to automate manual
processes, given that underwriting results improve with more timely information. In
the price-competitive managed care industry, where a 10% pretax margin is
considered outstanding, every 100 basis points of medical or administrative cost
savings can boost profits by 10%.
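That sensitivity can be made concrete with a hypothetical revenue figure; the 10% "outstanding" pretax margin and the 100-basis-point savings are the ones cited above.

```python
# Hypothetical revenue; the 10% pretax margin and the 100-basis-point
# savings figure come from the text above.
revenue = 1_000_000_000
pretax_margin = 0.10
profit = revenue * pretax_margin              # $100 million pretax

savings_bp = 100                              # 100 basis points of cost savings
extra_profit = revenue * savings_bp / 10_000  # $10 million, straight to pretax profit
boost = extra_profit / profit
print(f"profit boost: {boost:.0%}")           # prints "profit boost: 10%"
```

Because the margin is thin, a small fraction of revenue saved translates into a proportionally much larger change in profit.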
That said, most key statistics pertaining to technology are not made publicly
available by MCOs. Moreover, those statistics that are shared by companies are
defined differently from MCO to MCO, so apples-to-apples comparisons are not
always possible. The analysis provided here is based on the experience of Smith
Barney’s managed care analyst Charles Boorady, which spans more than ten years
of following the industry closely on Wall Street and six years of technology
consulting experience. Charles also relied on research by Forrester Research and
Gartner Consulting as part of this analysis.
A key conclusion of his analysis is that, thanks to heavy investments in technology,
UnitedHealth Group is widening its lead over its competitors. This is hardly new
news. In the last several years, United has employed informatics and predictive
modeling to gain a leg up on its competitors. Specifically, it has used technology to
identify situations where intervention can improve medical outcomes and boost
profits. So, for example, United is a leader in evidence-based medicine (i.e.,
defining protocols based on what medical evidence suggests is the right course).
The CDHP and HSA Era
Imagine an unbearable chest pain. Is it something you ate or cardiac trouble?
Which clinic? Which doctor? What is the right protocol? What is the cost of
various options? Why does the highest-ranked facility have the highest mortality
rate? (Answer: Because the most severe cases go there seeking the best medical
attention.) What do you do? Where do you begin?
This unpleasant scenario will become more daunting in a CDHP world because the
dollar amounts paid by the patient and the complexity of options will become much
greater. Dr. Reed Tuckson, senior vice president of consumer health and medical
care advancement at UnitedHealth Group, notes that, “as patients become
increasingly involved in decision making about their own health, particularly those
who are confronted by complex or chronic disease, they seek, require, and deserve
information that identifies the best-performing facilities and physicians for their
specific needs.”
The added complexity of a CDHP is precisely why United’s technology will give it
an even bigger lead over its competitors. To appreciate the technological
challenges, consider the following simple example. Imagine putting several items
on the counter at the register of a pharmacy: prescription drugs, over-the-counter
drugs, and health and beauty aids.
¤ Today you might pay for the prescription drug with a drug benefit card, and later
mail the receipt for out-of-pocket co-payments to your flexible spending account
(FSA) administrator for reimbursement. Next, you might ring up health and
beauty aids and over-the-counter medications and pay for them with a debit or
credit card such as Visa, MasterCard, or American Express. You might also
use a discount card and coupons to get discounts on those items. You would
then save the receipt for the over-the-counter medications and mail it in with
eligible items circled to your FSA administrator for reimbursement.
¤ In the CDHP and HSA era things would be very different and very complicated.
For a start, there would be a certain number of dollars in your HSA that would
cover certain medical expenses. But how many dollars are left in the account,
and which medical expenses do they cover?
United Leads the Industry in CDHP and HSA Technology
United is approaching completion of an HSA “smart card” solution to the above
scenario. This would be one integrated card, combining the functions of medical,
drug benefit, HSA, FSA, debit/credit, and discount cards. Using such a card, in the
example above a pharmacy worker could ring up all the items and, with one swipe
of the card, determine:
¤ the amount of any discount you are entitled to on certain items,
¤ what qualifies for HSA dollars versus after-tax dollars,
¤ what UnitedHealth pays, and
¤ what you owe and how that payment should be divided between your HSA debit
card and personal credit card.
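To make the mechanics concrete, the one-swipe settlement described above can be sketched as follows. This is an illustrative model only; the item data, prices, and eligibility rules are assumptions, not United's actual logic.

```python
# Hypothetical sketch of the one-swipe settlement an HSA "smart card"
# would perform. All item data and rules are illustrative assumptions.

def settle(items, hsa_balance):
    """Split a pharmacy basket into discount, plan-paid, HSA, and
    personal-card amounts. Each item is a dict with keys:
    price, discount, plan_pays, hsa_eligible."""
    discounts = sum(i["discount"] for i in items)
    plan_paid = sum(i["plan_pays"] for i in items)
    # Amount the member owes after discounts and the plan payment.
    hsa_eligible_owed = sum(
        i["price"] - i["discount"] - i["plan_pays"]
        for i in items if i["hsa_eligible"]
    )
    other_owed = sum(
        i["price"] - i["discount"] - i["plan_pays"]
        for i in items if not i["hsa_eligible"]
    )
    # The HSA debit covers eligible spend up to the remaining balance;
    # anything left spills over to the personal credit card.
    hsa_charge = min(hsa_eligible_owed, hsa_balance)
    card_charge = (hsa_eligible_owed - hsa_charge) + other_owed
    return {"discounts": discounts, "plan_paid": plan_paid,
            "hsa_debit": hsa_charge, "personal_card": card_charge}

basket = [
    {"price": 60.0, "discount": 0.0, "plan_pays": 45.0, "hsa_eligible": True},   # prescription
    {"price": 8.0,  "discount": 0.0, "plan_pays": 0.0,  "hsa_eligible": True},   # OTC drug
    {"price": 5.0,  "discount": 1.0, "plan_pays": 0.0,  "hsa_eligible": False},  # beauty aid
]
print(settle(basket, hsa_balance=20.0))
```

With a $20 HSA balance, the card would apply $20 of HSA dollars to the eligible items and route the remaining $7 to the personal credit card, all in one transaction.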
Recall that Thomas Edison combined the existing technologies of electricity and the
incandescent bulb with the proven business model of the gas industry in order to
establish his electrical illumination system, which provided power to entire cities. In
a similar fashion, United is combining the existing technology of smart cards with
the proven business model of the managed care industry in order to establish a
strong competitive position in the new CDHP/HSA environment. So, as with the
Edison example, it is not the technology per se that is proprietary. Rather, it is the
vision of where the industry is headed, and what types of applications will be key,
that is giving United an edge.
As Figure 22 illustrates, an analysis by Forrester Research highlights United’s lead
in terms of CDHP and HSA technology given that, unlike its competitors, United
has “all the pieces” — the company can offer the plan, administer it, act as an HSA
custodian, and offer an HSA debit card.
Figure 22. Comparative Analysis of Health Plans
CDHP platform and HSA attributes
[Table comparing health plans (Anthem, Assurant Health, BCBS of Arizona, BCBS of Michigan, Blue Shield of California, Definity Health, Humana, Independence Blue Cross, Lumenos Employer Products Division, and UnitedHealthCare) across four columns: CDHP platform, HSA administrator, HSA custodian, and HSA debit card vendor. Vendors named include NASCO, FlexBen, JPMorgan Chase, MSAver, HSA Bank, Wells Fargo, Evolution Benefits, US Bank, The Bancorp Bank, Mellon Bank, and a menu of local and national banks. United is the only MCO with all four HSA pieces in house: the company can offer the plan, administer it, act as an HSA custodian, and offer an HSA debit card.]
Source: Forrester Research, Inc.
United’s technology is superior in large part because its technology spending
significantly outstrips that of its competitors (see Figure 23). With more than $3 billion in
annual free cash generation, United has the financial wherewithal to stay ahead of
the competition. Dr. William McGuire, UnitedHealth Group’s chairman and
CEO, points out that United has “spent more than $2 billion on technology and
business process change over just the past five years,” and he believes continued
investment in and deployment of leading technology solutions “will be critical to our
future success.” Moreover, United’s position as one of the largest benefit providers
in the country enables it to roll out a new technology that quickly enjoys widespread
adoption, thereby spreading development costs over a large membership base.
Figure 23. Capital Expenditures by MCOs (2004)
$ in millions
[Bar chart, $0–$400 million scale, comparing 2004 capital expenditures across UNH, WLP*, AET, CVH*, HUM, PHS, WC, HNT, and CI, with UNH the highest.]
*Pro forma numbers
Source: Smith Barney
Another key point is that United treats technology as a separate business unit. This
structure puts pressure on the company’s technology group to be at the leading edge
because it is forced to compete with outside vendors for the business of other United
units. Reflecting the success of this approach, today the in-house group competes
against companies (e.g., Accenture, EDS, and Perot) that provide services to
United’s competitors. United recently won a ten-year $600–$700 million
technology outsourcing contract from Harvard Pilgrim Health Plans, a leading
Massachusetts HMO that previously used Perot.
Could United’s lead in CDHP and HSA technology be threatened by new entrants?
That seems unlikely, but it cannot be ruled out entirely. For a start, medical
management technology is specialized and proprietary, so it would be easier for a
company such as United to begin offering commodity financial products (indeed,
United has already chartered its own commercial bank) than it would for a financial
services company to develop CDHP and HSA technology. Instead, it is more likely
that commercial banks and credit card issuers would partner with health plans, given
that there would be synergies between the HSA “smart card” and ATM technology.
However, it is possible that some companies may opt to enter the space by way of
acquisitions. Recall that multi-line insurance companies and life insurers exited the
health care business following the birth of the HMO, a vertically integrated product.
It could be that some companies may view the CDHP as a more commoditized
financial product that is relatively easy to manufacture. Furthermore, the asset
gathering potential of HSAs may appeal to 401(k) and FSA administrators, payroll
processors, etc.
Tech Exploiters:
Three Case Studies
BJ’s Wholesale Club and Radio Frequency Identification
Radio frequency identification (RFID) is a generic term for technologies that use
radio waves to automatically identify individual items. RFID is not a new
technology; it was first introduced during World War II to help identify aircraft
through radio frequency waves. RFID technology has since evolved and is currently
being used in industries such as railroads (to track cars), autos (to track parts), and
agriculture (to track livestock). But, as we discuss in detail below, a particularly
good use of RFID is tracking items through a retail supply chain. (Recall that we
noted above that it took about 20 years of refinement before the optimal use of the
phonograph became apparent.)
Better than Bar Codes
While bar codes have been around for 30 years and average 75%–85% accuracy,
RFID tags can achieve greater than 90% accuracy when used properly. Moreover,
RFID provides the ability to read multiple items simultaneously, whereas with bar
codes each item must be scanned individually. Other drawbacks of bar codes
include the need for a line of sight, as well as limited data-carrying capacity.
Today, there are several methods of identifying objects using RFID, but the most
common is to store a serial number that identifies a product, and perhaps other
information, on a microchip that is attached to an antenna. The chip and the antenna
together are called an RFID transponder or RFID tag. The antenna enables the chip
to transmit the identification information to a reader. The reader converts the radio
waves returned from the RFID tag into a form that can then be passed on to
computers that make use of it.
RFID tags are either “active” or “passive.”
¤ Passive tags are much less expensive (approximately $0.18–$0.40 per tag,
depending on how technologically advanced they are) than active tags, as they
do not carry an active battery source. Passive tags draw their power from
electromagnetic waves transmitted by RFID readers. The primary disadvantage
of passive tags is that they have a short transmission distance of up to ten feet.
However, passive tags are very flexible and can be programmed to provide
numerous functionalities. Passive tags tend to be used primarily for lower-value
and high-volume consumer goods. Currently, about 95% of all RFID tags
manufactured are passive tags.
¤ Active tags are much more expensive than passive tags (at approximately $10–
$100 per tag, depending on how technologically advanced they are), as they
carry an active battery source. Active tags can transmit over 100 feet and can
even include special encryption for data transmission. Active tags tend to be
used primarily for high value goods, such as controlled substances (nuclear
materials and firearms). Primarily due to the high cost associated with active
tags, only about 5% of RFID tags manufactured are active.
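The passive/active trade-off described above can be sketched as a simple selection rule. The cost and range figures come from the text; the decision rule itself is an illustrative assumption.

```python
# Illustrative sketch of the passive-vs-active tag trade-off.
# Cost and range figures are from the text; the rule is an assumption.

PASSIVE = {"cost_usd": (0.18, 0.40), "max_range_ft": 10, "battery": False}
ACTIVE = {"cost_usd": (10.0, 100.0), "max_range_ft": 100, "battery": True}

def choose_tag(item_value_usd, required_range_ft):
    """Pick the cheapest tag type that meets the read-range requirement
    and does not cost more than the item is worth."""
    if required_range_ft <= PASSIVE["max_range_ft"]:
        return "passive"
    # Longer range requires an active tag, which only makes economic
    # sense on high-value goods (e.g., controlled substances).
    if item_value_usd >= ACTIVE["cost_usd"][0]:
        return "active"
    return "untagged"

print(choose_tag(3.0, 5))      # cheap consumer good, short range -> passive
print(choose_tag(5000.0, 80))  # high-value good, long range -> active
```

The rule mirrors the split noted above: roughly 95% of tags are passive because most tagged goods are low-value, high-volume items read at short range.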
Figure 24. The Evolution of Inventory Management
“What you see is what you have”
Closing the store for three-day inventory accounting
RFID
Source: Smith Barney and Manhattan Associates
With regard to readers, there are various types used in RFID systems, reflecting the
absence of a uniform standardization for tag formatting, transmission frequency, or
application systems. RFID readers range from $500 to $3,000 per unit (depending
on how technologically advanced they are), and a complete rollout of an RFID
system requires several RFID readers. Software that analyzes the information from
the readers ranges from basic applications (canned software) to fully customized
software. Due to the current fragmentation and lack of standards, RFID purchases
tend to include both the tags and the readers from the same source.
Today there are several obstacles to a broad-based RFID implementation rollout:
¤ Tag Costs. Smith Barney analysts believe that passive tags costing less than
$0.10 are needed to entice businesses to adopt RFID on a large scale; some
observers even suggest that a cost of less than $0.05 is necessary. Current tag
prices in the $0.20–$0.25 range are likely too costly for individual unit tracking
— it does not make a lot of sense to use a $0.25 tag on a $3 pack of razors or a
$1 loaf of bread. Consequently, it is likely that pallet and case level tracking
will be the most predominant for the foreseeable future.
¤ Who pays? Whether the retailer or the supplier pays for the RFID tags remains
a critical issue. Wal-Mart — an early adopter of RFID technology — has
specifically stated that it will not pay for RFID tags, which implies that large
manufacturers will have to shoulder the costs of tagging pallets and cases. This
could be a significant issue for consumer packaged goods companies, as the
extra expense of RFID could weigh on margins. (Gartner Consulting forecasts
that there is an 80% probability that, by 2007, a majority of leading consumer
goods manufacturers will implant RFID tags on loaded pallets that leave their
plants. 26) While the initial tagging costs are likely to be borne by suppliers, over
time manufacturers may be able to push at least some of the costs of RFID
tagging to packaging and pallet suppliers.
¤ Implementation Costs. Aside from the individual tag cost there are also the
costs of the readers and infrastructure implementation. As noted, RFID readers
generally range from $500 to $3,000, depending on the complexity and
functionality needed. Furthermore, it is not uncommon for companies to spend
close to $1 million for supply chain management consulting and software
implementation costs. In other words, a focus solely on tag costs underestimates
the true cost of implementing an RFID system.
¤ Standards. Currently there is no single global standard being used by RFID
systems providers. While tags are very flexible and can be programmed based
on the needs of each company, these differences may prohibit articulation
among different systems until standards are adopted. Two global standards are
currently predominant in RFID: ISO (International Standards Organization) and
EPC (Electronic Product Code). However, according to Patrick Sweeney,
president and chief executive officer of ODIN technologies, an RFID and EPC
integration and software company, having two standards should not ultimately
impede RFID implementation. Mr. Sweeney notes that bar codes have over 200
standards. The FCC has designated several frequencies in the U.S. for RFID use,
but other jurisdictions use different frequencies. For example, in the U.S. RFID
commonly operates at 125 kHz (low frequency) and 13.56 MHz (high frequency),
while Europe uses an ultra-high frequency of 868 MHz. These transmission
frequency differences among countries may challenge the notion of a global RFID
supply chain management system.

26 Predicts 2004: Critical Infrastructure Protection; Authors R. Mogull, C. Moore, D. Fraley, R. Goodwin, A. Earley, R. DeLotto;
January 14, 2004.
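A back-of-the-envelope cost model, using the tag, reader, and consulting figures cited above, illustrates why pallet-level tracking dominates item-level tracking at current prices. The reader count and tagging volumes are illustrative assumptions.

```python
# Back-of-the-envelope RFID rollout cost, using the ranges cited above.
# The reader count and tagging volumes are illustrative assumptions.

def rollout_cost(tagged_units, tag_cost=0.25, readers=50,
                 reader_cost=1500.0, consulting=1_000_000.0):
    """Total cost = tags + readers + consulting/software implementation."""
    return tagged_units * tag_cost + readers * reader_cost + consulting

# Item-level tagging of 100 million units vs. pallet-level tagging of
# the same goods at an assumed ~1,000 items per pallet:
item_level = rollout_cost(100_000_000)
pallet_level = rollout_cost(100_000)
print(f"item-level:   ${item_level:,.0f}")    # $26,075,000
print(f"pallet-level: ${pallet_level:,.0f}")  # $1,100,000
```

Note how the fixed reader and consulting costs dominate the pallet-level case, which is why a focus solely on tag costs understates the true cost of an RFID system.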
RFID in Retail
Despite these issues, as noted above, RFID is currently being employed in a range of
industries (from railroads to agriculture). It would seem, however, that the
technology would be particularly beneficial in the retail sector, where RFID can be
used to track items through the supply chain. Warehouse format retailers are likely
to be early adopters, given that, as discussed below, most large suppliers are
currently including RFID tags at the pallet level.
Specifically, RFID offers warehouse retailers the following benefits:
¤ Decreased Distribution and Labor Costs. RFID may be able to increase
efficiency for retailers at both distribution and in-store operations. More
specifically, tasks such as stocking, locating products, checking out, and
physical counting, among others, could be significantly reduced or made more
efficient. In some cases, retailers may be able to keep store labor hours the
same, but perhaps deploy sales associates in other tasks such as customer
service.
¤ Reduced Shrink. By Smith Barney estimates, shrink represents approximately
2% of sales for most retailers. Shrink is driven by several factors, including
employee theft, shoplifting, error, and fraud by vendors. RFID could potentially
reduce shrink across the board by providing a tracking device, both in store
(lessening employee theft and shoplifting) and at the distribution centers
(lessening employee theft and vendor fraud).
¤ Reduced Out-of-Stock Items. Retailers can face out-of-stock issues due to the
complex nature of inventory management, particularly for large operations. As
a result, retailers may lose sales opportunities. So, for example, in a pilot study,
The Gap found it could increase in-store item availability to almost 100% by
applying RFID tags and tracking inventory in its stores. Assuming that 10% of
customers leave due to items being out of stock, Accenture has estimated that
the benefit of RFID tags would be equivalent to a 3% increase in sales.
¤ Reduced Inventory Write-Offs. Due to a better grasp of what is in the stores and
distribution centers, as well as a sense of the life span of a product (e.g., food
and consumer electronics), retailers may be able to mitigate inventory write-offs
through the use of RFID.
Wal-Mart has been an early adopter of RFID technology, and the company is a
member of the board of EPC Global, which is responsible for establishing RFID
standards. Effective January 1, 2005, Wal-Mart mandated that its top 100 global
suppliers start tagging all cases and pallets, representing approximately 1 billion
cases per year (37 other suppliers volunteered to meet the deadline). By the end of
2006, Wal-Mart expects to track all cases and pallets from all of its global suppliers.
The company is expanding its RFID network from its experimental Texas
distribution center to 12 distribution centers and over 600 stores nationwide.
Wal-Mart has paved the way for other retailers to take advantage of RFID. In that
regard, some of Wal-Mart’s smaller competitors may be well positioned (even more
so than Wal-Mart). As we discuss here, BJ’s Wholesale Club seems to have a
particularly strong advantage.
The BJ’s Advantage
Smith Barney analyst Deborah Weinswig estimates that RFID adoption could boost
BJ’s 2007 EPS by 27% (see Figure 25). (2007 is the first full year that mass
merchant retailers should reap the benefits of RFID rollout.) By contrast, Wal-Mart
is forecast to experience a 22% boost to 2007 EPS, and Costco just a 15% increase.
Figure 25. Key RFID Metrics
for three companies

                                   BJ's      Wal-Mart    Costco
Current 2007E EPS Estimate         $2.69     $3.58       $2.76
Potential 2007E EPS with RFID      $3.41     $4.37       $3.17
% Accretion from RFID              27%       22%         15%
Distribution Centers               3         110         26
SKUs at discount stores            n/a       65,000      n/a
SKUs at supercenters               n/a       125,000     n/a
Total SKUs                         7,000     n/a         3,700-4,500

Source: Smith Barney
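The accretion percentages in Figure 25 follow directly from the current and potential EPS estimates; a quick check:

```python
# Reproducing the "% Accretion from RFID" line in Figure 25 from the
# current and potential 2007E EPS estimates.

eps = {
    "BJ's":     (2.69, 3.41),
    "Wal-Mart": (3.58, 4.37),
    "Costco":   (2.76, 3.17),
}

for name, (current, with_rfid) in eps.items():
    accretion = (with_rfid - current) / current
    print(f"{name}: {accretion:.0%}")
# BJ's: 27%, Wal-Mart: 22%, Costco: 15%
```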
RFID would likely have a relatively large positive impact on BJ’s for three reasons:
¤ Faster Implementation. As Figure 25 illustrates, BJ’s has only three distribution
centers, all of which are “cross docks.” (Cross docks are distribution centers in
which goods are, in effect, loaded directly from the supplier’s truck onto the
retailer’s truck.) Costco has 26 distribution centers
(all of which are cross docks), while Wal-Mart has 110 distribution centers
(some of which are cross docks). Clearly, it would be relatively easy for BJ’s to
upgrade its three cross docks for RFID. In fact, the company recently
announced that it is in the process of building a new cross dock that will be
RFID capable in order to replace one of its older facilities.
¤ A Large Number of SKUs. Figure 25 illustrates that BJ’s has almost twice as
many stock keeping units (SKUs) as Costco. This relatively large number of
SKUs has been additive to BJ’s labor and distribution costs, given that BJ’s
workforce has had to keep track of a relatively large number of products. But,
as we already mentioned, since Wal-Mart has insisted that most large suppliers
to the industry tag all cases and pallets, keeping track of these SKUs would
become much less expensive for BJ’s if it used RFID. (Recall that in the
warehouse club environment, products are organized by pallet.)
¤ A Culture of Technological Innovation. BJ’s has a reputation for being
technologically savvy; so, for example, through its “Member Insights” program,
BJ’s mines the data it accumulates about its customers in order to identify key
buying patterns. BJ’s CFO Frank Forward pointed out to us that the company
invests heavily in hardware, software, and staff in order to build an
infrastructure that integrates member purchase behavior with industry sales data,
as well as demographic and economic data.
Cisco and Voice over Internet Protocol (VoIP)
VoIP involves sending voice communications as digital packets over data networks,
such as the Internet, alongside e-mail and Web traffic. In contrast to traditional
phone calls, which require a dedicated circuit to connect the callers at each end,
VoIP service relies on software. That software converts analog voice signals into
digital data packets that are transported over a data network and then converts them
back to analog so they can be understood at the other end. (Note that there is an
interesting parallel between the transistor, which replaced inefficient vacuum tubes
in transcontinental phone service in the 1950s, and VoIP, which is replacing
inefficient dedicated telecom circuits today.)
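Conceptually, the conversion performed by VoIP software can be sketched as follows: sample the analog signal, quantize it, and group the samples into packets. The 8 kHz sampling rate and 20-millisecond packet size used here are common telephony parameters assumed for illustration; they are not specified in the text.

```python
# Toy illustration of what VoIP software does: sample an "analog" signal,
# quantize it, and group the samples into packets. The 8 kHz rate and
# 20 ms packet size are common telephony choices assumed for illustration.
import math

SAMPLE_RATE = 8000   # samples per second
PACKET_MS = 20       # milliseconds of audio per packet
SAMPLES_PER_PACKET = SAMPLE_RATE * PACKET_MS // 1000  # 160

def digitize(duration_s, freq_hz=440.0):
    """Sample a sine 'voice' signal and quantize each sample to 8 bits."""
    n = int(duration_s * SAMPLE_RATE)
    return [int(127 * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)) + 128
            for t in range(n)]

def packetize(samples):
    """Split the sample stream into packet-sized payloads."""
    return [samples[i:i + SAMPLES_PER_PACKET]
            for i in range(0, len(samples), SAMPLES_PER_PACKET)]

packets = packetize(digitize(1.0))  # one second of audio
print(len(packets), "packets of", SAMPLES_PER_PACKET, "samples each")
# 50 packets of 160 samples each
```

At the receiving end the process runs in reverse: packets are reordered if necessary, and the samples are converted back to an analog signal.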
In terms of the technology, IP was designed primarily for data transmission, where
delays and occasional loss of data (e.g., in e-mail) are not critical. By contrast, voice
communication is both real time and “mission critical”: Any delay can lead to a
garbled phone call. In addition, in the early stages of VoIP, “network jitter” meant
that packets did not arrive in sequence, also leading to poor quality of service. In
recent years, however, technological advances have addressed these issues, making
VoIP a viable alternative.
Today, many enterprises are switching to VoIP (see Figure 26) because of, among
other factors, its cost advantages. (Note that, in this analysis, we are just discussing
VoIP for enterprises.)
Figure 26. IP Telephony Implementation Schematic
Source: Gartner Consulting: “2005: The Year of IP Telephony — Get Your Hands Dirty,” Jeff Snyder, Gartner Symposium, October 2004.
Figure 27. The Evolution of Enterprise Communications
The Early Switchboard
The Button Switchboard
VoIP
Source: The Jimmie Dodd Photograph Collection (JD2287a/12), The Center for American History, The University of Texas at Austin, and Smith
Barney
Consider that, without VoIP, some companies need to maintain four separate
communications infrastructures:
¤ the data network,
¤ a private branch exchange (PBX) for external phone calls,
¤ an “automatic call distributor” (ACD) to route calls internally, and
¤ a voice-mail system.
By switching to VoIP in its purest form (as opposed to, for example, a “hybrid”
version, as discussed below), such companies would, in theory, need only the data
network, likely resulting in a savings in administration and maintenance. Many
VoIP systems are administered via a Web-based browser interface that enables
managers to make changes to an employee’s phone settings remotely. (Contrast that
with the “old” way of a telecom engineer coming to the physical location in order to
make the moves, adds, and changes.)
On top of cost savings, VoIP has many productivity-enhancing advantages. For
example, employees can “log on” to their phone from anywhere in the world, be it in
the new office across the hall from their old cubicle or their temporary desk at the
company’s European headquarters. (This is known as “presence-based services.”)
Moreover, an enterprise with multiple branch offices can have all locations served
by a single IP PBX, thus enabling extension dialing between far-flung locations.
Another attractive feature of VoIP is “unified messaging,” which integrates voice
with other software programs, such as e-mail, instant messaging, and calendar and
collaboration applications. Consequently, voice mail, e-mail and all other messages
go into one inbox. Users can save their voice mails, reply to them with text, video,
or voice, and attach spreadsheets and presentations to their voice mail if appropriate.
Moreover, numbers (e.g., 212-816-3532) are no longer necessary, because VoIP can
operate with names (e.g., Edward Kerschner). Initiating a call, whether to just one
person or to many via a conference call, requires only a single click on a name or an
icon. In addition, “IP conferencing” facilitates videoconferencing over IP, as well as
sharing documents over the same line.
The Drivers of VoIP Growth
Gartner Consulting observes that the useful life cycle of an enterprise telephony
system is largely determined by its size:
End-user research shows smaller systems have shorter life cycles than larger
systems. Large system life cycles can be three to five times longer than their
small system counterparts. It should be noted, however, that system life cycles
can and do vary. For example, there are certainly cases of traditional PBX
systems and lines having useful life cycles of nearly 20 years. The following
outlines average life cycles of systems by the size of the system:
¤ On average, systems and lines in the 1- to 24-line size segment have useful
operational lives between three and five years.
¤ Systems and lines in the 25- to 100-line size segment have average life
cycles of four to six years.
¤ Systems and lines in the 101- to 400-line size segment have average life
cycles of six to nine years.
¤ Systems and lines in the 401-line or more size segment have average life
cycles of seven to 11 years.27
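The Gartner life-cycle figures quoted above amount to a simple lookup by system size:

```python
# The Gartner useful-life figures quoted above, as a simple lookup.

def expected_life_years(lines):
    """Return the (low, high) useful-life range, in years, for an
    enterprise telephony system with the given number of lines."""
    if lines <= 24:
        return (3, 5)
    if lines <= 100:
        return (4, 6)
    if lines <= 400:
        return (6, 9)
    return (7, 11)

print(expected_life_years(50))    # (4, 6)
print(expected_life_years(1200))  # (7, 11)
```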
By way of a brief history of the enterprise telecommunications equipment market, in
the 1980s there was a major PBX upgrade cycle as corporations shifted from analog
to digital. As a technology, analog is the process of taking an audio or video signal
(in most cases, the human voice) and translating it into electronic pulses. Digital, on
the other hand, breaks the signal into a binary format where the audio or video data
are represented by a series of 1’s and 0’s.
Digital PBX technology offered enterprises better voice clarity (although some say
analog has a richer sound quality) since digital signals can be regenerated if they
degrade over long-distance transport. Digital also enabled increased capacity
utilization since many more digital signals can fit into the equivalent bandwidth
when compared to analog.
The next upgrade cycle, in the 1990s, centered around voice messaging and
advanced digital features like call forwarding and conferencing. With the average
expected life of most enterprise PBX equipment at around ten years, the subsequent
upgrade cycle should have been in 2000. However, given the rampant Y2K fears at
the time, that upgrade was postponed as enterprises focused their attention and
spending on IT networks. Then, the recession of 2001–02 squeezed corporate
budgets, and the communications equipment upgrade cycle was postponed once
again.
The end result is that, for the majority of enterprises, the telecommunications
equipment in place today is much older than it should be. At the same time,
equipment suppliers are giving enterprises some good incentives to upgrade their
systems. Given that it is not in their interest to extend the life of this equipment,
companies such as Avaya and Siemens are, in effect, making their PBX equipment
obsolete. Quite simply, they are raising the cost of sustaining old equipment (e.g.,
by increasing the cost of replacement components) and they will steadily ratchet up
these maintenance costs over time.
Figure 28 illustrates Gartner Consulting’s forecasts that, in the North American
premises switching equipment market, the market share (in revenue dollars) of
traditional PBX equipment will decline from 39% in 2004 to 8% in 2008, the market
share of IP-enabled equipment will rise from 42% to 55%, and the market share of
pure IP equipment will increase from 18% to 38%, representing a compound annual
growth rate of 23% for pure IP equipment. (Note that, typically, an IP-enabled
product started life as a traditional PBX, but it has been further developed to include
IP capabilities.)

27 “Traditional PBX/KTS Installed Base in North America Declines as Users Migrate to IP,” by Megan Marek Fernandez; January 5,
2005.

Similarly, Rick McConnell, general manager of Rich Media
Communications at Cisco, believes 2005 will be the year when line shipments of
pure IP technology surpass traditional PBX line shipments for the first time.
Figure 28. Market Share by Technology
total end user revenues: North American premises switching equipment extension line shipments
[Line chart, 0%–60% scale, 2004–2008, showing the share of traditional PBX line equipment declining while the shares of IP-enabled and pure IP equipment rise.]
Source: Gartner Consulting, “Traditional Forecast: Premises Switching Equipment, Worldwide, 1999–2008” by Megan Marek Fernandez,
Christopher Lock, and Isabel Montero; December 10, 2004
Gartner Consulting28 and Smith Barney have identified the following companies as
being among the leading vendors of VoIP equipment:
¤ Avaya,
¤ Cisco Systems,
¤ Nortel Networks, and
¤ Siemens.
Cisco, in particular, seems especially well positioned to benefit from the VoIP
opportunity. In 2004, Cisco, which has about a one-third market share, had VoIP
revenues of $1.2 billion (on a total revenue base of $22 billion), up from $300
million in 2003. But the benefits of VoIP to Cisco are much more significant than
these numbers suggest, given that Cisco estimates that, for every dollar spent on
VoIP, $3–$5 are spent on the required simultaneous upgrade of the security and
network switching equipment. Given its dominant (80%) share of the networking
equipment market (in contrast to its one-third share of the VoIP market), Cisco is
particularly well positioned to grab the lion’s share of those revenues.
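Applying Cisco's $1-to-$3–$5 rule of thumb to its 2004 VoIP revenues gives a rough sense of the associated upgrade opportunity:

```python
# The pull-through implied by Cisco's rule of thumb: every VoIP dollar
# drives $3-$5 of spending on security and network switching upgrades.

voip_revenue = 1.2e9   # Cisco's 2004 VoIP revenue
low, high = 3, 5       # attach-rate range per VoIP dollar

attach_low, attach_high = voip_revenue * low, voip_revenue * high
print(f"implied upgrade spend: ${attach_low/1e9:.1f}B-${attach_high/1e9:.1f}B")
# implied upgrade spend: $3.6B-$6.0B
```

In other words, Cisco's own VoIP sales imply a $3.6 billion to $6 billion pool of adjacent upgrade spending, of which Cisco's 80% networking share positions it to capture the bulk.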
Upgrading Security Solutions
Most VoIP solutions are deployed on servers that run either proprietary operating
systems or commercially available systems such as those offered by Microsoft.
These servers are as susceptible to hacking or denial-of-service attacks as any
other servers deployed in an enterprise IT department. If hackers were to break into
a company’s voice system, sensitive data would be at risk. Other security concerns
include the “hijacking” of a voice gateway to place unlimited unauthorized free calls
across the globe, as well as eavesdropping on and tampering with voice systems, user
identities, and telephone configurations.

28 “Magic Quadrant for Corporate Telephony in North America, 2004,” by Jay Lassman, Richard Costello, and Jeff Snyder; August
16, 2004.
In summary, IP phone systems face all of the same security threats as traditional
systems, as well as new security issues due to the inherently open nature of IP,
which was designed to enable greater flexibility and productivity compared to
proprietary circuit-based systems.
In this regard, in addition to selling the IP telephony infrastructure, phones,
management software, and applications, Cisco also provides security solutions for
protecting all types of IP traffic. Cisco’s security solutions are based on its Self-Defending Network Initiative. This is an integrated systems-level approach
designed to protect the network infrastructure, call processing systems, endpoints
(phones, video terminals, and other devices), and applications. Cisco’s rapidly
growing security business recently passed the $1 billion per year revenue mark and,
driven by the company’s intense focus on security issues and the burgeoning VoIP
upgrade cycle, Smith Barney expects at least 25% annual growth in security
revenues at Cisco over the next several years.
Upgrading the Basic Switching Infrastructure
The second step in the network upgrade/VoIP implementation process is upgrading
the Ethernet switches in the wiring closet and data centers with 1) Layer 3 and
quality-of-service (QoS) functionality, 2) additional capacity, and 3) power-over-Ethernet (PoE) capabilities.
¤ As part of this process, most Layer 2 switches, which are very common in
today’s outdated networks, need to be upgraded to Layer 3 switches, which have
integrated IP route processing and QoS capabilities. Layer 2 switches are fairly
simple devices that make binary forwarding decisions: data packets enter the switch
and, depending on the header markings, are switched to one port or another
regardless of the current network
load conditions. Layer 3 switches, on the other hand, include route-processing
capabilities. Somewhat like routers, these switches have some ability to monitor
network conditions and make moment-by-moment decisions to send packets
over the most efficient path depending on the current network load and traffic
patterns. This Layer 3 capability, combined with specific QoS protocols that
have been developed, can greatly improve network performance and help ensure
that sufficient bandwidth is reserved for real-time, latency-sensitive applications
such as voice and video.
¤ Given that 1) more traffic will be added to the data network with VoIP and 2)
the price points on Gigabit and 10 Gigabit Ethernet have fallen significantly,
enterprises converting to VoIP tend to upgrade their 10/100 Megabit “Fast
Ethernet” switches with Gigabit to the desktop (in the wiring closet) and
potentially with 10 Gigabit switches in the datacenter.
¤ Finally, the newly installed switches are increasingly power-over-Ethernet (PoE)
enabled so that the IP phones can draw power over the data network lines.
Traditional telephone systems are powered over the lines that are drawn from
the local phone company’s central office. This is not the case for VoIP phone
systems, which require a power source within the enterprise. The desire for an
electric-utility-independent power source for the phones adds additional
incentive to upgrade or replace most of the switch ports in the wiring closet.
PoE is a fairly new technology, based on a standard (IEEE 802.3af) completed in
mid-2003. Smith Barney estimates that only about 6% of all Ethernet switches
shipped in 2004 were PoE enabled, but rapidly falling price premiums and the
increasing penetration of VoIP in the enterprise promise to rapidly grow this
penetration.
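The forwarding distinction drawn in the first bullet above, header-only switching versus load-aware route selection, can be illustrated with a toy model. The hosts, port names, and load figures below are hypothetical, not vendor code:

```python
# Toy model: a Layer 2 switch forwards purely on the frame's destination
# (a static lookup), while a Layer 3 switch also weighs current link load.

mac_table = {"phone-42": "port1", "server-db": "port2"}  # hypothetical hosts

def layer2_forward(dest: str) -> str:
    """Forward on the header alone, ignoring network conditions."""
    return mac_table[dest]

routes = {  # hypothetical equal-cost paths with live utilization (0-1)
    "server-db": [("port2", 0.9), ("port7", 0.3)],
}

def layer3_forward(dest: str) -> str:
    """Pick the least-loaded path to the destination, router-style."""
    return min(routes[dest], key=lambda path: path[1])[0]

print(layer2_forward("server-db"))  # always port2, even when congested
print(layer3_forward("server-db"))  # port7, the lightly loaded path
```

The QoS protocols mentioned above build on this load awareness by reserving capacity for latency-sensitive traffic such as voice.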
By the time the changes outlined above are completed, the network upgrade needed
to fully implement VoIP becomes fairly significant, and this is where Cisco sees the
greatest benefit. Enterprise Ethernet switching is Cisco’s largest revenue segment,
accounting for over 40% of total sales in 2004, or about $10 billion. Remember that
for each $1 of VoIP gear Cisco sells, it typically sees approximately $3 in additional
network switch sales.
Smith Barney analysts expect Cisco’s VoIP sales to grow 50% in 2005, to $1.8
billion, which implies $600 million of incremental year-over-year VoIP sales. The
derived demand for network switching should be about three times this number, at
$1.8 billion, yielding an 18% annual growth rate in switching from VoIP-related
network upgrades alone. It is this line of analysis that leads Smith Barney analysts
to believe Cisco will likely grow faster than the current consensus view over the
next several years.
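As a sanity check, the derived-demand arithmetic above can be reproduced step by step, using only the figures cited in the text:

```python
# Reproduce the derived-demand arithmetic for Cisco's switching business.
voip_2005 = 1.8e9                      # projected 2005 VoIP sales
voip_2004 = voip_2005 / 1.50           # implied 2004 base: $1.2 billion
incremental_voip = voip_2005 - voip_2004             # $600 million
derived_switching = incremental_voip * 3             # ~$3 of switching per $1 of VoIP
switching_base = 10e9                  # 2004 enterprise Ethernet switching sales
growth_from_voip = derived_switching / switching_base

print(f"Incremental VoIP sales: ${incremental_voip/1e6:.0f}M")
print(f"Derived switching demand: ${derived_switching/1e9:.1f}B")
print(f"Switching growth from VoIP upgrades alone: {growth_from_voip:.0%}")
```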
The Case for Upgrading
Some have asked whether the complexity of upgrading the security and networking
infrastructure will impede growth in the VoIP market, but recent VoIP equipment
growth rates suggest otherwise.
Cisco argues that the networking upgrade is justified by the capital and operational
cost savings of VoIP, which Smith Barney telecom equipment analyst B. Alex
Henderson has estimated can run as high as 43% when compared to the cost of a
traditional PBX system.
In that regard, Hank Lambert, Cisco’s Director of Product Marketing for IP
Communications, told us that one of Cisco’s largest VoIP customers had meticulously
documented its ongoing hard-dollar operating cost savings (over its previous legacy
system), and found that it was saving 75% on station moves (moves, adds, and
changes), 50% on PBX service contracts, 42% on long distance tolls, 38% on local
trunking charges, and 93% on conference calling.
Mr. Lambert told us about a modeling tool that Cisco has developed called the Cisco
Network Investment Calculation Tool. The company has run at least 5,500 separate
customer cases through this tool and found that customers have a return on
investment period on their VoIP investments ranging from six months to three years,
with the average payback period in the 12- to 14-month range.
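Cisco’s calculation tool is proprietary, but the underlying payback arithmetic is straightforward. The sketch below uses hypothetical dollar amounts, chosen to be consistent with the roughly 43% savings figure cited above:

```python
def payback_months(upfront_cost: float, annual_savings: float) -> float:
    """Months until cumulative operating savings cover the upfront spend."""
    return upfront_cost / annual_savings * 12

# Hypothetical example: a $500K VoIP rollout saving $430K per year in
# telecom and administration costs (roughly 43% of a $1M legacy cost base).
months = payback_months(500_000, 430_000)
print(f"Payback: {months:.1f} months")  # ~14 months, within Cisco's reported range
```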
With regard to how the scope of these savings has evolved, Rick McConnell told us
how Cisco’s enterprise VoIP sales approach has developed over the last five years:
¤ Five years ago, VoIP was primarily about toll bypass, or lowering telecom
service bills for the customer.
¤ The next stage was network simplification, or taking hundreds of PBXs at
hundreds of branch locations and replacing them with a few clusters of call
manager servers for simplified voice system management.
¤ Cisco believes it is now at the stage where it is selling hard-dollar cost savings
and, increasingly, enterprise-wide productivity improvements. The productivity
benefits primarily come from new applications that ride on top of the
networking/voice infrastructure, such as video conferencing, unified messaging
and communications, and virtual call centers.
It is important to point out, however, that enterprises have the option of not
switching completely to VoIP. Alternatively, they may go to hybrid systems that
combine some of the features of VoIP with legacy PBX systems. These hybrid
PBXs combine features of both traditional and IP telephony and can run either
solution depending on the software pack installed. (According to Cisco’s Rick
McConnell, whereas 18–24 months ago enterprise customers were evaluating VoIP
versus traditional PBX technology, today the decision is virtually always between a
“pure” IP solution and a hybrid system.)
Of the four leading vendors of IP telephony, Cisco is the only vendor without an
installed base of legacy PBX customers. For this reason, the initial deployment of
Cisco equipment is always a “greenfield” build, and any existing equipment is often
scrapped. Avaya, Nortel, and Siemens, on the other hand, have significant installed
bases of traditional telephony customers, and each derives the majority of its current
enterprise VoIP revenues from evolving its installed base of customers to hybrid
VoIP solutions. Often they are able to leverage some of the installed gear in the
process.
There are advantages to a hybrid/gradual approach. For instance, less up-front capital spending is required. In addition, it is likely that many enterprises with
large concentrations of employees in a particular location will adopt a hybrid model
because it is less disruptive to the work environment and also allows for any
problems to be identified early on in the transition.
The enterprises that initially adopt a hybrid approach will, however, eventually need
to migrate to pure VoIP given that 1) the old legacy systems will ultimately need to
be replaced and 2) pure VoIP is ultimately less expensive, particularly given that it
eliminates the need for telecom service contracts. While the management of PBX
systems is typically outsourced by enterprises, the management of VoIP systems is,
by definition, an integral part of the management of an enterprise’s computer
network.
In that regard, Cisco’s Rick McConnell told us that he had recently spoken with a
customer that started down the path of evolving its voice network through a
hybrid approach but, after getting part of the way into the project, realized the
hard-dollar operating cost savings from a pure IP approach were so significant that it
decided to make a clean cut over to pure VoIP.
Medtronic’s Real-Time Patient Monitoring
In this section, we discuss how Medtronic is taking advantage of the latest
developments in networking technology to develop real-time patient monitoring
devices. We start by discussing how the company is developing a wireless system
for real-time monitoring of congestive heart failure. We then analyze Medtronic’s
development of a real-time monitor of blood sugar levels, a giant step toward the
development of an external artificial pancreas.
Congestive Heart Failure:
Attacking the Problem Through Better Diagnostics
Today, pacemakers and implantable cardioverter defibrillators (ICDs) are two of the
largest and best-known markets in the medical device field, with combined sales
exceeding $8.5 billion in 2004. While these devices were originally designed to
treat slow heartbeats (pacemakers) and rapid heartbeats (ICDs), over the past five
years congestive heart failure has become an increasingly serious condition, in part
because of the nation’s obesity epidemic.
Congestive heart failure (CHF) is not an electrical disorder of the heart but, rather, a
muscular problem whereby the heart enlarges and loses pumping function. As a
result, fluid starts to back up in the lungs, causing a shortness of breath and/or a
drowning sensation. It is estimated that over five million people in the U.S. have
some form of heart failure, 500,000 people develop heart failure each year, and
another 250,000 die annually from this disease. Treating episodes of acute heart
failure is the largest treatment cost for hospitals in the U.S., at roughly $40 billion
annually.
Today, all of the major cardiac rhythm management (CRM) companies (Medtronic,
Guidant, and St. Jude Medical) have pacemakers and ICDs that treat a small subset
of CHF patients — those whose ventricles do not beat in sync, causing a reduction
in pumping function. Devices for the treatment of this condition — called CRT-P
(pacemakers) and CRT-D (ICDs) — include an additional lead that snakes around
the left ventricle and uses electrical signals to keep the ventricles beating in sync.
But while the data from these devices have shown a reduction in hospitalizations and
mortality, they do not work for the majority of the CHF population and, more
importantly, do nothing to sense a worsening condition of CHF that can lead to
hospitalization and/or death.
In that regard, the state of monitoring a patient’s CHF condition has been woefully
simplistic for years. The current method of evaluating a heart failure patient’s fluid
level (outside of a physician’s office or hospital) is by measuring fluctuations in
body weight via a standard scale. An increase in weight suggests to a physician that
fluid is backing up, which, in turn, suggests that the risk of a trip to the hospital is
increasing.
While patients are supposed to weigh themselves daily, compliance has been a
long-standing issue, not just in terms of taking the daily measurement, but also in
calling the physician when weight fluctuations are seen. What makes this an even
less-exact science is that the range of weight change before an adverse episode can
be very slight, in some cases just a couple of pounds of added weight. Not
surprisingly, this system has done little to slow the rapid increase in hospitalization
costs due to CHF.
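The scale-based monitoring described above reduces to a simple threshold rule, which helps show why compliance and such a narrow margin of change make it fragile. In the sketch below the readings are hypothetical; the two-pound threshold comes from the text:

```python
def weight_alert(readings, threshold_lbs=2.0):
    """Flag days where weight has risen above baseline by the threshold,
    a crude proxy for fluid retention in a CHF patient."""
    baseline = readings[0]
    return [day for day, w in enumerate(readings)
            if w - baseline >= threshold_lbs]

daily_weights = [182.0, 182.5, 183.1, 184.2, 185.0]  # hypothetical week
print(weight_alert(daily_weights))  # days 3 and 4 warrant a call to the physician
```

A missed weigh-in or an uncalled physician on those flagged days is precisely the compliance failure the text describes.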
Over the past ten years, CRM market leader Medtronic has undertaken a large R&D
effort to address the diagnostic side of the CHF patient market, and it appears the
company is finally closing in on making these efforts pay off, both in terms of share
gains in the existing CRM market for devices and in market expansion.
The Recent U.S. Release of the Sentry ICD Launched the Effort…
Late in 2004, Medtronic began its foray into the CHF diagnostics market with the
U.S. launch of the Sentry ICD. The Sentry is a full-featured CRT-D ICD that also
includes an algorithm dubbed OptiVol, which measures impedance, or “wetness,” in
the lungs.
While not a direct measurement, OptiVol has been shown to have a sensitivity rate
of 73% for detecting the onset of an acute CHF episode at least one week before
hospitalization occurs. The patient is notified of a worsening trend either by an alert
(i.e., device beep) or by a physician, who has access to the alert remotely through the
company’s Carelink network, which is described in more detail below. Currently,
neither the patient alert nor the Carelink connectivity features of the Sentry ICD
have been approved by the FDA, but they are expected to be approved by year-end.
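For readers unfamiliar with the term, sensitivity is the share of true events a test detects. A quick illustration, with hypothetical study counts chosen only to match the 73% figure cited above:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Share of actual events the algorithm detected in advance."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts: 73 acute-CHF episodes flagged at least a week
# ahead of hospitalization, 27 missed, out of 100 total episodes.
print(f"{sensitivity(73, 27):.0%}")
```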
…and It Should Be Followed by the Next-Generation Chronicle
While OptiVol represents the first breakthrough in fluid monitoring, Medtronic’s
follow-on product, the Chronicle, raises the bar significantly. The initial version of
the Chronicle is not an ICD or pacemaker, but a pacemaker-sized device that
includes a unique lead with a blood-pressure transducer on the tip. The tip of this
lead is placed near the base of the pulmonary valve and tracks changes in valve
pressure, with higher pressure suggesting that fluid is backing up.
Unlike OptiVol, which is more geared to alerting the patient/physician before a
possible upcoming event, Chronicle offers real-time monitoring. Hence, in the
hands of more aggressive CHF specialists, Chronicle allows the physicians to track
the patient and their responses to the various CHF drugs they are typically using,
thereby allowing the physicians to optimize the dosing. The pre-market approval
(PMA) application for Chronicle has already been submitted to the FDA, and
Marshall Stanton, Medtronic’s cardiac rhythm management medical director, expects
approval in fiscal 2006 (ending April).
Since Chronicle is a diagnostic-only device, Medtronic plans to follow this product
up with the Chronicle ICD in fiscal 2007. Chronicle ICD will include all of the
features of the company’s CRT-D devices and incorporate the Chronicle
technology as well. Smith Barney analyst Matthew Dodds believes Medtronic will
begin a small clinical trial in the next couple of months in order to garner FDA
approval.
The Carelink Network Will Provide the Data Access
While these devices will offer the patient-alert feature, the real benefit will lie in
allowing the physician access to changes in fluid levels on a real-time basis.
Medtronic will accomplish this via its CareLink network (see Figure 29). CareLink
was initially introduced in 2002 to help electrophysiologists track implanted
pacemakers and ICDs for the monitoring of arrhythmias (i.e., electrical disorders).
With Carelink, a patient with a networked implanted device has a small monitor at
home that uses a wand (held on top of the device for a few minutes) to upload data
to the network. Eventually, Medtronic’s home monitor will do away with the wand
approach and will use a wireless connection, in a fashion similar to going from a
wired Internet connection to “wi-fi.”
Figure 29. The Evolution of Medtronic’s CHF Monitoring Technology
Source: Medtronic
Medtronic’s Marshall Stanton points out that the company has already invested more
than $100 million into the Carelink network, and the number of physician offices
and patients that are part of the system has been growing rapidly (see Figure 30).
With fluid monitoring, Medtronic will offer access to Carelink via a secured website
to CHF specialists and cardiologists who treat CHF patients. Medtronic also plans
on expanding its 2,500-plus CHF sales force in the U.S. in order to put more focus
on these physicians.
Figure 30. Medtronic Carelink Adoption
Patients Enrolled (left-hand scale, 0–25,000) and Active Clinics (right-hand scale, 0–400), by quarter from FY03 through FY05 year-to-date (Q3)
Source: Medtronic
The Market Opportunity Through Expansion and Share Gains Looks Substantial
Medtronic appears well in front of the competition in terms of fluid management,
which Matthew Dodds believes is the next “game changing” technology in the CRM
space. While both St. Jude and Guidant have early network systems, both are very
simplistic compared to Carelink, and only the St. Jude system (called Housecall) has
been rolled out. In addition, neither company has unveiled any device that will
include fluid management, and Matthew Dodds knows of no clinical trial under way
at either company in this area. Medtronic has already completed two studies,
including the phase III Compass HF pivotal study, which was unveiled at the
American College of Cardiology meeting on March 8.
So, how important is fluid management? For ICDs, Medtronic’s Marshall Stanton
estimates that 70% of all patients implanted with ICDs today have some level of
ventricular dysfunction related to heart failure, and the percentage of heart failure
patients that account for ICD implants continues to rise. In Matthew Dodds’
opinion, Medtronic’s estimates seem realistic, as roughly 35%–40% of ICD implants
today are CRT-D devices, and they are specifically for use for CHF related to
desynchronized ventricles.
With the U.S. ICD market representing a $3.6 billion run rate today, and expected to
cross $6 billion in 2008, the opportunity for share gains is significant for Medtronic.
Today, Medtronic controls roughly 48%–49% of the U.S. ICD market, and each
additional share point equates to $40–$45 million, or 50–70 basis points of sales
growth. In 2007, when the Chronicle ICD is expected to be out, a 10-percentage-point
gain in market share would translate into an extra $512 million in sales, or an
extra 500 basis points to Smith Barney’s 12% sales growth estimate for that year.
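The share-gain arithmetic can be checked directly. The 2007 market size below is not stated in the text; it is backed out from the $512 million figure and is therefore an inference, not a reported number:

```python
# Back out the implied figures from the share-gain math above.
share_gain = 0.10                        # 10-percentage-point share gain
extra_sales = 512e6                      # extra sales cited: $512 million
market_2007 = extra_sales / share_gain   # implied 2007 U.S. ICD market

# $512M adding ~500bp of growth implies a total sales base near $10B.
implied_sales_base = extra_sales / 0.05

print(f"Implied 2007 ICD market: ${market_2007/1e9:.2f}B")
print(f"Implied Medtronic sales base: ${implied_sales_base/1e9:.2f}B")
```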
While Chronicle implants will be a tougher initial sale, given that it may be difficult
to convince some patients to get an implant just for monitoring, physician comfort
with the benefits gained via Chronicle ICD implants could lead to market expansion. As
we noted above, CRT products are used when the heart’s pumping function is
compromised. This is known as systolic heart failure.
There are, however, an equally large number of patients who have diastolic heart
failure, which occurs when the heart becomes “still” and does not fill properly.
While this disorder does not lend itself to CRT treatment, some patients would fall
under either ICD implantation (under the new indications from clinical trials — i.e.,
MADIT II and SCD-HeFT) or pacemakers. These patients could also represent
share gain opportunities for Medtronic. The remaining patients — those with
diastolic heart failure that do not qualify for an ICD or pacemaker — would
represent a new patient population for the Chronicle. Smith Barney estimates that
30% of all CHF patients, or 1.65 million Americans, would fall into this category.
The Artificial Pancreas:
Merging Insulin Pumps with a Continuous Glucose Sensor
The other disease that Medtronic is doing pioneering work on is diabetes, a chronic
condition whereby the body cannot regulate the blood’s glucose levels. There is no
known cure for diabetes, which afflicts an estimated 150 million people worldwide.
In the U.S., the American Diabetes Association (ADA) estimated that 18.2 million
people were afflicted with diabetes in 2002, and roughly 1.3 million new cases are
being diagnosed each year. The ADA estimates that expenditures related to diabetes
in 2002 were $132 billion, including $92 billion in direct costs and $40 billion in
indirect costs. Most of the direct costs associated with diabetes involve
complications arising over time, including eye, kidney, and nerve damage.
There are two classifications of diabetes — Type 1 and Type 2. In the U.S., Type 1
diabetes accounts for 5%–10% of all diagnosed cases, and Type 2 accounts for
90%–95% of cases. Type 1 diabetes typically shows up in childhood or adolescence
and is defined as a situation in which the body does not produce enough insulin.
Insulin is necessary for the body’s cells to make use of sugar, their basic fuel. In
Type 2 diabetes, either the body does not produce enough
insulin or the body has developed a resistance to insulin. While genetics play into
the risk of developing Type 2 diabetes, obesity is also a strong risk factor.
Diabetes is controlled by testing blood glucose levels and injecting insulin. We
outlined above how Genentech pioneered the development of human insulin by way
of genetic engineering. In the U.S., the majority of Type 1 diabetics are insulin
dependent (i.e., requiring insulin injections) and about 3 million Type 2 diabetics are
also considered insulin dependent. The “gold standard” for testing blood glucose
levels — blood glucose meters — has been around for years.
With a blood glucose meter, the patient pricks his or her finger and puts a drop of
blood on a test strip, which is read by a meter. Ideally, blood glucose levels should
be checked by insulin-dependent diabetics four times per day, although the
estimated “real world” rate is close to two times. Several companies, including
Johnson & Johnson, Roche, and Abbott Labs are major players in the blood glucose
meter/strip market.
For insulin delivery, one or two daily syringe injections have been the gold
standard for years. Over the past 15 years, a new method of insulin delivery has
arisen — insulin pumps, which now represent a $700 million worldwide market.
These are small pager-sized devices that use a small needle and catheter to deliver
insulin at a more steady-state rate, which has been proven in many studies to
significantly reduce morbidity and long-term complications associated with diabetes.
Medtronic’s MiniMed Is Already Dominant in Insulin Pumps…
In 2001, Medtronic entered the diabetes market with the acquisition of insulin pump
pioneer MiniMed. Today, MiniMed dominates the U.S. market for insulin pumps
with a roughly 75%–80% share of all pump placements. But with only 270,000
pumps placed in total in the U.S., insulin pumps are only being used by roughly 14%
of all Type 1 diabetics and very few Type 2 diabetics (although the number of users
has been growing by a healthy 15%–20% over the past three years).
Figure 31. The Evolution of Diabetes Management
The Human Pancreas
Blood glucose test strip and insulin injection
The External Artificial Pancreas
Source: Smith Barney and Medtronic
…and Is Leading the Charge in Continuous Glucose Monitoring
While Medtronic already enjoys significant advantages in brand awareness,
reputation, and sales force size, the company appears to be closing in on a major
technological advantage to distance itself further from the competition. For years
MiniMed has been working on a continuous glucose monitoring system (CGMS) as
an alternative to blood glucose meters. Unlike blood glucose meters, the CGMS
uses a tiny electrode and electrochemical technology to continuously read glucose
using fluid in the subcutaneous tissue (i.e., the area of fat between the skin and
muscles). As we describe in more detail below, within the next 12 months, we
expect Medtronic to marry its insulin pump and CGMS to create the first “artificial
pancreas.”
In 1999, MiniMed received U.S. approval for a diagnostic version of the CGMS for
“three-day monitoring” (whereby the sensor, a disposable soft-needle, is inserted
into the abdomen and replaced after three days), called the CGMS System Gold. In
February 2004, Medtronic received FDA approval for its Guardian CGMS, which is
available for commercial use but offers only an alarm when glucose rises too high or
falls too low. The next generation of this technology is the Guardian RT
CGMS, which was submitted to the FDA several months ago and was just launched
in Europe and Canada. The Guardian RT will provide a real-time blood glucose
reading every five minutes rather than the alarm feature offered by the Guardian.
Brad Enegren, Medtronic vice president of R&D for diabetes, expects that the
Guardian RT will be approved during fiscal 2006 (ending April). Since this product
is a stand-alone sensor and is unlikely to garner private or government
reimbursement right away, it is not expected to be a meaningful revenue generator.
In fiscal 2007, Mr. Enegren expects that Medtronic will move one step further by
marrying the Guardian RT with the Paradigm 715 pump. In combining the products,
there will be only one “box” that combines the algorithms of the sensor and pump.
This first “sensor-augmented pump” system is slated to begin a 140-patient clinical
trial and 600-patient “outcome” trial around the end of April. Following that, Mr.
Enegren expects the next-generation system to shift from sensor augmented to “open
loop,” which will include algorithms that generate therapy suggestions to the patient
based on the data generated by the sensor.
More clinical data are likely to be required for this advance, so Mr. Enegren expects
approval in late fiscal 2006 or early fiscal 2007. The next step is the true artificial
pancreas, or “external closed loop,” as Medtronic will automate the sensor readings
with insulin delivery from the pump, taking the patient out of the loop. Since more
clinical data will likely be required, Matthew Dodds expects this approval to occur
in late fiscal 2008.
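The open-loop/closed-loop distinction here is a control-systems one: open loop suggests a dose for the patient to act on, while closed loop acts automatically. A deliberately simplified proportional-control sketch follows; it is purely illustrative, as actual dosing algorithms are proprietary and clinically validated, and the target and scaling constants below are hypothetical:

```python
def suggest_basal_adjustment(glucose_mg_dl: float, target: float = 110.0,
                             scale: float = 2.0) -> float:
    """Proportional-control sketch: a units/hour adjustment scaled to the
    deviation from target. Open loop shows this suggestion to the patient;
    closed loop would feed it directly to the pump."""
    error = glucose_mg_dl - target
    return round(error / 100 * scale, 2)

# Hypothetical five-minute sensor readings, as in the Guardian RT
for reading in (110.0, 160.0, 90.0):
    print(reading, "->", suggest_basal_adjustment(reading))
```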
While this certainly seems like several iterations down the road from where
Medtronic is today (see Figure 32), it is important to note that the technology is
really already in place, as the company has already been performing small,
controlled studies, testing the automated sensor-augmented pump. In addition to an
external system, Medtronic continues to develop an internal system using its
implantable pump, which is still in clinical trials.
Figure 32. The Evolution of the External Artificial Pancreas
*Not yet approved by the FDA or European Health Authorities
Source: Medtronic
Unlike the fluid management opportunity in CRM, Medtronic’s market share in
pumps is already at a level that suggests limited opportunity for further share gains.
Hence, the opportunity for the artificial pancreas centers on 1)
patient adoption of the pump technology, driven by the advantage of the sensor, and
2) additional revenues from the implantable sensor component of the system.
Competitively, Medtronic appears to have a major advantage over its competition in
the development of an artificial pancreas. Abbott Labs is one generation behind
with its Navigator CGMS (the company is looking to bypass the “alert” approval,
but this appears unlikely). Furthermore, Abbott does not own its own insulin pump
business, but has instead opted to partner with Smith/Deltec. In the opinion of
Matthew Dodds, not owning the pump component is likely to make it difficult to
control the iterative steps necessary to eventually move to the artificial pancreas.
Fringe pump player Animas is also working on a CGMS, but it is in the early stages
of development. Finally, it is also important to note that Medtronic should also be
able to take advantage of the Carelink network described above, as the company
plans on launching a portal for physicians who treat diabetics when the “sensor
augmented pump” is launched.
Tech Innovators:
Six Case Studies
Fuel Cells: Mechanical Technology and Plug Power
Some of the biggest complaints from users of personal digital assistants (PDAs), cell
phones, and laptop computers center around the life expectancy and energy storage
capacity of the lithium ion battery powering the equipment. Lithium ion batteries
are the predominant power source in the premium power pack industry, accounting
for about 90% of market share. However, as portable electronics (such as cell
phones) have become more complex (e.g., incorporating functions such as color
displays and embedded digital cameras) and power demands have increased, a
typical lithium ion battery today can support only about two to three hours of talk
time. As Figure 33 illustrates, improvements in laptop components, such as disks
and CPUs, have not been matched by improvements in battery energy density.
Figure 33. Improvements in Laptop Components
Log-scale index (1990 = 1), 1990–2005, charting disk capacity, CPU speed, available RAM, and battery energy density
Source: Thad Starner, assistant professor of computing at the Georgia Institute of Technology College of Computing
Recall that lithium ion replaced nickel cadmium as the power source for premium
electronic products in the early 1990s, reflecting lithium ion’s key advantages of
lighter weight and improved energy density (about two times better). Now a new
technology — direct methanol fuel cells (DMFC) — seems likely to replace lithium
ion batteries because of its similar advantages of lighter weight/greater portability
and improved energy density. (DMFC technology provides at least twice the
amount of power currently offered by lithium ion batteries).
Direct methanol fuel cells are micro fuel cells. A micro fuel cell is a portable power
source for lower-power electronic devices that converts chemical energy into
usable electrical energy. It generates power through the electrochemical reaction of
a fuel in the presence of a catalyst.
Techno Savvy – April 8, 2005
Figure 34. The Evolution of Portable Power Sources
[Images: “wet” cell battery → lithium ion battery → direct methanol fuel cell; micro fuel cell powering a PDA; micro fuel cell powering an entertainment system]
Source: Smith Barney and Mechanical Technology
So, like batteries, fuel cells generate electricity through a chemical reaction.
However, unlike batteries, fuel cells are recharged with a fuel source instead of
electricity. A number of companies are currently developing a micro fuel cell that
uses methanol, a simple alcohol (most commonly encountered in windshield washer
antifreeze), as its fuel. Those include large companies such as Motorola, Toshiba,
Casio, NEC, Sony, and Panasonic, as well as smaller companies such as
Mechanical Technology.
Methanol has the advantage of high energy density, thereby facilitating longer use
time, as well as products with increased functionality. Specifically, methanol micro
fuel cells offer three main advantages over lithium ion batteries:
¤ Weight. As noted, a typical lithium ion battery has approximately two to three
hours of peak time (or talk time for mobile phone users). In contrast, two to
three cubic centimeters (cc) of neat methanol, each cc equivalent to
approximately one drop, can provide the same amount of power. So, a few
drops of methanol can provide the same power as a lithium ion battery. It is
likely that a DMFC cartridge for a cell phone will carry upward of 40 ccs of
methanol, which translates into roughly 35–40 hours of peak operating capacity
(i.e., talk time).
¤ Portability. DMFCs are more portable and easier to recharge than existing
battery technologies. Specifically, a battery is merely an energy storage device.
Once the stored energy is used, the battery must be recharged or replaced. As a
result, the end user must carry a charger and/or replacement batteries.
Conversely, a micro fuel cell is an energy-producing device that operates on a
small cartridge of methanol. As explained above, a few drops of methanol can
produce the same amount of energy as an entire lithium ion battery.
Accordingly, it is easier to carry (or conveniently purchase) a few replacement
cartridges than to carry a few batteries and/or a charger.
¤ Power Density (i.e., the amount of energy produced by a given volume of
battery). The power produced by a fuel cell is approximately two to three times
that of an equivalent lithium ion power pack (see Figure 35). (Prismatic lithium
ion batteries, typically used in a cell phone, and cylindrical lithium ion batteries,
typically used in a laptop computer, have 200-plus Watt Hours per liter [Wh/l of
energy] and 400-plus Wh/l, respectively. In contrast, a fuel cell operating at
20% efficiency has approximately 750 Wh/l; taking the efficiency level up to
30% yields 1,100 Wh/l. We talk about methanol efficiency because traditional
micro fuel cell technology has required the methanol be diluted with water,
thereby lowering the power density.) Although the weight and portability
advantages are important, it is likely that energy density (i.e., run time) would be
the strongest selling point to a potential end user.
Figure 35. Comparative Performance of Lithium Ion Batteries vs. DMFCs
watt hours per liter: lithium prismatic, lithium cylindrical, DMFC 20% efficient, and DMFC 30% efficient
Source: Mechanical Technology
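The run-time multiples implied by the energy densities quoted above can be checked with simple arithmetic. The sketch below uses only the Wh/l figures from the text; everything else is division:

```python
# Volumetric energy densities (Wh/l) quoted in the text.
densities_wh_per_l = {
    "lithium_prismatic": 200,    # cell phone batteries: "200-plus" Wh/l
    "lithium_cylindrical": 400,  # laptop batteries: "400-plus" Wh/l
    "dmfc_20pct": 750,           # fuel cell at 20% methanol efficiency
    "dmfc_30pct": 1100,          # fuel cell at 30% methanol efficiency
}

def advantage(fuel_cell: str, battery: str) -> float:
    """Run-time multiple of a DMFC over a battery of equal volume."""
    return densities_wh_per_l[fuel_cell] / densities_wh_per_l[battery]

# Against a laptop-style cylindrical cell the multiple lands in roughly
# the "two to three times" range the text cites; against a prismatic
# phone cell the gap is even wider.
print(advantage("dmfc_20pct", "lithium_cylindrical"))  # 1.875
print(advantage("dmfc_30pct", "lithium_cylindrical"))  # 2.75
print(advantage("dmfc_30pct", "lithium_prismatic"))    # 5.5
```

Note that the quoted battery densities are floor figures (“200-plus,” “400-plus”), so these multiples are upper bounds.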
Note, however, that several hurdles must be overcome in order for DMFC
technology to take market share from existing lithium ion technology.
¤ Pricing. Lithium ion battery packs for notebook computers and mobile phones
(which, as noted, make up more than 90% of the portable electronics target
market) sold for average prices of $88.39 and $12.51, respectively, in 2002. By
2005, these prices are expected to drop by 25.0% (for notebook computers) and
26.5% (for mobile phones) to $66.43 and $9.20, respectively. While DMFC-powered products are not yet commercially available, and the companies
involved have not provided pricing information (for competitive purposes), the
pricing of fuel cells must ultimately be at a level that is at least competitive with
lithium ion batteries. That said, Dr. William P. Acker, CEO of Mechanical
Technology, points out that when lithium ion batteries were first introduced,
they were three times as expensive as the prevailing nickel cadmium batteries,
yet the market was willing to pay for their improved performance.
¤ Regulations. At the present time, methanol cannot be transported either
internationally or domestically in the passenger cabins of commercial aircraft.
However, an initial step in gaining regulatory approval occurred in December
2004, when the United Nations Committee of Experts on the Transportation of
Dangerous Goods (CETDG) recognized properly packaged methanol as
transportable cargo. The next step is that the transportation regulatory bodies of
individual countries must each allow properly packaged methanol to be
transported in the passenger cabins of commercial airplanes. Mechanical
Technology’s Dr. Acker expects that these airline transportation issues will be
resolved within two years.
¤ Threats from Existing Technology. Manufacturers of existing technologies are
working vigorously to make lithium ion batteries last longer. Some observers
claim that lithium ion batteries have not fully matured because the metals and
chemicals used are constantly changing. Moreover, others point out that
semiconductor companies are working to make devices that use battery power
more efficiently. However, Mechanical Technology’s Dr. Acker believes that,
even with the improvements that many observers are predicting, lithium ion
batteries are still not likely to come close to the performance offered by micro
fuel cells.
So, these issues do not seem insurmountable. Consequently, companies such as the
ones listed above are all developing DMFC technology for portable electronic
products. Given the size of the potential market (estimated to grow to $10 billion in
2010 from $5 billion in 2004), first-mover advantage will be key. In that regard, on
December 14, 2004, Mechanical Technology introduced its first commercial
product (a DMFC power pack for an Intermec RFID inventory scanner). This was a
significant event because it was the first time that a DMFC-powered product was to
be tested in a “real world” setting against lithium ion batteries.
Mechanical Technology’s strategy is to first enter selected markets in order to
validate and test its DMFC power system (named Mobion). These end markets,
which include both industrial and military applications, have been targeted by
Mechanical Technology due to their ability to adopt technologies earlier than
consumer markets, as well as the inherent value that fuel cells offer when compared
to existing power sources.
¤ The target industrial market for Mechanical Technology includes portable
electronic devices (e.g., handheld inventory scanners — such as the
aforementioned Intermec RFID inventory scanner — notebooks, PDAs, and
tablets) that are designed for rugged use. Major users include distributors,
manufacturers, transport companies, utilities, public safety, retailers, hospitals,
and financial institutions.
¤ With regard to military markets, we note that, on average, a soldier carries
approximately ten batteries — each weighing 2.3 lbs. — to power radios, a
handheld global positioning system (GPS), flashlights, night-vision goggles, and
laser sights. It is estimated that, when using DMFC technology, this weight load
could potentially be cut by 50%–80%, depending on the application, while still
offering the same amount of power.
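The arithmetic behind that load-out claim is straightforward (a sketch applying the weights and the 50%–80% range directly from the text):

```python
# Ten batteries at 2.3 lbs each, with DMFCs cutting that weight by
# 50%-80% depending on the application.
batteries_per_soldier = 10
weight_per_battery_lbs = 2.3

battery_load_lbs = batteries_per_soldier * weight_per_battery_lbs
load_after_dmfc_lbs = [round(battery_load_lbs * (1 - cut), 2)
                       for cut in (0.5, 0.8)]

print(round(battery_load_lbs, 2))  # 23.0 lbs of batteries today
print(load_after_dmfc_lbs)         # [11.5, 4.6] lbs with DMFC power
```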
However, the consumer electronics power pack market, with a current estimated
market value of $5 billion, ultimately represents the largest potential opportunity for
portable fuel cells. Devices such as PDAs, handheld entertainment systems, and
mobile phones — i.e., lithium-ion powered devices — are the most likely candidates
for DMFC-based power systems given their increasingly complex technology and
power requirements.
In that regard, in September 2003, Mechanical Technology entered into a strategic
alliance agreement with Gillette under which Mechanical Technology, Gillette, and
Gillette’s Duracell business unit intend to develop and commercialize micro fuel cell
products to power handheld, mass-market, portable consumer devices. Specifically,
Gillette is developing ways to package the methanol to be used in the DMFC units.
Mechanical Technology’s Dr. Acker observes that developing industry standards is a
key factor behind the Mechanical Technology/Gillette-Duracell relationship, as this
will pave the way for the introduction of micro fuel cells into the mass market. (We
noted above that the key to IBM’s successful development of a true “personal
computer” was its open system that attracted a wide range of partners, in sharp
contrast to Apple’s closed, proprietary system.)
At the other end of the spectrum, another group of companies is focused on
developing fuel cells for much larger applications, most notably on-site and mobile
power generation. Most of these companies fall into one of two camps: either they
are part of a larger corporation (e.g., General Motors, Ford, Honda, United
Technologies, Siemens, Dow Chemical, 3M, or General Electric), or they are
entrepreneurially driven (e.g., Ballard Power, Hydrogenics, and Plug Power). Many
of the entrepreneurially driven companies employ proton exchange membrane
(PEM) fuel cell technology to create a fuel cell power plant consisting of several
individual “cells” stacked together.
These individual fuel cells need pure hydrogen to run efficiently and generate 0.5–
0.9 volts of direct current electricity. The individual cells are combined in a “stack”
configuration to produce the higher voltages more commonly found in low- and
medium-voltage distribution systems. The stack is the main component of the
power section (or “engine”) of a fuel cell power plant.
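To see why stacking matters, consider a rough sizing sketch. The 0.5–0.9 volt per-cell range comes from the text; the 48-volt target is a hypothetical illustration (a common telecom bus voltage), not a figure from the report:

```python
import math

# Individual PEM cells deliver 0.5-0.9 V DC, so usable bus voltages
# are reached by connecting cells in series in a "stack."
def cells_needed(target_volts: float, volts_per_cell: float) -> int:
    """Series-connected cells required to reach a target DC voltage."""
    return math.ceil(target_volts / volts_per_cell)

print(cells_needed(48.0, 0.9))  # 54 cells at the high end of the range
print(cells_needed(48.0, 0.5))  # 96 cells at the low end
```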
The long-term potential of fuel cells is underscored by the billions of dollars already
invested by the blue chip industrial companies listed above. However, a key issue is
market acceptance. Lead acid batteries and diesel generators are proven
technologies that have worked for decades and are cost efficient today. (On the
other hand, these technologies carry drawbacks: environmental concerns, reliability
issues, and maintenance costs.) In select markets — most notably battery
backup — fuel cells can compete with these existing technologies.
The key attributes of PEM fuel cells include low operating temperatures, few
moving parts (if any), extremely quiet and environmentally friendly operation, and
rapid start-up times. In addition, the solid membrane also allows a flexible
configuration for the system (the stack can be mounted horizontally or vertically),
and the system copes well with large fluctuations in power demand and loads.
These features make PEM fuel cells the most flexible and versatile of the fuel cell
systems. (Typically they are sized at less than 100 kilowatts [kW] of power.)
Plug Power is the first company to bring a reliable, economically viable product to
market that can be manufactured in mass quantities. Moreover, Plug Power
President and CEO Dr. Roger Saillant believes that the likely passing of the Energy
bill in the next 12–24 months will act as a powerful catalyst for the fuel cell market,
given that the bill will likely include tax legislation that is favorable for fuel cell
users.
Plug Power’s GenCore system targets telecom and broadband backup for thousands
of switching stations across the country. At $15,000 for a 5 kW system, the
GenCore unit compares favorably to lead acid batteries that cost $18,000 for the
same output. Furthermore, the GenCore system lasts ten years or more, while the
batteries last only three to five years and cost $12,000 to replace at these intervals.
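A ten-year cost sketch using the prices quoted above makes the comparison concrete (undiscounted, ignoring fuel and maintenance; the four-year battery life is an assumption within the quoted three-to-five-year range):

```python
# Ten-year cost of 5 kW of backup power, using the quoted prices.
YEARS = 10

gencore_cost = 15_000          # one GenCore unit lasts ten years or more

lead_acid_initial = 18_000     # initial lead acid installation
lead_acid_replacement = 12_000 # replacement cost at each interval
replacements = (YEARS - 1) // 4          # replaced in years 4 and 8
lead_acid_cost = lead_acid_initial + replacements * lead_acid_replacement

print(gencore_cost)    # 15000
print(lead_acid_cost)  # 42000
```

Under these assumptions, the fuel cell system costs roughly a third of the battery alternative over a decade.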
Figure 36. The Evolution of Backup Power Generation
[Images: entirely dependent on the power grid → diesel generator → Plug Power’s GenCore system]
Source: Smith Barney and Plug Power
The longevity of the GenCore system makes a difference for a company like
Verizon, which operates close to 200,000 sites requiring on-site backup power.
Dr. Saillant expects that in three to five years, Plug Power will have a substantial
hold on the premium backup market because the cost of its GenCore system
continues to drop steadily (see Figure 37). In particular, material costs have
declined as the company has turned to external suppliers as the source of key
components (i.e., rather than making them in house).
Figure 37. GenCore Direct Material Cost (DMC) Reduction
index, 2001 = 100%; DMC declines steadily from 2001 through the expected 2005 reduction
Source: Plug Power
Following the GenCore product rollout, Plug Power plans to introduce fuel cells into
markets such as forklifts and auxiliary power on heavy trucks and marine
applications. Thereafter, the GenSys platform is expected to roll out to on-site prime
power electricity generation for residential uses. The sweet spot of this market will
come when fuel cell power drops below $500 per kW, the point at which a
homeowner can justify the sale on a payback basis compared to the $0.08–$0.11 per
kilowatt hour (kWh) paid for grid power.
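A rough payback sketch at that threshold illustrates the logic. The system size, 25% utilization, and the omission of fuel and maintenance costs are illustrative assumptions, not figures from the report:

```python
# Payback sketch at the report's "$500 per kW" residential sweet spot.
system_kw = 5.0
cost_per_kw = 500.0           # the threshold quoted in the text
grid_price_per_kwh = 0.095    # midpoint of the $0.08-$0.11 range

capital_cost = system_kw * cost_per_kw       # $2,500 up front
annual_kwh = system_kw * 8760 * 0.25         # hours/year * utilization
annual_grid_bill = annual_kwh * grid_price_per_kwh

payback_years = capital_cost / annual_grid_bill
print(round(payback_years, 1))  # ~2.4 years under these assumptions
```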
Stand-alone hydrogen generators for fuel cell applications could eventually evolve
into a massive global residential opportunity, creating a decentralized power
structure and facilitating the sourcing of electricity in remote areas. This could
someday preclude the need to extend the power grid in parts of the globe, in much
the same way that cell phone technology has eliminated the need to extend the wire
line telecom infrastructure. Dr. Saillant believes there is some potential for the
development of the residential market in the next five years, but the real growth
opportunity likely lies further in the future.
E-Money: PayPal and Euronet Worldwide
In 2004, the average number of micropurchases (defined as transactions of less than
$5) conducted without cash was less than one per adult per month in Organization
for Economic Cooperation and Development (OECD) nations. By 2010, that
number is expected to grow to at least ten micropurchases per month for 25% of the
adults in OECD nations — equal to about 1 billion adults.29 These transactions will
likely include the purchase of everything from train tickets to parking. The
facilitator of these and larger transactions will be “e-money.”
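The scale implied by that projection is easy to work out. The average ticket size below is a hypothetical illustration; the report specifies only that micropurchases are transactions of less than $5:

```python
# Scale implied by the Gartner projection cited above.
adults = 1_000_000_000        # ~25% of OECD adults by 2010
purchases_per_month = 10
avg_ticket_usd = 2.50         # hypothetical average micropurchase

monthly_transactions = adults * purchases_per_month
annual_volume_usd = monthly_transactions * 12 * avg_ticket_usd

print(monthly_transactions)   # ten billion cashless micropurchases a month
print(annual_volume_usd)      # $300 billion a year at that average ticket
```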
Cash Is Trash (Credit Cards and Checks Too)
Over the millennia, different payment methods have evolved as civilization has
advanced. In the earliest societies, barter among people living in close proximity
involved the exchange of resources and services for mutual advantage.
Subsequently, cattle (anything from cows to sheep to camels) became the first form
of money and formed the basis of trade between clans and tribes in different
regions.
Today, of course, there are a multitude of payment methods in use in different parts
of the globe:
¤ In the U.S., credit cards are an extremely popular form of payment (see Figure
38).
¤ In France, checks are common, representing over one-third of cashless
transactions.
¤ In the Netherlands, bank transfers are popular and also account for over one-third of cashless transactions.
¤ Finally, in Germany, direct debit is common, accounting for over one-third of
cashless transactions in that country.
Figure 38. Relative Importance of Payment Instruments
percentage of total volume of cashless transactions in 2002, for the U.S., France, the Netherlands, and Germany; instruments: credit/debit card, checks, credit transfer, and direct debit
Source: Bank for International Settlements
29 According to Gartner Consulting, “Microcommerce Will Transform the Commercial Landscape,” by S. Landry, A. Linden, J. Fenn, N. Jones, R. Knox, C. Uzureau, D. Garcia, W. Andrews, and G. Daikoku; December 27, 2004.
By contrast, in Eastern European countries such as Bulgaria and Romania, cash is
the most popular payment instrument, as evidenced by the fact that, in 2002, over
50% of banknotes and coins in circulation were held outside credit institutions, as
compared to just 14% for Eurozone countries (see Figure 39).
Figure 39. Banknotes and Coins in Circulation Outside Credit Institutions
as a percentage of narrow money supply in 2002, for Bulgaria, Romania, and the euro area
Source: European Central Bank
It is more than a little odd then that, whereas our “primitive” ancestors’ cattle were
readily acceptable as payment in any part of the world, woe betide the sophisticated
American who tries to pay by credit card in Romania, the Frenchman who tries to
pay by check in the Netherlands, or the Bulgarian who offers a stack of lev to a
German. But thanks to the explosion of Internet commerce on a global basis (see
Figure 40), there are ever-growing numbers of Americans, Romanians, French,
Germans, Bulgarians, and Dutch trying to conduct business with each other.
Figure 40. eBay’s Active Users
in millions, 2001–2005
Source: eBay
Of course, one possibility is that the rest of the world will move to a more American
model, with credit cards becoming an ever-more-popular form of payment. While
this is possible, it seems unlikely, particularly in emerging economies (e.g., Eastern
Europe and Latin America) where the lack of a credit history is a key factor
inhibiting the issuance of credit cards. Furthermore, in many Western European
countries, credit cards have, in general, not been popular given that for many
Europeans indebtedness is an uncomfortable concept.
Moreover, credit cards have a range of disadvantages for consumers and merchants.
From the consumer’s perspective, credit cards are useful for business-to-consumer transactions but not for peer-to-peer commerce. From the merchant’s
perspective, credit cards are not economical for micropayments, and they are
expensive for larger transactions, given that credit card companies charge 1.7%–
2.5% of the transaction as a processing fee.
As for checks, the number of checks written has been falling steadily on a global
basis (see Figure 41). From the consumer’s perspective, checks are time-consuming
to issue and risky to accept from unknown parties. From the perspective of
merchants, checks require special handling, take time to deposit and clear, and carry
the risk that they will be lost or will bounce.
Figure 41. Checks as a Percentage of the Total Volume of Cashless Transactions
1998–2005, for Canada, France, the U.K., and the U.S.
Source: Bank for International Settlements
Clearly, the absence of a globally accepted payment method — and the resulting
complexity of cross-border payments — is a key issue today. So, too, is the growth
of peer-to-peer commerce within both developed and developing economies, as well
as the incongruous situation whereby many developing economies have 21st century
information technologies but 19th century (or earlier) financial systems. It is these
factors, among others, that are driving the emergence of e-money and, in particular,
the types of e-money that facilitate 1) global peer-to-peer e-commerce and 2) local
day-to-day e-payments.
Figure 42. The Evolution of Peer-to-Peer Payment Methods
[Images: checks → a bank’s “How to Wire Funds” instruction sheet (cutoff times, ABA/SWIFT details, and fees for domestic and international wires) → PayPal]
Source: Smith Barney and eBay
Global Peer-to-Peer E-Commerce
Prior to the advent of Web transactions, most peer-to-peer sales were conducted in
person because the counterparties were in close proximity to one another.
Therefore, cash and other paper-based forms of payment tended to make sense
because timing was not an issue (i.e., funds were normally exchanged in person), the
risks associated with transactions were low, and paper-based forms of payment were
cheap to accept.
As technology evolved and the Internet gained popularity, sales began to occur on
both a national and global basis. However, paper-based forms of payment are not
practical for long-distance transactions among anonymous parties: sending cash
long distances does not make sense for obvious reasons, while accepting a check
from a stranger creates the risk that the check will bounce.
Enter PayPal, which was founded in 1998 to provide an alternative, Internet-based
payment service for consumers and small businesses. PayPal’s service uses the
existing financial infrastructure to enable its account holders to send and receive
payments via e-mail in real time. PayPal’s success and rapid adoption among eBay
users led to the company’s initial public offering in September 2001 and, eventually,
to its acquisition by eBay in October 2002. In 2005, PayPal is forecast to generate
nearly $1 billion in revenues (up from $700 million in 2004), which represents a
significant portion of eBay’s total revenues, forecast to reach over $4 billion.
In terms of numbers of accounts and frequency of use, PayPal is truly an online
payments giant:
¤ The company processed $19 billion worth of online transactions in 2004; and
¤ As for its customer base, in just six years PayPal has grown from no accounts to
64 million accounts, a level on a par with American Express and more than twice
the number of accounts at JP Morgan Chase (see Figure 43);
Figure 43. Number of Global User Accounts at Leading Transaction Processing Firms
user accounts in millions, year-end 2004: American Express 65, PayPal 64, Discover 53, JP Morgan Chase 30, Bank of America 28, Wells Fargo 23, Deutsche Bank 13, Barclays 13
Source: eBay presentation, February 3, 2005
¤ PayPal’s monthly user tally is twice that of American Express and four times
that of Citigroup. On a stand-alone basis, PayPal would rank as the 36th most
visited Web property in the U.S., ahead of popular franchises such as ESPN.com
and Comcast’s sites.
The picture is similarly impressive on the merchant side. PayPal now has close to
twice the number of merchant accounts as credit card behemoths Visa and
MasterCard, and more than three times the number of merchant accounts as
American Express (see Figure 44).
Figure 44. Number of Merchant Accounts
in millions: PayPal 10.7, Visa 5.6, MasterCard 5.6, Discover 4.2, American Express 3.5
Source: eBay presentation, February 3, 2005
PayPal has been able to outgrow traditional transaction heavyweights, in large part
because of its attractive fee structure for small to medium-sized businesses (see
Figure 45).
Figure 45. Average Transaction Fee as a Percentage of Sales
fee rate (0%–8%) by monthly merchant payment volume, from $0–$25K up to >$100M: traditional merchant accounts vs. PayPal
Source: eBay presentation, February 3, 2005
Small PayPal merchants with monthly payment volumes of $0–$50,000 can save
more than 200 basis points relative to traditional credit card transaction fee rates.
Historically, PayPal has been more costly for larger merchants. Over the past two
years, however, PayPal has adjusted its pricing structure to become more attractive
to larger merchants. It is estimated that PayPal now holds a 25-basis-point cost
advantage over traditional credit card companies for even the largest merchants.
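The dollar impact of those basis-point gaps can be illustrated as follows (the merchant sales figures are hypothetical; the fee gaps come from the text):

```python
# Dollar impact of the quoted fee advantages.
def annual_fees_usd(annual_sales_usd: float, fee_rate: float) -> float:
    """Processing fees paid on a year's sales at a given rate."""
    return annual_sales_usd * fee_rate

# A small merchant doing ~$25K/month with a 200-basis-point advantage:
small_savings = annual_fees_usd(300_000.0, 0.0200)
# A large merchant doing $120M/year with a 25-basis-point advantage:
large_savings = annual_fees_usd(120_000_000.0, 0.0025)

print(round(small_savings))  # 6000
print(round(large_savings))  # 300000
```

Even a 25-basis-point edge becomes a six-figure annual saving at large-merchant volumes, which helps explain PayPal’s push upmarket.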
How PayPal Works
PayPal can be used to send or receive funds to or from anyone with an e-mail
account in any one of the 45 countries where PayPal operates. There are two ways
in which consumers access the PayPal system:
¤ They may log on to the company’s website;
¤ They may click a PayPal icon on a merchant or auction website that
automatically transfers them to the PayPal site.
Once in the PayPal site, an individual may send funds to a merchant or friend by
providing the merchant or friend’s e-mail address, transaction amount, and preferred
funding source. Individuals may fund a transaction using a credit card, debit card,
bank account, or existing PayPal account balance. After the sender has completed
these steps, the funds are moved into a separate PayPal account for the receiver. At
the same time, the receiver is notified via e-mail that he or she has funds in a PayPal
account. A link in the e-mail transfers the receiver to the PayPal website, where he
or she may either spend or withdraw the funds (via a PayPal-branded debit card,
paper check, or electronic funds transfer).
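The flow described above can be modeled as a toy sketch (names and structure are illustrative only; this is not PayPal’s actual system or API):

```python
# Toy model of the flow: a sender specifies the recipient's e-mail, an
# amount, and a funding source; the funds land in a separate account
# for the receiver, who is then notified by e-mail.
from dataclasses import dataclass, field

@dataclass
class Account:
    email: str
    balance: float = 0.0
    notifications: list = field(default_factory=list)

def send_payment(accounts: dict, sender: str, receiver: str,
                 amount: float, funding_source: str) -> None:
    # Create an account for the receiver on first payment.
    accounts.setdefault(receiver, Account(receiver))
    accounts[receiver].balance += amount
    accounts[receiver].notifications.append(
        f"You have received ${amount:.2f} (funded via {funding_source})."
    )

accounts = {"seller@example.com": Account("seller@example.com")}
send_payment(accounts, "buyer@example.com", "seller@example.com",
             25.00, "credit card")
print(accounts["seller@example.com"].balance)  # 25.0
```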
The International Opportunity
As Figure 46 illustrates, PayPal is currently being used on only about 23% of eBay’s
international transactions, compared to 50% of its U.S. transactions.
Figure 46. Percent of eBay Transactions Closed Via PayPal
quarterly, 1Q03–4Q04: international vs. United States
Source: Company reports and Smith Barney
One of the factors limiting PayPal adoption in overseas markets in the past has been
its lack of localized sites. PayPal currently has users in 45 countries around the
world, but users in more than half of those countries are forced to either spend their
PayPal “currency” on eBay or another PayPal vendor, or deposit their funds into a
U.S. bank.
Over the past few years, PayPal has begun to invest more heavily in localizing its
service to meet the needs of its overseas users. So, for example, PayPal has
introduced a multicurrency platform that allows international users to accept
payments and hold funds in a number of different currencies, including U.S. dollars,
Canadian dollars, euros, pounds, Australian dollars, and yen.
The key issues that PayPal needs to address when launching a localized version of
the service include:
¤ appropriate regulatory approval (often the most time-consuming and
unpredictable aspect of the localization process),
¤ launching a new PayPal site with a local URL (e.g., www.paypal.uk) and local
language,
¤ integrating the new site with the local eBay site, and
¤ offering local currency conversion and establishing payment methods.
Beyond the regulatory hurdles, PayPal has developed an international platform and
strategy that has been tested over time and enables the company to enter new
geographic markets with relative ease.
PayPal is currently operating localized versions of its service in ten countries outside
the U.S., including the U.K., Canada, Australia, Austria, Belgium, France, Germany,
Italy, the Netherlands, and Switzerland. Most of these sites were launched in the
past year, and localization has clearly helped to fuel growth in the individual
markets. For example, PayPal launched a local version of its service in the U.K.
during the third quarter of 2003. In the fourth quarter of 2004, PayPal grew its U.K.
accounts at a 144% annual rate. Total U.K. payment volumes were up 205% over
the year-ago period, and PayPal was used on 58% of U.K. eBay transactions, up
from 40% a year earlier. (Indeed, one-tenth of the entire population of the U.K. has
signed up for a PayPal account.)
PayPal launched localized versions in France and Germany during the second
quarter of 2004 and experienced similar results. So, for example, two quarters after
it launched local sites in those markets, PayPal’s penetration of eBay transactions in
France more than doubled, with user accounts up 111%. In Germany, PayPal’s
penetration nearly tripled, and user accounts are up 145% since launching the local
service.
While PayPal has made great progress on the international front in the past two
years, huge growth opportunities still lie ahead. Penetration rates in newly
localized countries are growing rapidly but remain well below those in the
United States. So, for example, in terms of the broader e-commerce
market, PayPal processed around 9% of all U.S. transactions in 2004. By
comparison, PayPal only accounted for around 5% of total international transactions.
Global Merchant Services: Another Opportunity
While most PayPal customers use the service for purchasing items on eBay, other
uses for PayPal “currency” are growing. PayPal can be used to:
¤ pay taxes in York County, South Carolina;
¤ send a donation to the Pat Brody Shelter for Cats in Lunenburg, Massachusetts;
¤ purchase digital music at iTunes or DVDs at GlacierBayDVD.com;
¤ buy gift vouchers.
eBay includes these types of “off eBay” PayPal transactions within its Merchant
Services line of business. Merchant Services is a big market that eBay believes will
eventually encompass all global e-commerce transactions that do not take place
across the eBay platform. According to Jeff Jordan, president of PayPal, in 2004
eBay generated around $5.7 billion in Merchant Services total payment volumes
(TPV) — or 25% of PayPal’s TPV — representing an 80% compound annual
growth rate over the past three years (see Figure 47).
Figure 47. PayPal Global Merchant Services (“Off eBay”) Total Payment Volumes
$ in billions, 2001–2004
Source: Company reports and Smith Barney
Despite strong growth from Merchant Services over the past few years, eBay
believes it has just begun to tap this market’s potential; Mr. Jordan estimates that
only around 0.5% of international “off eBay” e-commerce transactions are processed
across the PayPal platform. The company has announced plans to invest at least
$100 million to improve its Merchant Services product, add distribution, and educate
merchants about the advantages of using PayPal.
Local Day-to-Day E-Payments
While PayPal is a facilitator of global peer-to-peer e-commerce, other forms of
e-money are emerging to facilitate local day-to-day e-payments by consumers to
businesses (and, in particular, to service providers).
As we noted above, many developing nations find themselves in the incongruous
situation of having 21st century information technologies but 19th century (or
earlier) financial systems. So, for example, despite relatively high levels of cell
phone and Internet penetration (see Figure 48), the lack of checking systems in most
Central and Eastern European countries means that bills (e.g., utility bills) must be
paid in cash at the biller’s office, a bank, or a post office through the use of a bill
payment network. Trips to these locations and long lines represent a huge
inconvenience for consumers.
Figure 48. Cell Phone Users and Internet Users per 100 Inhabitants, in 2003
[bar chart: Czech Rep, Slovak Rep, Poland, Hungary, Bulgaria, Romania, Italy, France, U.S.]
Source: International Telecommunications Union
Moreover, although some Central and Eastern European countries have direct debit
systems that take funds out of consumer accounts every month, these systems are
rarely used, owing to billers’ distrust of them and consumers’ lack of control
over the payment. As a result, cash is the dominant payment form in Central and
Eastern Europe, accounting for over 50% of household payments in some countries.
It is not just consumers in developing Central and Eastern Europe that have
relatively few payment alternatives. As Figure 49 illustrates, relative to the U.S., the
countries in the European Union have only about half the number of ATMs, as well
as far fewer credit cards and places to use those cards. The situation is much the
same throughout Asia and Latin America.
On the point of credit cards, as we noted above, for many Western Europeans,
indebtedness is an uncomfortable concept, while the lack of credit histories inhibits
credit card growth. (As Figure 49 illustrates, debit cards are twice as prevalent in
the European Union as credit cards.) Moreover, some cultures consider credit cards
redundant due to flexible bank overdraft policies — the U.K., France, and Italy are
the biggest users of checks. Finally, as is the case in Eastern Europe, direct debit is,
in general, not a popular payment method for most consumers in Western Europe.
Figure 49. ATM and Credit/Debit Card Penetration Ratios by Region, in 2002

                            ATMs per    ---------- Credit Cards ----------   ---------- Debit Cards -----------
                            Million     Per       Outlets per  Transactions  Per       Outlets per  Transactions
                            People      Person    MM People    Per Person    Person    MM People    Per Person
East & Central Europe          276       0.1        4,216           5         0.5        3,601           8
European Union                 700       0.5       13,213          13         0.9       12,781          35
U.S.                         1,220       4.4       47,124          62         0.9       12,128          54

Source: European Central Bank and Bank for International Settlements
Figure 50. The Evolution of Local Day-to-Day Payment Methods
[Illustrations: Barter (“An English Naval Officer Bartering with a Maori,” from Drawings illustrative of Captain Cook’s First Voyage, 1768–71; 1769; by The Artist of the Chief Mourner) → Cash → Euronet Worldwide]
Source: British Library, Smith Barney, and Euronet Worldwide
So, reflecting in large part the difficulty of obtaining credit, prepaid phone service is
the predominant payment method in the U.K., as well as other European countries.
Until recently, the prepaid service was paid for, in most instances, by purchasing a
“scratch card” for cash (these cards are similar to the long distance prepaid phone
cards an American would see in a convenience store). For retailers, however, the
disadvantages of prepaid cards include 1) a large inventory carrying cost, 2) the risk
of theft, 3) the need to keep prepaid cards for all the different mobile operators, and
4) the need to keep different denominations of cards (e.g., £5, £10, and £20). But,
thanks to an innovative approach by Euronet Worldwide, that prepaid model has
changed, both in the U.K. and in other countries, which, as we discuss below, has
significant implications for a whole range of local day-to-day e-payments.
Euronet, which has been operating in Europe since 1994, has established an ATM
network that is the only intercontinental debit network. ATMs are, however, only
part of Euronet’s business strategy — a major long-term growth driver for the
company is the introduction of value-added services. Indeed, Euronet states that its
mission is “to bring electronic payment convenience to millions who have not had it
before.” (In a way, these aspirations echo those of Dartmouth’s John Kemeny and
Thomas Kurtz, who, you will recall, developed BASIC in order to teach interactive
computing to all students, not just those studying computer science. Bill Gates’
subsequent development of a version of BASIC with even greater ease-of-use was
key to bringing interactive computing to a larger market.)
In that regard, Euronet’s greatest success of late has been its eTop Up services that
distribute minutes for prepaid mobile phones electronically at point-of-sale (POS)
locations, ATMs, and on the phones themselves (although most of this activity has
thus far happened at POS locations). Over 70% of Euronet’s revenues are now
derived from eTop Up services, and the company is the world’s largest processor of
prepaid transactions, supporting more than 171,000 POS terminals at 79,000 retail
locations around the world.
With eTop Up, Euronet basically acts as an electronic payments network that ties
together mobile operators and retail locations. When mobile phone customers go
into a retail outlet (e.g., a supermarket), they provide their mobile phone information
and pay the retailer for the amount of minutes they wish to purchase. This
information is entered into a device that resembles a credit card terminal and is
routed to Euronet, which deposits the funds into the mobile operator’s bank account.
Euronet then retrieves a prepaid minute PIN code from the mobile operator and
prints this code out on the eTop Up terminal located in the retail outlet. The mobile
phone user enters the code into the phone and the account is credited with the newly
purchased minutes.
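The purchase flow just described can be sketched as a short simulation. This is an illustrative model only, not Euronet’s actual systems or API: the class and function names (EuronetNetwork, top_up, issue_pin, and so on) are our own assumptions.

```python
# Illustrative sketch of the eTop Up purchase flow described above.
# All names here are hypothetical; this models only the ordering of
# the steps in the text, not Euronet's real network.

class MobileOperator:
    def __init__(self, name):
        self.name = name
        self.bank_balance = 0.0

    def issue_pin(self, amount):
        # The operator credits a prepaid balance and returns the PIN
        # code that unlocks it (simplified).
        return f"PIN-{self.name}-{amount:.2f}"

class EuronetNetwork:
    """Ties together mobile operators and retail POS terminals."""
    def __init__(self):
        self.operators = {}

    def register(self, operator):
        self.operators[operator.name] = operator

    def top_up(self, operator_name, amount):
        # 1. The retailer's terminal routes the payment to Euronet.
        operator = self.operators[operator_name]
        # 2. Euronet deposits the funds into the operator's bank account.
        operator.bank_balance += amount
        # 3. Euronet retrieves a prepaid-minute PIN from the operator
        #    and returns it for printing at the POS terminal.
        return operator.issue_pin(amount)

network = EuronetNetwork()
operator_x = MobileOperator("OperatorX")
network.register(operator_x)

# A customer hands the retailer the cash price of the minutes:
pin = network.top_up("OperatorX", 10.0)
print(pin)                      # code printed on the POS receipt
print(operator_x.bank_balance)  # funds settled with the operator
```

In the real network, settlement and PIN retrieval of course happen over secure connections between the terminal, Euronet, and the operator; the sketch captures only the sequence of steps.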
eTop Up is proving to be a very popular solution in Europe, where roughly half of
the mobile phone plans are prepaid accounts. (Euronet is currently processing
twenty million prepaid transactions per month.) Most eTop Up products did not
exist four years ago and now account for roughly half of prepaid sales in the U.K.
(one of the first countries to adopt eTop Up). Indeed, Euronet is currently the largest
provider of eTop Up solutions in three key markets (the U.K., Germany, and
Poland), with that leading position being reflected in the significant proportion (at
over 60%) of the company’s revenues that are derived from those three countries.
Given the high penetration of cell phones in many countries overseas, prepaid
mobile phone service was an obvious market for Euronet to address first. However,
the company has since begun to expand into other market segments:
¤ Music. In late 2004, Euronet announced a deal with Napster by which
consumers in the U.K. can prepay for music downloads in the same way that
they prepay for mobile phone service. So, for example, a British teenager can
go to a Euronet POS terminal at a retailer, hand over £10 and, in return, receive
a special code number. Subsequently, the teenager can go to the Napster
website, enter the code number, and download a designated amount of music.
¤ Gambling. It is legal to wager in Australia, and Euronet has an arrangement
with a wagering company in that country that allows gamblers to conveniently
top up their wagering cards. So, for example, a gambler can go to a Euronet
POS terminal at a retailer, hand over A$50 and, in return, receive a special code
number. Subsequently, the gambler can dial the wagering service, enter the
code number, and top up his electronic wagering card with the prepaid amount.
Euronet’s Chairman and Chief Executive Officer, Michael J. Brown, pointed out to
us that there is a wide range of other possible applications of eTop Up, including:
¤ ring tones for mobile phones,
¤ electronic games, and
¤ utility bills.
He observes that any transaction that is recurring and cash based could potentially be
moved to the eTop Up model, and he points out that the attraction of this for Euronet
is that, given the infrastructure already in place, any additional revenues the
company generates will be highly profitable.
Mr. Brown believes that the biggest opportunities for growth are in economies that
are largely cash based and where the convenience of eTop Up services will be a big
improvement over existing payment methods. Consequently, Euronet continues to
focus on opportunities in Central and Eastern Europe, Asia, and South America. In
particular, Mr. Brown highlighted Brazil, China, and Russia as three attractive
markets.
Phood: Senomyx and Martek
The word “phood,” which is used to connote food that offers pharmaceutical
benefits, has steadily been gaining usage, as an increasing number of phood products
have become available to consumers. Importantly, phood contains natural
substances (e.g., calcium-enhanced orange juice) and not pharmaceuticals, so that
the products do not have to be regulated by the Food and Drug Administration
(FDA) as drugs.
On the demand side, increasing awareness of health issues by consumers — spurred
in large part by the nation’s obesity epidemic — is driving interest in healthy eating.
Moreover, the federal government recently issued new dietary guidelines for
Americans, and for the first time since the recommendations were introduced in
1980, they emphasized healthy eating, as well as weight loss and cardiovascular
health.
On the supply side, regulators are making it easier for companies to advertise health
claims about their products. Until recently, food companies were allowed to
advertise the health benefits of their products only if conclusive evidence —
reviewed by the FDA — supported the claim. But in 2003, the FDA changed its
policy to allow “qualified health claims,” enabling companies to put such claims on
their product labels based on limited and preliminary scientific evidence.
Phood Flavor Enhancers
One aspect of phood is flavor enhancers that improve the health profile of packaged
food and beverage products by reducing additives such as monosodium glutamate
(MSG), salt, and sugar, while at the same time maintaining or enhancing the taste of
the product. Of course, mankind’s obsession with enhancing taste by way of food
additives goes back thousands of years.
The Chinese were said to be among the first to discover the uses of herbs and spices,
both for medicinal purposes and as flavor enhancers in cooking — chrysanthemums
were originally grown for their medicinal properties and were a valued ingredient in
a Taoist elixir. In Europe, thanks to the “spice trade,” spices became as valuable as
gold or silver by the ninth century, and in early America, almost every colonial
home featured an herb garden; by the 1950s, however, the additive MSG was being
used in considerable quantities in the U.S.
Today, several companies manufacture flavor and flavor enhancer products from
traditional methods (e.g., using additives or modified chemical or natural flavors),
including International Flavor & Fragrances, Givaudan SA, Symrise, Quest
International, and Firmenich. However, Senomyx, a biotechnology company, is at
the forefront of developing flavor enhancers derived from sensory and taste-receptor-based technology. The company has a unique approach to developing
flavor enhancers by applying the same innovative technologies used by
biotechnology and pharmaceutical companies.
Figure 51. The Evolution of Flavor Enhancers
[Illustrations: Herbs and Spices → MSG → Enhancers Derived from Sensory and Taste-Receptor-Based Technology]
Source: Smith Barney
Senomyx’s approach has the key advantage of screening a large number of
compounds in a more rapid fashion than traditional methods, by utilizing proprietary
assays (based on identified human taste receptors) and high-throughput screening
technology. The company’s objective is to identify highly potent flavor enhancers
that require minimal concentrations (in the range of parts per million) to be utilized
in order to replace or reduce the need for unhealthy additives in foods and
beverages. Senomyx’s strategy is to forge collaborations with leading food and
beverage companies on an exclusive basis for a specific flavor or flavor enhancer in
a particular product category in exchange for R&D funding, milestone payments,
and royalties on product sales.
The key concept that Senomyx exploits is that taste is a chemical sense — certain
receptors can detect chemicals in the environment. In humans, these receptors are
found on the tongue in the form of taste buds. Humans can sense five basic taste
qualities: bitter, sweet, salty, sour, and umami (savory).
¤ The first step in Senomyx’s discovery and development process is to develop
taste receptor assays. These assays measure the interactions between the taste
receptors and potential flavors and flavor enhancers. Senomyx has developed
assays for the savory, sweet, and salty tastes, and is also working on an
assay for the bitter taste. Dr. Mark Zoller,
Senomyx’s chief scientific officer, indicated to us that this assay is more
complex to construct since the bitter taste involves a large number of different
taste receptors. As a point of interest, Dr. Zoller indicated that the bitter taste
evolved as a way to protect humans from poison. A product that could block the
bitter taste could be particularly useful for children’s medicines. In late 2004,
the company announced a collaboration with Nestlé in the area of coffee and
coffee whiteners, and Smith Barney biotechnology analyst Elise Wang believes
this program is focused on the development of a bitter blocker.
¤ After these assays have been developed, Senomyx uses automated high-throughput screening to rapidly assess its libraries of diverse natural and
synthetic compounds to identify “hits.” This approach provides the ability to
screen a larger number of compounds in a more rapid fashion than traditional
methods: Senomyx screens approximately 150,000 compounds per year,
compared with traditional approaches that screen, on average, 1,000 compounds
per year. A panel of trained taste testers is then used to evaluate the taste effect
of these compounds. Based on the panel’s analysis, Senomyx will select
compounds that have demonstrated a positive taste effect.
¤ The next step is to optimize, or chemically enhance, the lead compound to allow
lower amounts of it to be used in the consumer product. The optimized
compounds that meet the desirable taste attributes in food and beverage will
become the product candidates.
¤ After the product candidates are determined, Senomyx and its collaborators will
choose one or more of them for commercialization (see Figure 52). The first
step in this process is that the product candidates are evaluated for safety in
animal studies. Following the safety evaluation, Senomyx anticipates that its
collaborators will test market the products for approximately six to 12 months to
determine consumer acceptance.
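The four-step discovery funnel above can be sketched as a filter pipeline. This is a toy illustration under invented assumptions: the compounds, activation scores, thresholds, and function names are all ours, and do not reflect Senomyx’s actual screens.

```python
# Hypothetical sketch of the discovery funnel described in the bullets
# above: assay-based screen -> taste panel -> optimization. All data
# and thresholds are invented for illustration.

def screen_library(compounds, assay):
    """High-throughput screen: keep compounds that activate the receptor assay."""
    return [c for c in compounds if assay(c) > 0.5]

def taste_panel(hits):
    """Trained tasters confirm a positive taste effect (modeled as a flag)."""
    return [c for c in hits if c.get("palatable", False)]

def optimize(leads):
    """Chemically enhance leads so lower concentrations suffice."""
    return [dict(c, potency=c["potency"] * 10) for c in leads]

# Toy library of three compounds with receptor-activation scores.
library = [
    {"name": "cmpd-001", "activation": 0.9, "palatable": True,  "potency": 1.0},
    {"name": "cmpd-002", "activation": 0.2, "palatable": True,  "potency": 1.0},
    {"name": "cmpd-003", "activation": 0.8, "palatable": False, "potency": 1.0},
]

hits = screen_library(library, assay=lambda c: c["activation"])
leads = taste_panel(hits)
candidates = optimize(leads)
print([c["name"] for c in candidates])  # → ['cmpd-001']
```

The point of the structure is the narrowing at each stage: the assay screen does the high-volume work (150,000 compounds a year in Senomyx’s case), and only the survivors reach the slower human panel and chemistry steps.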
Figure 52. Taste Areas and Associated Product Categories

Taste Areas      Example Product Categories                                  2003 Estimated Worldwide Sales
Savory and Salt  Ready meals, sauces, spreads, frozen food, beverages,      $368 billion
                 meal replacements, soups, pasta, dried food, snack foods,
                 processed meats, processed cheeses, and cracker products
Sweet            Confectionaries, cereal, ice cream, beverages,             $383 billion
                 and bakery products

Source: Euromonitor 2004 (for estimated worldwide sales) and Smith Barney
Although Senomyx’s product candidates are developed using similar techniques
applied in the therapeutic drug area, they are regulated as flavor ingredients,
resulting in a lower level of regulatory scrutiny. In addition, these products are
unlikely to be reviewed as food additives (e.g., NutraSweet), which typically require
extensive human testing, since minimal concentrations (in the range of parts per
million) of Senomyx’s flavors and flavor enhancers are expected to be utilized in
food and beverage products.
In that regard, Senomyx recently announced it has received Generally Recognized as
Safe (GRAS) designation for the company’s savory flavor enhancers from the
Flavor and Extract Manufacturing Association (FEMA) panel. (These flavor
enhancers are designed to reduce or eliminate the amount of monosodium glutamate
[MSG] and inosine monophosphate [IMP], an expensive flavor enhancer of the
glutamate taste.) FEMA has a working relationship with the FDA, so no separate
FDA approval is required. As for overseas approvals, Dr. Zoller expects that
European approval should follow about two years after FEMA GRAS designation.
Senomyx has collaboration agreements with four of the largest packaged food and
beverage companies (see Figure 53).
Figure 53. Market Opportunities for Senomyx and Collaborators
Source: Company reports
¤ Senomyx has granted Campbell Soup exclusive product rights to wet soups and
savory beverages in the salt enhancement program.
¤ Coca-Cola has been granted exclusive product rights to soft drinks and other
nonalcoholic beverages, and a co-exclusive agreement for powdered beverages.
Elise Wang believes the focus of this collaboration is on developing a sweet
enhancer.
¤ Kraft Foods has been granted a co-exclusive agreement for powdered beverages
in the sweet enhancement program.
¤ Nestlé has been granted exclusive product rights for dehydrated and culinary
food, frozen food, and wet soups in the savory enhancement program, and for
dehydrated and culinary food and frozen foods in the salt enhancement program.
Significantly, because of the aforementioned GRAS determination, Nestlé will
now be able to begin consumer acceptance testing of food products (e.g., sauces,
frozen foods, processed cheese, and snack foods) containing Senomyx’s savory
enhancers. Senomyx’s Dr. Zoller anticipates that among the key selling points
of these products will be the reduction or elimination of MSG, as well as the
enhancement of naturally occurring savory flavors. Senomyx has said that it
anticipates the first commercial sale of such products could occur during the first
half of 2006, which would result in royalty payments to the company.
Based on the existing agreements with these four collaborators, Senomyx’s
immediate addressable market opportunity is approximately $36 billion in sales.
Assuming the regulatory success of these flavor enhancers, Senomyx could receive
royalties on product sales in a range of 1%–4%, suggesting approximately $360
million–$1.44 billion in revenues. Note that Senomyx’s current license agreements
are exclusive arrangements for specific flavor enhancers in a particular product
category. Therefore, Senomyx has the ability to license the flavor enhancers from
existing collaborations for different food and beverage product categories.
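The revenue range quoted above is simple arithmetic on the stated assumptions: a 1%–4% royalty applied to roughly $36 billion in addressable product sales.

```python
# Worked check of the royalty range cited above: a 1%-4% royalty
# on an addressable market of ~$36 billion in product sales.
addressable_market = 36e9                 # $36 billion in sales
royalty_low, royalty_high = 0.01, 0.04

low = addressable_market * royalty_low    # $360 million
high = addressable_market * royalty_high  # $1.44 billion
print(f"${low/1e6:.0f} million to ${high/1e9:.2f} billion")
# → $360 million to $1.44 billion
```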
Phood Additives
As we noted above, in 2003 the FDA changed its policy to allow “qualified health
claims,” enabling companies to put health claims on their product labels based on
limited and preliminary scientific evidence. Under the new system, food makers
have to seek FDA approval first, and the FDA responds by grading each potential
claim: “A” for scientifically proven claims; “B” for those in which there is scientific
evidence supporting the claim, but the evidence is not conclusive; “C” when the
evidence is limited and not conclusive; and “D” when there is little scientific
evidence supporting the claim. So, for example, in 2004 the FDA approved the
availability of a qualified health claim for reduced risk of coronary heart disease on
foods (such as oily fish) that contain docosahexaenoic acid (DHA) omega-3 fatty
acids.
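The four-tier grading scheme maps naturally onto a lookup table. The short descriptions below are our paraphrase of the definitions above, not the FDA’s official wording:

```python
# The FDA's qualified-health-claim grades, condensed from the text
# above into a lookup table (the paraphrasing is ours).
FDA_CLAIM_GRADES = {
    "A": "scientifically proven claim",
    "B": "supporting scientific evidence, but not conclusive",
    "C": "evidence limited and not conclusive",
    "D": "little scientific evidence supporting the claim",
}

def describe_claim(grade):
    """Return the evidence level behind a graded health claim."""
    return FDA_CLAIM_GRADES[grade.upper()]

print(describe_claim("b"))
# → supporting scientific evidence, but not conclusive
```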
In addition to these possible cardiovascular benefits, DHA is essential for the proper
functioning of the brain in adults, and for the development of the nervous system
and visual abilities during the first six months of life. DHA is naturally found in
breast milk, as well as in cold-water fatty fish, organ meats, and eggs.
Figure 54. The Evolution of Food Additives
[Illustrations: adding vitamin C to your diet → adding calcium to your diet → adding DHA to your diet]
Source: Smith Barney
According to the National Institutes of Health (NIH), nutrition experts have issued
recommendations that adults consume approximately 220 mg of DHA per day, and
pregnant and lactating women consume approximately 300 mg of DHA per day.
The experts also indicated that adequate intake for infants on infant formula should
be approximately 0.35% DHA of total fat.
In that regard, a number of companies currently offer DHA products:
¤ The major manufacturers of DHA-containing fish oil are BASF and Hoffman-LaRoche.
¤ In addition, Nutrinova, an operating unit of Celanese Ventures, is actively
marketing a DHA microalgal nutritional oil to the food and beverage and dietary
supplement markets in the U.S. and worldwide.
¤ The Ross Products division of Abbott Laboratories submitted a Generally
Recognized as Safe (GRAS) filing on January 2, 2002, seeking FDA
concurrence that its fish oil source of DHA and its fungal source of ARA are
GRAS when used as ingredients in infant formula. No decision has been made
by the FDA to date, and the GRAS notification continues to be under
consideration.
¤ Other major pharmaceutical, chemical, specialized biotechnology, and food
companies, as well as certain academic institutions and government agencies,
are conducting research and development and commercialization of products
and technologies that may be competitive in the DHA area.
Martek Biosciences manufactures the only source of DHA approved by the FDA for
use in infant formula. The FDA has also approved Martek’s manufacture of
arachidonic acid (ARA), an omega-6 fatty acid found in the brain, which plays an
important role in brain development for infants. Specifically, in May 2001, the FDA
completed a favorable review of and granted Martek’s DHA/ARA GRAS
notification regarding the use of its DHA and ARA oil blend in infant formulas.
Martek’s nutritional oils are derived from microalgae; the company has a library of
over 3,500 species of microalgae, and it has key patents for numerous biological,
chemical, and manufacturing processes related to microalgae commercialization.
Importantly, Martek’s nutritional oils have several key competitive advantages over
fish oils and other currently available sources of DHA/ARA.
¤ For a start, Martek’s oils contain minimal fatty acids, such as EPA, and no
toxins, such as methylmercury, polychlorinated biphenyls (PCBs), and dioxin,
which are found in fish oil.
¤ Martek’s products are derived from all-natural, vegetarian sources, which are
free of pollutants and toxins and are easily digestible.
¤ Furthermore, Martek’s products have minimal taste and odor compared to fish
oil.
¤ Finally, Martek’s oils have higher oxidative stability and a longer shelf life
than fish oil, making the products more amenable to the spray drying process
required for powdered infant formula and other applications, such as food.
Dr. James Flatt, Martek’s senior vice president of research and development, points
out that the company’s approach is to go to “nature’s original source” for DHA
rather than to go indirectly through, for example, fish oils (which contain DHA
because the fish eat other sea life that feed on the microalgae).
In the near term, the infant formula market will remain the primary driver for growth
in product sales and earnings for Martek. The company currently has license
agreements with 16 infant formula manufacturers, representing approximately 70%
of the worldwide and 100% of the U.S. infant formula markets.
In addition to infant formula, Martek has entered into an exclusive license agreement
with Mead Johnson for Martek’s DHA as a nutritional supplement (i.e., vitamin) for
pregnant and lactating women in North America and parts of Asia. DHA is
considered essential for the optimal development of an infant’s brain and eyes, both
during pregnancy and after birth. DHA may also help prevent preterm (premature)
labor and may help protect against postpartum depression.
However, the greatest growth opportunity for Martek will likely be in the food area,
as the health benefits of DHA become more widely publicized. With regard to these
health benefits, we noted above that the FDA approved a qualified health claim
concerning the cardiovascular benefits of DHA. Dr. Flatt points out that there is a
growing body of data that DHA also has benefits for brain and eye development, and
not just in infants. Indeed, monkeys that were deprived of DHA were shown to
perform relatively poorly on cognitive tests. Other studies suggest that DHA may
decrease the risk of diseases such as dementia and Alzheimer’s in older adults, given
that people consuming high levels of DHA (by eating fish) experienced significantly
lower incidence of these diseases.
Martek recently announced that it has entered into a 15-year, nonexclusive DHA
license and supply agreement with Kellogg (in food products). Under the terms of
the agreement, Kellogg will develop food products containing Martek’s DHA and
must purchase almost all of its DHA needs from Martek for products in the U.S. and
other territories. Importantly, Kellogg has agreed to display the Martek DHA logo
on all product packages, print advertisements, and certain other promotional
materials. Martek indicated that Kellogg intends to launch the initial product
containing Martek’s DHA in mid-2006.
Dr. Flatt speculates that DHA could first be introduced into food products that
children consume, such as yogurts and cereals. The market opportunity for these
product categories is sizable (see Figure 55). For example, in 2004, Kellogg
reported total worldwide RTE (ready-to-eat) cereal sales of $5.3 billion (and North
America sales of $2.4 billion) and total snack sales of $2.8 billion (snack bar sales of
$479 million). Based on the latest AC Nielsen data, Kellogg is the leading food
company in RTE cereal, with a market share of 32.3%, and it represents 21.8% of
the snack/granola bars category. Martek indicated that, if it were to capture the
entire cereal market, its market opportunity would be $670 million in the U.S. and
$1.1 billion worldwide.
Figure 55. Martek’s Potential DHA Food Opportunity

                                                           Estimated Market Opportunity* ($MM)
Category     Examples                                      U.S.       Worldwide
Dairy        Milk, eggs, cheese, yogurt                    $680       $1,360
Beverages    Nutritional drinks, fruit juice               $740       $1,480
Cereals      Breakfast cereal, nutritional bars,           $670       $1,110
             cereal snacks
Breads       Bread, crackers, bagels                       $2,540     $5,100

* Assumes 100% penetration rate and current DHA pricing
Source: Martek Biosciences
As mentioned above, in comparison to fish-oil-derived DHA, Martek’s microalgae-derived DHA has the benefit of being free of pollutants and toxins, such as PCBs,
mercury, and dioxins. Furthermore, Martek’s DHA can be formulated to be added
to both dry and wet mixes with minimal taste and odor, making it an optimal food
and beverage additive.
Appendix A
30
Technology Candidates
Technology
Artificial Intelligence
Description
Artificial intelligence technology consists of
Vendors
IBM and MEDai
computer-based software and algorithms that
mirror humans’ capacity for learning,
contemplation and judgment.
Audio Mining
Applies data-mining operations (like filtering,
Autonomy, BBN Technologies,
clustering, categorization, pattern matching and Nexidia, Nice Systems, ScanSoft,
conceptual search) to audio streams, without the
need for human indexing of the content.
Augmented Reality
In augmented reality, the user’s view of the real
StreamSage, Utopy, Verint
Systems, and Witness/Eyretel
Kaiser Electro-Optics,
world is supplemented with relevant information, Microvision, and MicroOptical
typically by using a heads-up display to
superimpose text or graphics about the user’s
environment over the real-world objects.
Bioinformatics
The application of computer, mathematical, and
Accelrys, LION bioscience, and
statistical techniques to analyze and characterize
MDL Information Systems
the molecular components of living things.
Biometrics
Biometrics use an element of “what you are” as
a form of real-time identification and
authentication. They include finger or hand
Bioscrypt, DigitalPersona,
Identix, Polaroid, Sony,
Verdicom, Viisage, and Visionics
scans, handwriting on a tablet, keyboard
ballistics, iris scan, facial recognition, and other
systems.
Blu-Ray
Blu-ray technology uses a blue-violet laser to
Dell, HP, Samsung, and Sony
read and write up to 27GB of data onto a singlelayer DVD (for example, two-plus hours of HDTV
or 13 hours of standard-definition television). A
double-layer disc will offer twice this capacity.
Computer-Brain Interface
Description: Computer-brain interfaces interpret distinctive brain patterns generated voluntarily by a user as commands to a computer or other device.
Vendors: Brain Actuated Technologies, Georgia State University, Neural Signals, and University of Michigan
Contextual Personalization
Description: A portal delivers personalized views based on such contextual attributes as device, bandwidth, time, location, and task at hand.
Vendors: Art Technology Group (ATG), BroadVision, IBM, and Vignette
[30] This list contains representative solutions and providers for each technology category. The inclusion of a vendor or solution in this summary is not an indication of its leadership for any particular category.
Controlled Medical Vocabulary
Description: A controlled medical vocabulary is used to normalize the concepts and terms used to describe clinical conditions.
Vendors: 3M, Apelon, Health Language, and Medicomp Systems (Medcin)

Digital Rights Management Technologies
Description: Technologies designed to protect content after distribution through the use of policy, encryption, and watermarking.
Vendors: Microsoft, Real, ContentGuard, Macrovision, and InterTrust
Driver-Load Matching
Description: Automated matching of remote drivers and tractor-trailers, connecting them with the nearest available shipment for their next run. Typically communicated via wireless networks and in-cab PCs.
Vendors: Direct Freight Services, OrderPro Logistics, Schneider National, Teletouch, and TransCore
Dual Clutch Transmission
Description: A transmission system based on a manual gearbox, by which the driver can either initiate the gear change manually or leave the shift lever in fully automatic mode.
Vendors: BorgWarner
E-Forms
Description: E-Forms are automated and interactive templates for the capture, processing, display, and output of defined sets of business data.
Vendors: Adobe Systems, Cardiff (now Verity), FileNet, PureEdge Solutions, and Marketing Management Analytics
Electronic Ink
Description: Digital paper resembles a sheet of plastic-laminated paper. Beneath the plastic are microscopic beads that change color to form text and images. The result is a nearly paper-thin, rewritable display.
Vendors: E-Ink and Gyricon Media
Electronic Shelf Labels
Description: Programmable wireless electronic devices that affix to store shelves as labels. An alphanumeric display typically shows item pricing or promotional information in real time.
Vendors: Eldat Communications, IBM, NCR, and Telepanel
Embedded Diagnostic Tools
Description: Machinery from many manufacturers will include diagnostic and monitoring capabilities, built in and onboard, that self-adjust or provide diagnostic information to maintenance systems.
Vendors: Caterpillar, Komatsu, General Electric, and Rolls-Royce
Emotion Detection
Description: The task of recognizing the emotional states of a human (such as through face or voice recognition), with the objective of using this information for better user interfaces.
Vendors: MIT Media Lab and NCR
EMPI
Description: An enterprise master person index (EMPI) provides the ability to cross-link the same member across multiple applications, exchanging predetermined identifiers as data flows between application roles, without rewriting those applications. It will be especially important for eliminating the Social Security account number as an external look-up ID.
Vendors: Initiate Systems, Quovadx, and SeeBeyond
E-Signatures
Description: E-Signatures are associated with electronic messaging. They bind the signer to whatever the document states, prevent alterations once signed, and prevent the fraudulent transfer of a signature from one document to another. This technology is used in a number of authentication methods.
Vendors: Honeywell, Rockwell International, and Siemens
Health Spending Account “Smart Card”
Description: Card technology that provides data on eligibility, co-payments, and flexible spending account and health savings account (HSA) payments.
Vendors: UnitedHealth Group
Idea Management
Description: A process of developing, identifying, and using valuable insights or alternatives that otherwise would not have emerged through normal processes. Typically campaign focused.
Vendors: Akiva and Imaginatik
Infrastructure Virtualization
Description: The ability to leverage underutilized technology and operational resources, resulting in an optimal mix of predictable variable cost and operational leverage.
Vendors: Various independent software vendors and IBM
Inkjet Processes
Description: Depositing semiconductor materials onto a flexible substrate using an inkjet-style process.
Vendors: Alien Technology, Cambridge Display Technology, Philips, Plastic Logic, and Xerox
Interactive TV
Description: Two-way television services, including electronic program guides (EPGs), video on demand, interactive advertising, and information and communication services.
Vendors: GoldPocket Interactive, Liberate Technologies, and MSN TV
LEP/OLED
Description: Light-emitting polymers (LEPs) are based on long-chain polymers that fluoresce when a current is passed through them. Using inkjet technologies, LEPs can be “printed” onto practically any substrate to form a display of light-emitting pixels.
Vendors: Cambridge Display Technology, Lucent Technologies, and Philips
Lifetime Value
Description: The potential value of future relationships based on calculations of a set of predictive behavioral models.
Vendors: Teradata
Location “Aware” Services
Description: Services attached to location technologies in the cellular network, such as enhancing mobile applications through routing, mapping, and integration of geographic information systems.
Vendors: Various independent software vendors
Mesh Networks (Sensor)
Description: Sensor mesh networks are ad hoc networks formed by dynamic meshes of peer nodes, each of which includes simple networking, computing, and sensing capabilities. Some implementations offer low-power operation and multiyear battery life.
Vendors: Millennial Net, Dust Networks, Ember, Zensys, Intel, and Crossbow Technology
Micro Fuel Cells
Description: Micro fuel cells offer an alternative to batteries as a power source for mobile devices. They have the potential to provide two to three times the energy capacity of lithium ion batteries.
Vendors: Motorola, Toshiba, Casio, NEC, Sony, Panasonic, and Mechanical Technology
Microelectromechanical Systems
Description: Semiconductor devices incorporating structures that can physically move, in addition to electronic circuits.
Vendors: NA
Micropayments
Description: Charging small amounts (cents or fractions of cents) per Web transaction, for example, by page view or for streaming media. Usually done in prepaid scenarios.
Vendors: Amdocs, bcgi, Convergys, Portal Software, Qpass, and Trivnet
MRAM
Description: Magnetic tunnel junctions use an oxide to separate a layer of metal with a fixed magnetic orientation from a metal layer that can be magnetically flipped. This is enhanced semiconductor storage.
Vendors: HP, IBM, Infineon Technologies, and Motorola
Nanotech-Based Manufacturing
Description: Use of nanotechnology to provide manufacturing capacity.

Natural Language Processing
Description: The analysis and manipulation of words and phrases entered in natural language by users, employing artificial intelligence.
Vendors: Epic Systems, Expresiv Technologies, MedQuist, iPhrase, and Mindfabric
Oil and Gas Drilling and Completion Technology
Description: Energy exploration and production tools and technologies.
Vendors: Oilfield equipment and services companies
P2P Networks
Description: A network consisting of peer nodes capable of both requesting and responding to supported transactions, rather than clients and servers.
Vendors: NA
“Phood”
Description: Food that offers pharmaceutical benefits.
Vendors: Senomyx and Martek Biosciences, among others
PHR
Description: Patient health records (PHRs) are Internet-accessible repositories for storing an individual’s health information. PHRs are used to provide patients with a copy of the most important clinical information pertaining to their medical care.
Vendors: LifeMasters Supported SelfCare, MEDecision, and WellMed
Power Line Broadband
Description: The transmission of broadband traffic over the power line infrastructure.
Vendors: Amperion, Current Technologies, DS2, and Main.net Communications
Product Content and Data Management
Description: Product content and data management is a set of related disciplines, technologies, and solutions used to create and maintain consistent interpretation of product data across trading partners to facilitate commercial exchange.
Vendors: GXS/HAHT, IBM/Trigo, QRS, UCCnet, and Velosel
Real-Time Patient Remote Monitoring
Description: Real-time remote monitoring for patients with, for example, congestive heart failure or low blood sugar levels.
Vendors: Medtronic
Reference Data Standards
Description: Reference data standards help identify and describe customers, products, instruments, and other entities by standardizing these processes to a set of consistent identifiers and codes.
Vendors: No vendors have emerged. Most of the development work is happening through in-house development and industry groups, including the Reference Data Coalition and the Reference Data User Group.
Retinal Displays
Description: Retinal displays are a type of heads-up display that “paint” a picture directly on the sensitive part of the user’s retina. The image appears to be on a screen at the user’s ideal viewing distance. There is no actual screen in front of the user, just some simple optics (e.g., modified eyeglasses) that reflect the image back into the eye.
Vendors: MicroOptical and Microvision
RFID (Case/Pallet)
Description: RFID (case/pallet) solutions target tracking inventory at the case and pallet level.
Vendors: Alien Technology, Checkpoint, Oat Systems, GlobeRanger, IBM, Manhattan, RedPrairie, Samsys, Symbol, and Texas Instruments
RFID Payments: Worldwide
Description: Payment transactions outside the U.S. are initiated via contact-less technologies embodied in smart cards, tags, key fobs, and others. Debits can be made to bank-card accounts or prepaid accounts.
Vendors: IT outsourcing consortia
Telematics
Description: Network-enabled cars, providing in-car services such as cell phone integration, remote diagnostics, roadside SOS, e-mail, vehicle tracking, Global Positioning System navigation, and traffic information.
Vendors: ATX and General Motors (OnStar)
Truth Verification
Description: Truth verification is the analysis of a voice signal (live or recorded) to look for the variations caused by truthful versus untruthful utterances.
Vendors: Nemesysco
Video on Demand
Description: Movies, television programs, and other video downloaded or streamed via the Internet. Streaming video is like VOD offered via cable systems in that consumers may interact with the video in real time: pause, fast-forward, rewind, answer polls, transact, etc. Download services more closely resemble the satellite store-and-forward model.
Vendors: Atom Films, CinemaNow, MovieLink, and Akimbo
Video Telephony
Description: Full-duplex, real-time, audiovisual communications between/among end users over high-bandwidth networks. In cable broadband networks, traffic is IP-based and uses Data-Over-Cable Service Interface Specification (DOCSIS) 1.1/PacketCable technology.
Vendors: NA
Virtual Prototyping
Description: Highly detailed modeling of a product and its environment. It extends beyond CAE to include realistic rendering, manufacturability, service processes, product ergonomics, virtual reality, haptics, and so on. Advanced virtual prototyping incorporates statistical modeling of manufacturing and variability of material properties, environmental conditions, and usage modes.
Vendors: No notable vendors at this time. Vendors likely to deliver this technology are Altair Engineering, ANSYS, Dassault Systemes, MSC.Software, PTC, and UGS.
Voice over Internet Protocol
Description: The packetization of voice traffic for transport over an IP network.
Vendors: Avaya, Cisco Systems, Nortel Networks, and Siemens

Web Services–Enabled Business Model
Description: Web Services–enabled business models are new approaches to doing business among enterprises and consumers that would not have been possible without the benefits of Web services.
Vendors: Webify

ZigBee (802.15.4)
Description: ZigBee (802.15.4) is a global wireless standard for reliable, secure, low-power remote monitoring and control applications, including consumer applications such as electronics, home automation, machine-to-machine (M2M), and gaming.
Vendors: Ember, Honeywell, Philips, Motorola, Samsung, and Invensys

Source: Gartner Consulting and Smith Barney
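The VoIP entry in the table describes the packetization of voice traffic for transport over an IP network. As a rough, illustrative sketch only — the function names, the 160-byte framing, and the tuple format are our own assumptions, not any vendor's protocol — the core idea can be shown in a few lines of Python: an audio stream is chopped into fixed-size, sequence-numbered payloads that a receiver can reorder and reassemble.

```python
# Toy illustration of VoIP-style packetization (not any vendor's
# implementation): an audio byte stream is split into fixed-size
# payloads, each tagged with a sequence number and timestamp so the
# receiver can reorder and reassemble them after IP transport.

def packetize(audio: bytes, payload_size: int = 160):
    """Split an audio stream into (seq, timestamp, payload) packets.

    160 bytes corresponds to 20 ms of 8 kHz, 8-bit telephony audio,
    a common framing interval; the value here is illustrative.
    """
    packets = []
    for seq, offset in enumerate(range(0, len(audio), payload_size)):
        payload = audio[offset:offset + payload_size]
        timestamp = seq * payload_size  # position in samples
        packets.append((seq, timestamp, payload))
    return packets

def reassemble(packets):
    """Receiver side: sort by sequence number, concatenate payloads."""
    return b"".join(p for _, _, p in sorted(packets))

stream = bytes(range(256)) * 3            # 768 bytes of fake audio
pkts = packetize(stream)
assert reassemble(pkts) == stream         # lossless round trip
print(len(pkts))                          # 5 packets: 4 full + 1 partial
```

Real IP telephony adds codecs, jitter buffers, and loss concealment on top of this framing, which is why the table's vendors ship far more than a splitter; the sketch shows only the packetization step itself.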
Table of Figures
Figure 1. Summary of Nine Key Technologies and 12 Techno Savvy Companies....................................7
Figure 2. Technology Adapters, Exploiters, and Innovators.................................................................... 11
Figure 3. Edison Electric Light................................................................................................................ 16
Figure 4. The Model T............................................................................................................................ 19
Figure 5. RCA Radiola ........................................................................................................................... 21
Figure 6. Possible Future Uses for the Phonograph ............................................................................... 22
Figure 7. The Gramophone.................................................................................................................... 23
Figure 8. Sony Model 250 Solid State Stereo Tape Recorder ................................................................ 25
Figure 9. Sony TR 610 Transistor radio ................................................................................................. 26
Figure 10. RCA Television ..................................................................................................................... 28
Figure 11. IBM 604 Electronic Calculator ............................................................................................... 31
Figure 12. Texas Instruments’ Integrated Circuit .................................................................................... 32
Figure 13. Microsoft Windows 3.0 .......................................................................................................... 34
Figure 14. Genentech’s Human Insulin .................................................................................................. 35
Figure 15. Today’s Technology Adapters, Exploiters, and Innovators .................................................... 39
Figure 16. The Evolution of the Automobile Transmission...................................................................... 41
Figure 17. Performance Improvements of DCT Transmission vs. 6-Speed Manual ................................ 42
Figure 18. The Evolution of Energy Production ...................................................................................... 45
Figure 19. EOG’s Technological Advantage Is Now Being Priced into the Region ................................. 46
Figure 20. Employer Interest in HSAs .................................................................................................... 47
Figure 21. The Evolution of Medical Payment Processes....................................................................... 48
Figure 22. Comparative Analysis of Health Plans................................................................................... 51
Figure 23. Capital Expenditures by MCOs (2004) .................................................................................. 52
Figure 24. The Evolution of Inventory Management ............................................................................... 54
Figure 25. Key RFID Metrics.................................................................................................................. 57
Figure 26. IP Telephony Implementation Schematic .............................................................................. 58
Figure 27. The Evolution of Enterprise Communications........................................................................ 59
Figure 28. Market Share by Technology................................................................................................. 62
Figure 29. The Evolution of Medtronic’s CHF Monitoring Technology .................................................... 68
Figure 30. Medtronic Carelink Adoption ................................................................................................. 68
Figure 31. The Evolution of Diabetes Management................................................................................ 71
Figure 32. The Evolution of the External Artificial Pancreas ................................................................... 73
Figure 33. Improvements in Laptop Components................................................................................... 74
Figure 34. The Evolution of Portable Power Sources ............................................................................. 75
Figure 35. Comparative Performance of Lithium Ion Batteries vs. DMFCs ............................................. 77
Figure 36. The Evolution of Backup Power Generation .......................................................................... 80
Figure 37. GenCore Direct Material Cost (DMC) Reduction ................................................................... 81
Figure 38. Relative Importance of Payment Instrument.......................................................................... 82
Figure 39. Banknotes and Coins in Circulation Outside Credit Institutions.............................................. 83
Figure 40. eBay’s Active Users .............................................................................................................. 83
Figure 41. Checks as a Percentage of the Total Volume of Cashless Transactions ............................... 84
Figure 42. The Evolution of Peer-to-Peer Payment Methods.................................................................. 85
Figure 43. Number of Global User Accounts at Leading Transaction Processing Firms ......................... 86
Figure 44. Number of Merchant Accounts .............................................................................................. 87
Figure 45. Average Transaction Fee as a Percentage of Sales.............................................................. 87
Figure 46. Percent of eBay Transactions Closed Via PayPal ................................................................. 88
Figure 47. PayPal Global Merchant Services (“Off eBay”) Total Payment Volumes................................ 90
Figure 48. Cell Phone Users and Internet Users per 100 Inhabitants ..................................................... 91
Figure 49. ATM and Credit/Debit Card Penetration Ratios by Region .................................................... 91
Figure 50. The Evolution of Local Day-to-Day Payment Methods........................................................... 92
Figure 51. The Evolution of Flavor Enhancers........................................................................................ 96
Figure 52. Taste Areas and Associated Product Categories .................................................................. 98
Figure 53. Market Opportunities for Senomyx and Collaborators ........................................................... 98
Figure 54. The Evolution of Food Additives.......................................................................................... 100
Figure 55. Martek’s Potential DHA Food Opportunity ........................................................................... 103
Appendix A-1
ANALYST CERTIFICATION
We, Edward Kerschner and Michael Geraghty, the authors of this report, hereby certify that all of the views expressed in this research
report accurately reflect our personal views about any and all of the subject issuer(s) or securities. We also certify that no part of our
compensation was, is, or will be directly or indirectly related to the specific recommendation(s) or view(s) in this report.
Analysts’ compensation is determined based upon activities and services intended to benefit the investor clients of Citigroup Global
Markets Inc. and its affiliates ("the Firm"). Like all Firm employees, analysts receive compensation that is impacted by overall firm
profitability, which includes revenues from, among other business units, the Private Client Division, Institutional Equities, and Investment
Banking.
Smith Barney Equity Research Ratings Distribution
Data current as of 4 April 2005
                                                             Buy    Hold    Sell
Smith Barney Global Fundamental Equity
Research Coverage (2591)                                     39%    43%     19%
% of companies in each rating category
that are investment banking clients                          54%    57%     42%
Guide to Fundamental Research Investment Ratings:
Smith Barney’s stock recommendations include a risk rating and an investment rating.
Risk ratings, which take into account both price volatility and fundamental criteria, are: Low [L], Medium [M], High [H], and Speculative [S].
Investment ratings are a function of Smith Barney’s expectation of total return (forecast price appreciation and dividend yield within the
next 12 months) and risk rating.
For securities in developed markets (US, UK, Europe, Japan, and Australia/New Zealand), investment ratings are: Buy [1] (expected total
return of 10% or more for Low-Risk stocks, 15% or more for Medium-Risk stocks, 20% or more for High-Risk stocks, and 35% or more for
Speculative stocks); Hold [2] (0%-10% for Low-Risk stocks, 0%-15% for Medium-Risk stocks, 0%-20% for High-Risk stocks, and 0%-35%
for Speculative stocks); and Sell [3] (negative total return).
For securities in emerging markets (Asia Pacific, Emerging Europe/Middle East/Africa, and Latin America), investment ratings are: Buy [1]
(expected total return of 15% or more for Low-Risk stocks, 20% or more for Medium-Risk stocks, 30% or more for High-Risk stocks, and
40% or more for Speculative stocks); Hold [2] (5%-15% for Low-Risk stocks, 10%-20% for Medium-Risk stocks, 15%-30% for High-Risk
stocks, and 20%-40% for Speculative stocks); and Sell [3] (5% or less for Low-Risk stocks, 10% or less for Medium-Risk stocks, 15% or
less for High-Risk stocks, and 20% or less for Speculative stocks).
Investment ratings are determined by the ranges described above at the time of initiation of coverage, a change in risk rating, or a change
in target price. At other times, the expected total returns may fall outside of these ranges because of price movement and/or volatility.
Such interim deviations from specified ranges will be permitted but will become subject to review by Research Management. Your
decision to buy or sell a security should be based upon your personal investment objectives and should be made only after evaluating the
stock’s expected performance and risk.
Between September 9, 2002, and September 12, 2003, Smith Barney’s stock ratings were based upon expected performance over the
following 12 to 18 months relative to the analyst’s industry coverage universe at such time. An Outperform (1) rating indicated that we
expected the stock to outperform the analyst’s industry coverage universe over the coming 12-18 months. An In-line (2) rating indicated
that we expected the stock to perform approximately in line with the analyst’s coverage universe. An Underperform (3) rating indicated
that we expected the stock to underperform the analyst’s coverage universe. In emerging markets, the same ratings classifications were
used, but the stocks were rated based upon expected performance relative to the primary market index in the region or country. Our
complementary Risk rating system -- Low (L), Medium (M), High (H), and Speculative (S) -- took into account predictability of financial
results and stock price volatility. Risk ratings for Asia Pacific were determined by a quantitative screen which classified stocks into the
same four risk categories. In the major markets, our Industry rating system -- Overweight, Marketweight, and Underweight -- took into
account each analyst’s evaluation of their industry coverage as compared to the primary market index in their region over the following 12
to 18 months.
Prior to September 9, 2002, the Firm’s stock rating system was based upon the expected total return over the next 12 to 18 months. The
total return required for a given rating depended on the degree of risk in a stock (the higher the risk, the higher the required return). A Buy
(1) rating indicated an expected total return ranging from +15% or greater for a Low-Risk stock to +30% or greater for a Speculative
stock. An Outperform (2) rating indicated an expected total return ranging from +5% to +15% (Low-Risk) to +10% to +30% (Speculative).
A Neutral (3) rating indicated an expected total return ranging from -5% to +5% (Low-Risk) to -10% to +10% (Speculative). An
Underperform (4) rating indicated an expected total return ranging from -5% to -15% (Low-Risk) to -10% to -20% (Speculative). A Sell (5)
rating indicated an expected total return ranging from -15% or worse (Low-Risk) to -20% or worse (Speculative). The Risk ratings were
the same as in the current system.
OTHER DISCLOSURES
For securities recommended in this report in which the Firm is not a market maker, the Firm usually provides bids and offers and may act
as principal in connection with such transactions. The Firm is a regular issuer of traded financial instruments linked to securities that may
have been recommended in this report. The Firm regularly trades in, and may, at any time, hold a trading position (long or short) in, the
shares of the subject company(ies) discussed in this report. The Firm may engage in securities transactions in a manner inconsistent with
this research report and, with respect to securities covered by this report, will buy or sell from customers on a principal basis.
Securities recommended, offered, or sold by the Firm: (i) are not insured by the Federal Deposit Insurance Corporation; (ii) are not
deposits or other obligations of any insured depository institution (including Citibank); and (iii) are subject to investment risks, including
the possible loss of the principal amount invested. Although information has been obtained from and is based upon sources that the Firm
believes to be reliable, we do not guarantee its accuracy and it may be incomplete and condensed. Note, however, that the Firm has
taken all reasonable steps to determine the accuracy and completeness of the disclosures made in the Important Disclosures section of
this report. All opinions, projections and estimates constitute the judgment of the author as of the date of the report and are subject to
change without notice. Prices and availability of financial instruments also are subject to change without notice. If this is a fundamental
research report, it is the intention of Smith Barney to provide research coverage of this/these issuer(s), including in response to news
affecting this issuer, subject to applicable quiet periods and capacity constraints. This report is for informational purposes only and is not
intended as an offer or solicitation for the purchase or sale of a security. Any decision to purchase securities mentioned in this research
must take into account existing public information on such security or any registered prospectus.
Investing in non-U.S. securities, including ADRs, may entail certain risks. The securities of non-U.S. issuers may not be registered with,
nor be subject to the reporting requirements of the U.S. Securities and Exchange Commission. There may be limited information
available on foreign securities. Foreign companies are generally not subject to uniform audit and reporting standards, practices and
requirements comparable to those in the U.S. Securities of some foreign companies may be less liquid and their prices more volatile than
securities of comparable U.S. companies. In addition, exchange rate movements may have an adverse effect on the value of an
investment in a foreign stock and its corresponding dividend payment for U.S. investors. Net dividends to ADR investors are estimated,
using withholding tax rates conventions, deemed accurate, but investors are urged to consult their tax advisor for exact dividend
computations. Investors who have received this report from the Firm may be prohibited in certain states or other jurisdictions from
purchasing securities mentioned in this report from the Firm. Please ask your Financial Consultant for additional details.
The UK’s Financial Services Authority rules require that a firm must establish, implement and make available a policy for managing
conflicts of interest arising as a result of publication or distribution of investment research. The policy applicable to Citigroup’s equity
research products can be found at www.citigroupgeo.com. This report may have been distributed simultaneously, in multiple formats, to
the Firm’s worldwide institutional and retail customers. If this report is being made available via the Smith Barney Private Client Group in
the United Kingdom and Amsterdam, please note that this report is distributed in the UK by Citigroup Global Markets Ltd., a firm
Authorised and regulated by the Financial Services Authority (FSA) for the conduct of Investment Business in the UK. This document is
not to be construed as providing investment services in any jurisdiction where the provision of such services would be illegal. Subject to
the nature and contents of this document, the investments described herein are subject to fluctuations in price and/or value and investors
may get back less than originally invested. Certain high-volatility investments can be subject to sudden and large falls in value that could
equal or exceed the amount invested. Certain investments contained herein may have tax implications for private customers in the UK
whereby levels and basis of taxation may be subject to change. If in doubt, investors should seek advice from a tax adviser. This material
may relate to investments or services of a person outside of the UK or to other matters which are not regulated by the Financial Services
Authority and further details as to where this may be the case are available upon request in respect of this material. This report may not
be distributed to private clients in Germany. This report is distributed in Germany by Citigroup Global Markets Deutschland AG & Co.
KGaA, regulated by Bundesanstalt fuer Finanzdienstleistungsaufsicht (BaFin). If this publication is being made available in certain
provinces of Canada by Citigroup Global Markets (Canada) Inc. ("CGM Canada"), CGM Canada has approved this publication. If this
report was prepared by Smith Barney and distributed in Japan by Nikko Citigroup Ltd., it is being so distributed under license. This report
is made available in Australia to wholesale clients through Citigroup Global Markets Australia Pty Ltd. (ABN 64 003 114 832 and AFSL
No. 240992) and to retail clients through Smith Barney Citigroup Australia Pty Ltd. (ABN 19 009 145 555 and AFSL No. 240813),
Participants of the ASX Group. This advice has been prepared without taking account of the objectives, financial situation or needs of any
particular investor. Accordingly, investors should, before acting on the advice, consider the appropriateness of the advice, having regard
to their objectives, financial situation and needs. In New Zealand this report is made available through Citigroup Global Markets New
Zealand Ltd., a member firm of the New Zealand Stock Exchange. Citigroup Global Markets (Pty) Ltd. is incorporated in the Republic of
South Africa (company registration number 2000/025866/07) and its registered office is at 145 West Street, Sandton, Johannesburg
2196. The investments and services contained herein are not available to private customers in South Africa. If this report is made
available in Hong Kong by, or on behalf of, Citigroup Global Markets Asia Ltd., it is attributable to Citigroup Global Markets Asia Ltd.,
Citibank Tower, Citibank Plaza, 3 Garden Road, Hong Kong. If this report is made available in Hong Kong by The Citigroup Private Bank
to its clients, it is attributable to Citibank N.A., Citibank Tower, Citibank Plaza, 3 Garden Road, Hong Kong. This publication is made
available in Singapore through Citigroup Global Markets Singapore Pte. Ltd., a Capital Markets Services Licence holder.
© 2005 Citigroup Global Markets Inc. Member SIPC. Smith Barney is a division and service mark of Citigroup Global Markets Inc. and its
affiliates and is used and registered throughout the world. Citigroup and the Umbrella Device are trademarks and service marks of
Citicorp or its affiliates and are used and registered throughout the world. Nikko is a registered trademark of Nikko Cordial Corporation.
All rights reserved. Any unauthorized use, duplication, redistribution or disclosure is prohibited by law and will result in prosecution. The
Firm accepts no liability whatsoever for the actions of third parties. The Firm makes no representations or warranties whatsoever as to
the data and information provided in any third party referenced website and shall have no liability or responsibility arising out of, or in
connection with, any such referenced website.
ADDITIONAL INFORMATION IS AVAILABLE UPON REQUEST
US04P001