ICT-security

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.
PDF generated at: Sun, 15 Aug 2010 17:58:06 UTC
Contents

Articles

General security
  Security
  Security risk
  ISO/IEC 27000
  ISO/IEC 27000-series
  Threat
  Risk

1.0 Personnel
  Authentication
  Authorization
  Social engineering (security)
  Security engineering

2.0 Physical
  Physical security

3.0 Environmental and Facilities

4.0 Systems
  Computer security
  Access control
  Access control list
  Password
  Hacker (computer security)
  Malware
  Vulnerability
  Computer virus
  Computer worm
  Exploit (computer security)
  Computer insecurity

5.0 Networks
  Communications security
  Network security

5.1 Internet
  Book:Internet security
  Firewall (computing)
  Denial-of-service attack
  Spam (electronic)
  Phishing

6.0 Information
  Data security
  Information security
  Encryption
  Cryptography
  Bruce Schneier

7.0 Application
  Application security
  Application software
  Software cracking

References
  Article Sources and Contributors
  Image Sources, Licenses and Contributors

Article Licenses
  License
General security
Security
Security is the degree of protection against danger, damage, loss, and criminal activity. Security as a form of protection comprises the structures and processes that provide or improve security as a condition. The Institute for Security and Open Methodologies (ISECOM), in the OSSTMM 3, defines security as "a form of protection where a separation is created between the assets and the threat. This includes but is not limited to the elimination of either the asset or the threat." Security as a national condition was defined in a United Nations study (1986) as the condition under which countries can develop and progress freely.

Security must be compared with related concepts: safety, continuity, reliability. The key difference between security and reliability is that security must take into account the actions of people attempting to cause destruction.

[X-ray machines and metal detectors are used to control what is allowed to pass through an airport security perimeter.]

Different scenarios also give rise to the context in which security is maintained:
• With respect to classified matter: the condition that prevents unauthorized persons from having access to official information that is safeguarded in the interests of national security.
• Measures taken by a military unit, an activity, or an installation to protect itself against all acts designed to, or which may, impair its effectiveness.
Perceived security compared to real security

[Security spikes protect a gated community in the East End of London.]

Perception of security is sometimes poorly mapped to measurable objective security. The perceived effectiveness of security measures is sometimes different from the actual security provided by those measures. The presence of security protections may even be taken for security itself. For example, two computer security programs could be interfering with each other and even cancelling each other's effect, while the owner believes s/he is getting double the protection.

Security theater is a critical term for deployment of measures primarily aimed at raising subjective security in a population without a genuine or commensurate concern for the effects of that measure on, and possibly decreasing, objective security. For example, some consider the screening of airline passengers based on static databases, such as the Computer Assisted Passenger Prescreening System, to have been security theater that created a decrease in objective security.

[Security checkpoint at the entrance to the Delta Air Lines corporate headquarters in Atlanta.]

Perception of security can also increase objective security when it affects or deters malicious behavior, as with visual signs of security protections, such as video surveillance, alarm systems in a home, an anti-theft system in a car such as a LoJack, or signs.
Since some intruders will decide not to attempt to break into such areas or vehicles, there can actually be less
damage to windows in addition to protection of valuable objects inside. Without such advertisement, a car-thief
might, for example, approach a car, break the window, and then flee in response to an alarm being triggered. Either
way, perhaps the car itself and the objects inside aren't stolen, but with perceived security even the windows of the
car have a lower chance of being damaged, increasing the financial security of its owner(s).
However, the non-profit security research group ISECOM has determined that such signs may actually increase the violence, daring, and desperation of an intruder.[1] This claim suggests that perceived security works mostly on the provider and is not security at all.[2]
It is important, however, for signs advertising security not to give clues as to how to subvert that security, for
example in the case where a home burglar might be more likely to break into a certain home if he or she is able to
learn beforehand which company makes its security system.
Categorising security
There is an immense literature on the analysis and categorisation of security. Part of the reason for this is that, in
most security systems, the "weakest link in the chain" is the most important. The situation is asymmetric since the
defender must cover all points of attack while the attacker need only identify a single weak point upon which to
concentrate.
Types

IT realm
• Application security
• Computing security
• Data security
• Information security
• Network security

Physical realm
• Airport security
• Port security/Supply chain security
• Food security
• Home security
• Hospital security
• Physical security
• School security
• Shopping centre security
• Infrastructure security

Political
• Homeland security
• Human security
• International security
• National security
• Public security

Monetary
• Financial security

• Aviation security is a combination of measures and material and human resources intended to counter unlawful interference with civil aviation.
• Operations Security (OPSEC) is a complement to other "traditional" security measures that evaluates the organization from an adversarial perspective.[3]
Security concepts

Certain concepts recur throughout different fields of security:
• Assurance - assurance is the level of guarantee that a security system will behave as expected
• Countermeasure - a countermeasure is a way to stop a threat from triggering a risk event
• Defense in depth - never rely on one single security measure alone
• Exploit - a vulnerability that has been triggered by a threat - a risk of 1.0 (100%)
• Risk - a risk is a possible event which could cause a loss
• Threat - a threat is a method of triggering a risk event that is dangerous
• Vulnerability - a weakness in a target that can potentially be exploited by a threat
Security management in organizations
In the corporate world, various aspects of security were historically addressed separately - notably by distinct and
often noncommunicating departments for IT security, physical security, and fraud prevention. Today there is a
greater recognition of the interconnected nature of security requirements, an approach variously known as holistic
security, "all hazards" management, and other terms.
Inciting factors in the convergence of security disciplines include the development of digital video surveillance
technologies (see Professional video over IP) and the digitization and networking of physical control systems (see
SCADA)[4] [5] . Greater interdisciplinary cooperation is further evidenced by the February 2005 creation of the
Alliance for Enterprise Security Risk Management, a joint venture including leading associations in security (ASIS),
information security (ISSA, the Information Systems Security Association), and IT audit (ISACA, the Information
Systems Audit and Control Association)[6] .
In 2007 the International Organization for Standardization (ISO) released ISO 28000 - Security Management Systems for the supply chain. Although the title includes "supply chain", this standard specifies the requirements for a security management system, including those aspects critical to security assurance, for any organisation or enterprise wishing to manage the security of the organisation and its activities. ISO 28000 is the foremost risk-based security system and is suitable for managing both public and private regulatory security, customs and industry-based security schemes and requirements.
People in the security business

Computer security
• Ross J. Anderson
• Dan Geer
• Andrew Odlyzko
• Bruce Schneier
• Eugene Spafford

National security
• Richard A. Clarke
• David H. Holtzman

Physical security
• James F. Pastor
See also

Concepts
• 3D Security
• Classified information
• Insecurity
• ISO 27000
• ISO 28000
• ISO 31000
• Security breach
• Security increase
• Security Risk
• Surveillance
• Wireless sensor network

Branches
• Communications security
• Computer security
  • Cracking
  • Hacking
  • MySecureCyberspace
  • Phreaking
• Human security
• Information security
  • CISSP
• National security
• Physical Security
• Police
• Public Security Bureau
• Security guard
• Security police
References
[1] http://wiki.answers.com/Q/Do_home_security_systems_prevent_burglaries
[2] http://www.isecom.org/hsm
[3] OSPA Website (http://www.opsecprofessionals.org/)
[4] Taming the Two-Headed Beast (http://www.csoonline.com/read/090402/beast.html), CSOonline, September 2002
[5] Security 2.0 (http://www.csoonline.com/read/041505/constellation.html), CSOonline, April 2005
[6] AESRM Website (http://www.aesrm.org/)
Security risk
Security risk describes the application of the concept of risk to the security risk management paradigm in order to make particular determinations about security-oriented events.
Introduction
Security risk is the demarcation of risk, into the security silo, from the broader enterprise risk management
framework for the purposes of isolating and analysing unique events, outcomes and consequences.[1]
Security risk is often represented, quantitatively, as any event that compromises the assets, operations and objectives of an organisation. 'Event', in the security paradigm, comprises those events undertaken intentionally by actors for purposes that adversely affect the organisation.

The role of the 'actors' and the intentionality of the 'events' provide the differentiation of security risk from other risk management silos, particularly those of safety, environment, quality, operational and financial risk.
Common Approaches to Analysing Security Risk
Risk = Threat × Harm
Risk = Consequence × Threat × Vulnerability
Risk = Consequence × Likelihood
Risk = Consequence × Likelihood × Vulnerability
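As an illustration, the four formulas above can be sketched in Python. The function names, the 0-to-1 scales for threat, likelihood and vulnerability, and the monetary scale for consequence/harm are assumptions chosen for this example, not part of any standard.

```python
# Illustrative sketch of the four common security-risk formulas above.
# Scales (0.0-1.0 for threat, likelihood and vulnerability; monetary
# units for consequence and harm) are assumptions for this example.

def risk_threat_harm(threat, harm):
    # Risk = Threat x Harm
    return threat * harm

def risk_ctv(consequence, threat, vulnerability):
    # Risk = Consequence x Threat x Vulnerability
    return consequence * threat * vulnerability

def risk_cl(consequence, likelihood):
    # Risk = Consequence x Likelihood
    return consequence * likelihood

def risk_clv(consequence, likelihood, vulnerability):
    # Risk = Consequence x Likelihood x Vulnerability
    return consequence * likelihood * vulnerability

# Hypothetical example: a threat with a 25% annual likelihood, against a
# control that fails half the time, for an asset worth 100,000 units.
print(risk_ctv(consequence=100_000, threat=0.25, vulnerability=0.5))  # 12500.0
```

All four are simple products; they differ only in which factors the analyst chooses to estimate separately.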
Psychological Factors relating to Security Risk
Main article: Risk - Risk in Psychology
Given the strong influence affective states can play in the conducting of security risk assessment, many papers have
considered the roles of affect heuristic[2] and biases in skewing findings of the process.[3]
References
[1] Function of security risk assessments to ERM (http://www.optaresystems.com/index.php/optare/publication_detail/security_risk_assessment_enterprise_risk_management/)
[2] Keller, C., Siegrist, M., & Gutscher, H. The Role of the Affect and Availability Heuristics in Risk Communication. Risk Analysis, Vol. 26, No. 3, 2006
[3] Heuristics and risk perception - Risk assessments pitfalls (http://www.optaresystems.com/index.php/optare/publication_detail/heuristics_risk_perception_risk_assessment_pitfalls/)
ISO/IEC 27000
ISO/IEC 27000 is part of a growing family of ISO/IEC Information Security Management Systems (ISMS) standards, the 'ISO/IEC 27000 series'. ISO/IEC 27000 is an international standard entitled: "Information technology - Security techniques - Information security management systems - Overview and vocabulary".
The standard was developed by sub-committee 27 (SC27) of the first Joint Technical Committee (JTC1) of the
International Organization for Standardization and the International Electrotechnical Commission.
ISO/IEC 27000 provides:
• An overview of and introduction to the entire ISO/IEC 27000 family of Information Security Management
Systems (ISMS) standards; and
• A glossary or vocabulary of fundamental terms and definitions used throughout the ISO/IEC 27000 family.
Information security, like many technical subjects, is evolving a complex web of terminology. Relatively few
authors take the trouble to define precisely what they mean, an approach which is unacceptable in the standards
arena as it potentially leads to confusion and devalues formal assessment and certification. As with ISO 9000 and
ISO 14000, the base '000' standard is intended to address this.
Status
Current version: ISO/IEC 27000:2009, available from ISO/ITTF as a free download [1]
Target audience: users of the remaining ISO/IEC 27000-series information security management standards
See also
• ISO/IEC 27000-series
• ISO/IEC 27001
• ISO/IEC 27002 (formerly ISO/IEC 17799)
References
[1] http://standards.iso.org/ittf/PubliclyAvailableStandards/c041933_ISO_IEC_27000_2009.zip
ISO/IEC 27000-series
The ISO/IEC 27000-series (also known as the 'ISMS Family of Standards' or 'ISO27k' for short) comprises
information security standards published jointly by the International Organization for Standardization (ISO) and the
International Electrotechnical Commission (IEC).
The series provides best practice recommendations on information security management, risks and controls within
the context of an overall Information Security Management System (ISMS), similar in design to management
systems for quality assurance (the ISO 9000 series) and environmental protection (the ISO 14000 series).
The series is deliberately broad in scope, covering more than just privacy, confidentiality and IT or technical security
issues. It is applicable to organizations of all shapes and sizes. All organizations are encouraged to assess their
information security risks, then implement appropriate information security controls according to their needs, using
the guidance and suggestions where relevant. Given the dynamic nature of information security, the ISMS concept
incorporates continuous feedback and improvement activities, summarized by Deming's "plan-do-check-act"
approach, that seek to address changes in the threats, vulnerabilities or impacts of information security incidents.
The standards are the product of ISO/IEC JTC1 (Joint Technical Committee 1) SC27 (Sub Committee 27), an
international body that meets in person twice a year.
At present, nine of the standards in the series are publicly available while several more are under development.
Published standards
• ISO/IEC 27000 — Information security management systems — Overview and vocabulary [1]
• ISO/IEC 27001 — Information security management systems — Requirements
• ISO/IEC 27002 — Code of practice for information security management
• ISO/IEC 27003 — Information security management system implementation guidance
• ISO/IEC 27004 — Information security management — Measurement
• ISO/IEC 27005 — Information security risk management
• ISO/IEC 27006 — Requirements for bodies providing audit and certification of information security management systems
• ISO/IEC 27011 — Information security management guidelines for telecommunications organizations based on ISO/IEC 27002
• ISO 27799 — Information security management in health using ISO/IEC 27002 [standard produced by the Health Informatics group within ISO, independently of ISO/IEC JTC1/SC27]
In preparation
• ISO/IEC 27007 - Guidelines for information security management systems auditing (focused on the management
system)
• ISO/IEC 27008 - Guidance for auditors on ISMS controls (focused on the information security controls)
• ISO/IEC 27013 - Guideline on the integrated implementation of ISO/IEC 20000-1 and ISO/IEC 27001
• ISO/IEC 27014 - Information security governance framework
• ISO/IEC 27015 - Information security management guidelines for the finance and insurance sectors
• ISO/IEC 27031 - Guideline for ICT readiness for business continuity (essentially the ICT continuity component
within business continuity management)
• ISO/IEC 27032 - Guideline for cybersecurity (essentially, 'being a good neighbor' on the Internet)
• ISO/IEC 27033 - IT network security, a multi-part standard based on ISO/IEC 18028:2006
• ISO/IEC 27034 - Guideline for application security
• ISO/IEC 27035 - Security incident management
• ISO/IEC 27036 - Guidelines for security of outsourcing
• ISO/IEC 27037 - Guidelines for identification, collection and/or acquisition and preservation of digital evidence
See also
• BS 7799, the original British Standard from which ISO/IEC 17799, ISO/IEC 27002 and ISO/IEC 27001 were
derived
• Document management system
• Sarbanes-Oxley Act
• Standard of Good Practice published by the Information Security Forum
External links
• The ISO 17799 Newsletter [1]
• Opensource software to support ISO 27000 processes [2]
References
[1] http://17799-news.the-hamster.com
[2] http://esis.sourceforge.net/
Threat
A threat is an act of coercion wherein an act is proposed to elicit a negative response. It is a communicated intent to
inflict harm or loss on another person. It is a crime in many jurisdictions. Libertarians hold that a palpable,
immediate, and direct threat of aggression, embodied in the initiation of an overt act, is equivalent to aggression
itself, and that proportionate defensive force would be justified in response to such a threat, if a clear and present
danger exists.[1]
Brazil
In Brazil, the crime of threatening someone, defined as a threat to cause unjust and grave harm, is punishable by a
fine or three months to one year in prison, as described in the Brazilian Penal Code, article 147. Brazilian
jurisprudence does not treat as a crime a threat that was proffered in a heated discussion.
Germany
The German Strafgesetzbuch § 241 punishes the crime of threat with a prison term of up to one year or a fine.
United States
In the United States, federal law criminalizes certain true threats transmitted via the U.S. mail[2] or in interstate
commerce. It also criminalizes threatening the government officials of the United States. Some U.S. states
criminalize cyberbullying.
References
[1] http://mises.org/rothbard/Ethics/twelve.asp
[2] 18 U.S.C. § 876 (http://www.law.cornell.edu/uscode/18/876.html)
Risk
Risk concerns the deviation of one or more results of one or more future events
from their expected value. Technically, the value of those results may be positive
or negative. However, general usage tends to focus only on potential harm that may arise from a future event, which may accrue either from incurring a cost ("downside risk"[1]) or by failing to attain some benefit ("upside risk"[1]).
Historical background
The term risk may be traced back to classical Greek rizikon (Greek ριζα, riza), meaning root, later used in Latin for "cliff". The term is used in Rhapsody M of Homer's Odyssey ("Sirens, Scylla, Charybdis and the bulls of Helios"): Odysseus tried to save himself from Charybdis at the cliffs of Scylla, where his ship was destroyed by heavy seas generated by Zeus as a punishment for his crew's killing of the bulls of Helios (the god of the sun), by grabbing the roots of a wild fig tree.
For the sociologist Niklas Luhmann the term 'risk' is a neologism that appeared with the transition from traditional to
modern society.[2] "In the Middle Ages the term risicum was used in highly specific contexts, above all sea trade and
its ensuing legal problems of loss and damage."[2][3] In the vernacular languages of the 16th century the words rischio and riezgo were used,[2] both terms derived from the Arabic word "رزق" ("rizk"), meaning 'to seek prosperity'. This was introduced to continental Europe through interaction with Middle Eastern and North African Arab traders.
In the English language the term risk appeared only in the 17th century, and "seems to be imported from continental
Europe."[2] When the terminology of risk took hold, it replaced the older notion that thought "in terms of good and
bad fortune."[2] Niklas Luhmann (1996) seeks to explain this transition: "Perhaps, this was simply a loss of
plausibility of the old rhetorics of Fortuna as an allegorical figure of religious content and of prudentia as a (noble)
virtue in the emerging commercial society."[4]
Scenario analysis matured during Cold War confrontations between major powers, notably the United States and the
Soviet Union. It became widespread in insurance circles in the 1970s when major oil tanker disasters forced a more
comprehensive foresight. The scientific approach to risk entered finance in the 1960s with the advent of the capital
asset pricing model and became increasingly important in the 1980s when financial derivatives proliferated. It
reached general professions in the 1990s when the power of personal computing allowed for widespread data
collection and number crunching.
Governments use risk assessment, for example, to set standards for environmental regulation, e.g. "pathway analysis" as practiced by the United States Environmental Protection Agency.
Definitions of risk
There are different definitions of risk for each of several applications. The widely inconsistent and ambiguous use of
the word is one of several current criticisms of the methods to manage risk.[5]
In one definition, "risks" are simply future issues that can be avoided or mitigated, rather than present problems that
must be immediately addressed.[6]
In risk management, the term "hazard" is used to mean an event that could cause harm and the term "risk" is used to
mean simply the probability of something happening.
OHSAS (Occupational Health & Safety Advisory Services) defines risk as the product of the probability of a hazard resulting in an adverse event, times the severity of the event.[7] Mathematically, risk is then often simply defined as:

Risk = (probability of the hazard resulting in an adverse event) × (severity of the event)
One of the first major uses of this concept was at the planning of the Delta Works in 1953, a flood protection
program in the Netherlands, with the aid of the mathematician David van Dantzig.[8] The kind of risk analysis
pioneered here has become common today in fields like nuclear power, aerospace and the chemical industry.
There are many formal methods used to assess or to "measure" risk, which many consider to be a critical factor in
human decision making. Some of these quantitative definitions of risk are well-grounded in sound statistics theory.
However, these measurements of risk rely on failure occurrence data which may be sparse. This makes risk
assessment difficult in hazardous industries such as nuclear energy where the frequency of failures is rare and
harmful consequences of failure are astronomical. The dangerous harmful consequences often necessitate actions to
reduce the probability of failure to infinitesimally small values which are hard to measure and corroborate with
empirical evidence. Often, the probability of a negative event is estimated by using the frequency of past similar
events or by event-tree methods, but probabilities for rare failures may be difficult to estimate if an event tree cannot
be formulated. Methods to calculate the cost of the loss of human life vary depending on the purpose of the
calculation. Specific methods include what people are willing to pay to insure against death,[9] and radiological
release (e.g., GBq of radio-iodine).
Financial risk is often defined as the unexpected variability or volatility of returns and thus includes both potential
worse-than-expected as well as better-than-expected returns. References to negative risk below should be read as
applying to positive impacts or opportunity (e.g., for "loss" read "loss or gain") unless the context precludes this
interpretation.
In statistics, risk is often mapped to the probability of some event seen as undesirable. Usually, the probability of that
event and some assessment of its expected harm must be combined into a believable scenario (an outcome), which
combines the set of risk, regret and reward probabilities into an expected value for that outcome. (See also Expected
utility.)
Thus, in statistical decision theory, the risk function of an estimator δ(x) for a parameter θ, calculated from some observables x, is defined as the expectation value of the loss function L:

R(θ, δ) = E_θ[ L(θ, δ(x)) ] = ∫ L(θ, δ(x)) f(x | θ) dx
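As a numerical illustration of this risk function, the sketch below estimates by Monte Carlo simulation the risk of the sample mean under squared-error loss; the normal sampling model, the parameter values, and the function name are assumptions chosen for the example.

```python
import random

# Illustrative sketch (assumed example): under squared-error loss
# L(theta, d) = (theta - d)^2, the risk of the sample-mean estimator
# of a normal mean is its mean squared error, which theory says
# equals sigma^2 / n.

def risk_of_sample_mean(theta, sigma, n, trials=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(theta, sigma) for _ in range(n)]
        estimate = sum(sample) / n          # delta(x): the sample mean
        total += (theta - estimate) ** 2    # squared-error loss
    return total / trials                   # Monte Carlo expectation over x

# Theoretical risk: sigma^2 / n = 4 / 10 = 0.4; the simulated value
# should come out close to that.
print(risk_of_sample_mean(theta=5.0, sigma=2.0, n=10))
```

The simulation simply averages the loss over many draws of x, which is exactly the expectation the formula denotes.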
In information security, a risk is written as an asset, the threats to the asset and the vulnerability that can be exploited
by the threats to impact the asset - an example being: Our desktop computers (asset) can be compromised by
malware (threat) entering the environment as an email attachment (vulnerability).
The risk is then assessed as a function of three variables:
1. the probability that there is a threat
2. the probability that there are any vulnerabilities
3. the potential impact to the business.
The two probabilities are sometimes combined and are also known as likelihood. If any of these variables
approaches zero, the overall risk approaches zero.
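The three-variable model just described can be sketched in a few lines of Python; the function name, the 0-to-1 probability scales, and the example numbers are illustrative assumptions, not part of any standard.

```python
# Sketch of the three-variable information-security risk model
# described above; names, scales and numbers are illustrative.

def infosec_risk(p_threat, p_vulnerability, impact):
    """Risk = threat probability x vulnerability probability x impact.

    The product of the two probabilities is the 'likelihood'; if any
    variable approaches zero, the overall risk approaches zero.
    """
    likelihood = p_threat * p_vulnerability
    return likelihood * impact

# Hypothetical malware example (asset: desktop computers, threat:
# malware, vulnerability: email attachments entering the environment).
print(infosec_risk(p_threat=0.8, p_vulnerability=0.25, impact=50_000))  # 10000.0
```

Setting any argument to zero drives the result to zero, matching the statement above.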
Risk versus uncertainty
Risk: Combination of the likelihood of an occurrence of a hazardous event or exposure(s) and the severity of injury
or ill health that can be caused by the event or exposure(s)
In his seminal work Risk, Uncertainty, and Profit, Frank Knight (1921) established the distinction between risk and
uncertainty.
... Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated.
The term "risk," as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their
causal relations to the phenomena of economic organization, are categorically different. ... The essential fact is that "risk" means in some cases
a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and
crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. ... It will appear that a
measurable uncertainty, or "risk" proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an
uncertainty at all. We ... accordingly restrict the term "uncertainty" to cases of the non-quantitive type.
Thus, Knightian uncertainty is immeasurable, not possible to calculate, while in the Knightian sense risk is measurable.
Another distinction between risk and uncertainty is proposed in How to Measure Anything: Finding the Value of
Intangibles in Business and The Failure of Risk Management: Why It's Broken and How to Fix It by Doug
Hubbard:[10] [11]
Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The
"true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There
is a 60% chance this market will double in five years"
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other
undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses.
Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in
exploratory drilling costs".
In this sense, Hubbard uses the terms so that one may have uncertainty without risk but not risk without uncertainty.
We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If
we bet money on the outcome of the contest, then we have a risk. In both cases there is more than one outcome.
The measure of uncertainty refers only to the probabilities assigned to outcomes, while the measure of risk requires
both probabilities for outcomes and losses quantified for outcomes.
Risk as a vector quantity
Hubbard also argues that defining risk as the product of impact and probability presumes (probably incorrectly) that the decision makers are risk neutral.[11] Only for a risk neutral person is the "certain monetary equivalent" exactly equal to the probability of the loss times the amount of the loss. For example, a risk neutral person would consider a 20% chance of winning $1 million exactly equal to $200,000 (or a 20% chance of losing $1 million to be exactly equal to losing $200,000). However, most decision makers are not actually risk neutral and would not consider these equivalent choices. This gave rise to prospect theory and cumulative prospect theory. Hubbard proposes instead that risk is a kind of "vector quantity" that does not collapse the probability and magnitude of a risk by presuming anything about the risk tolerance of the decision maker. Risks are simply described as a set or function of possible loss amounts, each associated with specific probabilities. This array cannot be collapsed into a single value until the risk tolerance of the decision maker is quantified.
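The "vector quantity" view can be sketched as follows: the risk is kept as a list of (probability, loss) pairs and only collapsed once a risk attitude is chosen. The exponential-utility collapse shown for the risk-averse case is an illustrative assumption, not something specified in the text.

```python
import math

# Hubbard's 'vector quantity' view (sketch): a risk stays a list of
# (probability, loss) pairs until a risk attitude collapses it.
# The dry-oil-well example above: 40% chance of a $12M loss.
risk = [(0.40, 12_000_000), (0.60, 0)]

def expected_loss(risk):
    """Risk-neutral collapse: probability times loss, summed."""
    return sum(p * loss for p, loss in risk)

def certainty_equivalent(risk, risk_tolerance):
    """Risk-averse collapse via exponential utility (an assumed choice);
    a smaller risk_tolerance 'prices' the same vector higher."""
    eu = sum(p * math.exp(loss / risk_tolerance) for p, loss in risk)
    return risk_tolerance * math.log(eu)

print(expected_loss(risk))                            # 4800000.0
print(certainty_equivalent(risk, 10_000_000))         # exceeds 4,800,000
```

A risk-neutral decision maker prices the vector at its expected loss; a risk-averse one prices the same vector higher, which is exactly why the collapse must wait for the risk tolerance.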
Risk can be both negative and positive, but it tends to be the negative side that people focus on. This is because some things can be dangerous, such as putting their own or someone else's life at risk. Risks concern people because they think that those risks will have a negative effect on their future.
Insurance and health risk
Insurance is a risk-reducing investment in which the buyer pays a small fixed amount to be protected from a
potential large loss. Gambling is a risk-increasing investment, wherein money on hand is risked for a possible large
return, but with the possibility of losing it all. Purchasing a lottery ticket is a very risky investment with a high
chance of no return and a small chance of a very high return. In contrast, putting money in a bank at a defined rate of
interest is a risk-averse action that gives a guaranteed return of a small gain and precludes other investments with
possibly higher gain.
Risks in personal health may be reduced by primary prevention actions that decrease early causes of illness or by
secondary prevention actions after a person has clearly measurable clinical signs or symptoms recognized as risk
factors. Tertiary prevention reduces the negative impact of an already established disease by restoring function and
reducing disease-related complications. Ethical medical practice requires careful discussion of risk factors with
individual patients to obtain informed consent for secondary and tertiary prevention efforts, whereas public health
efforts in primary prevention require education of the entire population at risk. In each case, careful communication
about risk factors, likely outcomes and certainty must distinguish between causal events that must be decreased and
associated events that may be merely consequences rather than causes.
Economic risk
Economic risks can be manifested in lower incomes or higher expenditures than expected. The causes can be many,
for instance, the hike in the price for raw materials, the lapsing of deadlines for construction of a new operating
facility, disruptions in a production process, emergence of a serious competitor on the market, the loss of key
personnel, the change of a political regime, or natural disasters.[12]
In business
Means of assessing risk vary widely between professions. Indeed, they may define these professions; for example, a
doctor manages medical risk, while a civil engineer manages risk of structural failure. A professional code of ethics
is usually focused on risk assessment and mitigation (by the professional on behalf of client, public, society or life in
general).
In the workplace, incidental and inherent risks exist. Incidental risks are those that occur naturally in the business but
are not part of the core of the business. Inherent risks have a negative effect on the operating profit of the business.
Risk-sensitive industries
Some industries manage risk in a highly quantified and numerate way. These include the nuclear power and aircraft
industries, where the possible failure of a complex series of engineered systems could result in highly undesirable
outcomes. The usual measure of risk for a class of events is then:
R = probability of the event × consequence of the event
The total risk is then the sum of the individual class-risks.
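The summation above can be sketched as follows; the event classes, probabilities, and consequence units are hypothetical, chosen only to illustrate how per-class risks combine into a total:

```python
# A minimal sketch (hypothetical numbers) of the class-risk formula above:
# R = probability of the event x consequence, summed over event classes.
event_classes = {
    # class name: (probability per year, consequence in arbitrary cost units)
    "minor fault": (1e-2, 1.0),
    "major fault": (1e-4, 100.0),
    "catastrophic failure": (1e-6, 10000.0),
}

def class_risk(probability, consequence):
    # Risk contributed by one class of events.
    return probability * consequence

total_risk = sum(class_risk(p, c) for p, c in event_classes.values())
```

Note that in this deliberately constructed example each class contributes equally (0.01) to the total, which is why low-probability, high-consequence events cannot simply be ignored.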
In the nuclear industry, consequence is often measured in terms of off-site radiological release, and this is often
banded into five or six decade-wide bands.
The risks are evaluated using fault tree/event tree techniques (see safety engineering). Where these risks are low,
they are normally considered to be "Broadly Acceptable". A higher level of risk (typically up to 10 to 100 times what
is considered Broadly Acceptable) has to be justified against the costs of reducing it further and the possible benefits
that make it tolerable—these risks are described as "Tolerable if ALARP". Risks beyond this level are classified as
"Intolerable".
The level of risk deemed Broadly Acceptable has been considered by regulatory bodies in various countries—an
early attempt by UK government regulator and academic F. R. Farmer used the example of hill-walking and similar
activities, which have definable risks that people appear to find acceptable. This resulted in the so-called Farmer
Curve of acceptable probability of an event versus its consequence.
The technique as a whole is usually referred to as Probabilistic Risk Assessment (PRA) (or Probabilistic Safety
Assessment, PSA). See WASH-1400 for an example of this approach.
In finance
In finance, risk is the probability that an investment's actual return will be different than expected. This includes the
possibility of losing some or all of the original investment. Some regard a calculation of the standard deviation of the
historical returns or average returns of a specific investment as providing some historical measure of risk; see
modern portfolio theory. Financial risk may be market-dependent, determined by numerous market factors, or
operational, resulting from fraudulent behavior (e.g. Bernard Madoff). Recent studies suggest that testosterone level
plays a major role in risk taking during financial decisions.[13] [14]
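The historical-volatility measure mentioned above can be sketched in a few lines; the return series is hypothetical:

```python
# A minimal sketch: the standard deviation of historical returns as a crude
# historical measure of risk (volatility). The return series is hypothetical.
from statistics import mean, pstdev

returns = [0.05, -0.02, 0.03, 0.07, -0.04]  # hypothetical periodic returns

avg_return = mean(returns)    # average historical return
volatility = pstdev(returns)  # population standard deviation, read as "risk"
```

Modern portfolio theory builds on exactly these two quantities: expected return and the dispersion around it.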
In finance, risk has no one definition, but some theorists, notably Ron Dembo, have defined quite general methods to
assess risk as an expected after-the-fact level of regret. Such methods have been uniquely successful in limiting
interest rate risk in financial markets. Financial markets are considered to be a proving ground for general methods
of risk assessment. However, these methods are also hard to understand. The mathematical difficulties interfere with
other social goods such as disclosure, valuation and transparency. In particular, it is not always obvious if such
financial instruments are "hedging" (purchasing/selling a financial instrument specifically to reduce or cancel out the
risk in another investment) or "speculation" (increasing measurable risk and exposing the investor to catastrophic
loss in pursuit of very high windfalls that increase expected value).
As regret measures rarely reflect actual human risk-aversion, it is difficult to determine if the outcomes of such
transactions will be satisfactory. Risk seeking describes an individual whose utility function's second derivative is
positive. Such an individual would willingly (actually pay a premium to) assume all risk in the economy and is hence
not likely to exist.
In financial markets, one may need to measure credit risk, information timing and source risk, probability model risk,
and legal risk if there are regulatory or civil actions taken as a result of some "investor's regret". Knowing one's
risk appetite in conjunction with one's financial well-being is crucial.
A fundamental idea in finance is the relationship between risk and return (see modern portfolio theory). The greater
the potential return one might seek, the greater the risk that one generally assumes. A free market reflects this
principle in the pricing of an instrument: strong demand for a safer instrument drives its price higher (and its return
proportionately lower), while weak demand for a riskier instrument drives its price lower (and its potential return
thereby higher).
"For example, a US Treasury bond is considered to be one of the safest investments and, when compared to a
corporate bond, provides a lower rate of return. The reason for this is that a corporation is much more likely to go
bankrupt than the U.S. government. Because the risk of investing in a corporate bond is higher, investors are offered
a higher rate of return."
The most popular, and lately also the most vilified, risk measure is Value-at-Risk (VaR). There are different
types of VaR: long-term VaR, marginal VaR, factor VaR and shock VaR.[15] The last of these is used to measure
risk under extreme market-stress conditions.
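As an illustration, here is a minimal historical-simulation VaR, one common way of computing VaR (the specialized variants named above are not modelled, and the return series is hypothetical):

```python
# A minimal sketch of historical-simulation Value-at-Risk: the loss that is
# not exceeded with the given confidence, read off the sorted return history.
def historical_var(returns, confidence=0.95):
    ordered = sorted(returns)                     # worst return first
    index = int((1 - confidence) * len(ordered))  # tail cut-off
    return -ordered[index]                        # report loss as a positive number

daily_returns = [0.01, -0.03, 0.02, -0.01, 0.00, -0.05, 0.03, 0.01, -0.02, 0.02]
var_95 = historical_var(daily_returns)  # 0.05, i.e. a 5% loss
```

With only ten observations the 95% cut-off lands on the single worst day; real VaR calculations use far longer histories or parametric and Monte-Carlo variants.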
In public works
In a peer reviewed study of risk in public works projects located in twenty nations on five continents, Flyvbjerg,
Holm, and Buhl (2002, 2005) documented high risks for such ventures for both costs[16] and demand.[17] Actual
costs of projects were typically higher than estimated costs; cost overruns of 50% were common, overruns above
100% not uncommon. Actual demand was often lower than estimated; demand shortfalls of 25% were common, of
50% not uncommon.
Due to such cost and demand risks, cost-benefit analyses of public works projects have proved to be highly
uncertain.
The main causes of cost and demand risks were found to be optimism bias and strategic misrepresentation. Measures
identified to mitigate this type of risk are better governance through incentive alignment and the use of reference
class forecasting.[18]
In human services
Huge ethical and political issues arise when human beings themselves are seen or treated as 'risks', or when the risk
decision making of people who use human services might have an impact on that service. The experience of many
people who rely on human services for support is that 'risk' is often used as a reason to prevent them from gaining
further independence or fully accessing the community, and that these services are often unnecessarily risk
averse.[19]
Risk in psychology
Regret
In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk
aversion (preferring the status quo in case one becomes worse off).
Framing
Framing[20] is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality
(our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because the
probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents
caused by drunk driving—partly because any given driver frames the problem by largely or totally ignoring the risk
of a serious or fatal accident.
For instance, an extremely disturbing event (an attack by hijacking, or moral hazards) may be ignored in analysis
despite the fact it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be
ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human
tendencies for error and wishful thinking often affect even the most rigorous applications of the scientific method
and are a major concern of the philosophy of science.
All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias: No group of
people assessing risk is immune to "groupthink": acceptance of obviously wrong answers simply because it is
socially painful to disagree, where there are conflicts of interest. One effective way to solve framing problems in risk
assessment or measurement (although some argue that risk cannot be measured, only assessed) is to raise others'
fears or personal ideals by way of completeness.
Neurobiology of Framing
Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been
shown to take a more global perspective[21] while greater left prefrontal activity relates to local or focal
processing[22]
From the Theory of Leaky Modules[23] McElroy and Seta proposed that they could predictably alter the framing
effect by the selective manipulation of regional prefrontal activity with finger tapping or monaural listening.[24] The
result was as expected. Rightward tapping or listening had the effect of narrowing attention such that the frame was
ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially
because directed tapping or listening is easily done.
Fear as intuitive risk assessment
For the time being, people rely on their fear and hesitation to keep them out of the most profoundly unknown
circumstances.
In The Gift of Fear, Gavin de Becker argues that "True fear is a gift. It is a survival signal that sounds only in the
presence of danger. Yet unwarranted fear has assumed a power over us that it holds over no other creature on Earth.
It need not be this way."
Risk could be said to be the way we collectively measure and share this "true fear"—a fusion of rational doubt,
irrational fear, and a set of unquantified biases from our own experience.
The field of behavioral finance focuses on human risk-aversion, asymmetric regret, and other ways that human
financial behavior varies from what analysts call "rational". Risk in that case is the degree of uncertainty associated
with a return on an asset.
Recognizing and respecting the irrational influences on human decision making may do much to reduce disasters
caused by naive risk assessments that pretend to rationality but in fact merely fuse many shared biases together.
Risk assessment and management
Because planned actions are subject to large cost and benefit risks, proper risk assessment and risk management for
such actions are crucial to making them successful.[25]
Since risk assessment and management are essential in security management, the two are tightly related. Security
assessment methodologies like CRAMM contain risk assessment modules as an important part of the first steps of
the methodology. Conversely, risk assessment methodologies like Mehari have evolved into security assessment
methodologies. An ISO standard on risk management (principles and guidelines on implementation) is currently
being drafted under the code ISO 31000, with a target publication date of 30 May 2009.
Risk in auditing
The audit risk model expresses the risk of an auditor providing an inappropriate opinion of a commercial entity's
financial statements. It can be analytically expressed as:
AR = IR × CR × DR
where AR is audit risk, IR is inherent risk, CR is control risk, and DR is detection risk.
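The model can be sketched numerically; the factor values below are hypothetical:

```python
# A minimal sketch of the audit risk model AR = IR x CR x DR.
# The factor values are hypothetical probabilities in [0, 1].
def audit_risk(inherent_risk, control_risk, detection_risk):
    return inherent_risk * control_risk * detection_risk

ar = audit_risk(inherent_risk=0.8, control_risk=0.5, detection_risk=0.1)
# In practice the model is often rearranged: given a target AR and assessed
# IR and CR, the auditor solves for the detection risk DR = AR / (IR * CR)
# and plans audit procedures accordingly.
```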
See also
• Applied information economics
• Adventure
• Ambiguity
• Ambiguity aversion
• Benefit shortfall
• Cindynics
• Civil defense
• Cost overrun
• Credit risk
• Crisis
• Cultural Theory of risk
• Early case assessment
• Emergency
• Ergonomics
• Event chain methodology
• Financial risk
• Fuel price risk management
• Hazard
• Hazard prevention
• Identity resolution
• Inherent risk
• Insurance industry
• Interest rate risk
• International Risk Governance Council
• Investment risk
• ISO 31000
• ISO 28000
• Legal risk
• Life-critical system
• Liquidity risk
• List of books about risk
• Loss aversion
• Market risk
• Megaprojects and risk
• Operational risk
• Optimism bias
• Political risk
• Preventive maintenance
• Preventive medicine
• Probabilistic risk assessment
• Reference class forecasting
• Reinvestment risk
• Reputational risk
• Risk analysis
• Risk aversion
• Riskbase
• Risk factor (finance)
• Risk homeostasis
• Risk management
• Risk-neutral measure
• Risk perception
• Risk register
• Sampling risk
• Security risk
• Systemic risk
• Uncertainty
• Value at risk
Bibliography
Referred literature
• Bent Flyvbjerg, 2006: From Nobel Prize to Project Management: Getting Risks Right. Project Management
Journal, vol. 37, no. 3, August, pp. 5–15. Available at homepage of author [26]
• James Franklin, 2001: The Science of Conjecture: Evidence and Probability Before Pascal, Baltimore: Johns
Hopkins University Press.
• Niklas Luhmann, 1996: Modern Society Shocked by its Risks (= University of Hongkong, Department of
Sociology Occasional Papers 17), Hongkong, available via HKU Scholars HUB [27]
Books
• Historian David A. Moss's book When All Else Fails [28] explains the U.S. government's historical role as risk
manager of last resort.
• Peter L. Bernstein. Against the Gods ISBN 0-471-29563-9. Risk explained and its appreciation by man traced
from earliest times through all the major figures of their ages in mathematical circles.
• Rescher, Nicholas (1983). A Philosophical Introduction to the Theory of Risk Evaluation and Measurement.
University Press of America.
• Porteous, Bruce T.; Pradip Tapadar (December 2005). Economic Capital and Financial Risk Management for
Financial Services Firms and Conglomerates. Palgrave Macmillan. ISBN 1-4039-3608-0.
• Tom Kendrick (2003). Identifying and Managing Project Risk: Essential Tools for Failure-Proofing Your Project.
AMACOM/American Management Association. ISBN 978-0814407615.
• Flyvbjerg, Bent, Nils Bruzelius, and Werner Rothengatter, 2003. Megaprojects and Risk: An Anatomy of
Ambition (Cambridge: Cambridge University Press). [29]
• David Hillson (2007). Practical Project Risk Management: The Atom Methodology. Management Concepts.
ISBN 978-1567262025.
• Kim Heldman (2005). Project Manager's Spotlight on Risk Management. Jossey-Bass. ISBN 978-0782144116.
• Dirk Proske (2008). Catalogue of risks - Natural, Technical, Social and Health Risks. Springer.
ISBN 978-3540795544.
• Gardner, Dan, Risk: The Science and Politics of Fear [30], Random House, Inc., 2008. ISBN 0771032994
Articles and papers
• Clark, L., Manes, F., Antoun, N., Sahakian, B. J., & Robbins, T. W. (2003). "The contributions of lesion laterality
and lesion volume to decision-making impairment following frontal lobe damage." Neuropsychologia, 41,
1474-1483.
• Drake, R. A. (1985). "Decision making and risk taking: Neurological manipulation with a proposed consistency
mediation." Contemporary Social Psychology, 11, 149-152.
• Drake, R. A. (1985). "Lateral asymmetry of risky recommendations." Personality and Social Psychology Bulletin,
11, 409-417.
• Gregory, Kent J., Bibbo, Giovanni and Pattison, John E. (2005), "A Standard Approach to Measurement
Uncertainties for Scientists and Engineers in Medicine", Australasian Physical and Engineering Sciences in
Medicine 28(2):131-139.
• Hansson, Sven Ove. (2007). "Risk" [31], The Stanford Encyclopedia of Philosophy (Summer 2007 Edition),
Edward N. Zalta (ed.), forthcoming [32].
• Holton, Glyn A. (2004). "Defining Risk" [33], Financial Analysts Journal, 60 (6), 19–25. A paper exploring the
foundations of risk. (PDF file)
• Knight, F. H. (1921) Risk, Uncertainty and Profit, Chicago: Houghton Mifflin Company. (Cited at: [34], § I.I.26.)
• Kruger, Daniel J., Wang, X.T., & Wilke, Andreas (2007) "Towards the development of an evolutionarily valid
domain-specific risk-taking scale" [35], Evolutionary Psychology (PDF file)
• Metzner-Szigeth, A. (2009). "Contradictory Approaches? – On Realism and Constructivism in the Social
Sciences Research on Risk, Technology and the Environment." Futures, Vol. 41, No. 2, March 2009,
pp. 156–170 (fulltext journal: [36]) (free preprint: [37])
• Miller, L. (1985). "Cognitive risk taking after frontal or temporal lobectomy I. The synthesis of fragmented visual
information." Neuropsychologia, 23, 359–369.
• Miller, L., & Milner, B. (1985). "Cognitive risk taking after frontal or temporal lobectomy II. The synthesis of
phonemic and semantic information." Neuropsychologia, 23, 371–379.
• Neill, M., Allen, J., Woodhead, N., Reid, S., Irwin, L., Sanderson, H. (2008) "A Positive Approach to Risk
Requires Person Centred Thinking", London, CSIP Personalisation Network, Department of Health. Available from:
http://networks.csip.org.uk/Personalisation/Topics/Browse/Risk/ [Accessed 21 July 2008]
External links
• The Wiktionary definition of risk
• Risk [31] - The entry of the Stanford Encyclopedia of Philosophy
• Risk Management magazine [38], a publication of the Risk and Insurance Management Society.
• Risk and Insurance [39]
• StrategicRISK, a risk management journal [40]
• "Risk preference and religiosity" [41], an article from the Institute for the Biocultural Study of Religion [42]
References
[1] http://pages.stern.nyu.edu/~adamodar/pdfiles/invphil/ch2.pdf
[2] Luhmann 1996:3
[3] James Franklin, 2001: The Science of Conjecture: Evidence and Probability Before Pascal, Baltimore: Johns Hopkins University Press, 274
[4] Luhmann 1996:4
[5] Douglas Hubbard, The Failure of Risk Management: Why It's Broken and How to Fix It, John Wiley & Sons, 2009
[6] E.g. "Risk is the unwanted subset of a set of uncertain outcomes." (Cornelius Keating)
[7] "Risk is a combination of the likelihood of an occurrence of a hazardous event or exposure(s) and the severity of injury or ill health that can
be caused by the event or exposure(s)" (OHSAS 18001:2007).
[8] Wired Magazine, Before the levees break (http://www.wired.com/science/planetearth/magazine/17-01/ff_dutch_delta?currentPage=3),
page 3
[9] Landsburg, Steven (2003-03-03). "Is your life worth $10 million?" (http://www.slate.com/id/2079475/). Everyday Economics (Slate).
Retrieved 2008-03-17.
[10] Douglas Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, p. 46, John Wiley & Sons, 2007
[11] Douglas Hubbard, The Failure of Risk Management: Why It's Broken and How to Fix It, John Wiley & Sons, 2009
[12] http://ssrn.com/abstract=1012812
[13] Sapienza P., Zingales L. and Maestripieri D. 2009. Gender differences in financial risk aversion and career choices are affected by
testosterone. Proceedings of the National Academy of Sciences.
[14] Apicella C. L. et al. Testosterone and financial risk preferences. Evolution and Human Behavior. Vol. 29, Issue 6, 384-390. abstract
(http://www.ehbonline.org/article/S1090-5138(08)00067-6/abstract)
[15] Value at risk
[16] http://flyvbjerg.plan.aau.dk/JAPAASPUBLISHED.pdf
[17] http://flyvbjerg.plan.aau.dk/Traffic91PRINTJAPA.pdf
[18] http://flyvbjerg.plan.aau.dk/0406DfT-UK%20OptBiasASPUBL.pdf
[19] A person centred approach to risk - Risk - Advice on Personalisation - Personalisation - Homepage - CSIP Networks
(http://networks.csip.org.uk/Personalisation/Topics/Browse/Risk/?parent=3151&child=3681)
[20] Amos Tversky / Daniel Kahneman, 1981. "The Framing of Decisions and the Psychology of Choice."
[21] Schatz, J., Craft, S., Koby, M., & DeBaun, M. R. (2004). Asymmetries in visual-spatial processing following childhood stroke.
Neuropsychology, 18, 340-352.
[22] Volberg, G., & Hubner, R. (2004). On the role of response conflicts and stimulus position for hemispheric differences in global/local
processing: An ERP study. Neuropsychologia, 42, 1805-1813.
[23] Drake, R. A. (2004). Selective potentiation of proximal processes: Neurobiological mechanisms for spread of activation. Medical Science
Monitor, 10, 231-234.
[24] McElroy, T., & Seta, J. J. (2004). On the other hand am I rational? Hemisphere activation and the framing effect. Brain and Cognition, 55,
572-580.
[25] Flyvbjerg 2006
[26] http://flyvbjerg.plan.aau.dk/Publications2006/Nobel-PMJ2006.pdf
[27] http://hub.hku.hk/handle/123456789/38822
[28] http://www.hup.harvard.edu/catalog/MOSWHE.html
[29] http://books.google.com/books?vid=ISBN0521009464&id=RAV5P-50UjEC&printsec=frontcover
[30] http://books.google.com/books?id=5j_8xF8vUlAC&printsec=frontcover
[31] http://plato.stanford.edu/entries/risk/
[32] http://plato.stanford.edu/archives/sum2007/entries/risk/
[33] http://www.riskexpertise.com/papers/risk.pdf
[34] http://www.econlib.org/library/Knight/knRUP1.html
[35] http://www.epjournal.net/filestore/ep05555568.pdf
[36] http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V65-4TGS7JY-1&_user=10&_coverDate=04%2F30%2F2009&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=054fec1f03e9ec784596add85197d2a8
[37] http://egora.uni-muenster.de/ifs/personen/bindata/metznerszigeth_contradictory_approaches_preprint.PDF
[38] http://www.rmmag.com/
[39] http://www.riskandinsurance.com/
[40] http://www.strategicrisk.co.uk/
[41] http://ibcsr.org/index.php?option=com_content&view=article&id=149:risk-preference-and-religiosity&catid=25:research-news&Itemid=59
[42] http://ibcsr.org/index.php
1.0 Personnel
Authentication
Authentication (from Greek: αυθεντικός; real or genuine, from authentes; author) is the act of establishing or
confirming something (or someone) as authentic, that is, that claims made by or about the subject are true
("authentification" is a French language variant of this word). This might involve confirming the identity of a person,
tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claims to be, or assuring
that a computer program is a trusted one.
Authentication methods
In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain
famous person, or was produced in a certain place or period of history.
There are two types of techniques for doing this.
The first is comparing the attributes of the object itself to what is known about objects of that origin. For example, an
art expert might look for similarities in the style of painting, check the location and form of a signature, or compare
the object to an old photograph. An archaeologist might use carbon dating to verify the age of an artifact, do a
chemical analysis of the materials used, or compare the style of construction or decoration to other artifacts of
similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to
examine the authenticity of audio recordings, photographs, or videos.
Attribute comparison may be vulnerable to forgery. In general, it relies on the fact that creating a forgery
indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, or that the
amount of effort required to do so is considerably greater than the amount of money that can be gained by selling the
forgery.
In art and antiques certificates are of great importance, authenticating an object of interest and value. Certificates
can, however, also be forged and the authentication of these pose a problem. For instance, the son of Han van
Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as
well; see the article Jacques van Meegeren.
Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification,
depending on the risk of getting caught.
The second type relies on documentation or other external affirmations. For example, the rules of evidence in
criminal courts often require establishing the chain of custody of evidence presented. This can be accomplished
through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some
antiques are accompanied by certificates attesting to their authenticity. External records have their own problems of
forgery and perjury, and are also vulnerable to being separated from the artifact and lost.
Currency and other financial instruments commonly use the first type of authentication method. Bills, coins, and
cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel,
watermarks, and holographic imagery, which are easy for receivers to verify.
Consumer goods such as pharmaceuticals, perfume, fashion clothing can use either type of authentication method to
prevent counterfeit goods from taking advantage of a popular brand's reputation (damaging the brand owner's sales
and reputation). A trademark is a legally protected marking or other identifying feature which aids consumers in the
identification of genuine brand-name goods.
Product authentication
Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods such as
electronics, music, apparel, and counterfeit medications have been sold as being legitimate. Efforts to control the
supply chain and to educate consumers to evaluate the packaging and labeling help ensure that authentic products
are sold and used.
Information content
The authentication of information can pose special problems
(especially man-in-the-middle attacks), and is often wrapped up with
authenticating identity.
[Image: A security hologram label on an electronics box, used for authentication]
Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or
recording is available, then the medium itself (or its packaging - anything from a box to e-mail headers) can help
prove or disprove the authenticity of the document.
However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself
to use in authentication.
Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a
given message originated from or was relayed by them. These involve authentication factors like:
• A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.
• A shared secret, such as a passphrase, in the content of the message.
• An electronic signature; public key infrastructure is often used to cryptographically guarantee that a message has
been signed by the holder of a particular private key.
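As a sketch of the shared-secret factor listed above, a message can carry an authentication tag computed with an HMAC, which the recipient recomputes and compares; the secret and the messages below are hypothetical:

```python
# A minimal sketch of message authentication with a shared secret (an HMAC).
# The shared secret and messages are hypothetical.
import hashlib
import hmac

shared_secret = b"correct horse battery staple"  # known to author and reader

def tag(message: bytes) -> bytes:
    # Authentication tag the author attaches to the message.
    return hmac.new(shared_secret, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # Recipient recomputes the tag; constant-time compare resists timing attacks.
    return hmac.compare_digest(tag(message), received_tag)

msg = b"the artifact is genuine"
assert verify(msg, tag(msg))                    # untampered message passes
assert not verify(b"tampered message", tag(msg))  # altered message fails
```

Unlike a public-key electronic signature, an HMAC proves only that the sender knew the shared secret; it cannot identify which of the secret's holders produced the message.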
The opposite problem is detection of plagiarism, where information from a different author is passed off as a
person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very
similar text with different attribution. In some cases, excessively high quality or a style mismatch may raise
suspicion of plagiarism.
Factual verification
Determining the truth or factual accuracy of information in a message is generally considered a separate problem
from authentication. A wide range of techniques, from detective work to fact checking in journalism, to scientific
experiment might be employed.
Video authentication
With closed circuit television cameras in place in many public places it has become necessary to conduct video
authentication to establish credibility when video CCTV recordings are used to identify crime. CCTV is a visual
assessment tool. Visual Assessment means having proper identifiable or descriptive information during or after an
incident. These systems should not be used independently from other security measures and their video recordings
must be authenticated in order to be proven genuine when identifying an accident or crime. [1]
Authentication factors and identity
The ways in which someone may be authenticated fall into three categories, based on what are known as the factors
of authentication: something you know, something you have, or something you are. Each authentication factor
covers a range of elements used to authenticate or verify a person's identity prior to being granted access, approving
a transaction request, signing a document or other work product, granting authority to others, and establishing a
chain of authority.
Security research has determined that for a positive identification, elements from at least two, and preferably all
three, factors should be verified.[2] The three factors (classes) and some elements of each factor are:
• the ownership factors: Something the user has (e.g., wrist band, ID card, security token, software token, phone,
or cell phone)
• the knowledge factors: Something the user knows (e.g., a password, pass phrase, or personal identification
number (PIN), challenge response (the user must answer a question))
• the inherence factors: Something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are
assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric
identifier).
Two-factor authentication
When elements representing two factors are required for identification, the term two-factor authentication is
applied; for example, a bankcard (something the user has) and a PIN (something the user knows). Business networks
may require users to provide a password (knowledge factor) and a random number from a security token (ownership
factor). Access to a very high security system might require a mantrap screening of height, weight, facial, and
fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but
this is still a two-factor authentication.
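A minimal two-factor check, combining a knowledge factor (password) and an ownership factor (a one-time code from a security token), can be sketched as follows; the stored values and token code are hypothetical, and a real system would use salted password hashes and a time-based one-time-password algorithm:

```python
# A minimal sketch of two-factor authentication: a knowledge factor plus an
# ownership factor. Stored values and the token code are hypothetical.
import hashlib

stored_password_hash = hashlib.sha256(b"s3cret").hexdigest()  # knowledge factor
current_token_code = "492031"  # code shown on the user's security token

def two_factor_ok(password: str, token_code: str) -> bool:
    knowledge = hashlib.sha256(password.encode()).hexdigest() == stored_password_hash
    ownership = token_code == current_token_code
    return knowledge and ownership  # both factors must succeed

assert two_factor_ok("s3cret", "492031")
assert not two_factor_ok("s3cret", "000000")  # one factor alone is not enough
assert not two_factor_ok("wrong", "492031")
```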
History and state-of-the-art
Historically, fingerprints have been used as the most authoritative method of authentication, but recent court cases in
the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as
well, fingerprints have been shown to be easily spoofable, with British Telecom's top computer-security official
noting that "few" fingerprint readers have not already been tricked by one spoof or another.[3] Hybrid or two-tiered
authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB
device.
In a computer data context, cryptographic methods have been developed (see digital signature and
challenge-response authentication) which are currently not spoofable if and only if the originator's key has not been
compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is
irrelevant. It is not known whether these cryptographically based authentication methods are provably secure since
unanticipated mathematical developments may make them vulnerable to attack in future. If that were to occur, it may
call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned
when a new attack on the cryptography underlying the signature is discovered.
Strong authentication
The U.S. Government's National Information Assurance Glossary defines strong authentication as a
layered authentication approach relying on two or more authenticators to establish the identity of an
originator or receiver of information.
In litigation, electronic evidence such as computer records, digital audio recordings, and video recordings (analog
and digital) must be authenticated before being accepted into evidence, as tampering has become a problem. All
electronic evidence should be proven genuine before being used in any legal proceeding.[4]
Authentication vs. authorization
The process of authorization is sometimes mistakenly thought to be the same as authentication; many widely adopted
standard security protocols, obligatory regulations, and even statutes make this error. However, authentication is the
process of verifying a claim made by a subject that it should be allowed to act on behalf of a given principal (person,
computer, process, etc.). Authorization, on the other hand, involves verifying that an authenticated subject has
permission to perform certain operations or access specific resources. Authentication, therefore, must precede
authorization.
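The ordering can be sketched with a toy example; the user names, credentials, and permission strings below are invented for illustration:

```python
# Toy illustration: authentication (who are you?) must succeed before
# authorization (what may you do?) is even consulted.
USERS = {"alice": "s3cret"}                       # credential store
PERMISSIONS = {"alice": {"read:account:alice"}}   # rights per principal

def authenticate(user: str, password: str) -> bool:
    """Verify the subject's claimed identity."""
    return USERS.get(user) == password

def authorize(user: str, action: str) -> bool:
    """Check whether the authenticated subject may perform the action."""
    return action in PERMISSIONS.get(user, set())

def access(user: str, password: str, action: str) -> str:
    if not authenticate(user, password):
        return "authentication failed"
    if not authorize(user, action):
        return "not authorized"
    return "granted"
```

Following the bank-teller example below, `access("alice", "s3cret", "read:account:alice")` is granted, while the same authenticated user asking for a different account is refused.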
For example, when you show proper identification credentials to a bank teller, you are asking to be authenticated to
act on behalf of the account holder. If your authentication request is approved, you become authorized to access the
accounts of that account holder, but no others.
Even though authorization cannot occur without authentication, the former term is sometimes used to mean the
combination of both.
To distinguish "authentication" from the closely related "authorization", the short-hand notations A1
(authentication), A2 (authorization) as well as AuthN / AuthZ (AuthR) or Au / Az are used in some communities.
Delegation has traditionally been considered part of the authorization domain, but authentication is now also used
for various types of delegation tasks; delegation in IT networks is a new but evolving field.[5]
Access control
One familiar use of authentication and authorization is access control. A computer system that is supposed to be used
only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually
controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the
user, and then granting the privileges authorized to that identity. Common examples of access control
involving authentication include:
• A captcha is a means of asserting that a user is a human being and not a computer program.
• A computer program using a blind credential to authenticate to another program
• Entering a country with a passport
• Logging in to a computer
• Using a confirmation e-mail to verify ownership of an e-mail address
• Using an Internet banking system
• Withdrawing cash from an ATM
In some cases, ease of access is balanced against the strictness of access checks. For example, the credit card
network does not require a personal identification number for authentication of the claimed identity; and a small
transaction usually does not even require a signature of the authenticated person for proof of authorization of the
transaction. The security of the system is maintained by limiting distribution of credit card numbers, and by the
threat of punishment for fraud.
Security experts argue that it is impossible to prove the identity of a computer user with absolute certainty. It is only
possible to apply one or more tests which, if passed, have been previously declared to be sufficient to proceed. The
problem is to determine which tests are sufficient, and many such tests are inadequate. Any given test can be spoofed one
way or another, with varying degrees of difficulty.
See also
• Classification of Authentication
• Identity Assurance Framework
• Athens access and identity management
• Java Authentication and Authorization Service
• Atomic Authorization
• Kerberos
• Authentication OSID
• Multi-factor authentication
• Authorization
• Needham-Schroeder protocol
• Basic access authentication
• OpenID – an authentication method for the web
• Biometrics
• Point of Access for Providers of Information - the PAPI protocol
• CAPTCHA
• Public key cryptography
• Chip Authentication Program
• RADIUS
• Closed-loop authentication
• Recognition of human individuals
• Diameter (protocol)
• Secret sharing
• Digital Identity
• Secure remote password protocol (SRP)
• Encrypted key exchange (EKE)
• Secure Shell
• EAP
• Security printing
• Fingerprint Verification Competition
• Tamper-evident
• Geo-location
• TCP Wrapper
• Global Trust Center
• Time-based authentication
• HMAC
• Two-factor authentication
• Identification (information)
External links
• Fourth-Factor Authentication: Somebody You Know [6] or episode 94, related to it, on Security Now [7]
• General Information on Enterprise Authentication [8]
• Password Management Best Practices [9]
• Password Policy Guidelines [10]
References
[1] http://www.videoproductionprimeau.com/index.php?q=content/video-authentication
[2] Federal Financial Institutions Examination Council (2008). "Authentication in an Internet Banking Environment" (http://www.ffiec.gov/pdf/authentication_guidance.pdf). Retrieved 2009-12-31.
[3] Goodin, Dan (30 March 2008). "Get your German Interior Minister's fingerprint, here". The Register, UK. Compared to other solutions, "It's basically like leaving the password to your computer everywhere you go, without you being able to control it anymore," one of the hackers comments. (http://www.theregister.co.uk/2008/03/30/german_interior_minister_fingerprint_appropriated)
[4] http://expertpages.com/news/CCTV_Video_Problems_and_Solutions.htm
[5] Ahmed, N.; Jensen, C. (2009). "A mechanism for identity delegation at authentication level". Identity and Privacy in the Internet Age. Springer.
[6] http://www.rsa.com/rsalabs/node.asp?id=3156
[7] http://www.grc.com/securitynow.htm
[8] http://www.authenticationworld.com/
[9] http://psynch.com/docs/password-management-best-practices.html
[10] http://psynch.com/docs/password-policy-guidelines.html
Authorization
Authorization (also spelt Authorisation) is the function of specifying access rights to resources, which is related to
information security and computer security in general and to access control in particular. More formally, "to
authorize" is to define access policy. For example, human resources staff are normally authorized to access employee
records, and this policy is usually formalized as access control rules in a computer system. During operation, the
system uses the access control rules to decide whether access requests from (authenticated) consumers shall be
granted or rejected. Resources include individual files or items of data, computer programs, computer devices and
functionality provided by computer applications. Examples of consumers are computer users, computer programs
and other devices on the computer.
Overview
Access control in computer systems and networks relies on access policies. The access control process can be
divided into two phases: 1) policy definition phase, and 2) policy enforcement phase. Authorization is the function of
the policy definition phase which precedes the policy enforcement phase where access requests are granted or
rejected based on the previously defined authorizations.
Most modern, multi-user operating systems include access control and thereby rely on authorization. Access control
also makes use of authentication to verify the identity of consumers. When a consumer tries to access a resource, the
access control process checks that the consumer has been authorized to use that resource. Authorization is the
responsibility of an authority, such as a department manager, within the application domain, but is often delegated to
a custodian such as a system administrator. Authorizations are expressed as access policies in some type of "policy
definition application", e.g. in the form of an access control list or a capability, on the basis of the "principle of least
privilege": consumers should only be authorized to access whatever they need to do their jobs. Older and single user
operating systems often had weak or non-existent authentication and access control systems.
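A minimal sketch of the two phases, using invented resource and consumer names: the access control list below is the output of the policy definition phase, and the enforcement function applies it with a default-deny rule, reflecting the principle of least privilege:

```python
# Hypothetical policy definition (the ACL) vs. policy enforcement.
# Each consumer is authorized only for what the job requires
# (principle of least privilege); anything not listed is denied.
ACL = {
    "payroll.db": {"hr_staff": {"read", "write"}},
    "handbook.pdf": {"hr_staff": {"read"}, "guest": {"read"}},
}

def enforce(consumer: str, resource: str, operation: str) -> bool:
    """Policy enforcement phase: default deny; grant only explicit authorizations."""
    return operation in ACL.get(resource, {}).get(consumer, set())
```

Here changing or revoking a user's authorization is a matter of editing the ACL entries, which the following paragraphs note is itself a non-trivial administrative burden.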
"Anonymous consumers" or "guests", are consumers that have not been required to authenticate. They often have
limited authorization. On a distributed system, it is often desirable to grant access without requiring a unique
identity. Familiar examples of access tokens include keys and tickets: they grant access without proving identity.
Trusted consumers that have been authenticated are often granted unrestricted access to resources. "Partially
trusted" consumers and guests will often have restricted authorization in order to protect resources against improper
access and usage. The access policy in some operating systems grants all consumers full access to all resources by
default. Others do the opposite, insisting that the administrator explicitly authorize a consumer to use each resource.
Even when access is controlled through a combination of authentication and access control lists, the problem of
maintaining the authorization data is not trivial, and it often represents as much administrative burden as managing
authentication credentials. It is often necessary to change or remove a user's authorization: this is done by changing
or deleting the corresponding access rules on the system. Using atomic authorization is an alternative to per-system
authorization management, where a trusted third party securely distributes authorization information.
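The idea can be sketched as follows; the token format, key handling, and function names are invented for illustration (a real atomic-authorization deployment would also need proper key distribution, expiry, and revocation):

```python
# Sketch of the idea behind atomic authorization: a trusted third party
# signs an authorization statement, so individual systems can verify a
# grant without each maintaining its own access rules.
import hashlib
import hmac
import json

TTP_KEY = b"authority-signing-key"  # held by the trusted third party

def issue_grant(subject: str, resource: str, operation: str) -> dict:
    """Trusted third party: sign an authorization statement."""
    body = json.dumps({"sub": subject, "res": resource, "op": operation},
                      sort_keys=True)
    tag = hmac.new(TTP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_grant(grant: dict) -> bool:
    """Any relying system: check the statement was issued by the authority."""
    expected = hmac.new(TTP_KEY, grant["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, grant["tag"])
```

Revoking access then means the authority stops issuing (or starts rejecting) grants, rather than editing rules on every system.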
Confusion
The term authorization is often incorrectly used in the sense of the policy enforcement phase function. This
confusing interpretation can be traced back to the introduction of Cisco's AAA server. Examples of this can be seen
in RFC 2904 [1] and Cisco AAA [2]. However, the correct and fundamental meaning of authorization is not
compatible with this usage of the term: the fundamental security services of confidentiality, integrity and
availability are defined in terms of authorization.[3] For example, confidentiality is defined by the International
Organization for Standardization (ISO) as "ensuring that information is accessible only to those authorized to have
access", where authorization is a function of the policy definition phase. It would be absurd to interpret
confidentiality as "ensuring that information is accessible only to those who are granted access when requested",
because people who access systems e.g. with stolen passwords would then be "authorized". It is common that logon
screens provide warnings like: "Only authorized users may access this system", e.g. [4] . Incorrect usage of the term
authorization would invalidate such warnings, because attackers with stolen passwords could claim that they were
authorized.
The confusion around authorization is so widespread that both interpretations (i.e. authorization both as policy
definition phase and as policy enforcement phase) often appear within the same document, e.g. [5] .
Examples of correct usage of the authorization concept include e.g. [6] [7] .
Related Interpretations
Public policy
In public policy, authorization is a feature of trusted systems used for security or social control.
Banking
In banking, an authorization is a hold placed on a customer's account when a purchase is made using a debit card or
credit card.
Publishing
In publishing, sometimes public lectures and other freely available texts are published without the consent of the
author. These are called unauthorized texts. An example is The Theory of Everything: The Origin and Fate
of the Universe (2002), which was collected from Stephen Hawking's lectures and published without his permission.
See also
• Security engineering
• Computer security
• Authentication
• Access control
• Kerberos (protocol)
• Operating system
• Authorization OSID
• Authorization hold
• XACML
References
[1] J. Vollbrecht et al. AAA Authorization Framework. IETF, 2000. (http://www.ietf.org/rfc/rfc2904.txt)
[2] B.J. Carroll. Cisco Access Control Security: AAA Administration Services. Cisco Press, 2004.
[3] ISO 7498-2 Information Processing Systems - Open Systems Interconnection - Basic Reference Model - Part 2: Security Architecture. ISO/IEC, 1989.
[4] Access Warning Statements, University of California, Berkeley (http://technology.berkeley.edu/policy/warnings.html)
[5] Understanding SOA Security Design and Implementation. IBM Redbook, 2007. (http://www.redbooks.ibm.com/redbooks/pdfs/sg247310.pdf)
[6] A. H. Karp. Authorization-Based Access Control for the Services Oriented Architecture. Proceedings of the Fourth International Conference on Creating, Connecting, and Collaborating through Computing (C5), 26-27 January 2006, Berkeley, CA, USA. (http://www.hpl.hp.com/techreports/2006/HPL-2006-3.pdf)
[7] A. Jøsang, D. Gollmann, R. Au. A Method for Access Authorisation Through Delegation Networks. Proceedings of the Australasian Information Security Workshop (AISW'06), Hobart, January 2006. (http://persons.unik.no/josang/papers/JGA2006-AISW.pdf)
Social engineering (security)
Social engineering is the act of manipulating people into performing actions or divulging confidential information,
rather than by breaking in or using technical cracking techniques; essentially a fancier, more technical way of
lying.[1] While similar to a confidence trick or simple fraud, the term typically applies to trickery or deception for the
purpose of information gathering, fraud, or computer system access; in most cases the attacker never comes
face-to-face with the victim.
"Social engineering" as an act of psychological manipulation was popularized by hacker-turned-consultant Kevin
Mitnick. The term had previously been associated with the social sciences, but its usage has caught on among
computer professionals and is now a recognized term of art.
Social engineering techniques and terms
All social engineering techniques are based on specific attributes of human decision-making known as cognitive
biases.[2] These biases, sometimes called "bugs in the human hardware," are exploited in various combinations to
create attack techniques, some of which are listed here:
Pretexting
Pretexting is the act of creating and using an invented scenario (the pretext) to engage a targeted victim in a manner
that increases the chance the victim will divulge information or perform actions that would be unlikely in ordinary
circumstances. It is more than a simple lie, as it most often involves some prior research or setup and the use of
previously obtained information for impersonation (e.g., date of birth, Social Security number, last bill amount) to
establish legitimacy in the mind of the target.[3]
This technique can be used to trick a business into disclosing customer information as well as by private
investigators to obtain telephone records, utility records, banking records and other information directly from junior
company service representatives. The information can then be used to establish even greater legitimacy under
tougher questioning with a manager, e.g., to make account changes, get specific balances, etc. Pretexting has also
been observed as a law enforcement technique, under the auspices of which a law officer may leverage the threat of
an alleged infraction to detain a suspect for questioning and conduct close inspection of a vehicle or premises.
Pretexting can also be used to impersonate co-workers, police, bank, tax authorities, or insurance investigators — or
any other individual who could have perceived authority or right-to-know in the mind of the targeted victim. The
pretexter must simply prepare answers to questions that might be asked by the victim. In some cases all that is
needed is a voice that sounds authoritative, an earnest tone, and an ability to think on one's feet.
Diversion theft
Diversion theft, also known as the "Corner Game"[4] or "Round the Corner Game", originated in the East End of
London.
In summary, diversion theft is a "con" exercised by professional thieves, normally against a transport or courier
company. The objective is to persuade the persons responsible for a legitimate delivery that the consignment is
requested elsewhere — hence, "round the corner".
With a load/consignment redirected, the thieves persuade the driver to unload the consignment near to, or away
from, the consignee's address, in the pretense that it is "going straight out" or "urgently required somewhere else".
The "con" or deception has many different facets, which include social engineering techniques to persuade legitimate
administrative or traffic personnel of a transport or courier company to issue instructions to the driver to redirect the
consignment or load.
Another variation of diversion theft is stationing a security van outside a bank on a Friday evening. Smartly dressed
guards use the line "Night safe's out of order Sir". By this method shopkeepers etc. are gulled into depositing their
takings into the van. They do of course obtain a receipt, but later this turns out to be worthless. A similar technique
was used many years ago to steal a Steinway grand piano from a radio studio in London; "Come to overhaul the
piano, guv" was the chat line. Nowadays ID would probably be asked for, but even that can be faked.
The social engineering skills of these thieves are well rehearsed, and are extremely effective. Most companies do not
prepare their staff for this type of deception.
Phishing
Phishing is a technique of fraudulently obtaining private information. Typically, the phisher sends an e-mail that
appears to come from a legitimate business — a bank, or credit card company — requesting "verification" of
information and warning of some dire consequence if it is not provided. The e-mail usually contains a link to a
fraudulent web page that seems legitimate — with company logos and content — and has a form requesting
everything from a home address to an ATM card's PIN.
For example, 2003 saw the proliferation of a phishing scam in which users received e-mails supposedly from eBay
claiming that the user's account was about to be suspended unless a link provided was clicked to update a credit card
(information that the genuine eBay already had). Because it is relatively simple to make a Web site resemble a
legitimate organization's site by mimicking the HTML code, the scam counted on people being tricked into thinking
they were being contacted by eBay and subsequently, were going to eBay's site to update their account information.
By spamming large groups of people, the "phisher" counted on the e-mail being read by a percentage of people who
already had listed credit card numbers with eBay legitimately, who might respond.
IVR or phone phishing
This technique uses a rogue Interactive voice response (IVR) system to recreate a legitimate-sounding copy of a
bank or other institution's IVR system. The victim is prompted (typically via a phishing e-mail) to call in to the
"bank" via an (ideally toll-free) number provided in order to "verify" information. A typical system will reject log-ins
continually, ensuring the victim enters PINs or passwords multiple times, often disclosing several different
passwords. More advanced systems transfer the victim to the attacker posing as a customer service agent for further
questioning.
One could even record the typical commands ("Press one to change your password, press two to speak to customer
service" ...) and play back the directions manually in real time, giving the appearance of being an IVR without the
expense.
Phone phishing is also called vishing.
Baiting
Baiting is like the real-world Trojan Horse that uses physical media and relies on the curiosity or greed of the
victim.[5]
In this attack, the attacker leaves a malware infected floppy disk, CD ROM, or USB flash drive in a location sure to
be found (bathroom, elevator, sidewalk, parking lot), gives it a legitimate looking and curiosity-piquing label, and
simply waits for the victim to use the device.
For example, an attacker might create a disk featuring a corporate logo, readily available from the target's web site,
and write "Executive Salary Summary Q2 2010" on the front. The attacker would then leave the disk on the floor of
an elevator or somewhere in the lobby of the targeted company. An unknowing employee might find it and
subsequently insert the disk into a computer to satisfy their curiosity, or a good samaritan might find it and turn it in
to the company.
In either case as a consequence of merely inserting the disk into a computer to see the contents, the user would
unknowingly install malware on it, likely giving an attacker unfettered access to the victim's PC and perhaps, the
targeted company's internal computer network.
Unless computer controls block the infection, PCs set to "auto-run" inserted media may be compromised as soon as a
rogue disk is inserted.
Quid pro quo
Quid pro quo means something for something:
• An attacker calls random numbers at a company claiming to be calling back from technical support. Eventually
they will hit someone with a legitimate problem, grateful that someone is calling back to help them. The attacker
will "help" solve the problem and in the process have the user type commands that give the attacker access or
launch malware.
• In a 2003 information security survey, 90% of office workers gave researchers what they claimed was their
password in answer to a survey question in exchange for a cheap pen.[6] Similar surveys in later years obtained
similar results using chocolates and other cheap lures, although they made no attempt to validate the passwords.[7]
Other types
Common confidence tricksters or fraudsters also could be considered "social engineers" in the wider sense, in that
they deliberately deceive and manipulate people, exploiting human weaknesses to obtain personal benefit. They may,
for example, use social engineering techniques as part of an IT fraud.
A more recent type of social engineering involves spoofing or hacking the IDs of people who have accounts with
popular e-mail providers such as Yahoo!, Gmail, or Hotmail. Among the many motivations for deception are:
• Phishing credit-card account numbers and their passwords.
• Hacking private e-mails and chat histories, and manipulating them by using common editing techniques before
using them to extort money and creating distrust among individuals.
• Hacking websites of companies or organizations and destroying their reputation.
• Computer virus hoaxes
Notable social engineers
Kevin Mitnick
Reformed computer criminal and later security consultant Kevin Mitnick popularized the term "social engineering",
pointing out that it is much easier to trick someone into giving a password for a system than to spend the effort to
crack into the system.[8] He claims it was the single most effective method in his arsenal.
The Badir Brothers
Brothers Ramy, Muzher, and Shadde Badir—all of whom were blind from birth—managed to set up an extensive
phone and computer fraud scheme in the village of Kafr Kassem outside Tel Aviv, Israel in the 1990s using social
engineering, voice impersonation, and Braille-display computers.[9]
United States law
In common law, pretexting is an invasion of privacy tort of appropriation.[10]
Pretexting of telephone records
In December 2006, United States Congress approved a Senate sponsored bill making the pretexting of telephone
records a federal felony with fines of up to $250,000 and ten years in prison for individuals (or fines of up to
$500,000 for companies). It was signed by president George W. Bush on January 12, 2007.[11]
Federal legislation
The 1999 "GLBA" is a U.S. Federal law that specifically addresses pretexting of banking records as an illegal act
punishable under federal statutes. When a business entity such as a private investigator, SIU insurance investigator,
or an adjuster conducts any type of deception, it falls under the authority of the Federal Trade Commission (FTC).
This federal agency has the obligation and authority to ensure that consumers are not subjected to any unfair or
deceptive business practices. US Federal Trade Commission Act, Section 5 of the FTCA states, in part: "Whenever
the Commission shall have reason to believe that any such person, partnership, or corporation has been or is using
any unfair method of competition or unfair or deceptive act or practice in or affecting commerce, and if it shall
appear to the Commission that a proceeding by it in respect thereof would be to the interest of the public, it shall
issue and serve upon such person, partnership, or corporation a complaint stating its charges in that respect."
The statute states that when someone obtains any personal, non-public information from a financial institution or the
consumer, their action is subject to the statute. It relates to the consumer's relationship with the financial institution.
For example, a pretexter using false pretenses either to get a consumer's address from the consumer's bank, or to get
a consumer to disclose the name of his or her bank, would be covered. The determining principle is that pretexting
only occurs when information is obtained through false pretenses.
While the sale of cell telephone records has gained significant media attention, and telecommunications records are
the focus of the two bills currently before the United States Senate, many other types of private records are being
bought and sold in the public market. Alongside many advertisements for cell phone records, wireline records and
the records associated with calling cards are advertised. As individuals shift to VoIP telephones, it is safe to assume
that those records will be offered for sale as well. Currently, it is legal to sell telephone records, but illegal to obtain
them.[12]
1st Source Information Specialists
U.S. Rep. Fred Upton (R-Kalamazoo, Michigan), chairman of the Energy and Commerce Subcommittee on
Telecommunications and the Internet, expressed concern over the easy access to personal mobile phone records on
the Internet during Wednesday's E&C Committee hearing on "Phone Records For Sale: Why Aren't Phone Records
Safe From Pretexting?" Illinois became the first state to sue an online records broker when Attorney General Lisa
Madigan sued 1st Source Information Specialists, Inc., on 20 January, a spokeswoman for Madigan's office said. The
Florida-based company operates several Web sites that sell mobile telephone records, according to a copy of the suit.
The attorneys general of Florida and Missouri quickly followed Madigan's lead, filing suit on 24 January and 30
January, respectively, against 1st Source Information Specialists and, in Missouri's case, one other records broker,
First Data Solutions, Inc.
Several wireless providers, including T-Mobile, Verizon, and Cingular filed earlier lawsuits against records brokers,
with Cingular winning an injunction against First Data Solutions and 1st Source Information Specialists on January
13. U.S. Senator Charles Schumer (D-New York) introduced legislation in February 2006 aimed at curbing the
practice. The Consumer Telephone Records Protection Act of 2006 would create felony criminal penalties for
stealing and selling the records of mobile phone, landline, and Voice over Internet Protocol (VoIP) subscribers.
Hewlett Packard
Patricia Dunn, former chairman of Hewlett Packard, reported that the HP board hired a private investigation
company to delve into who was responsible for leaks within the board. Dunn acknowledged that the company used
the practice of pretexting to solicit the telephone records of board members and journalists. Chairman Dunn later
apologized for this act and offered to step down from the board if it was desired by board members.[13] Unlike
Federal law, California law specifically forbids such pretexting. The four felony charges brought on Dunn were
dismissed.[14]
In popular culture
• In the film Hackers, the protagonist used pretexting when he asked a security guard for the telephone number to a
TV station's modem while posing as an important executive.
• In Jeffrey Deaver's book The Blue Nowhere, social engineering to obtain confidential information is one of the
methods used by the killer, Phate, to get close to his victims.
• In the movie Live Free or Die Hard, Justin Long is seen pretexting that his father is dying from a heart attack to
have a BMW Assist representative start what will become a stolen car.
• In the movie Sneakers, one of the characters poses as a low level security guard's superior in order to convince
him that a security breach is just a false alarm.
• In the movie The Thomas Crown Affair, one of the characters poses over the telephone as a museum guard's
superior in order to move the guard away from his post.
• In the James Bond movie Diamonds Are Forever, Bond is seen gaining entry to the Whyte laboratory with a
then-state-of-the-art card-access lock system by "tailgating". He merely waits for an employee to come to open
the door, then posing himself as a rookie at the lab, fakes inserting a non-existent card while the door is unlocked
for him by the employee.
See also
• Phishing
• Confidence trick
• Certified Social Engineering Prevention Specialist (CSEPS)
• Media pranks, which often use similar tactics (though usually not for criminal purposes)
• Physical information security
• Vishing
• SMiShing
References
Further reading
• Boyington, Gregory. (1990). Baa Baa Black Sheep Published by Bantam Books ISBN 0-553-26350-1
• Harley, David. 1998 Re-Floating the Titanic: Dealing with Social Engineering Attacks [15] EICAR Conference.
• Laribee, Lena. June 2006 Development of methodical social engineering taxonomy project [16] Master's Thesis,
Naval Postgraduate School.
• Leyden, John. April 18, 2003. Office workers give away passwords for a cheap pen [17]. The Register. Retrieved
2004-09-09.
• Long, Johnny. (2008). No Tech Hacking - A Guide to Social Engineering, Dumpster Diving, and Shoulder Surfing
Published by Syngress Publishing Inc. ISBN 978-1-59749-215-7
• Mann, Ian. (2008). Hacking the Human: Social Engineering Techniques and Security Countermeasures Published
by Gower Publishing Ltd. ISBN 0-566-08773-1 or ISBN 978-0-566-08773-8
• Mitnick, Kevin; Kasperavičius, Alexis. (2004). CSEPS Course Workbook. Mitnick Security Publishing.
• Mitnick, Kevin; Simon, William L.; Wozniak, Steve. (2002). The Art of Deception: Controlling the Human
Element of Security Published by Wiley. ISBN 0-471-23712-4 or ISBN 0-7645-4280-X
External links
• Socialware.ru [18] - The largest Runet community devoted to social engineering.
• Spylabs on vimeo [19] - Video channel devoted to social engineering.
• Social Engineering Fundamentals [20] - Securityfocus.com. Retrieved on August 3, 2009.
• Social Engineering, the USB Way [21] - DarkReading.com. Retrieved on July 7, 2006.
• Should Social Engineering be a part of Penetration Testing? [22] - Darknet.org.uk. Retrieved on August 3, 2009.
• "Protecting Consumers' Phone Records" [23] - US Committee on Commerce, Science, and Transportation. Retrieved on February 8, 2006.
• Plotkin, Hal. Memo to the Press: Pretexting is Already Illegal [24]. Retrieved on September 9, 2006.
• Striptease for passwords [25] - MSNBC.MSN.com. Retrieved on November 1, 2007.
• Social-Engineer.org [26] - social-engineer.org. Retrieved on September 16, 2009.
• Social Engineering: Manipulating Caller-Id [27]
References
[1] Goodchild, Joan (January 11, 2010). "Social Engineering: The Basics" (http://www.csoonline.com/article/514063/Social_Engineering_The_Basics). CSO Online. Retrieved 14 January 2010.
[2] Mitnick, K.: CSEPS Course Workbook (2004), unit 3, Mitnick Security Publishing.
[3] "Pretexting: Your Personal Information Revealed" (http://www.ftc.gov/bcp/edu/pubs/consumer/credit/cre10.shtm), Federal Trade Commission.
[4] http://trainforlife.co.uk/onlinecourses.php
[5] http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1
[6] Office workers give away passwords (http://www.theregister.co.uk/content/55/30324.html)
[7] Passwords revealed by sweet deal (http://news.bbc.co.uk/1/hi/technology/3639679.stm)
[8] Mitnick, K.: CSEPS Course Workbook (2004), p. 4, Mitnick Security Publishing.
[9] http://www.wired.com/wired/archive/12.02/phreaks.html
[10] Restatement 2d of Torts § 652C.
[11] Congress outlaws pretexting (http://arstechnica.com/news.ars/post/20061211-8395.html), Eric Bangeman, 12/11/2006, Ars Technica.
[12] Mitnick, K. (2002): The Art of Deception, p. 103. Wiley Publishing: Indianapolis, Indiana, United States of America. ISBN 0-471-23712-4
[13] HP chairman: Use of pretexting 'embarrassing' (http://news.com.com/HP+chairman+Use+of+pretexting+embarrassing/2100-1014_3-6113715.html?tag=nefd.lede), Stephen Shankland, 2006-09-08, CNET News.com.
[14] Calif. court drops charges against Dunn (http://news.cnet.com/Calif.-court-drops-charges-against-Dunn/2100-1014_3-6167187.html)
[15] http://smallbluegreenblog.files.wordpress.com/2010/04/eicar98.pdf
[16] http://faculty.nps.edu/ncrowe/oldstudents/laribeethesis.htm
[17] http://www.theregister.co.uk/2003/04/18/office_workers_give_away_passwords/
[18] http://www.socialware.ru/
[19] http://vimeo.com/spylabs/
[20] http://www.securityfocus.com/infocus/1527
[21] http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1
[22] http://www.darknet.org.uk/2006/03/should-social-engineering-a-part-of-penetration-testing/
[23] http://www.epic.org/privacy/iei/sencomtest2806.html
[24] http://www.plotkin.com/blog-archives/2006/09/memo_to_the_pre.html
[25] http://www.msnbc.msn.com/id/21566341/
[26] http://www.social-engineer.org
[27] http://www.jocktoday.com/2010/02/08/social-engineering-manipulating-caller-id/
Security engineering
Security engineering is a specialized field of engineering that deals with the development of detailed engineering
plans and designs for security features, controls and systems. It is similar to other systems engineering activities in
that its primary motivation is to support the delivery of engineering solutions that satisfy pre-defined functional and
user requirements, but with the added dimension of preventing misuse and malicious behavior. These constraints and
restrictions are often asserted as a security policy.
In one form or another, Security Engineering has existed as an informal field of study for several centuries. For
example, the fields of locksmithing and security printing have been around for many years.
Due to recent catastrophic events, most notably the September 11 attacks, security engineering has become a rapidly growing field. A 2006 report estimated that the global security industry was valued at US$150 billion.[1]
Security engineering involves aspects of social science, psychology (such as designing a system to 'fail well' instead
of trying to eliminate all sources of error) and economics, as well as physics, chemistry, mathematics, architecture
and landscaping.[2] Some of the techniques used, such as fault tree analysis, are derived from safety engineering.
Other techniques such as cryptography were previously restricted to military applications. One of the pioneers of
security engineering as a formal field of study is Ross Anderson.
Qualifications
Typical qualifications for a security engineer are:
• Security+ - Entry Level
• Professional Engineer, Chartered Engineer, Chartered Professional Engineer
• CPP
• PSP
• CISSP
However, multiple qualifications, or several qualified persons working together, may provide a more complete
solution.[3]
Security Stance
The two possible default positions on security matters are:
1. Default deny - "Everything not explicitly permitted is forbidden."
Improves security at a cost in functionality. This is a good approach when there are many security threats. See secure computing for a discussion of computer security using this approach.
2. Default permit - "Everything not explicitly forbidden is permitted."
Allows greater functionality by sacrificing security. This is only a good approach in an environment where security threats are non-existent or negligible. See computer insecurity for an example of the failure of this approach in the real world.
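The two default stances can be sketched as a pair of access-check functions. This is an illustrative example, not from the source; the function names and the example sets are invented:

```python
def is_allowed_default_deny(action, permitted):
    # "Everything not explicitly permitted is forbidden"
    return action in permitted

def is_allowed_default_permit(action, forbidden):
    # "Everything not explicitly forbidden is permitted"
    return action not in forbidden

permitted = {"read"}
forbidden = {"delete"}

# An action nobody thought about ("write") is blocked under default
# deny but slips through under default permit -- which is why default
# permit is only safe when threats are negligible.
print(is_allowed_default_deny("write", permitted))   # False
print(is_allowed_default_permit("write", forbidden)) # True
```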
Core Practices
• Security Planning
• Security Requirements Analysis
• Security Architecture
• Secure Coding
• Security Testing
• Security Operations and Maintenance
• Economics of Security
Sub-fields
• Physical security
• deter attackers from accessing a facility, resource, or information stored on physical media.
• Information security
• protecting data from unauthorized access, use, disclosure, destruction, modification, or disruption to access.
• See esp. Computer security
• Economics of security
• the economic aspects of economics of privacy and computer security.
Methodologies
Technological advances, principally in the field of computers, have now allowed the creation of far more complex
systems, with new and complex security problems. Because modern systems cut across many areas of human
endeavor, security engineers not only need consider the mathematical and physical properties of systems; they also
need to consider attacks on the people who use and form parts of those systems using social engineering attacks.
Secure systems have to resist not only technical attacks, but also coercion, fraud, and deception by confidence
tricksters.
Web Applications
According to the Microsoft Developer Network, the patterns & practices of Security Engineering[4] consists of the following activities:
• Security Objectives
• Security Design Guidelines
• Security Modeling
• Security Architecture and Design Review
• Security Code Review
• Security Testing
• Security Tuning
• Security Deployment Review
These activities are designed to help meet security objectives in the software life cycle.
Physical
• Understanding of a typical threat and the usual risks to people and property.
• Understanding the incentives created both by the threat and the countermeasures.
• Understanding risk and threat analysis methodology and the benefits of an empirical study of the physical security of a facility.
• Understanding how to apply the methodology to buildings, critical infrastructure, ports, public transport and other facilities/compounds.
• Overview of common physical and technological methods of protection and understanding their roles in deterrence, detection and mitigation.
• Determining and prioritizing security needs and aligning them with the perceived threats and the available budget.
[Image: Canadian Embassy in Washington, D.C., showing planters being used as vehicle barriers, and barriers and gates along the vehicle entrance]
Target Hardening
Whatever the target, there are multiple ways of preventing penetration by unwanted or unauthorised persons.
Methods include placing Jersey barriers, stairs or other sturdy obstacles outside tall or politically sensitive buildings
to prevent car and truck bombings. Improving the method of Visitor management and some new electronic locks
take advantage of technologies such as fingerprint scanning, iris or retinal scanning, and voiceprint identification to
authenticate users.
Employers of Security Engineers
• US Department of State, Bureau of Diplomatic Security (ABET certified institution degree in engineering or
physics required)[5]
Criticisms
Some criticize this field as not being a bona fide field of engineering, because its methodologies are less formal or excessively ad hoc compared to other fields, and because many practitioners of security engineering have no engineering degree. Part of the problem is that while conforming to positive requirements is well understood, conforming to negative requirements requires complex and indirect posturing to reach a closed-form solution. Some rigorous methods do exist to address these difficulties, but they are seldom used, partly because many practitioners view them as too old or too complex. As a result, many ad hoc approaches simply do not succeed.
See also
Computer Related
• Authentication
• Cloud engineering
• Cryptography
• Cryptanalysis
• Computer insecurity
• Data remanence
• Defensive programming (secure coding)
• Earthquake engineering
• Electronic underground community
• Explosion protection
• Hacking
• Information Systems Security Engineering
• Password policy
• Software cracking
• Software Security Assurance
• Secure computing
• Security Patterns
• Systems engineering
• Trusted system
• Economics of Security

Physical
• Access control
• Access control vestibule
• Authorization
• Critical Infrastructure Protection
• Environmental design (esp. CPTED)
• Locksmithing
• Physical Security
• Secrecy
• Security
• Secure cryptoprocessor
• Security through obscurity
• Technical surveillance counter-measures

Misc. Topics
• Deception
• Fraud
• Full disclosure
• Security awareness
• Security community
• Steganography
• Social engineering
• Kerckhoffs' principle
Further reading
• Ross Anderson (2001). Security Engineering [6]. Wiley. ISBN 0-471-38922-6.
• Ross Anderson (2008). Security Engineering - A Guide to Building Dependable Distributed Systems. Wiley.
ISBN 0-470-06852-3.
• Ross Anderson (2001). "Why Information Security is Hard - An Economic Perspective [7]"
• Bruce Schneier (1995). Applied Cryptography (2nd ed.). Wiley. ISBN 0-471-11709-9.
• Bruce Schneier (2000). Secrets and Lies: Digital Security in a Networked World. Wiley. ISBN 0-471-25311-1.
• David A. Wheeler (2003). "Secure Programming for Linux and Unix HOWTO" [8]. Linux Documentation Project.
Retrieved 2005-12-19.
Articles and Papers
• patterns & practices Security Engineering on Channel9 [9]
• patterns & practices Security Engineering on MSDN [10]
• patterns & practices Security Engineering Explained [11]
• Basic Target Hardening [12] from the Government of South Australia
References
[1] "Data analytics, networked video lead trends for 2008" (http://www.sptnews.ca/index.php?option=com_content&task=view&id=798&Itemid=4). SP&T News (CLB Media Inc). Retrieved 2008-01-05.
[2] http://findarticles.com/p/articles/mi_m1216/is_n5_v181/ai_6730246
[3] http://www.asla.org/safespaces/pdf/design_brochure.pdf
[4] http://msdn2.microsoft.com/en-us/library/ms998404.aspx
[5] http://careers.state.gov/specialist/opportunities/seceng.html
[6] http://www.cl.cam.ac.uk/~rja14/book.html
[7] http://www.acsa-admin.org/2001/papers/110.pdf
[8] http://www.dwheeler.com/secure-programs
[9] http://channel9.msdn.com/wiki/default.aspx/SecurityWiki.SecurityEngineering
[10] http://msdn.com/SecurityEngineering
[11] http://msdn.microsoft.com/library/en-us/dnpag2/html/SecEngExplained.asp
[12] http://www.capitalprograms.sa.edu.au/a8_publish/modules/publish/content.asp?id=23343&navgrp=2557
2.0 Physical
Physical security
Physical security describes both measures that prevent or deter attackers from accessing a facility, resource, or information stored on physical media, and guidance on how to design structures to resist various hostile acts[1]. It can be as simple as a locked door or as elaborate as multiple layers of armed security guards and guardhouse placement.
Physical security is not a modern phenomenon. Physical security exists in order to deter persons from entering a
physical facility. Historical examples of physical security include city walls, moats, etc.
The key point is that the technology used for physical security has changed over time. While past eras had no Passive Infrared (PIR) based technology, electronic access control systems, or Video Surveillance System (VSS) cameras, the essential methodology of physical security has not altered over time.
Elements and design
The field of security engineering has identified the following elements
to physical security:
• explosion protection;
• obstacles, to frustrate trivial attackers and delay serious ones;
• alarms, security lighting, security guard patrols or closed-circuit
television cameras, to make it likely that attacks will be noticed; and
• security response, to repel, catch or frustrate attackers when an
attack is detected.
In a well designed system, these features must complement each
other[2]. There are at least four layers of physical security:
• Environmental design
• Mechanical, electronic and procedural access control
• Intrusion detection
• Video monitoring
• Personnel identification
[Image: Spikes atop a barrier wall]
The goal is to convince potential attackers that the likely costs of attack exceed the value of making the attack.
The initial layer of security for a campus, building, office, or physical space uses Crime Prevention Through
Environmental Design to deter threats. Some of the most common examples are also the most basic - barbed wire,
warning signs and fencing, concrete bollards, metal barriers, vehicle height-restrictors, site lighting and trenches.
The next layer is mechanical and includes gates, doors, and locks. Key control of the locks becomes a problem with large user populations and any user turnover. Keys quickly become unmanageable, forcing the adoption of electronic access control. Electronic access control easily manages large user populations, controlling for user lifecycles, times, dates, and individual access points. For example, a user's access rights could allow access from 0700 to 1900, Monday through Friday, and expire in 90 days. Another form of access control (procedural) includes the use of policies, processes and procedures to manage ingress into the restricted area. An example of this is the deployment of security personnel conducting checks for authorized entry at predetermined points of entry. This form of access control is usually supplemented by the earlier forms of access control (i.e. mechanical and electronic access control), or simple devices such as physical passes.
[Image: Electronic access control]
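The time- and date-limited access rights described above (0700 to 1900, Monday through Friday, expiring after 90 days) can be sketched as a single check. This is a hypothetical illustration; the function name and the example dates are invented:

```python
from datetime import datetime, date, timedelta

def access_allowed(now: datetime, granted_on: date) -> bool:
    # Grant expires 90 days after it was issued.
    if now.date() > granted_on + timedelta(days=90):
        return False
    # weekday(): Monday = 0 ... Friday = 4; weekend is denied.
    if now.weekday() > 4:
        return False
    # Allowed window is 0700 up to (but not including) 1900.
    return 7 <= now.hour < 19

granted = date(2010, 1, 4)  # a Monday
print(access_allowed(datetime(2010, 1, 5, 9, 30), granted))  # True: Tuesday 09:30
print(access_allowed(datetime(2010, 1, 9, 9, 30), granted))  # False: Saturday
print(access_allowed(datetime(2010, 6, 1, 9, 30), granted))  # False: grant expired
```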
An additional sub-layer of mechanical/electronic access control protection is reached by integrating a key
management system to manage the possession and usage of mechanical keys to locks or property within a building
or campus.
The third layer is intrusion detection systems or alarms. Intrusion detection monitors for attacks. It is less a
preventative measure and more of a response measure, although some would argue that it is a deterrent. Intrusion
detection has a high incidence of false alarms. In many jurisdictions, law enforcement will not respond to alarms
from intrusion detection systems.
The last layer is video monitoring systems. Security cameras can be a deterrent in many cases, but their real power comes from incident verification and historical analysis. For example, if alarms are being generated and there is a camera in place, the camera could be viewed to verify the alarms. In instances when an attack has already occurred and a camera is in place at the point of attack, the recorded video can be reviewed. Although the term closed-circuit television (CCTV) is common, it is quickly becoming outdated as more video systems lose the closed circuit for signal transmission and instead transmit on computer networks. Advances in information technology are transforming video monitoring into video analysis. For instance, once an image is digitized it becomes data that sophisticated algorithms can act upon. As the speed and accuracy of automated analysis increase, the video system could move from a monitoring system to an intrusion detection system or access control system. It is not a stretch to imagine a video camera inputting data to a processor that outputs to a door lock: instead of using some kind of key, whether mechanical or electrical, a person's visage is the key. When actual design and implementation are considered, there are numerous types of security cameras that can be used for many different applications. One must analyze one's needs and choose accordingly[3].
[Image: Closed-circuit television sign]
Intertwined in these four layers are people. Guards have a role in all layers: in the first as patrols and at checkpoints; in the second to administer electronic access control; in the third to respond to alarms (the response force must be able to arrive on site in less time than the attacker is expected to require to breach the barriers); and in the fourth to monitor and analyze video. Users also have a role, by questioning and reporting suspicious people. Identification systems aid in distinguishing known from unknown people. Photo ID badges are often used and are frequently coupled to the electronic access control system. Visitors are often required to wear a visitor badge.
[Image: Private factory guard]
Examples
Many installations, serving a myriad of different purposes, have
physical obstacles in place to deter intrusion. This can be high walls,
barbed wire, glass mounted on top of walls, etc.
PIR-based motion detectors are common in many places, as a means of noting intrusion into a physical installation.
Moreover, VSS/CCTV cameras are becoming increasingly common, as
a means of identifying persons who intrude into physical locations.
Businesses use a variety of options for physical security, including
security guards, electric security fencing, cameras, motion detectors,
and light beams.
[Image: Canadian Embassy in Washington, D.C., showing planters being used as vehicle barriers, and barriers and gates along the vehicle entrance]
ATMs (cash dispensers) are protected, not by making them
invulnerable, but by spoiling the money inside when they are attacked.
Money tainted with a dye could act as a flag to the money's unlawful acquisition.
Safes are rated in terms of the time in minutes which a skilled, well equipped safe-breaker is expected to require to
open the safe. These ratings are developed by highly skilled safe breakers employed by insurance agencies, such as
Underwriters Laboratories. In a properly designed system, either the time between inspections by a patrolling guard
should be less than that time, or an alarm response force should be able to reach it in less than that time.
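The design rule above, that either the patrol interval or the alarm response time must beat the safe's rated breaking time, can be expressed as a small check. This is illustrative only; the function name and all numbers are invented, not taken from any real safe rating:

```python
def safe_adequately_protected(rating_min: int,
                              patrol_interval_min: int,
                              alarm_response_min: int) -> bool:
    # The safe is adequately protected if a patrolling guard passes,
    # or a response force can arrive, before a skilled safe-breaker
    # could finish opening it (all values in minutes).
    return (patrol_interval_min < rating_min
            or alarm_response_min < rating_min)

print(safe_adequately_protected(30, 20, 45))  # True: 20-minute patrols beat a 30-minute rating
print(safe_adequately_protected(30, 60, 45))  # False: neither patrols nor response are fast enough
```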
Hiding the resources, or hiding the fact that resources are valuable, is also often a good idea as it will reduce the
exposure to opponents and will cause further delays during an attack, but should not be relied upon as a principal
means of ensuring security (see security through obscurity and inside job).
Not all aspects of Physical Security need be high tech. Even something as simple as a thick or pointy bush can add a
layer of physical security to some premises, especially in a residential setting.
See also
• Access badge
• Access control
• Alarm
• Alarm management
• Bank vault
• Biometrics
• Burglar alarm
• Castle
• Category:Security companies
• Closed-circuit television
• Common Access Card
• Computer security
• Credential
• Door security
• Electric fence
• Electronic lock
• Fence
• Fortification
• Guard tour patrol system
• ID Card
• IP video surveillance
• Keycards
• Lock picking
• Locksmithing
• Logical security
• Magnetic stripe card
• Mifare
• Optical turnstile
• Photo identification
• Physical key management
• Physical Security Professional
• Prison
• Proximity card
• Razor wire
• Safe
• Safe-cracking
• Security
• Security engineering
• Security lighting
• Security Operations Center
• Security policy
• Smart card
• Surveillance
• Swipe card
• Wiegand effect
References
[1] Task Committee (1999). Structural Design for Physical Security. ASCE. ISBN 0784404577.
[2] Anderson, Ross (2001). Security Engineering. Wiley. ISBN 0471389226.
[3] Oeltjen, Jason. "Different Types of Security Cameras" (http:/ / www. thecctvblog. com/ choosing-type-security-camera-installation/ ). .
3.0 Environmental and Facilities
4.0 Systems
Computer security
Computer security is a branch of computer technology known as information security as applied to computers and
networks. The objective of computer security includes protection of information and property from theft, corruption,
or natural disaster, while allowing the information and property to remain accessible and productive to its intended
users. The term computer system security means the collective processes and mechanisms by which sensitive and
valuable information and services are protected from publication, tampering or collapse by unauthorized activities or
untrustworthy individuals and unplanned events, respectively. The strategies and methodologies of computer security often differ from those of most other computer technologies because of computer security's somewhat elusive objective of preventing unwanted computer behavior instead of enabling wanted computer behavior.
Security by design
The technologies of computer security are based on logic. As security is not necessarily the primary goal of most
computer applications, designing a program with security in mind often imposes restrictions on that program's
behavior.
There are four approaches to security in computing; sometimes a combination of approaches is valid:
1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer
insecurity).
2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch
and path analysis for example).
3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again this is computer
insecurity).
4. Trust no software but enforce a security policy with trustworthy hardware mechanisms.
Computer security
Computers consist of software executing atop hardware, and a "computer system" is, by definition, a combination of hardware and software (and, arguably, firmware, should one choose to categorize it separately) that provides specific functionality, including either an explicitly expressed or (as is more often the case) implicitly carried along security policy. Indeed, citing the Department of Defense Trusted Computer System Evaluation Criteria (the TCSEC, or Orange Book), archaic though it may be, the inclusion of specially designed hardware features, including such approaches as tagged architectures and (to address the "stack smashing" attacks of recent notoriety) restriction of executable text to specific memory regions and/or register groups, was a sine qua non of the higher evaluation classes, namely B2 and above.
Many systems have unintentionally resulted in the first possibility. Since approach two is expensive and
non-deterministic, its use is very limited. Approaches one and three lead to failure. Because approach number four is
often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more
practical. Combinations of approaches two and four are often used in a layered architecture with thin layers of two
and thick layers of four.
There are various strategies and techniques used to design security systems. However there are few, if any, effective
strategies to enhance security after design. One technique enforces the principle of least privilege to great extent,
where an entity has only the privileges that are needed for its function. That way even if an attacker gains access to
one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest.
Furthermore, by breaking the system up into smaller components, the complexity of individual components is
reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to make modules secure.
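The principle of least privilege mentioned above can be sketched as components that hold only the capabilities their function requires, so compromising one component does not grant access to the rest of the system. The class and capability names below are invented for illustration:

```python
class Component:
    """A system component granted only the capabilities it needs."""

    def __init__(self, name: str, granted: list[str]):
        self.name = name
        self.granted = set(granted)

    def perform(self, action: str) -> str:
        # Any action outside the granted set is refused outright.
        if action not in self.granted:
            raise PermissionError(f"{self.name} lacks {action!r}")
        return f"{self.name} performed {action}"

# A log reader needs only read access; even if an attacker subverts
# it, no write or admin capability is exposed through it.
reader = Component("log-reader", ["read"])
print(reader.perform("read"))
try:
    reader.perform("write")
except PermissionError as e:
    print(e)
```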
The design should use "defense in depth", where more than one subsystem needs to be violated to compromise the
integrity of the system and the information it holds. Defense in depth works when the breaching of one security
measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle, so cascading several weak mechanisms does not provide the safety of a single stronger mechanism.
Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than
"fail insecure" (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a
deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it
insecure.
In addition, security should not be an all or nothing issue. The designers and operators of systems should assume that
security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach
occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only
be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs
are found the "window of vulnerability" is kept as short as possible.
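One way the remotely stored, append-only audit trail described above might be approximated in software is a hash chain, where each entry commits to the hash of the previous one, so any later tampering with history is detectable. This is a sketch under stated assumptions, not a description of any particular product; the class and field names are invented:

```python
import hashlib
import json

class AuditLog:
    """Append-only log: each entry is chained to the previous hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []          # list of (record, digest) pairs
        self.last_hash = self.GENESIS

    def append(self, event: str) -> None:
        record = {"event": event, "prev": self.last_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        digest = hashlib.sha256(payload).hexdigest()
        self.entries.append((record, digest))
        self.last_hash = digest

    def verify(self) -> bool:
        # Recompute every hash; any edited entry breaks the chain.
        prev = self.GENESIS
        for record, digest in self.entries:
            payload = json.dumps(record, sort_keys=True).encode()
            if record["prev"] != prev or hashlib.sha256(payload).hexdigest() != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.append("login: alice")
log.append("read: /etc/passwd")
print(log.verify())                        # True: chain intact
log.entries[0][0]["event"] = "login: bob"  # an intruder edits history
print(log.verify())                        # False: tampering detected
```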
Early history of security by design
The early Multics operating system was notable for its early emphasis on computer security by design, and Multics
was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this,
Multics' security was broken, not once, but repeatedly. The strategy was known as 'penetrate and test' and has
become widely known as a non-terminating process that fails to produce computer security. This led to further work
on computer security that prefigured modern security engineering techniques producing closed form processes that
terminate.
Security architecture
Security architecture can be defined as the design artifacts that describe how the security controls (security countermeasures) are positioned, and how they relate to the overall information technology architecture. These controls serve the purpose of maintaining the system's quality attributes, among them confidentiality, integrity, availability, accountability and assurance.[1] A security architecture is the plan that shows where security measures need to be placed. If the plan describes a specific solution, then, prior to building such a plan, one would make a risk analysis. If the plan describes a generic high-level design (reference architecture), then the plan should be based on a threat analysis.
Hardware mechanisms that protect computers and data
Hardware based or assisted computer security offers an alternative to software-only computer security. Devices such
as dongles may be considered more secure due to the physical access required in order to be compromised.
While many software based security solutions encrypt the data to prevent data from being stolen, a malicious
program or a hacker may corrupt the data in order to make it unrecoverable or unusable. Similarly, encrypted
operating systems can be corrupted by a malicious program or a hacker, making the system unusable.
Hardware-based security solutions can prevent read and write access to data and hence offer very strong protection against tampering and unauthorized access.
How hardware-based security works: a hardware device allows a user to log in, log out and set different privilege levels by performing manual actions. The device uses biometric technology to prevent malicious users from logging in, logging out, and changing privilege levels. The current state of a user of the device is read both by a computer and by controllers in peripheral devices such as hard disks. Illegal access by a malicious user or a malicious program is interrupted by the hard disk and DVD controllers based on the current state of the user, making illegal access to data impossible. Hardware-based access control is more secure than logging in and out using operating systems, as operating systems are vulnerable to malicious attacks. Since software cannot manipulate the user privilege levels, it is impossible for a hacker or a malicious program to gain access to secure data protected by hardware or to perform unauthorized privileged operations. The hardware protects the operating system image and file system privileges from being tampered with. Therefore, a completely secure system can be created using a combination of hardware-based security and secure system administration policies.
Secure operating systems
One use of the term computer security refers to technology to implement a secure operating system. Much of this
technology is based on science developed in the 1980s and used to produce what may be some of the most
impenetrable operating systems ever. Though still valid, the technology is in limited use today, primarily because it
imposes some changes to system management and also because it is not widely understood. Such ultra-strong secure
operating systems are based on operating system kernel technology that can guarantee that certain security policies
are absolutely enforced in an operating environment. An example of such a Computer security policy is the Bell-La
Padula model. The strategy is based on a coupling of special microprocessor hardware features, often involving the
memory management unit, to a special correctly implemented operating system kernel. This forms the foundation for
a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure the
absolute impossibility of penetration by hostile elements. This capability is enabled because the configuration not
only imposes a security policy, but in theory completely protects itself from corruption. Ordinary operating systems,
on the other hand, lack the features that assure this maximal level of security. The design methodology to produce
such secure systems is precise, deterministic and logical.
Systems designed with such methodology represent the state of the art of computer security although products using
such security are not widely known. In sharp contrast to most kinds of software, they meet specifications with
verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this
way are used primarily to protect national security information, military secrets, and the data of international
financial institutions. These are very powerful security tools and very few secure operating systems have been
certified at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to "unclassified"
(including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN.) The assurance of security
depends not only on the soundness of the design strategy, but also on the assurance of correctness of the
implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria
quantifies security strength of products in terms of two components, security functionality and assurance level (such
as EAL levels), and these are specified in a Protection Profile for requirements and a Security Target for product
descriptions. None of these ultra-high assurance secure general purpose operating systems have been produced for
decades or certified under the Common Criteria.
In USA parlance, the term High Assurance usually suggests the system has the right security functions that are
implemented robustly enough to protect DoD and DoE classified information. Medium assurance suggests it can
protect less valuable information, such as income tax information. Secure operating systems designed to meet
medium robustness levels of security functionality and assurance have seen wider use within both government and
commercial markets. Medium robust systems may provide the same security functions as high assurance secure
operating systems but do so at a lower assurance level (such as Common Criteria levels EAL4 or EAL5). Lower
levels mean we can be less certain that the security functions are implemented flawlessly, and that the systems are
therefore less dependable. These systems are found in use on web servers, guards, database servers, and management hosts and are
used not only to protect the data stored on these systems but also to provide a high level of protection for network
connections and routing services.
Secure coding
If the operating environment is not based on a secure operating system capable of maintaining a domain for its own
execution, and capable of protecting application code from malicious subversion, and capable of protecting the
system from subverted code, then high degrees of security are understandably not possible. While such secure
operating systems are possible and have been implemented, most commercial systems fall in a 'low security'
category because they rely on features not supported by secure operating systems (like portability, et al.). In low
security operating environments, applications must be relied on to participate in their own protection. There are 'best
effort' secure coding practices that can be followed to make an application more resistant to malicious subversion.
In commercial environments, the majority of software subversion vulnerabilities result from a few known kinds of
coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow,
and code/command injection. All of the foregoing are specific instances of a general class of attacks that exploit
situations in which putative "data" actually contains implicit or explicit executable instructions.
Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C
and C++" [2]). Other languages, such as Java, are more resistant to some of these defects, but are still prone to
code/command injection and other software defects which facilitate subversion.
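The code/command injection defect mentioned above can be illustrated with a minimal sketch. The function names and the filtering rule below are hypothetical, chosen only to show the contrast between splicing "data" into a command string (where a shell would interpret it) and keeping it as an inert argument:

```python
def build_ping_unsafe(host: str) -> str:
    # Vulnerable: attacker-controlled "data" is spliced into a shell
    # command line, so a host of "h; rm -rf /" smuggles in a second command.
    return "ping -c 1 " + host

def build_ping_safe(host: str) -> list[str]:
    # Safer: keep the argument as a discrete list element (never parsed
    # by a shell) and reject obviously suspicious shell metacharacters.
    if any(ch in host for ch in ";|&$`\n"):
        raise ValueError("shell metacharacter in host name")
    return ["ping", "-c", "1", host]
```

Passing the safe form as an argument list (for example to a process-spawning API that bypasses the shell) is what prevents the injected text from ever being executed.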
Recently another bad coding practice has come under scrutiny; dangling pointers. The first known exploit for this
particular problem was presented in July 2007. Before this publication the problem was known but considered to be
academic and not practically exploitable.[3]
Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically achievable, insofar as
the variety of mechanisms is too wide and the manners in which they can be exploited are too variegated. It is
interesting to note, however, that such vulnerabilities often arise from archaic philosophies in which computers were
assumed to be narrowly disseminated entities used by a chosen few, all of whom were likely highly educated, solidly
trained academics with naught but the goodness of mankind in mind. Thus, it was considered quite harmless if, for
(fictitious) example, a FORMAT string in a FORTRAN program could contain the J format specifier to mean "shut
down system after printing." After all, who would use such a feature but a well-intentioned system programmer? It
was simply beyond conception that software could be deployed in a destructive fashion.
It is worth noting that, in some languages, the distinction between code (ideally, read-only) and data (generally
read/write) is blurred. In LISP, particularly, there is no distinction whatsoever between code and data, both taking the
same form: an S-expression can be code, or data, or both, and the "user" of a LISP program who manages to insert
an executable LAMBDA segment into putative "data" can achieve arbitrarily general and dangerous functionality.
Even something as "modern" as Perl offers the eval() function, which enables one to generate Perl code and submit it
to the interpreter, disguised as string data.
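The same code/data blurring exists in Python's eval(); the sketch below (with a hypothetical payload and function names) contrasts it with ast.literal_eval, which parses data without executing it:

```python
import ast

# A string that looks like "data" but is actually executable code.
payload = "__import__('os').getcwd()"

def read_value_unsafe(text: str):
    # Dangerous: eval() executes whatever the string says,
    # so "data" can achieve arbitrary functionality.
    return eval(text)

def read_value_safe(text: str):
    # ast.literal_eval() accepts only Python literals (numbers, strings,
    # lists, dicts, ...) and raises ValueError for anything executable.
    return ast.literal_eval(text)
```

With eval() the payload actually runs os.getcwd(); with literal_eval it is rejected, because a function call is not a literal.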
Capabilities and access control lists
Within computer systems, two security models capable of enforcing privilege separation are access control lists
(ACLs) and capability-based security. The semantics of ACLs have been proven to be insecure in many situations,
e.g., the confused deputy problem. It has also been shown that the promise of ACLs of giving access to an object to
only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does
not mean practical flaws exist in all ACL-based systems, but only that the designers of certain utilities must take
responsibility to ensure that they do not introduce flaws.
Capabilities have been mostly restricted to research operating systems and commercial OSs still use ACLs.
Capabilities can, however, also be implemented at the language level, leading to a style of programming that is
essentially a refinement of standard object-oriented design. An open source project in the area is the E language.
First the Plessey System 250 and then Cambridge CAP computer demonstrated the use of capabilities, both in
hardware and software, in the 1970s. A reason for the lack of adoption of capabilities may be that ACLs appeared to
offer a 'quick fix' for security without pervasive redesign of the operating system and hardware.
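As a rough illustration of the two models (hypothetical data and names, not any real system's API): an ACL attaches a list of subjects and rights to each object, and the system decides by looking the subject up; a capability is a token whose mere possession conveys a bounded right to a specific object:

```python
# ACL model: each object carries a mapping of subjects to permissions.
acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
}

def acl_check(subject: str, obj: str, right: str) -> bool:
    # Decision is made by consulting the object's access control list.
    return right in acl.get(obj, {}).get(subject, set())

# Capability model: the token itself names the object and the rights.
# A real system makes such tokens unforgeable; a tuple stands in here.
def make_capability(obj: str, rights: frozenset) -> tuple:
    return (obj, rights)

def cap_check(capability: tuple, obj: str, right: str) -> bool:
    cap_obj, rights = capability
    return cap_obj == obj and right in rights
```

In the ACL model the system must correctly identify the requesting subject on every check (the root of problems like the confused deputy); in the capability model, whoever legitimately holds the token can exercise exactly the rights it carries, and no more.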
The most secure computers are those not connected to the Internet and shielded from any interference. In the real
world, the most security comes from operating systems where security is not an add-on.
Applications
Computer security is critical in almost any technology-driven industry which operates on computer
systems. Computer security can also be referred to as computer safety. Identifying the countless vulnerabilities of
computer-based systems and addressing them is an integral part of maintaining an operational industry.[4]
Cloud computing Security
Security in the cloud is challenging, due to the varied degrees of security features and management schemes within
cloud entities. A common logical protocol base needs to evolve so that the entire gamut of components
operates synchronously and securely.
In aviation
The aviation industry is especially important when analyzing computer security because the involved risks include
human life, expensive equipment, cargo, and transportation infrastructure. Security can be compromised by hardware
and software malpractice, human error, and faulty operating environments. Threats that exploit computer
vulnerabilities can stem from sabotage, espionage, industrial competition, terrorist attack, mechanical malfunction,
and human error.[5]
The consequences of a successful deliberate or inadvertent misuse of a computer system in the aviation industry
range from loss of confidentiality to loss of system integrity, which may lead to more serious concerns such as data
theft or loss, network and air traffic control outages, which in turn can lead to airport closures, loss of aircraft, loss of
passenger life. Military systems that control munitions can pose an even greater risk.
A successful attack does not need to be very high-tech or well funded; a power outage at an airport alone can cause
repercussions worldwide.[6] One of the easiest to mount and, arguably, most difficult to trace security vulnerabilities is
achievable by transmitting unauthorized communications over specific radio frequencies. These transmissions may
spoof air traffic controllers or simply disrupt communications altogether. Such incidents are common, having
altered flight courses of commercial aircraft and caused panic and confusion in the past. Controlling aircraft over
oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore. Beyond the radar's
sight, controllers must rely on periodic radio communications with a third party.
Lightning, power fluctuations, surges, brown-outs, blown fuses, and various other power outages instantly disable all
computer systems, since they are dependent on an electrical source. Other accidental and intentional faults have
caused significant disruption of safety critical systems throughout the last few decades and dependence on reliable
communication and electrical power only jeopardizes computer safety.
Notable system accidents
In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's
main command and research facility. Using trojan horse programs, hackers were able to obtain unrestricted access to
Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files,
such as air tasking order systems data and furthermore able to penetrate connected networks of National Aeronautics
and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense
contractors, and other private sector organizations, by posing as a trusted Rome center user.[7] Today, a technique
called ethical hack testing is used to find and remediate such issues.
Electromagnetic interference is another threat to computer safety and in 1989, a United States Air Force F-16 jet
accidentally dropped a 230 kg bomb in West Georgia after unspecified interference caused the jet's computers to
release it. [8]
A similar accident happened in 1994, when two UH-60 Black Hawk helicopters were mistakenly
destroyed by US F-15 aircraft over Iraq after the encryption component of the IFF (identification friend or foe) system malfunctioned.
Computer security policy
United States
Cybersecurity Act of 2010
On April 1, 2009, Senator Jay Rockefeller (D-WV) introduced the "Cybersecurity Act of 2009 - S. 773" (full text [9])
in the Senate; the bill, co-written with Senators Evan Bayh (D-IN), Barbara Mikulski (D-MD), Bill Nelson (D-FL),
and Olympia Snowe (R-ME), was referred to the Committee on Commerce, Science, and Transportation, which
approved a revised version of the same bill (the "Cybersecurity Act of 2010") on March 24, 2010[10] . The bill seeks
to increase collaboration between the public and the private sector on cybersecurity issues, especially those private
entities that own infrastructures that are critical to national security interests (the bill quotes John Brennan, the
Assistant to the President for Homeland Security and Counterterrorism: "our nation’s security and economic
prosperity depend on the security, stability, and integrity of communications and information infrastructure that are
largely privately-owned and globally-operated" and talks about the country's response to a "cyber-Katrina"[11] .),
increase public awareness on cybersecurity issues, and foster and fund cybersecurity research. Some of the most
controversial parts of the bill include Paragraph 315, which grants the President the right to "order the limitation or
shutdown of Internet traffic to and from any compromised Federal Government or United States critical
infrastructure information system or network[11] ." The Electronic Frontier Foundation, an international non-profit
digital rights advocacy and legal organization based in the United States, characterized the bill as promoting a
"potentially dangerous approach that favors the dramatic over the sober response"[12] .
International Cybercrime Reporting and Cooperation Act
On March 25, 2010, Representative Yvette Clarke (D-NY) introduced the "International Cybercrime Reporting and
Cooperation Act - H.R.4962" (full text [13]) in the House of Representatives; the bill, co-sponsored by seven other
representatives (only one of whom was a Republican), was referred to three House committees[14] . The bill seeks to
make sure that the administration keeps Congress informed on information infrastructure, cybercrime, and end-user
protection worldwide. It also "directs the President to give priority for assistance to improve legal, judicial, and
enforcement capabilities with respect to cybercrime to countries with low information and communications
technology levels of development or utilization in their critical infrastructure, telecommunications systems, and
financial industries"[14] as well as to develop an action plan and an annual compliance assessment for countries of
"cyber concern"[14] .
Protecting Cyberspace as a National Asset Act of 2010 ("Kill switch bill")
On June 19, 2010, United States Senator Joe Lieberman (I-CT) introduced a bill called "Protecting Cyberspace as a
National Asset Act of 2010 - S.3480" (full text in pdf [15]), which he co-wrote with Senator Susan Collins (R-ME)
and Senator Thomas Carper (D-DE). If signed into law, this controversial bill, which the American media dubbed
the "Kill switch bill", would grant the President emergency powers over the Internet. However, all three co-authors
of the bill issued a statement claiming that instead, the bill "[narrowed] existing broad Presidential authority to take
over telecommunications networks"[16] .
Terminology
The following terms used in engineering secure systems are explained below.
• Authentication techniques can be used to ensure that communication end-points are who they say they are.
• Automated theorem proving and other verification tools can enable critical algorithms and code used in secure
systems to be mathematically proven to meet their specifications.
• Capability and access control list techniques can be used to ensure privilege separation and mandatory access
control.
• Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic
by the system's designers.
• Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data
exchanged between systems can be intercepted or modified.
• Firewalls can provide some protection from online intrusion.
• A microkernel is a carefully crafted, deliberately small corpus of software that underlies the operating system per
se and is used solely to provide very low-level, very precisely defined primitives upon which an operating system
can be developed. A simple example with considerable didactic value is the early '90s GEMSOS (Gemini
Computers), which provided extremely low-level primitives, such as "segment" management, atop which an
operating system could be built. The theory (in the case of "segments") was that—rather than have the operating
system itself worry about mandatory access separation by means of military-style labeling—it is safer if a
low-level, independently scrutinized module can be charged solely with the management of individually labeled
segments, be they memory "segments" or file system "segments" or executable text "segments." If software below
the visibility of the operating system is (as in this case) charged with labeling, there is no theoretically viable
means for a clever hacker to subvert the labeling scheme, since the operating system per se does not provide
mechanisms for interfering with labeling: the operating system is, essentially, a client (an "application," arguably)
atop the microkernel and, as such, subject to its restrictions.
• Endpoint Security software helps networks to prevent data theft and virus infection through portable storage
devices, such as USB drives.
Some of the following items may belong to the computer insecurity article:
• Access authorization restricts access to a computer to a group of users through the use of authentication systems.
These systems can protect either the whole computer – such as through an interactive logon screen – or individual
services, such as an FTP server. There are many methods for identifying and authenticating users, such as
passwords, identification cards, and, more recently, smart cards and biometric systems.
• Anti-virus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses
and other malicious software (malware).
• Applications with known security flaws should not be run. Either leave the application turned off until it can be
patched or otherwise fixed, or delete it and replace it with some other application. Publicly known flaws are the main entry
used by worms to automatically break into a system and then spread to other systems connected to it. The security
website Secunia provides a search tool for unpatched known flaws in popular products.
• Backups are a way of securing information; they are another copy of all the important computer files kept in
another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Suggested locations for backups
are a fireproof, waterproof, and heatproof safe, or in a separate, offsite location from that in which the original
files are contained. Some individuals and companies also keep their backups in safe deposit boxes inside bank
vaults. There is also a fourth option, which involves using one of the file hosting services that backs up files over
the Internet for both business and individuals.
• Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes,
or tornadoes, may strike the building where the computer is located. The building can be on fire, or an
explosion may occur. There needs to be a recent backup at an alternate secure location, in case of such kind of
disaster. Further, it is recommended that the alternate location be placed where the same disaster would not
affect both locations. Examples of alternate disaster recovery sites being compromised by the same disaster
that affected the primary site include having had a primary site in World Trade Center I and the recovery site
in 7 World Trade Center, both of which were destroyed in the 9/11 attack, and having one's primary site and
recovery site in the same coastal region, which leads to both being vulnerable to hurricane damage (e.g.
primary site in New Orleans and recovery site in Jefferson Parish, both of which were hit by Hurricane Katrina
in 2005). The backup media should be moved between the geographic sites in a secure manner, in order to
prevent them from being stolen.
• Encryption is used to protect the message from the eyes of others. It can be done in several ways by switching the
characters around, replacing characters with others, and even removing characters from the message. These have to
be used in combination to make the encryption secure enough, that is to say, sufficiently difficult to crack. Public key
encryption is a refined and practical way of doing encryption. It allows for example anyone to write a message for a
list of recipients, and only those recipients will be able to read that message.
[Figure caption: Cryptographic techniques involve transforming information, scrambling it so it becomes unreadable
during transmission. The intended recipient can unscramble the message, but eavesdroppers cannot.]
• Firewalls are systems which help protect computers and computer networks from attack and subsequent intrusion
by restricting the network traffic which can pass through them, based on a set of system administrator defined
rules.
• Honey pots are computers that are either intentionally or unintentionally left vulnerable to attack by crackers.
They can be used to catch crackers or fix vulnerabilities.
• Intrusion-detection systems can scan a network for people that are on the network but who should not be there or
are doing things that they should not be doing, for example trying a lot of passwords to gain access to the
network.
• Pinging. The ping application can be used by potential crackers to find if an IP address is reachable. If a cracker
finds a computer, they can try a port scan to detect and attack services on that computer.
• Social engineering awareness: keeping employees aware of the dangers of social engineering, and having a policy
in place to prevent social engineering, can reduce successful breaches of the network and servers.
• File Integrity Monitors are tools used to detect changes in the integrity of systems and files.
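The character-replacement idea in the encryption item above can be shown with a toy Caesar-style shift. This is only a didactic sketch, far too weak for real use and not any production algorithm; each letter is replaced by the letter a fixed number of positions later in the alphabet:

```python
def caesar(text: str, shift: int) -> str:
    # Replace each letter with the letter `shift` positions later,
    # wrapping around the alphabet; non-letters pass through unchanged.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)
```

Shifting by 3 and then by -3 recovers the original message, which is the essential contract of any cipher: the intended recipient, knowing the key, can unscramble what eavesdroppers cannot.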
See also
• Attack tree
• Authentication
• Authorization
• CAPTCHA
• CERT
• Cloud computing security
• Computer security model
• Cryptographic hash function
• Cryptography
• Cyber security standards
• Dancing pigs
• Data security
• Differentiated security
• Ethical hack
• Fault tolerance
• Firewalls
• Formal methods
• Human-computer interaction (security)
• Identity management
• Information Leak Prevention
• Internet privacy
• ISO/IEC 15408
• Network security
• Network Security Toolkit
• OWASP
• Penetration test
• Physical information security
• Physical security
• Presumed security
• Proactive Cyber Defence
• Sandbox (computer security)
• Security Architecture
• Separation of protection and security
References
• Ross J. Anderson: Security Engineering: A Guide to Building Dependable Distributed Systems [6], ISBN
0-471-38922-6
• Morrie Gasser: Building a secure computer system [17] ISBN 0-442-23022-2 1988
• Stephen Haag, Maeve Cummings, Donald McCubbrey, Alain Pinsonneault, Richard Donovan: Management
Information Systems for the information age, ISBN 0-07-091120-7
• E. Stewart Lee: Essays about Computer Security [18] Cambridge, 1999
• Peter G. Neumann: Principled Assuredly Trustworthy Composable Architectures [19] 2004
• Paul A. Karger, Roger R. Schell: Thirty Years Later: Lessons from the Multics Security Evaluation [20], IBM
white paper.
• Bruce Schneier: Secrets & Lies: Digital Security in a Networked World, ISBN 0-471-25311-1
• Robert C. Seacord: Secure Coding in C and C++. Addison Wesley, September, 2005. ISBN 0-321-33572-4
• Clifford Stoll: Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, Pocket Books, ISBN
0-7434-1146-3
• Network Infrastructure Security [21], Angus Wong and Alan Yeung, Springer, 2009.
External links
• Security advisories links [22] from the Open Directory Project
• Top 5 Security No Brainers for Businesses [23] from Network World
References
[1] Definitions: IT Security Architecture (http://opensecurityarchitecture.com). SecurityArchitecture.org, Jan 2008
[2] http://www.cert.org/books/secure-coding
[3] New hacking technique exploits common programming error (http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1265116,00.html). SearchSecurity.com, July 2007
[4] J. C. Willemssen, "FAA Computer Security". GAO/T-AIMD-00-330. Presented at Committee on Science, House of Representatives, 2000.
[5] P. G. Neumann, "Computer Security in Aviation," presented at International Conference on Aviation Safety and Security in the 21st Century, White House Commission on Safety and Security, 1997.
[6] J. Zellan, Aviation Security. Hauppauge, NY: Nova Science, 2003, pp. 65–70.
[7] Information Security (http://www.fas.org/irp/gao/aim96084.htm). United States General Accounting Office, 1996
[8] Air Force Bombs Georgia (http://catless.ncl.ac.uk/Risks/8.72.html). The Risks Digest, vol. 8, no. 72, May 1989
[9] http://www.opencongress.org/bill/111-s773/text
[10] Cybersecurity bill passes first hurdle (http://www.computerworld.com/s/article/9174065/Cybersecurity_bill_passes_first_hurdle), Computer World, March 24, 2010. Retrieved on June 26, 2010.
[11] Cybersecurity Act of 2009 (http://www.opencongress.org/bill/111-s773/text), OpenCongress.org, April 1, 2009. Retrieved on June 26, 2010.
[12] Federal Authority Over the Internet? The Cybersecurity Act of 2009 (http://www.eff.org/deeplinks/2009/04/cybersecurity-act), eff.org, April 10, 2009. Retrieved on June 26, 2010.
[13] http://www.opencongress.org/bill/111-h4962/text
[14] H.R.4962 - International Cybercrime Reporting and Cooperation Act (http://www.opencongress.org/bill/111-h4962/show), OpenCongress.org. Retrieved on June 26, 2010.
[15] http://hsgac.senate.gov/public/index.cfm?FuseAction=Files.View&FileStore_id=4ee63497-ca5b-4a4b-9bba-04b7f4cb0123
[16] Senators Say Cybersecurity Bill Has No 'Kill Switch' (http://www.informationweek.com/news/government/security/showArticle.jhtml?articleID=225701368&subSection=News), informationweek.com, June 24, 2010. Retrieved on June 25, 2010.
[17] http://cs.unomaha.edu/~stanw/gasserbook.pdf
[18] http://www.cl.cam.ac.uk/~mgk25/lee-essays.pdf
[19] http://www.csl.sri.com/neumann/chats4.pdf
[20] http://www.acsac.org/2002/papers/classic-multics.pdf
[21] http://www.springer.com/computer/communications/book/978-1-4419-0165-1
[22] http://www.dmoz.org/Computers/Security/Advisories_and_Patches/
[23] http://www.networkworld.com/community/node/59971
Access control
Access control is a system which enables an authority to control access to areas and resources in a given physical
facility or computer-based information system. An access control system, within the field of physical security, is
generally seen as the second layer in the security of a physical structure.
Access control is, in reality, an everyday phenomenon. A lock on a car door is essentially a form of access control. A
PIN on an ATM system at a bank is another means of access control. Bouncers standing in front of a night club are
perhaps a more primitive mode of access control (given the evident lack of information technology involved). The
possession of access control is of prime importance when persons seek to secure important, confidential, or sensitive
information and equipment.
Item control or electronic key management is an area within (and possibly integrated with) an access control
system which concerns the managing of possession and location of small assets or physical (mechanical) keys.
Physical access
Physical access by a person may be allowed depending on payment,
authorization, etc. Also there may be one-way traffic of people. These
can be enforced by personnel such as a border guard, a doorman, a
ticket checker, etc., or with a device such as a turnstile. There may be
fences to avoid circumventing this access control. An alternative of
access control in the strict sense (physically controlling access itself) is
a system of checking authorized presence, see e.g. Ticket controller
(transportation). A variant is exit control, e.g. of a shop (checkout) or a
country.
[Image: Underground entrance to the New York City Subway system]
In physical security, the term access control refers to the practice of
restricting entrance to a property, a building, or a room to authorized
persons. Physical access control can be achieved by a human (a guard, bouncer, or receptionist), through mechanical
means such as locks and keys, or through technological means such as access control systems like the Access control
vestibule. Within these environments, physical key management may also be employed as a means of further
managing and monitoring access to mechanically keyed areas or access to certain small assets.
Physical access control is a matter of who, where, and when. An access control system determines who is allowed to
enter or exit, where they are allowed to exit or enter, and when they are allowed to enter or exit. Historically this was
partially accomplished through keys and locks. When a door is locked only someone with a key can enter through
the door depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key
holder to specific times or dates. Mechanical locks and keys do not provide records of the key used on any specific
door and the keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost or the
key holder is no longer authorized to use the protected area, the locks must be re-keyed.
Electronic access control uses computers to solve the limitations of mechanical locks and keys. A wide range of
credentials can be used to replace mechanical keys. The electronic access control system grants access based on the
credential presented. When access is granted, the door is unlocked for a predetermined time and the transaction is
recorded. When access is refused, the door remains locked and the attempted access is recorded. The system will
also monitor the door and alarm if the door is forced open or held open too long after being unlocked.
Access control
Access control system operation
When a credential is presented to a reader, the reader sends the credential’s information, usually a number, to a
control panel, a highly reliable processor. The control panel compares the credential's number to an access control
list, grants or denies the presented request, and sends a transaction log to a database. When access is denied based on
the access control list, the door remains locked. If there is a match between the credential and the access control list,
the control panel operates a relay that in turn unlocks the door. The control panel also ignores a door open signal to
prevent an alarm. Often the reader provides feedback, such as a flashing red LED for an access denied and a flashing
green LED for an access granted.
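The panel's decision logic described above can be sketched as follows. The credential numbers, door names, and function name are hypothetical, not any vendor's firmware; the point is the compare-log-decide cycle:

```python
# Access control list mapping credential numbers to permitted doors.
access_list = {
    4711: {"server_room", "lobby"},
    4712: {"lobby"},
}

transaction_log = []

def present_credential(credential: int, door: str) -> bool:
    # Compare the credential number against the access control list,
    # log the transaction, and report whether the relay should fire.
    granted = door in access_list.get(credential, set())
    transaction_log.append((credential, door, granted))
    return granted  # True -> unlock door, temporarily suppress door-open alarm
```

A True result corresponds to the panel operating the relay and ignoring the door-open signal; a False result leaves the door locked, with the attempt still recorded.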
The above description illustrates a single factor transaction. Credentials can be passed around, thus subverting the
access control list. For example, Alice has access rights to the server room but Bob does not. Alice either gives Bob
her credential or Bob takes it; he now has access to the server room. To prevent this, two-factor authentication can be
used. In a two-factor transaction, the presented credential and a second factor are needed for access to be granted.
The second factor can be a PIN, a second credential, operator intervention, or a biometric input.
There are three types (factors) of authenticating information:
• something the user knows, such as a password or pass-phrase
• something the user has, such as a smart card
• something the user is, such as a fingerprint, verified by biometric measurement
Passwords are a common means of verifying a user's identity before access is given to an information system. In
addition, a fourth factor of authentication is now recognized: someone you know, where another person who knows
you can provide a human element of authentication in situations where systems have been set up to allow for such
scenarios. For example, a user may have their password but have forgotten their smart card. In such a scenario, if the
user is known to designated cohorts, the cohorts may provide their smart card and password in combination with the
extant factor of the user in question, and thus provide two factors for the user with the missing credential, and three
factors overall, to allow access.
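A two-factor transaction can be modeled minimally as below. The enrollment record and names are illustrative only; a real system would verify each factor against secure stores (e.g. a hashed PIN) rather than plain values:

```python
# Hypothetical enrollment record: something the user HAS (card number)
# and something the user KNOWS (PIN). Plain values keep the sketch readable.
enrolled = {"alice": {"card": 4711, "pin": "2468"}}

def two_factor_check(user: str, card: int, pin: str) -> bool:
    # Access requires BOTH factors: a stolen card alone, or a
    # guessed PIN alone, is not enough.
    record = enrolled.get(user)
    return record is not None and record["card"] == card and record["pin"] == pin
```

This is why passing a credential to someone else no longer suffices: the borrower would also need the second factor.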
Credential
A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being, that enables
an individual access to a given physical facility or computer-based information system. Typically, credentials can be
something you know (such as number or PIN), something you have (such as an access badge), something you are
(such as a biometric feature) or some combination of these items. The typical credential is an access card, key fob, or
other key. There are many card technologies including magnetic stripe, bar code, Wiegand, 125 kHz proximity, 26
bit card-swipe, contact smart cards, and contactless smart cards. Also available are key-fobs which are more compact
than ID cards and attach to a key ring. Typical biometric technologies include fingerprint, facial recognition, iris
recognition, retinal scan, voice, and hand geometry.
Access control system components
An access control point, which can be a door, turnstile, parking gate, elevator, or other physical barrier where
granting access can be electrically controlled. Typically the access point is a door. An electronic access control door
can contain several elements. At its most basic there is a stand-alone electric lock. The lock is unlocked by an
operator with a switch. To automate this, operator intervention is replaced by a reader. The reader could be a keypad
where a code is entered, it could be a card reader, or it could be a biometric reader. Readers do not usually make an
access decision but send a card number to an access control panel that verifies the number against an access list. To
monitor the door position a magnetic door switch is used. In concept the door switch is not unlike those on
refrigerators or car doors. Generally only entry is controlled and exit is uncontrolled. In cases where exit is also
controlled a second reader is used on the opposite side of the door. In cases where exit is not controlled, free exit, a
device called a request-to-exit (REX) is used. Request-to-exit devices can be a pushbutton or a motion detector.
When the button is pushed or the motion detector detects motion at the door, the door alarm is temporarily ignored
while the door is opened. Exiting a door without having to electrically unlock the door is called mechanical free
egress. This is an important safety feature. In cases where the lock must be electrically unlocked on exit, the
request-to-exit device also unlocks the door.
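The reader-to-panel lookup described above (the reader forwards a raw card number; the access control panel alone makes the decision) can be sketched as follows. The door names and card numbers are hypothetical.

```python
# Minimal sketch of a control panel's lookup: the reader forwards a
# card number, and the panel checks it against an access list. The
# data below is made up for illustration.

ACCESS_LIST = {
    "front_door": {40123, 40124},   # card numbers permitted at this door
    "server_room": {40124},
}

def panel_decision(door, card_number):
    """Return True (unlock) if the card is on the door's access list."""
    return card_number in ACCESS_LIST.get(door, set())

print(panel_decision("front_door", 40123))   # True: unlock
print(panel_decision("server_room", 40123))  # False: hold locked
```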
Access control topology
Access control decisions are made by comparing the credential to an
access control list. This lookup can be done by a host or server, by an
access control panel, or by a reader. The development of access control
systems has seen a steady push of the lookup out from a central host to
the edge of the system, or the reader. The predominant topology circa
2009 is hub and spoke, with a control panel as the hub and the readers
as the spokes. The lookup and control functions are performed by the control
panel. The spokes communicate through a serial connection, usually
RS-485. Some manufacturers are pushing the decision making to the
edge by placing a controller at the door. The controllers are IP-enabled
and connect to a host and database using standard networks.
Types of readers
[Figure: Typical access control door wiring]
Access control readers may be classified by functions they are able to
perform:
• Basic (non-intelligent) readers: simply read the card number or PIN and
forward it to a control panel. In the case of biometric identification,
such readers output the ID number of a user. Typically the Wiegand
protocol is used for transmitting data to the control panel, but other
options such as RS-232, RS-485 and Clock/Data are not
uncommon. This is the most popular type of access control reader.
Examples of such readers are RF Tiny by RFLOGICS, ProxPoint by
HID, and P300 by Farpointe Data.
• Semi-intelligent readers: have all inputs and outputs necessary to
control door hardware (lock, door contact, exit button), but do not
make any access decisions. When a user presents a card or enters a
PIN, the reader sends the information to the main controller and waits
for its response. If the connection to the main controller is interrupted, such readers stop working or function in a
degraded mode. Usually semi-intelligent readers are connected to a control panel via an RS-485 bus. Examples of
such readers are InfoProx Lite IPL200 by CEM Systems and AP-510 by Apollo.
[Figure: Access control door wiring when using intelligent readers]
• Intelligent readers: have all inputs and outputs necessary to control door hardware, they also have memory and
processing power necessary to make access decisions independently. Same as semi-intelligent readers they are
connected to a control panel via an RS-485 bus. The control panel sends configuration updates and retrieves
events from the readers. Examples of such readers could be InfoProx IPO200 by CEM Systems and AP-500 by
Apollo. There is also a new generation of intelligent readers referred to as "IP readers". Systems with IP readers
usually do not have traditional control panels, and the readers communicate directly with a PC that acts as a host.
Examples of such readers are the PowerNet IP Reader by Isonas Security Systems, the ID08 by Solus (which has a
built-in web service to make it user friendly), the Edge ER40 reader by HID Global, LogLock and UNiLOCK by ASPiSYS
Ltd, and BioEntry Plus reader by Suprema Inc.
Some readers may have additional features such as LCD and function buttons for data collection purposes (i.e.
clock-in/clock-out events for attendance reports), camera/speaker/microphone for intercom, and smart card
read/write support.
Access control readers may also be classified by the type of identification technology.
Access control system topologies
1. Serial controllers. Controllers are connected to a host PC via a
serial RS-485 communication line (or via a 20 mA current loop in some
older systems). External RS-232/485 converters or internal RS-485
cards have to be installed, as standard PCs do not have RS-485
communication ports. In larger systems, multi-port serial I/O boards are
used, Digi International being one of the most popular options.
[Figure: Access control system using serial controllers]
Advantages:
• RS-485 standard allows long cable runs, up to 4000 feet (1200 m)
• Relatively short response time. The maximum number of devices on an RS-485 line is limited to 32, which means
that the host can frequently request status updates from each device and display events almost in real time.
• High reliability and security as the communication line is not shared with any other systems.
Disadvantages:
• RS-485 does not allow star-type wiring unless splitters are used
• RS-485 is not well suited for transferring large amounts of data (i.e. configuration and users). The highest
possible throughput is 115.2 kbit/s, but in most systems it is downgraded to 56.2 kbit/s or less to increase
reliability.
• RS-485 does not allow host PC to communicate with several controllers connected to the same port
simultaneously. Therefore in large systems transfers of configuration and users to controllers may take a very
long time and interfere with normal operations.
• Controllers cannot initiate communication in case of an alarm. The host PC acts as a master on the RS-485
communication line and controllers have to wait till they are polled.
• Special serial switches are required in order to build a redundant host PC setup.
• Separate RS-485 lines have to be installed instead of using an already existing network infrastructure.
• Cable that meets RS-485 standards is significantly more expensive than the regular Category 5 UTP network
cable.
• Operation of the system is highly dependent on the host PC. In case the host PC fails, events from controllers are
not retrieved and functions that require interaction between controllers (i.e. anti-passback) stop working.
2. Serial main and sub-controllers. All door hardware is connected to
sub-controllers (a.k.a. door controllers or door interfaces).
Sub-controllers usually do not make access decisions, and forward all
requests to the main controllers. Main controllers usually support from
16 to 32 sub-controllers. Advantages:
• Work load on the host PC is significantly reduced, because it only
needs to communicate with a few main controllers.
• The overall cost of the system is lower, as sub-controllers are
usually simple and inexpensive devices.
• All other advantages listed in the first paragraph apply.
[Figure: Access control system using serial main and sub-controllers]
Disadvantages:
• Operation of the system is highly dependent on main controllers. In case one of the main controllers fails, events
from its sub-controllers are not retrieved and functions that require interaction between sub-controllers (i.e.
anti-passback) stop working.
• Some models of sub-controllers (usually lower cost) have no memory and processing power to make access
decisions independently. If the main controller fails, sub-controllers change to degraded mode in which doors are
either completely locked or unlocked and no events are recorded. Such sub-controllers should be avoided or used
only in areas that do not require high security.
• Main controllers tend to be expensive, therefore such topology is not very well suited for systems with multiple
remote locations that have only a few doors.
• All other RS-485-related disadvantages listed in the first paragraph apply.
3. Serial main controllers & intelligent readers. All door hardware is
connected directly to intelligent or semi-intelligent readers. Readers
usually do not make access decisions, and forward all requests to the
main controller. Only if the connection to the main controller is
unavailable do the readers use their internal database to make access
decisions and record events. Semi-intelligent readers that have no
database and cannot function without the main controller should be
used only in areas that do not require high security. Main controllers
usually support from 16 to 64 readers. All advantages and disadvantages are the same as the ones listed in the second
paragraph.
[Figure: Access control system using serial main controller and intelligent readers]
4. Serial controllers with terminal servers. In spite of the rapid
development and increasing use of computer networks, access control
manufacturers remained conservative and did not rush to introduce
network-enabled products. When pressed for solutions with network
connectivity, many chose the option requiring less effort: the addition of a
terminal server, a device that converts serial data for transmission via
LAN or WAN. Terminal servers manufactured by Lantronix and Tibbo
Technology are popular in the security industry. Advantages:
• Allows utilizing existing network infrastructure for connecting
separate segments of the system.
• Provides convenient solution in cases when installation of an
RS-485 line would be difficult or impossible.
Disadvantages:
• Increases complexity of the system.
• Creates additional work for installers: usually terminal servers have
to be configured independently, not through the interface of the
access control software.
• Serial communication link between the controller and the terminal server acts as a bottleneck: even though the
data between the host PC and the terminal server travels at 10/100/1000 Mbit/s network speed, it then slows down
to the serial speed of 115.2 kbit/s or less. There are also additional delays introduced in the process of conversion
between serial and network data.
[Figure: Access control systems using serial controllers and terminal servers]
All RS-485-related advantages and disadvantages also apply.
5. Network-enabled main controllers. The topology is nearly the
same as described in the second and third paragraphs. The same
advantages and disadvantages apply, but the on-board network
interface offers a couple of valuable improvements. Transmission of
configuration and users to the main controllers is faster and may be
done in parallel. This makes the system more responsive and does not
interrupt normal operations. No special hardware is required in order to
achieve a redundant host PC setup: in case the primary host PC fails, the
secondary host PC may start polling network controllers. The
disadvantages introduced by terminal servers (listed in the fourth
paragraph) are also eliminated.
[Figure: Access control system using network-enabled main controllers]
6. IP controllers. Controllers are connected to a host PC via Ethernet
LAN or WAN.
[Figure: Access control system using IP controllers]
Advantages:
• An existing network infrastructure is fully utilized; there is no need
to install new communication lines.
• There are no limitations regarding the number of controllers (32 per
line in the case of RS-485).
• Special RS-485 installation, termination, grounding and
troubleshooting knowledge is not required.
• Communication with controllers may be done at the full network speed, which is important if transferring a lot of
data (databases with thousands of users, possibly including biometric records).
• In case of an alarm, controllers may initiate connection to the host PC. This ability is important in large systems
because it reduces network traffic caused by unnecessary polling.
• Simplifies installation of systems consisting of multiple sites separated by large distances. Basic Internet link is
sufficient to establish connections to remote locations.
• Wide selection of standard network equipment is available to provide connectivity in different situations (fiber,
wireless, VPN, dual path, PoE)
Disadvantages:
• The system becomes susceptible to network related problems, such as delays in case of heavy traffic and network
equipment failures.
• Access controllers and workstations may become accessible to hackers if the network of the organization is not
well protected. This threat may be eliminated by physically separating the access control network from the
network of the organization. Also, it should be noted that most IP controllers utilize either the Linux platform or
proprietary operating systems, which makes them more difficult to hack. Industry-standard data encryption is also
used.
• Maximum distance from a hub or a switch to the controller is 100 meters (330 ft).
• Operation of the system is dependent on the host PC. In case the host PC fails, events from controllers are not
retrieved and functions that require interaction between controllers (i.e. anti-passback) stop working. Some
controllers, however, have a peer-to-peer communication option in order to reduce dependency on the host PC.
7. IP readers. Readers are connected to a host PC via Ethernet LAN or
WAN.
[Figure: Access control system using IP readers]
Advantages:
• Most IP readers are PoE capable. This feature makes it very easy to
provide battery-backed power to the entire system, including the
locks and various types of detectors (if used).
• IP readers eliminate the need for controller enclosures.
• There is no wasted capacity when using IP readers (i.e. a 4-door controller would have 25% unused capacity if it
was controlling only 3 doors).
• IP reader systems scale easily: there is no need to install new main or sub-controllers.
• Failure of one IP reader does not affect any other readers in the system.
Disadvantages:
• In order to be used in high-security areas IP readers require special input/output modules to eliminate the
possibility of intrusion by accessing lock and/or exit button wiring. Not all IP reader manufacturers have such
modules available.
• Being more sophisticated than basic readers, IP readers are also more expensive and sensitive; therefore they
should not be installed outdoors in areas with harsh weather conditions or a high possibility of vandalism.
• The variety of IP readers in terms of identification technologies and read range is much lower than that of the
basic readers.
The advantages and disadvantages of IP controllers apply to the IP readers as well.
Security risks
The most common security risk of intrusion through an access control
system is simply following a legitimate user through a door, known as
tailgating. Often the legitimate user will hold the door for the intruder.
This risk can be minimized through security awareness training of the
user population or more active means such as turnstiles. In very high
security applications this risk is minimized by using a sally port,
sometimes called a security vestibule or mantrap, where operator
intervention is required, presumably to assure valid identification.
The second most common risk is from levering the door open. This is
surprisingly simple and effective on most doors. The lever could be as
small as a screwdriver or as big as a crowbar. Fully implemented access
control systems include forced-door monitoring alarms. These vary in
effectiveness, usually failing due to high false-positive alarm rates, poor
database configuration, or lack of active intrusion monitoring.
[Figure: Access control door wiring when using intelligent readers and IO module]
Similar to levering is crashing through cheap partition walls. In shared tenant spaces the demising wall is a
vulnerability. Along the same lines is breaking sidelights.
Spoofing locking hardware is fairly simple and more elegant than levering. A strong magnet can operate the solenoid
controlling bolts in electric locking hardware. Motor locks, more prevalent in Europe than in the US, are also
susceptible to this attack using a donut shaped magnet. It is also possible to manipulate the power to the lock either
by removing or adding current.
Access cards themselves have proven vulnerable to sophisticated attacks. Enterprising hackers have built portable
readers that capture the card number from a user’s proximity card. The hacker simply walks by the user, reads the
card, and then presents the number to a reader securing the door. This is possible because card numbers are sent in
the clear, no encryption being used.
Finally, most electric locking hardware still have mechanical keys as a failover. Mechanical key locks are vulnerable
to bumping.
The need-to-know principle
The need-to-know principle can be enforced with user access controls and authorization procedures, and its objective
is to ensure that only authorized individuals gain access to information or systems necessary to undertake their
duties. See the principle of least privilege.
Computer security
In computer security, access control includes authentication, authorization and audit. It also includes measures such
as physical devices, including biometric scans and metal locks, hidden paths, digital signatures, encryption, social
barriers, and monitoring by humans and automated systems.
In any access control model, the entities that can perform actions in the system are called subjects, and the entities
representing resources to which access may need to be controlled are called objects (see also Access Control
Matrix). Subjects and objects should both be considered as software entities and as human users[1]. Although some
systems equate subjects with user IDs, so that all processes started by a user by default have the same authority, this
level of control is not fine-grained enough to satisfy the Principle of least privilege, and arguably is responsible for
the prevalence of malware in such systems (see computer insecurity).
In some models, for example the object-capability model, any software entity can potentially act as both a subject
and object.
Access control models used by current systems tend to fall into one of two classes: those based on capabilities and
those based on access control lists (ACLs). In a capability-based model, holding an unforgeable reference or
capability to an object provides access to the object (roughly analogous to how possession of your house key grants
you access to your house); access is conveyed to another party by transmitting such a capability over a secure
channel. In an ACL-based model, a subject's access to an object depends on whether its identity is on a list
associated with the object (roughly analogous to how a bouncer at a private party would check your ID to see if your
name is on the guest list); access is conveyed by editing the list. (Different ACL systems have a variety of different
conventions regarding who or what is responsible for editing the list and how it is edited.)
Both capability-based and ACL-based models have mechanisms to allow access rights to be granted to all members
of a group of subjects (often the group is itself modeled as a subject).
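The two model families can be contrasted with a toy sketch: in the capability model, possession of an unforgeable reference conveys access, while in the ACL model, the object carries a list of authorized identities. The class names and data below are illustrative, not any real API.

```python
# Toy contrast of capability-based vs. ACL-based access models.

import secrets

class CapabilityFile:
    """Access via possession: holding the token IS the authorization."""
    def __init__(self, contents):
        self._contents = contents
        self.capability = secrets.token_hex(16)  # unforgeable reference
    def read(self, token):
        if token != self.capability:
            raise PermissionError("no capability presented")
        return self._contents

class ACLFile:
    """Access via identity: the object checks a list it carries."""
    def __init__(self, contents, acl):
        self._contents = contents
        self._acl = acl                          # e.g. {"alice": {"read"}}
    def read(self, subject):
        if "read" not in self._acl.get(subject, set()):
            raise PermissionError(f"{subject} is not on the ACL")
        return self._contents

cap_file = CapabilityFile("secret")
print(cap_file.read(cap_file.capability))        # granted by possession

acl_file = ACLFile("secret", {"alice": {"read"}})
print(acl_file.read("alice"))                    # granted by identity
```

In the capability sketch, access is delegated simply by handing the token to another party; in the ACL sketch, delegation requires editing the list on the object, mirroring the distinction drawn in the text.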
Access control systems provide the essential services of identification and authentication (I&A), authorization, and
accountability where:
• identification and authentication determine who can log on to a system, and the association of users with the
software subjects that they are able to control as a result of logging in;
• authorization determines what a subject can do;
• accountability identifies what a subject (or all subjects associated with a user) did.
Identification and authentication (I&A)
Identification and authentication (I&A) is the process of verifying that an identity is bound to the entity that makes
an assertion or claim of identity. The I&A process assumes that there was an initial validation of the identity,
commonly called identity proofing. Various methods of identity proofing are available ranging from in person
validation using government issued identification to anonymous methods that allow the claimant to remain
anonymous, but known to the system if they return. The method used for identity proofing and validation should
provide an assurance level commensurate with the intended use of the identity within the system. Subsequently, the
entity asserts an identity together with an authenticator as a means for validation. The only requirement for the
identifier is that it must be unique within its security domain.
Authenticators are commonly based on at least one of the following four factors:
• Something you know, such as a password or a personal identification number (PIN). This assumes that only the
owner of the account knows the password or PIN needed to access the account.
• Something you have, such as a smart card or security token. This assumes that only the owner of the account has
the necessary smart card or token needed to unlock the account.
• Something you are, such as fingerprint, voice, retina, or iris characteristics.
• Where you are, for example inside or outside a company firewall, or proximity of login location to a personal
GPS device.
Authorization
Authorization applies to subjects. Authorization determines what a subject can do on the system.
Most modern operating systems define sets of permissions that are variations or extensions of three basic types of
access:
• Read (R): The subject can
• Read file contents
• List directory contents
• Write (W): The subject can change the contents of a file or directory with the following tasks:
• Add
• Create
• Delete
• Rename
• Execute (X): If the file is a program, the subject can cause the program to be run. (In Unix systems, the 'execute'
permission doubles as a 'traverse directory' permission when granted for a directory.)
These rights and permissions are implemented differently in systems based on discretionary access control (DAC)
and mandatory access control (MAC).
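The read/write/execute triplet above maps directly onto the Unix permission bits, which can be decoded with the standard stat module:

```python
# Decoding the basic read/write/execute permission triplet for the
# file owner from a Unix mode value, using only the stdlib stat module.

import stat

def owner_rwx(mode):
    """Return the owner's (read, write, execute) booleans for a mode."""
    return (bool(mode & stat.S_IRUSR),
            bool(mode & stat.S_IWUSR),
            bool(mode & stat.S_IXUSR))

# 0o640: owner may read and write, group may read, others get nothing.
print(owner_rwx(0o640))  # (True, True, False)
```

The same masks exist for the group (S_IRGRP, ...) and for others (S_IROTH, ...), so the full nine-bit DAC permission set is decoded the same way.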
Accountability
Accountability uses such system components as audit trails (records) and logs to associate a subject with its actions.
The information recorded should be sufficient to map the subject to a controlling user. Audit trails and logs are
important for
• Detecting security violations
• Re-creating security incidents
If no one is regularly reviewing your logs and they are not maintained in a secure and consistent manner, they may
not be admissible as evidence.
Many systems can generate automated reports based on certain predefined criteria or thresholds, known as clipping
levels. For example, a clipping level may be set to generate a report for the following:
• More than three failed logon attempts in a given period
• Any attempt to use a disabled user account
These reports help a system administrator or security administrator to more easily identify possible break-in
attempts.
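A clipping-level report of the kind described above can be sketched as a simple scan over audit-log entries. The log format and threshold here are illustrative.

```python
# Sketch of a clipping level: scan an audit log and report only users
# who exceed a threshold of failed logons. Log entries are made up.

from collections import Counter

CLIPPING_LEVEL = 3  # report more than three failures

def failed_logon_report(log_entries):
    """log_entries: iterable of (user, event) tuples."""
    failures = Counter(user for user, event in log_entries
                       if event == "logon_failed")
    return {user: n for user, n in failures.items() if n > CLIPPING_LEVEL}

log = [("alice", "logon_failed")] * 4 + [("bob", "logon_failed")]
print(failed_logon_report(log))  # {'alice': 4} - bob stays below the level
```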
Access control techniques
Access control techniques are sometimes categorized as either discretionary or non-discretionary. The three most
widely recognized models are Discretionary Access Control (DAC), Mandatory Access Control (MAC), and Role
Based Access Control (RBAC). MAC and RBAC are both non-discretionary.
Attribute-based Access Control
In attribute-based access control, access is granted not based on the rights of the subject associated with a user after
authentication, but based on attributes of the user. The user has to prove so-called claims about his attributes to the
access control engine. An attribute-based access control policy specifies which claims need to be satisfied in order to
grant access to an object. For instance, the claim could be "older than 18". Any user that can prove this claim is
granted access. Users can be anonymous, as authentication and identification are not strictly required. One does,
however, require means for proving claims anonymously. This can, for instance, be achieved using anonymous
credentials.
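The "older than 18" example can be sketched by modeling a claim as a predicate over the subject's attributes; no identity is involved in the decision. The attribute names are illustrative, and this sketch omits the cryptographic machinery (anonymous credentials) needed to prove claims without revealing attributes.

```python
# Attribute-based sketch: the policy names a claim, and any subject
# whose attributes satisfy it is granted access. No user identity is
# consulted. Attribute names are made up for illustration.

def satisfies(claim, attributes):
    """A claim here is simply a predicate over the attribute dict."""
    return claim(attributes)

def older_than_18(attrs):
    return attrs.get("age", 0) > 18

print(satisfies(older_than_18, {"age": 32}))  # True: grant
print(satisfies(older_than_18, {"age": 17}))  # False: deny
```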
Discretionary access control
Discretionary access control (DAC) is an access policy determined by the owner of an object. The owner decides
who is allowed to access the object and what privileges they have.
Two important concepts in DAC are
• File and data ownership: Every object in the system has an owner. In most DAC systems, each object's initial
owner is the subject that caused it to be created. The access policy for an object is determined by its owner.
• Access rights and permissions: These are the controls that an owner can assign to other subjects for specific
resources.
Access controls may be discretionary in ACL-based or capability-based access control systems. (In capability-based
systems, there is usually no explicit concept of 'owner', but the creator of an object has a similar degree of control
over its access policy.)
Mandatory access control
Mandatory access control (MAC) is an access policy determined by the system, not the owner. MAC is used in
multilevel systems that process highly sensitive data, such as classified government and military information. A
multilevel system is a single computer system that handles multiple classification levels between subjects and
objects.
• Sensitivity labels: In a MAC-based system, all subjects and objects must have labels assigned to them. A subject's
sensitivity label specifies its level of trust. An object's sensitivity label specifies the level of trust required for
access. In order to access a given object, the subject must have a sensitivity level equal to or higher than the
requested object.
• Data import and export: Controlling the import of information from other systems and export to other systems
(including printers) is a critical function of MAC-based systems, which must ensure that sensitivity labels are
properly maintained and implemented so that sensitive information is appropriately protected at all times.
Two methods are commonly used for applying mandatory access control:
• Rule-based (or label-based) access control: This type of control further defines specific conditions for access to a
requested object. All MAC-based systems implement a simple form of rule-based access control to determine
whether access should be granted or denied by matching:
• An object's sensitivity label
• A subject's sensitivity label
• Lattice-based access control: These can be used for complex access control decisions involving multiple objects
and/or subjects. A lattice model is a mathematical structure that defines greatest lower-bound and least
upper-bound values for a pair of elements, such as a subject and an object.
Few systems implement MAC. XTS-400 is an example of one that does. The computer system at the company in the
movie Tron is an example of MAC in popular culture.
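The label-matching rule described above (a subject may access an object only if its sensitivity label is equal to or higher than the object's) can be sketched directly. The label ordering below is an illustrative example, not taken from any particular system.

```python
# Sketch of the MAC rule: the subject's sensitivity label must dominate
# (equal or exceed) the object's label. The ordering is illustrative.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def mac_allows(subject_label, object_label):
    """Return True if the subject's label dominates the object's."""
    return LEVELS[subject_label] >= LEVELS[object_label]

print(mac_allows("secret", "confidential"))  # True: access granted
print(mac_allows("confidential", "secret"))  # False: access denied
```

A totally ordered set of levels like this is the simplest case; lattice-based systems generalize it to labels that also carry compartments, where dominance is a partial order.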
Role-based access control
Role-based access control (RBAC) is an access policy determined by the system, not the owner. RBAC is used in
commercial applications and also in military systems, where multi-level security requirements may also exist. RBAC
differs from DAC in that DAC allows users to control access to their resources, while in RBAC, access is controlled
at the system level, outside of the user's control. Although RBAC is non-discretionary, it can be distinguished from
MAC primarily in the way permissions are handled. MAC controls read and write permissions based on a user's
clearance level and additional labels. RBAC controls collections of permissions that may include complex operations
such as an e-commerce transaction, or may be as simple as read or write. A role in RBAC can be viewed as a set of
permissions.
Three primary rules are defined for RBAC:
1. Role assignment: A subject can execute a transaction only if the subject has selected or been assigned a role.
2. Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule ensures
that users can take on only roles for which they are authorized.
3. Transaction authorization: A subject can execute a transaction only if the transaction is authorized for the subject's
active role. With rules 1 and 2, this rule ensures that users can execute only transactions for which they are
authorized.
Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles
subsume permissions owned by sub-roles.
Most IT vendors offer RBAC in one or more products.
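The three RBAC rules can be sketched as successive checks; each early return corresponds to one rule. All user, role, and permission names are made up for illustration.

```python
# Sketch of the three RBAC rules. Data is illustrative only.

ROLE_ASSIGNMENTS = {"alice": {"teller", "auditor"}}      # rule 1 data
AUTHORIZED_ROLES = {"teller", "auditor", "manager"}      # rule 2 data
ROLE_PERMISSIONS = {"teller": {"deposit", "withdraw"},   # rule 3 data
                    "auditor": {"read_ledger"}}

def can_execute(user, active_role, transaction):
    if active_role not in ROLE_ASSIGNMENTS.get(user, set()):
        return False  # rule 1: the role must be assigned to the subject
    if active_role not in AUTHORIZED_ROLES:
        return False  # rule 2: the active role must be authorized
    # rule 3: the transaction must be authorized for the active role
    return transaction in ROLE_PERMISSIONS.get(active_role, set())

print(can_execute("alice", "teller", "deposit"))   # True
print(can_execute("alice", "manager", "deposit"))  # False: role not assigned
```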
Telecommunication
In telecommunication, the term access control is defined in U.S. Federal Standard 1037C[2] with the following
meanings:
1. A service feature or technique used to permit or deny use of the components of a communication system.
2. A technique used to define or restrict the rights of individuals or application programs to obtain data from, or
place data onto, a storage device.
3. The definition or restriction of the rights of individuals or application programs to obtain data from, or place data
into, a storage device.
4. The process of limiting access to the resources of an AIS to authorized users, programs, processes, or other
systems.
5. That function performed by the resource controller that allocates system resources to satisfy user requests.
Notice that this definition depends on several other technical terms from Federal Standard 1037C.
Public policy
In public policy, access control to restrict access to systems ("authorization") or to track or monitor behavior within
systems ("accountability") is an implementation feature of using trusted systems for security or social control.
See also
• Access badge
• Fortification
• Prison
• Access control vestibule
• Htaccess
• Proximity card
• ID Card
• Razor wire
• Alarm
• IP Controller
• Safe
• Alarm management
• IP reader
• Safe-cracking
• Bank vault
• Key cards
• Security
• Biometrics
• Key management
• Security engineering
• Burglar alarm
• Locksmithing
• Security lighting
• Card reader
• Lock picking
• Security management
• Castle
• Logical security
• Security policy
• Security companies
• Magnetic stripe card
• Smart card
• Common Access Card
• Optical turnstile
• Swipe card
• Computer security
• Photo identification
• Wiegand effect
• Credential
• Physical key management
• XACML
• Door security
• Physical Security Professional
• Electronic lock
References
• U.S. Federal Standard 1037C
• U.S. MIL-STD-188
• U.S. National Information Systems Security Glossary
• Harris, Shon, All-in-one CISSP Exam Guide, Third Edition, McGraw Hill Osborne, Emeryville, California, 2005.
[1] http://www.techexams.net/technotes/securityplus/mac_dac_rbac.shtml
[2] http://www.its.bldrdoc.gov/fs-1037/other/a.pdf
External links
• eXtensible Access Control Markup Language (http://xml.coverpages.org/xacml.html) – An OASIS standard
language/model for access control. Also XACML.
• Access Control Authentication article on AuthenticationWorld.com
(http://www.authenticationworld.com/Access-Control-Authentication/)
• Entrance Technology Options article at SP&T News
(http://www.sptnews.ca/index.php/20070907739/Articles/Entrance-Technology-Options.html)
• Novel chip-in access control technology used in Austrian ski resort
(http://www.sourcesecurity.com/news/articles/co-1040-ga-co-3879-ga.2311.html)
• Beyond Authentication, Authorization and Accounting (http://ism3.wordpress.com/2009/04/23/beyondaaa/)
Access control list
An access control list (ACL), with respect to a computer file system, is a list of permissions attached to an object.
An ACL specifies which users or system processes are granted access to objects, as well as what operations are
allowed on given objects. Each entry in a typical ACL specifies a subject and an operation. For instance, if a file has
an ACL that contains (Alice, delete), this would give Alice permission to delete the file.
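The (Alice, delete) entry from the example above can be written out as a literal check. The file name and the second ACL entry are illustrative additions.

```python
# The (Alice, delete) ACL entry from the text, as a literal membership
# check. The file name and Bob's entry are made up for illustration.

FILE_ACL = {"report.txt": [("Alice", "delete"), ("Bob", "read")]}

def is_authorized(filename, subject, operation):
    """Return True if (subject, operation) appears in the file's ACL."""
    return (subject, operation) in FILE_ACL.get(filename, [])

print(is_authorized("report.txt", "Alice", "delete"))  # True
print(is_authorized("report.txt", "Alice", "read"))    # False
```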
ACL-based security models
When a subject requests an operation on an object in an ACL-based security model the operating system first checks
the ACL for an applicable entry to decide whether the requested operation is authorized. A key issue in the definition
of any ACL-based security model is determining how access control lists are edited, namely which users and
processes are granted ACL-modification access. ACL models may be applied to collections of objects as well as to
individual entities within the system hierarchy.
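The check described above can be sketched as a lookup in a per-object list of (subject, operation) entries. The data layout here is a simplification for illustration only, not how any particular operating system actually stores its ACLs:

```python
# Hypothetical ACL: each object maps to a set of (subject, operation) entries.
acl = {
    "report.txt": {("alice", "read"), ("alice", "delete"), ("bob", "read")},
}

def is_authorized(subject, operation, obj):
    """Return True if the object's ACL contains an applicable entry."""
    return (subject, operation) in acl.get(obj, set())

assert is_authorized("alice", "delete", "report.txt")    # (Alice, delete) entry exists
assert not is_authorized("bob", "delete", "report.txt")  # no such entry
```

In a real system the interesting design question is who may call the equivalent of an `acl[obj].add(...)` operation, which is exactly the ACL-modification issue raised above.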
Filesystem ACLs
A Filesystem ACL is a data structure (usually a table) containing entries that specify individual user or group rights
to specific system objects such as programs, processes, or files. These entries are known as access control entries
(ACEs) in the Microsoft Windows NT, OpenVMS, Unix-like, and Mac OS X operating systems. Each accessible
object contains an identifier to its ACL. The privileges or permissions determine specific access rights, such as
whether a user can read from, write to, or execute an object. In some implementations an ACE can control whether
or not a user, or group of users, may alter the ACL on an object.
Most Unix and Unix-like operating systems (e.g. Linux,[1] BSD, or Solaris) support so-called POSIX.1e ACLs, based on an early POSIX draft that was abandoned. Many of them, for example AIX, Mac OS X beginning with version 10.4 ("Tiger"), or Solaris with the ZFS filesystem,[2] support NFSv4 ACLs, which are part of the NFSv4 standard. FreeBSD 9-CURRENT supports NFSv4 ACLs on both UFS and ZFS file systems; full support is expected to be backported to version 8.1.[3] There is an experimental implementation of NFSv4 ACLs for Linux.[4]
Networking ACLs
On some types of proprietary computer hardware, an access control list refers to rules that are applied to port numbers or network daemon names available on a host or other layer-3 device, each with a list of hosts and/or networks permitted to use the service. Both individual servers and routers can have network ACLs. Access control lists can generally be configured to control both inbound and outbound traffic, and in this context they are similar to firewalls.
See also
• Standard Access Control List, Cisco IOS configuration rules
• Role-based access control
• Confused deputy problem
• Capability-based security
• Cacls
External links
• FreeBSD Handbook: File System Access Control Lists [5]
• SELinux and grsecurity: A Case Study Comparing Linux Security Kernel Enhancements [6]
• Susan Hinrichs. "Operating System Security" [7].
• John Mitchell. "Access Control and Operating System Security" [8].
• Michael Clarkson. "Access Control" [9].
Microsoft
• MSDN Library: Access Control Lists [10]
• Microsoft Technet: How Permissions Work [11]
This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed
under the GFDL.
References
[1] Support for ACL and EA introduced in RHEL-3 in October 2003 (http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/release-notes/as-x86/) (the patch existed before, but has been officially in the kernel since 2.6, released in December 2003)
[2] "8. Using ACLs to Protect ZFS Files (Solaris ZFS Administration Guide) - Sun Microsystems" (http://docs.sun.com/app/docs/doc/819-5461/ftyxi?a=view). Docs.sun.com. 2009-10-01. Retrieved 2010-05-04.
[3] "NFSv4_ACLs - FreeBSD Wiki" (http://wiki.freebsd.org/NFSv4_ACLs). Wiki.freebsd.org. 2010-04-20. Retrieved 2010-05-04.
[4] "Native NFSv4 ACLs on Linux" (http://www.suse.de/~agruen/nfs4acl/). Suse.de. Retrieved 2010-05-04.
[5] http://www.freebsd.org/doc/en/books/handbook/fs-acl.html
[6] http://www.cs.virginia.edu/~jcg8f/GrsecuritySELinuxCaseStudy.pdf
[7] http://www.cs.uiuc.edu/class/fa05/cs498sh/seclab/slides/OSNotes.ppt
[8] http://crypto.stanford.edu/cs155old/cs155-spring03/lecture9.pdf
[9] http://www.cs.cornell.edu/courses/cs513/2007fa/NL.accessControl.html
[10] http://msdn.microsoft.com/en-us/library/aa374872(VS.85).aspx
[11] http://technet.microsoft.com/en-us/library/cc783530(WS.10).aspx
Password
A password is a secret word or string of characters that is used for authentication, to prove identity or gain access to
a resource (example: an access code is a type of password). The password should be kept secret from those not
allowed access.
The use of passwords is known to be ancient. Sentries would challenge those wishing to enter an area or approaching
it to supply a password or watchword. Sentries would only allow a person or group to pass if they knew the
password. In modern times, user names and passwords are commonly used by people during a log in process that
controls access to protected computer operating systems, mobile phones, cable TV decoders, automated teller
machines (ATMs), etc. A typical computer user may require passwords for many purposes: logging in to computer
accounts, retrieving e-mail from servers, accessing programs, databases, networks, web sites, and even reading the
morning newspaper online.
Despite the name, there is no need for passwords to be actual words; indeed passwords which are not actual words
may be harder to guess, a desirable property. Some passwords are formed from multiple words and may more
accurately be called a passphrase. The term passcode is sometimes used when the secret information is purely
numeric, such as the personal identification number (PIN) commonly used for ATM access. Passwords are generally
short enough to be easily memorized and typed.
For the purposes of more compellingly authenticating the identity of one computing device to another, passwords
have significant disadvantages (they may be stolen, spoofed, forgotten, etc.) over authentications systems relying on
cryptographic protocols, which are more difficult to circumvent.
Easy to remember, hard to guess
Generally, the easier a password is for its owner to remember, the easier it is for an attacker to guess.[1]
Passwords which are difficult to remember will reduce the security of a system because (a) users might need to write
down or electronically store the password, (b) users will need frequent password resets and (c) users are more likely
to re-use the same password. Similarly, the more stringent requirements for password strength, e.g. "have a mix of
uppercase and lowercase letters and digits" or "change it monthly," the greater the degree to which users will subvert
the system.[2]
In The Memorability and Security of Passwords,[3] Jeff Yan et al. examine the effect of advice given to users about a
good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each
word, are just as memorable as naively selected passwords, and just as hard to crack as randomly generated
passwords. Combining two unrelated words is another good method. Having a personally designed "algorithm" for
generating obscure passwords is another good method.
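The first-letter technique that Yan et al. describe can be illustrated with a short sketch; the phrase and the resulting password below are examples only:

```python
def phrase_to_password(phrase):
    # Take the first letter of each word in a memorable phrase, e.g.
    # "The quick brown fox jumps over the lazy dog" -> "Tqbfjotld"
    return "".join(word[0] for word in phrase.split())

assert phrase_to_password("The quick brown fox jumps over the lazy dog") == "Tqbfjotld"
```

The point of the study is that such a password is as memorable as a naively chosen one, yet does not appear in any dictionary of words.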
However, asking users to remember a password consisting of a “mix of uppercase and lowercase characters” is
similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g.
only 128 times harder to crack for 7-letter passwords, less if the user simply capitalises the first letter). Asking users
to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' --> '3' and 'I' --> '1',
substitutions which are well known to attackers. Similarly typing the password one keyboard row higher is a
common trick known to attackers.
Factors in the security of a password system
The security of a password-protected system depends on several factors. The overall system must, of course, be
designed for sound security, with protection against computer viruses, man-in-the-middle attacks and the like.
Physical security issues are also a concern, from deterring shoulder surfing to more sophisticated physical threats
such as video cameras and keyboard sniffers. And, of course, passwords should be chosen so that they are hard for
an attacker to guess and hard for an attacker to discover using any (and all) of the available automatic attack
schemes. See password strength, computer security, and computer insecurity.
Nowadays it is common practice for computer systems to hide passwords as they are typed. The purpose of this measure is to prevent bystanders from reading the password. However, some argue that this practice may lead to mistakes and stress, encouraging users to choose weak passwords. As an alternative, users should have the option to show or hide passwords as they type them.[4]
Effective access control provisions may force extreme measures on criminals seeking to acquire a password or biometric token.[5] Less extreme measures include extortion, rubber-hose cryptanalysis, and side-channel attacks.
The following are some specific password management issues that must be considered in thinking about, choosing, and handling a password.
Rate at which an attacker can try guessed passwords
The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system
security. Some systems impose a time-out of several seconds after a small number (e.g., three) of failed password
entry attempts. In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple
passwords, if they have been well chosen and are not easily guessed.[6]
Many systems store or transmit a cryptographic hash of the password in a manner that makes the hash value
accessible to an attacker. When this is done, and it is very common, an attacker can work off-line, rapidly testing
candidate passwords against the true password's hash value. Passwords that are used to generate cryptographic keys
(e.g., for disk encryption or Wi-Fi security) can also be subjected to high rate guessing. Lists of common passwords
are widely available and can make password attacks very efficient. (See Password cracking.) Security in such
situations depends on using passwords or passphrases of adequate complexity, making such an attack
computationally infeasible for the attacker. Some systems, such as PGP and Wi-Fi WPA, apply a computation-intensive hash to the password to slow such attacks. See key strengthening.
Form of stored passwords
Some computer systems store user passwords as cleartext, against which to compare user log on attempts. If an
attacker gains access to such an internal password store, all passwords—and so all user accounts—will be
compromised. If some users employ the same password for accounts on different systems, those will be
compromised as well.
More secure systems store each password in a cryptographically protected form, so access to the actual password
will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts
remains possible.
A common approach stores only a "hashed" form of the plaintext password. When a user types in a password on
such a system, the password handling software runs through a cryptographic hash algorithm, and if the hash value
generated from the user's entry matches the hash stored in the password database, the user is permitted access. The
hash value is created by applying a hash function (for maximum resistance to attack this should be a cryptographic
hash function) to a string consisting of the submitted password and, usually, another value known as a salt. The salt
prevents attackers from easily building a list of hash values for common passwords. MD5 and SHA1 are frequently
used cryptographic hash functions.
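A minimal sketch of salted hashing and verification follows, using SHA-256 purely for illustration; a single fast hash like this is still vulnerable to high-rate offline guessing, as discussed below:

```python
import hashlib
import hmac
import os

def hash_password(password):
    # Store only the random salt and the digest, never the plaintext password.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.sha256(salt + password.encode()).digest()
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("Hunter2", salt, digest)
```

Because the salt differs per user, an attacker cannot precompute one table of digests for common passwords and reuse it against every account.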
A modified version of the DES algorithm was used for this purpose in early Unix systems. The UNIX DES function was iterated to make the hash computation slower, further frustrating automated guessing attacks, and it used the password candidate as a key to encrypt a fixed value, thus blocking yet another attack on the password-shrouding system. More recent Unix or Unix-like systems (e.g., Linux or the various BSD systems) use what most believe to be still more effective protective mechanisms based on MD5, SHA1, Blowfish, Twofish, or any of several other algorithms to prevent or frustrate attacks on stored password files.[7]
If the hash function is well designed, it will be computationally infeasible to reverse it to directly find a plaintext
password. However, many systems do not protect their hashed passwords adequately, and if an attacker can gain
access to the hashed values he can use widely available tools which compare the encrypted outcome of every word
from some list, such as a dictionary (many are available on the Internet). Large lists of possible passwords in many
languages are widely available on the Internet, as are software programs to try common variations. The existence of
these dictionary attack tools constrains user password choices which are intended to resist easy attacks; they must not
be findable on such lists. Obviously, words on such lists should be avoided as passwords. Use of a key stretching
hash such as PBKDF2 is designed to reduce this risk.
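Key stretching with PBKDF2, mentioned above, can be sketched as follows; the iteration count shown is an arbitrary illustrative value:

```python
import hashlib
import os

def stretched_hash(password, salt, iterations=100_000):
    # PBKDF2 applies the underlying hash `iterations` times, so every
    # dictionary guess costs the attacker the same large amount of work.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
stored = stretched_hash("correct horse battery staple", salt)
assert stretched_hash("correct horse battery staple", salt) == stored
assert stretched_hash("wrong guess", salt) != stored
```

Raising the iteration count multiplies the attacker's per-guess cost while adding only a fixed, tolerable delay to a legitimate login.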
A poorly designed hash function can make attacks feasible even if a strong password is chosen. See LM hash for a very widely deployed, and deplorably insecure, example.[8]
Methods of verifying a password over a network
Various methods have been used to verify submitted passwords in a network setting:
Simple transmission of the password
Passwords are vulnerable to interception (i.e., "snooping") while being transmitted to the authenticating machine or
person. If the password is carried as electrical signals on unsecured physical wiring between the user access point
and the central system controlling the password database, it is subject to snooping by wiretapping methods. If it is
carried as packetized data over the Internet, anyone able to watch the packets containing the logon information can
snoop with a very low probability of detection.
Email is sometimes used to distribute passwords. Since most email is sent as cleartext, it is available without effort
during transport to any eavesdropper. Further, the email will be stored on at least two computers as cleartext—the
sender's and the recipient's. If it passes through intermediate systems during its travels, it will probably be stored on
those as well, at least for some time. Attempts to delete an email from all these locations may, or may not, succeed; backups, history files, or caches on any of several systems may still contain the email. Indeed, merely identifying every one of those systems may be difficult. Emailed passwords are generally an insecure method of distribution.
An example of cleartext transmission of passwords was the original Wikipedia website. When you logged into your Wikipedia account, your username and password were sent from your computer's browser through the Internet as
cleartext. In principle, anyone could read them in transit and thereafter log into your account as you; Wikipedia's
servers have no way of distinguishing such an attacker from you. In practice, an unknowably large number of people could do
so as well (e.g., employees at your Internet Service Provider, at any of the systems through which the traffic passes,
etc.). More recently, Wikipedia has offered a secure login option, which, like many e-commerce sites, uses the SSL /
(TLS) cryptographically based protocol to eliminate the cleartext transmission. But, because anyone can gain access
to Wikipedia (without logging in at all), and then edit essentially all articles, it can be argued that there is little need
to encrypt these transmissions as there's little being protected. Other websites (e.g., banks and financial institutions)
have quite different security requirements, and cleartext transmission of anything is clearly insecure in those
contexts.
Using client-side encryption will only protect transmission from the mail handling system server to the client
machine. Previous or subsequent relays of the email will not be protected and the email will probably be stored on
multiple computers, certainly on the originating and receiving computers, most often in cleartext.
Transmission through encrypted channels
The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, using
cryptographic protection. The most widely used is the Transport Layer Security (TLS, previously called SSL) feature
built into most current Internet browsers. Most browsers alert the user of a TLS/SSL protected exchange with a
server by displaying a closed lock icon, or some other sign, when TLS is in use. There are several other techniques in
use; see cryptography.
Hash-based challenge-response methods
Unfortunately, there is a conflict between stored hashed-passwords and hash-based challenge-response
authentication; the latter requires a client to prove to a server that he knows what the shared secret (i.e., password) is,
and to do this, the server must be able to obtain the shared secret from its stored form. On many systems (including
Unix-type systems) doing remote authentication, the shared secret usually becomes the hashed form and has the
serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as a shared
secret, an attacker does not need the original password to authenticate remotely; he only needs the hash.
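The general shape of a hash-based challenge-response exchange can be sketched as follows. This illustrates the idea only, not any specific protocol, and the secret value is hypothetical; note that if the stored secret is the password's hash, possessing that hash is, as the text says, enough to authenticate:

```python
import hashlib
import hmac
import os

# The shared secret held by both sides: the password itself, or (on many
# systems) its stored hash, which then becomes password-equivalent.
shared_secret = b"s3cret-or-its-hash"

def respond(secret, challenge):
    # The client returns a keyed hash over the fresh challenge instead of
    # transmitting the secret itself over the network.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)                       # server sends a fresh nonce
client_response = respond(shared_secret, challenge)
server_expected = respond(shared_secret, challenge)
assert hmac.compare_digest(client_response, server_expected)
```

The fresh random challenge prevents simple replay of an old response, but an eavesdropper who obtains the stored secret can still answer any future challenge.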
Zero-knowledge password proofs
Rather than transmitting a password, or transmitting the hash of the password, password-authenticated key
agreement systems can perform a zero-knowledge password proof, which proves knowledge of the password without
exposing it.
Moving a step further, augmented systems for password-authenticated key agreement (e.g., AMP, B-SPEKE,
PAK-Z, SRP-6) avoid both the conflict and limitation of hash-based methods. An augmented system allows a client
to prove knowledge of the password to a server, where the server knows only a (not exactly) hashed password, and
where the unhashed password is required to gain access.
Procedures for changing passwords
Usually, a system must provide a way to change a password, either because a user believes the current password has
been (or might have been) compromised, or as a precautionary measure. If a new password is passed to the system in
unencrypted form, security can be lost (e.g., via wiretapping) even before the new password can even be installed in
the password database. And, of course, if the new password is given to a compromised employee, little is gained.
Some web sites include the user-selected password in an unencrypted confirmation e-mail message, with the obvious
increased vulnerability.
Identity management systems are increasingly used to automate issuance of replacements for lost passwords, a
feature called self service password reset. The user's identity is verified by asking questions and comparing the
answers to ones previously stored (i.e., when the account was opened). Typical questions include: "Where were you
born?," "What is your favorite movie?" or "What is the name of your pet?" In many cases the answers to these
questions can be relatively easily guessed by an attacker, determined by low effort research, or obtained through
social engineering, and so this is less than fully satisfactory as a verification technique. While many users have been
trained never to reveal a password, few consider the name of their pet or favorite movie to require similar care.
Password longevity
"Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g.,
quarterly, monthly or even more often), with the intent that a stolen password will become unusable more or less
quickly. Such policies usually provoke user protest and foot-dragging at best and hostility at worst. Users may
develop simple variation patterns to keep their passwords memorable. In any case, the security benefits are distinctly limited, and possibly not worthwhile, because attackers often exploit a password as soon as it is compromised, which will probably
be some time before change is required. In many cases, particularly with administrative or "root" accounts, once an
attacker has gained access, he can make alterations to the operating system that will allow him future access even
after the initial password he used expires. (see rootkit). Implementing such a policy requires careful consideration of
the relevant human factors.
Number of users per password
Sometimes a single password controls access to a device, for example, for a network router, or password-protected
mobile phone. However, in the case of a computer system, a password is usually stored for each user account, thus
making all access traceable (save, of course, in the case of users sharing passwords). A would-be user on most
systems must supply a username as well as a password, almost always at account set up time, and periodically
thereafter. If the user supplies a password matching the one stored for the supplied username, he or she is permitted
further access into the computer system. This is also the case for a cash machine, except that the 'user name' is
typically the account number stored on the bank customer's card, and the PIN is usually quite short (4 to 6 digits).
Allotting separate passwords to each user of a system is preferable to having a single password shared by legitimate
users of the system, certainly from a security viewpoint. This is partly because users are more willing to tell another
person (who may not be authorized) a shared password than one exclusively for their use. Single passwords are also
much less convenient to change because many people need to be told at the same time, and they make removal of a
particular user's access more difficult, as for instance on graduation or resignation. Per-user passwords are also
essential if users are to be held accountable for their activities, such as making financial transactions or viewing
medical records.
Design of the protected software
Common techniques used to improve the security of software systems protected by a password include:
• Not echoing the password on the display screen as it is being entered or obscuring it as it is typed by using
asterisks (*) or bullets (•).
• Allowing passwords of adequate length (some legacy operating systems, including early versions of Unix and Windows, limited passwords to an 8-character maximum).
• Requiring users to re-enter their password after a period of inactivity (a semi log-off policy).
• Enforcing a password policy to increase password strength and security.
• Requiring periodic password changes.
• Assigning randomly chosen passwords.
• Requiring minimum or maximum password lengths.
• Requiring characters from various character classes in a password, for example "must have at least one uppercase and at least one lowercase letter". However, all-lowercase passwords are more secure per keystroke than mixed-capitalization passwords.[9]
• Providing an alternative to keyboard entry (e.g., spoken passwords, or biometric passwords).
• Using encrypted tunnels or password-authenticated key agreement to prevent access to transmitted passwords via
network attacks
• Limiting the number of allowed failures within a given time period (to prevent repeated password guessing).
After the limit is reached, further attempts will fail (including correct password attempts) until the beginning of
the next time period. However, this is vulnerable to a form of denial of service attack.
• Introducing a delay between password submission attempts to slow down automated password guessing
programs.
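The failure-limiting idea from the list above can be sketched minimally as follows; the thresholds and the in-memory state are illustrative choices, not a recommendation:

```python
import time

MAX_FAILURES = 3        # illustrative threshold
LOCKOUT_SECONDS = 30    # illustrative lockout window
failures = {}           # username -> (failure count, time of last failure)

def check_login(user, password_is_correct):
    count, last = failures.get(user, (0, 0.0))
    if count >= MAX_FAILURES and time.time() - last < LOCKOUT_SECONDS:
        return False                      # locked out; even a correct password fails
    if password_is_correct:
        failures.pop(user, None)          # reset the counter on success
        return True
    failures[user] = (count + 1, time.time())
    return False

for _ in range(3):
    check_login("alice", False)           # three wrong guesses
assert not check_login("alice", True)     # correct password rejected during lockout
```

As the text notes, this very property, that correct attempts also fail during the lockout, is what an attacker can abuse for denial of service by deliberately submitting wrong passwords for a victim's account.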
Some of the more stringent policy enforcement measures can pose a risk of alienating users, possibly decreasing
security as a result.
Password cracking
Attempting to crack passwords by trying as many possibilities as time and money permit is a brute force attack. A
related method, rather more efficient in most cases, is a dictionary attack. In a dictionary attack, all words in one or
more dictionaries are tested. Lists of common passwords are also typically tested.
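A dictionary attack against an unsalted hash can be sketched as follows; SHA-256 and the word list are illustrative, and real cracking tools such as John the Ripper are far more sophisticated:

```python
import hashlib

def dictionary_attack(target_digest, wordlist):
    # Hash every candidate word and compare with the stolen digest.
    for word in wordlist:
        if hashlib.sha256(word.encode()).digest() == target_digest:
            return word
    return None

stolen = hashlib.sha256(b"password1").digest()
assert dictionary_attack(stolen, ["123456", "letmein", "password1"]) == "password1"
assert dictionary_attack(stolen, ["qwerty"]) is None
```

The attack succeeds exactly when the user's password appears in the list, which is why common-password lists make such attacks so efficient.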
Password strength is the likelihood that a password cannot be guessed or discovered, and varies with the attack
algorithm used. Passwords easily discovered are termed weak or vulnerable; passwords very difficult or impossible
to discover are considered strong. There are several programs available for password attack (or even auditing and
recovery by systems personnel) such as L0phtCrack, John the Ripper, and Cain; some of which use password design
vulnerabilities (as found in the Microsoft LANManager system) to increase efficiency. These programs are
sometimes used by system administrators to detect weak passwords proposed by users.
Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords
are readily guessed automatically. For example, Columbia University found 22% of user passwords could be
recovered with little effort.[10] According to Bruce Schneier, examining data from a 2006 phishing attack, 55% of
MySpace passwords would be crackable in 8 hours using a commercially available Password Recovery Toolkit
capable of testing 200,000 passwords per second in 2006.[11] He also reported that the single most common
password was password1, confirming yet again the general lack of informed care in choosing passwords among
users. (He nevertheless maintained, based on these data, that the general quality of passwords has improved over the
years—for example, average length was up to eight characters from under seven in previous surveys, and less than
4% were dictionary words.[12] )
1998 incident
On July 16, 1998, CERT reported an incident[13] where an intruder had collected 186,126 account names with their
respective encrypted passwords. At the time of the discovery, the intruder had guessed 47,642 (25.6%) of these
passwords using a password-cracking tool. The passwords appeared to have been collected from several other sites, some of which were identified, but not all. This is still the largest reported incident to date.
Alternatives to passwords for access control
The numerous ways in which permanent or semi-permanent passwords can be compromised has prompted the
development of other techniques. Unfortunately, some are inadequate in practice, and in any case few have become
universally available for users seeking a more secure alternative.
• Single-use passwords. Having passwords which are only valid once makes many potential attacks ineffective.
Most users find single use passwords extremely inconvenient. They have, however, been widely implemented in
personal online banking, where they are known as Transaction Authentication Numbers (TANs). As most home
users only perform a small number of transactions each week, the single use issue has not led to intolerable
customer dissatisfaction in this case.
• Time-synchronized one-time passwords are similar in some ways to single-use passwords, but the value to be
entered is displayed on a small (generally pocketable) item and changes every minute or so.
• PassWindow one-time passwords are used as single-use passwords, but the dynamic characters to be entered are visible only when a user superimposes a unique printed visual key over a server-generated challenge image shown on the user's screen.
• Access controls based on public key cryptography, e.g. ssh. The necessary keys are usually too large to memorize (but see proposal Passmaze [14]) and must be stored on a local computer, security token or portable memory device, such as a USB flash drive or even a floppy disk.
• Biometric methods promise authentication based on unalterable personal characteristics, but currently (2008) have high error rates and require additional hardware to scan, for example, fingerprints, irises, etc. They have proven easy to spoof in some famous incidents testing commercially available systems, for example, the gummy fingerprint spoof demonstration,[15] and, because these characteristics are unalterable, they cannot be changed if compromised; this is a highly important consideration in access control, as a compromised access token is necessarily insecure.
• Single sign-on technology is claimed to eliminate the need for multiple passwords. Such schemes do not relieve users and administrators from choosing reasonable single passwords, nor system designers or administrators from ensuring that private access control information passed among systems enabling single sign-on is secure against attack. As yet, no satisfactory standard has been developed.
• Envaulting technology is a password-free way to secure data on removable storage devices such as USB flash drives. Instead of user passwords, access control is based on the user's access to a network resource.
• Non-text-based passwords, such as graphical passwords or mouse-movement based passwords.[16] Another
system requires users to select a series of faces as a password, utilizing the human brain's ability to recall faces
easily.[17] So far, these are promising, but are not widely used. Studies on this subject have been made to
determine its usability in the real world.
• Graphical passwords are an alternative means of authentication for log-in intended to be used in place of
conventional password; they use images, graphics or colours instead of letters, digits or special characters. In
some implementations the user is required to pick from a series of images in the correct sequence in order to gain
access.[18] While some believe that graphical passwords would be harder to crack, others suggest that people will
be just as likely to pick common images or sequences as they are to pick common passwords.
• 2D Key (2-Dimensional Key) [19] is a 2D matrix-like key input method offering the key styles of multiline passphrase, crossword, and ASCII/Unicode art, with optional textual semantic noises, to create large passwords/keys beyond 128 bits. It aims to realize MePKC (Memorizable Public-Key Cryptography) using a fully memorizable private key, built upon current private-key management technologies such as encrypted private keys, split private keys, and roaming private keys.
• Cognitive passwords use question and answer cue/response pairs to verify identity.
Website password systems
Passwords are used on websites to authenticate users and are usually maintained on the Web server, meaning the
browser on a remote system sends a password to the server (by HTTP POST), the server checks the password and
sends back the relevant content (or an access denied message). This process eliminates the possibility of local
reverse engineering as the code used to authenticate the password does not reside on the local machine.
Transmission of the password, via the browser, in plaintext means it can be intercepted along its journey to the
server. Many web authentication systems use SSL to establish an encrypted session between the browser and the
server, which is usually the underlying meaning of claims to have a "secure Web site". This is done automatically by
the browser and increases integrity of the session, assuming neither end has been compromised and that the
SSL/TLS implementations used are high quality ones.
So-called website password and membership management systems often involve the use of Java or JavaScript code
existing on the client side (meaning the visitor's web browser) HTML source code (for example, AuthPro).
Drawbacks to such systems are the relative ease in bypassing or circumventing the protection by switching off
JavaScript and Meta redirects in the browser, thereby gaining access to the protected web page. Others take
advantage of server-side scripting languages such as ASP or PHP to authenticate users on the server before
delivering the source code to the browser. Popular systems such as Sentry Login [20] and Password Sentry [21] take
advantage of technology in which web pages are protected using such scripting-language code snippets placed in
front of the HTML code in the web page source, saved with the appropriate extension on the server, such as .asp or
.php.
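A server-side check of this kind can be sketched in Python. The function names, iteration count, and choice of PBKDF2 below are illustrative assumptions, not the mechanism of any particular website system:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, iterated hash suitable for server-side storage."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the hash from the submitted password and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

# On signup the server stores (salt, digest); on login it verifies the POSTed password.
salt, stored = hash_password("correct horse battery staple")
```

Because only the hash is kept, a stolen password database does not directly reveal the passwords, and the constant-time comparison avoids leaking information through timing.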
History of passwords
Passwords or watchwords have been used since ancient times. Polybius describes the system for distributing
watchwords in the Roman military as follows:
The way in which they secure the passing round of the watchword for the night is as follows: from the tenth
maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a
man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune,
and receiving from him the watchword - that is a wooden tablet with the word inscribed on it - takes his leave,
and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the
next maniple, who in turn passes it to the one next him. All do the same until it reaches the first maniples,
those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes
before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all
the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes
inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is
responsible for the stoppage meets with the punishment he merits.[22]
Passwords in military use evolved to include not just a password, but a password and a counterpassword; for
example in the opening days of the Battle of Normandy, paratroopers of the U.S. 101st Airborne Division used a
password - "thunder" - which was presented as a challenge, and answered with the correct response - "flash". The
challenge and response were changed periodically. American paratroopers also famously used a device known as a
"cricket" on D-Day in place of a password system as a temporarily unique method of identification; one metallic
click given by the device in lieu of a password was to be met by two clicks in reply.[23]
Passwords have been used with computers since the earliest days of computing. MIT's CTSS, one of the first time
sharing systems, was introduced in 1961. It had a LOGIN command that requested a user password. "After typing
PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password
with privacy."[24] Robert Morris invented the idea of storing login passwords in a hashed form as part of the Unix
operating system. His algorithm, known as crypt(3), used a 12-bit salt and invoked a modified form of the DES
algorithm 25 times to reduce the risk of pre-computed dictionary attacks.
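The effect of the salt can be sketched as follows. This illustrative Python uses SHA-256 in place of the original DES-based construction, and the salts and password are made up:

```python
import hashlib

def salted_hash(password, salt, rounds=25):
    # Iterating the hash (crypt(3) invoked modified DES 25 times) slows brute force;
    # SHA-256 stands in here for the original DES-based construction.
    digest = (salt + password).encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

# The same password stored under two different salts yields unrelated hashes,
# so one precomputed dictionary cannot cover all accounts at once.
h1 = salted_hash("hunter2", salt="ab")
h2 = salted_hash("hunter2", salt="cd")
```

An attacker must now build a separate dictionary per salt value, multiplying the precomputation cost by the number of possible salts.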
See also
• Access Code
• Authentication
• CAPTCHA
• Diceware
• Keyfile
• Passphrase
• Password manager
• Password policy
• Password psychology
• Password strength
• Password length parameter
• Password cracking
• Password fatigue
• Password-authenticated key agreement
• Password notification e-mail
• Password synchronization
• Pre-shared key
• Random password generator
• Rainbow table
• Self-service password reset
• Shibboleth
External links
• Large collection of statistics about passwords [25]
• Graphical Passwords: A Survey [26]
• PassClicks [27]
• PassImages [28]
• Links for password-based cryptography [29]
• Procedural Advice for Organisations and Administrators [30]
• Memorability and Security of Passwords [31] - Cambridge University Computer Laboratory study of password
memorability vs. security.
References
[1] Vance, Ashlee (January 20, 2010). "If Your Password Is 123456, Just Make It HackMe" (http://www.nytimes.com/2010/01/21/technology/21password.html). The New York Times.
[2] Fred Cohen and Associates (http://all.net/journal/netsec/1997-09.html)
[3] The Memorability and Security of Passwords (http://homepages.cs.ncl.ac.uk/jeff.yan/jyan_ieee_pwd.pdf)
[4] Lyquix Blog: Do We Need to Hide Passwords? (http://www.lyquix.com/blog/92-do-we-need-to-hide-passwords)
[5] news.bbc.co.uk: Malaysia car thieves steal finger (http://news.bbc.co.uk/2/hi/asia-pacific/4396831.stm)
[6] Top ten passwords used in the United Kingdom (http://www.modernlifeisrubbish.co.uk/top-10-most-common-passwords.asp)
[7] Password Protection for Modern Operating Systems (http://www.usenix.org/publications/login/2004-06/pdfs/alexander.pdf)
[8] http://support.microsoft.com/default.aspx?scid=KB;EN-US;q299656
[9] "To Capitalize or Not to Capitalize?" (http://world.std.com/~reinhold/dicewarefaq.html#capitalize)
[10] Password (http://www1.cs.columbia.edu/~crf/howto/password-howto.html)
[11] Schneier, Real-World Passwords (http://www.schneier.com/blog/archives/2006/12/realworld_passw.html)
[12] MySpace Passwords Aren't So Dumb (http://www.wired.com/politics/security/commentary/securitymatters/2006/12/72300)
[13] "CERT IN-98.03" (http://www.cert.org/incident_notes/IN-98.03.html). Retrieved 2009-09-09.
[14] http://eprint.iacr.org/2005/434
[15] T. Matsumoto, H. Matsumoto, K. Yamada, and S. Hoshino, "Impact of Artificial 'Gummy' Fingers on Fingerprint Systems". Proc. SPIE, vol. 4677, Optical Security and Counterfeit Deterrence Techniques IV, or itu.int/itudoc/itu-t/workshop/security/resent/s5p4.pdf, p. 356.
[16] http://waelchatila.com/2005/09/18/1127075317148.html
[17] http://mcpmag.com/reviews/products/article.asp?EditorialsID=486
[18] http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci1001829,00.html
[19] http://www.xpreeli.com/doc/manual_2DKey_2.0.pdf
[20] http://www.sentrylogin.com
[21] http://www.monster-submit.com/sentry/
[22] Polybius on the Roman Military (http://ancienthistory.about.com/library/bl/bl_text_polybius6.htm)
[23] Bando, Mark. Screaming Eagles: Tales of the 101st Airborne Division in World War II.
[24] CTSS Programmers Guide, 2nd Ed., MIT Press, 1965.
[25] http://www.passwordresearch.com/stats/statindex.html
[26] http://www.acsac.org/2005/abstracts/89.html
[27] http://labs.mininova.org/passclicks/
[28] http://www.network-research-group.org/default.asp?page=publications
[29] http://www.jablon.org/passwordlinks.html
[30] http://www.emiic.net/services/guides/Passwords%20Guide.pdf
[31] http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/tr500.pdf
Hacker (computer security)
In common usage, a hacker is a person who breaks into computers and computer networks, either for profit or
motivated by the challenge.[1] The subculture that has evolved around hackers is often referred to as the computer
underground but is now an open community.[2]
Other uses of the word hacker exist that are not related to computer security (computer programmer and home
computer hobbyists), but these are rarely used by the mainstream media because of the common stereotype that is in
TV and movies. Some argue that the people now called hackers are not true hackers: before the media applied the
word to those who break into computers, a hacker community already existed. This group was a community of people
with a deep interest in computer programming, often sharing, without restriction, the source code for the software
they wrote. These people now refer to cyber-criminal hackers as "crackers".[3]
History
In today's society, understanding the term hacker is complicated because it has many different definitions. The term
can be traced back to the Massachusetts Institute of Technology (MIT). MIT was the first institution to offer a
course in computer programming and computer science, and it was there, in 1960, that a group of MIT students taking
a lab on artificial intelligence first coined the word. These students called themselves hackers because they were
able to take programs and have them perform actions not intended for those programs. “The term was developed on the
basis of a practical joke and feeling of excitement because the team member would “hack away” at the keyboard
hours at a time.” (Moore R., 2006).[4]
Hacking developed alongside "phone phreaking", a term referring to the exploration of the phone network without
authorization, and there has often been overlap in both technology and participants. The first recorded hack
was accomplished by Joe Engressia, also known as The Whistler, who is regarded as the grandfather of
phreaking: he could whistle a tone into a phone with perfect pitch and thereby make free calls.[5] Bruce
Sterling traces part of the roots of the computer underground to the Yippies, a 1960s counterculture movement which
published the Technological Assistance Program (TAP) newsletter.[6] Other sources of early-1970s hacker culture
can be traced to more beneficial forms of hacking, including the MIT labs and the Homebrew Computer Club, which later
resulted in such things as early personal computers and the open source movement.
Artifacts and customs
The computer underground[1] is heavily dependent on technology. It has produced its own slang and various forms of
unusual alphabet use, for example 1337speak. Writing programs and performing other activities in support of its
views is referred to as hacktivism. Some go as far as seeing illegal cracking as ethically justified for this goal; a
common form is website defacement. The computer underground is frequently compared to the Wild West.[7] It is
common among hackers to use aliases to conceal their identities rather than reveal their real names.
Hacker groups and conventions
The computer underground is supported by regular real-world gatherings called hacker conventions or "hacker
cons". These draw many people every year, including SummerCon (summer), DEF CON, HoHoCon (Christmas),
ShmooCon (February), Black Hat, Hacker Halted, and H.O.P.E. In the early 1980s hacker groups became popular;
they provided access to information and resources, and a place to learn from other members. Hackers could also gain
credibility by being affiliated with an elite group.[8]
Hacker attitudes
Several subgroups of the computer underground with different attitudes and aims use different terms to demarcate
themselves from each other, or try to exclude some specific group with which they do not agree. Eric S. Raymond
(author of The New Hacker's Dictionary) advocates that members of the computer underground should be called
crackers. Yet, those people see themselves as hackers and even try to include the views of Raymond in what they see
as one wider hacker culture, a view harshly rejected by Raymond himself. Instead of a hacker/cracker dichotomy,
they give more emphasis to a spectrum of different categories, such as white hat, grey hat, black hat and script
kiddie. In contrast to Raymond, they usually reserve the term cracker for more malicious activity. According to
Clifford (2006), a cracker or cracking is to "gain unauthorized access to a computer in order to commit another
crime such as destroying information contained in that system".[9] These subgroups may also be defined by the legal
status of their activities.[10]
According to Steven Levy, an American journalist who has written several books on computers, technology,
cryptography, and cybersecurity, most hacker motives are reflected in the hacker ethic:
• Access to computers and anything that might teach you something about the way the world works should be
unlimited and total. Always yield to the Hands-On Imperative!
• All information should be free.
• Mistrust authority and promote decentralization.
• Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.
• You can create art and beauty on a computer.
• Computers can change your life for the better.[5]
White hat
A white hat hacker breaks security for non-malicious reasons, for instance testing their own security system. This
classification also includes individuals who perform penetration tests and vulnerability assessments within a
contractual agreement. Often, this type of 'white hat' hacker is called an ethical hacker. The International Council of
Electronic Commerce Consultants, also known as the EC-Council [11] has developed certifications, courseware,
classes, and online training covering the diverse arena of Ethical Hacking.[10]
Grey hat
A grey hat hacker is a combination of a black hat and a white hat hacker. A grey hat hacker may surf the
internet and hack into a computer system for the sole purpose of notifying the administrator that their system has
been hacked, then offer to repair the system for a small fee.[4]
Blue Hat
A blue hat hacker is someone outside computer security consulting firms who is used to bug test a system prior to its
launch, looking for exploits so they can be closed. Microsoft also uses the term BlueHat [12] to represent a series of
security briefing events.[13] [14] [15]
Black hat
A black hat hacker, sometimes called "cracker", is someone who breaks computer security without authorization or
uses technology (usually a computer, phone system or network) for vandalism, credit card fraud, identity theft,
piracy, or other types of illegal activity.[10] [16]
Elite (also known as 1337 or 31337 in leetspeak)
Elite is a term used to describe the most advanced hackers who are said to be on "the cutting edge" of computing and
network technology. These would be individuals in the earliest 2.5 percentile of the technology adoption lifecycle
curve, referred to as "innovators." As script kiddies and noobs utilize and exploit weaknesses in systems discovered
by others, elites are those who bring about the initial discovery.
Script kiddie
A script kiddie is a non-expert who breaks into computer systems by using pre-packaged automated tools written by
others, usually with little understanding of the underlying concept—hence the term script (i.e. a prearranged plan or
set of activities) kiddie (i.e. kid, child—an individual lacking knowledge and experience, immature).[16]
Neophyte
A neophyte or "newbie" is someone who is new to hacking or phreaking and has almost no knowledge or experience
of the workings of technology and hacking.[4]
Hacktivism
A hacktivist is a hacker who utilizes technology to announce a social, ideological, religious, or political message. In
general, most hacktivism involves website defacement or denial-of-service attacks. In more extreme cases,
hacktivism is used as a tool for cyberterrorism.
Common methods
A typical approach in an attack on an Internet-connected system is:
1. Network enumeration: Discovering information about the intended target.
2. Vulnerability analysis: Identifying potential ways of attack.
3. Exploitation: Attempting to compromise the system by employing the vulnerabilities found through the
vulnerability analysis.[17]
In order to do so, there are several recurring tools of the trade and techniques used by computer criminals and
security experts.
Security exploit
A security exploit is a prepared application that takes advantage of a known weakness. Common examples of
security exploits are SQL injection, Cross Site Scripting and Cross Site Request Forgery which abuse security holes
that may result from substandard programming practice. Other exploits can be used through FTP, HTTP, PHP, SSH,
Telnet, and some web pages. These are very common in website/domain hacking.
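SQL injection, the first exploit named above, arises when attacker-controlled input is concatenated into the SQL text; parameterized queries avoid it. A minimal sketch using Python's sqlite3 module, with a made-up users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: the input becomes part of the SQL statement itself.
    query = "SELECT secret FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input strictly as data.
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
# lookup_unsafe(payload) returns every row; lookup_safe(payload) returns none.
```

The payload rewrites the WHERE clause into a condition that is always true, which is exactly the "security hole resulting from substandard programming practice" the text describes.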
Vulnerability scanner
A vulnerability scanner is a tool used to quickly check computers on a network for known weaknesses. Hackers also
commonly use port scanners. These check to see which ports on a specified computer are "open" or available to
access the computer, and sometimes will detect what program or service is listening on that port, and its version
number. (Note that firewalls defend computers from intruders by limiting access to ports/machines both inbound and
outbound, but can still be circumvented.)
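The port check such scanners perform can be sketched with Python's standard socket module; the hosts and ports in the usage comment are placeholders:

```python
import socket

def scan_port(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds (port is open)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # connect_ex returns 0 on success

# Example use (only against hosts you are authorized to scan):
# open_ports = [p for p in (22, 80, 443) if scan_port("192.0.2.1", p)]
```

Real scanners such as Nmap add service and version detection on top of this basic connect test, and use stealthier probe types to evade the firewalls mentioned above.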
Password cracking
Password cracking is the process of recovering passwords from data that has been stored in or transmitted by a
computer system. A common approach is to repeatedly try guesses for the password.
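The repeated-guessing approach amounts to a dictionary attack against a stored hash. A minimal sketch in Python, where the wordlist and the use of unsalted SHA-256 are illustrative assumptions:

```python
import hashlib

def sha256_hex(password):
    return hashlib.sha256(password.encode()).hexdigest()

def dictionary_attack(target_hash, wordlist):
    """Try each candidate word; return the matching password or None."""
    for candidate in wordlist:
        if sha256_hex(candidate) == target_hash:
            return candidate
    return None

target = sha256_hex("letmein")  # stands in for a hash recovered by an attacker
guesses = ["123456", "password", "letmein", "qwerty"]
```

This is why salting and slow, iterated hash functions matter: they force the attacker to redo this loop per account and make each guess expensive.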
Packet sniffer
A packet sniffer is an application that captures data packets, which can be used to capture passwords and other data
in transit over the network.
Spoofing attack
A spoofing attack involves one program, system, or website successfully masquerading as another by falsifying data
and thereby being treated as a trusted system by a user or another program. The purpose of this is usually to fool
programs, systems, or users into revealing confidential information, such as user names and passwords, to the
attacker.
Rootkit
A rootkit is designed to conceal the compromise of a computer's security, and can represent any of a set of programs
which work to subvert control of an operating system from its legitimate operators. Usually, a rootkit will obscure its
installation and attempt to prevent its removal through a subversion of standard system security. Rootkits may
include replacements for system binaries so that it becomes impossible for the legitimate user to detect the presence
of the intruder on the system by looking at process tables.
Social engineering
Social engineering is the art of getting people to reveal sensitive information about a system. This is usually done
by impersonating someone or by convincing people that you have permission to obtain such information.
Trojan horse
A Trojan horse is a program which seems to be doing one thing, but is actually doing another. A trojan horse can be
used to set up a back door in a computer system such that the intruder can gain access later. (The name refers to the
horse from the Trojan War, with conceptually similar function of deceiving defenders into bringing an intruder
inside.)
Virus
A virus is a self-replicating program that spreads by inserting copies of itself into other executable code or
documents. Therefore, a computer virus behaves in a way similar to a biological virus, which spreads by inserting
itself into living cells.
While some are harmless or mere hoaxes, most computer viruses are considered malicious.
Worm
Like a virus, a worm is also a self-replicating program. A worm differs from a virus in that it propagates through
computer networks without user intervention. Unlike a virus, it does not need to attach itself to an existing program.
Many people conflate the terms "virus" and "worm", using them both to describe any self-propagating program.
Key loggers
A keylogger is a tool designed to record ('log') every keystroke on an affected machine for later retrieval. Its purpose
is usually to allow the user of this tool to gain access to confidential information typed on the affected machine, such
as a user's password or other private data. Some keyloggers use virus-, trojan-, and rootkit-like methods to remain
active and hidden. However, some keyloggers are used in legitimate ways, sometimes even to enhance computer
security. As an example, a business might have a keylogger on a computer used at a point of sale, and data collected
by the keylogger could be used for catching employee fraud.
Notable Security Hackers
Kevin Mitnick
Kevin Mitnick is a computer security consultant and author, formerly the most wanted computer criminal in United
States history.[18]
Eric Corley
Eric Corley (also known as Emmanuel Goldstein) is the long standing publisher of 2600: The Hacker Quarterly. He
is also the founder of the H.O.P.E. conferences. He has been part of the hacker community since the late '70s.
Fyodor
Gordon Lyon, known by the handle Fyodor, authored the Nmap Security Scanner as well as many network security
books and web sites. He is a founding member of the Honeynet Project and Vice President of Computer
Professionals for Social Responsibility.
Solar Designer
Solar Designer is the pseudonym of the founder of the Openwall Project.
Michał Zalewski
Michał Zalewski (lcamtuf) is a prominent security researcher.
Gary McKinnon
Gary McKinnon is a British hacker facing extradition to the United States to face charges of perpetrating what has
been described as the "biggest military computer hack of all time".[19]
Hacking and the media
Hacker magazines
The most notable hacker-oriented magazine publications are Phrack, Hakin9 and 2600: The Hacker Quarterly.
While the information contained in hacker magazines and ezines was often outdated, they improved the reputations
of those who contributed by documenting their successes.[8]
Hackers in fiction
Hackers often show an interest in fictional cyberpunk and cyberculture literature and movies. Absorption of fictional
pseudonyms, symbols, values, and metaphors from these fictional works is very common.
Books portraying hackers:
• The cyberpunk novels of William Gibson, especially the Sprawl Trilogy, are very popular with hackers.[20]
• Hackers (short stories)
• Snow Crash
• Helba from the .hack manga and anime series
• Little Brother by Cory Doctorow
• Rice Tea by Julien McArdle
• Lisbeth Salander in Men who hate women by Stieg Larsson
Films also portray hackers:
• Cypher
• Tron
• WarGames
• The Matrix series
• Hackers
• Swordfish
• The Net
• The Net 2.0
• Antitrust
• Enemy of the State
• Sneakers
• Untraceable
• Firewall
• Die Hard 4: Live Free or Die Hard
• Eagle Eye
• Take Down
• Weird Science
Non-fiction books
• Hacking: The Art of Exploitation, Second Edition by Jon Erickson
• The Hacker Crackdown
• The Art of Intrusion by Kevin D. Mitnick
• The Art of Deception by Kevin D. Mitnick
• Takedown
• The Hacker's Handbook
• The Cuckoo's Egg by Clifford Stoll
• Underground by Suelette Dreyfus
Fiction books
• Ender's Game
• Neuromancer
• Evil Genius
See also
• Wireless hacking
• List of notable hackers
• Cyber spying
• Cyber Storm Exercise
References
Taylor, Paul A. (1999). Hackers. Routledge. ISBN 9780415180726.
[1] Sterling, Bruce (1993). "Part 2(d)". The Hacker Crackdown. McLean, Virginia: IndyPublish.com. p. 61. ISBN 1-4043-0641-2.
[2] Blomquist, Brian (May 29, 1999). "FBI's Web Site Socked as Hackers Target Feds" (http://archive.nypost.com/a/475198). New York Post. Retrieved on October 21, 2008.
[3] Raymond, Eric S. "Jargon File: Cracker" (http://catb.org/jargon/html/C/cracker.html). Retrieved 2010-05-08. "Coined ca. 1985 by hackers in defense against journalistic misuse of hacker"
[4] Moore, Robert (2006). Cybercrime: Investigating High-Technology Computer Crime. Cincinnati, Ohio: Anderson Publishing.
[5] Kizza, Joseph M. (2005). Computer Network Security. New York: Springer-Verlag.
[6] TAP Magazine Archive. http://servv89pn0aj.sn.sourcedns.com/~gbpprorg/2600/TAP/
[7] Jordan, Tim; Taylor, Paul A. (2004). Hacktivism and Cyberwars. Routledge. pp. 133–134. ISBN 9780415260039. "Wild West imagery has permeated discussions of cybercultures."
[8] Thomas, Douglas. Hacker Culture. University of Minnesota Press. p. 90. ISBN 9780816633463.
[9] Clifford, Ralph D. (2006). Cybercrime: The Investigation, Prosecution and Defense of a Computer-Related Crime, Second Edition. Durham, North Carolina: Carolina Academic Press.
[10] Wilhelm, Douglas. "2". Professional Penetration Testing. Syngress Press. p. 503. ISBN 9781597494250.
[11] http://www.eccouncil.org/
[12] http://www.microsoft.com/technet/security/bluehat/default.mspx
[13] "Blue hat hacker Definition" (http://www.pcmag.com/encyclopedia_term/0,2542,t=blue+hat+hacker&i=56321,00.asp). PC Magazine Encyclopedia. Retrieved 31 May 2010. "A security professional invited by Microsoft to find vulnerabilities in Windows."
[14] Fried, Ina (June 15, 2005). ""Blue Hat" summit meant to reveal ways of the other side" (http://news.cnet.com/Microsoft-meets-the-hackers/2009-1002_3-5747813.html). CNET News. Retrieved 31 May 2010.
[15] Markoff, John (October 17, 2005). "At Microsoft, Interlopers Sound Off on Security" (http://www.nytimes.com/2005/10/17/technology/17hackers.html?pagewanted=1&_r=1). New York Times. Retrieved 31 May 2010.
[16] Andress, Mandy; Cox, Phil; Tittel, Ed. CIW Security Professional. New York, NY: Hungry Minds, Inc. p. 10. ISBN 0764548220.
[17] Hacking approach (http://www.informit.com/articles/article.aspx?p=25916)
[18] United States Attorney's Office, Central District of California (9 August 1999). "Kevin Mitnick sentenced to nearly four years in prison; computer hacker ordered to pay restitution..." (http://www.usdoj.gov/criminal/cybercrime/mitnick.htm). Press release. Retrieved 10 April 2010.
[19] Boyd, Clark (30 July 2008). "Profile: Gary McKinnon" (http://news.bbc.co.uk/2/hi/technology/4715612.stm). BBC News. Retrieved 2008-11-15.
[20] Staples, Brent (May 11, 2003). "A Prince of Cyberpunk Fiction Moves Into the Mainstream" (http://www.nytimes.com/2003/05/11/opinion/11SUN3.html?ex=1367985600&en=9714db46bfff633a&ei=5007&partner=USERLAND). Retrieved 2008-08-30. "Mr. Gibson's novels and short stories are worshiped by hackers"
Related literature
• Kevin Beaver. Hacking For Dummies.
• Code Hacking: A Developer's Guide to Network Security by Richard Conway, Julian Cordingley
• “Dot.Con: The Dangers of Cyber Crime and a Call for Proactive Solutions” (http://www.scribd.com/doc/14361572/Dotcon-Dangers-of-Cybercrime-by-Johanna-Granville) by Johanna Granville, Australian Journal of
Politics and History, vol. 49, no. 1 (Winter 2003), pp. 102–109.
• Katie Hafner & John Markoff (1991). Cyberpunk: Outlaws and Hackers on the Computer Frontier. Simon &
Schuster. ISBN 0-671-68322-5.
• David H. Freeman & Charles C. Mann (1997). @ Large: The Strange Case of the World's Biggest Internet
Invasion. Simon & Schuster. ISBN 0-684-82464-7.
• Suelette Dreyfus (1997). Underground: Tales of Hacking, Madness and Obsession on the Electronic Frontier.
Mandarin. ISBN 1-86330-595-5.
• Bill Apro & Graeme Hammond (2005). Hackers: The Hunt for Australia's Most Infamous Computer Cracker.
Five Mile Press. ISBN 1-74124-722-5.
• Stuart McClure, Joel Scambray & George Kurtz (1999). Hacking Exposed. Mcgraw-Hill. ISBN 0-07-212127-0.
• Michael Gregg (2006). Certified Ethical Hacker. Pearson. ISBN 978-0789735317.
• Clifford Stoll (1990). The Cuckoo's Egg. The Bodley Head Ltd. ISBN 0-370-31433-6.
External links
• Timeline: A 40-year history of hacking from 1960 to 2001 (http://archives.cnn.com/2001/TECH/internet/11/19/hack.history.idg/). CNN Tech / PCWorld Staff (November 2001).
• History of Hacking documentary video (http://vodpod.com/watch/31369-discovery-channel-the-history-of-hacking-documentary). Discovery Channel.
Malware
Malware, short for malicious software, is software designed to infiltrate a computer system without the owner's
informed consent. The expression is a general term used by computer professionals to mean a variety of forms of
hostile, intrusive, or annoying software or program code.[1] The term "computer virus" is sometimes used as a
catch-all phrase to include all types of malware, including true viruses.
Software is considered to be malware based on the perceived intent of the creator rather than any particular features.
Malware includes computer viruses, worms, trojan horses, spyware, dishonest adware, crimeware, most rootkits, and
other malicious and unwanted software. In law, malware is sometimes known as a computer contaminant, for
instance in the legal codes of several U.S. states, including California and West Virginia.[2] [3]
Malware is not the same as defective software, that is, software that has a legitimate purpose but contains harmful
bugs.
Preliminary results from Symantec published in 2008 suggested that "the release rate of malicious code and other
unwanted programs may be exceeding that of legitimate software applications."[4] According to F-Secure, "As much
malware [was] produced in 2007 as in the previous 20 years altogether."[5] Malware's most common pathway from
criminals to users is through the Internet: primarily by e-mail and the World Wide Web.[6]
The prevalence of malware as a vehicle for organized Internet crime, along with the general inability of traditional
anti-malware protection platforms (products) to protect against the continuous stream of unique and newly produced
malware, has seen the adoption of a new mindset for businesses operating on the Internet: the acknowledgment that
some sizable percentage of Internet customers will always be infected for some reason or another, and that they need
to continue doing business with infected customers. The result is a greater emphasis on back-office systems designed
to spot fraudulent activities associated with advanced malware operating on customers' computers.[7]
On March 29, 2010, Symantec Corporation named Shaoxing, China as the world's malware capital.[8]
Sometimes, malware is disguised as genuine software and may come from an official site. Therefore, some security
programs, such as McAfee, may classify malware as "potentially unwanted programs" (PUP).
Purposes
Many early infectious programs, including the first Internet Worm and a number of MS-DOS viruses, were written
as experiments or pranks. They were generally intended to be harmless or merely annoying, rather than to cause
serious damage to computer systems. In some cases the perpetrator did not realize how much harm their creations
would do. Young programmers learning about viruses and their techniques wrote them simply to show that they
could, or to see how far a virus could spread. As late as 1999, widespread viruses such as the Melissa virus appear
to have been written chiefly as pranks.
Hostile intent related to vandalism can be found in programs designed to cause harm or data loss. Many DOS
viruses, and the Windows ExploreZip worm, were designed to destroy files on a hard disk, or to corrupt the file
system by writing invalid data to them. Network-borne worms such as the 2001 Code Red worm or the Ramen worm
fall into the same category. Designed to vandalize web pages, worms may seem like the online equivalent to graffiti
tagging, with the author's alias or affinity group appearing everywhere the worm goes.
Since the rise of widespread broadband Internet access, malicious software has been designed for profit, for
example forced advertising. For instance, since 2003, the majority of widespread viruses and worms have been
designed to take control of users' computers for black-market exploitation. Infected "zombie computers" are used to
send email spam, to host contraband data such as child pornography,[9] or to engage in distributed denial-of-service
attacks as a form of extortion.
Malware
Another strictly for-profit category of malware has emerged in spyware: programs designed to monitor users' web
browsing, display unsolicited advertisements, or redirect affiliate marketing revenues to the spyware creator.
Spyware programs do not spread like viruses; they are, in general, installed by exploiting security holes or are
packaged with user-installed software, such as peer-to-peer applications.
Infectious malware: viruses and worms
The best-known types of malware, viruses and worms, are known for the manner in which they spread, rather than
any other particular behavior. The term computer virus is used for a program that has infected some executable
software and that, when run, causes the virus to spread to other executables. Viruses may also contain a payload
that performs other actions, often malicious. A worm, on the other hand, is a program that actively transmits itself
over a network to infect other computers. It too may carry a payload.
These definitions lead to the observation that a virus requires user intervention to spread, whereas a worm spreads
itself automatically. Using this distinction, infections transmitted by email or Microsoft Word documents, which rely
on the recipient opening a file or email to infect the system, would be classified as viruses rather than worms.
Some writers in the trade and popular press appear to misunderstand this distinction, and use the terms
interchangeably.
Capsule history of viruses and worms
Before Internet access became widespread, viruses spread on personal computers by infecting the executable boot
sectors of floppy disks. By inserting a copy of itself into the machine code instructions in these executables, a virus
causes itself to be run whenever a program is run or the disk is booted. Early computer viruses were written for the
Apple II and Macintosh, but they became more widespread with the dominance of the IBM PC and MS-DOS
system. Executable-infecting viruses depend on users exchanging software or bootable floppies, so they
spread rapidly in computer hobbyist circles.
The first worms, network-borne infectious programs, originated not on personal computers, but on multitasking Unix
systems. The first well-known worm was the Internet Worm of 1988, which infected SunOS and VAX BSD systems.
Unlike a virus, this worm did not insert itself into other programs. Instead, it exploited security holes (vulnerabilities)
in network server programs and started itself running as a separate process. This same behavior is used by today's
worms as well.
With the rise of the Microsoft Windows platform in the 1990s, and the flexible macros of its applications, it became
possible to write infectious code in the macro language of Microsoft Word and similar programs. These macro
viruses infect documents and templates rather than applications (executables), but rely on the fact that macros in a
Word document are a form of executable code.
Today, worms are most commonly written for the Windows OS, although a few like Mare-D[10] and the Lion
worm[11] are also written for Linux and Unix systems. Worms today work in the same basic way as 1988's Internet
Worm: they scan the network and leverage vulnerable computers to replicate. Because they need no human
intervention, worms can spread with incredible speed. The SQL Slammer infected thousands of computers in a few
minutes.[12]
Concealment: Trojan horses, rootkits, and backdoors
Trojan horses
For a malicious program to accomplish its goals, it must be able to run without being shut down, or deleted by the
user or administrator of the computer system on which it is running. Concealment can also help get the malware
installed in the first place. When a malicious program is disguised as something innocuous or desirable, users may be
tempted to install it without knowing what it does. This is the technique of the Trojan horse or trojan.
In broad terms, a Trojan horse is any program that invites the user to run it, concealing a harmful or malicious
payload. The payload may take effect immediately and can lead to many undesirable effects, such as deleting the
user's files or further installing malicious or undesirable software. Trojan horses known as droppers are used to start
off a worm outbreak, by injecting the worm into users' local networks.
One of the most common ways that spyware is distributed is as a Trojan horse, bundled with a piece of desirable
software that the user downloads from the Internet. When the user installs the software, the spyware is installed
alongside. Spyware authors who attempt to act in a legal fashion may include an end-user license agreement that
states the behavior of the spyware in loose terms, which the users are unlikely to read or understand.
Rootkits
Once a malicious program is installed on a system, it is essential that it stays concealed, to avoid detection and
disinfection. The same is true when a human attacker breaks into a computer directly. Techniques known as rootkits
allow this concealment, by modifying the host's operating system so that the malware is hidden from the user.
Rootkits can prevent a malicious process from being visible in the system's list of processes, or keep its files from
being read. Originally, a rootkit was a set of tools installed by a human attacker on a Unix system, allowing the
attacker to gain administrator (root) access. Today, the term is used more generally for concealment routines in a
malicious program.
Some malicious programs contain routines to defend against removal, not merely to hide themselves, but to repel
attempts to remove them. An early example of this behavior is recorded in the Jargon File tale of a pair of programs
infesting a Xerox CP-V time sharing system:
Each ghost-job would detect the fact that the other had been killed, and would start a new copy of the recently
slain program within a few milliseconds. The only way to kill both ghosts was to kill them simultaneously
(very difficult) or to deliberately crash the system.[13]
Similar techniques are used by some modern malware, wherein the malware starts a number of processes that
monitor and restore one another as needed. Some malware programs use other techniques, such as naming the
infected file similarly to a legitimate or trusted file (e.g. expl0rer.exe vs. explorer.exe).
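The lookalike-filename trick above can be countered by normalizing suspicious names before comparing them against known system binaries. The following sketch (the binary list and homoglyph table are illustrative assumptions, not a real product's database) flags names that impersonate legitimate files by character substitution:

```python
# Flags file/process names that impersonate well-known binaries by
# substituting visually similar characters (e.g. "expl0rer.exe").
# LEGITIMATE and HOMOGLYPHS are small illustrative samples.

LEGITIMATE = {"explorer.exe", "svchost.exe", "lsass.exe"}

# Common character substitutions used to mimic legitimate names.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def looks_like_impersonation(name: str) -> bool:
    """True if `name` is not a known binary but normalizes to one."""
    name = name.lower()
    if name in LEGITIMATE:
        return False
    return name.translate(HOMOGLYPHS) in LEGITIMATE

print(looks_like_impersonation("expl0rer.exe"))  # True  (0 -> o)
print(looks_like_impersonation("explorer.exe"))  # False (the real name)
print(looks_like_impersonation("notepad.exe"))   # False (unrelated)
```

Real anti-malware products use far larger name databases and also check file paths and digital signatures, but the normalization idea is the same.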
Backdoors
A backdoor is a method of bypassing normal authentication procedures. Once a system has been compromised (by
one of the above methods, or in some other way), one or more backdoors may be installed in order to allow easier
access in the future. Backdoors may also be installed prior to malicious software, to allow attackers entry.
The idea has often been suggested that computer manufacturers preinstall backdoors on their systems to provide
technical support for customers, but this has never been reliably verified. Crackers typically use backdoors to secure
remote access to a computer, while attempting to remain hidden from casual inspection. To install backdoors
crackers may use Trojan horses, worms, or other methods.
Malware for profit: spyware, botnets, keystroke loggers, and dialers
During the 1980s and 1990s, it was usually taken for granted that malicious programs were created as a form of
vandalism or prank. More recently, the greater share of malware programs have been written with a financial or
profit motive in mind. This can be taken as the malware authors' choice to monetize their control over infected
systems: to turn that control into a source of revenue.
Spyware programs are commercially produced for the purpose of gathering information about computer users,
showing them pop-up ads, or altering web-browser behavior for the financial benefit of the spyware creator. For
instance, some spyware programs redirect search engine results to paid advertisements. Others, often called
"stealware" by the media, overwrite affiliate marketing codes so that revenue is redirected to the spyware creator
rather than the intended recipient.
Spyware programs are sometimes installed as Trojan horses of one sort or another. They differ in that their creators
present themselves openly as businesses, for instance by selling advertising space on the pop-ups created by the
malware. Most such programs present the user with an end-user license agreement that purportedly protects the
creator from prosecution under computer contaminant laws. However, spyware EULAs have not yet been upheld in
court.
Another way that financially motivated malware creators can profit from their infections is to directly use the
infected computers to do work for the creator. The infected computers are used as proxies to send out spam
messages. A computer left in this state is often known as a zombie computer. The advantage to spammers of using
infected computers is that they provide anonymity, protecting the spammer from prosecution. Spammers have also used
infected PCs to target anti-spam organizations with distributed denial-of-service attacks.
In order to coordinate the activity of many infected computers, attackers have used coordinating systems known as
botnets. In a botnet, the malware or malbot logs in to an Internet Relay Chat channel or other chat system. The
attacker can then give instructions to all the infected systems simultaneously. Botnets can also be used to push
upgraded malware to the infected systems, keeping them resistant to antivirus software or other security measures.
It is possible for a malware creator to profit by stealing sensitive information from a victim. Some malware programs
install a key logger, which intercepts the user's keystrokes when entering a password, credit card number, or other
information that may be exploited. This is then transmitted to the malware creator automatically, enabling credit card
fraud and other theft. Similarly, malware may copy the CD key or password for online games, allowing the creator to
steal accounts or virtual items.
Another way of stealing money from the infected PC owner is to take control of a dial-up modem and dial an
expensive toll call. Dialer (or porn dialer) software dials up a premium-rate telephone number such as a U.S. "900
number" and leaves the line open, charging the toll to the infected user.
Data-stealing malware
Data-stealing malware is a web threat that divests victims of personal and proprietary information with the intent of
monetizing stolen data through direct use or underground distribution. Content security threats that fall under this
umbrella include keyloggers, screen scrapers, spyware, adware, backdoors, and bots. The term does not refer to
activities such as spam, phishing, DNS poisoning, SEO abuse, etc. However, when these threats result in file
download or direct installation, as most hybrid attacks do, files that act as agents to proxy information will fall into
the data-stealing malware category.
Characteristics of data-stealing malware
Does not leave traces of the event
• The malware is typically stored in a cache that is routinely flushed
• The malware may be installed via a drive-by-download process
• The website hosting the malware as well as the malware is generally temporary or rogue
Frequently changes and extends its functions
• It is difficult for antivirus software to detect final payload attributes due to the combination(s) of malware
components
• The malware uses multiple file encryption levels
Thwarts Intrusion Detection Systems (IDS) after successful installation
• There are no perceivable network anomalies
• The malware hides in web traffic
• The malware is stealthier in terms of traffic and resource use
Thwarts disk encryption
• Data is stolen during decryption and display
• The malware can record keystrokes, passwords, and screenshots
Thwarts Data Loss Prevention (DLP)
• Leakage protection hinges on metadata tagging, and not everything is tagged
• Miscreants can use encryption to port data
Examples of data-stealing malware
• Bancos, an info stealer that waits for the user to access banking websites then spoofs pages of the bank website to
steal sensitive information.
• Gator, spyware that covertly monitors web-surfing habits, uploads data to a server for analysis then serves
targeted pop-up ads.
• LegMir, spyware that steals personal information such as account names and passwords related to online games.
• Qhost, a Trojan that modifies the Hosts file to point to a different DNS server when banking sites are accessed
then opens a spoofed login page to steal login credentials for those financial institutions.
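The Hosts-file redirection used by Qhost-style Trojans can be detected by checking whether sensitive domains resolve to unexpected addresses. A minimal sketch, assuming an invented watchlist of banking domains and a sample Hosts-file fragment:

```python
# Scan hosts-file entries for watched domains mapped to non-local,
# unexpected addresses (the redirection technique described above).
# WATCHED_DOMAINS and the sample text are illustrative.

WATCHED_DOMAINS = {"bank.example.com", "login.mybank.example"}

def suspicious_hosts_entries(hosts_text: str):
    """Return (ip, domain) pairs where a watched domain is redirected."""
    findings = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            if name in WATCHED_DOMAINS and ip not in ("127.0.0.1", "::1"):
                findings.append((ip, name))
    return findings

sample = """
127.0.0.1   localhost
203.0.113.7 bank.example.com   # injected by malware
"""
print(suspicious_hosts_entries(sample))  # [('203.0.113.7', 'bank.example.com')]
```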
Data-stealing malware incidents
• Albert Gonzalez is accused of masterminding a ring to use malware to steal and sell more than 170 million credit
card numbers in 2006 and 2007—the largest computer fraud in history. Among the firms targeted were BJ's
Wholesale Club, TJX, DSW Shoe, OfficeMax, Barnes & Noble, Boston Market, Sports Authority and Forever
21.[14]
• A Trojan horse program stole more than 1.6 million records belonging to several hundred thousand people from
Monster Worldwide Inc’s job search service. The data was used by cybercriminals to craft phishing emails
targeted at Monster.com users to plant additional malware on users’ PCs.[15]
• Customers of Hannaford Bros. Co, a supermarket chain based in Maine, were victims of a data security breach
involving the potential compromise of 4.2 million debit and credit cards. The company was hit by several
class-action law suits.[16]
• The Torpig Trojan has compromised and stolen login credentials from approximately 250,000 online bank
accounts as well as a similar number of credit and debit cards. Other information such as email, and FTP accounts
from numerous websites, have also been compromised and stolen.[17]
Vulnerability to malware
In this context, as throughout, it should be borne in mind that the “system” under attack may be of various types, e.g.
a single computer and operating system, a network or an application.
Various factors make a system more vulnerable to malware:
• Homogeneity: e.g. when all computers in a network run the same OS, upon exploiting one, one can exploit them
all.
• Weight of numbers: simply because the vast majority of existing malware is written to attack Windows systems,
Windows systems are, ipso facto, more likely to succumb to malware (regardless of the security
strengths or weaknesses of Windows itself).
• Defects: malware leveraging defects in the OS design.
• Unconfirmed code: code from a floppy disk, CD-ROM or USB device may be executed without the user’s
agreement.
• Over-privileged users: some systems allow all users to modify their internal structures.
• Over-privileged code: some systems allow code executed by a user to access all rights of that user.
An oft-cited cause of vulnerability of networks is homogeneity, or software monoculture.[18] For example, Microsoft
Windows and Apple Mac have such a large share of the market that concentrating on either could enable a cracker to
subvert a large number of systems, but any total monoculture is a problem. Introducing inhomogeneity
(diversity) purely for the sake of robustness could increase short-term costs for training and maintenance; however,
a few diverse nodes would deter total shutdown of the network and allow those nodes to help with recovery
of the infected nodes. Such separate, functional redundancy would avoid the cost of a total shutdown and avoid
the "all eggs in one basket" problem of homogeneity.
Most systems contain bugs, or loopholes, which may be exploited by malware. A typical example is the
buffer-overrun weakness, in which an interface designed to store data, in a small area of memory, allows the caller to
supply more data than will fit. This extra data then overwrites the interface's own executable structure (past the end
of the buffer and other data). In this manner, malware can force the system to execute malicious code, by replacing
legitimate code with its own payload of instructions (or data values) copied into live memory, outside the buffer
area.
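The buffer-overrun mechanism can be illustrated with a simplified simulation: a fixed-size buffer followed in "memory" by adjacent control data, so that an unchecked copy spills past the buffer. This is a model of the weakness, not real machine-level exploitation; the layout and the `RET!` marker are invented for illustration:

```python
# Simplified simulation of a buffer overrun. An 8-byte buffer is followed
# in "memory" by a 4-byte field standing in for adjacent control data
# (e.g. a saved return address). Copying without a bounds check lets
# caller-supplied bytes spill past the buffer and overwrite that field.

memory = bytearray(12)          # bytes 0-7: buffer, bytes 8-11: control data
memory[8:12] = b"RET!"          # legitimate adjacent data

def unsafe_copy(data: bytes):
    """Copies like C's strcpy: no check against the 8-byte buffer size."""
    memory[0:len(data)] = data

unsafe_copy(b"AAAAAAAAXXXX")    # 12 bytes written into an 8-byte buffer
print(memory[8:12])             # b'XXXX' -- control data overwritten
```

In a real overrun the overwritten bytes could redirect execution to attacker-supplied instructions; bounds-checked copies (or languages that enforce them) close this hole.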
Originally, PCs had to be booted from floppy disks, and until recently it was common for this to be the default boot
device. This meant that a corrupt floppy disk could subvert the computer during booting, and the same applies to
CDs. Although that is now less common, it is still possible to forget that one has changed the default, and rare that a
BIOS makes one confirm a boot from removable media.
In some systems, non-administrator users are over-privileged by design, in the sense that they are allowed to modify
internal structures of the system. In some environments, users are over-privileged because they have been
inappropriately granted administrator or equivalent status. This is primarily a configuration decision, but on
Microsoft Windows systems the default configuration is to over-privilege the user. This situation exists due to
decisions made by Microsoft to prioritize compatibility with older systems above security configuration in newer
systems and because typical applications were developed without the under-privileged users in mind. As privilege
escalation exploits have increased, this priority shifted with the release of Microsoft Windows Vista. As a result,
many existing applications that require excess privilege (over-privileged code) may have compatibility problems
with Vista. However, Vista's User Account Control feature attempts to remedy applications not designed for
under-privileged users, acting as a crutch to resolve the privileged access problem inherent in legacy applications.
Malware, running as over-privileged code, can use this privilege to subvert the system. Almost all currently popular
operating systems, and many scripting applications, grant code too many privileges, usually in the sense that
when a user executes code, the system allows that code all rights of that user. This makes users vulnerable to
malware in the form of e-mail attachments, which may or may not be disguised.
Given this state of affairs, users are warned only to open attachments they trust, and to be wary of code received
from untrusted sources. It is also common for operating systems to be designed so that device drivers need escalated
privileges, even as those drivers are supplied by an ever-growing range of hardware manufacturers.
Eliminating over-privileged code
Over-privileged code dates from the time when most programs were either delivered with a computer or written
in-house, and repairing it would at a stroke render most antivirus software almost redundant. It would, however, have
appreciable consequences for the user interface and system management.
The system would have to maintain privilege profiles, and know which to apply for each user and program. In the
case of newly installed software, an administrator would need to set up default profiles for the new code.
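The privilege-profile idea above can be sketched as a table mapping each program to the operations it is explicitly granted, with everything else denied by default (the profile names and operations here are invented for illustration):

```python
# A toy privilege-profile table: each program is granted an explicit set
# of operations; anything not listed is denied. Newly installed code has
# no profile yet, so it starts with no privileges at all.

PROFILES = {
    "editor.exe":  {"read_user_files", "write_user_files"},
    "browser.exe": {"read_user_files", "network"},
}

def allowed(program: str, operation: str) -> bool:
    """Deny by default: an unknown program gets an empty profile."""
    return operation in PROFILES.get(program, set())

print(allowed("browser.exe", "network"))           # True
print(allowed("browser.exe", "write_user_files"))  # False
print(allowed("unknown.exe", "read_user_files"))   # False (no profile yet)
```

The administrative cost mentioned in the text is visible even in this sketch: someone has to author and maintain a profile for every installed program.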
Eliminating vulnerability to rogue device drivers is probably harder than for arbitrary rogue executables. Two
techniques, used in VMS, that can help are memory mapping only the registers of the device in question and a
system interface associating the driver with interrupts from the device.
Other approaches are:
• Various forms of virtualization, allowing the code unlimited access only to virtual resources
• Various forms of sandbox or jail
• The security functions of Java, in java.security
Such approaches, however, if not fully integrated with the operating system, would reduplicate effort and not be
universally applied, both of which would be detrimental to security.
Anti-malware programs
As malware attacks become more frequent, attention has begun to shift from virus and spyware protection to
malware protection, and programs have been developed specifically to combat malware.
Anti-malware programs can combat malware in two ways:
1. They can provide real-time protection against the installation of malware on a computer. This type of
protection works the same way as antivirus protection: the anti-malware software scans all
incoming network data for malware and blocks any threats it comes across.
2. Anti-malware software can be used solely for detection and removal of malware that has
already been installed on a computer. This type of malware protection is normally much easier to use and more
popular. It scans the contents of the Windows registry, operating system files,
and installed programs on a computer and provides a list of any threats found, allowing the user to choose
which files to delete or keep, or to compare the list against known malware components and remove files that
match.
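The second (on-demand) mode can be sketched as a signature scan: hash each file's contents and compare against a database of known-malware digests. Here the "database" holds one digest derived from a stand-in sample; real products ship large, regularly updated signature sets:

```python
# On-demand scanning in miniature: flag files whose SHA-256 digest
# matches a known-malware signature. KNOWN_BAD is built from a stand-in
# sample string, purely for illustration.

import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

MALWARE_SAMPLE = b"this stands in for a known-bad file body"
KNOWN_BAD = {sha256(MALWARE_SAMPLE)}

def scan(files: dict) -> list:
    """Return names of files whose digest matches a known signature."""
    return [name for name, body in files.items() if sha256(body) in KNOWN_BAD]

print(scan({"a.txt": b"harmless", "b.exe": MALWARE_SAMPLE}))  # ['b.exe']
```

Exact-hash matching is easily evaded by changing a single byte, which is why real scanners also use pattern signatures and heuristics.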
Real-time protection from malware works identically to real-time antivirus protection: the software scans disk files at
download time, and blocks the activity of components known to represent malware. In some cases, it may also
intercept attempts to install start-up items or to modify browser settings. Because many malware components are
installed as a result of browser exploits or user error, using security software (some of which are anti-malware,
though many are not) to "sandbox" browsers (essentially babysit the user and their browser) can also be effective in
helping to restrict any damage done.
Academic research on malware: a brief overview
The notion of a self-reproducing computer program can be traced back to 1949, when John von Neumann presented
lectures that encompassed the theory and organization of complicated automata.[19] Von Neumann showed that in theory a program could reproduce
itself. This constituted a plausibility result in computability theory. Fred Cohen experimented with computer viruses
and confirmed von Neumann's postulate. He also investigated other properties of malware (detectability, self-obfuscating
programs that used rudimentary encryption that he called "evolutionary", and so on). His 1988 doctoral dissertation
was on the subject of computer viruses.[20] Cohen's faculty advisor, Leonard Adleman (the A in RSA) presented a
rigorous proof that, in the general case, algorithmically determining whether a virus is or is not present is Turing
undecidable.[21] This problem must not be mistaken for that of determining, within a broad class of programs, that a
virus is not present; this problem differs in that it does not require the ability to recognize all viruses. Adleman's
proof is perhaps the deepest result in malware computability theory to date and it relies on Cantor's diagonal
argument as well as the halting problem. Ironically, it was later shown by Young and Yung that Adleman's work in
cryptography is ideal in constructing a virus that is highly resistant to reverse-engineering by presenting the notion of
a cryptovirus.[22] A cryptovirus is a virus that contains and uses a public key and randomly generated symmetric
cipher initialization vector (IV) and session key (SK). In the cryptoviral extortion attack, the virus hybrid encrypts
plaintext data on the victim's machine using the randomly generated IV and SK. The IV+SK are then encrypted
using the virus writer's public key. In theory the victim must negotiate with the virus writer to get the IV+SK back in
order to decrypt the ciphertext (assuming there are no backups). Analysis of the virus reveals the public key, not the
IV and SK needed for decryption, or the private key needed to recover the IV and SK. This result was the first to
show that computational complexity theory can be used to devise malware that is robust against reverse-engineering.
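The cryptoviral extortion structure above can be sketched with toy parameters. Nothing here is cryptographically sound: the RSA modulus is a textbook example (p=61, q=53) and the "symmetric cipher" is a SHA-256 keystream XOR standing in for a real cipher; the point is only that the infected machine holds the public key but not the private exponent needed to unwrap the session key:

```python
# Toy model of the hybrid (cryptoviral) scheme: data is encrypted under a
# random session key (SK); only SK is encrypted under the attacker's
# PUBLIC key. Toy textbook-RSA parameters, for illustration only.

import hashlib, os

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (self-inverse)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Textbook RSA with p=61, q=53: n=3233, e=17, d=2753.
N, E, D = 3233, 17, 2753

plaintext = b"victim's files"
session_key = os.urandom(16)                     # SK, generated on the victim
ciphertext = keystream_xor(plaintext, session_key)

# Wrap the session key byte-by-byte under the PUBLIC key only.
wrapped = [pow(b, E, N) for b in session_key]

# Analysis of the infected machine reveals N, E, ciphertext and `wrapped`,
# but recovering session_key requires the private exponent D.
recovered_key = bytes(pow(c, D, N) for c in wrapped)
print(keystream_xor(ciphertext, recovered_key))  # b"victim's files"
```

In the attack described by Young and Yung, the victim never has D, so decryption requires negotiating with the virus writer (absent backups).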
Another growing area of computer virus research is the mathematical modelling of worm infection behavior, using
models such as the Lotka–Volterra equations, which have long been applied to the study of biological viruses. Researchers
have studied various propagation scenarios, including the spread of computer viruses, fighting viruses with
virus-like predator code,[23] [24] and the effectiveness of patching.
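The flavor of such epidemic models can be shown with a minimal discrete-time susceptible-infected (SI) simulation; the population size and contact rate below are illustrative values, not fitted to any real worm:

```python
# Minimal discrete-time SI (susceptible-infected) model of worm spread.
# beta is the per-step contact rate; parameters are illustrative.

def si_model(hosts: int, beta: float, steps: int):
    """Return infected counts per step; one host is infected at t=0."""
    infected = 1.0
    history = [infected]
    for _ in range(steps):
        susceptible = hosts - infected
        infected += beta * infected * susceptible / hosts
        history.append(infected)
    return history

curve = si_model(hosts=10000, beta=0.8, steps=30)
print(round(curve[10]))   # early phase: near-exponential growth
print(round(curve[30]))   # late phase: saturation near the population size
```

The characteristic S-shaped curve, fast exponential growth followed by saturation as susceptible hosts run out, matches the observed behavior of fast worms such as SQL Slammer.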
Grayware
Grayware[25] (or greyware) is a general term sometimes used as a classification for applications that behave in a
manner that is annoying or undesirable, and yet less serious or troublesome than malware.[26] Grayware encompasses
spyware, adware, dialers, joke programs, remote access tools, and any other unwelcome files and programs apart
from viruses that are designed to harm the performance of computers on your network. The term has been in use
since at least as early as September 2004.[27]
Grayware refers to applications or files that are not classified as viruses or trojan horse programs, but can still
negatively affect the performance of the computers on your network and introduce significant security risks to your
organization.[28] Often grayware performs a variety of undesired actions such as irritating users with pop-up
windows, tracking user habits and unnecessarily exposing computer vulnerabilities to attack.
• Spyware is software that installs components on a computer for the purpose of recording Web surfing habits
(primarily for marketing purposes). Spyware sends this information to its author or to other interested parties
when the computer is online. Spyware often downloads with items identified as 'free downloads' and does not
notify the user of its existence or ask for permission to install the components. The information spyware
components gather can include user keystrokes, which means that private information such as login names,
passwords, and credit card numbers are vulnerable to theft.
• Adware is software that displays advertising banners on Web browsers such as Internet Explorer and Mozilla
Firefox. While not categorized as malware, many users consider adware invasive. Adware programs often create
unwanted effects on a system, such as annoying popup ads and the general degradation in either network
connection or system performance. Adware programs are typically installed as separate programs that are bundled
with certain free software. Many users inadvertently agree to installing adware by accepting the End User License
Agreement (EULA) on the free software. Adware are also often installed in tandem with spyware programs. Both
programs feed off each other's functionalities: spyware programs profile users' Internet behavior, while adware
programs display targeted ads that correspond to the gathered user profile.
Web and spam
If an intruder can gain access to a website, the site can be hijacked with a single HTML element, for example:
<iframe src="http://example.net/out.php?s_id=11" width=0 height=0 />[29]
The World Wide Web is a criminals' preferred pathway for spreading malware. Today's web threats use
combinations of malware to create infection chains. About one in ten Web pages may contain malicious code.[30]
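The zero-size iframe injection shown above can be detected by parsing a page's HTML and flagging invisible frames. A sketch using the standard library parser (the sample page and the width/height heuristic are illustrative; real scanners use many more signals):

```python
# Flag zero-size iframes, a common marker of a hijacked page silently
# loading content from a third-party host.

from html.parser import HTMLParser

class HiddenIframeFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "iframe" and a.get("width") == "0" and a.get("height") == "0":
            self.findings.append(a.get("src"))

page = '<p>news</p><iframe src="http://example.net/out.php?s_id=11" width=0 height=0 />'
finder = HiddenIframeFinder()
finder.feed(page)
print(finder.findings)  # ['http://example.net/out.php?s_id=11']
```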
Wikis and blogs
Innocuous wikis and blogs are not immune to hijacking. It has been reported that the German edition of Wikipedia
was recently used in an attempt to spread infection: through a form of social engineering, users with ill intent
added links to web pages containing malicious software, claiming that the pages provided
detection and remedies, when in fact they were a lure to infect.[31]
In 2010 alone, large hosting providers (GoDaddy, Network Solutions, etc.) were hacked,[32] and every site they
hosted became a path to malware and spam.
Targeted SMTP threats
Targeted SMTP threats also represent an emerging attack vector through which malware is propagated. As users
adapt to widespread spam attacks, cybercriminals distribute crimeware to target one specific organization or
industry, often for financial gain.[33]
HTTP and FTP
Infections via "drive-by" download are spread through the Web over HTTP and FTP when resources containing
spurious keywords are indexed by legitimate search engines, as well as when JavaScript is surreptitiously added to
legitimate websites and advertising networks.[34]
See also
• Browser exploit
• Category:Web security exploits
• Computer crime
• Computer insecurity
• Cyber spying
• Firewall (networking)
• Malvertising
• Privacy-invasive software
• Privilege escalation
• Security in Web applications
• Social engineering (security)
• Spy software
• Targeted threat
• Securelist.com
• Web server overload causes
• White collar crime
• Economic and Industrial Espionage
External links
• Malicious Software [35] at the Open Directory Project
• US Department of Homeland Security Identity Theft Technology Council report "The Crimeware Landscape:
Malware, Phishing, Identity Theft and Beyond" [36]
• Video: Mark Russinovich - Advanced Malware Cleaning [37]
• An analysis of targeted attacks using malware [38]
• Malicious Programs Hit New High [39] (retrieved February 8, 2008)
• Malware Block List [40]
• Open Security Foundation Data Loss Database [41]
• Internet Crime Complaint Center [42]
References
[1] "Defining Malware: FAQ" (http://technet.microsoft.com/en-us/library/dd632948.aspx). technet.microsoft.com. Retrieved 2009-09-10.
[2] National Conference of State Legislatures, Virus/Contaminant/Destructive Transmission Statutes by State (http://www.ncsl.org/programs/lis/cip/viruslaws.htm)
[3] jcots.state.va.us/2005%20Content/pdf/Computer%20Contamination%20Bill.pdf [§18.2-152.4:1 Penalty for Computer Contamination]
[4] "Symantec Internet Security Threat Report: Trends for July-December 2007 (Executive Summary)" (http://eval.symantec.com/mktginfo/enterprise/white_papers/b-whitepaper_exec_summary_internet_security_threat_report_xiii_04-2008.en-us.pdf) (PDF). Symantec Corp. April 2008. p. 29. Retrieved 2008-05-11.
[5] F-Secure Corporation (December 4, 2007). "F-Secure Reports Amount of Malware Grew by 100% during 2007" (http://www.f-secure.com/f-secure/pressroom/news/fs_news_20071204_1_eng.html). Press release. Retrieved 2007-12-11.
[6] "F-Secure Quarterly Security Wrap-up for the first quarter of 2008" (http://www.f-secure.com/f-secure/pressroom/news/fsnews_20080331_1_eng.html). F-Secure. March 31, 2008. Retrieved 2008-04-25.
[7] "Continuing Business with Malware Infected Customers" (http://www.technicalinfo.net/papers/MalwareInfectedCustomers.html). Gunter Ollmann. October 2008.
[8] "Symantec names Shaoxing, China as world's malware capital" (http://www.engadget.com/2010/03/29/symantec-names-shaoxing-china-worlds-malware-capital). Engadget. Retrieved 2010-04-15.
[9] PC World, "Zombie PCs: Silent, Growing Threat" (http://www.pcworld.com/article/id,116841-page,1/article.html).
[10] Nick Farrell (20 February 2006). "Linux worm targets PHP flaw" (http://www.theregister.co.uk/2006/02/20/linux_worm/). The Register. Retrieved 19 May 2010.
[11] John Leyden (March 28, 2001). "Highly destructive Linux worm mutating" (http://www.theregister.co.uk/2001/03/28/highly_destructive_linux_worm_mutating/). The Register. Retrieved 19 May 2010.
[12] "Aggressive net bug makes history" (http://news.bbc.co.uk/2/hi/technology/2720337.stm). BBC News. February 3, 2003. Retrieved 19 May 2010.
[13] "Catb.org" (http://catb.org/jargon/html/meaning-of-hack.html). Catb.org. Retrieved 2010-04-15.
[14] "Gonzalez, Albert - Indictment 080508" (http://www.usdoj.gov/usao/ma/Press Office - Press Release Files/IDTheft/Gonzalez, Albert - Indictment 080508.pdf). US Department of Justice Press Office. pp. 01–18. Retrieved 2010-.
[15] Keizer, Gregg (2007). "Monster.com data theft may be bigger" (http://www.pcworld.com/article/136154/monstercom_data_theft_may_be_bigger.html)
[16] Vijayan, Jaikumar (2008). "Hannaford hit by class-action lawsuits in wake of data breach disclosure" (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9070281)
[17] BBC News, "Trojan virus steals banking info" (http://news.bbc.co.uk/2/hi/technology/7701227.stm)
[18] "LNCS 3786 - Key Factors Influencing Worm Infection", U. Kanlayasiri, 2006, web (PDF): SL40-PDF (http://www.springerlink.com/index/3x8582h43ww06440.pdf).
[19] John von Neumann, "Theory of Self-Reproducing Automata", Part 1: Transcripts of lectures given at the University of Illinois, December 1949. Editor: A. W. Burks, University of Illinois, USA, 1966.
[20] Fred Cohen, "Computer Viruses", PhD Thesis, University of Southern California, ASP Press, 1988.
[21] L. M. Adleman, "An Abstract Theory of Computer Viruses", Advances in Cryptology - Crypto '88, LNCS 403, pp. 354-374, 1988.
[22] A. Young, M. Yung, "Cryptovirology: Extortion-Based Security Threats and Countermeasures," IEEE Symposium on Security & Privacy, pp. 129-141, 1996.
[23] H. Toyoizumi, A. Kara. Predators: Good Will Mobile Codes Combat against Computer Viruses. Proc. of the 2002 New Security Paradigms Workshop, 2002.
[24] Zakiya M. Tamimi, Javed I. Khan, Model-Based Analysis of Two Fighting Worms (http://www.medianet.kent.edu/publications/ICCCE06DL-2virusabstract-TK.pdf), IEEE/IIU Proc. of ICCCE '06, Kuala Lumpur, Malaysia, May 2006, Vol-I, pp. 157-163.
[25] "Other meanings" (http://mpc.byu.edu/Exhibitions/Of Earth Stone and Corn/Activities/Native American Pottery.dhtml). Retrieved 2007-01-20. The term "grayware" is also used to describe a kind of Native American pottery and has also been used by some working in computer technology as slang for the human brain. "grayware definition" (http://www.techweb.com/encyclopedia/defineterm.jhtml?term=grayware). TechWeb.com. Retrieved 2007-01-02.
[26] "Greyware" (http://webopedia.com/TERM/g/greyware.html). What is greyware? - A word definition from the Webopedia Computer Dictionary. Retrieved 2006-06-05.
[27] Antony Savvas. "The network clampdown" (http://www.computerweekly.com/Articles/2004/09/28/205554/the-network-clampdown.htm). Computer Weekly. Retrieved 2007-01-20.
[28] "Fortinet WhitePaper Protecting networks against spyware, adware and other forms of grayware" (http://www.boll.ch/fortinet/assets/Grayware.pdf) (PDF). Retrieved 2007-01-20.
[29] Zittrain, Jonathan (Mike Deehan, producer). (2008-04-17). Berkman Book Release: The Future of the Internet — And How to Stop It (http://cyber.law.harvard.edu/interactive/events/2008/04/zittrain). [video/audio]. Cambridge, MA, USA: Berkman Center, The President and Fellows of Harvard College. Retrieved 2008-04-21.
[30] "Google searches web's dark side" (http://news.bbc.co.uk/2/hi/technology/6645895.stm). BBC News. May 11, 2007. Retrieved 2008-04-26.
[31] Sharon Khare. "Wikipedia Hijacked to Spread Malware" (http://www.tech2.com/india/news/telecom/wikipedia-hijacked-to-spread-malware/2667/0). India: Tech2.com. Retrieved 2010-04-15.
[32] "Attacks against Wordpress" (http://blog.sucuri.net/2010/05/new-attack-today-against-wordpress.html). Sucuri Security. May 11, 2010. Retrieved 2010-04-26.
[33] "Protecting Corporate Assets from E-mail Crimeware," Avinti, Inc., p.1 (http://www.avinti.com/download/market_background/whitepaper_email_crimeware_protection.pdf)
[34] F-Secure (March 31, 2008). "F-Secure Quarterly Security Wrap-up for the first quarter of 2008" (http://www.f-secure.com/f-secure/pressroom/news/fsnews_20080331_1_eng.html). Press release. Retrieved 2008-03-31.
[35] http://www.dmoz.org/Computers/Security/Malicious_Software/
[36] http://www.antiphishing.org/reports/APWG_CrimewareReport.pdf
[37] http://www.microsoft.com/emea/itsshowtime/sessionh.aspx?videoid=359
[38] http://www.daemon.be/maarten/targetedattacks.html
[39] http://news.bbc.co.uk/2/hi/technology/7232752.stm
[40] http://www.malware.com.br
[41] http://datalossdb.org/
[42] http://www.ic3.gov/default.aspx/
Vulnerability
For other uses of the word "vulnerability", please refer to vulnerability (computing). You may also want to refer to natural disaster.
Vulnerability is the susceptibility to physical or emotional injury or attack. It also means to have one's guard down,
open to censure or criticism. Vulnerability refers to a person's state of being liable to succumb, as to manipulation,
persuasion or temptation.
A window of vulnerability, sometimes abbreviated to wov, is a time frame within which defensive measures are
reduced, compromised or lacking.
Vulnerabilities exploited by psychological manipulators
See Vulnerabilities exploited by manipulators
Common applications
In relation to hazards and disasters, vulnerability is a concept that links the relationship that people have with their
environment to social forces and institutions and the cultural values that sustain and contest them. “The concept of
vulnerability expresses the multidimensionality of disasters by focusing attention on the totality of relationships in a
given social situation which constitute a condition that, in combination with environmental forces, produces a
disaster” (Bankoff et al. 2004: 11).
It is also the extent to which changes could harm a system; in other words, the extent to which a community can be affected by the impact of a hazard.
In global warming, vulnerability is the degree to which a system is susceptible to, or unable to cope with, adverse effects of climate change, including climate variability and extremes.[1]
Emerging research
Vulnerability research covers a complex, multidisciplinary field including development and poverty studies, public
health, climate studies, security studies, engineering, geography, political ecology, and disaster and risk
management. This research is of particular importance and interest for organizations trying to reduce vulnerability –
especially as related to poverty and other Millennium Development Goals. Many institutions are conducting
interdisciplinary research on vulnerability. A forum that brings many of the current researchers on vulnerability
together is the Expert Working Group (EWG). Researchers are currently working to refine definitions of
“vulnerability”, measurement and assessment methods, and effective communication of research to decision makers
(Birkmann et al. 2006).
Major research questions
Within the body of literature related to vulnerability, major research streams include questions of methodology, such
as: measuring and assessing vulnerability, including finding appropriate indicators for various aspects of
vulnerability, up- and downscaling methods, and participatory methods (Villagran 2006).
A sub-category of vulnerability research is social vulnerability, where increasingly researchers are addressing some
of the problems of complex human interactions, vulnerability of specific groups of people, and shocks like natural
hazards, climate change, and other kinds of disruptions. The importance of the issue is indicated by the establishment
of endowed chairs at university departments to examine social vulnerability.
Military vulnerability
In military circles, vulnerability is a subset of survivability (the others being susceptibility and recoverability).
Vulnerability is defined in various ways depending on the nation and service arm concerned, but in general it refers
to the near-instantaneous effects of a weapon attack. In some definitions Recoverability (damage control,
firefighting, restoration of capability) is included in Vulnerability.
A discussion of warship vulnerability can be found here.[2]
Invulnerability
Invulnerability is a common feature found in video games. It makes the player impervious to pain, damage or loss
of health. It can be found in the form of "power-ups" or cheats. Generally, it does not protect the player from certain
instant-death hazards, most notably "bottomless" pits from which, even if the player were to survive the fall, they
would be unable to escape. As a rule, invulnerability granted by power-ups is temporary, and wears off after a set
amount of time, while invulnerability cheats, once activated, remain in effect until deactivated, or the end of the level
is reached. Depending on the game in question, invulnerability to damage may or may not protect the player from
non-damage effects, such as being immobilized or sent flying.
In comic books, some superheroes are considered invulnerable, though this usually only applies up to a certain level.
(e.g. Superman is invulnerable to physical attacks from normal people but not to the extremely powerful attacks of
Doomsday).
Expert Working Group on Vulnerability
The Expert Working Group on Vulnerability is a group of experts brought together by the United Nations University
Institute of Environment and Human Security (UNU-EHS). The overall goal of the Expert Working Group is to
advance the concept of human security regarding vulnerability of societies to hazards of natural origin. The EWG
exchanges ideas about the development of methodologies, approaches and indicators to measure vulnerability. This
is a key task to build a bridge between the theoretical conceptualization of vulnerability and its practical application
in decision-making processes. The Expert Working Group is an exchange platform for experts and practitioners from
various scientific backgrounds and world regions dealing with the identification and measurement of vulnerability.
Emphasis is given to the identification of the different features and characteristics of vulnerability, coping capacities
and adaptation strategies of different social groups, economic sectors and environmental components.
See also
• Vulnerability in computing
• Social vulnerability
References
[1] Glossary Climate Change (http:/ / www. global-greenhouse-warming. com/ glossary-climate-change. html)
[2] Warship Vulnerability (http:/ / www. ausairpower. net/ Warship-Hits. html)
• Bankoff, Greg, George Frerks and Dorothea Hilhorst. 2004. Mapping Vulnerability. Sterling: Earthscan.
• Birkmann, Joern (editor). 2006. Measuring Vulnerability to Natural Hazards – Towards Disaster Resilient
Societies. UNU Press.
• Thywissen, Katharina. 2006. “Components of Risk: A comparative glossary." SOURCE No. 2/2006. Bonn,
Germany.
• Villagran, Juan Carlos. "“Vulnerability: A conceptual and methodological review." SOURCE. No. 2/2006. Bonn,
Germany.
External links
• vulnerable site reporter (http://bugtraq.byethost22.com)
• United Nations University Institute of Environment and Human Security (http://www.ehs.unu.edu)
• MunichRe Foundation (http://www.munichre-foundation.org)
• Community based vulnerability mapping in Búzi, Mozambique (GIS and Remote Sensing) (http://projects.stefankienberger.at/vulmoz/)
• Satellite Vulnerability (http://www.fas.org/spp/eprint/at_sp.htm)
• Top Computer Vulnerabilities (http://www.sans.org/top20/?utm_source=web-sans&utm_medium=text-ad&
utm_content=Free_Resources_Homepage_top20_free_rsrcs_homepage&utm_campaign=Top_20&ref=27974)
Computer virus
A computer virus is a computer program that can copy itself[1] and infect a computer. The term "virus" is also
commonly but erroneously used to refer to other types of malware, including but not limited to adware and spyware
programs that do not have the reproductive ability. A true virus can spread from one computer to another (in some
form of executable code) when its host is taken to the target computer; for instance because a user sent it over a
network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive.[2]
Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a
file system that is accessed by another computer.[3] [4]
As stated above, the term "computer virus" is sometimes used as a catch-all phrase to include all types of malware,
even those that do not have the reproductive ability. Malware includes computer viruses, computer worms, Trojan
horses, most rootkits, spyware, dishonest adware and other malicious and unwanted software, including true viruses.
Viruses are sometimes confused with worms and Trojan horses, which are technically different. A worm can exploit
security vulnerabilities to spread itself automatically to other computers through networks, while a Trojan horse is a
program that appears harmless but hides malicious functions. Worms and Trojan horses, like viruses, may harm a
computer system's data or performance. Some viruses and other malware have symptoms noticeable to the computer
user, but many are surreptitious or simply do nothing to call attention to themselves. Some viruses do nothing
beyond reproducing themselves.
History
Academic work
The first academic work on the theory of computer viruses (although the term "computer virus" was not invented at
that time) was done by John von Neumann in 1949 who held lectures at the University of Illinois about the "Theory
and Organization of Complicated Automata". The work of von Neumann was later published as the "Theory of
self-reproducing automata".[5] In his essay von Neumann postulated that a computer program could reproduce.
In 1972 Veith Risak published his article "Selbstreproduzierende Automaten mit minimaler
Informationsübertragung" (Self-reproducing automata with minimal information exchange).[6] The article describes a
fully functional virus written in assembler language for a SIEMENS 4004/35 computer system.
In 1980 Jürgen Kraus wrote his diplom thesis "Selbstreproduktion bei Programmen" (Self-reproduction of programs)
at the University of Dortmund.[7] In his work Kraus postulated that computer programs can behave in a way similar
to biological viruses.
In 1984 Fred Cohen from the University of Southern California wrote his paper "Computer Viruses - Theory and
Experiments".[8] It was the first paper to explicitly call a self-reproducing program a "virus"; a term introduced by
his mentor Leonard Adleman.
An article that describes "useful virus functionalities" was published by J.B. Gunn under the title "Use of virus
functions to provide a virtual APL interpreter under user control" in 1984.[9]
Science Fiction
The Terminal Man, a science fiction novel by Michael Crichton (1972), told (as a sideline story) of a computer with
telephone modem dialing capability, which had been programmed to randomly dial phone numbers until it hit a
modem that is answered by another computer. It then attempted to program the answering computer with its own
program, so that the second computer would also begin dialing random numbers, in search of yet another computer
to program. The program is assumed to spread exponentially through susceptible computers.
The actual term 'virus' was first used in David Gerrold's 1972 novel, When HARLIE Was One. In that novel, a
sentient computer named HARLIE writes viral software to retrieve damaging personal information from other
computers to blackmail the man who wants to turn him off.
Virus programs
The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s.[10] Creeper
was an experimental self-replicating program written by Bob Thomas at BBN Technologies in 1971.[11] Creeper
used the ARPANET to infect DEC PDP-10 computers running the TENEX operating system.[12] Creeper gained
access via the ARPANET and copied itself to the remote system where the message, "I'm the creeper, catch me if
you can!" was displayed. The Reaper program was created to delete Creeper.[13]
A program called "Elk Cloner" was the first computer virus to appear "in the wild" — that is, outside the single
computer or lab where it was created.[14] Written in 1981 by Richard Skrenta, it attached itself to the Apple DOS 3.3
operating system and spread via floppy disk.[14] [15] This virus, created as a practical joke when Skrenta was still in
high school, was injected in a game on a floppy disk. On its 50th use the Elk Cloner virus would be activated,
infecting the computer and displaying a short poem beginning "Elk Cloner: The program with a personality."
The first PC virus in the wild was a boot sector virus dubbed (c)Brain,[16] created in 1986 by the Farooq Alvi Brothers in Lahore, Pakistan, reportedly to deter piracy of the software they had written.[17] However, analysts have claimed that the Ashar virus, a variant of Brain, possibly predated it based on code within the virus.
Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks.
In the early days of the personal computer, many users regularly exchanged information and programs on floppies.
Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot
sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently. PCs
of the era would attempt to boot first from a floppy if one had been left in the drive. Until floppy disks fell out of use,
this was the most successful infection strategy and boot sector viruses were the most common in the wild for many
years.[1]
Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant
increase in BBS, modem use, and software sharing. Bulletin board-driven software sharing contributed directly to
the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and
bootleg software were equally common vectors for viruses on BBS's.
Macro viruses have become common since the mid-1990s. Most of these viruses are written in the scripting
languages for Microsoft programs such as Word and Excel and spread throughout Microsoft Office by infecting
documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected e-mail, those that did exploited the Microsoft Outlook COM interface.
Some old versions of Microsoft Word allow macros to replicate themselves with additional blank lines. If two macro
viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a
"mating" of the two and would likely be detected as a virus unique from the "parents".[18]
A virus may also send a web address link as an instant message to all the contacts on an infected machine. If the
recipient, thinking the link is from a friend (a trusted source) follows the link to the website, the virus hosted at the
site may be able to infect this new computer and continue propagating.
Viruses that spread using cross-site scripting were first reported in 2002,[19] and were academically demonstrated in 2005.[20] There have been multiple instances of cross-site scripting viruses in the wild, exploiting websites such as MySpace and Yahoo.
Infection strategies
In order to replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many
viruses attach themselves to executable files that may be part of legitimate programs. If a user attempts to launch an
infected program, the virus' code may be executed simultaneously. Viruses can be divided into two types based on
their behavior when they are executed. Nonresident viruses immediately search for other hosts that can be infected,
infect those targets, and finally transfer control to the application program they infected. Resident viruses do not
search for hosts when they are started. Instead, a resident virus loads itself into memory on execution and transfers
control to the host program. The virus stays active in the background and infects new hosts when those files are
accessed by other programs or the operating system itself.
Nonresident viruses
Nonresident viruses can be thought of as consisting of a finder module and a replication module. The finder module
is responsible for finding new files to infect. For each new executable file the finder module encounters, it calls the
replication module to infect that file.
Resident viruses
Resident viruses contain a replication module that is similar to the one that is employed by nonresident viruses. This
module, however, is not called by a finder module. The virus loads the replication module into memory when it is
executed instead and ensures that this module is executed each time the operating system is called to perform a
certain operation. The replication module can be called, for example, each time the operating system executes a file.
In this case the virus infects every suitable program that is executed on the computer.
Resident viruses are sometimes subdivided into a category of fast infectors and a category of slow infectors. Fast
infectors are designed to infect as many files as possible. A fast infector, for instance, can infect every potential host
file that is accessed. This poses a special problem when using anti-virus software, since a virus scanner will access
every potential host file on a computer when it performs a system-wide scan. If the virus scanner fails to notice that
such a virus is present in memory the virus can "piggy-back" on the virus scanner and in this way infect all files that
are scanned. Fast infectors rely on their fast infection rate to spread. The disadvantage of this method is that infecting
many files may make detection more likely, because the virus may slow down a computer or perform many
suspicious actions that can be noticed by anti-virus software. Slow infectors, on the other hand, are designed to infect
hosts infrequently. Some slow infectors, for instance, only infect files when they are copied. Slow infectors are
designed to avoid detection by limiting their actions: they are less likely to slow down a computer noticeably and
will, at most, infrequently trigger anti-virus software that detects suspicious behavior by programs. The slow infector
approach, however, does not seem very successful.
Vectors and hosts
Viruses have targeted various types of transmission media or hosts. This list is not exhaustive:
• Binary executable files (such as COM files and EXE files in MS-DOS, Portable Executable files in Microsoft
Windows, the Mach-O format in OSX, and ELF files in Linux)
• Volume Boot Records of floppy disks and hard disk partitions
• The master boot record (MBR) of a hard disk
• General-purpose script files (such as batch files in MS-DOS and Microsoft Windows, VBScript files, and shell
script files on Unix-like platforms).
• Application-specific script files (such as Telix-scripts)
• System specific autorun script files (such as Autorun.inf file needed by Windows to automatically run software
stored on USB Memory Storage Devices).
• Documents that can contain macros (such as Microsoft Word documents, Microsoft Excel spreadsheets, AmiPro
documents, and Microsoft Access database files)
• Cross-site scripting vulnerabilities in web applications (see XSS Worm)
• Arbitrary computer files. An exploitable buffer overflow, format string, race condition or other exploitable bug in
a program which reads the file could be used to trigger the execution of code hidden within it. Most bugs of this
type can be made more difficult to exploit in computer architectures with protection features such as an execute
disable bit and/or address space layout randomization.
Like HTML, PDFs may link to malicious code; PDF files can also themselves be infected with malicious code.
In operating systems that use file extensions to determine program associations (such as Microsoft Windows), the
extensions may be hidden from the user by default. This makes it possible to create a file that is of a different type
than it appears to the user. For example, an executable may be created named "picture.png.exe", in which the user
sees only "picture.png" and therefore assumes that this file is an image and most likely is safe.
An additional method is to generate the virus code from parts of existing operating system files by using the
CRC16/CRC32 data. The initial code can be quite small (tens of bytes) and unpack a fairly large virus. This is
analogous to a biological "prion" in the way it works but is vulnerable to signature based detection. This attack has
not yet been seen "in the wild".
Methods to avoid detection
In order to avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially
on the MS-DOS platform, make sure that the "last modified" date of a host file stays the same when the file is
infected by the virus. This approach does not fool anti-virus software, however, especially those programs which maintain and date cyclic redundancy checks (CRCs) on file changes.
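The checksum idea can be sketched with CRC32 from Python's standard zlib module. This is only an illustration: CRC32 reliably catches accidental or naive modifications, but it is not cryptographically secure, so real products use stronger hashes.

```python
import zlib

def crc32_of(data):
    """Return the CRC32 checksum of the given bytes as an unsigned int."""
    return zlib.crc32(data) & 0xFFFFFFFF

# Record a baseline checksum for a (simulated) host file.
original = b"MZ original program bytes"
baseline = crc32_of(original)

# Later, recompute and compare: any change to the contents (such as
# appended virus code) alters the CRC, even if the "last modified"
# date has been reset by the virus.
tampered = original + b" appended virus code"
print(crc32_of(original) == baseline)  # True
print(crc32_of(tampered) == baseline)  # False
```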
Some viruses can infect files without increasing their sizes or damaging the files. They accomplish this by
overwriting unused areas of executable files. These are called cavity viruses. For example, the CIH virus, or
Chernobyl Virus, infects Portable Executable files. Because those files have many empty gaps, the virus, which was
1 KB in length, did not add to the size of the file.
Some viruses try to avoid detection by killing the tasks associated with antivirus software before it can detect them.
As computers and operating systems grow larger and more complex, old hiding techniques need to be updated or
replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit
permission for every kind of file access.
Avoiding bait files and other undesirable hosts
A virus needs to infect hosts in order to spread further. In some cases, it might be a bad idea to infect a host program.
For example, many anti-virus programs perform an integrity check of their own code. Infecting such programs will
therefore increase the likelihood that the virus is detected. For this reason, some viruses are programmed not to infect
programs that are known to be part of anti-virus software. Another type of host that viruses sometimes avoid are bait
files. Bait files (or goat files) are files that are specially created by anti-virus software, or by anti-virus professionals
themselves, to be infected by a virus. These files can be created for various reasons, all of which are related to the
detection of the virus:
• Anti-virus professionals can use bait files to take a sample of a virus (i.e. a copy of a program file that is infected
by the virus). It is more practical to store and exchange a small, infected bait file, than to exchange a large
application program that has been infected by the virus.
• Anti-virus professionals can use bait files to study the behavior of a virus and evaluate detection methods. This is
especially useful when the virus is polymorphic. In this case, the virus can be made to infect a large number of
bait files. The infected files can be used to test whether a virus scanner detects all versions of the virus.
• Some anti-virus software employs bait files that are accessed regularly. When these files are modified, the
anti-virus software warns the user that a virus is probably active on the system.
Since bait files are used to detect the virus, or to make detection possible, a virus can benefit from not infecting
them. Viruses typically do this by avoiding suspicious programs, such as small program files or programs that
contain certain patterns of 'garbage instructions'.
A related strategy to make baiting difficult is sparse infection. Sometimes, sparse infectors do not infect a host file
that would be a suitable candidate for infection in other circumstances. For example, a virus can decide on a random
basis whether to infect a file or not, or a virus can only infect host files on particular days of the week.
Stealth
Some viruses try to trick antivirus software by intercepting its requests to the operating system. A virus can hide itself by intercepting the antivirus software's request to read an infected file: the virus handles the request itself, instead of the OS, and returns an uninfected version of the file, so that the file appears "clean". Modern antivirus software employs various techniques to counter stealth mechanisms of viruses. The
only completely reliable method to avoid stealth is to boot from a medium that is known to be clean.
Self-modification
Most modern antivirus programs try to find virus-patterns inside ordinary programs by scanning them for so-called
virus signatures. A signature is a characteristic byte-pattern that is part of a certain virus or family of viruses. If a
virus scanner finds such a pattern in a file, it notifies the user that the file is infected. The user can then delete, or (in
some cases) "clean" or "heal" the infected file. Some viruses employ techniques that make detection by means of
signatures difficult but probably not impossible. These viruses modify their code on each infection. That is, each
infected file contains a different variant of the virus.
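Signature matching can be sketched as a minimal scanner. The only "signature" below is the harmless EICAR test string, a standard pattern that anti-virus products deliberately detect for testing; a self-modifying virus defeats exactly this kind of fixed-pattern match, which is why such viruses are hard to detect by signature.

```python
# Known byte patterns mapped to names. The EICAR test string is used
# industry-wide for testing scanners and is completely harmless.
SIGNATURES = {
    "EICAR-Test-File":
        rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*",
}

def scan(data):
    """Return the names of all known signatures found in the given bytes."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

print(scan(b"perfectly ordinary data"))  # []
```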
Encryption with a variable key
A more advanced method is the use of simple encryption to encipher the virus. In this case, the virus consists of a
small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for
each infected file, the only part of the virus that remains constant is the decrypting module, which would (for
example) be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it
can still detect the decrypting module, which makes indirect detection of the virus possible. Since these would be symmetric keys, stored on the infected host, it is entirely possible to decrypt the final virus, but this is usually unnecessary, since self-modifying code is rare enough that its presence alone may be reason for virus scanners to at least
flag the file as suspicious.
An old but compact method of encryption involves XORing each byte in a virus with a constant, so that the same exclusive-or operation only has to be repeated for decryption. It is suspicious for code to modify itself, so the code that performs the encryption/decryption may itself be part of the signature in many virus definitions.
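The symmetry of this scheme is easy to demonstrate: because XOR is its own inverse, applying the same one-byte key twice restores the original bytes, so a single routine serves for both encryption and decryption. A minimal sketch, not taken from any particular virus:

```python
def xor_with_key(data, key):
    """XOR every byte of data with a single-byte key (0-255)."""
    return bytes(b ^ key for b in data)

payload = b"example payload"
encoded = xor_with_key(payload, 0x5A)

print(encoded != payload)                      # True: the bytes are scrambled
print(xor_with_key(encoded, 0x5A) == payload)  # True: a second XOR restores them
```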
Polymorphic code
Polymorphic code was the first technique that posed a serious threat to virus scanners. Just like regular encrypted
viruses, a polymorphic virus infects files with an encrypted copy of itself, which is decoded by a decryption module.
In the case of polymorphic viruses, however, this decryption module is also modified on each infection. A
well-written polymorphic virus therefore has no parts which remain identical between infections, making it very
difficult to detect directly using signatures. Antivirus software can detect it by decrypting the viruses using an
emulator, or by statistical pattern analysis of the encrypted virus body. To enable polymorphic code, the virus has to
have a polymorphic engine (also called mutating engine or mutation engine) somewhere in its encrypted body. See
Polymorphic code for technical detail on how such engines operate.[21]
Some viruses employ polymorphic code in a way that constrains the mutation rate of the virus significantly. For
example, a virus can be programmed to mutate only slightly over time, or it can be programmed to refrain from
mutating when it infects a file on a computer that already contains copies of the virus. The advantage of using such
slow polymorphic code is that it makes it more difficult for antivirus professionals to obtain representative samples
of the virus, because bait files that are infected in one run will typically contain identical or similar samples of the
virus. This will make it more likely that the detection by the virus scanner will be unreliable, and that some instances
of the virus may be able to avoid detection.
Metamorphic code
To avoid being detected by emulation, some viruses rewrite themselves completely each time they are to infect new
executables. Viruses that utilize this technique are said to be metamorphic. To enable metamorphism, a
metamorphic engine is needed. A metamorphic virus is usually very large and complex. For example, W32/Simile
consisted of over 14000 lines of Assembly language code, 90% of which is part of the metamorphic engine.[22] [23]
Vulnerability and countermeasures
The vulnerability of operating systems to viruses
Just as genetic diversity in a population decreases the chance of a single disease wiping out a population, the
diversity of software systems on a network similarly limits the destructive potential of viruses. This became a
particular concern in the 1990s, when Microsoft gained market dominance in desktop operating systems and office
suites. The users of Microsoft software (especially networking software such as Microsoft Outlook and Internet
Explorer) are especially vulnerable to the spread of viruses. Microsoft software is targeted by virus writers due to
its desktop dominance, and is often criticized for including many errors and holes for virus writers to exploit.
Integrated and non-integrated Microsoft applications (such as Microsoft Office) and applications with scripting
languages with access to the file system (for example Visual Basic Script (VBS), and applications with networking
features) are also particularly vulnerable.
Although Windows is by far the most popular target operating system for virus writers, viruses also exist on other
platforms. Any operating system that allows third-party programs to run can theoretically run viruses. Some
operating systems are less secure than others. Unix-based operating systems (and NTFS-aware applications on
Windows NT based platforms) only allow their users to run executables within their own protected memory space.
An Internet based experiment revealed that there were cases when people willingly pressed a particular button to
download a virus. Security analyst Didier Stevens ran a half-year advertising campaign on Google AdWords that
said "Is your PC virus-free? Get it infected here!". The result was 409 clicks.[24] [25]
As of 2006, there are relatively few security exploits targeting Mac OS X (with a Unix-based file system and
kernel).[26] The number of viruses for the older Apple operating systems, known as Mac OS Classic, varies greatly
from source to source, with Apple stating that there are only four known viruses, and independent sources stating
there are as many as 63 viruses. Many Mac OS Classic viruses targeted the HyperCard authoring environment. The
difference in virus vulnerability between Macs and Windows is a chief selling point, one that Apple uses in its Get
a Mac advertising.[27] In January 2009, Symantec announced the discovery of a trojan that targets Macs.[28] This
discovery did not gain much coverage until April 2009.[28]
While Linux, and Unix in general, has always natively prevented normal users from making changes to the
operating system environment, Windows users generally run with such access. This difference has persisted partly
due to the widespread use of administrator accounts in contemporary versions such as Windows XP. In 1997, when a virus for Linux was
released – known as "Bliss" – leading antivirus vendors issued warnings that Unix-like systems could fall prey to
viruses just like Windows.[29] The Bliss virus may be considered characteristic of viruses – as opposed to worms –
on Unix systems. Bliss requires that the user run it explicitly, and it can only infect programs that the user has the
access to modify. Unlike Windows users, most Unix users do not log in as an administrator user except to install or
configure software; as a result, even if a user ran the virus, it could not harm their operating system. The Bliss virus
never became widespread, and remains chiefly a research curiosity. Its creator later posted the source code to Usenet,
allowing researchers to see how it worked.[30]
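The permission argument behind the Bliss example can be sketched in a few lines: a program running with ordinary user rights can only modify files the user can write, which on a typical Unix system excludes the operating system's own binaries. The `infectable` helper name is hypothetical, chosen for illustration:

```python
import os

def infectable(paths):
    """Hypothetical helper: which of these paths could a program running
    as the current user actually modify (and thus infect)?"""
    return [p for p in paths if os.access(p, os.W_OK)]

# An unprivileged user normally cannot write to system directories,
# so a virus run by that user cannot touch OS binaries there.
print(infectable(["/usr/bin", os.path.expanduser("~")]))
```

Note that `os.access` reflects the real user's permissions at the moment of the call; it is a sketch of the concept, not a security check suitable for production code.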
The role of software development
Because software is often designed with security features to prevent unauthorized use of system resources, many
viruses must exploit software bugs in a system or application to spread. Software development strategies that
produce large numbers of bugs will generally also produce potential exploits.
Anti-virus software and other preventive measures
Many users install anti-virus software that can detect and eliminate known viruses after the computer downloads or
runs the executable. There are two common methods that an anti-virus software application uses to detect viruses.
The first, and by far the most common method of virus detection is using a list of virus signature definitions. This
works by examining the content of the computer's memory (its RAM, and boot sectors) and the files stored on fixed
or removable drives (hard drives, floppy drives), and comparing those files against a database of known virus
"signatures". The disadvantage of this detection method is that users are only protected from viruses that pre-date
their last virus definition update. The second method is to use a heuristic algorithm to find viruses based on common
behaviors. This method has the ability to detect novel viruses that anti-virus security firms have yet to create a
signature for.
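A minimal sketch of the signature-based method described above, assuming an invented signature database (the byte patterns and names below are placeholders, not real virus definitions):

```python
# Signature-based scanning: compare file contents against a database of
# known byte patterns. These signatures are invented for illustration.
SIGNATURES = {
    "Demo.TestSig.A": bytes.fromhex("deadbeefcafe"),
    "Demo.TestSig.B": b"EICAR-STYLE-MARKER",
}

def scan_bytes(data: bytes):
    """Return the names of any known signatures found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

def scan_file(path):
    with open(path, "rb") as f:
        return scan_bytes(f.read())

print(scan_bytes(b"...EICAR-STYLE-MARKER..."))   # → ['Demo.TestSig.B']
```

The limitation noted in the text is visible here: a virus whose bytes are absent from `SIGNATURES` (anything newer than the last definition update) passes the scan silently, which is what heuristic detection tries to compensate for.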
Some anti-virus programs are able to scan opened files in addition to sent and received e-mails "on the fly" in a
similar manner. This practice is known as "on-access scanning". Anti-virus software does not change the underlying
capability of host software to transmit viruses. Users must update their software regularly to patch security holes.
Anti-virus software also needs to be regularly updated in order to recognize the latest threats.
One may also minimize the damage done by viruses by making regular backups of data (and the operating systems)
on different media, that are either kept unconnected to the system (most of the time), read-only or not accessible for
other reasons, such as using different file systems. This way, if data is lost through a virus, one can start again using
the backup (which should preferably be recent).
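One way to gain confidence that a backup has not been silently altered, for instance by malware touching the backup medium, is to record cryptographic checksums of each file at backup time and recompute them before restoring. The manifest layout here is a sketch, not a standard format:

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of file contents, stored alongside the backup."""
    return hashlib.sha256(data).hexdigest()

original = b"important document"
manifest = {"report.txt": checksum(original)}   # written at backup time

# Before restoring, recompute and compare; a mismatch means the backup
# copy was modified after it was made.
restored = b"important document"
print(manifest["report.txt"] == checksum(restored))   # → True
```

This only detects tampering after the backup was made; a file that was already infected when backed up will verify cleanly, which is why the text recommends inspecting removable-media backups before restoration.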
If a backup session on optical media like CD and DVD is closed, it becomes read-only and can no longer be affected
by a virus (so long as a virus or infected file was not copied onto the CD/DVD). Likewise, an operating system on a
bootable CD can be used to start the computer if the installed operating systems become unusable. Backups on
removable media must be carefully inspected before restoration. The Gammima virus, for example, propagates via
removable flash drives.[31] [32]
Recovery methods
Once a computer has been compromised by a virus, it is usually unsafe to continue using the same computer without
completely reinstalling the operating system. However, there are a number of recovery options that exist after a
computer has a virus. These actions depend on severity of the type of virus.
Virus removal
One possibility on Windows Me, Windows XP, Windows Vista and Windows 7 is a tool known as System Restore,
which restores the registry and critical system files to a previous checkpoint. Often a virus will cause a system to
hang, and a subsequent hard reboot will render a system restore point from the same day corrupt. Restore points from
previous days should work provided the virus is not designed to corrupt the restore files or also exists in previous
restore points.[33] Some viruses, however, disable System Restore and other important tools such as Task Manager
and Command Prompt. An example of a virus that does this is CiaDoor. However, many such viruses can be
removed by rebooting the computer, entering Windows safe mode, and then using system tools.
Administrators have the option to disable such tools from limited users for various reasons (for example, to reduce
potential damage from and the spread of viruses). A virus can modify the registry to do the same even if the
Administrator is controlling the computer; it blocks all users including the administrator from accessing the tools.
The message "Task Manager has been disabled by your administrator" may be displayed, even to the administrator.
Users running a Microsoft operating system can access Microsoft's website to run a free scan, provided they have
their 20-digit registration number. Many websites run by anti-virus software companies provide free online virus
scanning, with limited cleaning facilities (the purpose of the sites is to sell anti-virus products). Some websites allow
a single suspicious file to be checked by many antivirus programs in one operation.
Operating system reinstallation
Reinstalling the operating system is another approach to virus removal. It involves either reformatting the computer's
hard drive and installing the OS and all programs from original media, or restoring the entire partition with a clean
backup image. User data can be restored by booting from a Live CD, or putting the hard drive into another computer
and booting from its operating system with great care not to infect the second computer by executing any infected
programs on the original drive; and once the system has been restored precautions must be taken to avoid reinfection
from a restored executable file.
These methods are simple to do, may be faster than disinfecting a computer, and remove any resident malware. If
the operating system and programs must be reinstalled from scratch, the time and effort to reinstall, reconfigure,
and restore user preferences must be taken into account. Restoring from an image is much faster and restores the
exact configuration to the state it was in when the image was made, provided the image itself predates the infection.
See also
• Adware
• Antivirus software
• Computer insecurity
• Computer worm
• Crimeware
• Cryptovirology
• Linux malware
• List of computer virus hoaxes
• List of computer viruses
• List of computer viruses (all)
• Malware
• Mobile viruses
• Multipartite virus
• Spam
• Spyware
• Trojan horse (computing)
• Virus hoax
Further reading
• Mark Russinovich, Advanced Malware Cleaning video [37], Microsoft TechEd: IT Forum, November 2006
• Szor, Peter (2005). The Art of Computer Virus Research and Defense. Boston: Addison-Wesley.
ISBN 0321304543.
• Jussi Parikka (2007) "Digital Contagions. A Media Archaeology of Computer Viruses", Peter Lang: New York.
Digital Formations-series. ISBN 978-0-8204-8837-0
• Burger, Ralf, 1991 Computer Viruses and Data Protection
• Ludwig, Mark, 1996 The Little Black Book of Computer Viruses [34]
• Ludwig, Mark, 1995 The Giant Black Book of Computer Viruses [35]
• Ludwig, Mark, 1993 Computer Viruses, Artificial Life and Evolution [36]
External links
• Viruses [37] at the Open Directory Project
• US Govt CERT (Computer Emergency Readiness Team) site [38]
• 'Computer Viruses - Theory and Experiments' - the original paper published on the topic [39]
• How Computer Viruses Work [40]
• A Brief History of PC Viruses [41] (early) by Dr. Alan Solomon
• Are 'Good' Computer Viruses Still a Bad Idea? [42]
• Protecting your Email from Viruses and Other MalWare [43]
• Hacking Away at the Counterculture [44] by Andrew Ross
• A Virus in Info-Space [45] by Tony Sampson
• Dr Aycock's Bad Idea [46] by Tony Sampson
• Digital Monsters, Binary Aliens [47] by Jussi Parikka
• The Universal Viral Machine [48] by Jussi Parikka
• Hypervirus: A Clinical Report [49] by Thierry Bardini
• The Cross-site Scripting Virus [50]
• The Virus Underground [51]
• Microsoft conferences about IT Security - videos on demand [52] (Video)
References
[1] Dr. Solomon's Virus Encyclopedia, 1995, ISBN 1897661002, Abstract at http://vx.netlux.org/lib/aas10.html
[2] Jussi Parikka (2007) "Digital Contagions. A Media Archaeology of Computer Viruses", Peter Lang: New York. Digital Formations-series. ISBN 978-0-8204-8837-0, p. 19
[3] http://www.bartleby.com/61/97/C0539700.html
[4] http://www.actlab.utexas.edu/~aviva/compsec/virus/whatis.html
[5] von Neumann, John (1966). "Theory of Self-Reproducing Automata" (http://cba.mit.edu/events/03.11.ASE/docs/VonNeumann.pdf). Essays on Cellular Automata (University of Illinois Press): 66–87. Retrieved June 10, 2010.
[6] Risak, Veith (1972), "Selbstreproduzierende Automaten mit minimaler Informationsübertragung" (http://www.cosy.sbg.ac.at/~risak/bilder/selbstrep.html), Zeitschrift für Maschinenbau und Elektrotechnik
[7] Kraus, Jürgen (February 1980), Selbstreproduktion bei Programmen (http://vx.netlux.org/lib/pdf/Selbstreproduktion bei programmen.pdf)
[8] Cohen, Fred (1984), Computer Viruses - Theory and Experiments (http://all.net/books/virus/index.html)
[9] Gunn, J.B. (June 1984). "Use of virus functions to provide a virtual APL interpreter under user control" (http://portal.acm.org/ft_gateway.cfm?id=801093&type=pdf&coll=GUIDE&dl=GUIDE&CFID=93800866&CFTOKEN=49244432). ACM SIGAPL APL Quote Quad archive (ACM New York, NY, USA) 14 (4): 163–168. ISSN 0163-6006.
[10] "Virus list" (http://www.viruslist.com/en/viruses/encyclopedia?chapter=153310937). Retrieved 2008-02-07.
[11] Thomas Chen, Jean-Marc Robert (2004). "The Evolution of Viruses and Worms" (http://vx.netlux.org/lib/atc01.html). Retrieved 2009-02-16.
[12] Jussi Parikka (2007) "Digital Contagions. A Media Archaeology of Computer Viruses", Peter Lang: New York. Digital Formations-series. ISBN 978-0-8204-8837-0, p. 50
[13] See page 86 of Computer Security Basics (http://books.google.co.uk/books?id=BtB1aBmLuLEC&printsec=frontcover&source=gbs_summary_r&cad=0#PPA86,M1) by Deborah Russell and G. T. Gangemi. O'Reilly, 1991. ISBN 0937175714
[14] Anick Jesdanun (1 September 2007). "School prank starts 25 years of security woes" (http://www.cnbc.com/id/20534084/). CNBC. Retrieved 2010-01-07.
[15] "The anniversary of a nuisance" (http://www.cnn.com/2007/TECH/09/03/computer.virus.ap/).
[16] Boot sector virus repair (http://antivirus.about.com/od/securitytips/a/bootsectorvirus.htm)
[17] http://www.youtube.com/watch?v=m58MqJdWgDc
[18] Vesselin Bontchev. "Macro Virus Identification Problems" (http://www.people.frisk-software.com/~bontchev/papers/macidpro.html). FRISK Software International.
[19] Berend-Jan Wever. "XSS bug in hotmail login page" (http://seclists.org/bugtraq/2002/Oct/119).
[20] Wade Alcorn. "The Cross-site Scripting Virus" (http://www.bindshell.net/papers/xssv/).
[21] http://www.virusbtn.com/resources/glossary/polymorphic_virus.xml
[22] Perriot, Fredrick; Peter Ferrie and Peter Szor (May 2002). "Striking Similarities" (http://securityresponse.symantec.com/avcenter/reference/simile.pdf) (PDF). Retrieved September 9, 2007.
[23] http://www.virusbtn.com/resources/glossary/metamorphic_virus.xml
[24] Need a computer virus? - download now (http://www.infoniac.com/offbeat-news/computervirus.html)
[25] http://blog.didierstevens.com/2007/05/07/is-your-pc-virus-free-get-it-infected-here/
[26] "Malware Evolution: Mac OS X Vulnerabilities 2005-2006" (http://www.viruslist.com/en/analysis?pubid=191968025). Kaspersky Lab. 2006-07-24. Retrieved August 19, 2006.
[27] Apple - Get a Mac (http://www.apple.com/getamac)
[28] Sutter, John D. (22 April 2009). "Experts: Malicious program targets Macs" (http://www.cnn.com/2009/TECH/04/22/first.mac.botnet/index.html). CNN.com. Retrieved 24 April 2009.
[29] McAfee. "McAfee discovers first Linux virus" (http://math-www.uni-paderborn.de/~axel/bliss/mcafee_press.html). news article.
[30] Axel Boldt. "Bliss, a Linux "virus"" (http://math-www.uni-paderborn.de/~axel/bliss/). news article.
[31] "Symantec Security Summary — W32.Gammima.AG." http://www.symantec.com/security_response/writeup.jsp?docid=2007-082706-1742-99
[32] "Yahoo Tech: Viruses! In! Space!" http://tech.yahoo.com/blogs/null/103826
[33] "Symantec Security Summary — W32.Gammima.AG and removal details." http://www.symantec.com/security_response/writeup.jsp?docid=2007-082706-1742-99&tabid=3
[34] http://vx.netlux.org/lib/vml00.html
[35] http://vx.netlux.org/lib/vml01.html
[36] http://vx.netlux.org/lib/vml02.html
[37] http://www.dmoz.org/Computers/Security/Malicious_Software/Viruses//
[38] http://www.us-cert.gov/
[39] http://all.net/books/virus/index.html
[40] http://www.howstuffworks.com/virus.htm
[41] http://vx.netlux.org/lib/aas14.html
[42] http://vx.netlux.org/lib/avb02.html
[43] http://www.windowsecurity.com/articles/Protecting_Email_Viruses_Malware.html
[44] http://www3.iath.virginia.edu/pmc/text-only/issue.990/ross-1.990
[45] http://journal.media-culture.org.au/0406/07_Sampson.php
[46] http://journal.media-culture.org.au/0502/02-sampson.php
[47] http://journal.fibreculture.org/issue4/issue4_parikka.html
[48] http://www.ctheory.net/articles.aspx?id=500
[49] http://www.ctheory.net/articles.aspx?id=504
[50] http://www.bindshell.net/papers/xssv/
[51] http://www.cse.msu.edu/~cse825/virusWriter.htm
[52] http://www.microsoft.com/emea/itsshowtime/result_search.aspx?track=1&x=37&y=7
Computer worm
A computer worm is a self-replicating malware computer program. It
uses a computer network to send copies of itself to other nodes
(computers on the network) and it may do so without any user
intervention. This is due to security shortcomings on the target
computer. Unlike a virus, it does not need to attach itself to an existing
program. Worms almost always cause at least some harm to the
network, if only by consuming bandwidth, whereas viruses almost
always corrupt or modify files on a targeted computer.
Payloads
Many worms that have been created are only designed to spread, and
don't attempt to alter the systems they pass through. However, as the
Morris worm and Mydoom showed, the network traffic and other
unintended effects can often cause major disruption. A "payload" is
code designed to do more than spread the worm - it might delete files
on a host system (e.g., the ExploreZip worm), encrypt files in a
cryptoviral extortion attack, or send documents via e-mail. A very
common payload for worms is to install a backdoor in the infected
computer to allow the creation of a "zombie" computer under control
of the worm author. Networks of such machines are often referred to as
botnets and are very commonly used by spam senders for sending junk
email or to cloak their website's address.[1] Spammers are therefore
thought to be a source of funding for the creation of such worms,[2] [3]
and the worm writers have been caught selling lists of IP addresses of
infected machines.[4] Others try to blackmail companies with
threatened DoS attacks.[5]
Morris Worm source code disk
Spread of Conficker worm.
Backdoors can be exploited by other malware, including worms. Examples include Doomjuice, which spreads better
using the backdoor opened by Mydoom, and at least one instance of malware taking advantage of the rootkit and
backdoor installed by the Sony/BMG DRM software utilized by millions of music CDs prior to late 2005.
Computer worm
Worms with good intent
Beginning with the very first research into worms at Xerox PARC, there have been attempts to create useful worms.
The Nachi family of worms, for example, tried to download and install patches from Microsoft's website to fix
vulnerabilities in the host system – by exploiting those same vulnerabilities. In practice, although this may have
made these systems more secure, it generated considerable network traffic, rebooted the machine in the course of
patching it, and did its work without the consent of the computer's owner or user.
Some worms, such as XSS worms, have been written for research to determine the factors of how worms spread,
such as social activity and change in user behavior, while other worms are little more than a prank, such as one that
sends the popular image macro of an owl with the phrase "O RLY?" to a print queue in the infected computer.
Most security experts regard all worms as malware, whatever their payload or their writers' intentions.
Protecting against dangerous computer worms
Worms spread by exploiting vulnerabilities in operating systems. Vendors with security problems supply regular
security updates[6] (see "Patch Tuesday"), and if these are installed to a machine then the majority of worms are
unable to spread to it. If a vendor acknowledges a vulnerability, but has yet to release a security update to patch it, a
zero day exploit is possible. However, these are relatively rare.
Users need to be wary of opening unexpected email,[7] and should not run attached files or programs, or visit web
sites that are linked to such emails. However, as with the ILOVEYOU worm, and with the increased growth and
efficiency of phishing attacks, it remains possible to trick the end-user into running malicious code.
Anti-virus and anti-spyware software are helpful, but must be kept up-to-date with new pattern files at least every
few days. The use of a firewall is also recommended.
In the April-June, 2008, issue of IEEE Transactions on Dependable and Secure Computing, computer scientists
describe a potential new way to combat internet worms. The researchers discovered how to contain the kind of worm
that scans the Internet randomly, looking for vulnerable hosts to infect. They found that the key is for software to
monitor the number of scans that machines on a network send out. When a machine starts sending out too many
scans, it is a sign that it has been infected, allowing administrators to take it off line and check it for viruses.[8] [9]
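The containment idea the researchers describe, flagging hosts that contact an unusually large number of distinct destinations per interval, can be sketched as follows (the threshold value and function name are assumptions for illustration; real deployments tune these carefully):

```python
from collections import defaultdict

SCAN_THRESHOLD = 100   # assumed per-interval limit; real values vary

def flag_scanners(connection_log):
    """connection_log: iterable of (source_host, destination) pairs seen in
    one interval. Returns hosts contacting more distinct destinations than
    the threshold -- the profile of a randomly scanning worm."""
    targets = defaultdict(set)
    for src, dst in connection_log:
        targets[src].add(dst)
    return [h for h, dsts in targets.items() if len(dsts) > SCAN_THRESHOLD]

log = [("10.0.0.5", f"192.0.2.{i}") for i in range(150)]   # scanning host
log += [("10.0.0.9", "192.0.2.1")] * 200                   # busy but focused
print(flag_scanners(log))   # → ['10.0.0.5']
```

Counting distinct destinations, rather than raw connections, is what separates a scanning worm from a legitimately busy server talking repeatedly to the same peers.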
Mitigation techniques
• TCP Wrapper/libwrap enabled network service daemons
• ACLs in routers and switches
• Packet-filters
• Nullrouting
History
The actual term 'worm' was first used in John Brunner's 1975 novel, The Shockwave Rider. In that novel, Nichlas
Haflinger designs and sets off a data-gathering worm in an act of revenge against the powerful men who run a
national electronic information web that induces mass conformity. "You have the biggest-ever worm loose in the net,
and it automatically sabotages any attempt to monitor it... There's never been a worm with that tough a head or that
long a tail!"[10]
On November 2, 1988, Robert Tappan Morris, a Cornell University computer science graduate student, unleashed
what became known as the Morris worm, disrupting perhaps 10% of the computers then on the Internet[11] [12] and
prompting the formation of the CERT Coordination Center[13] and Phage mailing list [14]. Morris himself became the
first person tried and convicted under the 1986 Computer Fraud and Abuse Act[15] .
See also
• Timeline of notable computer viruses and worms
• Computer virus
• Trojan horse (computing)
• Spam
• Computer surveillance
• XSS Worm
• Helpful worm
External links
• The Wildlist [16] - List of viruses and worms 'in the wild' (i.e. regularly encountered by anti-virus companies)
• Jose Nazario discusses worms [17] - Worms overview by a famous security researcher.
• Computer worm suspect in court [18]
• Vernalex.com's Malware Removal Guide [19] - Guide for understanding, removing and preventing worm infections
• John Shoch, Jon Hupp "The "Worm" Programs - Early Experience with a Distributed Computation" [20]
• RFC 1135 [21] The Helminthiasis of the Internet
• Surfing Safe [22] - A site providing tips/advice on preventing and removing viruses.
• Computer Worms Information [23]
• The Case for Using Layered Defenses to Stop Worms [24]
• Worm Evolution Paper from Digital Threat [25]
• Step by step instructions on removing computer viruses, spyware and trojans. [26]
References
[1] The Seattle Times: Business & Technology: E-mail viruses blamed as spam rises sharply (http://seattletimes.nwsource.com/html/businesstechnology/2001859752_spamdoubles18.html)
[2] Cloaking Device Made for Spammers (http://www.wired.com/news/business/0,1367,60747,00.html)
[3] http://www.channelnewsasia.com/stories/afp_world/view/68810/1/.html
[4] heise online - Uncovered: Trojans as Spam Robots (http://www.heise.de/english/newsticker/news/44879)
[5] BBC NEWS | Technology | Hacker threats to bookies probed (http://news.bbc.co.uk/1/hi/technology/3513849.stm)
[6] USN list | Ubuntu (http://www.ubuntu.com/usn)
[7] Information on the Nimda Worm (http://www.microsoft.com/technet/security/alerts/info/nimda.mspx)
[8] Sellke, SH., Shroff, NB., Bagchi, S (2008). Modeling and Automated Containment of Worms. IEEE Transactions on Dependable and Secure Computing. 5(2), 71-86 (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?isnumber=4509574&arnumber=4358715&count=10&index=3)
[9] Newswise: A New Way to Protect Computer Networks from Internet Worms (http://newswise.com/articles/view/541456/) Retrieved on June 5, 2008.
[10] John Brunner, The Shockwave Rider, New York, Ballantine Books, 1975
[11] The Submarine (http://www.paulgraham.com/submarine.html#f4n)
[12] During the Morris appeal process, the U.S. Court of Appeals estimated the cost of removing the virus from each installation was in the range of $200 - 53,000. Possibly based on these numbers, Harvard spokesman Clifford Stoll estimated the total economic impact was between $100,000 - 10,000,000. http://www.bs2.com/cvirus.htm#anchor111400
[13] Security of the Internet. CERT/CC (http://www.cert.org/encyc_article/tocencyc.html)
[14] http://securitydigest.org/phage/
[15] Dressler, J. Cases and Materials on Criminal Law, "United States v. Morris" ISBN 9780-314-17719-3
[16] http://www.wildlist.org
[17] http://www.securityfocus.com/print/columnists/347
[18] http://www.pc-news.org/computer-worm-suspect-in-court/virus-news
[19] http://www.vernalex.com/guides/malware/
[20] http://vx.netlux.org/lib/ajm01.html
[21] http://tools.ietf.org/rfc/rfc1135.txt
[22] http://www.surfing-safe.org/
[23] http://virusall.com/worms.shtml
[24] http://www.nsa.gov/snac/support/WORMPAPER.pdf
[25] http://www.digitalthreat.net/?p=17
[26] http://www.freecomputerrepairguide.com/
Exploit (computer security)
An exploit (from the same word in the French language, meaning "achievement" or "accomplishment") is a piece of
software, a chunk of data, or a sequence of commands that takes advantage of a bug, glitch or vulnerability in order to
cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic
(usually computerised). This frequently includes such things as gaining control of a computer system or allowing
privilege escalation or a denial of service attack.
Classification
There are several methods of classifying exploits. The most common is by how the exploit contacts the vulnerable
software. A 'remote exploit' works over a network and exploits the security vulnerability without any prior access to
the vulnerable system. A 'local exploit' requires prior access to the vulnerable system and usually increases the
privileges of the person running the exploit past those granted by the system administrator. Exploits against client
applications also exist, usually consisting of modified servers that send an exploit if accessed with the client
application. Exploits against client applications may also require some interaction with the user and thus may be
used in combination with social engineering methods; this is a common way for attackers to compromise computers and steal data.
Another classification is by the action against vulnerable system: unauthorized data access, arbitrary code execution,
denial of service.
Many exploits are designed to provide superuser-level access to a computer system. However, it is also possible to
use several exploits, first to gain low-level access, then to escalate privileges repeatedly until one reaches root.
Normally a single exploit can only take advantage of a specific software vulnerability. Often, when an exploit is
published, the vulnerability is fixed through a patch and the exploit becomes obsolete for newer versions of the
software. This is the reason why some blackhat hackers do not publish their exploits but keep them private to
themselves or other crackers. Such exploits are referred to as 'zero day exploits' and to obtain access to such exploits
is the primary desire of unskilled attackers, often nicknamed script kiddies.
Types
Exploits are commonly categorized and named by these criteria:
• The type of vulnerability they exploit (See the article on vulnerabilities for a list)
• Whether they need to be run on the same machine as the program that has the vulnerability (local) or can be run
on one machine to attack a program running on another machine (remote).
• The result of running the exploit (EoP, DoS, Spoofing, etc...)
See also
• Computer insecurity
• Computer security
• Computer virus
• Crimeware
• Offensive Security Exploit Database
• Metasploit
• Shellcode
Computer insecurity
Many current computer systems have only limited security precautions in place. This computer insecurity article
describes the current battlefield of computer security exploits and defenses. Please see the computer security article
for an alternative approach, based on security engineering principles.
Security and systems design
Many current real-world computer security efforts focus on external threats, and generally treat the computer system
itself as a trusted system. Some knowledgeable observers consider this to be a disastrous mistake, and point out that
this distinction is the cause of much of the insecurity of current computer systems — once an attacker has subverted
one part of a system without fine-grained security, he or she usually has access to most or all of the features of that
system. Because computer systems can be very complex, and cannot be guaranteed to be free of defects, this security
stance tends to produce insecure systems.
Financial cost
Serious financial damage has been caused by computer security breaches, but reliably estimating costs is quite
difficult. Figures in the billions of dollars have been quoted in relation to the damage caused by malware such as
computer worms like the Code Red worm, but such estimates may be exaggerated. However, other losses, such as
those caused by the compromise of credit card information, can be more easily determined, and they have been
substantial, as measured by millions of individual victims of identity theft each year in each of several nations, and
the severe hardship imposed on each victim, that can wipe out all of their finances, prevent them from getting a job,
plus be treated as if they were the criminal. Volumes of victims of phishing and other scams may not be known.
Individuals who have been infected with spyware or malware likely go through a costly and time-consuming process
of having their computer cleaned. Spyware is considered to be a problem specific to the various Microsoft Windows
operating systems; however, this can be partially explained by the fact that Microsoft controls a major share of the
PC market and thus represents the most prominent target.
Reasons
There are many similarities (yet many fundamental differences) between computer and physical security. Just like
real-world security, the motivations for breaches of computer security vary between attackers, sometimes called
hackers or crackers. Some are thrill-seekers or vandals (the kind often responsible for defacing web sites); similarly,
some web site defacements are done to make political statements. However, some attackers are highly skilled and
motivated with the goal of compromising computers for financial gain or espionage. An example of the latter is
Markus Hess (more diligent than skilled), who spied for the KGB and was ultimately caught because of the efforts of
Clifford Stoll, who wrote a memoir, The Cuckoo's Egg, about his experiences. For those seeking to prevent security
breaches, the first step is usually to attempt to identify what might motivate an attack on the system, how much the
continued operation and information security of the system are worth, and who might be motivated to breach it. The
precautions required for a home PC are very different from those for a bank's Internet banking system, and different again for a classified military network. Other computer security writers suggest that, since an attacker using a network
need know nothing about you or what you have on your computer, attacker motivation is inherently impossible to
determine beyond guessing. If true, blocking all possible attacks is the only plausible action to take.
Vulnerabilities
To understand the techniques for securing a computer system, it is important to first understand the various types of
"attacks" that can be made against it. These threats can typically be classified into one of these seven categories:
Exploits
An exploit (from the French word meaning "achievement" or "accomplishment") is a piece of
software, a chunk of data, or sequence of commands that take advantage of a software 'bug' or 'glitch' in order to
cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic
(usually computerized). This frequently includes such things as gaining control of a computer system or allowing
privilege escalation or a denial of service attack. Many development methodologies rely on testing to ensure the
quality of any code released; this process often fails to discover unusual potential exploits. The term "exploit"
generally refers to small programs designed to take advantage of a software flaw that has been discovered, either
remote or local. The code from the exploit program is frequently reused in trojan horses and computer viruses. In
some cases, a vulnerability can lie in certain programs' processing of a specific file type, such as a non-executable
media file. Some security web sites maintain lists of currently known unpatched vulnerabilities found in common
programs (see "External links" below).
Eavesdropping
Eavesdropping is the act of surreptitiously listening to a private conversation. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faint electromagnetic emissions generated by the hardware, a risk studied under the codename TEMPEST. The FBI's proposed Carnivore program was intended to act as a system of eavesdropping protocols built into the systems of Internet service providers.
Social engineering and human error
A computer system is no more secure than the human systems responsible for its operation. Malicious individuals
have regularly penetrated well-designed, secure computer systems by taking advantage of the carelessness of trusted
individuals, or by deliberately deceiving them, for example by sending messages claiming to be from the system administrator and asking for passwords. This deception is known as social engineering.
Denial of service attacks
Unlike other exploits, denial of service attacks are not used to gain unauthorized access or control of a system. They
are instead designed to render it unusable. Attackers can deny service to individual victims, for example by deliberately entering a wrong password three consecutive times and thus causing the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. These types of attack are, in practice, very hard to prevent, because the behavior of whole networks needs to be analyzed, not only the behavior of small pieces of code. Distributed denial of service (DDoS) attacks are common: a large number of compromised hosts (commonly referred to as "zombie computers"), controlled via a worm, Trojan horse, or backdoor exploit and organized into a botnet, are used to flood a target system with network requests, attempting to render it unusable through resource exhaustion. Another technique to exhaust victim resources is an attack amplifier, where the attacker takes advantage of poorly designed protocols on third-party machines, such as FTP or DNS, to instruct those hosts to launch the flood. There are also common vulnerabilities in applications that cannot be used to take control of a computer but merely make the target application malfunction or crash; this is known as a denial-of-service exploit.
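One common partial mitigation (not described in the text above) is to rate-limit each client with a token bucket, so a flood from any single host is shed while normal users are unaffected. The sketch below is illustrative; the class name and parameters are invented for the example.

```python
import time

class TokenBucket:
    """Allow bursts of up to `capacity` requests, refilled at `rate` tokens
    per second. Keeping one bucket per client IP lets a server throttle a
    flooding host without touching well-behaved traffic."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
# A burst of five back-to-back requests: the first three pass, the rest are shed.
results = [bucket.allow() for _ in range(5)]
```

In a real server the deny branch would drop the connection or return an error instead of returning False; this only shows the bookkeeping.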
Indirect attacks
An indirect attack is an attack launched by a third party computer. By using someone else's computer to launch an
attack, it becomes far more difficult to track down the actual attacker. There have also been cases where attackers
took advantage of public anonymizing systems, such as the Tor onion router system.
Backdoors
A backdoor in a computer system (or cryptosystem or algorithm) is a method of bypassing normal authentication,
securing remote access to a computer, obtaining access to plaintext, and so on, while attempting to remain
undetected. The backdoor may take the form of an installed program (e.g., Back Orifice), or could be a modification
to an existing program or hardware device. A specific form of backdoor is a rootkit, which replaces system binaries and/or hooks into the function calls of the operating system to hide the presence of other programs, users, services, and open ports. It may also fake information about disk and memory usage.
Direct access attacks
Someone who has gained physical access to a computer can install any type of device to compromise security, including operating system
modifications, software worms, key loggers, and covert listening
devices. The attacker can also easily download large quantities of data
onto backup media, for instance CD-R/DVD-R, tape; or portable
devices such as keydrives, digital cameras or digital audio players.
Another common technique is to boot an operating system contained
on a CD-ROM or other bootable media and read the data from the hard drive(s) that way. The only way to defeat this is to encrypt the storage media and store the key separately from the system.
See also: Category:Cryptographic attacks
Common consumer devices that can be used to
transfer data surreptitiously.
Reducing vulnerabilities
Computer code is regarded by some as a form of mathematics. It is theoretically possible to prove the correctness of
certain classes of computer programs, though the feasibility of actually achieving this in large-scale practical systems
is regarded as small by some with practical experience in the industry — see Bruce Schneier et al.
It is also possible to protect messages in transit (i.e., communications) by means of cryptography. One method of encryption, the one-time pad, is unbreakable when correctly used. This method was used by the Soviet Union during the Cold War, though flaws in their implementation allowed some cryptanalysis (see the Venona project). The method uses a matching pair of key codes, securely distributed, which are used once and only once to encode and decode a single message. For transmitted computer data this method is difficult to use properly (securely), and highly inconvenient as well. Other methods of encryption, while breakable in theory, are often virtually impossible to break directly by any means publicly known today. Breaking them requires some non-cryptographic input, such as a stolen key, stolen plaintext (at either end of the transmission), or some other extra cryptanalytic information.
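The once-and-only-once rule can be seen in a few lines of Python: encryption and decryption are the same XOR operation with the shared pad. This is a sketch; the function name and sample message are invented for the example.

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR each byte with the pad. For the scheme to be unbreakable, the pad
    must be truly random, exactly as long as the message, and never reused:
    XOR-ing two ciphertexts made with the same pad cancels the pad and
    leaks the XOR of the two plaintexts."""
    if len(pad) != len(data):
        raise ValueError("one-time pad must match the message length")
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))  # one copy of the matching key pair
ciphertext = otp_xor(message, pad)       # encode
recovered = otp_xor(ciphertext, pad)     # decode: the same XOR with the same pad
```

The securely-distributed "matching pair of key-codes" from the text corresponds to the two copies of `pad`, one held at each end.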
Social engineering and direct computer access (physical) attacks can only be prevented by non-computer means,
which can be difficult to enforce, relative to the sensitivity of the information. Even in a highly disciplined
environment, such as in military organizations, social engineering attacks can still be difficult to foresee and prevent.
In practice, only a small fraction of computer program code is mathematically proven, or even subjected to comprehensive information technology audits or inexpensive but extremely valuable computer security audits, so it is usually possible for a determined hacker to read, copy, alter, or destroy data in well-secured computers, albeit at the cost of great time and resources. Few attackers would audit applications for vulnerabilities just to attack a single specific system. An attacker's chances can be reduced by keeping systems up to date, using a security scanner, and/or hiring competent people responsible for security. The effects of data loss or damage can be reduced by careful backups and insurance.
Security measures
A state of computer "security" is the conceptual ideal, attained by the use of the three processes:
1. Prevention
2. Detection
3. Response
• User account access controls and cryptography can protect system files and data, respectively.
• Firewalls are by far the most common prevention systems from a network security perspective as they can (if
properly configured) shield access to internal network services, and block certain kinds of attacks through packet
filtering.
• Intrusion detection systems (IDSs) are designed to detect network attacks in progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems.
• "Response" is necessarily defined by the assessed security requirements of an individual system and may cover
the range from simple upgrade of protections to notification of legal authorities, counter-attacks, and the like. In
some special cases, a complete destruction of the compromised system is favored, as it may happen that not all
the compromised resources are detected.
Today, computer security comprises mainly "preventive" measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet. It can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking, or as a so-called physical firewall, a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet.
However, relatively few organisations maintain computer systems with effective detection systems, and fewer still
have organised response mechanisms in place.
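The filtering a firewall performs can be sketched as an ordered rule list where the first matching rule decides the packet's fate. The rule format below is invented for illustration and is not any real firewall's syntax.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # CIDR block the source address must fall in
    dst_port: Optional[int]  # None matches any destination port

# Hypothetical policy: SSH only from the LAN, HTTPS from anywhere, deny the rest.
RULES = [
    Rule("allow", "192.168.1.0/24", 22),
    Rule("allow", "0.0.0.0/0", 443),
    Rule("deny", "0.0.0.0/0", None),
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule (first match wins)."""
    for rule in RULES:
        if ip_address(src_ip) in ip_network(rule.src) and rule.dst_port in (None, dst_port):
            return rule.action
    return "deny"  # implicit default deny if no rule matches
```

Real packet filters match on more fields (protocol, destination address, connection state), but the first-match-wins evaluation over an ordered rule list is the common core.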
Difficulty with response
Responding forcefully to attempted security breaches (in the manner that one would for attempted physical security
breaches) is often very difficult for a variety of reasons:
• Identifying attackers is difficult, as they are often in a different jurisdiction from the systems they attempt to breach, and operate through proxies, temporary anonymous dial-up accounts, wireless connections, and other anonymizing procedures which make backtracing difficult; these intermediaries are often located in yet another jurisdiction. If attackers successfully breach security, they are often able to delete logs to cover their tracks.
• The sheer number of attempted attacks is so large that organisations cannot spend time pursuing each attacker (a
typical home user with a permanent (e.g., cable modem) connection will be attacked at least several times per day,
so more attractive targets could be presumed to see many more). Note, however, that the bulk of these attacks are made by automated vulnerability scanners and computer worms.
• Law enforcement officers are often unfamiliar with information technology, and so lack the skills and interest in
pursuing attackers. There are also budgetary constraints. It has been argued that the high cost of technology, such
as DNA testing, and improved forensics mean less money for other kinds of law enforcement, so the overall rate
of criminals not getting dealt with goes up as the cost of the technology increases. In addition, the identification of
attackers across a network may require logs from various points in the network and in many countries, the release
of these records to law enforcement (with the exception of being voluntarily surrendered by a network
administrator or a system administrator) requires a search warrant and, depending on the circumstances, the legal
proceedings required can be drawn out to the point where the records are either regularly destroyed, or the
information is no longer relevant.
See also
Lists and categories
• Category:Computer security exploits — Types of computer security vulnerabilities and attacks
• Category:Spyware removal — Programs that find and remove spyware
• List of computer virus hoaxes
• List of computer viruses
• List of trojan horses
• Timeline of notable computer viruses and worms
Individual articles
• Adware
• Antivirus software
• Black hat
• Computer forensics
• Computer virus
• Crash-only software
• Cryptography (aka cryptology)
• Data remanence
• Data spill
• Defensive computing
• Defensive programming
• Full disclosure
• Hacking
• Malware
• Mangled packet
• Microreboot
• Physical security
• Ring (computer security)
• RISKS Digest
• Security engineering
• Security through obscurity
• Software Security Assurance
• Spam
• Spyware
• Targeted threat
• Trojan horse
• Virus hoax
• Worm
• XSA
• Zero day attack
References
• Ross J. Anderson: Security Engineering: A Guide to Building Dependable Distributed Systems, ISBN
0-471-38922-6
• Bruce Schneier: Secrets & Lies: Digital Security in a Networked World, ISBN 0-471-25311-1
• Cyrus Peikari, Anton Chuvakin: Security Warrior, ISBN 0-596-00545-8
• Jack Koziol, David Litchfield: The Shellcoder's Handbook: Discovering and Exploiting Security Holes, ISBN
0-7645-4468-3
• Clifford Stoll: The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, an informal — and
easily approachable by the non-specialist — account of a real incident (and pattern) of computer insecurity, ISBN
0-7434-1146-3
• Roger R. Schell: The Internet Rules but the Emperor Has No Clothes [1] ACSAC 1996
• William Caelli: Relearning "Trusted Systems" in an Age of NIIP: Lessons from the Past for the Future. [2] 2002
• Noel Davis: Cracked! [3] story of a community network that was cracked and what was done to recover from it
2000
• Shon Harris: CISSP All-In-One Study Guide, ISBN 0071497870
• Daniel Ventre: Information Warfare, Wiley-ISTE, ISBN 9781848210943
External links
• Participating With Safety [4], a guide to electronic security threats from the viewpoint of civil liberties
organisations. Licensed under the GNU Free Documentation License.
• Article "Why Information Security is Hard — An Economic Perspective [7]" by Ross Anderson
• The Information Security Glossary [5]
• The SANS Top 20 Internet Security Vulnerabilities [6]
• Amit Singh: A Taste of Computer Security [7] 2004
Lists of currently known unpatched vulnerabilities
• Lists of advisories by product [8] Lists of known unpatched vulnerabilities from Secunia
• Vulnerabilities [9] from SecurityFocus, home of the famous Bugtraq mailing list.
• List of vulnerabilities maintained by the government of the USA [10]
References
[1] http://csdl.computer.org/comp/proceedings/acsac/1996/7606/00/7606xiv.pdf
[2] http://cisse.info/history/CISSE%20J/2002/cael.pdf
[3] http://rootprompt.org/article.php3?article=403
[4] http://secdocs.net/manual/lp-sec/
[5] http://www.yourwindow.to/information-security/
[6] http://www.sans.org/top20/
[7] http://www.kernelthread.com/publications/security/index.html
[8] http://secunia.com/product/
[9] http://www.securityfocus.com/vulnerabilities
[10] https://www.kb.cert.org/vuls/
5.0 Networks
Communications security
Communications security is the discipline of preventing unauthorized interceptors from accessing
telecommunications in an intelligible form, while still delivering content to the intended recipients. In the United
States Department of Defense culture, it is often referred to by the portmanteau COMSEC. The field includes
cryptosecurity, transmission security, emission security, traffic-flow security, and physical security of COMSEC equipment.
Applications
COMSEC is used to protect both classified and unclassified traffic on military communication networks, including
voice, video, and data. It is used for both analog and digital applications, and both wired and wireless links.
Secure voice over internet protocol (SVOIP) has become the de facto standard for securing voice communication,
replacing the need for STU-X and STE equipment in much of the U.S. Department of Defense. USCENTCOM
moved entirely to SVOIP in 2008.[1]
Specialties
• Cryptosecurity: The component of communications security that results from the provision of technically sound
cryptosystems and their proper use. This includes ensuring message confidentiality and authenticity.
• Emission security (EMSEC): Protection resulting from all measures taken to deny unauthorized persons
information of value which might be derived from intercept and analysis of compromising emanations from
crypto-equipment, automated information systems (computers), and telecommunications systems.
• Physical security: The component of communications security that results from all physical measures necessary
to safeguard classified equipment, material, and documents from access thereto or observation thereof by
unauthorized persons.
• Traffic-flow security: Measures that conceal the presence and properties of valid messages on a network. It
includes the protection resulting from features, inherent in some cryptoequipment, that conceal the presence of
valid messages on a communications circuit, normally achieved by causing the circuit to appear busy at all times.
• Transmission security (TRANSEC): The component of communications security that results from the
application of measures designed to protect transmissions from interception and exploitation by means other than
cryptanalysis (e.g., frequency hopping and spread spectrum).
Separating classified and unclassified information
The red/black concept requires that electrical and electronic circuits, components, and systems which handle encrypted ciphertext information (black) be separated from those which handle unencrypted, classified plaintext information
(red). The red/black concept can operate on the level of circuits, components, equipment, systems, or the physical
areas in which they are contained.
Related terms
• AKMS = the Army Key Management System
• AEK = Algorithmic Encryption Key
• CT3 = Common Tier 3
• CCI = Controlled Cryptographic Item - equipment which contains COMSEC embedded devices
• EKMS = Electronic Key Management System
• NSA = National Security Agency
• ACES = Automated Communications Engineering Software
• DTD = The Data Transfer Device
• DIRNSA = Director of National Security Agency
• TEK = Traffic Encryption Key
• TED = Trunk Encryption Device such as the WALBURN/KG family of CCI
• KEK = Key Encryption Key
• OWK = Over the Wire Key
• OTAR = Over The Air Rekeying
• LCMS = Local COMSEC Management Software
• KYK-13 = Electronic Transfer Device
• KOI-18 = Tape Reader General Purpose
• KYX-15 = Electronic Transfer Device
• KG-30 = TSEC family of COMSEC equipment
• TSEC = Telecommunications Security (sometimes erroneously referred to as transmission security or TRANSEC)
• SOI = Signal Operating Instruction
• SKL = Simple Key Loader
• TPI = Two Person Integrity
• STU-III (secure phone)
• STE = Secure Terminal Equipment (secure phone)
Types of COMSEC equipment:
• Crypto equipment: Any equipment that embodies cryptographic logic or performs one or more cryptographic
functions (key generation, encryption, and authentication).
• Crypto-ancillary equipment: Equipment designed specifically to facilitate efficient or reliable operation of
crypto-equipment, without performing cryptographic functions itself.[2]
• Crypto-production equipment: Equipment used to produce or load keying material
• Authentication equipment:
DoD key management system
The EKMS is the DoD key management, COMSEC material distribution, and logistics support system. The NSA established the EKMS program to supply electronic key to COMSEC devices in a secure and timely manner, and to provide COMSEC managers with an automated system capable of ordering, generation, production, distribution, storage, security accounting, and access control.
The Army's platform in the four-tiered EKMS, AKMS, automates frequency management and COMSEC
management operations. It eliminates paper keying material, hardcopy SOI, and associated time and
resource-intensive courier distribution. It has four components:
• LCMS provides automation for the detailed accounting required for every COMSEC account, and electronic key
generation and distribution capability.
• ACES is the frequency management portion of AKMS. ACES has been designated by the Military
Communications Electronics Board as the joint standard for use by all services in development of frequency
management and cryptonet planning.
• CT3 with DTD software is a fielded, ruggedized hand-held device that handles, views, stores, and loads SOI,
Key, and electronic protection data. DTD provides an improved net-control device to automate crypto-net control
operations for communications networks employing electronically-keyed COMSEC equipment.
• SKL is a hand-held PDA that handles, views, stores, and loads SOI, Key, and electronic protection data.
See also
• Cryptography
• Information security
• Information warfare
• NSA encryption systems
• Operations security
• Secure Communication
• Signals Intelligence
• Traffic analysis
• Type 1 product
References
[1] USCENTCOM PL 117-02-1.
[2] INFOSEC-99
• This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm) (in support of MIL-STD-188).
• National Information Systems Security Glossary
• Department of Defense Dictionary of Military and Associated Terms
• http://www.dtic.mil/doctrine/jel/cjcsd/cjcsi/6511_01.pdf
• http://www.gordon.army.mil/sigbde15/Schools/25L/c03lp1.html
• http://www.dtic.mil/whs/directives/corres/pdf/466002p.pdf
• http://cryptome.sabotage.org/HB202D.PDF
• http://peoc3t.monmouth.army.mil/netops/akms.html
• Cryptography machines (http://www.jproc.ca/crypto/menu.html)
External links
• COMSEC/SIGINT News Group - Discussion Forum (http://groups-beta.google.com/group/sigint)
Network security
In the field of networking, the specialist area of network security[1] consists of the provisions and policies adopted
by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the
computer network and network-accessible resources.
The first step to information security
The terms network security and information security are often used interchangeably. Network security is generally
taken as providing protection at the boundaries of an organization by keeping out intruders (hackers). Information
security, however, explicitly focuses on protecting data resources from malware attack or simple mistakes by people
within an organization by use of data loss prevention (DLP) techniques. One of these techniques is to
compartmentalize large networks with internal boundaries.
Network security concepts
Network security starts with authenticating the user, commonly with a username and a password. Since this requires just one thing in addition to the user name, namely the password, which is something the user 'knows', this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g. a security token or 'dongle', an ATM card, or a mobile phone); with three-factor authentication, something the user 'is' is also used (e.g. a fingerprint or retinal scan).
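The "something you know" factor is normally verified against a salted hash rather than a stored password, so a stolen credential file does not directly reveal passwords. Below is a minimal sketch using Python's standard library; the function names are invented for the example.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    """Store a random salt plus a PBKDF2 digest, never the password itself.
    The salt makes identical passwords hash differently across accounts."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 100_000) -> bool:
    """Recompute the digest from the supplied password and compare in
    constant time to avoid leaking information through timing."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

# Enrolment: the server records only (salt, stored), not the password.
salt, stored = hash_password("correct horse battery staple")
```

The iteration count deliberately slows each guess, which matters against offline brute-force attacks on a stolen hash database.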
Once authenticated, a firewall enforces access policies, such as which services the network users are allowed to access.[2] Though effective in preventing unauthorized access, this component may fail to check potentially
harmful content such as computer worms or Trojans being transmitted over the network. Anti-virus software or an
intrusion prevention system (IPS)[3] help detect and inhibit the action of such malware. An anomaly-based intrusion
detection system may also monitor the network and traffic for unexpected (i.e. suspicious) content or behavior and
other anomalies to protect resources, e.g. from denial of service attacks or an employee accessing files at strange
times. Individual events occurring on the network may be logged for audit purposes and for later high level analysis.
Communication between two hosts using a network could be encrypted to maintain privacy.
Honeypots, essentially decoy network-accessible resources, may be deployed in a network as surveillance and early-warning tools, since a honeypot is not normally accessed for legitimate purposes. Techniques used by attackers who attempt to compromise these decoy resources are studied during and after an attack to learn about new exploitation techniques. Such analysis can be used to further tighten security of the actual network being protected by the honeypot.[4]
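A toy honeypot can be as simple as a socket listener on an otherwise-unused port that presents a fake service banner and logs every connection; since no legitimate client should ever connect, each log entry is worth investigating. This sketch (the names and the decoy banner are invented) uses only the standard library.

```python
import socket
import threading
from datetime import datetime, timezone

def start_honeypot(host: str = "127.0.0.1"):
    """Listen on an OS-assigned free port, log the first peer that connects,
    and answer with a fake FTP banner to make the decoy look real."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))            # port 0: let the OS pick an unused port
    srv.listen()

    def serve_once():
        conn, peer = srv.accept()
        log.append((datetime.now(timezone.utc).isoformat(), peer))
        conn.sendall(b"220 FTP server ready\r\n")   # decoy banner
        conn.close()
        srv.close()

    thread = threading.Thread(target=serve_once, daemon=True)
    thread.start()
    return srv.getsockname()[1], log, thread

# Simulate an attacker probing the decoy port.
port, log, thread = start_honeypot()
probe = socket.create_connection(("127.0.0.1", port))
banner = probe.recv(64)
probe.close()
thread.join(timeout=5)
```

A production honeypot would keep listening, record what the attacker sends after the banner, and feed the log into the kind of post-attack analysis the paragraph above describes.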
Security management
Security management for networks varies with the situation. A small home or office may require only basic security, while large businesses may require high-maintenance, advanced software and hardware to prevent malicious attacks from hacking and spamming.
Small homes
• A basic firewall like COMODO Internet Security or a unified threat management system.
• For Windows users, basic Antivirus software like AVG Antivirus, ESET NOD32 Antivirus, Kaspersky, McAfee,
Avast!, Zone Alarm Security Suite or Norton AntiVirus. An anti-spyware program such as Windows Defender or
Spybot – Search & Destroy would also be a good idea. There are many other types of antivirus or anti-spyware
programs out there to be considered.
• When using a wireless connection, use a robust password. Also try to use the strongest security supported by your
wireless devices, such as WPA2 with AES encryption.
• If using wireless: change the default SSID network name and disable SSID broadcast, as this function is unnecessary for home use. (However, many security experts consider this to be relatively useless: http://blogs.zdnet.com/Ou/index.php?p=43 )
• Enable MAC Address filtering to keep track of all home network MAC devices connecting to your router.
• Assign static IP addresses to network devices.
• Disable ICMP ping on the router.
• Review router or firewall logs to help identify abnormal network connections or traffic to the Internet.
• Use passwords for all accounts.
• Have multiple accounts per family member, using non-administrative accounts for day-to-day activities. Disable the guest account (Control Panel > Administrative Tools > Computer Management > Users).
• Raise awareness about information security to children.[5]
Medium businesses
• A fairly strong firewall or unified threat management system.
• Strong antivirus software and Internet security software.
• For authentication, use strong passwords and change them on a bi-weekly/monthly basis.
• When using a wireless connection, use a robust password.
• Raise awareness about physical security among employees.
• Use an optional network analyzer or network monitor.
• An enlightened administrator or manager.
Large businesses
• A strong firewall and proxy to keep unwanted people out.
• A strong antivirus software package and Internet security software package.
• For authentication, use strong passwords and change them on a weekly/bi-weekly basis.
• When using a wireless connection, use a robust password.
• Exercise physical security precautions with employees.
• Prepare a network analyzer or network monitor and use it when needed.
• Implement physical security management like closed-circuit television for entry areas and restricted zones.
• Security fencing to mark the company's perimeter.
• Fire extinguishers for fire-sensitive areas like server rooms and security rooms.
• Security guards can help to maximize security.
School
• An adjustable firewall and proxy to allow authorized users access from the outside and inside.
• Strong Antivirus software and Internet Security Software packages.
• Wireless connections that lead to firewalls.
• Children's Internet Protection Act compliance.
• Supervision of network to guarantee updates and changes based on popular site usage.
• Constant supervision by teachers, librarians, and administrators to guarantee protection against attacks by both internet and sneakernet sources.
Large government
• A strong firewall and proxy to keep unwanted people out.
• Strong antivirus software and Internet security software suites.
• Strong encryption.
• Whitelist authorized wireless connections; block all else.
• All network hardware is in secure zones.
• All hosts should be on a private network that is invisible from the outside.
• Put web servers in a DMZ, or behind a firewall from the outside and from the inside.
• Security fencing to mark perimeter and set wireless range to this.
Further reading
• Security of the Internet [6] (The Froehlich/Kent Encyclopedia of Telecommunications vol. 15. Marcel Dekker,
New York, 1997, pp. 231-255.)
• Introduction to Network Security [7], Matt Curtin.
• Security Monitoring with Cisco Security MARS [8], Gary Halleen/Greg Kellogg, Cisco Press, Jul. 6, 2007.
• Self-Defending Networks: The Next Generation of Network Security [9], Duane DeCapite, Cisco Press, Sep. 8,
2006.
• Security Threat Mitigation and Response: Understanding CS-MARS [10], Dale Tesch/Greg Abelar, Cisco Press,
Sep. 26, 2006.
• Deploying Zone-Based Firewalls [11], Ivan Pepelnjak, Cisco Press, Oct. 5, 2006.
• Network Security: PRIVATE Communication in a PUBLIC World, Charlie Kaufman, Radia Perlman, and Mike Speciner, Prentice-Hall, 2002.
• Network Infrastructure Security [21], Angus Wong and Alan Yeung, Springer, 2009.
See also
• Cloud computing security
• Crimeware
• Data Loss Prevention
• Wireless LAN Security
• Timeline of hacker history
• Information Leak Prevention
• Network Security Toolkit
• TCP sequence prediction attack
• TCP Gender Changer
• Netsentron
External links
• Definition of Network Security [12]
• Cisco IT Case Studies [13] about Security and VPN
• Debate: The data or the source - which is the real threat to network security? - Video [14]
• OpenLearn - Network Security [15]
References
[1] Simmonds, A; Sandilands, P; van Ekert, L (2004). "An Ontology for Network Security Attacks". Lecture Notes in Computer Science 3285: 317–323.
[2] A Role-Based Trusted Network Provides Pervasive Security and Compliance (http://newsroom.cisco.com/dlls/2008/ts_010208b.html?sid=BAC-NewsWire) - interview with Jayshree Ullal, senior VP of Cisco
[3] Dave Dittrich, Network monitoring/Intrusion Detection Systems (IDS) (http://staff.washington.edu/dittrich/network.html), University of Washington.
[4] Honeypots, Honeynets (http://www.honeypots.net)
[5] Julian Fredin, Social software development program Wi-Tech
[6] http://www.cert.org/encyc_article/tocencyc.html
[7] http://www.interhack.net/pubs/network-security
[8] http://www.ciscopress.com/bookstore/product.asp?isbn=1587052709
[9] http://www.ciscopress.com/bookstore/product.asp?isbn=1587052539
[10] http://www.ciscopress.com/bookstore/product.asp?isbn=1587052601
[11] http://www.ciscopress.com/bookstore/product.asp?isbn=1587053101
[12] http://www.deepnines.com/secure-web-gateway/definition-of-network-security
[13] http://www.cisco.com/web/about/ciscoitatwork/case_studies/security.html
[14] http://www.netevents.tv/docuplayer.asp?docid=102
[15] http://openlearn.open.ac.uk/course/view.php?id=2587
5.1 Internet
Book:Internet security
Internet security
Methods of attack
• Arbitrary code execution
• Buffer overflow
• Cross-site request forgery
• Cross-site scripting
• Denial-of-service attack
• DNS cache poisoning
• Drive-by download
• Malware
• Password cracking
• Phishing
• SQL injection
• Virus hoax
Methods of prevention
• Cryptography
• Firewall
Firewall (computing)
A firewall is a part of a computer system or
network that is designed to block
unauthorized access while permitting
authorized communications. It is a device or
set of devices which is configured to permit
or deny computer applications based upon a
set of rules and other criteria.
Firewalls can be implemented in either
hardware or software, or a combination of
both. Firewalls are frequently used to
prevent unauthorized Internet users from
accessing private networks connected to the
Internet, especially intranets. All messages
entering or leaving the intranet pass through
the firewall, which examines each message
and blocks those that do not meet the
specified security criteria.
[Illustration: where a firewall would be located in a network.]
There are several types of firewall techniques:
1. Packet filter: Packet filtering inspects
each packet passing through the network
and accepts or rejects it based on
user-defined rules. Although difficult to
configure, it is fairly effective and mostly
transparent to its users. It is susceptible to
IP spoofing.
2. Application gateway: Applies security
mechanisms to specific applications, such
as FTP and Telnet servers. This is very
effective, but can impose a performance
degradation.
3. Circuit-level gateway: Applies security
mechanisms when a TCP or UDP
connection is established. Once the
connection has been made, packets can
flow between the hosts without further
checking.
[Illustration: an example of a user interface for a firewall on Ubuntu (Gufw).]
4. Proxy server: Intercepts all messages entering and leaving the network. The proxy server effectively hides the
true network addresses.
Function
A firewall is a dedicated appliance, or software running on a computer, which inspects network traffic passing
through it, and denies or permits passage based on a set of rules/criteria.
It is normally placed between a protected network and an unprotected network and acts like a gate to protect assets to
ensure that nothing private goes out and nothing malicious comes in.
A firewall's basic task is to regulate some of the flow of traffic between computer networks of different trust levels.
Typical examples are the Internet which is a zone with no trust and an internal network which is a zone of higher
trust. A zone with an intermediate trust level, situated between the Internet and a trusted internal network, is often
referred to as a "perimeter network" or Demilitarized zone (DMZ).
A firewall's function within a network is similar to physical firewalls with fire doors in building construction. In the
former case, it is used to prevent network intrusion to the private network. In the latter case, it is intended to contain
and delay structural fire from spreading to adjacent structures.
History
The term firewall/fireblock originally meant a wall to confine a fire or potential fire within a building; cf. firewall
(construction). Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a
vehicle or aircraft from the passenger compartment.
• The Morris Worm spread itself through multiple vulnerabilities in the machines of the time. Although it was not
malicious in intent, the Morris Worm was the first large scale attack on Internet security; the online community
was neither expecting an attack nor prepared to deal with one.[1]
First generation: packet filters
The first paper on firewall technology was published in 1988, when engineers from Digital Equipment Corporation
(DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of
what became a highly evolved and technical internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve
Bellovin were continuing their research in packet filtering and developed a working model for their own company
based upon their original first generation architecture.
This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (it stores no
information on connection "state"). Instead, it filters each packet based only on information contained in the packet
itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP
and UDP traffic, the port number).
TCP and UDP protocols comprise most communication over the Internet, and because TCP and UDP traffic by
convention uses well known ports for particular types of traffic, a "stateless" packet filter can distinguish between,
and thus control, those types of traffic (such as web browsing, remote printing, email transmission, file transfer),
unless the machines on each side of the packet filter are both using the same non-standard ports.
Packet filtering firewalls work mainly on the first three layers of the OSI reference model, which means most of the
work is done between the physical and network layers, with a small amount of inspection of the transport layer to
determine source and destination port numbers. When a packet originates from the sender and filters through a firewall, the
device checks for matches to any of the packet filtering rules that are configured in the firewall and drops or rejects
the packet accordingly. When the packet passes through the firewall, it is filtered on a protocol/port-number
basis (GSS). For example, if a rule in the firewall exists to block Telnet access, then the firewall will block TCP
packets destined for port number 23.
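The first-match rule evaluation described above can be sketched in a few lines of Python. The rule format, field names, and default-deny policy below are illustrative assumptions, not any particular firewall's syntax:

```python
# Minimal sketch of a stateless packet filter: each packet is checked in
# order against a rule list, with no connection state kept between packets.

def match(rule, packet):
    """A rule matches when every field it specifies equals the packet's value."""
    return all(packet.get(field) == value for field, value in rule["when"].items())

def filter_packet(rules, packet, default="drop"):
    for rule in rules:
        if match(rule, packet):
            return rule["action"]
    return default  # packets matching no rule fall through to the default policy

# Example ruleset: block inbound Telnet (TCP port 23), allow web traffic.
rules = [
    {"when": {"protocol": "tcp", "dst_port": 23}, "action": "drop"},
    {"when": {"protocol": "tcp", "dst_port": 80}, "action": "accept"},
]

telnet = {"protocol": "tcp", "src": "203.0.113.5", "dst_port": 23}
web = {"protocol": "tcp", "src": "203.0.113.5", "dst_port": 80}
```

Because the filter only consults fields of the packet in hand, it cannot tell whether a packet belongs to an established conversation — the limitation that stateful inspection later addressed.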
Second generation: application layer
The key benefit of application layer filtering is that it can "understand" certain applications and protocols (such as
File Transfer Protocol, DNS, or web browsing), and it can detect if an unwanted protocol is sneaking through on a
non-standard port or if a protocol is being abused in any harmful way.
An application firewall is more secure and reliable than a packet filter firewall because it works on all
seven layers of the OSI reference model, from the application down to the physical layer. It behaves like a packet
filter firewall but can additionally filter traffic on the basis of content. A well-known example of an application
firewall is Microsoft's ISA (Internet Security and Acceleration) Server. An application firewall can filter higher-layer protocols
such as FTP, Telnet, DNS, DHCP, HTTP, TCP, UDP and TFTP (GSS). For example, if an organization wants to
block all information related to the word "foo", content filtering can be enabled on the firewall to block that particular
word. This deeper inspection, however, makes application firewalls considerably slower than stateful packet filters.
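As a rough illustration of content-based filtering, the following Python sketch scans a request payload for blocked keywords. The blocked-word list and the plain-string payload are simplifying assumptions; a real application firewall parses the protocol itself rather than treating the payload as text:

```python
# Sketch of application-layer content filtering: the decision depends on
# what the traffic says, not just on addresses and ports.

BLOCKED_WORDS = {"foo"}  # illustrative blocklist

def inspect_payload(payload: str) -> str:
    """Return 'block' if any blocked word appears in the payload, else 'allow'."""
    lowered = payload.lower()
    if any(word in lowered for word in BLOCKED_WORDS):
        return "block"
    return "allow"
```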
Third generation: "stateful" filters
From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presetto, Janardan Sharma, and Kshitij
Nigam, developed the third generation of firewalls, calling them circuit level firewalls.
Third-generation firewalls, in addition to what first- and second-generation firewalls look for, also regard the
placement of each individual packet within the packet series. This technology is generally referred to as stateful
packet inspection, as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the
start of a new connection, a part of an existing connection, or is an invalid packet. Though there is still a set of static
rules in such a firewall, the state of a connection can itself be one of the criteria which trigger specific rules.
This type of firewall can actually be exploited by certain denial-of-service attacks which can fill the connection
tables with illegitimate connections.
Subsequent developments
In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were refining the concept
of a firewall. The product known as "Visas" was the first system to have a visual integration interface with colors
and icons, which could be easily implemented and accessed on a computer operating system such as Microsoft's
Windows or Apple's MacOS. In 1994 an Israeli company called Check Point Software Technologies built this into
readily available software known as FireWall-1.
The existing deep packet inspection functionality of modern firewalls can be shared by Intrusion-prevention systems
(IPS).
Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working
on standardizing protocols for managing firewalls and other middleboxes.
Another axis of development is the integration of user identity into firewall rules. Many firewalls provide such
features by binding user identities to IP or MAC addresses, which is only approximate and can be easily
circumvented. The NuFW firewall provides real identity-based firewalling, by requesting the user's signature for each
connection.
Types
There are several classifications of firewalls depending on where the communication is taking place, where the
communication is intercepted and the state that is being traced.
Network layer and packet filters
Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not
allowing packets to pass through the firewall unless they match the established rule set. The firewall administrator
may define the rules; or default rules may apply. The term "packet filter" originated in the context of BSD operating
systems.
Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful firewalls maintain
context about active sessions, and use that "state information" to speed packet processing. Any existing network
connection can be described by several properties, including source and destination IP address, UDP or TCP ports,
and the current stage of the connection's lifetime (including session initiation, handshaking, data transfer, or
connection completion). If a packet does not match an existing connection, it will be evaluated according to the
ruleset for new connections. If a packet matches an existing connection based on comparison with the firewall's state
table, it will be allowed to pass without further processing.
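The state-table behaviour described above can be sketched as follows. The connection key format and the `allow_new` predicate are illustrative assumptions; real implementations also track TCP flags, timeouts, and sequence numbers:

```python
# Sketch of stateful inspection: a state table keyed by the connection
# 5-tuple. Packets matching an existing entry pass on the fast path;
# others are evaluated against the ruleset for new connections.

def conn_key(pkt):
    return (pkt["proto"], pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])

class StatefulFirewall:
    def __init__(self, allow_new):
        self.state = set()          # keys of established connections
        self.allow_new = allow_new  # predicate deciding on new connections

    def handle(self, pkt):
        key = conn_key(pkt)
        if key in self.state:
            return "pass"           # part of an existing connection: no ruleset lookup
        if self.allow_new(pkt):
            self.state.add(key)     # record the new connection's state
            return "pass"
        return "drop"

# Example policy: only new connections to port 443 are accepted.
fw = StatefulFirewall(allow_new=lambda p: p["dport"] == 443)
first = {"proto": "tcp", "src": "10.0.0.2", "sport": 51000,
         "dst": "192.0.2.1", "dport": 443}
```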
Stateless firewalls require less memory, and can be faster for simple filters that require less time to filter than to look
up a session. They may also be necessary for filtering stateless network protocols that have no concept of a session.
However, they cannot make more complex decisions based on what stage communications between hosts have
reached.
Modern firewalls can filter traffic based on many packet attributes like source IP address, source port, destination IP
address or port, or destination service like WWW or FTP. They can filter based on protocols, TTL values, netblock of
the originator, and many other attributes.
Commonly used packet filters on various versions of Unix are ipf (various), ipfw (FreeBSD/Mac OS X), pf
(OpenBSD, and all other BSDs), iptables/ipchains (Linux).
Application-layer
Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or
ftp traffic), and may intercept all packets traveling to or from an application. They block other packets (usually
dropping them without acknowledgment to the sender). In principle, application firewalls can prevent all unwanted
outside traffic from reaching protected machines.
By inspecting all packets for improper content, firewalls can restrict or prevent outright the spread of networked
computer worms and trojans. The additional inspection criteria can add extra latency to the forwarding of packets to
their destination.
Proxies
A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a
firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst
blocking other packets.
Proxies make tampering with an internal system from the external network more difficult and misuse of one internal
system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application
proxy remains intact and properly configured). Conversely, intruders may hijack a publicly-reachable system and
use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While
use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to
pass packets to a target network.
Network address translation
Firewalls often have network address translation (NAT) functionality, and the hosts protected behind a firewall
commonly have addresses in the "private address range", as defined in RFC 1918. Firewalls often have such
functionality to hide the true address of protected hosts. Originally, the NAT function was developed to address the
limited number of IPv4 routable addresses that could be used or assigned to companies or individuals as well as
reduce both the amount and therefore cost of obtaining enough public addresses for every computer in an
organization. Hiding the addresses of protected devices has become an increasingly important defense against
network reconnaissance.
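A minimal sketch of the source-NAT behaviour described above, using illustrative addresses and a naive sequential port-allocation scheme (real NAT devices also expire mappings and handle port collisions):

```python
# Sketch of source NAT: private source addresses are rewritten to the
# firewall's single public address, and a translation table maps each
# allocated public port back to the internal host for return traffic.

class Nat:
    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table = {}   # public port -> (private ip, private port)

    def outbound(self, src_ip, src_port):
        """Translate an outgoing packet; return the (ip, port) the outside sees."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (src_ip, src_port)
        return (self.public_ip, public_port)

    def inbound(self, public_port):
        """Translate a reply back to the internal host; None means no mapping
        exists, so unsolicited inbound traffic is effectively dropped."""
        return self.table.get(public_port)

nat = Nat("198.51.100.7")
```

The `inbound` fall-through is why NAT, although designed for address conservation, incidentally hides internal hosts from network reconnaissance.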
See also
• Access control list
• Bastion host
• Circuit-level gateway
• Comparison of firewalls
• Computer security
• Egress filtering
• End-to-end connectivity
• Firewall pinhole
• Firewalls and Internet Security (book)
• Golden Shield Project (aka Great Firewall of China)
• List of Linux router or firewall distributions
• Mangled packet
• Network reconnaissance
• Packet
• Personal firewall
• Sandbox (computer security)
• Screened-subnet firewall
• Stateful firewall
• Stateful packet inspection
• Unified threat management
• Virtual firewall
External links
• Internet Firewalls: Frequently Asked Questions [2], compiled by Matt Curtin, Marcus Ranum and Paul Robertson.
• Evolution of the Firewall Industry [3] - Discusses different architectures and their differences, how packets are
processed, and provides a timeline of the evolution.
• A History and Survey of Network Firewalls [4] - provides an overview of firewalls at the various ISO levels, with
references to the original papers where first firewall work was reported.
• Software Firewalls: Made of Straw? Part 1 [5] and Software Firewalls: Made of Straw? Part 2 [6] - a technical view
on software firewall design and potential weaknesses
References
[1] RFC 1135 The Helminthiasis of the Internet (http://tools.ietf.org/html/rfc1135)
[2] http://www.faqs.org/faqs/firewalls-faq/
[3] http://www.cisco.com/univercd/cc/td/doc/product/iaabu/centri4/user/scf4ch3.htm
[4] http://www.cs.unm.edu/~treport/tr/02-12/firewall.pdf
[5] http://www.securityfocus.com/infocus/1839
[6] http://www.securityfocus.com/infocus/1840
Denial-of-service attack
A denial-of-service attack (DoS attack) or distributed
denial-of-service attack (DDoS attack) is an attempt to make a
computer resource unavailable to its intended users. Although the
means to carry out, motives for, and targets of a DoS attack may vary,
it generally consists of the concerted efforts of a person or people to
prevent an Internet site or service from functioning efficiently or at all,
temporarily or indefinitely. Perpetrators of DoS attacks typically target
sites or services hosted on high-profile web servers such as banks,
credit card payment gateways, and even root nameservers. The term is
generally used with regard to computer networks, but is not limited to
this field; for example, it is also used in reference to CPU resource
management.[1]
One common method of attack involves saturating the target (victim)
machine with external communications requests, such that it cannot
respond to legitimate traffic, or responds so slowly as to be rendered
effectively unavailable. In general terms, DoS attacks are implemented
by either forcing the targeted computer(s) to reset, or consuming its
resources so that it can no longer provide its intended service, or obstructing the communication media between the
intended users and the victim so that they can no longer communicate adequately.
[Diagram: DDoS Stacheldraht attack.]
Denial-of-service attacks are considered violations of the IAB's Internet proper use policy, and also violate the
acceptable use policies of virtually all Internet service providers. They also commonly constitute violations of the
laws of individual nations.[2]
Symptoms and Manifestations
The United States Computer Emergency Readiness Team (US-CERT) defines symptoms of denial-of-service attacks to include:
• Unusually slow network performance (opening files or accessing web sites)
• Unavailability of a particular web site
• Inability to access any web site
• Dramatic increase in the number of spam emails received (this type of DoS attack is considered an e-mail bomb)[3]
Denial-of-service attacks can also lead to problems in the network 'branches' around the actual computer being
attacked. For example, the bandwidth of a router between the Internet and a LAN may be consumed by an attack,
compromising not only the intended computer, but also the entire network.
If the attack is conducted on a sufficiently large scale, entire geographical regions of Internet connectivity can be
compromised without the attacker's knowledge or intent by incorrectly configured or flimsy network infrastructure
equipment.
Methods of attack
A "denial-of-service" attack is characterized by an explicit attempt by attackers to prevent legitimate users of a
service from using that service. Attacks can be directed at any network device, including attacks on routing devices
and web, electronic mail, or Domain Name System servers.
A DoS attack can be perpetrated in a number of ways. The five basic types of attack are:
1. Consumption of computational resources, such as bandwidth, disk space, or processor time.
2. Disruption of configuration information, such as routing information.
3. Disruption of state information, such as unsolicited resetting of TCP sessions.
4. Disruption of physical network components.
5. Obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.
A DoS attack may include execution of malware intended to:
• Max out the processor's usage, preventing any work from occurring.
• Trigger errors in the microcode of the machine.
• Trigger errors in the sequencing of instructions, so as to force the computer into an unstable state or lock-up.
• Exploit errors in the operating system, causing resource starvation and/or thrashing, i.e. to use up all available
facilities so no real work can be accomplished.
• Crash the operating system itself.
ICMP flood
A smurf attack is one particular variant of a flooding DoS attack on the public Internet. It relies on misconfigured
network devices that allow packets to be sent to all computer hosts on a particular network via the broadcast address
of the network, rather than a specific machine. The network then serves as a smurf amplifier. In such an attack, the
perpetrators will send large numbers of IP packets with the source address faked to appear to be the address of the
victim. The network's bandwidth is quickly used up, preventing legitimate packets from getting through to their
destination.[4] To combat Denial of Service attacks on the Internet, services like the Smurf Amplifier Registry have
given network service providers the ability to identify misconfigured networks and to take appropriate action such as
filtering.
Ping flood is based on sending the victim an overwhelming number of ping packets, usually using the "ping"
command from unix-like hosts (the -t flag on Windows systems has a far less malignant function). It is very simple
to launch, the primary requirement being access to greater bandwidth than the victim.
SYN flood sends a flood of TCP/SYN packets, often with a forged sender address. Each of these packets is handled
like a connection request, causing the server to spawn a half-open connection, by sending back a TCP/SYN-ACK
packet, and waiting for a packet in response from the sender address. However, because the sender address is forged,
the response never comes. These half-open connections saturate the number of available connections the server is
able to make, keeping it from responding to legitimate requests until after the attack ends.
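The exhaustion mechanism can be illustrated with a small simulation of a bounded half-open connection backlog. The backlog capacity and the tuple-based client identifiers are illustrative assumptions:

```python
# Sketch of why a SYN flood works: a server keeps a bounded backlog of
# half-open connections awaiting the final ACK of the handshake. Spoofed
# SYNs never complete it, so the backlog fills and legitimate SYNs are
# refused until entries complete or time out.

class SynBacklog:
    def __init__(self, capacity=128):
        self.capacity = capacity
        self.half_open = set()   # sources we have SYN-ACKed, awaiting ACK

    def on_syn(self, src):
        if len(self.half_open) >= self.capacity:
            return "refused"     # backlog exhausted: even legitimate clients are denied
        self.half_open.add(src)
        return "syn-ack sent"

    def on_ack(self, src):
        # a real client completing the handshake frees its backlog slot
        self.half_open.discard(src)

server = SynBacklog(capacity=128)
# 128 spoofed SYNs from forged source addresses; the ACKs never arrive.
for i in range(128):
    server.on_syn(("198.51.100.0", i))
```

SYN cookies mitigate exactly this: by encoding the half-open state in the sequence number of the SYN-ACK, the server avoids keeping a backlog entry at all.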
Teardrop attacks
A Teardrop attack involves sending mangled IP fragments with overlapping, over-sized payloads to the target
machine. This can crash various operating systems due to a bug in their TCP/IP fragmentation re-assembly code.[5]
Windows 3.1x, Windows 95 and Windows NT operating systems, as well as versions of Linux prior to versions
2.0.32 and 2.1.63 are vulnerable to this attack.
Around September 2009, a vulnerability in Vista was referred to as a "teardrop attack", but the attack targeted SMB2
which is a higher layer than the TCP packets that teardrop used.[6] [7]
Peer-to-peer attacks
Attackers have found a way to exploit a number of bugs in peer-to-peer servers to initiate DDoS attacks. The most
aggressive of these peer-to-peer-DDoS attacks exploits DC++. Peer-to-peer attacks are different from regular
botnet-based attacks. With peer-to-peer there is no botnet and the attacker does not have to communicate with the
clients it subverts. Instead, the attacker acts as a 'puppet master,' instructing clients of large peer-to-peer file sharing
hubs to disconnect from their peer-to-peer network and to connect to the victim's website instead. As a result, several
thousand computers may aggressively try to connect to a target website. While a typical web server can handle a few
hundred connections/sec before performance begins to degrade, most web servers fail almost instantly under five or
six thousand connections/sec. With a moderately big peer-to-peer attack a site could potentially be hit with up to
750,000 connections in a short order. The targeted web server will be plugged up by the incoming connections.
While peer-to-peer attacks are easy to identify with signatures, the large number of IP addresses that need to be
blocked (often over 250,000 during the course of a big attack) means that this type of attack can overwhelm
mitigation defenses. Even if a mitigation device can keep blocking IP addresses, there are other problems to
consider. For instance, there is a brief moment where the connection is opened on the server side before the signature
itself comes through. Only once the connection is opened to the server can the identifying signature be sent and
detected, and the connection torn down. Even tearing down connections takes server resources and can harm the
server.
This method of attack can be prevented by specifying in the p2p protocol which ports are allowed or not. If port 80 is
not allowed, the possibilities for attack on websites can be very limited.
Permanent denial-of-service attacks
A permanent denial-of-service (PDoS), also known loosely as phlashing,[8] is an attack that damages a system so
badly that it requires replacement or reinstallation of hardware.[9] Unlike the distributed denial-of-service attack, a
PDoS attack exploits security flaws which allow remote administration on the management interfaces of the victim's
hardware, such as routers, printers, or other networking hardware. The attacker uses these vulnerabilities to replace a
device's firmware with a modified, corrupt, or defective firmware image—a process which when done legitimately is
known as flashing. This therefore "bricks" the device, rendering it unusable for its original purpose until it can be
repaired or replaced.
The PDoS is a purely hardware-targeted attack which can be much faster and requires fewer resources than using a
botnet in a DDoS attack. Because of these features, and the potential and high probability of security exploits on
Network Enabled Embedded Devices (NEEDs), this technique has come to the attention of numerous hacker
communities. PhlashDance is a tool created by Rich Smith[10] (an employee of Hewlett-Packard's Systems Security
Lab) used to detect and demonstrate PDoS vulnerabilities at the 2008 EUSecWest Applied Security Conference [11]
in London.[10]
Application level floods
On IRC, IRC floods are a common electronic warfare weapon.
Various DoS-causing exploits such as buffer overflow can cause server-running software to get confused and fill the
disk space or consume all available memory or CPU time.
Other kinds of DoS rely primarily on brute force, flooding the target with an overwhelming flux of packets,
oversaturating its connection bandwidth or depleting the target's system resources. Bandwidth-saturating floods rely
on the attacker having higher bandwidth available than the victim; a common way of achieving this today is via
Distributed Denial of Service, employing a botnet. Other floods may use specific packet types or connection requests
to saturate finite resources by, for example, occupying the maximum number of open connections or filling the
victim's disk space with logs.
A "banana attack" is another particular type of DoS. It involves redirecting outgoing messages from the client back
onto the client, preventing outside access, as well as flooding the client with the sent packets.
An attacker with access to a victim's computer may slow it until it is unusable or crash it by using a fork bomb.
Nuke
A Nuke is an old denial-of-service attack against computer networks consisting of fragmented or otherwise invalid
ICMP packets sent to the target, achieved by using a modified ping utility to repeatedly send this corrupt data, thus
slowing down the affected computer until it comes to a complete stop.
A specific example of a nuke attack that gained some prominence is the WinNuke, which exploited the vulnerability
in the NetBIOS handler in Windows 95. A string of out-of-band data was sent to TCP port 139 of the victim's
machine, causing it to lock up and display a Blue Screen of Death (BSOD).
Distributed attack
A distributed denial of service attack (DDoS) occurs when multiple systems flood the bandwidth or resources of a
targeted system, usually one or more web servers. These systems are compromised by attackers using a variety of
methods.
Malware can carry DDoS attack mechanisms; one of the better-known examples of this was MyDoom. Its DoS
mechanism was triggered on a specific date and time. This type of DDoS involved hardcoding the target IP address
prior to release of the malware and no further interaction was necessary to launch the attack.
A system may also be compromised with a trojan, allowing the attacker to download a zombie agent (or the trojan
may contain one). Attackers can also break into systems using automated tools that exploit flaws in programs that
listen for connections from remote hosts. This scenario primarily concerns systems acting as servers on the web.
Stacheldraht is a classic example of a DDoS tool. It utilizes a layered structure where the attacker uses a client
program to connect to handlers, which are compromised systems that issue commands to the zombie agents, which
in turn facilitate the DDoS attack. Agents are compromised via the handlers by the attacker, using automated
routines to exploit vulnerabilities in programs that accept remote connections running on the targeted remote hosts.
Each handler can control up to a thousand agents.[12]
These collections of compromised systems are known as botnets. DDoS tools like Stacheldraht still use classic DoS
attack methods centered on IP spoofing and amplification like smurf attacks and fraggle attacks (these are also
known as bandwidth consumption attacks). SYN floods (also known as resource starvation attacks) may also be
used. Newer tools can use DNS servers for DoS purposes. See next section.
Simple attacks such as SYN floods may appear with a wide range of source IP addresses, giving the appearance of a
well distributed DDoS. These flood attacks do not require completion of the TCP three way handshake and attempt
to exhaust the destination SYN queue or the server bandwidth. Because the source IP addresses can be trivially
spoofed, an attack could come from a limited set of sources, or may even originate from a single host. Stack
enhancements such as SYN cookies may be effective mitigation against SYN queue flooding; however, complete
bandwidth exhaustion may require the involvement of upstream network providers.
Unlike MyDoom's DDoS mechanism, botnets can be turned against any IP address. Script kiddies use them to deny
the availability of well known websites to legitimate users.[2] More sophisticated attackers use DDoS tools for the
purposes of extortion — even against their business rivals.[13]
It is important to note the difference between a DDoS and DoS attack. If an attacker mounts an attack from a single
host it would be classified as a DoS attack. In fact, any attack against availability would be classed as a Denial of
Service attack. On the other hand, if an attacker uses a thousand systems to simultaneously launch smurf attacks
against a remote host, this would be classified as a DDoS attack.
The major advantages to an attacker of using a distributed denial-of-service attack are that multiple machines can
generate more attack traffic than one machine, multiple attack machines are harder to turn off than one attack
machine, and that the behavior of each attack machine can be stealthier, making it harder to track down and shut
down. These attacker advantages cause challenges for defense mechanisms. For example, merely purchasing more
incoming bandwidth than the current volume of the attack might not help, because the attacker might be able to
simply add more attack machines.
Reflected attack
A distributed reflected denial of service attack (DRDoS) involves sending forged requests of some type to a very
large number of computers that will reply to the requests. Using Internet protocol spoofing, the source address is set
to that of the targeted victim, which means all the replies will go to (and flood) the target.
ICMP Echo Request attacks (Smurf Attack) can be considered one form of reflected attack, as the flooding host(s)
send Echo Requests to the broadcast addresses of mis-configured networks, thereby enticing many hosts to send
Echo Reply packets to the victim. Some early DDoS programs implemented a distributed form of this attack.
Many services can be exploited to act as reflectors, some harder to block than others.[14] DNS amplification attacks
involve a newer mechanism that increases the amplification effect, using a much larger list of DNS servers than seen
earlier.[15]
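The strength of a reflection attack is commonly expressed as an amplification factor: the ratio between the size of the reflected reply and the size of the spoofed request that elicited it. A minimal sketch, using illustrative packet sizes rather than measured values:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Ratio of the reflected reply size to the spoofed request size."""
    return response_bytes / request_bytes

# Illustrative numbers: a small DNS query eliciting a large answer record
# multiplies the attacker's outbound bandwidth as seen at the victim.
dns_example = amplification_factor(request_bytes=60, response_bytes=3000)
```

With these assumed sizes the attacker's traffic is multiplied 50-fold, which is why small-query/large-answer services make attractive reflectors.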
Degradation-of-service attacks
"Pulsing" zombies are compromised computers that are directed to launch intermittent and short-lived floods of
victim websites, with the intent of merely slowing them rather than crashing them. This type of attack, referred to as
"degradation-of-service" rather than "denial-of-service", can be more difficult to detect than regular zombie
invasions and can disrupt and hamper connections to websites for prolonged periods of time, potentially causing more
damage than concentrated floods.[16] [17] Detecting degradation-of-service attacks is complicated further by the
problem of discerning whether the traffic really is an attack or just a healthy, and likely desired, increase in website
traffic.[18]
Unintentional denial of service
Also known as VIPDoS.
This describes a situation in which a website's service is effectively denied, not due to a deliberate attack by a single
individual or group of individuals, but simply due to a sudden enormous spike in popularity. This can happen when an extremely
popular website posts a prominent link to a second, less well-prepared site, for example, as part of a news story. The
result is that a significant proportion of the primary site's regular users — potentially hundreds of thousands of
people — click that link in the space of a few hours, having the same effect on the target website as a DDoS attack.
An example of this occurred when Michael Jackson died in 2009. Websites such as Google and Twitter slowed down
or even crashed. Many sites' servers thought the requests were from a virus or spyware trying to cause a Denial of
Service attack, warning users that their queries looked like "automated requests from a computer virus or spyware
application".
News sites and link sites — sites whose primary function is to provide links to interesting content elsewhere on the
Internet — are most likely to cause this phenomenon. The canonical example is the Slashdot effect. Sites such as
Digg, the Drudge Report, Fark, Something Awful, and the webcomic Penny Arcade have their own corresponding
"effects", known as "the Digg effect", being "drudged", "farking", "goonrushing", and "wanging", respectively.
Routers have also been known to create unintentional DoS attacks, as both D-Link and Netgear routers have
committed NTP vandalism by flooding NTP servers without respecting the restrictions of client types or geographical
limitations.
Similar unintentional denials of service can also occur via other media, e.g. when a URL is mentioned on television.
If a server is being indexed by Google or another search engine during peak periods of activity, or does not have a lot
of available bandwidth while being indexed, it can also experience the effects of a DoS attack.
Legal action has been taken in at least one such case. In 2006, Universal Tube & Rollform Equipment Corporation
sued YouTube: massive numbers of would-be youtube.com users accidentally typed the tube company's URL,
utube.com. As a result, the tube company ended up having to spend large amounts of money on upgrading their
bandwidth.[19]
Denial-of-Service Level II
The goal of a DoS L2 (possibly DDoS) attack is to trigger the launch of a defense mechanism that blocks the
network segment from which the attack originated. In the case of a distributed attack, or one using IP header
modification (depending on the kind of security behavior), this will fully cut the attacked network off from the
Internet, but without a system crash.
Blind denial of service
In a blind denial of service attack, the attacker has a significant advantage. If the attacker must be able to receive
traffic from the victim, then the attacker must either subvert the routing fabric or use his own IP address.
Either provides an opportunity for the victim to track the attacker and/or filter out his traffic. With a blind attack the
attacker uses forged IP addresses, making it extremely difficult for the victim to filter out those packets. The TCP
SYN flood attack is an example of a blind attack. Designers should make every attempt possible to prevent blind
denial of service attacks.[20]
Incidents
• The first major attack involving DNS servers as reflectors occurred in January 2001. The target was
Register.com.[21] This attack, which forged requests for the MX records of AOL.com (to amplify the attack),
lasted about a week before it could be traced back to all attacking hosts and shut off. It used a list of tens of
thousands of DNS records that were a year old at the time of the attack.
• In February, 2001, the Irish Government's Department of Finance server was hit by a denial of service attack
carried out as part of a student campaign from NUI Maynooth. The Department officially complained to the
University authorities and a number of students were disciplined.
• In July 2002, the Honeynet Project Reverse Challenge was issued.[22] The binary that was analyzed turned out to
be yet another DDoS agent, which implemented several DNS related attacks, including an optimized form of a
reflection attack.
• On two occasions to date, attackers have performed DNS Backbone DDoS Attacks on the DNS root servers.
Since these machines are intended to provide service to all Internet users, these two denial of service attacks
might be classified as attempts to take down the entire Internet, though it is unclear what the attackers' true
motivations were. The first occurred in October 2002 and disrupted service at 9 of the 13 root servers. The second
occurred in February 2007 and caused disruptions at two of the root servers.[23]
• In February 2007, more than 10,000 online game servers in games such as Return to Castle Wolfenstein, Halo,
Counter-Strike and many others were attacked by the hacker group RUS. The DDoS attack was made from more
than a thousand computer units located in the republics of the former Soviet Union, mostly from Russia,
Uzbekistan and Belarus. Minor attacks continue to be made today.
• In the weeks leading up to the five-day 2008 South Ossetia war, a DDoS attack directed at Georgian government
sites, containing the message "win+love+in+Rusia", effectively overloaded and shut down multiple Georgian
servers. Websites targeted included the web site of the Georgian president, Mikhail Saakashvili, rendered
inoperable for 24 hours, and the National Bank of Georgia. While heavy suspicion was placed on Russia for
orchestrating the attack through a proxy (the St. Petersburg-based criminal gang known as the Russian Business
Network, or R.B.N.), the Russian government denied the allegations, stating that it was possible that individuals in
Russia or elsewhere had taken it upon themselves to start the attacks.[24]
• During the 2009 Iranian election protests, foreign activists seeking to help the opposition engaged in DDoS
attacks against Iran's government. The official website of the Iranian government (ahmedinejad.ir [25]) was
rendered inaccessible on several occasions.[26] Critics claimed that the DDoS attacks also cut off internet access
for protesters inside Iran; activists countered that, while this may have been true, the attacks still hindered
President Mahmoud Ahmadinejad's government enough to aid the opposition.
• On June 25, 2009, the day Michael Jackson died, the spike in searches related to Michael Jackson was so big that
Google News initially mistook it for an automated attack. As a result, for about 25 minutes, when some people
searched Google News they saw a "We're sorry" page before finding the articles they were looking for.[27]
• In June 2009 the P2P site The Pirate Bay was rendered inaccessible due to a DDoS attack. This was most likely
provoked by the recent sale to Global Gaming Factory X AB, which was seen as a "take the money and run"
solution to the website's legal issues.[28] In the end, due to the buyer's financial troubles, the site was not sold.
• Multiple waves of July 2009 cyber attacks targeted a number of major websites in South Korea and the United
States. The attack used a botnet, and file updates over the Internet are known to have assisted its spread. As it
turned out, the trojan involved was coded to scan for existing MyDoom bots. MyDoom was a worm first seen in
2004, and in July 2009 around 20,000-50,000 infected hosts were still present. MyDoom has a backdoor, which
the DDoS bot could exploit. The DDoS bot subsequently removed itself and completely formatted the infected
hosts' hard drives. Most of the bots originated from China and North Korea.
• On August 6, 2009 several social networking sites, including Twitter, Facebook, Livejournal, and Google
blogging pages were hit by DDoS attacks, apparently aimed at Georgian blogger "Cyxymu". Although Google
came through with only minor setbacks, these attacks left Twitter crippled for hours, and Facebook did eventually
restore service, although some users still experienced trouble. Twitter's site latency has continued to improve,
however some web requests continue to fail.[29] [30] [31]
Performing DoS attacks
A wide array of programs are used to launch DoS attacks. Most of these programs are completely focused on
performing DoS attacks, while others are also true packet injectors, able to perform other tasks as well.
Some examples of such tools are hping, Java socket programming, and httping, but these are not the only programs
capable of such attacks. Such tools are intended for benign use, but they can also be utilized in launching attacks on
victim networks. In addition to these tools, a vast number of underground tools are used by attackers.[32]
Prevention and response
Firewalls
Firewalls have simple rules such as to allow or deny protocols, ports, or IP addresses. Some DoS attacks are too
complex for today's firewalls; e.g., if there is an attack on port 80 (web service), firewalls cannot prevent that attack
because they cannot distinguish good traffic from DoS attack traffic. Additionally, firewalls sit too deep in the
network hierarchy: routers may be affected even before the firewall receives the traffic. Nonetheless, firewalls can
effectively prevent users from launching simple flooding-type attacks from machines behind the firewall.
Some stateful firewalls, like OpenBSD's pf, can act as a proxy for connections: the handshake is validated (with the
client) instead of simply forwarding the packet to the destination. pf is available for other BSDs as well. In that
context, it is called "synproxy".[33]
Switches
Most switches have some rate-limiting and ACL capability. Some switches provide automatic and/or system-wide
rate limiting, traffic shaping, delayed binding (TCP splicing), deep packet inspection and Bogon filtering (bogus IP
filtering) to detect and remediate denial of service attacks through automatic rate filtering and WAN Link failover
and balancing.
These schemes work only against the attacks they are designed to handle. For example, a SYN flood can be prevented
using delayed binding or TCP splicing; similarly, content-based DoS can be prevented using deep packet inspection,
and attacks originating from or directed at dark addresses can be prevented using bogon filtering. Automatic rate
filtering works as long as the rate thresholds have been set correctly and granularly, and WAN-link failover works as
long as both links have a DoS/DDoS prevention mechanism.
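The rate limiting mentioned above is often modeled as a token bucket: each forwarded packet consumes a token, and tokens refill at a fixed rate up to a burst size. A deterministic sketch (time is passed in explicitly for clarity; real switches implement this in hardware, per port or per flow):

```python
class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate = rate      # tokens added per second
        self.burst = burst    # maximum bucket size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True       # forward the packet
        return False          # drop: traffic exceeds the configured rate

bucket = TokenBucket(rate=10.0, burst=2.0)
burst_results = [bucket.allow(0.0) for _ in range(3)]  # flood at t=0
later_result = bucket.allow(0.1)                       # after a short pause
```

A flood exhausts the bucket after the configured burst and is dropped, while traffic at or below the sustained rate always finds a token.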
Routers
Similar to switches, routers have some rate-limiting and ACL capability. They, too, are manually configured. Most
routers can be easily overwhelmed by a DoS attack, and adding rules to export flow statistics from the router during
an attack slows the router down further and complicates the matter. Cisco IOS has features that help prevent
flooding (see example settings).[34]
Application front end hardware
Application front end hardware is intelligent hardware placed on the network before traffic reaches the servers. It can
be used on networks in conjunction with routers and switches. Application front end hardware analyzes data packets
as they enter the system, and then identifies them as priority, regular, or dangerous. There are more than 25
bandwidth management vendors. Hardware acceleration is key to bandwidth management. Look for granularity of
bandwidth management, hardware acceleration, and automation while selecting an appliance.
IPS based prevention
Intrusion-prevention systems (IPS) are effective if the attacks have signatures associated with them. However, the
trend among the attacks is to have legitimate content but bad intent. Intrusion-prevention systems which work on
content recognition cannot block behavior-based DoS attacks.
An ASIC based IPS can detect and block denial of service attacks because they have the processing power and the
granularity to analyze the attacks and act like a circuit breaker in an automated way.
A rate-based IPS (RBIPS) must analyze traffic granularly, continuously monitor the traffic pattern, and determine
whether there is a traffic anomaly. It must let legitimate traffic flow while blocking the DoS attack traffic.
Prevention via proactive testing
Test platforms such as Mu Dynamics' Service Analyzer are available to perform simulated denial-of-service attacks
that can be used to evaluate defensive mechanisms such as IPS and RBIPS, as well as the popular denial-of-service
mitigation products from Arbor Networks. An example of proactive testing of denial-of-service throttling
capabilities in a switch was performed in 2008: the Juniper EX 4200 [35] switch with integrated denial-of-service
throttling was tested by Network Test [36] and the resulting review [37] was published in Network World [38].
Blackholing/Sinkholing
With blackholing, all the traffic to the attacked DNS name or IP address is sent to a "black hole" (a null interface,
non-existent server, etc.). To be more efficient and to avoid affecting network connectivity, blackholing can be
managed by the ISP.[39]
Sinkholing routes traffic to a valid IP address which analyzes it and rejects bad packets. Sinkholing is not efficient
for the most severe attacks.
Clean pipes
All traffic is passed through a "cleaning center" via a proxy, which separates out "bad" traffic (DDoS and also other
common internet attacks) and forwards only good traffic on to the server. The provider needs central connectivity
to the Internet to manage this kind of service.[40]
Prolexic and Verisign are examples of providers of this service.[41] [42]
Side effects of DoS attacks
Backscatter
In computer network security, backscatter is a side-effect of a spoofed denial of service (DoS) attack. In this kind of
attack, the attacker spoofs (or forges) the source address in IP packets sent to the victim. In general, the victim
machine can not distinguish between the spoofed packets and legitimate packets, so the victim responds to the
spoofed packets as it normally would. These response packets are known as backscatter.
If the attacker is spoofing source addresses randomly, the backscatter response packets from the victim will be sent
back to random destinations. This effect can be used by network telescopes as indirect evidence of such attacks.
The term "backscatter analysis" refers to observing backscatter packets arriving at a statistically significant portion
of the IP address space to determine characteristics of DoS attacks and victims.
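The sampling logic behind backscatter analysis can be stated in a few lines: if spoofed sources are drawn uniformly at random from the IPv4 space, a telescope covering a /p prefix observes a fraction 2^-p of the victim's responses, so the total rate can be estimated by scaling up. A sketch under that uniformity assumption:

```python
def estimated_victim_response_rate(observed_pps: float, telescope_prefix: int) -> float:
    """Scale the rate seen at a network telescope (a /telescope_prefix block)
    up to the victim's total backscatter rate, assuming the attacker spoofs
    source addresses uniformly over the whole IPv4 space."""
    fraction_observed = 2.0 ** -telescope_prefix  # e.g. a /8 sees 1/256 of IPv4
    return observed_pps / fraction_observed
```

For example, 100 backscatter packets per second arriving at a /8 telescope would suggest the victim is answering roughly 25,600 spoofed packets per second in total.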
An educational animation describing such backscatter can be found on the animations page [43] maintained by the
Cooperative Association for Internet Data Analysis.
Denial-of-service attacks and the law
In the Police and Justice Act 2006, the United Kingdom specifically outlawed denial-of-service attacks and set a
maximum penalty of 10 years in prison.[44]
In the US, they can be a serious federal crime under the National Information Infrastructure Protection Act of 1996
with penalties that include years of imprisonment, and many countries have similar laws.
See also
• Billion laughs
• Black fax
• Cybercrime
• Dosnet
• Industrial espionage
• Intrusion-detection system
• Network intrusion detection system
• Regular expression Denial of Service - ReDoS
• Wireless signal jammer
External links
• RFC 4732 Internet Denial-of-Service Considerations
• W3C The World Wide Web Security FAQ [45]
• Understanding and surviving DDoS attacks [46]
• cert.org [47] CERT's Guide to DoS attacks.
• ATLAS Summary Report [48] - Real-time global report of DDoS attacks.
• linuxsecurity.com [49] An article on preventing DDoS attacks.
• Is Your PC a Zombie? [50], About.com.
References
[1] Yuval, Fledel. Uri, Kanonov. Yuval, Elovici. Shlomi, Dolev. Chanan, Glezer. "Google Android: A Comprehensive Security Assessment".
IEEE Security & Privacy (IEEE) (in press). doi:10.1109/MSP.2010.2. ISSN 1540-7993.
[2] Phillip Boyle (2000). "SANS Institute - Intrusion Detection FAQ: Distributed Denial of Service Attack Tools: n/a" (http:/ / www. sans. org/
resources/ idfaq/ trinoo. php). SANS Institute. . Retrieved May 2, 2008.
[3] Mindi McDowell (2007). "Cyber Security Tip ST04-015" (http:/ / www. us-cert. gov/ cas/ tips/ ST04-015. html). United States Computer
Emergency Readiness Team. . Retrieved May 2, 2008.
[4] "Types of DDoS Attacks" (http:/ / anml. iu. edu/ ddos/ types. html). 2001. . Retrieved May 2, 2008.
[5] "CERT Advisory CA-1997-28 IP Denial-of-Service Attacks" (http:/ / www. cert. org/ advisories/ CA-1997-28. html). CERT. 1998. .
Retrieved May 2, 2008.
[6] http:/ / www. zdnet. com/ blog/ security/ windows-7-vista-exposed-to-teardrop-attack/ 4222
[7] http:/ / www. microsoft. com/ technet/ security/ advisory/ 975497. mspx
[8] Leyden, John (2008-05-21). "Phlashing attack thrashes embedded systems" (http:/ / www. theregister. co. uk/ 2008/ 05/ 21/ phlashing/ ).
theregister.co.uk. . Retrieved 2009-03-07.
[9] "Permanent Denial-of-Service Attack Sabotages Hardware" (http:/ / www. darkreading. com/ document. asp?doc_id=154270& WT.
svl=news1_1). Dark Reading. 2008. . Retrieved May 19, 2008.
[10] "EUSecWest Applied Security Conference: London, U.K." (http:/ / eusecwest. com/ speakers. html#Smith). EUSecWest. 2008. .
[11] http:/ / eusecwest. com
[12] The "stacheldraht" distributed denial of service attack tool (http:/ / staff. washington. edu/ dittrich/ misc/ stacheldraht. analysis. txt)
[13] US credit card firm fights DDoS attack (http:/ / www. theregister. co. uk/ 2004/ 09/ 23/ authorize_ddos_attack/ )
[14] Paxson, Vern (2001), An Analysis of Using Reflectors for Distributed Denial-of-Service Attacks (http:/ / www. icir. org/ vern/ papers/
reflectors. CCR. 01/ reflectors. html)
[15] Vaughn, Randal and Evron, Gadi (2006), DNS Amplification Attacks (http:/ / www. isotf. org/ news/ DNS-Amplification-Attacks. pdf)
[16] Encyclopaedia Of Information Technology. Atlantic Publishers & Distributors. 2007. pp. 397. ISBN 8126907525.
[17] Schwabach, Aaron (2006). Internet and the Law. ABC-CLIO. pp. 325. ISBN 1851097317.
[18] Lu, Xicheng; Wei Zhao (2005). Networking and Mobile Computing. Birkhäuser. pp. 424. ISBN 3540281029.
[19] "YouTube sued by sound-alike site" (http:/ / news. bbc. co. uk/ 2/ hi/ business/ 6108502. stm). BBC News. 2006-11-02. .
[20] "RFC 3552 - Guidelines for Writing RFC Text on Security Considerations" (http:/ / www. faqs. org/ rfcs/ rfc3552. html). July 2003. .
[21] January 2001 thread on the UNISOG mailing list (http:/ / staff. washington. edu/ dittrich/ misc/ ddos/ register. com-unisog. txt)
[22] Honeynet Project Reverse Challenge (http:/ / old. honeynet. org/ reverse/ index. html)
[23] "Factsheet - Root server attack on 6 February 2007" (http:/ / www. icann. org/ announcements/ factsheet-dns-attack-08mar07. pdf). ICANN.
2007-03-01. . Retrieved 2009-08-01.
[24] Markoff, John. "Before the Gunfire, Cyberattacks" (http:/ / www. nytimes. com/ 2008/ 08/ 13/ technology/ 13cyber. html?em). The New
York Times. . Retrieved 2008-08-12.
[25] http:/ / www. ahmadinejad. ir/
[26] Shachtman, Noah (2009-06-15). "Activists Launch Hack Attacks on Tehran Regime" (http:/ / www. wired. com/ dangerroom/ 2009/ 06/
activists-launch-hack-attacks-on-tehran-regime/ ). Wired. . Retrieved 2009-06-15.
[27] Outpouring of searches for the late Michael Jackson (http:/ / googleblog. blogspot. com/ 2009/ 06/ outpouring-of-searches-for-late-michael.
html), June 26, 2009, Official Google Blog
[28] Pirate Bay Hit With DDoS Attack After "Selling Out" (http:/ / www. tomshardware. com/ news/ Pirate-Bay-DDoS-Sell-Out,8173. html),
8:01 AM - July 1, 2009, by Jane McEntegart - Tom's Hardware
[29] Ongoing denial-of-service attack (http:/ / status. twitter. com/ post/ 157191978/ ongoing-denial-of-service-attack), August 6, 2009, Twitter
Status Blog
[30] Facebook Down. Twitter Down. Social Media Meltdown. (http:/ / mashable. com/ 2009/ 08/ 06/ facebook-down-3/ ), August 6, 2009, By
Pete Cashmore, Mashable
[31] Wortham, Jenna; Kramer, Andrew E. (August 8, 2009). "Professor Main Target of Assault on Twitter" (http:/ / www. nytimes. com/ 2009/
08/ 08/ technology/ internet/ 08twitter. html?_r=1& hpw). New York Times. . Retrieved 2009-08-07.
[32] Managing WLAN Risks with Vulnerability Assessment (http:/ / www. airmagnet. com/ assets/ whitepaper/
WLAN_Vulnerabilities_White_Paper. pdf), August 5, 2008, By Lisa Phifer:Core Competence, Inc. ,Technology Whitepaper, AirMagnet, Inc.
[33] Froutan, Paul (June 24, 2004). "How to defend against DDoS attacks" (http:/ / www. computerworld. com/ s/ article/ 94014/
How_to_defend_against_DDoS_attacks). Computerworld. . Retrieved May 15, 2010.
[34] "Some IoS tips for Internet Service (Providers)" (http:/ / mehmet. suzen. googlepages. com/ qos_ios_dos_suzen2005. pdf) (Mehmet Suzen)
[35] http:/ / www. juniper. net/ products_and_services/ ex_series/ index. html
[36] http:/ / www. networktest. com
[37] http:/ / www. networkworld. com/ reviews/ 2008/ 071408-test-juniper-switch. html
[38] http:/ / www. networkworld. com/ reviews/ 2008/ 071408-test-juniper-switch-how. html
[39] Distributed Denial of Service Attacks (http:/ / www. cisco. com/ web/ about/ ac123/ ac147/ archived_issues/ ipj_7-4/ dos_attacks. html), by
Charalampos Patrikakis, Michalis Masikos, and Olga Zouraraki, The Internet Protocol Journal - Volume 7, Number 4, National Technical
University of Athens, Cisco Systems Inc
[40] "DDoS Mitigation via Regional Cleaning Centers (Jan 2004)" (https:/ / research. sprintlabs. com/ publications/ uploads/ RR04-ATL-013177.
pdf)
[41] "VeriSign Rolls Out DDoS Monitoring Service" (http:/ / www. darkreading. com/ securityservices/ security/ intrusion-prevention/
showArticle. jhtml?articleID=219900002)
[42] "DDoS: A Threat That's More Common Than You Think" (http:/ / developertutorials-whitepapers. tradepub. com/ free/ w_verb09/ prgm.
cgi)
[43] http:/ / www. caida. org/ publications/ animations/
[44] U.K. outlaws denial-of-service attacks (http:/ / news. cnet. com/ U. K. -outlaws-denial-of-service-attacks/ 2100-7348_3-6134472. html),
November 10, 2006, By Tom Espiner - CNET News
[45] http:/ / www. w3. org/ Security/ Faq/ wwwsf6. html
[46] http:/ / www. armoraid. com/ survive/
[47] http:/ / www. cert. org/ tech_tips/ denial_of_service. html
[48] http:/ / atlas. arbor. net/ summary/ dos
[49] http:/ / www. linuxsecurity. com/ content/ view/ 121960/ 49/
[50] http:/ / antivirus. about. com/ od/ whatisavirus/ a/ zombiepc. htm
Spam (electronic)
Spam is the use of electronic messaging
systems (including most broadcast media,
digital delivery systems) to send unsolicited
bulk messages indiscriminately. While the
most widely recognized form of spam is
e-mail spam, the term is applied to similar
abuses in other media: instant messaging
spam, Usenet newsgroup spam, Web search
engine spam, spam in blogs, wiki spam,
online classified ads spam, mobile phone
messaging spam, Internet forum spam, junk
fax transmissions, social networking spam,
television advertising and file sharing
network spam.
[Image: An email box folder littered with spam messages.]
Spamming remains economically viable because advertisers have no operating costs beyond the management of their
mailing lists, and it is difficult to hold senders accountable for their mass mailings. Because the barrier to entry is so
low, spammers are numerous, and the volume of unsolicited mail has become very high. The costs, such as lost
productivity and fraud, are borne by the public and by Internet service providers, which have been forced to add
extra capacity to cope with the deluge. Spamming is universally reviled, and has been the subject of legislation in
many jurisdictions.[1]
People who create electronic spam are called spammers.[2]
In different media
E-mail
E-mail spam, known as unsolicited bulk e-mail (UBE), junk mail, or unsolicited commercial e-mail (UCE), is the
practice of sending unwanted e-mail messages, frequently with commercial content, in large quantities to an
indiscriminate set of recipients. Spam in e-mail started to become a problem when the Internet was opened up to the
general public in the mid-1990s. It grew exponentially over the following years, and today composes some 80 to
85% of all the email in the world, by a "conservative estimate".[3] Pressure to make e-mail spam illegal has been
successful in some jurisdictions, but less so in others. Spammers take advantage of this fact, and frequently
outsource parts of their operations to countries where spamming will not get them into legal trouble.
Increasingly, e-mail spam today is sent via "zombie networks", networks of virus- or worm-infected personal
computers in homes and offices around the globe; many modern worms install a backdoor which allows the
spammer access to the computer and use it for malicious purposes. This complicates attempts to control the spread of
spam, as in many cases the spam doesn't even originate from the spammer. In November 2008 an ISP, McColo,
which was providing service to botnet operators, was depeered and spam dropped 50%-75% Internet-wide. At the
same time, it is becoming clear that malware authors, spammers, and phishers are learning from each other, and
possibly forming various kinds of partnerships.
An industry of e-mail address harvesting is dedicated to collecting email addresses and selling compiled databases.[4]
Some of these address harvesting approaches rely on users not reading the fine print of agreements, resulting in them
agreeing to send messages indiscriminately to their contacts. This is a common approach in social networking spam
such as that generated by the social networking site Quechup.[5]
Spam (electronic)
Instant Messaging
Instant Messaging spam, known also as spim (a portmanteau of spam and IM, short for instant messaging), makes
use of instant messaging systems. Although less ubiquitous than its e-mail counterpart, spam is reaching more users
all the time. According to a report from Ferris Research, 500 million spam IMs were sent in 2003, twice the level of
2002. As instant messaging tends not to be blocked by firewalls, it is an especially useful channel for spammers.
One way to protect yourself against spammers is to accept messages only from people on your friends list. Many
email services now offer spam filtering (junk mail), and some instant messaging providers offer hints and tips on
avoiding email spam and spim (BT Yahoo, for example).
Newsgroup and forum
Newsgroup spam is a type of spam where the targets are Usenet newsgroups. Spamming of Usenet newsgroups
actually pre-dates e-mail spam. Usenet convention defines spamming as excessive multiple posting, that is, the
repeated posting of a message (or substantially similar messages). The prevalence of Usenet spam led to the
development of the Breidbart Index as an objective measure of a message's "spamminess".
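The Breidbart Index can be stated compactly: for each copy of a message, take the square root of the number of newsgroups that copy was crossposted to, and sum over all copies. A minimal sketch (the cancel threshold conventionally used on Usenet is quoted from community practice, not from this article):

```python
import math

def breidbart_index(crosspost_counts):
    """crosspost_counts: for each separate copy of the message, the number
    of newsgroups that copy was crossposted to."""
    return sum(math.sqrt(n) for n in crosspost_counts)

# Two copies, each crossposted to 9 groups: BI = 3 + 3 = 6.
bi = breidbart_index([9, 9])
```

The square root penalizes crossposting less severely than posting separate copies, so twenty single-group copies score 20 while one copy crossposted to twenty groups scores only about 4.5.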
Forum spam is the creation of messages that are advertisements, abusive, or otherwise unwanted on Internet forums.
It is generally done by automated spambots. Most forum spam consists of links to external sites, with the dual goals
of increasing search engine visibility in highly competitive areas such as weight loss, pharmaceuticals, gambling,
pornography, real estate or loans, and generating more traffic for these commercial websites. Some of these links
contain code to track the spambot's identity if a sale goes through, when the spammer behind the spambot works on
commission.
Mobile phone
Mobile phone spam is directed at the text messaging service of a mobile phone. This can be especially irritating to
customers not only for the inconvenience but also because of the fee they may be charged per text message received
in some markets. The term "SpaSMS" was coined at the adnews website Adland in 2000 to describe spam SMS.
Online game messaging
Many online games allow players to contact each other via player-to-player messaging, chat rooms, or public
discussion areas. What qualifies as spam varies from game to game, but usually this term applies to all forms of
message flooding, violating the terms of service contract for the website. This is particularly common in MMORPGs
where the spammers are trying to sell game-related "items" for real-world money; chief among these items is
in-game currency. This kind of spamming is also called Real Money Trading (RMT). In the popular MMORPG
World of Warcraft, it is common for spammers to advertise sites that sell gold in multiple methods of spam. They
send spam via the in-game private messaging system, via the in-game mailing system, by yelling publicly to
everyone in the area, and by creating many characters and committing suicide (with hacks) so that the row of
bodies resembles a site URL which takes the user to a gold-selling website. All of these spam methods can interfere
with the user's gameplay experience and this is one reason why spam is discouraged by game developers.
Spam targeting search engines (spamdexing)
Spamdexing (a portmanteau of spamming and indexing) refers to a practice on the World Wide Web of modifying
HTML pages to increase the chances of them being placed high on search engine relevancy lists. These sites use
"black hat search engine optimization (SEO) techniques" to unfairly increase their rank in search engines. Many
modern search engines modified their search algorithms to try to exclude web pages utilizing spamdexing tactics.
For example, the search bots will detect repeated keywords as spamming by using a grammar analysis. If a website
owner is found to have spammed the webpage to falsely increase its page rank, the website may be penalized by
search engines.
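The repeated-keyword detection described above can be approximated by a crude keyword-density check (a hypothetical heuristic with an arbitrary threshold; production search engines use far more sophisticated grammar and link analysis):

```python
def keyword_density(text: str, keyword: str) -> float:
    """Fraction of the words on a page equal to the given keyword."""
    words = text.lower().split()
    return words.count(keyword.lower()) / len(words) if words else 0.0

def looks_spammy(text: str, keyword: str, threshold: float = 0.2) -> bool:
    # Flag pages where a single keyword makes up an outsized share of the text.
    return keyword_density(text, keyword) > threshold
```

The 0.2 threshold is purely illustrative; the idea is only that keyword stuffing produces a density no naturally written page would reach.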
Blog, wiki, and guestbook
Blog spam, or "blam" for short, is spamming on weblogs. In 2003, this type of spam took advantage of the open
nature of comments in the blogging software Movable Type by repeatedly placing comments to various blog posts
that provided nothing more than a link to the spammer's commercial web site.[6] Similar attacks are often performed
against wikis and guestbooks, both of which accept user contributions.
Spam targeting video sharing sites
Video sharing sites, such as YouTube, are now being frequently targeted by spammers. The most common technique
involves people (or spambots) posting links to sites, most likely pornographic or dealing with online dating, on the
comments section of random videos or people's profiles. Another frequently used technique is using bots to post
messages on random users' profiles to a spam account's channel page, along with enticing text and images, usually of
a sexually suggestive nature. These pages may include their own or other users' videos, again often suggestive. The
main purpose of these accounts is to draw people to their link in the home page section of their profile. YouTube has
blocked the posting of such links. In addition, YouTube has implemented a CAPTCHA system that makes rapid
posting of repeated comments much more difficult than before, because of abuse in the past by mass-spammers who
would flood people's profiles with thousands of repetitive comments.
Yet another kind is actual video spam, giving the uploaded movie a name and description with a popular figure or
event which is likely to draw attention, or within the video has a certain image timed to come up as the video's
thumbnail image to mislead the viewer. The actual content of the video ends up being totally unrelated, a Rickroll,
sometimes offensive, or just features on-screen text of a link to the site being promoted.[7] Others may upload videos
presented in an infomercial-like format selling their product which feature actors and paid testimonials, though the
promoted product or service is of dubious quality and would likely not pass the scrutiny of a standards and practices
department at a television station or cable network.
Noncommercial forms
E-mail and other forms of spamming have been used for purposes other than advertisements. Many early Usenet
spams were religious or political. Serdar Argic, for instance, spammed Usenet with historical revisionist screeds. A
number of evangelists have spammed Usenet and e-mail media with preaching messages. A growing number of
criminals are also using spam to perpetrate various sorts of fraud,[8] and in some cases have used it to lure people to
locations where they have been kidnapped, held for ransom, and even murdered.[9]
Geographical origins
A 2009 Cisco Systems report lists the origin of spam by country as follows:[10]
(trillions of spam messages per year)
1. Brazil: 7.7;
2. USA: 6.6;
3. India: 3.6;
4. South Korea: 3.1;
5. Turkey: 2.6;
6. Vietnam: 2.5;
7. China: 2.4;
8. Poland: 2.4;
9. Russia: 2.3;
10. Argentina: 1.5.
History
Pre-Internet
In the late 19th Century Western Union allowed telegraphic messages on its network to be sent to multiple
destinations. The first recorded instance of a mass unsolicited commercial telegram is from May 1864.[11] Up until
the Great Depression wealthy North American residents would be deluged with nebulous investment offers. This
problem never fully emerged in Europe to the degree that it did in the Americas, because telegraphy was regulated
by national post offices in the European region.
Etymology
According to the Internet Society and other sources, the term spam is derived from the 1970 Spam sketch of the BBC
television comedy series "Monty Python's Flying Circus".[12] [13] The sketch is set in a cafe where nearly every item
on the menu includes Spam canned luncheon meat. As the waiter recites the Spam-filled menu, a chorus of Viking
patrons drowns out all conversations with a song repeating "Spam, Spam, Spam, Spam... lovely Spam! wonderful
Spam!", hence "Spamming" the dialogue. The excessive amount of Spam mentioned in the sketch is a reference to
the preponderance of imported canned meat products in the United Kingdom, particularly corned beef from
Argentina, in the years after World War II, as the country struggled to rebuild its agricultural base. Spam captured a
large slice of the British market within lower economic classes and became a byword among British schoolboys of
the 1960s for low-grade fodder due to its commonality, monotonous taste and cheap price, hence the humour of the
Python sketch.
In the 1980s the term was adopted to describe certain abusive users who frequented BBSs and MUDs, who would
repeat "Spam" a huge number of times to scroll other users' text off the screen.[14] In early chat-room services such as
PeopleLink, and in the early days of AOL, users flooded the screen with quotes from the Monty Python Spam
sketch. With internet connections over phone lines, typically running at 1200 or even 300 baud, it could take an
enormous amount of time for a spammy logo, drawn in ASCII art, to scroll to completion on a viewer's terminal.
Sending an irritating, large, meaningless block of text in this way was called spamming. This was used as a tactic by
insiders of a group that wanted to drive newcomers out of the room so the usual conversation could continue. It was
also used to prevent members of rival groups from chatting—for instance, Star Wars fans often invaded Star Trek
chat rooms, filling the space with blocks of text until the Star Trek fans left.[15] This act, previously called flooding
or trashing, came to be known as spamming.[16] The term was soon applied to a large amount of text broadcast by
many users.
It later came to be used on Usenet to mean excessive multiple posting—the repeated posting of the same message.
The unwanted message would appear in many if not all newsgroups, just as Spam appeared in nearly all the menu
items in the Monty Python sketch. The first usage of this sense was by Joel Furr[17] in the aftermath of the ARMM
incident of March 31, 1993, in which a piece of experimental software released dozens of recursive messages onto
the news.admin.policy newsgroup.[18] This use had also become established—to spam Usenet was flooding
newsgroups with junk messages. The word was also attributed to the flood of "Make Money Fast" messages that
clogged many newsgroups during the 1990s. In 1998, the New Oxford Dictionary of English, which had previously
only defined "spam" in relation to the trademarked food product, added a second definition to its entry for "spam":
"Irrelevant or inappropriate messages sent on the Internet to a large number of newsgroups or users."[19]
There are several popular false etymologies of the word "spam". One, promulgated by early spammers Laurence
Canter and Martha Siegel, is that "spamming" is what happens when one dumps a can of Spam luncheon meat into a
fan blade. Some others are the backronyms "shit posing as mail" and "stupid pointless annoying messages."
History of Internet forms
The earliest documented spam was a message advertising the availability of a new model of Digital Equipment
Corporation computers sent to 393 recipients on ARPANET in 1978, by Gary Thuerk.[17] [20] [21] The term "spam"
for this practice had not yet been applied. Spamming had been practiced as a prank by participants in multi-user
dungeon games, to fill their rivals' accounts with unwanted electronic junk.[21] The first known electronic chain
letter, titled Make Money Fast, was released in 1988.
The first major commercial spam incident started on March 5, 1994, when a husband and wife team of lawyers,
Laurence Canter and Martha Siegel, began using bulk Usenet posting to advertise immigration law services. The
incident was commonly termed the "Green Card spam", after the subject line of the postings. Defiant in the face of
widespread condemnation, the attorneys claimed their detractors were hypocrites or "zealouts", claimed they had a
free speech right to send unwanted commercial messages, and labeled their opponents "anti-commerce radicals." The
couple wrote a controversial book entitled How to Make a Fortune on the Information Superhighway.[21]
Later that year a poster operating under the alias Serdar Argic posted antagonistic messages denying the Armenian
Genocide to tens of thousands of Usenet discussions that had been searched for the word Turkey. Within a few years,
the focus of spamming (and anti-spam efforts) moved chiefly to e-mail, where it remains today.[14] Arguably, the
aggressive email spamming by a number of high-profile spammers such as Sanford Wallace of Cyber Promotions in
the mid-to-late 1990s contributed to making spam predominantly an email phenomenon in the public mind. By 2009,
the majority of spam sent around the world was in the English language; spammers began using automatic
translation services to send spam in other languages.[22]
Trademark issues
Hormel Foods Corporation, the maker of Spam luncheon meat, does not object to the Internet use of the term
"spamming". However, they did ask that the capitalized word "Spam" be reserved to refer to their product and
trademark.[23] By and large, this request is obeyed in forums which discuss spam. In Hormel Foods v SpamArrest,
Hormel attempted to prevent SpamArrest, a software company, from using the mark "spam", asserting its trademark
rights. In a dilution claim, Hormel argued that Spam Arrest's use of the term "spam" had endangered and damaged
"substantial goodwill and good reputation" in connection with its trademarked lunch meat and related products.
Hormel also asserted that Spam Arrest's name so closely resembles its luncheon meat that the public might become
confused, or might think that Hormel endorses Spam Arrest's products.
Hormel did not prevail. Attorney Derek Newman responded on behalf of Spam Arrest: "Spam has become
ubiquitous throughout the world to describe unsolicited commercial e-mail. No company can claim trademark rights
on a generic term." Hormel stated on its website: "Ultimately, we are trying to avoid the day when the consuming
public asks, 'Why would Hormel Foods name its product after junk email?'".[24]
Hormel also made two attempts, both dismissed in 2005, to revoke the marks "SPAMBUSTER"[25] and "Spam
Cube".[26] Hormel's Corporate Attorney Melanie J. Neumann also sent SpamCop's Julian Haight a letter on August
27, 1999 requesting that he delete an objectionable image (a can of Hormel's Spam luncheon meat product in a trash
can), change references to UCE spam to all lower-case letters, and confirm his agreement to do so.[27]
Costs
The European Union's Internal Market Commission estimated in 2001 that "junk e-mail" cost Internet users €10
billion per year worldwide.[28] The California legislature found that spam cost United States organizations alone
more than $13 billion in 2007, including lost productivity and the additional equipment, software, and manpower
needed to combat the problem.[29] Spam's direct effects include the consumption of computer and network resources,
and the cost in human time and attention of dismissing unwanted messages.[30]
In addition, spam has costs stemming from the kinds of spam messages sent, from the ways spammers send them,
and from the arms race between spammers and those who try to stop or control spam. There is also the opportunity
cost borne by those who forgo the use of spam-afflicted systems. And there are the direct costs, as well as the
indirect costs borne by the victims: both those related to the spamming itself, and those related to the other crimes
that usually accompany it, such as financial theft, identity theft, data and intellectual property theft, virus and other
malware infection, child pornography, fraud, and deceptive marketing.
The cost to providers of search engines is not insignificant: "The secondary consequence of spamming is that search
engine indexes are inundated with useless pages, increasing the cost of each processed query".[2] The methods of
spammers are likewise costly. Because spamming contravenes the vast majority of ISPs' acceptable-use policies,
most spammers have for many years gone to some trouble to conceal the origins of their spam. E-mail, Usenet, and
instant-message spam are often sent through insecure proxy servers belonging to unwilling third parties. Spammers
frequently use false names, addresses, phone numbers, and other contact information to set up "disposable" accounts
at various Internet service providers. In some cases, they have used falsified or stolen credit card numbers to pay for
these accounts. This allows them to quickly move from one account to the next as each one is discovered and shut
down by the host ISPs.
The costs of spam also include the collateral costs of the struggle between spammers and the administrators and
users of the media threatened by spamming.[31] Many users are bothered by spam because it impinges upon the
amount of time they spend reading their e-mail. Many also find the content of spam frequently offensive, in that
pornography is one of the most frequently advertised products. Spammers send their spam largely indiscriminately,
so pornographic ads may show up in a work place e-mail inbox—or a child's, the latter of which is illegal in many
jurisdictions. Recently, there has been a noticeable increase in spam advertising websites that contain child
pornography.[32]
Some spammers argue that most of these costs could potentially be alleviated by having spammers reimburse ISPs
and persons for their material. There are three problems with this logic: first, the rate of reimbursement they could
credibly budget is not nearly high enough to pay the direct costs; second, the human cost (lost mail, lost time, and
lost opportunities) is basically unrecoverable; and third, spammers often use stolen bank accounts and credit cards to
finance their operations, and would conceivably do so to pay off any fines imposed.
E-mail spam exemplifies a tragedy of the commons: spammers use resources (both physical and human), without
bearing the entire cost of those resources. In fact, spammers commonly do not bear the cost at all. This raises the
costs for everyone. In some ways spam is even a potential threat to the entire e-mail system, as operated in the past.
Since e-mail is so cheap to send, a tiny number of spammers can saturate the Internet with junk mail. Although only
a tiny percentage of their targets are motivated to purchase their products (or fall victim to their scams), the low cost
may provide a sufficient conversion rate to keep the spamming alive. Furthermore, even though spam appears not to
be economically viable as a way for a reputable company to do business, it suffices for professional spammers to
convince a tiny proportion of gullible advertisers that it is viable for those spammers to stay in business. Finally, new
spammers go into business every day, and the low costs allow a single spammer to do a lot of harm before finally
realizing that the business is not profitable.
Some companies and groups "rank" spammers; spammers who make the news are sometimes referred to by these
rankings.[33] [34] The secretive nature of spamming operations makes it difficult to determine how prolific an
individual spammer is, which makes the spammer hard to track, block or avoid. Also, spammers may target different
networks to different extents, depending on how successful they are at attacking the target. Thus considerable
resources are employed to actually measure the amount of spam generated by a single person or group. For example,
victims that use common anti-spam hardware, software or services provide opportunities for such tracking.
Nevertheless, such rankings should be taken with a grain of salt.
General costs
In all cases listed above, including both commercial and non-commercial, "spam happens" because of a positive
cost-benefit analysis result if the cost to recipients is excluded as an externality the spammer can avoid paying.
Cost is the combination of:
• Overhead: The costs and overhead of electronic spamming include bandwidth, developing or acquiring an
email/wiki/blog spam tool, taking over or acquiring a host/zombie, etc.
• Transaction cost: The incremental cost of contacting each additional recipient once a method of spamming is
constructed, multiplied by the number of recipients. (see CAPTCHA as a method of increasing transaction costs)
• Risks: Chance and severity of legal and/or public reactions, including damages and punitive damages
• Damage: Impact on the community and/or communication channels being spammed (see Newsgroup spam)
Benefit is the total expected profit from spam, which may include any combination of the commercial and
non-commercial reasons listed above. It is normally linear, based on the incremental benefit of reaching each
additional spam recipient, combined with the conversion rate. The conversion rate for botnet-generated spam has
recently been measured to be around one in 12,000,000 for pharmaceutical spam and one in 200,000 for infection
sites as used by the Storm botnet.[35]
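Putting the cost components and the linear benefit together, the spammer's expected profit can be written as benefit minus cost. In the sketch below, only the one-in-12,000,000 pharmacy conversion rate comes from the Storm-botnet measurement cited above; every other figure (order value, overhead, per-message cost, campaign size) is an assumption chosen purely for illustration:

```python
def spam_profit(recipients, conversion_rate, revenue_per_conversion,
                overhead, cost_per_message):
    """Expected profit to the spammer: linear benefit (recipients x
    conversion rate x revenue) minus overhead and per-message transaction
    cost. The cost pushed onto recipients is excluded, as in the text."""
    benefit = recipients * conversion_rate * revenue_per_conversion
    cost = overhead + recipients * cost_per_message
    return benefit - cost

# Hypothetical campaign of 350 million messages. Only the pharmacy
# conversion rate (1 in 12,000,000) comes from the Storm measurement
# cited above [35]; every other number is assumed.
profit = spam_profit(recipients=350_000_000,
                     conversion_rate=1 / 12_000_000,
                     revenue_per_conversion=100.0,   # assumed order value
                     overhead=500.0,                 # assumed botnet rental
                     cost_per_message=1e-6)          # assumed sending cost
# Even at that minuscule conversion rate, the campaign clears roughly
# $2,000 under these assumptions, because sending is nearly free.
```

Raising the per-message cost (for example, via CAPTCHAs) or lowering the reachable audience pushes this result negative, which is the balancing point the next paragraph describes.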
Spam is prevalent on the Internet because the transaction cost of electronic communications is radically lower than
for any alternative form of communication, far outweighing the current potential losses, as evidenced by the volume
of spam currently in existence. Spam continues to spread to new forms of electronic communication as the gain
(number of potential recipients) increases to levels where the cost/benefit ratio becomes positive. Spam has most
recently expanded to include wikispam and blogspam as readership levels increase to the point where the overhead
is no longer the dominating factor. According to the above analysis, spam levels will continue to increase until the
cost/benefit analysis is balanced.
In crime
Spam can be used to spread computer viruses, trojan horses or other malicious software. The objective may be
identity theft, or worse (e.g., advance fee fraud). Some spam attempts to capitalize on human greed whilst other
attempts to use the victims' inexperience with computer technology to trick them (e.g., phishing). On May 31, 2007,
one of the world's most prolific spammers, Robert Alan Soloway, was arrested by U.S. authorities.[36] Described as
one of the top ten spammers in the world, Soloway was charged with 35 criminal counts, including mail fraud, wire
fraud, e-mail fraud, aggravated identity theft and money laundering.[36] Prosecutors allege that Soloway used
millions of "zombie" computers to distribute spam during 2003. This is the first case in which U.S. prosecutors used
identity theft laws to prosecute a spammer for taking over someone else's Internet domain name.
Political issues
Spamming remains a hot discussion topic. In 2004, the seized Porsche of an indicted spammer was advertised on the
Internet;[37] this revealed the extent of the financial rewards available to those who are willing to commit duplicitous
acts online. However, some of the possible means used to stop spamming may lead to other side effects, such as
increased government control over the Internet, loss of privacy, barriers to free expression, and the
commercialization of e-mail.
One of the chief values favored by many long-time Internet users and experts, as well as by many members of the
public, is the free exchange of ideas. Many have valued the relative anarchy of the Internet, and bridle at the idea of
restrictions placed upon it. A common refrain from spam-fighters is that spamming itself abridges the historical
freedom of the Internet, by attempting to force users to carry the costs of material which they would not choose.
An ongoing concern expressed by parties such as the Electronic Frontier Foundation and the ACLU has to do with
so-called "stealth blocking", a term for ISPs employing aggressive spam blocking without their users' knowledge.
These groups' concern is that ISPs or technicians seeking to reduce spam-related costs may select tools which (either
through error or design) also block non-spam e-mail from sites seen as "spam-friendly". SPEWS is a common target
of these criticisms. Few object to the existence of these tools; it is their use in filtering the mail of users who are not
informed of their use which draws fire.
Some see spam-blocking tools as a threat to free expression—and laws against spamming as an untoward precedent
for regulation or taxation of e-mail and the Internet at large. Even though it is possible in some jurisdictions to treat
some spam as unlawful merely by applying existing laws against trespass and conversion, some laws specifically
targeting spam have been proposed. In 2004, the United States passed the CAN-SPAM Act of 2003, which provided ISPs
with tools to combat spam. This act allowed Yahoo! to successfully sue Eric Head, reportedly one of the biggest
spammers in the world, who settled the lawsuit for several thousand U.S. dollars in June 2004. But the law is
criticized by many for not being effective enough. Indeed, the law was supported by some spammers and
organizations which support spamming, and opposed by many in the anti-spam community. Examples of effective
anti-abuse laws that respect free speech rights include those in the U.S. against unsolicited faxes and phone calls, and
those in Australia and a few U.S. states against spam.
In November 2004, Lycos Europe released a screen saver called make LOVE not SPAM which made Distributed
Denial of Service attacks on the spammers themselves. It met with a large amount of controversy and the initiative
ended in December 2004.
While most countries either outlaw or at least ignore spam, Bulgaria is the first, and so far the only one, to partially
legalize it. Under recent changes to the Bulgarian E-Commerce Act, anyone may send spam to mailboxes owned by a
company or organization, as long as the message body contains a warning that it may be unsolicited commercial
e-mail. The law contains many other inadequate provisions, for example the creation of a nationwide public
electronic register of e-mail addresses that do not want to receive spam, something valuable only as a source for
e-mail address harvesting.
Anti-spam policies may also be a form of disguised censorship, a way for an institution to ban access or references
to alternative forums or blogs that question it. This form of covert censorship is mainly used by private companies
when they cannot muzzle criticism by legal means.[38]
Court cases
United States
Sanford Wallace and Cyber Promotions were the target of a string of lawsuits, many of which were settled out of
court, up through the famous 1998 Earthlink settlement which put Cyber Promotions out of business. Attorney
Laurence Canter was disbarred by the Tennessee Supreme Court in 1997 for sending prodigious amounts of spam
advertising his immigration law practice. In 2005, Jason Smathers, a former America Online employee, pled guilty
to charges of violating the CAN-SPAM Act. In 2003, he sold a list of approximately 93 million AOL subscriber
e-mail addresses to Sean Dunaway who, in turn, sold the list to spammers.[39] [40]
In 2007, Robert Soloway lost a case in a federal court against the operator of a small Oklahoma-based Internet
service provider who accused him of spamming. U.S. Judge Ralph G. Thompson granted a motion by plaintiff
Robert Braver for a default judgment and permanent injunction against him. The judgment includes a statutory
damages award of $10,075,000 under Oklahoma law.[41]
In June 2007, two men were convicted of eight counts stemming from sending millions of e-mail spam messages that
included hardcore pornographic images. Jeffrey A. Kilbride, 41, of Venice, California was sentenced to six years in
prison, and James R. Schaffer, 41, of Paradise Valley, Arizona, was sentenced to 63 months. In addition, the two
were fined $100,000, ordered to pay $77,500 in restitution to AOL, and ordered to forfeit more than $1.1 million, the
amount of illegal proceeds from their spamming operation.[42] The charges included conspiracy, fraud, money
laundering, and transportation of obscene materials. The trial, which began on June 5, was the first to include
charges under the CAN-SPAM Act of 2003, according to a release from the Department of Justice. The specific
provision that prosecutors used under the CAN-SPAM Act was designed to crack down on the transmission of
pornography in spam.[43]
In 2005, Scott J. Filary and Donald E. Townsend of Tampa, Florida were sued by Florida Attorney General Charlie
Crist for violating the Florida Electronic Mail Communications Act.[44] The two spammers were required to pay
$50,000 USD to cover the costs of investigation by the state of Florida, and a $1.1 million penalty if spamming were
to continue, the $50,000 was not paid, or the financial statements provided were found to be inaccurate. The
spamming operation was successfully shut down.[45]
Edna Fiedler, 44, of Olympia, Washington, pleaded guilty in a Tacoma court on June 25, 2008, and was sentenced to
2 years imprisonment and 5 years of supervised release or probation for her part in a $1 million Internet "Nigerian
check scam." She conspired to commit bank, wire and mail fraud against US citizens, using the Internet, with an
accomplice who shipped counterfeit checks and money orders to her from Lagos, Nigeria, the previous November.
Fiedler had shipped out $609,000 in counterfeit checks and money orders when arrested and was preparing to send
an additional $1.1 million in counterfeit materials. The U.S. Postal Service also recently intercepted counterfeit
checks, lottery tickets and eBay overpayment schemes with a face value of $2.1 billion.[46] [47]
United Kingdom
In the first successful case of its kind, Nigel Roberts from the Channel Islands won £270 against Media Logistics UK
who sent junk e-mails to his personal account.[48]
In January 2007, a Sheriff Court in Scotland awarded Mr. Gordon Dick £750 (the then maximum sum which could
be awarded in a Small Claim action) plus expenses of £618.66, a total of £1368.66 against Transcom Internet
Services Ltd.[49] for breaching anti-spam laws.[50] Transcom had been legally represented at earlier hearings but
were not represented at the proof, so Gordon Dick got his decree by default. It is the largest amount awarded in
compensation in the United Kingdom since Roberts -v- Media Logistics case in 2005 above, but it is not known if
Mr Dick ever received anything. (An image of Media Logistics' cheque is shown on Roberts' website[51] ) Both
Roberts and Dick are well known figures in the British Internet industry for other things. Dick is currently Interim
Chairman of Nominet UK (the manager of .UK and .CO.UK) while Roberts is CEO of CHANNELISLES.NET
(manager of .GG and .JE).
Despite the statutory tort that is created by the Regulations implementing the EC Directive, few other people have
followed their example. As the Courts engage in active case management, such cases would probably now be
expected to be settled by mediation and payment of nominal damages.
New Zealand
In October 2008, a vast international internet spam operation run from New Zealand was cited by American
authorities as one of the world’s largest, and for a time responsible for up to a third of all unwanted emails. In a
statement the US Federal Trade Commission (FTC) named Christchurch’s Lance Atkinson as one of the principals of
the operation. New Zealand’s Internal Affairs announced it had lodged a $200,000 claim in the High Court against
Atkinson and his brother Shane Atkinson and courier Roland Smits, after raids in Christchurch. This marked the first
prosecution since the Unsolicited Electronic Messages Act (UEMA) was passed in September 2007. The FTC said it
had received more than three million complaints about spam messages connected to this operation, and estimated
that it may be responsible for sending billions of illegal spam messages. The US District Court froze the defendants’
assets to preserve them for consumer redress pending trial.[52] U.S. co-defendant Jody Smith forfeited more than
$800,000 and faces up to five years in prison for charges to which he pleaded guilty.[53]
Newsgroups
• news.admin.net-abuse.email
See also
• SPAMfighter
• Address munging (avoidance technique)
• Anti-spam techniques
• Bacn (electronic)
• E-mail fraud
• Identity theft
• Image spam
• Internet Troll
• Job scams
• Junk mail
• List of spammers
• Malware
• Network Abuse Clearinghouse
• Advance fee fraud (Nigerian spam)
• Phishing
• Scam
• Social networking spam
• SORBS
• Spam
• SpamCop
• Spamigation
• Spam Lit
• Spoetry
• Sporgery
• Virus (computer)
• Vishing
History
• Howard Carmack
• Make money fast
• Sanford Wallace
• Spam King
• UUnet
• Usenet Death Penalty
References
Sources
• Specter, Michael (2007-08-06). "Damn Spam" [54]. The New Yorker. Retrieved 2007-08-02.
Further reading
• Sjouwerman, Stu; Posluns, Jeffrey, "Inside the spam cartel: trade secrets from the dark side" [55],
Elsevier/Syngress; 1st edition, November 27, 2004. ISBN 978-1-932266-86-3
External links
• Spamtrackers SpamWiki [56]: a peer-reviewed spam information and analysis resource.
• Federal Trade Commission page advising people to forward spam e-mail to them [57]
• Slamming Spamming Resource on Spam [58]
• Why am I getting all this spam? CDT [59]
• Cybertelecom: Federal spam law and policy [60]
• Reaction to the DEC Spam of 1978 [61] - Overview and text of the first known internet email spam.
• Malware City - The Spam Omelette [62] - BitDefender's weekly report on spam trends and techniques.
• 1 December 2009: arrest of a major spammer [63]
• EatSpam.org [64] - This website provides disposable e-mail addresses which expire after 15 minutes. You can read and reply to e-mails sent to the temporary address within that time frame.
References
[1] The Spamhaus Project - The Definition Of Spam (http://www.spamhaus.org/definition.html)
[2] Gyongyi, Zoltan; Garcia-Molina, Hector (2005). "Web spam taxonomy" (http://airweb.cse.lehigh.edu/2005/gyongyi.pdf). Proceedings of the First International Workshop on Adversarial Information Retrieval on the Web (AIRWeb), 2005, in The 14th International World Wide Web Conference (WWW 2005), May 10-14, 2005, Nippon Convention Center (Makuhari Messe), Chiba, Japan. New York, N.Y.: ACM Press. ISBN 1-59593-046-9.
[3] http://www.maawg.org/about/MAAWG20072Q_Metrics_Report.pdf
[4] FileOn List Builder-Extract URL, MetaTags, Email, Phone, Fax from www-Optimized Webcrawler (http://www.listdna.com/)
[5] Saul Hansell, Social network launches worldwide spam campaign (http://bits.blogs.nytimes.com/2007/09/13/your-former-boyfriends-mother-wants-to-be-your-friend/), New York Times, September 13, 2007
[6] The (Evil) Genius of Comment Spammers (http://www.wired.com/wired/archive/12.03/google.html?pg=7) - Wired Magazine, March 2004
[7] Fabrício Benevenuto, Tiago Rodrigues, Virgílio Almeida, Jussara Almeida and Marcos Gonçalves. Detecting Spammers and Content Promoters in Online Video Social Networks. In ACM SIGIR Conference, Boston, MA, USA, July 2009. (http://www.dcc.ufmg.br/~fabricio/download/sigirfp437-benevenuto.pdf)
[8] See: Advance fee fraud
[9] SA cops, Interpol probe murder (http://www.news24.com/News24/South_Africa/News/0,,2-7-1442_1641875,00.html) - News24.com, 2004-12-31
[10] Brasil assume a liderança do spam mundial em 2009, diz Cisco (http://idgnow.uol.com.br/seguranca/2009/12/08/brasil-assume-a-lideranca-do-spam-mundial-em-2009-diz-cisco/) (in Portuguese)
[11] "Getting the message, at last" (http://www.economist.com/opinion/PrinterFriendly.cfm?story_id=10286400). 2007-12-14.
[12] Internet Society's Internet Engineering Taskforce: A Set of Guidelines for Mass Unsolicited Mailings and Postings (spam*) (http://tools.ietf.org/html/rfc2635)
[13] Origin of the term "spam" to mean net abuse (http://www.templetons.com/brad/spamterm.html)
[14] Origin of the term "spam" to mean net abuse (http://www.templetons.com/brad/spamterm.html)
[15] The Origins of Spam in Star Trek chat rooms (http://www.myshelegoldberg.com/words/item/the-origins-of-spam/)
[16] Spamming? (rec.games.mud) (http://groups.google.com/groups?selm=MAT.90Sep25210959@zeus.organpipe.cs.arizona.edu) - Google Groups USENET archive, 1990-09-26
[17] At 30, Spam Going Nowhere Soon (http://www.npr.org/templates/story/story.php?storyId=90160617) - Interviews with Gary Thuerk and Joel Furr
[18] news.bbc.co.uk (http://news.bbc.co.uk/1/hi/technology/7322615.stm)
[19] "Oxford dictionary adds Net terms" on News.com (http://news.com.com/2100-1023-214535.html)
[20] Reaction to the DEC Spam of 1978 (http://www.templetons.com/brad/spamreact.html)
[21] Tom Abate (May 3, 2008). "A very unhappy birthday to spam, age 30". San Francisco Chronicle.
[22] Danchev, Dancho. "Spammers go multilingual, use automatic translation services" (http://blogs.zdnet.com/security/?p=3813&tag=rbxccnbzd1). ZDNet. July 28, 2009. Retrieved on August 31, 2009.
[23] Official SPAM Website (http://www.spam.com/about/internet.aspx)
[24] Hormel Foods v SpamArrest, Motion for Summary Judgment, Redacted Version (PDF) (http://img.spamarrest.com/HormelSummaryJudgment.pdf)
[25] Hormel Foods Corpn v Antilles Landscape Investments NV (2005) EWHC 13 (Ch) (http://www.lawreports.co.uk/WLRD/2005/CHAN/chanjanf0.3.htm)
[26] "Hormel Foods Corporation v. Spam Cube, Inc" (http://ttabvue.uspto.gov/ttabvue/v?pno=91171346&pty=OPP). United States Patent and Trademark Office. Retrieved 2008-02-12.
[27] Letter from Hormel's Corporate Attorney Melanie J. Neumann to SpamCop's Julian Haight (http://www.spamcop.net/images/hormel_letter.gif)
[28] "Data protection: "Junk" e-mail costs internet users 10 billion a year worldwide - Commission study" (http://europa.eu/rapid/pressReleasesAction.do?reference=IP/01/154&format=HTML&aged=0&language=EN&guiLanguage=en)
[29] CALIFORNIA BUSINESS AND PROFESSIONS CODE (http://www.spamlaws.com/state/ca.shtml)
[30] Spam Cost Calculator: Calculate enterprise spam cost? (http://www.commtouch.com/spam-cost-calculator)
[31] Thank the Spammers (http://linxnet.com/misc/spam/thank_spammers.html) - William R. James, 2003-03-10
[32] Fadul, Jose (2010). The EPIC Generation: Experiential, Participative, Image-Driven & Connected. Raleigh, NC: Lulu Press. ISBN 978-0-557-41877-0.
[33] Spamhaus' "TOP 10 spam service ISPs" (http://www.spamhaus.org/statistics/networks.lasso)
[34] The 10 Worst ROKSO Spammers (http://www.spamhaus.org/statistics/spammers.lasso)
[35] Kanich, C.; C. Kreibich, K. Levchenko, B. Enright, G. Voelker, V. Paxson and S. Savage (2008-10-28). "Spamalytics: An Empirical Analysis of Spam Marketing Conversion" (http://www.icsi.berkeley.edu/pubs/networking/2008-ccs-spamalytics.pdf) (PDF). Alexandria, VA, USA. Retrieved 2008-11-05.
154
Spam (electronic)
[36] Alleged 'Seattle Spammer' arrested - CNET News.com (http:/ / www. news. com/ Alleged-Seattle-Spammer-arrested/
2100-7348_3-6187754. html)
[37] timewarner.com (http:/ / www. timewarner. com/ corp/ newsroom/ pr/ 0,20812,670327,00. html)
[38] See for instance the black list of the French wikipedia encyclopedia
[39] U.S. v Jason Smathers and Sean Dunaway, amended complaint, US District Court for the Southern District of New York (2003). Retrieved 7
March 2007, from http:/ / www. thesmokinggun. com/ archive/ 0623042aol1. html
[40] Ex-AOL employee pleads guilty in spam case. (2005, February 4). CNN. Retrieved 7 March 2007, from http:/ / www. cnn. com/ 2005/
TECH/ internet/ 02/ 04/ aol. spam. plea/
[41] Braver v. Newport Internet Marketing Corporation et al. (http:/ / www. mortgagespam. com/ soloway) -U.S. District Court - Western
District of Oklahoma (Oklahoma City), 2005-02-22
[42] "Two Men Sentenced for Running International Pornographic Spamming Business" (http:/ / www. usdoj. gov/ opa/ pr/ 2007/ October/
07_crm_813. html). United States Department of Justice. October 12, 2007. . Retrieved 2007-10-25.
[43] Gaudin, Sharon, Two Men Convicted Of Spamming Pornography (http:/ / www. informationweek. com/ news/ showArticle.
jhtml?articleID=200000756) InformationWeek, June 26, 2007
[44] "Crist Announces First Case Under Florida Anti-Spam Law" (http:/ / myfloridalegal. com/ __852562220065EE67. nsf/ 0/
F978639D46005F6585256FD90050AAC9?Open& Highlight=0,spam). Office of the Florida Attorney General. . Retrieved 2008-02-23.
[45] "Crist: Judgment Ends Duo's Illegal Spam, Internet Operations" (http:/ / myfloridalegal. com/ __852562220065EE67. nsf/ 0/
F08DE06CB354A7D7852570CF005912A2?Open& Highlight=0,spam). Office of the Florida Attorney General. . Retrieved 2008-02-23.
[46] upi.com, Woman gets prison for 'Nigerian' scam (http:/ / www. upi. com/ Top_News/ 2008/ 06/ 26/
Woman_gets_prison_for_Nigerian_scam/ UPI-73791214521169/ )
[47] yahoo.com, Woman Gets Two Years for Aiding Nigerian Internet Check Scam (PC World) (http:/ / tech. yahoo. com/ news/ pcworld/
147575)
[48] Businessman wins e-mail spam case (http:/ / news. bbc. co. uk/ 1/ hi/ world/ europe/ jersey/ 4562726. stm) - BBC News, 2005-12-27
[49] Gordon Dick v Transcom Internet Service Ltd. (http:/ / www. scotchspam. co. uk/ transcom. html)
[50] Article 13-Unsolicited communications (http:/ / eur-lex. europa. eu/ LexUriServ/ LexUriServ. do?uri=CELEX:32002L0058:EN:HTML)
[51] website (http:/ / www. roberts. co. uk)
[52] Kiwi spam network was 'world's biggest' (http:/ / www. stuff. co. nz/ stuff/ 4729188a28. html)
[53] Court Orders Australia-based Leader of International Spam Network to Pay $15.15 Million (http:/ / www. ftc. gov/ opa/ 2009/ 11/
herbalkings. shtm)
[54] http:/ / www. newyorker. com/ reporting/ 2007/ 08/ 06/ 070806fa_fact_specter
[55] http:/ / books. google. com/ books?id=1gsUeCcA7qMC& printsec=frontcover
[56] http:/ / www. spamtrackers. eu/ wiki
[57] http:/ / www. ftc. gov/ spam/
[58] http:/ / www. uic. edu/ depts/ accc/ newsletter/ adn29/ spam. html
[59] http:/ / www. cdt. org/ speech/ spam/ 030319spamreport. shtml
[60] http:/ / www. cybertelecom. org/ spam/
[61] http:/ / www. templetons. com/ brad/ spamreact. html
[62] http:/ / www. malwarecity. com/ site/ News/ browseBlogsByCategory/ 46/
[63] http:/ / news. bbc. co. uk/ 1/ hi/ technology/ 8388737. stm
[64] http:/ / www. eatspam. org/
155
Phishing
In the field of computer security, phishing is the criminally fraudulent process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication. Communications purporting to be from popular social web sites, auction sites, online payment processors or IT administrators are commonly used to lure the unsuspecting public. Phishing is typically carried out by e-mail or instant messaging,[1] and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. Even when using server authentication, it may require tremendous skill to detect that the website is fake. Phishing is an example of social engineering techniques used to fool users,[2] and it exploits the poor usability of current web security technologies.[3] Attempts to deal with the growing number of reported phishing incidents include legislation, user training, public awareness, and technical security measures.

[Figure: An example of a phishing e-mail, disguised as an official e-mail from a (fictional) bank. The sender is attempting to trick the recipient into revealing confidential information by "confirming" it at the phisher's website. Note the misspelling of the words received and discrepancy; such mistakes are common in phishing e-mails. Also note that although the URL of the bank's webpage appears to be legitimate, the hyperlink actually points to the phisher's webpage.]
A phishing technique was described in detail in 1987, and the first recorded use of the term "phishing" was made in
1996. The term is a variant of fishing,[4] probably influenced by phreaking,[5] [6] and alludes to baits used to "catch"
financial information and passwords.
History and current status of phishing
A phishing technique was described in detail in 1987, in a paper and presentation delivered to the International HP
Users Group, Interex.[7] The first recorded mention of the term "phishing" is on the alt.online-service.america-online
Usenet newsgroup on January 2, 1996,[8] although the term may have appeared earlier in the print edition of the
hacker magazine 2600.[9]
Early phishing on AOL
Phishing on AOL was closely associated with the warez community that exchanged pirated software and the hacking
scene that perpetrated credit card fraud and other online crimes. After AOL brought in measures in late 1995 to
prevent using fake, algorithmically generated credit card numbers to open accounts, AOL crackers resorted to
phishing for legitimate accounts[10] and exploiting AOL.
A phisher might pose as an AOL staff member and send an instant message to a potential victim, asking him to
reveal his password.[11] In order to lure the victim into giving up sensitive information the message might include
imperatives like "verify your account" or "confirm billing information". Once the victim had revealed the password,
the attacker could access and use the victim's account for fraudulent purposes or spamming. Both phishing and
warezing on AOL generally required custom-written programs, such as AOHell. Phishing became so prevalent on
AOL that they added a line on all instant messages stating: "no one working at AOL will ask for your password or
billing information", though even this didn't prevent some people from giving away their passwords and personal
information if they read and believed the IM first. A user with both an AIM account and an AOL account from an
ISP could simultaneously phish AOL members with relative impunity, since internet AIM accounts could be used by
non-AOL members and could not be actioned (i.e., reported to the AOL TOS department for disciplinary action).
After 1997, AOL's policy enforcement with respect to phishing and warez became stricter and forced pirated
software off AOL servers. AOL simultaneously developed a system to promptly deactivate accounts involved in
phishing, often before the victims could respond. The shutting down of the warez scene on AOL caused most
phishers to leave the service, and many phishers—often young teens—grew out of the habit.[12]
Transition from AOL to financial institutions
The capture of AOL account information may have led phishers to misuse credit card information, and to the
realization that attacks against online payment systems were feasible. The first known direct attempt against a
payment system affected E-gold in June 2001, which was followed up by a "post-9/11 id check" shortly after the
September 11 attacks on the World Trade Center.[13] Both were viewed at the time as failures, but can now be seen
as early experiments towards more fruitful attacks against mainstream banks. By 2004, phishing was recognized as a
fully industrialized part of the economy of crime: specializations emerged on a global scale that provided
components for cash, which were assembled into finished attacks.[14] [15]
Phishing techniques
Recent phishing attempts
Phishers are targeting the customers of banks and online payment
services. E-mails, supposedly from the Internal Revenue Service, have
been used to glean sensitive data from U.S. taxpayers.[16] While the
first such examples were sent indiscriminately in the expectation that
some would be received by customers of a given bank or service,
recent research has shown that phishers may in principle be able to
determine which banks potential victims use, and target bogus e-mails
accordingly.[17] Targeted versions of phishing have been termed spear
phishing.[18] Several recent phishing attacks have been directed
specifically at senior executives and other high profile targets within
businesses, and the term whaling has been coined for these kinds of
attacks.[19]
[Figure: A chart showing the increase in phishing reports from October 2004 to June 2005.]
Social networking sites are now a prime target of phishing, since the personal details in such sites can be used in
identity theft;[20] in late 2006 a computer worm took over pages on MySpace and altered links to direct surfers to
websites designed to steal login details.[21] Experiments show a success rate of over 70% for phishing attacks on
social networks.[22]
The RapidShare file sharing site has been targeted by phishing to obtain a premium account, which removes speed
caps on downloads, auto-removal of uploads, waits on downloads, and cooldown times between downloads.[23]
Attackers who broke into TD Ameritrade's database (containing all 6.3 million customers' social security numbers,
account numbers and email addresses as well as their names, addresses, dates of birth, phone numbers and trading
activity) also wanted the account usernames and passwords, so they launched a follow-up spear phishing attack.[24]
Almost half of phishing thefts in 2006 were committed by groups operating through the Russian Business Network
based in St. Petersburg.[25]
Phishing
Some people have been victimized by a Facebook scam whose links were hosted by T35 Web Hosting, losing their
accounts as a result.[26]
There are anti-phishing websites which publish exact messages that have been recently circulating the internet, such
as FraudWatch International and Millersmiles. Such sites often provide specific details about the particular
messages.[27] [28]
Link manipulation
Most methods of phishing use some form of technical deception designed to make a link in an e-mail (and the
spoofed website it leads to) appear to belong to the spoofed organization. Misspelled URLs or the use of subdomains
are common tricks used by phishers. In the following example URL,
http://www.yourbank.example.com/, it appears as though the URL will take you to the example section
of the yourbank website; actually this URL points to the "yourbank" (i.e. phishing) section of the example website.
Another common trick is to make the displayed text for a link (the text between the <A> tags) suggest a reliable
destination, when the link actually goes to the phishers' site. The following example link,
http://en.wikipedia.org/wiki/Genuine, appears to take you to an article entitled "Genuine"; clicking on it will in fact
take you to the article entitled "Deception". In the lower left-hand corner of most browsers you can preview and
verify where a link will actually take you.[29]
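The display-text trick described above can be checked for mechanically. The following is a minimal sketch using Python's standard `html.parser`; the `LinkChecker` class and the sample HTML snippet are illustrative constructions, not code from any real filter. It flags anchors whose visible text looks like a URL but differs from the actual href target.

```python
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    """Flag links whose visible text is a URL that differs from the real href."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = ""
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = self._text.strip()
            # The deception described above: the text shown to the user is a
            # URL, but the link actually points somewhere else entirely.
            if shown.startswith("http") and shown != self._href:
                self.mismatches.append((shown, self._href))
            self._href = None

checker = LinkChecker()
checker.feed('<a href="http://phisher.example/login">http://www.yourbank.example/</a>')
print(checker.mismatches)
```

A real mail filter would of course combine this with many other signals, but the core comparison is exactly this: visible text versus actual target.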
An old method of spoofing used links containing the '@' symbol, originally intended as a way to include a username
and password (contrary to the standard).[30] For example, the link
http://www.google.com@members.tripod.com/ might deceive a casual observer into believing that it
will open a page on www.google.com, whereas it actually directs the browser to a page on
members.tripod.com, using a username of www.google.com: the page opens normally, regardless of the
username supplied. Such URLs were disabled in Internet Explorer,[31] while Mozilla Firefox[32] and Opera present a
warning message and give the option of continuing to the site or cancelling.
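Both the subdomain trick and the '@' trick can be unpicked programmatically. The sketch below uses Python's standard `urllib.parse` on the two illustrative URLs from the text to show what a browser actually resolves:

```python
from urllib.parse import urlparse

# Subdomain trick: the hostname ends in example.com; "yourbank" is just a
# subdomain label, not the organization that controls the site.
u1 = urlparse("http://www.yourbank.example.com/")
print(u1.hostname)   # www.yourbank.example.com

# '@' trick: everything before '@' in the authority is userinfo, not the host.
u2 = urlparse("http://www.google.com@members.tripod.com/")
print(u2.username)   # www.google.com      (ignored by most servers)
print(u2.hostname)   # members.tripod.com  (the site actually visited)
```

The point of the example is that the host a browser connects to is determined by parsing rules, not by whatever substring catches the reader's eye.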
A further problem with URLs has been found in the handling of Internationalized domain names (IDN) in web
browsers, that might allow visually identical web addresses to lead to different, possibly malicious, websites. Despite
the publicity surrounding the flaw, known as IDN spoofing[33] or homograph attack,[34] phishers have taken
advantage of a similar risk, using open URL redirectors on the websites of trusted organizations to disguise
malicious URLs with a trusted domain.[35] [36] [37] Even digital certificates do not solve this problem because it is
quite possible for a phisher to purchase a valid certificate and subsequently change content to spoof a genuine
website.
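The homograph risk is easy to demonstrate: two domain strings can render near-identically while being entirely different names. A small sketch (the spoofed string is a constructed example, not a real attack domain):

```python
import unicodedata

real  = "paypal.com"
spoof = "p\u0430ypal.com"   # U+0430: Cyrillic small 'а', visually like Latin 'a'

print(real == spoof)                # False: the code points differ
print(unicodedata.name(spoof[1]))   # CYRILLIC SMALL LETTER A
# IDNA encoding exposes the difference as Punycode ("xn--..."), which is why
# some browsers display suspicious IDNs in this raw form.
print(spoof.encode("idna"))
```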
Filter evasion
Phishers have used images instead of text to make it harder for anti-phishing filters to detect text commonly used in
phishing e-mails.[38]
Website forgery
Once a victim visits the phishing website the deception is not over. Some phishing scams use JavaScript commands
in order to alter the address bar.[39] This is done either by placing a picture of a legitimate URL over the address bar,
or by closing the original address bar and opening a new one with the legitimate URL.[40]
An attacker can even use flaws in a trusted website's own scripts against the victim.[41] These types of attacks
(known as cross-site scripting) are particularly problematic, because they direct the user to sign in at their bank or
service's own web page, where everything from the web address to the security certificates appears correct. In
reality, the link to the website is crafted to carry out the attack, making it very difficult to spot without specialist
knowledge. Just such a flaw was used in 2006 against PayPal.[42]
A Universal Man-in-the-middle (MITM) Phishing Kit, discovered in 2007, provides a simple-to-use interface that
allows a phisher to convincingly reproduce websites and capture log-in details entered at the fake site.[43]
To avoid anti-phishing techniques that scan websites for phishing-related text, phishers have begun to use
Flash-based websites. These look much like the real website, but hide the text in a multimedia object.[44]
Phone phishing
Not all phishing attacks require a fake website. Messages that claimed to be from a bank told users to dial a phone
number regarding problems with their bank accounts.[45] Once the phone number (owned by the phisher, and
provided by a Voice over IP service) was dialed, prompts told users to enter their account numbers and PIN. Vishing
(voice phishing) sometimes uses fake caller-ID data to give the appearance that calls come from a trusted
organization.[46]
Other techniques
• Another attack used successfully is to forward the client to a bank's legitimate website, then to place a popup
window requesting credentials on top of the website in a way that it appears the bank is requesting this sensitive
information.[47]
• One of the latest phishing techniques is tabnabbing. It takes advantage of the multiple tabs that users keep open
and silently redirects a user to the affected site.
Damage caused by phishing
The damage caused by phishing ranges from denial of access to e-mail to substantial financial loss. It is estimated
that between May 2004 and May 2005, approximately 1.2 million computer users in the United States suffered losses
caused by phishing, totaling approximately US$929 million. United States businesses lose an estimated US$2
billion per year as their clients become victims.[48] In 2007, phishing attacks escalated. 3.6 million adults lost US$3.2
billion in the 12 months ending in August 2007.[49] Microsoft claims these estimates are grossly exaggerated and
puts the annual phishing loss in the US at US$60 million.[50] In the United Kingdom losses from web banking
fraud—mostly from phishing—almost doubled to GB£23.2m in 2005, from GB£12.2m in 2004,[51] while 1 in 20
computer users claimed to have lost out to phishing in 2005.[52]
The stance adopted by the UK banking body APACS is that "customers must also take sensible precautions ... so that
they are not vulnerable to the criminal."[53] Similarly, when the first spate of phishing attacks hit the Irish Republic's
banking sector in September 2006, the Bank of Ireland initially refused to cover losses suffered by its customers (and
it still insists that its policy is not to do so[54] ), although losses to the tune of €11,300 were made good.[55]
Anti-phishing
There are several different techniques to combat phishing, including legislation and technology created specifically
to protect against phishing.
Social responses
One strategy for combating phishing is to train people to recognize phishing attempts, and to deal with them.
Education can be effective, especially where training provides direct feedback.[56] One newer phishing tactic, which
uses phishing e-mails targeted at a specific company, known as spear phishing, has been harnessed to train
individuals at various locations, including United States Military Academy at West Point, NY. In a June 2004
experiment with spear phishing, 80% of 500 West Point cadets who were sent a fake e-mail were tricked into
revealing personal information.[57]
People can take steps to avoid phishing attempts by slightly modifying their browsing habits. When contacted about
an account needing to be "verified" (or any other topic used by phishers), it is a sensible precaution to contact the
company from which the e-mail apparently originates to check that the e-mail is legitimate. Alternatively, the
address that the individual knows is the company's genuine website can be typed into the address bar of the browser,
rather than trusting any hyperlinks in the suspected phishing message.[58]
Nearly all legitimate e-mail messages from companies to their customers contain an item of information that is not
readily available to phishers. Some companies, for example PayPal, always address their customers by their
username in e-mails, so if an e-mail addresses the recipient in a generic fashion ("Dear PayPal customer") it is likely
to be an attempt at phishing.[59] E-mails from banks and credit card companies often include partial account
numbers. However, recent research[60] has shown that the public do not typically distinguish between the first few
digits and the last few digits of an account number—a significant problem since the first few digits are often the
same for all clients of a financial institution. People can be trained to have their suspicion aroused if the message
does not contain any specific personal information. Phishing attempts in early 2006, however, used personalized
information, which makes it unsafe to assume that the presence of personal information alone guarantees that a
message is legitimate.[61] Furthermore, another recent study concluded in part that the presence of personal
information does not significantly affect the success rate of phishing attacks,[62] which suggests that most people do
not pay attention to such details.
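As a toy illustration of the salutation heuristic described above (the greeting list and the function are assumptions made for this sketch, not a published filter), one could flag messages that open generically rather than with the customer's registered name:

```python
# Hypothetical, deliberately simplistic check: real mail filters combine many
# signals, and (as noted above) personalization alone proves nothing.
GENERIC_GREETINGS = ("dear customer", "dear valued customer", "dear paypal customer")

def generic_salutation(body: str, registered_name: str) -> bool:
    """Return True if the e-mail opens with a generic greeting, not the name."""
    first_line = body.strip().splitlines()[0].lower()
    if registered_name.lower() in first_line:
        return False
    return any(first_line.startswith(g) for g in GENERIC_GREETINGS)

print(generic_salutation("Dear PayPal customer,\nPlease verify...", "alice"))  # True
print(generic_salutation("Dear alice,\nYour statement is ready.", "alice"))    # False
```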
The Anti-Phishing Working Group, an industry and law enforcement association, has suggested that conventional
phishing techniques could become obsolete in the future as people are increasingly aware of the social engineering
techniques used by phishers.[63] They predict that pharming and other uses of malware will become more common
tools for stealing information.
Everyone can help educate the public by encouraging safe practices, and by avoiding dangerous ones. Unfortunately,
even well-known players are known to incite users to hazardous behaviour, e.g. by requesting their users to reveal
their passwords for third party services, such as email.[64]
Technical responses
Anti-phishing measures have been implemented as features embedded in browsers, as extensions or toolbars for
browsers, and as part of website login procedures. The following are some of the main approaches to the problem.
Helping to identify legitimate websites
Most websites targeted for phishing are secure websites, meaning that SSL with strong PKI cryptography is used for
server authentication, with the website's URL used as the identifier. In theory it should be possible for SSL
authentication to confirm the site to the user, and this was SSL v2's design requirement and the goal of secure
browsing. But in practice, this is easily tricked.
The superficial flaw is that the browser's security user interface (UI) is insufficient to deal with today's strong threats.
There are three parts to secure authentication using TLS and certificates: indicating that the connection is in
authenticated mode, indicating which site the user is connected to, and indicating which authority says it is this site.
All three are necessary for authentication, and need to be confirmed by/to the user.
Secure Connection. The standard display for secure browsing from the mid-1990s to mid-2000s was the padlock. In
2005, Mozilla fielded a yellow URL bar as a better indication of the secure connection. This innovation was
later reversed due to the EV certificates, which replaced certain certificates providing a high level of organization
identity verification with a green display, and other certificates with an extended blue favicon box to the left of the
URL bar (in addition to the switch from "http" to "https" in the URL itself).
Which Site. The user is expected to confirm that the domain name in the browser's URL bar was in fact where they
intended to go. URLs can be too complex to be easily parsed. Users often do not know or recognise the URL of the
legitimate sites they intend to connect to, so that the authentication becomes meaningless.[3] A condition for
meaningful server authentication is to have a server identifier that is meaningful to the user; many ecommerce sites
will change the domain names within their overall set of websites, adding to the opportunity for confusion. Simply
displaying the domain name for the visited website[65] as some anti-phishing toolbars do is not sufficient.
Some newer browsers, such as Internet Explorer 8, display the entire URL in grey, with just the domain name itself
in black, as a means of assisting users in identifying fraudulent URLs.
An alternate approach is the petname extension for Firefox which lets users type in their own labels for websites, so
they can later recognize when they have returned to the site. If the site is not recognised, then the software may either
warn the user or block the site outright. This represents user-centric identity management of server identities.[66]
Some suggest that a graphical image selected by the user is better than a petname.[67]
With the advent of EV certificates, browsers now typically display the organisation's name in green, which is much
more visible and is hopefully more consistent with the user's expectations. Unfortunately, browser vendors have
chosen to limit this prominent display only to EV certificates, leaving the user to fend for himself with all other
certificates.
Who is the Authority. The browser needs to state who the authority is that makes the claim of who the user is
connected to. At the simplest level, no authority is stated, and therefore the browser is the authority, as far as the user
is concerned. The browser vendors take on this responsibility by controlling a root list of acceptable CAs. This is the
current standard practice.
The problem with this is that not all certification authorities (CAs) employ equally good or applicable checking,
regardless of attempts by browser vendors to control the quality. Nor do all CAs subscribe to the same model and
concept that certificates are only about authenticating ecommerce organisations. Certificate Manufacturing is the
name given to low-value certificates that are delivered on a credit card and an email confirmation; both of these are
easily perverted by fraudsters. Hence, a high-value site may be easily spoofed by a valid certificate provided by
another CA. This could be because the CA is in another part of the world, and is unfamiliar with high-value
ecommerce sites, or it could be that no care is taken at all. As the CA is only charged with protecting its own
customers, and not the customers of other CAs, this flaw is inherent in the model.
The solution to this is that the browser should show, and the user should be familiar with, the name of the authority.
This presents the CA as a brand, and allows the user to learn the handful of CAs that she is likely to come into
contact with in her country and her sector. The use of brand is also critical to providing the CA with an incentive to
improve their checking, as the user will learn the brand and demand good checking for high-value sites.
This solution was first put into practice in early IE7 versions, when displaying EV certificates.[68] In that display, the
issuing CA is displayed. This was an isolated case, however. There is resistance to CAs being branded on the
chrome, resulting in a fallback to the simplest level above: the browser is the user's authority.
Fundamental flaws in the security model of secure browsing
Experiments to improve the security UI have resulted in benefits, but have also exposed fundamental flaws in the
security model. The underlying causes for the failure of the SSL authentication to be employed properly in secure
browsing are many and intertwined.
Security before threat. Because secure browsing was put into place before any threat was evident, the security
display lost out in the "real estate wars" of the early browsers. The original design of Netscape's browser included a
prominent display of the name of the site and the CA's name, but these were dropped in the first release. Users are
now highly experienced in not checking security information at all.
Click-thru syndrome. However, warnings about poorly configured sites continued, and were not downgraded. If a
certificate had an error in it (mismatched domain name, expiry), then the browser would commonly launch a popup
to warn the user. As the reason was generally a minor misconfiguration, users learned to bypass the warnings,
and now users are accustomed to treating all warnings with the same disdain, resulting in click-thru syndrome. For
example, Firefox 3 has a 4-click process for adding an exception, but it has been shown to be ignored by an
experienced user in a real case of MITM. Even today, as the vast majority of warnings will be for misconfigurations
not real MITMs, it is hard to see how click-thru syndrome will ever be avoided.
Lack of interest. Another underlying factor is the lack of support for virtual hosting. The specific causes are a lack
of support for Server Name Indication in TLS webservers, and the expense and inconvenience of acquiring
certificates. The result is that the use of authentication is too rare to be anything but a special case. This has caused a
general lack of knowledge and resources in authentication within TLS, which in turn has meant that the attempts by
browser vendors to upgrade their security UIs have been slow and lacklustre.
Lateral communications. The security model for secure browsing includes many participants: user, browser vendor,
developers, CA, auditor, webserver vendor, ecommerce site, regulators (e.g., FDIC), and security standards
committees. There is a lack of communication between different groups that are committed to the security model.
E.g., although the understanding of authentication is strong at the protocol level of the IETF committees, this
message does not reach the UI groups. Webserver vendors do not prioritise the Server Name Indication (TLS/SNI)
fix, not seeing it as a security fix but instead a new feature. In practice, all participants look to the others as the
source of the failures leading to phishing, hence the local fixes are not prioritised.
Matters improved slightly with the CAB Forum, as that group includes browser vendors, auditors and CAs. But the
group did not start out in an open fashion, and the result suffered from commercial interests of the first players, as
well as a lack of parity between the participants. Even today, CAB forum is not open, and does not include
representation from small CAs, end-users, ecommerce owners, etc.
Standards gridlock. Vendors commit to standards, which results in an outsourcing effect when it comes to security.
Although there have been many and good experiments in improving the security UI, these have not been adopted
because they are not standard, or clash with the standards. Threat models can re-invent themselves in around a
month; security standards take around 10 years to adjust.
Venerable CA model. Control mechanisms employed by the browser vendors over the CAs have not been
substantially updated; the threat model has. The control and quality process over CAs is insufficiently tuned to the
protection of users and the addressing of actual and current threats. Audit processes are in great need of updating.
The recent EV Guidelines documented the current model in greater detail, and established a good benchmark, but did
not push for any substantial changes to be made.
Browsers alerting users to fraudulent websites
Another popular approach to fighting phishing is to maintain a list of known phishing sites and to check websites
against the list. Microsoft's IE7 browser, Mozilla Firefox 2.0, Safari 3.2, and Opera all contain this type of
anti-phishing measure.[69] [70] [71] [72] Firefox 2 used Google anti-phishing software. Opera 9.1 uses live blacklists
from PhishTank and GeoTrust, as well as live whitelists from GeoTrust. Some implementations of this approach
send the visited URLs to a central service to be checked, which has raised privacy concerns.[73] According to a
late-2006 report by Mozilla, Firefox 2 proved more effective than Internet Explorer 7 at detecting fraudulent sites
in a study by an independent software testing company.[74]
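The blacklist approach described above amounts to a lookup of the visited URL's host against a list of known-bad sites. The sketch below is a minimal illustration with an invented local blacklist, not any vendor's actual implementation; real browsers fetch continuously updated lists from services such as Google Safe Browsing or PhishTank.

```python
# Sketch of blacklist-based URL checking, as used by browser
# anti-phishing features. The blacklist contents are hypothetical;
# real browsers download and refresh lists from a central service.
from urllib.parse import urlparse

BLACKLIST = {
    "phish-example.test",       # hypothetical known-bad domains
    "secure-login.bank.test",
}

def is_suspected_phishing(url: str) -> bool:
    """Return True if the URL's host (or a parent domain) is blacklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host itself and every parent domain against the list.
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & BLACKLIST)

print(is_suspected_phishing("http://phish-example.test/login"))  # True
print(is_suspected_phishing("http://example.org/"))              # False
```

Keeping the list local, as sketched here, avoids sending every visited URL to a central service — the privacy concern raised above.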
An approach introduced in mid-2006 involves switching to a special DNS service that filters out known phishing
domains: this will work with any browser,[75] and is similar in principle to using a hosts file to block web adverts.
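The hosts-file principle mentioned above can be illustrated by generating entries that map phishing domains to an unroutable address, so that any browser on the machine fails to resolve them. The domain names below are made up for illustration.

```python
# Sketch of the hosts-file blocking technique: black-hole known
# phishing domains so every browser on the machine is protected.
# Domain names here are purely illustrative.
PHISHING_DOMAINS = ["phish-example.test", "fake-bank.test"]

def hosts_entries(domains):
    """Build hosts-file lines that point the given domains at 0.0.0.0."""
    return ["0.0.0.0 {}".format(d) for d in domains]

for line in hosts_entries(PHISHING_DOMAINS):
    print(line)
# 0.0.0.0 phish-example.test
# 0.0.0.0 fake-bank.test
```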
To mitigate the problem of phishing sites impersonating a victim site by embedding its images (such as logos),
several site owners have altered the images to send a message to the visitor that a site may be fraudulent. The image
may be moved to a new filename and the original permanently replaced, or a server can detect that the image was not
requested as part of normal browsing, and instead send a warning image.[76] [77]
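Server-side detection of hot-linked images is typically keyed off the HTTP Referer header. The sketch below assumes a hypothetical bank host and file names; it only illustrates the decision, not any specific site's implementation.

```python
# Sketch: serve a warning image when a site's logo is requested from a
# page outside the site (e.g. a phishing copy embedding it).
# The allowed host and image paths are assumptions for illustration.
from urllib.parse import urlparse

ALLOWED_HOST = "www.example-bank.test"  # the legitimate site (hypothetical)

def image_to_serve(referer):
    """Pick which image file to send for a logo request."""
    host = urlparse(referer).hostname if referer else None
    if host == ALLOWED_HOST:
        return "logo.png"      # request came from normal browsing
    return "warning.png"       # hot-linked or no Referer: warn the visitor

print(image_to_serve("https://www.example-bank.test/login"))  # logo.png
print(image_to_serve("http://phish-copy.test/fake-login"))    # warning.png
```

Note that the Referer header can be absent or forged, so in practice this is a best-effort signal rather than a reliable control.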
Augmenting password logins
The Bank of America's website[78] [79] is one of several that ask users to select a personal image, and display this
user-selected image with any forms that request a password. Users of the bank's online services are instructed to
enter a password only when they see the image they selected. However, a recent study suggests few users refrain
from entering their password when images are absent.[80] [81] In addition, this feature (like other forms of two-factor
authentication) is susceptible to other attacks, such as those suffered by Scandinavian bank Nordea in late 2005,[82]
and Citibank in 2006.[83]
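The personal-image flow described above can be modelled as two steps: after the username is submitted, the server reveals the user's chosen image, and the user should type a password only if the expected image appears. This is a toy model with invented data, not Bank of America's actual system.

```python
# Toy model of a personal-image ("SiteKey"-style) login scheme.
# Usernames and image names are made up for illustration.
CHOSEN_IMAGES = {"alice": "red_bicycle.png"}

def image_for_user(username):
    """Step 1: after the username step, look up the stored image."""
    return CHOSEN_IMAGES.get(username)  # None if the user is unknown

def should_enter_password(shown_image, expected_image):
    """Step 2: the user checks the image before typing a password."""
    return shown_image is not None and shown_image == expected_image

print(should_enter_password(image_for_user("alice"), "red_bicycle.png"))  # True
print(should_enter_password(image_for_user("bob"), "red_bicycle.png"))    # False
```

As the cited study observes, the scheme only helps if users actually refuse to enter a password when the image is absent or wrong.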
A similar system, in which an automatically-generated "Identity Cue" consisting of a colored word within a colored
box is displayed to each website user, is in use at other financial institutions.[84]
Security skins[85] [86] are a related technique that involves overlaying a user-selected image onto the login form as a
visual cue that the form is legitimate. Unlike the website-based image schemes, however, the image itself is shared
only between the user and the browser, and not between the user and the website. The scheme also relies on a mutual
authentication protocol, which makes it less vulnerable to attacks that affect user-only authentication schemes.
Eliminating phishing mail
Specialized spam filters can reduce the number of phishing e-mails that reach their addressees' inboxes. These
approaches rely on machine learning and natural language processing approaches to classify phishing e-mails.[87] [88]
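The feature-based idea behind such filters can be illustrated with a toy keyword scorer. Real systems train machine-learning classifiers on many structural and linguistic features; the phrases, weights, and threshold below are invented purely for illustration.

```python
# Toy illustration of feature-based phishing e-mail scoring.
# The keyword weights and threshold are arbitrary, chosen only to
# show the idea; real filters learn weights from labelled corpora.
SUSPICIOUS_FEATURES = {
    "verify your account": 2.0,
    "click here": 1.0,
    "password": 1.0,
    "urgent": 1.5,
}
THRESHOLD = 2.5  # arbitrary cutoff for this sketch

def phishing_score(text):
    """Sum the weights of suspicious phrases present in the text."""
    text = text.lower()
    return sum(w for phrase, w in SUSPICIOUS_FEATURES.items() if phrase in text)

def looks_like_phishing(text):
    return phishing_score(text) >= THRESHOLD

msg = "URGENT: click here to verify your account password"
print(phishing_score(msg))       # 5.5
print(looks_like_phishing(msg))  # True
```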
Monitoring and takedown
Several companies offer banks and other organizations likely to suffer from phishing scams round-the-clock services
to monitor, analyze and assist in shutting down phishing websites.[89] Individuals can contribute by reporting
phishing to both volunteer and industry groups,[90] such as PhishTank.[91] Individuals can also contribute by
reporting phone phishing attempts to Phone Phishing[92] or to the Federal Trade Commission.[93]
Legal responses
On January 26, 2004, the U.S. Federal Trade Commission filed the first lawsuit against a suspected phisher. The
defendant, a Californian teenager, allegedly created a webpage designed to look like the America Online website,
and used it to steal credit card information.[94] Other countries have followed this lead by tracing and arresting
phishers. A phishing kingpin, Valdir Paulo de Almeida, was arrested in Brazil for leading one of the largest phishing
crime rings, which in two years stole between US$18 million and US$37 million.[95] UK authorities jailed two men
in June 2005 for their role in a phishing scam,[96] in a case connected to the U.S. Secret Service Operation Firewall,
which targeted notorious "carder" websites.[97] In 2006 eight people were arrested by Japanese police on suspicion of
phishing fraud by creating bogus Yahoo Japan Web sites, netting themselves ¥100 million (US$870,000).[98] The
arrests continued in 2006 with the FBI Operation Cardkeeper detaining a gang of sixteen in the U.S. and Europe.[99]
In the United States, Senator Patrick Leahy introduced the Anti-Phishing Act of 2005 in Congress on March 1, 2005.
This bill, if it had been enacted into law, would have subjected criminals who created fake web sites and sent bogus
e-mails in order to defraud consumers to fines of up to US$250,000 and prison terms of up to five years.[100] The UK
strengthened its legal arsenal against phishing with the Fraud Act 2006,[101] which introduces a general offence of
fraud that can carry up to a ten-year prison sentence, and prohibits the development or possession of phishing kits
with intent to commit fraud.[102]
Companies have also joined the effort to crack down on phishing. On March 31, 2005, Microsoft filed 117 federal
lawsuits in the U.S. District Court for the Western District of Washington. The lawsuits accuse "John Doe"
defendants of obtaining passwords and confidential information. March 2005 also saw a partnership between
Microsoft and the Australian government teaching law enforcement officials how to combat various cyber crimes,
including phishing.[103] Microsoft announced a planned further 100 lawsuits outside the U.S. in March 2006,[104]
followed by the commencement, as of November 2006, of 129 lawsuits mixing criminal and civil actions.[105] AOL
reinforced its efforts against phishing[106] in early 2006 with three lawsuits[107] seeking a total of US$18 million
under the 2005 amendments to the Virginia Computer Crimes Act,[108] [109] and Earthlink has joined in by helping to
identify six men subsequently charged with phishing fraud in Connecticut.[110]
In January 2007, Jeffrey Brett Goodin of California became the first defendant convicted by a jury under the
provisions of the CAN-SPAM Act of 2003. He was found guilty of sending thousands of e-mails to America Online
users, while posing as AOL's billing department, which prompted customers to submit personal and credit card
information. Facing a possible 101 years in prison for the CAN-SPAM violation and ten other counts including wire
fraud, the unauthorized use of credit cards, and the misuse of AOL's trademark, he was sentenced to serve 70
months. Goodin had been in custody since failing to appear for an earlier court hearing and began serving his prison
term immediately.[111] [112] [113] [114]
See also
• Advanced Persistent Threat
• Anti-phishing software
• Brandjacking
• Certificate authority
• Computer hacking
• Confidence trick
• E-mail spoofing
• FBI
• In-session phishing
• Internet fraud
• Pharming
• SMiShing
• Social engineering
• Spy-phishing
• Vishing
• White collar crime
• Wire fraud
External links
• Anti-Phishing Working Group [115]
• Center for Identity Management and Information Protection [116] – Utica College
• How the bad guys actually operate [117] – Ha.ckers.org Application Security Lab
• Plugging the "phishing" hole: legislation versus technology [118] – Duke Law & Technology Review
• Know Your Enemy: Phishing [119] – Honeynet project case study
• Banking Scam Revealed [120] – forensic examination of a phishing attack on SecurityFocus
• The Phishing Guide: Understanding and Preventing Phishing Attacks [121] – TechnicalInfo.net
• A Profitless Endeavor: Phishing as Tragedy of the Commons [122] – Microsoft Corporation
• Database for information on phishing sites reported by the public [123] – PhishTank
• The Impact of Incentives on Notice and Take-down [124] – Computer Laboratory, University of Cambridge (PDF, 344 kB)
• One Gang Responsible For Most Phishing Attacks [125] – InternetNews.com
References
[1] Tan, Koon. "Phishing and Spamming via IM (SPIM)" (http:/ / isc. sans. org/ diary. php?storyid=1905). Internet Storm Center. . Retrieved
December 5, 2006.
[2] Microsoft Corporation. "What is social engineering?" (http:/ / www. microsoft. com/ protect/ yourself/ phishing/ engineering. mspx). .
Retrieved August 22, 2007.
[3] Jøsang, Audun et al.. "Security Usability Principles for Vulnerability Analysis and Risk Assessment." (http:/ / www. unik. no/ people/ josang/
papers/ JAGAM2007-ACSAC. pdf) (PDF). Proceedings of the Annual Computer Security Applications Conference 2007 (ACSAC'07). .
Retrieved 2007.
[4] "Spam Slayer: Do You Speak Spam?" (http:/ / www. pcworld. com/ article/ id,113431-page,1/ article. html). PCWorld.com. . Retrieved
August 16, 2006.
[5] ""phishing, n." OED Online, March 2006, Oxford University Press." (http:/ / dictionary. oed. com/ cgi/ entry/ 30004304/ ). Oxford English
Dictionary Online. . Retrieved August 9, 2006.
[6] "Phishing" (http:/ / itre. cis. upenn. edu/ ~myl/ languagelog/ archives/ 001477. html). Language Log, September 22, 2004. . Retrieved August
9, 2006.
[7] Felix, Jerry and Hauck, Chris (September 1987). "System Security: A Hacker's Perspective". 1987 Interex Proceedings 1: 6.
[8] ""phish, v." OED Online, March 2006, Oxford University Press." (http:/ / dictionary. oed. com/ cgi/ entry/ 30004303/ ). Oxford English
Dictionary Online. . Retrieved August 9, 2006.
[9] Ollmann, Gunter. "The Phishing Guide: Understanding and Preventing Phishing Attacks" (http:/ / www. technicalinfo. net/ papers/ Phishing.
html). Technical Info. . Retrieved July 10, 2006.
[10] "Phishing" (http:/ / www. wordspy. com/ words/ phishing. asp). Word Spy. . Retrieved September 28, 2006.
[11] Stutz, Michael (January 29, 1998). "AOL: A Cracker's Paradise?" (http:/ / wired-vig. wired. com/ news/ technology/ 0,1282,9932,00. html).
Wired News. .
[12] "History of AOL Warez" (http:/ / www. rajuabju. com/ warezirc/ historyofaolwarez. htm). . Retrieved September 28, 2006.
[13] "GP4.3 - Growth and Fraud — Case #3 - Phishing" (https:/ / financialcryptography. com/ mt/ archives/ 000609. html). Financial
Cryptography. December 30, 2005. .
[14] "In 2005, Organized Crime Will Back Phishers" (http:/ / itmanagement. earthweb. com/ secu/ article. php/ 3451501). IT Management.
December 23, 2004. .
[15] "The economy of phishing: A survey of the operations of the phishing market" (http:/ / www. firstmonday. org/ issues/ issue10_9/ abad/ ).
First Monday. September 2005. .
[16] "Suspicious e-Mails and Identity Theft" (http:/ / www. irs. gov/ newsroom/ article/ 0,,id=155682,00. html). Internal Revenue Service. .
Retrieved July 5, 2006.
[17] "Phishing for Clues" (http:/ / www. browser-recon. info/ ). Indiana University Bloomington. September 15, 2005. .
[18] "What is spear phishing?" (http:/ / www. microsoft. com/ athome/ security/ email/ spear_phishing. mspx). Microsoft Security At Home. .
Retrieved July 10, 2006.
[19] Goodin, Dan (April 17, 2008). "Fake subpoenas harpoon 2,100 corporate fat cats" (http:/ / www. theregister. co. uk/ 2008/ 04/ 16/
whaling_expedition_continues/ . ). The Register. .
[20] Kirk, Jeremy (June 2, 2006). "Phishing Scam Takes Aim at MySpace.com" (http:/ / www. pcworld. com/ resource/ article/
0,aid,125956,pg,1,RSS,RSS,00. asp). IDG Network. .
[21] "Malicious Website / Malicious Code: MySpace XSS QuickTime Worm" (http:/ / www. websense. com/ securitylabs/ alerts/ alert.
php?AlertID=708). Websense Security Labs. . Retrieved December 5, 2006.
[22] Tom Jagatic and Nathan Johnson and Markus Jakobsson and Filippo Menczer. "Social Phishing" (http:/ / www. indiana. edu/ ~phishing/
social-network-experiment/ phishing-preprint. pdf) (PDF). To appear in the CACM (October 2007). . Retrieved June 3, 2006.
[23] "1-Click Hosting at RapidTec — Warning of Phishing!" (http:/ / rapidshare. de/ en/ phishing. html). . Retrieved December 21, 2008.
[24] "Torrent of spam likely to hit 6.3 million TD Ameritrade hack victims" (http:/ / www. webcitation. org/ 5gY2R1j1g). Archived from the
original (http:/ / www. sophos. com/ pressoffice/ news/ articles/ 2007/ 09/ ameritrade. html) on 2009-05-05. .
[25] Krebs, Brian (October 13, 2007). "Shadowy Russian Firm Seen as Conduit for Cybercrime" (http:/ / www. washingtonpost. com/ wp-dyn/ content/ story/ 2007/ 10/ 12/
ST2007101202661. html?hpid=topnews). Washington Post. .
[26] Phishsos.Blogspot.com (http:/ / phishsos. blogspot. com/ 2010/ 01/ facebook-scam. html)
[27] "Millersmiles Home Page" (http:/ / www. millersmiles. co. uk). Oxford Information Services. . Retrieved 2010-01-03.
[28] "FraudWatch International Home Page" (http:/ / www. fraudwatchinternational. com). FraudWatch International. . Retrieved 2010-01-03.
[29] HSBCUSA.com (http:/ / www. hsbcusa. com/ security/ recognize_fraud. html)
[30] Berners-Lee, Tim. "Uniform Resource Locators (URL)" (http:/ / www. w3. org/ Addressing/ rfc1738. txt). IETF Network Working Group. .
Retrieved January 28, 2006.
[31] Microsoft. "A security update is available that modifies the default behavior of Internet Explorer for handling user information in HTTP and
in HTTPS URLs" (http:/ / support. microsoft. com/ kb/ 834489). Microsoft Knowledgebase. . Retrieved August 28, 2005.
[32] Fisher, Darin. "Warn when HTTP URL auth information isn't necessary or when it's provided" (https:/ / bugzilla. mozilla. org/ show_bug.
cgi?id=232567). Bugzilla. . Retrieved August 28, 2005.
[33] Johanson, Eric. "The State of Homograph Attacks Rev1.1" (http:/ / www. shmoo. com/ idn/ homograph. txt). The Shmoo Group. . Retrieved
August 11, 2005.
[34] Evgeniy Gabrilovich and Alex Gontmakher (February 2002). "The Homograph Attack" (http:/ / www. cs. technion. ac. il/ ~gabr/ papers/
homograph_full. pdf) (PDF). Communications of the ACM 45(2): 128. .
[35] Leyden, John (August 15, 2006). "Barclays scripting SNAFU exploited by phishers" (http:/ / www. theregister. co. uk/ 2006/ 08/ 15/
barclays_phish_scam/ ). The Register. .
[36] Levine, Jason. "Goin' phishing with eBay" (http:/ / q. queso. com/ archives/ 001617). Q Daily News. . Retrieved December 14, 2006.
[37] Leyden, John (December 12, 2007). "Cybercrooks lurk in shadows of big-name websites" (http:/ / www. theregister. co. uk/ 2007/ 12/ 12/
phishing_redirection/ ). The Register. .
[38] Mutton, Paul. "Fraudsters seek to make phishing sites undetectable by content filters" (http:/ / news. netcraft. com/ archives/ 2005/ 05/ 12/
fraudsters_seek_to_make_phishing_sites_undetectable_by_content_filters. html). Netcraft. . Retrieved July 10, 2006.
[39] Mutton, Paul. "Phishing Web Site Methods" (http:/ / www. fraudwatchinternational. com/ phishing-fraud/ phishing-web-site-methods/ ).
FraudWatch International. . Retrieved December 14, 2006.
[40] "Phishing con hijacks browser bar" (http:/ / news. bbc. co. uk/ 1/ hi/ technology/ 3608943. stm). BBC News. April 8, 2004. .
[41] Krebs, Brian. "Flaws in Financial Sites Aid Scammers" (http:/ / blog. washingtonpost. com/ securityfix/ 2006/ 06/
flaws_in_financial_sites_aid_s. html). Security Fix. . Retrieved June 28, 2006.
[42] Mutton, Paul. "PayPal Security Flaw allows Identity Theft" (http:/ / news. netcraft. com/ archives/ 2006/ 06/ 16/
paypal_security_flaw_allows_identity_theft. html). Netcraft. . Retrieved June 19, 2006.
[43] Hoffman, Patrick (January 10, 2007). "RSA Catches Financial Phishing Kit" (http:/ / www. eweek. com/ article2/ 0,1895,2082039,00. asp).
eWeek. .
[44] Miller, Rich. "Phishing Attacks Continue to Grow in Sophistication" (http:/ / news. netcraft. com/ archives/ 2007/ 01/ 15/
phishing_attacks_continue_to_grow_in_sophistication. html). Netcraft. . Retrieved December 19, 2007.
[45] Gonsalves, Antone (April 25, 2006). "Phishers Snare Victims With VoIP" (http:/ / www. techweb. com/ wire/ security/ 186701001).
Techweb. .
[46] "Identity thieves take advantage of VoIP" (http:/ / www. silicon. com/ research/ specialreports/ voip/ 0,3800004463,39128854,00. htm).
Silicon.com. March 21, 2005. .
[47] "Internet Banking Targeted Phishing Attack" (http:/ / www. met. police. uk/ fraudalert/ docs/ internet_bank_fraud. pdf). Metropolitan Police
Service. 2005-06-03. . Retrieved 2009-03-22.
[48] Kerstein, Paul (July 19, 2005). "How Can We Stop Phishing and Pharming Scams?" (http:/ / www. csoonline. com/ talkback/ 071905. html).
CSO. .
[49] McCall, Tom (December 17, 2007). "Gartner Survey Shows Phishing Attacks Escalated in 2007; More than $3 Billion Lost to These
Attacks" (http:/ / www. gartner. com/ it/ page. jsp?id=565125). Gartner. .
[50] "A Profitless Endeavor: Phishing as Tragedy of the Commons" (http:/ / research. microsoft. com/ ~cormac/ Papers/ PhishingAsTragedy. pdf)
(PDF). Microsoft. . Retrieved November 15, 2008.
[51] "UK phishing fraud losses double" (http:/ / www. finextra. com/ fullstory. asp?id=15013). Finextra. March 7, 2006. .
[52] Richardson, Tim (May 3, 2005). "Brits fall prey to phishing" (http:/ / www. theregister. co. uk/ 2005/ 05/ 03/ aol_phishing/ ). The Register. .
[53] Miller, Rich. "Bank, Customers Spar Over Phishing Losses" (http:/ / news. netcraft. com/ archives/ 2006/ 09/ 13/
bank_customers_spar_over_phishing_losses. html). Netcraft. . Retrieved December 14, 2006.
[54] "Latest News" (http:/ / applications. boi. com/ updates/ Article?PR_ID=1430). .
[55] "Bank of Ireland agrees to phishing refunds – vnunet.com" (http:/ / www. vnunet. com/ vnunet/ news/ 2163714/ bank-ireland-backtracks). .
[56] Ponnurangam Kumaraguru, Yong Woo Rhee, Alessandro Acquisti, Lorrie Cranor, Jason Hong and Elizabeth Nunge (November 2006).
"Protecting People from Phishing: The Design and Evaluation of an Embedded Training Email System" (http:/ / www. cylab. cmu. edu/ files/
cmucylab06017. pdf) (PDF). Technical Report CMU-CyLab-06-017, CyLab, Carnegie Mellon University.. . Retrieved November 14, 2006.
[57] Bank, David (August 17, 2005). "'Spear Phishing' Tests Educate People About Online Scams" (http:/ / online. wsj. com/ public/ article/
0,,SB112424042313615131-z_8jLB2WkfcVtgdAWf6LRh733sg_20060817,00. html?mod=blogs). The Wall Street Journal. .
[58] "Anti-Phishing Tips You Should Not Follow" (http:/ / www. hexview. com/ sdp/ node/ 24). HexView. . Retrieved June 19, 2006.
[59] "Protect Yourself from Fraudulent Emails" (https:/ / www. paypal. com/ us/ cgi-bin/ webscr?cmd=_vdc-security-spoof-outside). PayPal. .
Retrieved July 7, 2006.
[60] Markus Jakobsson, Alex Tsow, Ankur Shah, Eli Blevis, Youn-kyung Lim.. "What Instills Trust? A Qualitative Study of Phishing." (http:/ /
www. informatics. indiana. edu/ markus/ papers/ trust_USEC. pdf) (PDF). USEC '06. .
[61] Zeltser, Lenny (March 17, 2006). "Phishing Messages May Include Highly-Personalized Information" (http:/ / isc. incidents. org/ diary.
php?storyid=1194). The SANS Institute. .
[62] Markus Jakobsson and Jacob Ratkiewicz. "Designing Ethical Phishing Experiments" (http:/ / www2006. org/ programme/ item.
php?id=3533). WWW '06. .
[63] Kawamoto, Dawn (August 4, 2005). "Faced with a rise in so-called pharming and crimeware attacks, the Anti-Phishing Working Group will
expand its charter to include these emerging threats." (http:/ / www. zdnetindia. com/ news/ features/ stories/ 126569. html). ZDNet India. .
[64] "Social networking site teaches insecure password practices" (http:/ / blog. anta. net/ 2008/ 11/ 09/
social-networking-site-teaches-insecure-password-practices/ ). blog.anta.net. 2008-11-09. ISSN 1797-1993. . Retrieved 2008-11-09.
[65] Brandt, Andrew. "Privacy Watch: Protect Yourself With an Antiphishing Toolbar" (http:/ / www. pcworld. com/ article/ 125739-1/ article.
html). PC World – Privacy Watch. . Retrieved September 25, 2006.
[66] Jøsang, Audun and Pope, Simon. "User Centric Identity Management" (http:/ / www. unik. no/ people/ josang/ papers/ JP2005-AusCERT.
pdf) (PDF). Proceedings of AusCERT 2005. . Retrieved 2008.
[67] " Phishing - What it is and How it Will Eventually be Dealt With (http:/ / www. arraydev. com/ commerce/ jibc/ 2005-02/ jibc_phishing.
HTM)" by Ian Grigg 2005
[68] " Brand matters (IE7, Skype, Vonage, Mozilla) (https:/ / financialcryptography. com/ mt/ archives/ 000645. html)" Ian Grigg
[69] Franco, Rob. "Better Website Identification and Extended Validation Certificates in IE7 and Other Browsers" (http:/ / blogs. msdn. com/ ie/
archive/ 2005/ 11/ 21/ 495507. aspx). IEBlog. . Retrieved May 20, 2006.
[70] "Bon Echo Anti-Phishing" (http:/ / www. mozilla. org/ projects/ bonecho/ anti-phishing/ ). Mozilla. . Retrieved June 2, 2006.
[71] "Safari 3.2 finally gains phishing protection" (http:/ / arstechnica. com/ journals/ apple. ars/ 2008/ 11/ 13/
safari-3-2-finally-gains-phishing-protection). Ars Technica. November 13, 2008. . Retrieved November 15, 2008.
[72] "Gone Phishing: Evaluating Anti-Phishing Tools for Windows" (http:/ / www. 3sharp. com/ projects/ antiphish/ index. htm). 3Sharp.
September 27, 2006. . Retrieved 2006-10-20.
[73] "Two Things That Bother Me About Google’s New Firefox Extension" (http:/ / www. oreillynet. com/ onlamp/ blog/ 2005/ 12/
two_things_that_bother_me_abou. html). Nitesh Dhanjani on O'Reilly ONLamp. . Retrieved July 1, 2007.
[74] "Firefox 2 Phishing Protection Effectiveness Testing" (http:/ / www. mozilla. org/ security/ phishing-test. html). . Retrieved January 23,
2007.
[75] Higgins, Kelly Jackson. "DNS Gets Anti-Phishing Hook" (http:/ / www. darkreading. com/ document. asp?doc_id=99089& WT.
svl=news1_1). Dark Reading. . Retrieved October 8, 2006.
[76] Krebs, Brian (August 31, 2006). "Using Images to Fight Phishing" (http:/ / blog. washingtonpost. com/ securityfix/ 2006/ 08/
using_images_to_fight_phishing. html). Security Fix. .
[77] Seltzer, Larry (August 2, 2004). "Spotting Phish and Phighting Back" (http:/ / www. eweek. com/ article2/ 0,1759,1630161,00. asp). eWeek.
.
[78] Bank of America. "How Bank of America SiteKey Works For Online Banking Security" (http:/ / www. bankofamerica. com/ privacy/
sitekey/ ). . Retrieved January 23, 2007.
[79] Brubaker, Bill (July 14, 2005). "Bank of America Personalizes Cyber-Security" (http:/ / www. washingtonpost. com/ wp-dyn/ content/
article/ 2005/ 07/ 13/ AR2005071302181. html). Washington Post. .
[80] Stone, Brad (February 5, 2007). "Study Finds Web Antifraud Measure Ineffective" (http:/ / www. nytimes. com/ 2007/ 02/ 05/ technology/
05secure. html?ex=1328331600& en=295ec5d0994b0755& ei=5090& partner=rssuserland& emc=rss). New York Times. . Retrieved
February 5, 2007.
[81] Stuart Schechter, Rachna Dhamija, Andy Ozment, Ian Fischer (May 2007). "The Emperor's New Security Indicators: An evaluation of
website authentication and the effect of role playing on usability studies" (http:/ / www. deas. harvard. edu/ ~rachna/ papers/
emperor-security-indicators-bank-sitekey-phishing-study. pdf) (PDF). IEEE Symposium on Security and Privacy, May 2007. . Retrieved
February 5, 2007.
[82] "Phishers target Nordea's one-time password system" (http:/ / www. finextra. com/ fullstory. asp?id=14384). Finextra. October 12, 2005. .
[83] Krebs, Brian (July 10, 2006). "Citibank Phish Spoofs 2-Factor Authentication" (http:/ / blog. washingtonpost. com/ securityfix/ 2006/ 07/
citibank_phish_spoofs_2factor_1. html). Security Fix. .
[84] Graham Titterington. "More doom on phishing" (http:/ / www. ovum. com/ news/ euronews. asp?id=4166). Ovum Research, April 2006. .
[85] Schneier, Bruce. "Security Skins" (http:/ / www. schneier. com/ blog/ archives/ 2005/ 07/ security_skins. html). Schneier on Security. .
Retrieved December 3, 2006.
[86] Rachna Dhamija, J.D. Tygar (July 2005). "The Battle Against Phishing: Dynamic Security Skins" (http:/ / people. deas. harvard. edu/
~rachna/ papers/ securityskins. pdf) (PDF). Symposium On Usable Privacy and Security (SOUPS) 2005. . Retrieved February 5, 2007.
[87] Madhusudhanan Chandrasekaran, Krishnan Narayanan, Shambhu Upadhyaya (March 2006). "Phishing E-mail Detection Based on
Structural Properties" (http:/ / www. albany. edu/ iasymposium/ 2006/ chandrasekaran. pdf) (PDF). NYS Cyber Security Symposium. .
[88] Ian Fette, Norman Sadeh, Anthony Tomasic (June 2006). "Learning to Detect Phishing Emails" (http:/ / reports-archive. adm. cs. cmu. edu/
anon/ isri2006/ CMU-ISRI-06-112. pdf) (PDF). Carnegie Mellon University Technical Report CMU-ISRI-06-112. .
[89] "Anti-Phishing Working Group: Vendor Solutions" (http:/ / www. antiphishing. org/ solutions. html#takedown). Anti-Phishing Working
Group. . Retrieved July 6, 2006.
[90] McMillan, Robert (March 28, 2006). "New sites let users find and report phishing" (http:/ / www. linuxworld. com. au/ index. php/
id;1075406575;fp;2;fpid;1. ). LinuxWorld. .
[91] Schneier, Bruce (2006-10-05). "PhishTank" (http:/ / www. schneier. com/ blog/ archives/ 2006/ 10/ phishtank. html). Schneier on Security. .
Retrieved 2007-12-07.
[92] "Phone Phishing" (http:/ / phonephishing. info). Phone Phishing. . Retrieved Feb 25, 2009.
[93] "Federal Trade Commission" (http:/ / www. ftc. gov/ phonefraud). Federal Trade Commission. . Retrieved Mar 6, 2009.
[94] Legon, Jeordan (January 26, 2004). "'Phishing' scams reel in your identity" (http:/ / www. cnn. com/ 2003/ TECH/ internet/ 07/ 21/ phishing.
scam/ index. html). CNN. .
[95] Leyden, John (March 21, 2005). "Brazilian cops net 'phishing kingpin'" (http:/ / www. channelregister. co. uk/ 2005/ 03/ 21/
brazil_phishing_arrest/ ). The Register. .
[96] Roberts, Paul (June 27, 2005). "UK Phishers Caught, Packed Away" (http:/ / www. eweek. com/ article2/ 0,1895,1831960,00. asp). eWEEK.
.
[97] "Nineteen Individuals Indicted in Internet 'Carding' Conspiracy" (http:/ / www. cybercrime. gov/ mantovaniIndict. htm). . Retrieved
November 20, 2005.
[98] "8 held over suspected phishing fraud". The Daily Yomiuri. May 31, 2006.
[99] "Phishing gang arrested in USA and Eastern Europe after FBI investigation" (http:/ / www. sophos. com/ pressoffice/ news/ articles/ 2006/
11/ phishing-arrests. html). . Retrieved December 14, 2006.
[100] "Phishers Would Face 5 Years Under New Bill" (http:/ / informationweek. com/ story/ showArticle. jhtml?articleID=60404811).
Information Week. March 2, 2005. .
[101] "Fraud Act 2006" (http:/ / www. opsi. gov. uk/ ACTS/ en2006/ 2006en35. htm). . Retrieved December 14, 2006.
[102] "Prison terms for phishing fraudsters" (http:/ / www. theregister. co. uk/ 2006/ 11/ 14/ fraud_act_outlaws_phishing/ ). The Register.
November 14, 2006. .
[103] "Microsoft Partners with Australian Law Enforcement Agencies to Combat Cyber Crime" (http:/ / www. microsoft. com/ australia/
presspass/ news/ pressreleases/ cybercrime_31_3_05. aspx). . Retrieved August 24, 2005.
[104] Espiner, Tom (March 20, 2006). "Microsoft launches legal assault on phishers" (http:/ / news. zdnet. co. uk/ 0,39020330,39258528,00.
htm). ZDNet. .
[105] Leyden, John (November 23, 2006). "MS reels in a few stray phish" (http:/ / www. theregister. co. uk/ 2006/ 11/ 23/
ms_anti-phishing_campaign_update/ ). The Register. .
[106] "A History of Leadership - 2006" (http:/ / corp. aol. com/ whoweare/ history/ 2006. shtml). .
[107] "AOL Takes Fight Against Identity Theft To Court, Files Lawsuits Against Three Major Phishing Gangs" (http:/ / media. aoltimewarner.
com/ media/ newmedia/ cb_press_view. cfm?release_num=55254535). . Retrieved March 8, 2006.
[108] "HB 2471 Computer Crimes Act; changes in provisions, penalty." (http:/ / leg1. state. va. us/ cgi-bin/ legp504. exe?051+ sum+ HB2471). .
Retrieved March 8, 2006.
[109] Brulliard, Karin (April 10, 2005). "Va. Lawmakers Aim to Hook Cyberscammers" (http:/ / www. washingtonpost. com/ wp-dyn/ articles/
A40578-2005Apr9. html). Washington Post. .
[110] "Earthlink evidence helps slam the door on phisher site spam ring" (http:/ / www. earthlink. net/ about/ press/ pr_phishersite/ ). . Retrieved
December 14, 2006.
[111] Prince, Brian (January 18, 2007). "Man Found Guilty of Targeting AOL Customers in Phishing Scam" (http:/ / www. pcmag. com/ article2/
0,1895,2085183,00. asp). PCMag.com. .
[112] Leyden, John (January 17, 2007). "AOL phishing fraudster found guilty" (http:/ / www. theregister. co. uk/ 2007/ 01/ 17/
aol_phishing_fraudster/ ). The Register. .
[113] Leyden, John (June 13, 2007). "AOL phisher nets six years' imprisonment" (http:/ / www. theregister. co. uk/ 2007/ 06/ 13/
aol_fraudster_jailed/ ). The Register. .
[114] Gaudin, Sharon (June 12, 2007). "California Man Gets 6-Year Sentence For Phishing" (http:/ / www. informationweek. com/ story/
showArticle. jhtml?articleID=199903450). InformationWeek. .
[115] http:/ / www. antiphishing. org
[116] http:/ / www. utica. edu/ academic/ institutes/ cimip/
[117] http:/ / ha. ckers. org/ blog/ 20060609/ how-phishing-actually-works/
[118] http:/ / www. law. duke. edu/ journals/ dltr/ articles/ 2005dltr0006. html
[119] http:/ / www. honeynet. org/ papers/ phishing/
[120] http:/ / www. securityfocus. com/ infocus/ 1745
[121] http:/ / www. technicalinfo. net/ papers/ Phishing. html
[122] http:/ / research. microsoft. com/ en-us/ um/ people/ cormac/ Papers/ PhishingAsTragedy. pdf
[123] http:/ / www. phishtank. com/
[124] http:/ / www. cl. cam. ac. uk/ %7Ernc1/ takedown. pdf
[125] http:/ / www. internetnews. com/ security/ article. php/ 3882136/ One+ Gang+ Responsible+ For+ Most+ Phishing+ Attacks. htm
6.0 Information
Data security
Data security means ensuring that data is kept safe from corruption and that access to it is suitably
controlled. Data security thus helps to ensure privacy and to protect personal data.
Data Security Technologies
Disk Encryption
Disk encryption refers to encryption technology that encrypts data on a hard disk drive. Disk encryption typically
takes the form of either software (see disk encryption software) or hardware (see disk encryption hardware). Disk
encryption is often referred to as on-the-fly encryption ("OTFE") or transparent encryption.
Hardware based Mechanisms for Protecting Data
Software-based security solutions encrypt data to prevent it from being stolen. However, a malicious program
or a hacker may corrupt the data so that it becomes unrecoverable or unusable. Similarly, encrypted operating
systems can be corrupted by a malicious program or a hacker, making the system unusable. Hardware-based security
solutions can prevent read and write access to data and hence offer strong protection against tampering and
unauthorized access.
Hardware-based or hardware-assisted computer security offers an alternative to software-only computer security.
Security tokens such as those using PKCS#11 may be more secure because physical access is required before they
can be compromised. Access is enabled only when the token is connected and the correct PIN is entered (see
two-factor authentication). However, dongles can be used by anyone who gains physical access to them. Newer
hardware-based technologies aim to address this problem by tying access to the user rather than to possession of
the device alone.
How hardware-based security works: a hardware device allows a user to log in, log out, and set different privilege
levels through manual actions. The device uses biometric technology to prevent malicious users from logging in,
logging out, or changing privilege levels. The current state of the user is read by controllers in peripheral devices
such as hard disks, and the hard-disk and DVD controllers block access attempts that are inconsistent with that
state. Hardware-based access control is more robust than protection provided by operating systems, since operating
systems are vulnerable to attack by viruses and hackers, and the data on hard disks can be corrupted once malicious
access is obtained. With hardware-based protection, software cannot manipulate the user privilege levels, which
makes it far harder for a hacker or a malicious program to gain access to data protected by hardware or to perform
unauthorized privileged operations. The hardware also protects the operating system image and file system
privileges from tampering. A highly secure system can therefore be built by combining hardware-based security
with sound system-administration policies.
Data security
Backups
Backups are used to ensure that data which has been lost can be recovered.
Data Masking
Data masking of structured data is the process of obscuring (masking) specific data within a database table or cell to
ensure that data security is maintained and sensitive information is not exposed to unauthorized personnel. This may
include masking the data from users (for example, so that bank customer representatives can only see the last four
digits of a customer's national identity number), from developers (who need real production data to test new software
releases but should not be able to see sensitive financial data), from outsourcing vendors, and so on.
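As an illustration, the last-four-digits rule mentioned above can be sketched as a small masking helper (the function name and defaults here are hypothetical, not part of any standard API):

```python
def mask_identifier(value: str, visible: int = 4, mask_char: str = "*") -> str:
    """Mask all but the last `visible` characters of an identifier."""
    if len(value) <= visible:
        # Too short to reveal anything safely: mask it entirely.
        return mask_char * len(value)
    return mask_char * (len(value) - visible) + value[-visible:]

print(mask_identifier("1234567890123456"))  # ************3456
```

The same idea extends to masking names, addresses or account numbers before data is handed to developers or vendors.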
Data Erasure
Data erasure is a method of software-based overwriting that completely destroys all electronic data residing on a
hard drive or other digital media to ensure that no sensitive data is leaked when an asset is retired or reused.
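The overwriting idea can be sketched as follows. This is a minimal illustration only: real data-erasure tools must also contend with wear levelling, journaling file systems and remapped sectors that a simple file-level overwrite cannot reach.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes, then delete it.

    Illustrative sketch only; device-level erasure requires dedicated tools.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # overwrite every byte
            f.flush()
            os.fsync(f.fileno())                # force the write to disk
    os.remove(path)
```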
International Laws and Standards
International Laws
In the UK, the Data Protection Act is used to ensure that personal data is accessible to those whom it concerns, and
provides redress to individuals if there are inaccuracies. This is particularly important to ensure individuals are
treated fairly, for example for credit-checking purposes. The Data Protection Act states that only individuals and
companies with legitimate and lawful reasons can process personal information, and that it cannot be shared.
International Standards
The International Standard ISO/IEC 17799 covers data security under the topic of information security, and one of
its cardinal principles is that all stored information, i.e. data, should be owned so that it is clear whose responsibility
it is to protect and control access to that data.
The Trusted Computing Group is an organization that helps standardize computing security technologies.
See also
• Copy Protection
• Data masking
• Data erasure
• Data recovery
• Digital inheritance
• Disk encryption
• Comparison of disk encryption software
• Hardware Based Security for Computers, Data and Information
• Pre-boot authentication
• Secure USB drive
• Security Breach Notification Laws
• Single sign-on
• Smartcard
• Trusted Computing Group
Information security
Information security means protecting
information and information systems from
unauthorized access, use, disclosure,
disruption, modification or destruction.[1]
The terms information security, computer
security and information assurance are
frequently used interchangeably, although
incorrectly. These fields are often
interrelated and share the common goals of
protecting the confidentiality, integrity and
availability of information; however, there
are some subtle differences between them.
These differences lie primarily in the
approach to the subject, the methodologies
used, and the areas of concentration.
Information security is concerned with the
confidentiality, integrity and availability of
data regardless of the form the data may
take: electronic, print, or other forms.
Computer security can focus on ensuring the
availability and correct operation of a
computer system without concern for the
information stored or processed by the
computer.
[Figure caption] Information security components, or qualities: confidentiality, integrity and availability (CIA).
Information systems are decomposed into three main portions (hardware, software and communications) in order to
identify and apply information security industry standards, as mechanisms of protection and prevention, at three
levels or layers: physical, personal and organizational. Essentially, procedures or policies are implemented to tell
people (administrators, users and operators) how to use products to ensure information security within organizations.
Governments, military, corporations, financial institutions, hospitals, and private businesses amass a great deal of
confidential information about their employees, customers, products, research, and financial status. Most of this
information is now collected, processed and stored on electronic computers and transmitted across networks to other
computers.
Should confidential information about a business' customers or finances or new product line fall into the hands of a
competitor, such a breach of security could lead to lost business, law suits or even bankruptcy of the business.
Protecting confidential information is a business requirement, and in many cases also an ethical and legal
requirement.
For the individual, information security has a significant effect on privacy, which is viewed very differently in
different cultures.
The field of information security has grown and evolved significantly in recent years, and there are many ways of
gaining entry into it as a career. It offers many areas for specialization, including securing networks and allied
infrastructure, securing applications and databases, security testing, information systems auditing, business
continuity planning and digital forensic science, to name a few; this work is carried out by information security
consultants, among others.
This article presents a general overview of information security and its core concepts.
History
Since the early days of writing, heads of state and military commanders understood that it was necessary to provide
some mechanism to protect the confidentiality of written correspondence and to have some means of detecting
tampering.
Julius Caesar is credited with the invention of the Caesar cipher c. 50 B.C., which was created in order to prevent his
secret messages from being read should a message fall into the wrong hands.
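The Caesar cipher simply shifts each letter a fixed number of positions through the alphabet (Caesar reportedly used a shift of three). A modern sketch:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping within the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)  # "DWWDFN DW GDZQ"
plaintext = caesar(ciphertext, -3)        # shifting back recovers the message
```

With only 25 possible shifts, the cipher is trivially broken today, but it illustrates the basic goal of keeping a message unreadable to anyone who intercepts it.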
World War II brought about many advancements in information security and marked the beginning of the
professional field of information security.
The end of the 20th century and early years of the 21st century saw rapid advancements in telecommunications,
computing hardware and software, and data encryption. The availability of smaller, more powerful and less
expensive computing equipment made electronic data processing within the reach of small business and the home
user. These computers quickly became interconnected through the global network known as the Internet.
The rapid growth and widespread use of electronic data processing and electronic business conducted through the
Internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting
the computers and the information they store, process and transmit. The academic disciplines of computer security,
information security and information assurance emerged along with numerous professional organizations - all
sharing the common goals of ensuring the security and reliability of information systems.
Basic principles
Key concepts
For over twenty years, confidentiality, integrity and availability (known as the CIA triad) have been held to be the
core principles of information security.
There is continuous debate about extending this classic trio. Other principles such as Accountability have sometimes
been proposed for addition - it has been pointed out that issues such as Non-Repudiation do not fit well within the
three core concepts, and as regulation of computer systems has increased (particularly amongst the Western nations)
Legality is becoming a key consideration for practical security installations.
In 2002, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements
of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility. The
merits of the Parkerian hexad are a subject of debate amongst security professionals.
Confidentiality
Confidentiality refers to preventing the disclosure of information to unauthorized individuals or systems. For
example, a credit card transaction on the Internet requires the credit card number to be transmitted from the buyer to
the merchant and from the merchant to a transaction processing network. The system attempts to enforce
confidentiality by encrypting the card number during transmission, by limiting the places where it might appear (in
databases, log files, backups, printed receipts, and so on), and by restricting access to the places where it is stored. If
an unauthorized party obtains the card number in any way, a breach of confidentiality has occurred.
Breaches of confidentiality take many forms. Permitting someone to look over your shoulder at your computer
screen while you have confidential data displayed on it could be a breach of confidentiality. If a laptop computer
containing sensitive information about a company's employees is stolen or sold, it could result in a breach of
confidentiality. Giving out confidential information over the telephone is a breach of confidentiality if the caller is
not authorized to have the information.
Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose personal information
a system holds.
Integrity
In information security, integrity means that data cannot be modified without authorization. This is not the same
thing as referential integrity in databases. Integrity is violated when an employee accidentally or with malicious
intent deletes important data files, when a computer virus infects a computer, when an employee is able to modify
his own salary in a payroll database, when an unauthorized user vandalizes a web site, when someone is able to cast
a very large number of votes in an online poll, and so on.
There are many ways in which integrity could be violated without malicious intent. In the simplest case, a user on a
system could mis-type someone's address. On a larger scale, if an automated process is not written and tested
correctly, bulk updates to a database could alter data in an incorrect way, leaving the integrity of the data
compromised. Information security professionals are tasked with finding ways to implement controls that prevent
errors of integrity.
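One common control against undetected modification is a cryptographic checksum: store a digest of the data and recompute it later to see whether anything changed. A minimal sketch using SHA-256:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that changes if the data changes."""
    return hashlib.sha256(data).hexdigest()

record = b"salary=50000"
stored_digest = fingerprint(record)  # kept alongside (or apart from) the data

# Later: recompute and compare to detect modification.
tampered = b"salary=90000"
assert fingerprint(record) == stored_digest    # unchanged data verifies
assert fingerprint(tampered) != stored_digest  # any change is detected
```

A plain digest detects accidental corruption; detecting deliberate tampering additionally requires that the digest itself be protected, for example with a keyed MAC or a digital signature.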
Availability
For any information system to serve its purpose, the information must be available when it is needed. This means
that the computing systems used to store and process the information, the security controls used to protect it, and the
communication channels used to access it must be functioning correctly. High availability systems aim to remain
available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades.
Ensuring availability also involves preventing denial-of-service attacks.
Authenticity
In computing, e-Business and information security it is necessary to ensure that the data, transactions,
communications or documents (electronic or physical) are genuine. It is also important for authenticity to validate
that both parties involved are who they claim they are.
Non-repudiation
In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also implies that one party
of a transaction cannot deny having received a transaction nor can the other party deny having sent a transaction.
Electronic commerce uses technology such as digital signatures and encryption to establish authenticity and
non-repudiation.
Risk management
A comprehensive treatment of the topic of risk management is beyond the scope of this article. However, a useful
definition of risk management will be provided as well as some basic terminology and a commonly used process for
risk management.
The CISA Review Manual 2006 provides the following definition of risk management: "Risk management is the
process of identifying vulnerabilities and threats to the information resources used by an organization in achieving
business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based
on the value of the information resource to the organization."[2]
There are two things in this definition that may need some clarification. First, the process of risk management is an
ongoing iterative process. It must be repeated indefinitely. The business environment is constantly changing and new
threats and vulnerability emerge every day. Second, the choice of countermeasures (controls) used to manage risks
must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the
informational asset being protected.
Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the
asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A
threat is anything (man made or act of nature) that has the potential to cause harm.
The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a
vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of
availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It
should be pointed out that it is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining
risk is called residual risk.
A risk assessment is carried out by a team of people who have knowledge of specific areas of the business.
Membership of the team may vary over time as different parts of the business are assessed. The assessment may use
a subjective qualitative analysis based on informed opinion, or where reliable dollar figures and historical
information is available, the analysis may use quantitative analysis.
The ISO/IEC 27002:2005 Code of practice for information security management recommends the following be
examined during a risk assessment:
• security policy,
• organization of information security,
• asset management,
• human resources security,
• physical and environmental security,
• communications and operations management,
• access control,
• information systems acquisition, development and maintenance,
• information security incident management,
• business continuity management, and
• regulatory compliance.
In broad terms the risk management process consists of:
1. Identification of assets and estimating their value. Include: people, buildings, hardware, software, data
(electronic, print, other), supplies.
2. Conduct a threat assessment. Include: Acts of nature, acts of war, accidents, malicious acts originating from
inside or outside the organization.
3. Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited.
Evaluate policies, procedures, standards, training, physical security, quality control, technical security.
4. Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis.
5. Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost
effectiveness, and value of the asset.
6. Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost effective
protection without discernible loss of productivity.
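For step 4, a widely used quantitative formula is the annualized loss expectancy, ALE = SLE x ARO, where the single loss expectancy (SLE) is the asset value times the exposure factor and ARO is the annual rate of occurrence. A sketch with invented figures:

```python
def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical figures: a $100,000 server, 30% of its value lost per
# incident, with an incident expected once every two years (ARO = 0.5).
ale = annualized_loss_expectancy(100_000, 0.30, 0.5)
```

Comparing the ALE against the annual cost of a proposed control gives a rough test of whether the control is cost effective.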
For any given risk, Executive Management can choose to accept the risk based upon the relative low value of the
asset, the relative low frequency of occurrence, and the relative low impact on the business. Or, leadership may
choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some
cases, the risk can be transferred to another business by buying insurance or out-sourcing to another business. The
reality of some risks may be disputed. In such cases leadership may choose to deny the risk. This is itself a potential
risk.
Controls
When Management chooses to mitigate a risk, they will do so by implementing one or more of three different types
of controls.
Administrative
Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards
and guidelines. Administrative controls form the framework for running the business and managing people. They
inform people on how the business is to be run and how day to day operations are to be conducted. Laws and
regulations created by government bodies are also a type of administrative control because they inform the business.
Some industry sectors have policies, procedures, standards and guidelines that must be followed - the Payment Card
Industry (PCI) Data Security Standard required by Visa and Master Card is such an example. Other examples of
administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary
policies.
Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical
and physical controls are manifestations of administrative controls. Administrative controls are of paramount
importance.
Logical
Logical controls (also called technical controls) use software and data to monitor and control access to information
and computing systems. For example: passwords, network and host based firewalls, network intrusion detection
systems, access control lists, and data encryption are logical controls.
An important logical control that is frequently overlooked is the principle of least privilege. The principle of least
privilege requires that an individual, program or system process is not granted any more access privileges than are
necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging
into Windows as user Administrator to read Email and surf the Web. Violations of this principle can also occur when
an individual collects additional access privileges over time. This happens when employees' job duties change, or
they are promoted to a new position, or they transfer to another department. The access privileges required by their
new duties are frequently added onto their already existing access privileges which may no longer be necessary or
appropriate.
Physical
Physical controls monitor and control the environment of the work place and computing facilities. They also monitor
and control access to and from such facilities. For example: doors, locks, heating and air conditioning, smoke and
fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, cable locks, etc. Separating the
network and work place into functional areas are also physical controls.
An important physical control that is frequently overlooked is the separation of duties. Separation of duties ensures
that an individual cannot complete a critical task alone. For example, an employee who submits a request for
reimbursement should not also be able to authorize payment or print the check. An applications programmer should
not also be the server administrator or the database administrator - these roles and responsibilities must be separated
from one another.[3]
Security classification for information
An important aspect of information security and risk management is recognizing the value of information and
defining appropriate procedures and protection requirements for the information. Not all information is equal and so
not all information requires the same degree of protection. This requires information to be assigned a security
classification.
The first step in information classification is to identify a member of senior management as the owner of the
particular information to be classified. Next, develop a classification policy. The policy should describe the different
classification labels, define the criteria for information to be assigned a particular label, and list the required security
controls for each classification.
Some factors that influence which classification information should be assigned include how much value that
information has to the organization, how old the information is and whether or not the information has become
obsolete. Laws and other regulatory requirements are also important considerations when classifying information.
The type of information security classification labels selected and used will depend on the nature of the organisation,
with examples being:
• In the business sector, labels such as: Public, Sensitive, Private, Confidential.
• In the government sector, labels such as: Unclassified, Sensitive But Unclassified, Restricted, Confidential,
Secret, Top Secret and their non-English equivalents.
• In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber and Red.
All employees in the organization, as well as business partners, must be trained on the classification schema and
understand the required security controls and handling procedures for each classification. The classification a
particular information asset has been assigned should be reviewed periodically to ensure the classification is still
appropriate for the information and to ensure the security controls required by the classification are in place.
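A classification policy of the kind described above might be represented as a simple mapping from labels to required controls (the labels follow the business-sector examples given earlier; the control names are hypothetical):

```python
# Hypothetical classification policy: label -> required security controls.
CLASSIFICATION_POLICY = {
    "Public":       {"integrity check"},
    "Sensitive":    {"integrity check", "access logging"},
    "Private":      {"integrity check", "access logging",
                     "encryption at rest"},
    "Confidential": {"integrity check", "access logging",
                     "encryption at rest", "encryption in transit"},
}

def required_controls(label: str) -> set:
    """Look up the controls an asset's classification demands."""
    return CLASSIFICATION_POLICY[label]

assert "encryption at rest" in required_controls("Confidential")
```

Representing the policy as data rather than prose makes periodic review straightforward: auditing an asset amounts to checking that every control its label demands is actually in place.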
Access control
Access to protected information must be restricted to people who are authorized to access the information. The
computer programs, and in many cases the computers that process the information, must also be authorized. This
requires that mechanisms be in place to control the access to protected information. The sophistication of the access
control mechanisms should be in parity with the value of the information being protected - the more sensitive or
valuable the information the stronger the control mechanisms need to be. The foundation on which access control
mechanisms are built starts with identification and authentication.
Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my
name is John Doe." they are making a claim of who they are. However, their claim may or may not be true. Before
John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be
John Doe really is John Doe.
Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he
tells the bank teller he is John Doe (a claim of identity). The bank teller asks to see a photo ID, so he hands the teller
his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the
photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then
the teller has authenticated that John Doe is who he claimed to be.
There are three different types of information that can be used for authentication: something you know, something
you have, or something you are. Examples of something you know include such things as a PIN, a password, or
your mother's maiden name. Examples of something you have include a driver's license or a magnetic swipe card.
Something you are refers to biometrics. Examples of biometrics include palm prints, finger prints, voice prints and
retina (eye) scans. Strong authentication requires providing information from two of the three different types of
authentication information. For example, something you know plus something you have. This is called two-factor
authentication.
On computer systems in use today, the Username is the most common form of identification and the Password is the
most common form of authentication. Usernames and passwords have served their purpose but in our modern world
they are no longer adequate. Usernames and passwords are slowly being replaced with more sophisticated
authentication mechanisms.
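The "something you know" factor is typically verified against a stored salted hash rather than the password itself, and the "something you have" factor against a one-time code from a token. A sketch combining a PBKDF2 password hash with an RFC 4226 HOTP code (details such as the iteration count and the example password are illustrative only):

```python
import hashlib
import hmac
import secrets
import struct

def hash_password(password: str, salt: bytes) -> bytes:
    """Salted, slow hash: only the hash is stored, never the password."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: a one-time code derived from a shared secret."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Enrolment: store the salt and hash, share a token secret with the user.
salt = secrets.token_bytes(16)
stored = hash_password("correct horse", salt)
token_secret = secrets.token_bytes(20)

def login(password: str, otp: str, counter: int) -> bool:
    """Both factors must verify for access to be granted."""
    knows = hmac.compare_digest(hash_password(password, salt), stored)
    has = hmac.compare_digest(hotp(token_secret, counter), otp)
    return knows and has

assert login("correct horse", hotp(token_secret, 1), 1)
```

Even if the password is guessed or stolen, access still requires the current code from the physical token, which is the point of combining two factor types.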
After a person, program or computer has successfully been identified and authenticated then it must be determined
what informational resources they are permitted to access and what actions they will be allowed to perform (run,
view, create, delete, or change). This is called authorization.
Authorization to access information and other computing services begins with administrative policies and
procedures. The policies prescribe what information and computing services can be accessed, by whom, and under
what conditions. The access control mechanisms are then configured to enforce these policies.
Different computing systems are equipped with different kinds of access control mechanisms - some may even offer
a choice of different access control mechanisms. The access control mechanism a system offers will be based upon
one of three approaches to access control or it may be derived from a combination of the three approaches.
The non-discretionary approach consolidates all access control under a centralized administration. Access to
information and other resources is usually based on the individual's function (role) in the organization or the tasks
the individual must perform. The discretionary approach gives the creator or owner of the information resource the
ability to control access to those resources. In the mandatory access control approach, access is granted or denied
based upon the security classification assigned to the information resource.
Examples of common access control mechanisms in use today include Role-based access control available in many
advanced Database Management Systems, simple file permissions provided in the UNIX and Windows operating
systems, Group Policy Objects provided in Windows network systems, Kerberos, RADIUS, TACACS, and the
simple access lists used in many firewalls and routers.
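The role-based approach can be sketched as permissions attached to roles, with users gaining permissions only through role membership (the roles and permission names here are hypothetical):

```python
# Hypothetical role-based access control: permissions attach to roles,
# and users acquire permissions only through role membership.
ROLE_PERMISSIONS = {
    "teller":  {"view_account", "post_deposit"},
    "auditor": {"view_account", "view_audit_log"},
}
USER_ROLES = {"alice": {"teller"}, "bob": {"auditor"}}

def is_authorized(user: str, permission: str) -> bool:
    """A user is authorized if any of their roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

assert is_authorized("alice", "post_deposit")
assert not is_authorized("bob", "post_deposit")
```

When an employee changes jobs, only their role memberships need to change, which directly addresses the privilege-accumulation problem noted in the discussion of least privilege.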
To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that
people are held accountable for their actions. All failed and successful authentication attempts must be logged, and
all access to information must leave some type of audit trail.
Cryptography
Information security uses cryptography to transform usable information into a form that renders it unusable by
anyone other than an authorized user; this process is called encryption. Information that has been encrypted
(rendered unusable) can be transformed back into its original usable form by an authorized user, who possesses the
cryptographic key, through the process of decryption. Cryptography is used in information security to protect
information from unauthorized or accidental disclosure while the information is in transit (either electronically or
physically) and while information is in storage.
Cryptography provides information security with other useful applications as well including improved authentication
methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less
secure applications such as Telnet and FTP are slowly being replaced with more secure applications such as SSH that use
encrypted network communications. Wireless communications can be encrypted using protocols such as
WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU-T G.hn) are secured using
AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP
can be used to encrypt data files and Email.
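Two of the applications mentioned above, message digests and message authentication, can be sketched with the Python standard library (the shared key and message are of course hypothetical):

```python
import hashlib
import hmac

message = b"transfer 100 EUR to account 42"

# A plain message digest detects accidental modification...
digest = hashlib.sha256(message).hexdigest()

# ...but anyone can recompute it, so detecting deliberate forgery needs a
# keyed MAC: here HMAC-SHA256 with a hypothetical shared secret key.
key = b"shared-secret-key"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

forged = b"transfer 100 EUR to account 66"
assert hmac.new(key, forged, hashlib.sha256).hexdigest() != tag
assert hmac.compare_digest(
    hmac.new(key, message, hashlib.sha256).hexdigest(), tag)
```

Digital signatures and non-repudiation build on the same idea but use asymmetric keys, so that only the holder of the private key could have produced the tag.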
Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to
be built on industry-accepted algorithms and implementations that have undergone rigorous peer review by independent experts
in cryptography. The length and strength of the encryption key is also an important consideration. A key that is weak
or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the
same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and
destruction and they must be available when needed. PKI solutions address many of the problems that surround key
management.
Defense in depth
Information security must protect information throughout the life span
of the information, from the initial creation of the information on
through to the final disposal of the information. The information must
be protected while in motion and while at rest. During its life time,
information may pass through many different information processing
systems and through many different parts of information processing
systems. There are many different ways the information and
information systems can be threatened. To fully protect the information
during its lifetime, each component of the information processing
system must have its own protection mechanisms. The building up,
layering on and overlapping of security measures is called defense in
depth. The strength of any system is no greater than its weakest link.
Using a defence in depth strategy, should one defensive measure fail
there are other defensive measures in place that continue to provide protection.
Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of
controls can be used to form the basis upon which to build a defense-in-depth strategy. With this approach,
defense-in-depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional
insight into defense-in-depth can be gained by thinking of it as forming the layers of an onion, with data at the core
of the onion, people as the outer layer of the onion, and network security, host-based security and application
security forming the inner layers of the onion. Both perspectives are equally valid and each provides valuable insight
into the implementation of a good defense-in-depth strategy.
Process
The terms reasonable and prudent person, due care and due diligence have been used in the fields of Finance,
Securities, and Law for many years. In recent years these terms have found their way into the fields of computing
and information security. U.S.A. Federal Sentencing Guidelines now make it possible to hold corporate officers
liable for failing to exercise due care and due diligence in the management of their information systems.
In the business world, stockholders, customers, business partners and governments have the expectation that
corporate officers will run the business in accordance with accepted business practices and in compliance with laws
and other regulatory requirements. This is often described as the "reasonable and prudent person" rule. A prudent
person takes due care to ensure that everything necessary is done to operate the business by sound business
principles and in a legal ethical manner. A prudent person is also diligent (mindful, attentive, and ongoing) in their
due care of the business.
In the field of Information Security, Harris[4] offers the following definitions of due care and due diligence:
"Due care are steps that are taken to show that a company has taken responsibility for the activities that
take place within the corporation and has taken the necessary steps to help protect the company, its
resources, and employees." And, [Due diligence are the] "continual activities that make sure the
protection mechanisms are continually maintained and operational."
Attention should be paid to two important points in these definitions. First, in due care, steps are taken to show; this
means that the steps can be verified, measured, or even produce tangible artifacts. Second, in due diligence, there
are continual activities - this means that people are actually doing things to monitor and maintain the protection
mechanisms, and these activities are ongoing.
Security governance
The Software Engineering Institute at Carnegie Mellon University, in a publication titled "Governing for Enterprise
Security (GES)", defines characteristics of effective security governance. These include:
• An enterprise-wide issue
• Leaders are accountable
• Viewed as a business requirement
• Risk-based
• Roles, responsibilities, and segregation of duties defined
• Addressed and enforced in policy
• Adequate resources committed
• Staff aware and trained
• A development life cycle requirement
• Planned, managed, measurable, and measured
• Reviewed and audited
Incident response plans
An incident response plan addresses topics such as:
• Selecting team members
• Defining roles, responsibilities and lines of authority
• Defining a security incident
• Defining a reportable incident
• Training
• Detection
• Classification
• Escalation
• Containment
• Eradication
• Documentation
Change management
Change management is a formal process for directing and controlling alterations to the information processing
environment. This includes alterations to desktop computers, the network, servers and software. The objectives of
change management are to reduce the risks posed by changes to the information processing environment and
improve the stability and reliability of the processing environment as changes are made. It is not the objective of
change management to prevent or hinder necessary changes from being implemented.
Any change to the information processing environment introduces an element of risk. Even apparently simple
changes can have unexpected effects. One of Management's many responsibilities is the management of risk. Change
management is a tool for managing the risks introduced by changes to the information processing environment. Part
of the change management process ensures that changes are not implemented at inopportune times when they may
disrupt critical business processes or interfere with other changes being implemented.
Not every change needs to be managed. Some kinds of changes are a part of the everyday routine of information
processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing
environment. Creating a new user account or deploying a new desktop computer are examples of changes that do not
generally require change management. However, relocating user file shares or upgrading the email server poses a much higher level of risk to the processing environment and is not a normal everyday activity. The critical first
steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope
of the change system.
Change management is usually overseen by a Change Review Board composed of representatives from key business
areas, security, networking, systems administrators, Database administration, applications development, desktop
support and the help desk. The tasks of the Change Review Board can be facilitated with the use of an automated workflow application. The responsibility of the Change Review Board is to ensure that the organization's documented change management procedures are followed. The change management process is as follows:
• Requested: Anyone can request a change. The person making the change request may or may not be the same
person that performs the analysis or implements the change. When a request for change is received, it may undergo a preliminary review to determine if the requested change is compatible with the organization's business model and practices, and to determine the amount of resources needed to implement the change.
• Approved: Management runs the business and controls the allocation of resources; therefore, Management must
approve requests for changes and assign a priority for every change. Management might choose to reject a change
request if the change is not compatible with the business model, industry standards or best practices. Management
might also choose to reject a change request if the change requires more resources than can be allocated for the
change.
• Planned: Planning a change involves discovering the scope and impact of the proposed change; analyzing the complexity of the change; allocating resources; and developing, testing, and documenting both implementation and backout plans. The criteria on which a decision to back out will be made must also be defined.
• Tested: Every change must be tested in a safe test environment, which closely reflects the actual production
environment, before the change is applied to the production environment. The backout plan must also be tested.
• Scheduled: Part of the change review board's responsibility is to assist in the scheduling of changes by reviewing
the proposed implementation date for potential conflicts with other scheduled changes or critical business
activities.
• Communicated: Once a change has been scheduled it must be communicated. The communication is to give
others the opportunity to remind the change review board about other changes or critical business activities that
might have been overlooked when scheduling the change. The communication also serves to make the Help Desk
and users aware that a change is about to occur. Another responsibility of the change review board is to ensure
that scheduled changes have been properly communicated to those who will be affected by the change or
otherwise have an interest in the change.
• Implemented: At the appointed date and time, the changes must be implemented. Part of the planning process was to develop an implementation plan, a testing plan, and a backout plan. If the implementation of the change fails, the post-implementation testing fails, or other "drop dead" criteria are met, the backout plan should be implemented.
• Documented: All changes must be documented. The documentation includes the initial request for change, its approval, the priority assigned to it, the implementation, testing, and backout plans, the results of the change review board critique, the date/time the change was implemented, who implemented it, and whether the change was implemented successfully, failed, or was postponed.
• Post change review: The change review board should hold a post implementation review of changes. It is
particularly important to review failed and backed out changes. The review board should try to understand the
problems that were encountered, and look for areas for improvement.
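The lifecycle above lends itself to a small state-machine sketch. The following Python is purely illustrative; the state names and allowed transitions are this sketch's own simplification of the process described above, not part of any standard:

```python
# Illustrative state machine for the change-management lifecycle.
# State names and transitions are a simplification, not a standard.

ALLOWED = {
    "requested":    {"approved", "rejected"},
    "approved":     {"planned"},
    "planned":      {"tested"},
    "tested":       {"scheduled"},
    "scheduled":    {"communicated"},
    "communicated": {"implemented"},
    "implemented":  {"documented", "backed_out"},
    "backed_out":   {"documented"},
    "documented":   {"reviewed"},
}

class ChangeRequest:
    def __init__(self, description: str):
        self.description = description
        self.state = "requested"          # anyone can request a change
        self.history = ["requested"]

    def advance(self, new_state: str) -> None:
        # Reject transitions the documented procedure does not allow.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

cr = ChangeRequest("upgrade email server")
for step in ["approved", "planned", "tested", "scheduled",
             "communicated", "implemented", "documented", "reviewed"]:
    cr.advance(step)
print(cr.state)  # reviewed
```

In practice such a workflow would live in the automated workflow application mentioned above; the value of even a trivial model like this is that skipping a step (for example, implementing an untested change) is rejected outright.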
Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created
when changes are made to the information processing environment. Good change management procedures improve
the overall quality and success of changes as they are implemented. This is accomplished through planning, peer
review, documentation and communication.
ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps [5] (Full book summary [6]), and the Information Technology Infrastructure Library all provide valuable guidance on implementing an efficient and effective change management program.
Business continuity
Business continuity is the mechanism by which an organization continues to operate its critical business units, during
planned or unplanned disruptions that affect normal business operations, by invoking planned and managed
procedures.
Contrary to what most people think, business continuity is not necessarily an IT system or process, simply because it is about the business as a whole. Today, disasters or disruptions to business are a reality. Whether the disaster is natural or man-made (TIME magazine maintains a website on the top 10), it affects normal life and so business. So why is planning so important? Let us face reality: "all businesses recover", whether they planned for recovery or not, simply because business is about earning money for survival.
Planning is merely getting better prepared to face a disruption, knowing full well that even the best plans may fail. Planning helps reduce the cost of recovery and operational overheads and, most importantly, helps the business sail through smaller disruptions effortlessly.
For businesses to create effective plans they need to focus upon the following key questions. Most of these are
common knowledge, and anyone can do a BCP.
1. Should a disaster strike, what are the first few things that I should do? Should I call people to find if they are OK or call up the bank to figure out if my money is safe? This is Emergency Response. Emergency Response services
help take the first hit when the disaster strikes and if the disaster is serious enough the Emergency Response
teams need to quickly get a Crisis Management team in place.
2. What parts of my business should I recover first? The one that brings me most money or the one where I spend
the most, or the one that will ensure I shall be able to get sustained future growth? The identified sections are the
critical business units. There is no magic bullet here, no one answer satisfies all. Businesses need to find answers
that meet business requirements.
3. How soon should I target to recover my critical business units? In BCP technical jargon this is called Recovery
Time Objective, or RTO. This objective will define what costs the business will need to spend to recover from a
disruption. For example, it is cheaper to recover a business in 1 day than in 1 hour.
4. What all do I need to recover the business? IT, machinery, records...food, water, people...so many aspects to dwell upon. The cost factor becomes clearer now...business leaders need to drive business continuity. Hold on: my IT manager spent $200,000 last month and created a DRP (Disaster Recovery Plan); whatever happened to that? A DRP is about continuing an IT system, and is one of the sections of a comprehensive Business Continuity Plan. See below for more on this.
5. And where do I recover my business from? Will the business center give me space to work, or would it be flooded by many people queuing up for the same reasons that I am?
6. But once I do recover from the disaster and work in reduced production capacity, since my main operational sites are unavailable, how long can this go on? How long can I do without my original sites, systems, and people? This defines the amount of business resilience a business may have.
7. Now that I know how to recover my business, how do I make sure my plan works? Most BCP pundits would recommend testing the plan at least once a year, reviewing it for adequacy, and rewriting or updating the plans either annually or when the business changes.
Disaster recovery planning
While a business continuity plan (BCP) takes a broad approach to dealing with organizational-wide effects of a
disaster, a disaster recovery plan (DRP), which is a subset of the business continuity plan, is instead focused on
taking the necessary steps to resume normal business operations as quickly as possible. A disaster recovery plan is
executed immediately after the disaster occurs and details what steps are to be taken in order to recover critical
information technology infrastructure.[7]
Laws and regulations
Below is a partial listing of European, United Kingdom, Canadian and USA governmental laws and regulations that
have, or will have, a significant effect on data processing and information security. Important industry sector
regulations have also been included when they have a significant impact on information security.
• The UK Data Protection Act 1998 makes new provisions for the regulation of the processing of information relating to individuals, including the obtaining, holding, use or disclosure of such information. The European Union Data Protection Directive (EUDPD) requires that all EU members adopt national regulations to standardize the protection of data privacy for citizens throughout the EU.
• The Computer Misuse Act 1990 is an Act of the UK Parliament making computer crime (e.g. cracking, sometimes incorrectly referred to as hacking) a criminal offence. The Act has become a model upon which
several other countries including Canada and the Republic of Ireland have drawn inspiration when subsequently
drafting their own information security laws.
• EU data retention laws require Internet service providers and phone companies to keep data on every electronic message sent and phone call made for between six months and two years.
• The Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232 [8] g; 34 CFR Part 99) is a USA
Federal law that protects the privacy of student education records. The law applies to all schools that receive
funds under an applicable program of the U.S. Department of Education. Generally, schools must have written
permission from the parent or eligible student in order to release any information from a student's education
record.
• Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires the adoption of national standards
for electronic health care transactions and national identifiers for providers, health insurance plans, and
employers. And, it requires health care providers, insurance providers and employers to safeguard the security and
privacy of health data.
• Gramm-Leach-Bliley Act of 1999 (GLBA), also known as the Financial Services Modernization Act of 1999,
protects the privacy and security of private financial information that financial institutions collect, hold, and
process.
• Sarbanes-Oxley Act of 2002 (SOX). Section 404 of the act requires publicly traded companies to assess the
effectiveness of their internal controls for financial reporting in annual reports they submit at the end of each
fiscal year. Chief information officers are responsible for the security, accuracy and the reliability of the systems
that manage and report the financial data. The act also requires publicly traded companies to engage independent
auditors who must attest to, and report on, the validity of their assessments.
• Payment Card Industry Data Security Standard (PCI DSS) establishes comprehensive requirements for enhancing
payment account data security. It was developed by the founding payment brands of the PCI Security Standards
Council, including American Express, Discover Financial Services, JCB, MasterCard Worldwide and Visa
International, to help facilitate the broad adoption of consistent data security measures on a global basis. The PCI
DSS is a multifaceted security standard that includes requirements for security management, policies, procedures,
network architecture, software design and other critical protective measures.
• State Security Breach Notification Laws (California and many others) require businesses, nonprofits, and state
institutions to notify consumers when unencrypted "personal information" may have been compromised, lost, or
stolen.
• The Personal Information Protection and Electronic Documents Act (PIPEDA) is a Canadian Act to support and promote electronic commerce by protecting personal information that is collected, used or disclosed in certain circumstances, by providing for the use of electronic means to communicate or record information or transactions and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act.
Sources of standards
The International Organization for Standardization (ISO) is a consortium of national standards institutes from 157 countries, with a Central Secretariat in Geneva, Switzerland, that coordinates the system. The ISO is the world's largest
developer of standards. The ISO-15443: "Information technology - Security techniques - A framework for IT
security assurance", ISO-27002 (previously ISO-17799): "Information technology - Security techniques - Code of
practice for information security management", ISO-20000: "Information technology - Service management", and
ISO-27001: "Information technology - Security techniques - Information security management systems" are of
particular interest to information security professionals.
The USA National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S.
Department of Commerce. The NIST Computer Security Division develops standards, metrics, tests and validation
programs as well as publishes standards and guidelines to increase secure IT planning, implementation, management
and operation. NIST is also the custodian of the USA Federal Information Processing Standard publications (FIPS).
The Internet Society is a professional membership society with more than 100 organizational and over 20,000
individual members in over 180 countries. It provides leadership in addressing issues that confront the future of the
Internet, and is the organization home for the groups responsible for Internet infrastructure standards, including the
Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB). The ISOC hosts the Requests for
Comments (RFCs) which includes the Official Internet Protocol Standards and the RFC-2196 Site Security
Handbook.
The Information Security Forum is a global nonprofit organization of several hundred leading organizations in
financial services, manufacturing, telecommunications, consumer goods, government, and other areas. It provides best-practice research and advice, summarized in its biannual Standard of Good Practice, which incorporates detailed specifications across many areas.
The IT Baseline Protection Catalogs, or IT-Grundschutz Catalogs ("IT Baseline Protection Manual" before 2005), are a collection of documents from the German Federal Office for Information Security (BSI), useful for detecting and combating security-relevant weak points in the IT environment ("IT cluster"). The collection encompasses over 3000 pages, including the introduction and catalogs.
Professionalism
In 1989, Carnegie Mellon University established the Information Networking Institute, the United States' first
research and education center devoted to information networking. The academic disciplines of computer security,
information security and information assurance emerged along with numerous professional organizations during the
later years of the 20th century and early years of the 21st century.
Entry into the field can be accomplished through self-study, college or university schooling in the field, or through week-long focused training camps. Many colleges, universities and training companies offer many of their programs online. The GIAC-GSEC and Security+ certifications are both entry-level security certifications. Membership of
the Institute of Information Security Professionals (IISP) is gaining traction in the U.K. as the professional standard
for Information Security Professionals.
The Certified Information Systems Security Professional (CISSP) is a mid- to senior-level information security
certification. The Information Systems Security Architecture Professional (ISSAP), Information Systems Security
Engineering Professional (ISSEP), Information Systems Security Management Professional (ISSMP), and Certified
Information Security Manager (CISM) certifications are well-respected advanced certifications in
information-security architecture, engineering, and management respectively.
Within the UK a recognised senior level information security certification is provided by CESG.
CLAS is the CESG Listed Adviser Scheme - a partnership linking the unique Information Assurance knowledge of
CESG with the expertise and resources of the private sector.
CESG recognises that there is an increasing demand for authoritative Information Assurance advice and guidance.
This demand has come as a result of an increasing awareness of the threats and vulnerabilities that information
systems are likely to face in an ever-changing world.
The Scheme aims to satisfy this demand by creating a pool of high quality consultants approved by CESG to provide
Information Assurance advice to government departments and other organisations who provide vital services for the
United Kingdom.
CLAS consultants are approved to provide Information Assurance advice on systems processing protectively marked
information up to, and including, SECRET. Potential customers of the CLAS Scheme should also note that if the
information is not protectively marked then they do not need to specify membership of CLAS in their invitations to
tender, and may be challenged if equally competent non-scheme members are prevented from bidding.
The profession of information security has seen an increased demand for security professionals who are experienced
in network security auditing, penetration testing, and digital forensics investigation. In addition, many smaller
companies have cropped up as the result of this increased demand in information security training and consulting.
Conclusion
Information security is the ongoing process of exercising due care and due diligence to protect information, and information systems, from unauthorized access, use, disclosure, destruction, modification, disruption, or distribution. The never-ending process of information security involves ongoing training, assessment, protection, monitoring and detection, incident response and repair, documentation, and review. This makes information security consultants an indispensable part of business operations across different domains.
See also
• Computer insecurity
• Enterprise information security architecture
• Data erasure
• Data loss prevention products
• Disk encryption
• Information assurance
• Information security audit
• Information Security Forum
• Information security governance
• Information security management
• Information security management system
• Information security policies
• Information security standards
• Information technology security audit
• ISO/IEC 27001
• ITIL security management
• Network Security Services
• Physical information security
• Privacy enhancing technologies
• Parkerian Hexad
• Security breach notification laws
• Security information management
• Security of Information Act
• Security level management
• Security bug
• Single sign-on
• Standard of Good Practice
• Verification and validation
Scholars working in the field
• Stefan Brands
• Adam Back
• Lance Cottrell
• Ian Goldberg
• Peter Gutmann
• Bruce Schneier
• Gene Spafford
Further reading
• Anderson, K., "IT Security Professionals Must Evolve for Changing Market [9]", SC Magazine, October 12, 2006.
• Aceituno, V., "On Information Security Paradigms [10]", ISSA Journal, September, 2005.
• Dhillon, G., "Principles of Information Systems Security: text and cases", John Wiley & Sons, 2007.
• Lambo, T., "ISO/IEC 27001: The future of infosec certification [11]", ISSA Journal, November, 2006.
Notes and references
[1] 44 U.S.C. § 3542 (http://www.law.cornell.edu/uscode/44/3542.html)(b)(1)
[2] ISACA (2006). CISA Review Manual 2006. Information Systems Audit and Control Association (http://www.isaca.org/). pp. 85. ISBN 1-933284-15-3.
[3] "Segregation of Duties Control matrix" (http://www.isaca.org/AMTemplate.cfm?Section=CISA1&Template=/ContentManagement/ContentDisplay.cfm&ContentID=40835). ISACA. 2008. Retrieved 2008-09-30.
[4] Harris, Shon (2003). All-in-one CISSP Certification Exam Guide (2nd ed.). Emeryville, CA: McGraw-Hill/Osborne. ISBN 0-07-222966-7.
[5] http://www.itpi.org/home/visibleops2.php
[6] http://wikisummaries.org/Visible_Ops
[7] Harris, Shon (2008). All-in-one CISSP Certification Exam Guide (4th ed.). New York, NY: McGraw-Hill. ISBN 978-0-07-149786-2.
[8] http://www.law.cornell.edu/uscode/20/1232.html
[9] http://www.scmagazineus.com/IT-security-professionals-must-evolve-for-changing-market/article/33990/
[10] http://www.issa.org/Library/Journals/2005/September/Aceituno%20Canal%20-%20On%20Information%20Security%20Paradigms.pdf
[11] https://www.issa.org/Library/Journals/2006/November/Lambo-ISO-IEC%2027001-The%20future%20of%20infosec%20certification.pdf
External links
• InfoSecNews.us (http://www.infosecnews.us/) Information Security News
• DoD IA Policy Chart (http://iac.dtic.mil/iatac/ia_policychart.html) on the DoD Information Assurance
Technology Analysis Center web site.
• patterns & practices Security Engineering Explained (http://msdn2.microsoft.com/en-us/library/ms998382.
aspx)
• Open Security Architecture- Controls and patterns to secure IT systems (http://www.opensecurityarchitecture.
org)
• Introduction to Security Governance (http://www.logicalsecurity.com/resources/resources_articles.html)
• COE Security - Information Security Articles (http://www.coesecurity.com/services/resources.asp)
• An Introduction to Information Security (http://security.practitioner.com/introduction/)
• Example Security Policy (http://www.davidstclair.co.uk/example-security-templates/example-internet-e-mail-usage-policy-2.html) (link broken)
• IWS - Information Security Chapter (http://www.iwar.org.uk/comsec/)
Information security
Bibliography
• Allen, Julia H. (2001). The CERT Guide to System and Network Security Practices. Boston, MA:
Addison-Wesley. ISBN 0-201-73723-X.
• Krutz, Ronald L.; Russell Dean Vines (2003). The CISSP Prep Guide (Gold Edition ed.). Indianapolis, IN: Wiley.
ISBN 0-471-26802-X.
• Layton, Timothy P. (2007). Information Security: Design, Implementation, Measurement, and Compliance. Boca
Raton, FL: Auerbach publications. ISBN 978-0-8493-7087-8.
• McNab, Chris (2004). Network Security Assessment. Sebastopol, CA: O'Reilly. ISBN 0-596-00611-X.
• Peltier, Thomas R. (2001). Information Security Risk Analysis. Boca Raton, FL: Auerbach publications.
ISBN 0-8493-0880-1.
• Peltier, Thomas R. (2002). Information Security Policies, Procedures, and Standards: guidelines for effective
information security management. Boca Raton, FL: Auerbach publications. ISBN 0-8493-1137-3.
• White, Gregory (2003). All-in-one Security+ Certification Exam Guide. Emeryville, CA: McGraw-Hill/Osborne.
ISBN 0-07-222633-1.
• Dhillon, Gurpreet (2007). Principles of Information Systems Security: text and cases. NY: John Wiley & Sons.
ISBN 978-0471450566.
Encryption
In cryptography, encryption is the process of transforming information (referred to as plaintext) using an algorithm (called a cipher) to make it unreadable to anyone except those possessing special knowledge, usually referred to as a
key. The result of the process is encrypted information (in cryptography, referred to as ciphertext). In many
contexts, the word encryption also implicitly refers to the reverse process, decryption (e.g. “software for
encryption” can typically also perform decryption), to make the encrypted information readable again (i.e. to make it
unencrypted).
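To make the terms concrete, here is a deliberately insecure toy cipher in Python. XOR with a repeating key only illustrates how an algorithm plus a key turns plaintext into ciphertext and back; it must never be used for real protection:

```python
# Toy illustration of plaintext, cipher, key, and ciphertext.
# XOR with a repeating key is trivially breakable; illustration only.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key.

    Because XOR is its own inverse, the same function both
    encrypts and decrypts.
    """
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"attack at dawn"
key = b"secret"

ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext                      # unreadable without the key
assert xor_cipher(ciphertext, key) == plaintext     # decryption reverses encryption
```

Real systems use vetted algorithms such as AES rather than anything homemade, but the shape is the same: the cipher is public, and secrecy rests entirely in the key.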
Encryption has long been used by militaries and governments to facilitate secret communication. Encryption is now
commonly used in protecting information within many kinds of civilian systems. For example, the Computer
Security Institute reported that in 2007, 71% of companies surveyed utilized encryption for some of their data in
transit, and 53% utilized encryption for some of their data in storage.[1] Encryption can be used to protect data "at
rest", such as files on computers and storage devices (e.g. USB flash drives). In recent years there have been
numerous reports of confidential data such as customers' personal records being exposed through loss or theft of
laptops or backup drives. Encrypting such files at rest helps protect them should physical security measures fail.
Digital rights management systems which prevent unauthorized use or reproduction of copyrighted material and
protect software against reverse engineering (see also copy protection) are another somewhat different example of
using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet,
e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank
automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years.[2]
Encrypting data in transit also helps to secure it as it is often difficult to physically secure all access to networks.
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the
integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a
digital signature. Standards and cryptographic software and hardware to perform encryption are widely available, but
successfully using encryption to ensure security may be a challenging problem. A single slip-up in system design or
execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly
undoing the encryption. See, e.g., traffic analysis, TEMPEST, or Trojan horse.
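The point about message authentication codes can be illustrated with Python's standard-library hmac module; the key and message below are invented for the example:

```python
# Encryption alone protects confidentiality, not integrity/authenticity.
# A MAC (here, HMAC-SHA256 from the standard library) detects tampering.
import hashlib
import hmac

key = b"shared-secret-key"                 # invented example key
message = b"transfer $100 to Alice"        # invented example message

# Sender computes the tag and transmits it alongside the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)                            # genuine message accepted
assert not verify(key, b"transfer $999 to Mallory", tag)    # tampering detected
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` can leak information about the expected tag through timing differences.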
One of the earliest public key encryption applications was called Pretty Good Privacy (PGP). It was written in 1991
by Phil Zimmermann and was purchased by Network Associates (now PGP Corporation) in 1997.
There are a number of reasons why an encryption product may not be suitable in all cases. First, e-mail must be
digitally signed at the point it was created to provide non-repudiation for some legal purposes, otherwise the sender
could argue that it was tampered with after it left their computer but before it was encrypted at a gateway. An
encryption product may also not be practical when mobile users need to send e-mail from outside the corporate
network.[3]
See also
• Cryptography
• Cold boot attack
• Cyberspace Electronic Security Act (in the US)
• Encryption software
• Cipher
• Key
• Famous ciphertexts
• Rip van Winkle cipher
• Strong secrecy
• Disk encryption
• Secure USB drive
• Secure Network Communications
• Security and Freedom Through Encryption Act
References
• Helen Fouché Gaines, “Cryptanalysis”, 1939, Dover. ISBN 0-486-20097-3
• David Kahn, The Codebreakers - The Story of Secret Writing (ISBN 0-684-83130-9) (1967)
• Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America,
1966. ISBN 0-88385-622-0
External links
• http://www.enterprisenetworkingplanet.com/_featured/article.php/3792771/PGPs-Universal-Server-Provides-Unobtrusive-Encryption.htm
• SecurityDocs Resource for encryption whitepapers [4]
• A guide to encryption for beginners [5]
• Accumulative archive of various cryptography mailing lists. [6] Includes Cryptography list at metzdowd and
SecurityFocus Crypto list.
References
[1] Robert Richardson, 2008 CSI Computer Crime and Security Survey at 19. Online at http://i.cmpnet.com/v2.gocsi.com/pdf/CSIsurvey2008.pdf.
[2] Fiber Optic Networks Vulnerable to Attack, Information Security Magazine, November 15, 2006, Sandra Kay Miller
[3] http://www.enterprisenetworkingplanet.com/_featured/article.php/3792771/PGPs-Universal-Server-Provides-Unobtrusive-Encryption.htm
[4] http://www.securitydocs.com/Encryption
[5] http://rexor.codeplex.com/documentation
[6] http://www.xml-dev.com/lurker/list/crypto.en.html
Cryptography
Cryptography (or cryptology; from Greek κρυπτός, kryptos, "hidden, secret"; and γράφω, gráphō, "I write", or -λογία, -logia, respectively)[1] is the practice and
study of hiding information. Modern cryptography
intersects the disciplines of mathematics, computer
science, and engineering. Applications of cryptography
include ATM cards, computer passwords, and
electronic commerce.
Cryptology prior to the modern age was almost synonymous with encryption, the conversion of information from a readable state to nonsense. The sender retained the ability to decrypt the information and therefore avoid unwanted persons being able to read it. Since WWI and the advent of the computer, the methods used to carry out cryptology have become increasingly complex and its application more widespread.
[Figure: German Lorenz cipher machine, used in World War II to encrypt very-high-level general staff messages]
Alongside the advancement in cryptology-related technology, the practice has raised a number of legal issues, some
of which remain unresolved.
Terminology
Until modern times cryptography referred almost exclusively to encryption, which is the process of converting
ordinary information (plaintext) into unintelligible gibberish (i.e., ciphertext).[2] Decryption is the reverse, in other
words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that
create the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the
algorithm and in each instance by a key. This is a secret parameter (ideally known only to the communicants) for a
specific message exchange context. Keys are important, as ciphers without variable keys can be trivially broken with
only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes.
Historically, ciphers were often used directly for encryption or decryption without additional procedures such as
authentication or integrity checks.
In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning.
However, in cryptography, code has a more specific meaning. It means the replacement of a unit of plaintext (i.e., a
meaningful word or phrase) with a code word (for example, wallaby replaces attack at dawn). Codes are no
longer used in serious cryptography—except incidentally for such things as unit designations (e.g., Bronco Flight or
Operation Overlord)—since properly chosen ciphers are both more practical and more secure than even the best
codes and also are better adapted to computers.
Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without
access to the key normally required to do so; i.e., it is the study of how to crack encryption algorithms or their
implementations.
Some use the terms cryptography and cryptology interchangeably in English, while others (including US military
practice generally) use cryptography to refer specifically to the use and practice of cryptographic techniques and
cryptology to refer to the combined study of cryptography and cryptanalysis.[3] [4] English is more flexible than
several other languages in which cryptology (done by cryptologists) is always used in the second sense above. In the
English Wikipedia the general term used for the entire field is cryptography (done by cryptographers).
The study of characteristics of languages which have some application in cryptography (or cryptology), i.e.
frequency data, letter combinations, universal patterns, etc., is called cryptolinguistics.
History of cryptography and cryptanalysis
Before the modern era, cryptography was concerned solely with message confidentiality (i.e.,
encryption)—conversion of messages from a comprehensible form into an incomprehensible one and back again at
the other end, rendering it unreadable by interceptors or eavesdroppers without secret knowledge (namely the key
needed for decryption of that message). Encryption was used to (attempt to) ensure secrecy in communications, such
as those of spies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality
concerns to include techniques for message integrity checking, sender/receiver identity authentication, digital
signatures, interactive proofs and secure computation, among others.
Classic cryptography
Reconstructed ancient Greek scytale (rhymes with "Italy"), an early cipher device
The earliest forms of secret writing required little more than local pen and paper analogs, as most people could not read. More literacy, or literate opponents, required actual cryptography. The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet). Simple versions of either offered little confidentiality from enterprising opponents, and still do. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter some fixed number of positions further down the alphabet. It was named after Julius Caesar, who is reported to have used it, with a shift of 3, to communicate with his generals during his military campaigns. There is record of several early Hebrew ciphers as well. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (ca 1900 BC), but this may have been done for the amusement of literate observers. The next oldest is an enciphered recipe on a clay tablet from Mesopotamia.
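The Caesar and shift-substitution ciphers just described can be sketched in a few lines of Python (a toy illustration, not a secure cipher; the function name is ours):

```python
def caesar(text, shift):
    """Replace each letter by the one `shift` positions further down
    the alphabet, wrapping around; non-letters pass through unchanged."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)
```

With a shift of 1 this reproduces the article's example ('fly at once' → 'gmz bu podf'), and a negative shift undoes the encryption.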
Cryptography is recommended in the Kama Sutra as a way for lovers to communicate without inconvenient
discovery.[5] Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first
developed in ancient times. An early example, from Herodotus, concealed a message—a tattoo on a slave's shaved
head—under the regrown hair.[2] More modern examples of steganography include the use of invisible ink,
microdots, and digital watermarks to conceal information.
Ciphertexts produced by a classical cipher (and some modern ciphers) always reveal statistical information about the
plaintext, which can often be used to break them. After the discovery of frequency analysis perhaps by the Arab
mathematician and polymath, Al-Kindi (also known as Alkindus), in the 9th century, nearly all such ciphers became
more or less readily breakable by any informed attacker. Such classical ciphers still enjoy popularity today, though
mostly as puzzles (see cryptogram). Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages), in which he described the first cryptanalysis techniques, including some for polyalphabetic ciphers.[6] [7]
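The counting step behind frequency analysis is simple to sketch with Python's standard library (the function name is ours): in a simple substitution cipher, the most common ciphertext letter likely stands for a common plaintext letter such as 'e' or 't' in English.

```python
from collections import Counter

def letter_frequencies(ciphertext):
    """Return each letter's relative frequency, most common first."""
    letters = [ch.lower() for ch in ciphertext if ch.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {ch: n / total for ch, n in counts.most_common()}
```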
Essentially all ciphers remained vulnerable to cryptanalysis using the
frequency analysis technique until the development of the polyalphabetic
cipher, most clearly by Leon Battista Alberti around the year 1467, though
there is some indication that it was already known to Al-Kindi.[7] Alberti's
innovation was to use different ciphers (i.e., substitution alphabets) for
various parts of a message (perhaps for each successive plaintext letter at the
limit). He also invented what was probably the first automatic cipher device, a
wheel which implemented a partial realization of his invention. In the
polyalphabetic Vigenère cipher, encryption uses a key word, which controls
letter substitution depending on which letter of the key word is used. In the
mid 1800s Charles Babbage showed that polyalphabetic ciphers of this type
remained partially vulnerable to extended frequency analysis techniques.[2]
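The Vigenère scheme lends itself to a short sketch (a toy Python illustration; the function name is ours): each letter of the key word selects a different Caesar shift for each successive plaintext letter.

```python
def vigenere(text, key, decrypt=False):
    """Polyalphabetic substitution: the key word cycles, letter by
    letter, choosing the shift applied to each plaintext letter."""
    result, i = [], 0
    for ch in text:
        if ch.isalpha():
            shift = ord(key[i % len(key)].lower()) - ord('a')
            if decrypt:
                shift = -shift
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
            i += 1
        else:
            result.append(ch)
    return ''.join(result)
```

Because the shift changes with every letter, single-letter frequency counts no longer map directly onto plaintext letters, which is why Babbage's attack needed extended frequency analysis across key positions.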
16th-century book-shaped French cipher machine, with arms of Henri II of France
Enciphered letter from Gabriel de Luetz d'Aramon, French Ambassador to the Ottoman Empire, after 1546, with partial decipherment
Although frequency analysis is a powerful and general technique against many ciphers, encryption has still often been effective in practice; many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is not a sensible nor a practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs' principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim: 'the enemy knows the system'.
Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the
scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher (see image
above). In medieval times, other aids were invented such as the cipher grille, which was also used for a kind of
steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own
cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's multi-cylinder (not publicly known,
and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were
invented early in the 20th century, and several patented, among them rotor machines—famously including the
Enigma machine used by the German government and military from the late '20s and during World War II.[8] The
ciphers implemented by better quality examples of these machine designs brought about a substantial increase in
cryptanalytic difficulty after WWI.[9]
The computer era
The development of digital computers and electronics after WWII made possible much more complex ciphers.
Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike
classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus
supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be
characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and
mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However,
computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity.
Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality
cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it
requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher,
making cryptanalysis so inefficient and impractical as to be effectively impossible. Alternate methods of attack
(bribery, burglary, threat, torture, ...) have become more attractive in consequence.
Credit card with smart-card capabilities. The 3-by-5-mm chip embedded in the card is shown, enlarged. Smart cards combine low cost and portability with the power to compute cryptographic algorithms.
Extensive open academic research into cryptography is relatively recent; it began only in the mid-1970s. In recent times, IBM personnel designed the algorithm that became the Federal (i.e., US) Data Encryption Standard; Whitfield Diffie and Martin Hellman published their key agreement algorithm;[10] and the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally. Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are no absolute proofs that a cryptographic technique is secure (but see one-time pad); at best, there are proofs that some techniques are secure if some computational problem is difficult to solve, or this or that assumption about implementation or practical use is met.
As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly
consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so required key lengths must advance in step. The potential effects of quantum computing are already being
considered by some cryptographic system designers; the announced imminence of small implementations of these
machines may be making the need for this preemptive caution rather more than merely speculative.[11]
Essentially, prior to the early 20th century, cryptography was chiefly concerned with linguistic and lexicographic
patterns. Since then the emphasis has shifted, and cryptography now makes extensive use of mathematics, including
aspects of information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory,
and finite mathematics generally. Cryptography is, also, a branch of engineering, but an unusual one as it deals with
active, intelligent, and malevolent opposition (see cryptographic engineering and security engineering); other kinds
of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active
research examining the relationship between cryptographic problems and quantum physics (see quantum
cryptography and quantum computing).
Modern cryptography
The modern field of cryptography can be divided into several areas of study. The chief ones are discussed here; see
Topics in Cryptography for more.
Symmetric-key cryptography
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key
(or, less commonly, in which their keys are different, but related in an easily computable way). This was the only
kind of encryption publicly known until June 1976.[10]
The modern study of symmetric-key ciphers relates mainly to the study
of block ciphers and stream ciphers and to their applications. A block
cipher is, in a sense, a modern embodiment of Alberti's polyalphabetic
cipher: block ciphers take as input a block of plaintext and a key, and
output a block of ciphertext of the same size. Since messages are
almost always longer than a single block, some method of knitting
together successive blocks is required. Several have been developed,
some with better security in one aspect or another than others. They are
the modes of operation and must be carefully considered when using a
block cipher in a cryptosystem.
One round (out of 8.5) of the patented IDEA cipher, used in some versions of PGP for high-speed encryption of, for instance, e-mail
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs which have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted).[12] Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption[13] to e-mail privacy[14] and secure remote access.[15] Many other block ciphers have been designed and released, with considerable variation in quality. Many have been thoroughly broken; see Category:Block ciphers.[11] [16]
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined
with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output
stream is created based on a hidden internal state which changes as the cipher operates. That internal state is initially
set up using the secret key material. RC4 is a widely used stream cipher; see Category:Stream ciphers.[11] Block
ciphers can be used as stream ciphers; see Block cipher modes of operation.
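The stream-cipher idea—a keystream combined with the plaintext byte by byte—can be sketched with a deliberately toy keystream generator (a linear congruential generator, which is NOT cryptographically secure; real designs such as RC4 derive the keystream from a secret internal state in a much stronger way; all names here are ours):

```python
def toy_keystream(seed, length):
    """Toy keystream from a linear congruential generator.
    INSECURE: real stream ciphers use a cryptographic state update."""
    state = seed
    out = []
    for _ in range(length):
        state = (1103515245 * state + 12345) % 2**31
        out.append(state & 0xFF)
    return bytes(out)

def stream_xor(data, seed):
    """XOR data with the keystream; applying it twice decrypts."""
    ks = toy_keystream(seed, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

The symmetry of XOR is what makes the same operation serve for both encryption and decryption, exactly as with the one-time pad.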
Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as
input, and output a short, fixed length hash which can be used in (for example) a digital signature. For good hash
functions, an attacker cannot find two messages that produce the same hash. MD4 is a long-used hash function
which is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The U.S.
National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a
flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts
have identified attacks against it; the SHA-2 family improves on SHA-1, but it isn't yet widely deployed, and the
U.S. standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly
improve the robustness of NIST's overall hash algorithm toolkit."[17] Thus, a hash function design competition is
underway and meant to select a new U.S. national standard, to be called SHA-3, by 2012.
Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be
used to authenticate the hash value[11] upon receipt.
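Both primitives are available directly in Python's standard library; a short sketch (the message and key values are arbitrary examples):

```python
import hashlib
import hmac

message = b"attack at dawn"

# A cryptographic hash: a short, fixed-length digest of input of any length.
digest = hashlib.sha256(message).hexdigest()

# A MAC binds the digest to a secret key, so only holders of the key can
# produce or check the tag (HMAC construction, per the standard library).
key = b"shared-secret"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
```

A receiver who shares `key` recomputes the HMAC over the received message and compares tags; a mismatch reveals tampering or a wrong key.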
Public-key cryptography
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, though a message or
group of messages may have a different key than others. A significant disadvantage of symmetric ciphers is the key
management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a
different key, and perhaps each ciphertext exchanged as well. The number of keys required increases as the square of
the number of network members, which very quickly requires complex key management schemes to keep them all
straight and secret. The difficulty of securely establishing a secret key between two communicating parties, when a
secure channel does not already exist between them, also presents a chicken-and-egg problem which is a
considerable practical obstacle for cryptography users in the real world.
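The key-management burden can be made concrete: with n parties, the number of distinct pairwise keys is n(n-1)/2, which grows quadratically (the function name is ours):

```python
def pairwise_keys(n):
    """Number of distinct keys when each pair of the n parties
    shares its own secret key: n choose 2 = n*(n-1)/2."""
    return n * (n - 1) // 2
```

Ten parties already need 45 keys; a thousand parties need 499,500, which is why symmetric-only systems quickly demand elaborate key-distribution schemes.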
Whitfield Diffie and Martin Hellman, authors of the first published paper on public-key cryptography
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman
proposed the notion of public-key (also, more generally, called
asymmetric key) cryptography in which two different but
mathematically related keys are used—a public key and a private
key.[18] A public key system is so constructed that calculation of one
key (the 'private key') is computationally infeasible from the other (the
'public key'), even though they are necessarily related. Instead, both
keys are generated secretly, as an interrelated pair.[19] The historian
David Kahn described public-key cryptography as "the most
revolutionary new concept in the field since polyalphabetic substitution
emerged in the Renaissance".[20]
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain
secret. The public key is typically used for encryption, while the private or secret key is used for decryption. Diffie
and Hellman showed that public-key cryptography was possible by presenting the Diffie–Hellman key exchange
protocol.[10]
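The key-exchange idea can be illustrated with deliberately tiny numbers (all parameter values here are toy choices; real deployments use moduli of thousands of bits, or elliptic-curve groups):

```python
# Toy Diffie–Hellman key exchange (insecure parameter sizes).
p, g = 23, 5                       # public: prime modulus and generator

a = 6                              # Alice's secret exponent
b = 15                             # Bob's secret exponent

A = pow(g, a, p)                   # Alice publishes g^a mod p
B = pow(g, b, p)                   # Bob publishes g^b mod p

shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
assert shared_alice == shared_bob  # both sides derive the same secret
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is believed intractable at real parameter sizes.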
In 1978, Ronald Rivest, Adi Shamir, and Len Adleman invented RSA, another public-key system.[21]
In 1997, it finally became publicly known that asymmetric key cryptography had been invented by James H. Ellis at
GCHQ, a British intelligence organization, and that, in the early 1970s, both the Diffie–Hellman and RSA
algorithms had been previously developed (by Malcolm J. Williamson and Clifford Cocks, respectively).[22]
The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high quality
public-key algorithms, have been among the most widely used. Others include the Cramer–Shoup cryptosystem,
ElGamal encryption, and various elliptic curve techniques. See Category:Asymmetric-key cryptosystems.
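The RSA construction can be walked through with the traditional textbook tiny example (insecure toy parameters, shown only to make the arithmetic visible; `pow(e, -1, phi)` requires Python 3.8+):

```python
# Textbook RSA with tiny primes (real keys use primes hundreds of digits long).
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

m = 65                     # message encoded as a number < n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)
```

Recovering d from (e, n) alone would require factoring n into p and q, which is the hard problem RSA's security rests on.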
Padlock icon from the Firefox Web browser, meant to indicate a page has been sent in SSL- or TLS-encrypted protected form. However, such an icon is not a guarantee of security; any subverted browser might mislead a user by displaying such an icon when a transmission is not actually being protected by SSL or TLS.
In addition to encryption, public-key cryptography can be used to implement digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic that they are easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.).[16]
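The sign/verify split can be sketched by combining a hash with a textbook toy RSA keypair (tiny, insecure parameters chosen for illustration; real schemes such as RSA-PSS add randomized padding; function names are ours):

```python
import hashlib

# Textbook-RSA toy keypair: modulus n, public exponent e, private exponent d.
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    """Signing uses the secret key d on a hash of the message."""
    h = int.from_bytes(hashlib.sha256(message).digest(), 'big') % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Verification uses only the public key (e, n) and the message."""
    h = int.from_bytes(hashlib.sha256(message).digest(), 'big') % n
    return pow(signature, e, n) == h
```

Anyone holding the public key can check the signature, but producing one for a new message requires the private exponent d.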
Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number
theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and
DSA are related to the discrete logarithm problem. More recently, elliptic curve cryptography has developed in
which security is based on number theoretic problems involving elliptic curves. Because of the difficulty of the
underlying problems, most public-key algorithms involve operations such as modular multiplication and
exponentiation, which are much more computationally expensive than the techniques used in most block ciphers,
especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in
which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant
symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature
schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally
signed.[11]
Cryptanalysis
The goal of cryptanalysis is to find some weakness or
insecurity in a cryptographic scheme, thus permitting its
subversion or evasion.
Variants of the Enigma machine, used by Germany's military and civil authorities from the late 1920s through World War II, implemented a complex electro-mechanical polyalphabetic cipher. Breaking and reading of the Enigma cipher at Poland's Cipher Bureau, for 7 years before the war, and subsequent decryption at Bletchley Park, was important to Allied victory.[2]
It is a common misconception that every encryption
method can be broken. In connection with his WWII
work at Bell Labs, Claude Shannon proved that the
one-time pad cipher is unbreakable, provided the key
material is truly random, never reused, kept secret from
all possible attackers, and of equal or greater length than
the message.[23] Most ciphers, apart from the one-time
pad, can be broken with enough computational effort by
brute force attack, but the amount of effort needed may be
exponentially dependent on the key size, as compared to
the effort needed to use the cipher. In such cases,
effective security could be achieved if it is proven that the
effort required (i.e., "work factor", in Shannon's terms) is
beyond the ability of any adversary. This means it must
be shown that no efficient method (as opposed to the
time-consuming brute force method) can be found to
break the cipher. Since no such showing can currently be made, the one-time pad remains the only theoretically unbreakable cipher.
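Shannon's result can be illustrated directly: with a truly random key as long as the message, encryption is a byte-wise XOR, and every same-length plaintext is an equally plausible decryption of a given ciphertext (a minimal sketch using Python's `secrets` module for randomness; function names are ours):

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """One-time pad: XOR with a truly random key of equal length.
    The key must be kept secret and never reused."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR with the same key recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Reusing a key breaks the scheme immediately: XORing two ciphertexts made with the same key cancels the key and leaves the XOR of the two plaintexts.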
There are a wide variety of cryptanalytic attacks, and they
can be classified in any of several ways. A common
distinction turns on what an attacker knows and what
capabilities are available. In a ciphertext-only attack, the cryptanalyst has access only to the ciphertext (good modern
cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, the
cryptanalyst has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext
attack, the cryptanalyst may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an
example is gardening, used by the British during WWII. Finally, in a chosen-ciphertext attack, the cryptanalyst may
be able to choose ciphertexts and learn their corresponding plaintexts.[11] Also important, often overwhelmingly so,
are mistakes (generally in the design or use of one of the protocols involved; see Cryptanalysis of the Enigma for
some historical examples of this).
Cryptography
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts and approximately 2^43 DES operations.[24] This is a considerable improvement on brute force attacks.
Poznań monument (center) to Polish cryptologists whose breaking of Germany's Enigma machine ciphers, beginning in 1932, altered the course of World War II
Public-key algorithms are based on the computational
difficulty of various problems. The most famous of these is integer factorization (e.g., the RSA algorithm is based on
a problem related to integer factoring), but the discrete logarithm problem is also important. Much public-key
cryptanalysis concerns numerical algorithms for solving these computational problems, or some of them, efficiently
(i.e., in a practical time). For instance, the best known algorithms for solving the elliptic curve-based version of
discrete logarithm are much more time-consuming than the best known algorithms for factoring, at least for
problems of more or less equivalent size. Thus, other things being equal, to achieve an equivalent strength of attack
resistance, factoring-based encryption techniques must use larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves, first proposed in the mid-1980s, have become popular since the mid-1990s.
While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on
actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for
example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or
PIN character, he may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An
attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic
analysis,[25] and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting
too short keys, will make any system vulnerable, regardless of other virtues. And, of course, social engineering, and
other attacks against the personnel who work with cryptosystems or the messages they handle (e.g., bribery,
extortion, blackmail, espionage, torture, ...) may be the most productive attacks of all.
Cryptographic primitives
Much of the theoretical work in cryptography concerns cryptographic primitives—algorithms with basic
cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic
tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to
develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more
high-level security properties. Note however, that the distinction between cryptographic primitives and
cryptosystems, is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and
sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way
functions, etc.
Cryptosystems
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic
system, or cryptosystem. Cryptosystems (e.g. El-Gamal encryption) are designed to provide particular functionality
(e.g. public key encryption) while guaranteeing certain security properties (e.g. CPA security in the random oracle
model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security
properties. Of course, as the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated
cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the
cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between
the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such
cryptosystems are sometimes called cryptographic protocols.
Some widely known cryptosystems include RSA encryption, Schnorr signature, El-Gamal encryption, PGP, etc.
More complex cryptosystems include electronic cash[26] systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems[27] (like zero-knowledge proofs[28]), systems for secret sharing,[29] [30] etc.
Until recently, most security properties of most cryptosystems were demonstrated using empirical techniques, or
using ad hoc reasoning. Recently, there has been considerable effort to develop formal techniques for establishing
the security of cryptosystems; this has been generally called provable security. The general idea of provable security
is to give arguments about the computational difficulty needed to compromise some security aspect of the
cryptosystem (i.e., to any adversary).
The study of how best to implement and integrate cryptography in software applications is itself a distinct field; see:
Cryptographic engineering and Security engineering.
Legal issues
Prohibitions
Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Actually secret
communications may be criminal or even treasonous; those whose communications are open to inspection may be
less likely to be either. Because of its facilitation of privacy, and the diminution of privacy attendant on its
prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a
history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers
has made widespread access to high quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more
restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.[31]
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues
related to cryptography. One particularly important issue has been the export of cryptography and cryptographic
software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that
cryptography would continue to be important for national security, many Western governments have, at some point,
strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption
technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States
Munitions List.[32] Until the development of the personal computer, asymmetric key algorithms (i.e., public key
techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers
became more widely available, high quality encryption techniques became well-known around the globe. As a result,
export controls came to be seen to be an impediment to commerce and to research.
Export controls
In the 1990s, there were several challenges to US export regulations of cryptography. One involved Philip
Zimmermann's Pretty Good Privacy (PGP) encryption program; it was released in the US, together with its source
code, and found its way onto the Internet in June 1991. After a complaint by RSA Security (then called RSA Data
Security, Inc., or RSADSI), Zimmermann was criminally investigated by the Customs Service and the FBI for
several years. No charges were ever filed, however.[33] [34] Also, Daniel Bernstein, then a graduate student at UC
Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free
speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source
code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.[35]
In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of
arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short
key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled.[36]
Cryptography exports from the US are now much less strictly regulated than in the past as a consequence of a major
relaxation in 2000;[31] there are no longer very many restrictions on key sizes in US-exported mass-market software.
In practice today, since the relaxation of US export restrictions, almost every personal computer connected to the
Internet, anywhere in the world, includes a US-sourced web browser such as Mozilla Firefox or Microsoft Internet
Explorer. As a result, almost every Internet user worldwide has access to quality cryptography (provided they use
sufficiently long keys with properly operating, unsubverted software) through the browser's Transport Layer Security
(TLS/SSL) stack. The Mozilla Thunderbird and Microsoft Outlook e-mail clients can similarly connect to IMAP or
POP servers via TLS, and can send and receive email encrypted with S/MIME.
Many Internet users don't realize that their basic application software contains such extensive cryptosystems. These
browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of
cryptography generally don't find it practical to do much to control distribution or use of cryptography of this quality,
so even when such laws are in force, actual enforcement is often effectively impossible.
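The TLS stack these everyday applications rely on is also exposed directly in most languages' standard libraries. A minimal sketch using only Python's standard ssl module shows the browser-grade defaults a client security context starts from:

```python
import ssl

# Build a client context the way a browser or mail client would.
# create_default_context() loads the system trust store and enables
# certificate and hostname verification out of the box.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: a valid peer certificate is required
print(context.check_hostname)                    # True: the server hostname is verified

# Such a context can then wrap an ordinary TCP socket, e.g.:
#   with socket.create_connection(("example.org", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.org") as tls:
#           ...  # traffic here is TLS-protected
```

The connection part is left as a comment only because it needs network access; the point is that strong verification is the default, not something the user has to enable.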
NSA involvement
Another contentious issue connected to cryptography in the United States is the influence of the National Security
Agency on cipher development and policy. NSA was involved with the design of DES during its development at
IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography.[37]
DES was designed to be resistant to differential cryptanalysis,[38] a powerful and general cryptanalytic technique
known to NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s.[39] According
to Steven Levy, IBM rediscovered differential cryptanalysis,[40] but kept the technique secret at NSA's request. The
technique became publicly known only when Biham and Shamir re-rediscovered and announced it some years later.
The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually
have.
Another instance of NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be
part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two
reasons. First, the cipher algorithm, called Skipjack, was classified; it was declassified only in 1998, long after the
Clipper initiative had lapsed. The secret cipher raised concerns that NSA had deliberately made it weak in order to
assist its intelligence efforts. Second, the initiative was criticized for violating Kerckhoffs' principle, as the scheme
included a special escrow key held by the government for use by law enforcement, for example in wiretaps.[34]
Digital rights management
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling
use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In
1998, American President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized
all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later
discovered); specifically, those that could be used to circumvent DRM technological schemes.[41] This had a
noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic
research violated, or might violate, the DMCA. Similar statutes have since been enacted in several countries and
regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties
signed by World Intellectual Property Organization member-states.
The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by
some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography
researcher, has publicly stated[42] that he will not release some of his research into an Intel security design for fear of
prosecution under the DMCA. Both Alan Cox (longtime number 2 in Linux kernel development) and Professor
Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry
Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged
violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic
keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In
both cases, the MPAA sent out numerous DMCA takedown notices, and there was a massive internet backlash
triggered by the perceived impact of such notices on fair use and free speech.
See also
• Books on cryptography
• Watermarking
• Watermark detection
• Category:Cryptographers
• Encyclopedia of Cryptography and Security
• List of cryptographers
• List of important publications in computer science#Cryptography
• Topics in cryptography
• Cipher System Identification
• Unsolved problems in computer science
• CrypTool, the most widespread e-learning program about cryptography and cryptanalysis (open source)
• List of multiple discoveries (see "RSA")
• Flexiprovider, an open source Java cryptographic provider
• Strong secrecy, a term used in cryptography
Further reading
• Richard J. Aldrich, GCHQ: The Uncensored Story of Britain's Most Secret Intelligence Agency, HarperCollins,
July 2010.
• Becket, B (1988). Introduction to Cryptology. Blackwell Scientific Publications. ISBN 0-632-01836-4.
OCLC 16832704. Excellent coverage of many classical ciphers and cryptography concepts and of the "modern"
DES and RSA systems.
• Cryptography and Mathematics by Bernhard Esslinger, 200 pages, part of the free open-source package
CrypTool, PDF download [43].
• In Code: A Mathematical Journey by Sarah Flannery (with David Flannery). Popular account of Sarah's
award-winning project on public-key cryptography, co-written with her father.
• James Gannon, Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century,
Washington, D.C., Brassey's, 2001, ISBN 1-57488-367-4.
• Oded Goldreich, Foundations of Cryptography [44], in two volumes, Cambridge University Press, 2001 and 2004.
• Introduction to Modern Cryptography [45] by Jonathan Katz and Yehuda Lindell.
• Alvin's Secret Code by Clifford B. Hicks (children's novel that introduces some basic cryptography and
cryptanalysis).
• Ibrahim A. Al-Kadi, "The Origins of Cryptology: the Arab Contributions," Cryptologia, vol. 16, no. 2 (April
1992), pp. 97–126.
• Handbook of Applied Cryptography [46] by A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone CRC Press,
(PDF download available), somewhat more mathematical than Schneier's Applied Cryptography.
• Christof Paar [47], Jan Pelzl, Understanding Cryptography, A Textbook for Students and Practitioners. [48]
Springer, 2009. (Slides and other information available on the web site.) Very accessible introduction to practical
cryptography for non-mathematicians.
• Introduction to Modern Cryptography by Phillip Rogaway and Mihir Bellare, a mathematical introduction to
theoretical cryptography including reduction-based security proofs. PDF download [49].
• Cryptonomicon by Neal Stephenson (novel, WW2 Enigma cryptanalysis figures into the story, though not always
realistically).
• Johann-Christoph Woltag, 'Coded Communications (Encryption)' in Rüdiger Wolfrum (ed.), Max Planck
Encyclopedia of Public International Law (Oxford University Press 2009) [50], giving an overview of international
law issues regarding cryptography.
External links
• "DNA computing and cryptology: the future for Basel in Switzerland?" [51]
• Crypto Glossary and Dictionary of Technical Cryptography [52]
• Attack/Prevention [53], a resource for cryptography whitepapers, tools, videos, and podcasts
• Cryptography: The Ancient Art of Secret Messages [54] by Monica Pawlan, February 1998
• Handbook of Applied Cryptography [46] by A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone (PDF
download available), somewhat more mathematical than Schneier's book
• NSA's CryptoKids [55]
• Overview and Applications of Cryptology [56] by the CrypTool Team; PDF; 3.8 MB; July 2008
• RSA Laboratories' frequently asked questions about today's cryptography [57]
• sci.crypt mini-FAQ [58]
• Slides of a two-semester course Introduction to Cryptography [59] by Prof. Christof Paar, University of Bochum
(slides are in English; the site also contains videos in German)
• Early Cryptology, Cryptographic Shakespeare [60]
• GCHQ: Britain's Most Secret Intelligence Agency [61]
• Cryptix, a complete cryptography solution for Mac OS X [62]
References
[1] Liddell and Scott's Greek-English Lexicon. Oxford University Press. (1984)
[2] David Kahn, The Codebreakers, 1967, ISBN 0-684-83130-9.
[3] Oded Goldreich, Foundations of Cryptography, Volume 1: Basic Tools, Cambridge University Press, 2001, ISBN 0-521-79172-3
[4] "Cryptology (definition)" (http://www.merriam-webster.com/dictionary/cryptology). Merriam-Webster's Collegiate Dictionary (11th ed.).
Merriam-Webster. Retrieved 2008-02-01.
[5] Kama Sutra, Sir Richard F. Burton, translator, Part I, Chapter III, 44th and 45th arts.
[6] Simon Singh, The Code Book, pp. 14-20
[7] Ibrahim A. Al-Kadi (April 1992), "The origins of cryptology: The Arab contributions", Cryptologia 16 (2): 97–126
[8] Hakim, Joy (1995). A History of Us: War, Peace and all that Jazz. New York: Oxford University Press. ISBN 0-19-509514-6.
[9] James Gannon, Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century, Washington, D.C.,
Brassey's, 2001, ISBN 1-57488-367-4.
[10] Whitfield Diffie and Martin Hellman, "New Directions in Cryptography", IEEE Transactions on Information Theory, vol. IT-22, Nov. 1976,
pp: 644–654. ( pdf (http:/ / citeseer. ist. psu. edu/ rd/ 86197922,340126,1,0. 25,Download/ http:/ / citeseer. ist. psu. edu/ cache/ papers/ cs/
16749/ http:zSzzSzwww. cs. rutgers. eduzSz~tdnguyenzSzclasseszSzcs671zSzpresentationszSzArvind-NEWDIRS. pdf/ diffie76new. pdf))
[11] AJ Menezes, PC van Oorschot, and SA Vanstone, Handbook of Applied Cryptography (http:/ / web. archive. org/ web/ 20050307081354/
www. cacr. math. uwaterloo. ca/ hac/ ) ISBN 0-8493-8523-7.
[12] FIPS PUB 197: The official Advanced Encryption Standard (http:/ / www. csrc. nist. gov/ publications/ fips/ fips197/ fips-197. pdf).
[13] NCUA letter to credit unions (http:/ / www. ncua. gov/ letters/ 2004/ 04-CU-09. pdf), July 2004
[14] RFC 2440 - Open PGP Message Format
[15] SSH at windowsecurity.com (http:/ / www. windowsecurity. com/ articles/ SSH. html) by Pawel Golen, July 2004
[16] Bruce Schneier, Applied Cryptography, 2nd edition, Wiley, 1996, ISBN 0-471-11709-9.
[17] National Institute of Standards and Technology (http:/ / csrc. nist. gov/ groups/ ST/ hash/ documents/ FR_Notice_Nov07. pdf)
[18] Whitfield Diffie and Martin Hellman, "Multi-user cryptographic techniques" [Diffie and Hellman, AFIPS Proceedings 45, pp109–112, June
8, 1976].
[19] Ralph Merkle was working on similar ideas at the time and encountered publication delays, and Hellman has suggested that the term used
should be Diffie–Hellman–Merkle asymmetric key cryptography.
[20] David Kahn, "Cryptology Goes Public", 58 Foreign Affairs 141, 151 (fall 1979), p. 153.
[21] R. Rivest, A. Shamir, L. Adleman. A Method for Obtaining Digital Signatures and Public-Key Cryptosystems (http:/ / theory. lcs. mit. edu/
~rivest/ rsapaper. pdf). Communications of the ACM, Vol. 21 (2), pp.120–126. 1978. Previously released as an MIT "Technical Memo" in
April 1977, and published in Martin Gardner's Scientific American Mathematical recreations column
[22] Clifford Cocks. A Note on 'Non-Secret Encryption', CESG Research Report, 20 November 1973 (http:/ / www. fi. muni. cz/ usr/ matyas/
lecture/ paper2. pdf).
[23] "Shannon": Claude Shannon and Warren Weaver, The Mathematical Theory of Communication, University of Illinois Press, 1963, ISBN
0-252-72548-4
[24] Pascal Junod, "On the Complexity of Matsui's Attack" (http:/ / citeseer. ist. psu. edu/ cache/ papers/ cs/ 22094/ http:zSzzSzeprint. iacr.
orgzSz2001zSz056. pdf/ junod01complexity. pdf), SAC 2001.
[25] Dawn Song, David Wagner, and Xuqing Tian, "Timing Analysis of Keystrokes and Timing Attacks on SSH" (http:/ / citeseer. ist. psu. edu/
cache/ papers/ cs/ 22094/ http:zSzzSzeprint. iacr. orgzSz2001zSz056. pdf/ junod01complexity. pdf), In Tenth USENIX Security Symposium,
2001.
[26] S. Brands, "Untraceable Off-line Cash in Wallets with Observers" (http:/ / scholar. google. com/ url?sa=U& q=http:/ / ftp. se. kde. org/ pub/
security/ docs/ ecash/ crypto93. ps. gz), In Advances in Cryptology—Proceedings of CRYPTO, Springer-Verlag, 1994.
[27] László Babai. "Trading group theory for randomness" (http:/ / portal. acm. org/ citation. cfm?id=22192). Proceedings of the Seventeenth
Annual Symposium on the Theory of Computing, ACM, 1985.
[28] S. Goldwasser, S. Micali, and C. Rackoff, "The Knowledge Complexity of Interactive Proof Systems", SIAM J. Computing, vol. 18, num. 1,
pp. 186–208, 1989.
[29] G. Blakley. "Safeguarding cryptographic keys." In Proceedings of AFIPS 1979, volume 48, pp. 313–317, June 1979.
[30] A. Shamir. "How to share a secret." In Communications of the ACM, volume 22, pp. 612–613, ACM, 1979.
[31] RSA Laboratories' Frequently Asked Questions About Today's Cryptography (http:/ / www. rsasecurity. com/ rsalabs/ node. asp?id=2152)
[32] Cryptography & Speech (http:/ / web. archive. org/ web/ 20051201184530/ http:/ / www. cyberlaw. com/ cylw1095. html) from Cyberlaw
[33] "Case Closed on Zimmermann PGP Investigation" (http:/ / www. ieee-security. org/ Cipher/ Newsbriefs/ 1996/ 960214. zimmerman. html),
press note from the IEEE.
[34] Levy, Steven (2001). Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age. Penguin Books. p. 56.
ISBN 0-14-024432-8. OCLC 244148644 48066852 48846639.
[35] Bernstein v USDOJ (http:/ / www. epic. org/ crypto/ export_controls/ bernstein_decision_9_cir. html), 9th Circuit court of appeals decision.
[36] The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (http:/ / www. wassenaar.
org/ guidelines/ index. html)
[37] "The Data Encryption Standard (DES)" (http:/ / www. schneier. com/ crypto-gram-0006. html#DES) from Bruce Schneier's CryptoGram
newsletter, June 15, 2000
[38] Coppersmith, D. (May 1994). "The Data Encryption Standard (DES) and its strength against attacks" (http:/ / www. research. ibm. com/
journal/ rd/ 383/ coppersmith. pdf) (PDF). IBM Journal of Research and Development 38 (3): 243. doi:10.1147/rd.383.0243. .
[39] E. Biham and A. Shamir, "Differential cryptanalysis of DES-like cryptosystems" (http:/ / scholar. google. com/ url?sa=U& q=http:/ / www.
springerlink. com/ index/ K54H077NP8714058. pdf), Journal of Cryptology, vol. 4 num. 1, pp. 3–72, Springer-Verlag, 1991.
[40] Levy, pg. 56
[41] Digital Millennium Copyright Act (http:/ / www. copyright. gov/ legislation/ dmca. pdf)
[42] http:/ / www. macfergus. com/ niels/ dmca/ cia. html
[43] https:/ / www. cryptool. org/ download/ CrypToolScript-en. pdf
[44] http:/ / www. wisdom. weizmann. ac. il/ ~oded/ foc-book. html
[45] http:/ / www. cs. umd. edu/ ~jkatz/ imc. html
[46] http:/ / www. cacr. math. uwaterloo. ca/ hac/
[47] http:/ / www. crypto. rub. de/ en_paar. html
[48] http:/ / www. cryptography-textbook. com
[49] http:/ / www. cs. ucdavis. edu/ ~rogaway/ classes/ 227/ spring05/ book/ main. pdf
[50] http:/ / www. mpepil. com
[51] http:/ / www. basel-research. eu. com
[52] http:/ / ciphersbyritter. com/ GLOSSARY. HTM
[53] http:/ / www. attackprevention. com/ Cryptology/
[54] http:/ / www. pawlan. com/ Monica/ crypto/
[55] http:/ / www. nsa. gov/ kids/
[56] http://www.cryptool.org/download/CrypToolPresentation-en.pdf
[57] http://www.rsasecurity.com/rsalabs/node.asp?id=2152
[58] http://www.spinstop.com/schlafly/crypto/faq.htm
[59] http://wiki.crypto.rub.de/Buch/slides_movies.php
[60] http://www.baconscipher.com/EarlyCryptology.html
[61] http://www2.warwick.ac.uk/fac/soc/pais/staff/aldrich/vigilant/lectures/gchq
[62] http://www.rbcafe.com/Cryptix
Bruce Schneier
Born: January 15, 1963[1]
Residence: United States
Citizenship: American
Fields: Computer science
Institutions: Counterpane Internet Security; Bell Labs; United States Department of Defense; BT Group
Alma mater: American University; University of Rochester
Known for: Cryptography, security
Bruce Schneier (born January 15, 1963,[1] pronounced /ˈʃnaɪər/) is an American cryptographer, computer security
specialist, and writer. He is the author of several books on computer security and cryptography, and is the founder
and chief technology officer of BT Counterpane, formerly Counterpane Internet Security, Inc. He received his
master's degree in computer science from American University in Washington, D.C., in 1988.[2]
Writings on computer security and general security
In 2000, Schneier published Secrets and Lies: Digital Security in a Networked World. In 2003, Schneier published
Beyond Fear: Thinking Sensibly About Security in an Uncertain World.
Schneier writes a freely available monthly Internet newsletter on computer and other security issues, Crypto-Gram,
as well as a security weblog, Schneier on Security. The weblog started out as a way to publish essays before they
appeared in Crypto-Gram, making it possible for others to comment on them while the stories were still current, but
over time the newsletter became a monthly email version of the blog, re-edited and re-organized.[3] Schneier is
frequently quoted in the press on computer and other security issues, pointing out flaws in security and cryptographic
implementations ranging from biometrics to airline security after the September 11, 2001 attacks. He also writes
"Security Matters", a regular column for Wired Magazine.[4]
He has also criticized security approaches that try to prevent any malicious incursion, instead arguing that designing
systems to fail well is more important.[5]
Schneier revealed on his blog that in the December 2004 issue of the SIGCSE Bulletin, three Pakistani academics,
Khawaja Amer Hayat, Umar Waqar Anis, and S. Tauseef-ur-Rehman, from the International Islamic University in
Islamabad, Pakistan, plagiarized an article written by Schneier and got it published.[6] The same academics
subsequently plagiarized another article by Ville Hallivuori on "Real-time Transport Protocol (RTP) security" as
well.[6] Schneier complained to the editors of the periodical, which generated a minor controversy.[7] The editor of
the SIGCSE Bulletin removed the paper from their website and demanded official letters of admission and apology.
Schneier noted on his blog that International Islamic University personnel had requested him "to close comments in
this blog entry"; Schneier refused to close comments on the blog, but he did delete posts which he deemed
"incoherent or hostile".[6]
Other writing
Schneier and Karen Cooper were nominated in 2000 for the Hugo Award, in the category of Best Related Book, for
their Minicon 34 Restaurant Guide, a work originally published for the Minneapolis science fiction convention
Minicon which gained a readership internationally in science fiction fandom for its wit and good humor.[8]
Cryptographic algorithms
Schneier has been involved in the creation of many cryptographic algorithms.
Hash functions:
• Skein
Stream ciphers:
• Solitaire
• Phelix
• Helix
Pseudo-random number generators:
• Fortuna
• Yarrow algorithm
Block ciphers:
• Twofish
• Blowfish
• Threefish
• MacGuffin
Publications
• Schneier, Bruce. Applied Cryptography, John Wiley & Sons, 1994. ISBN 0-471-59756-2
• Schneier, Bruce. Protect Your Macintosh, Peachpit Press, 1994. ISBN 1-56609-101-2
• Schneier, Bruce. E-Mail Security, John Wiley & Sons, 1995. ISBN 0-471-05318-X
• Schneier, Bruce. Applied Cryptography, Second Edition, John Wiley & Sons, 1996. ISBN 0-471-11709-9
• Schneier, Bruce; Kelsey, John; Whiting, Doug; Wagner, David; Hall, Chris; Ferguson, Niels. The Twofish
Encryption Algorithm, John Wiley & Sons, 1996. ISBN 0-471-35381-7
• Schneier, Bruce; Banisar, David. The Electronic Privacy Papers, John Wiley & Sons, 1997. ISBN 0-471-12297-1
• Schneier, Bruce. Secrets and Lies: Digital Security in a Networked World, John Wiley & Sons, 2000. ISBN
0-471-25311-1
• Schneier, Bruce. Beyond Fear: Thinking Sensibly about Security in an Uncertain World, Copernicus Books,
2003. ISBN 0-387-02620-7
• Ferguson, Niels; Schneier, Bruce. Practical Cryptography, John Wiley & Sons, 2003. ISBN 0-471-22357-3
• Schneier, Bruce. Schneier on Security, John Wiley & Sons, 2008. ISBN 978-0-470-39535-6
• Ferguson, Niels; Schneier, Bruce; Kohno, Tadayoshi. Cryptography Engineering, John Wiley & Sons, 2010.
ISBN 978-0-470-47424-2
See also
• Attack tree
• Failing badly
• Security theater
• Snake oil (cryptography)
• Schneier's Law
External links
• Personal website, Schneier.com [9]
• Talking security with Bruce Almighty [10]
• Schneier at the 2009 RSA conference [11], video with Schneier participating on the Cryptographer's Panel, April
21, 2009, Moscone Center, San Francisco
• Bruce Schneier Facts [12] (Parody)
References
[1] http:/ / www. facebook. com/ bruce. schneier
[2] Charles C. Mann Homeland Insecurity (http:/ / www. theatlantic. com/ doc/ 200209/ mann) www.theatlantic.com
[3] Blood, Rebecca (January 2007). "Bruce Schneier" (http:/ / www. rebeccablood. net/ bloggerson/ bruceschneier. html). Bloggers on Blogging.
. Retrieved 2007-04-19.
[4] Schneier, Bruce. "Security Matters" (http:/ / www. wired. com/ commentary/ securitymatters). Wired Magazine. . Retrieved 2008-03-10.
[5] Homeland Insecurity (http:/ / charlesmann. org/ articles/ Homeland-Insecurity-Atlantic. pdf), Atlantic Monthly, September 2002
[6] "Schneier on Security: Plagiarism and Academia: Personal Experience" (http:/ / www. schneier. com/ blog/ archives/ 2005/ 08/
plagiarism_and. html). Schneier.com. . Retrieved 2009-06-09.
[7] "ONLINE - International News Network" (http:/ / www. onlinenews. com. pk/ details. php?id=85519). Onlinenews.com.pk. 2007-06-09. .
Retrieved 2009-06-09.
[8] "Hugo Awards Nominations" (http:/ / www. locusmag. com/ 2000/ News/ News04d. html). Locus Magazine. 2000-04-21. .
[9] http:/ / www. schneier. com/
[10] http:/ / www. itwire. com/ content/ view/ 16422/ 1090/ 1/ 0/
[11] http:/ / media. omediaweb. com/ rsa2009/ preview/ webcast. htm?id=1_5
[12] http:/ / geekz. co. uk/ schneierfacts/
7.0 Application
Application security
Application security encompasses measures taken throughout the application's life-cycle to prevent exceptions in
the security policy of an application or the underlying system (vulnerabilities) caused by flaws in the design,
development, deployment, upgrade, or maintenance of the application.
Applications only control the use of resources granted to them, and not which resources are granted to them. They, in
turn, determine the use of these resources by users of the application through application security.
The Open Web Application Security Project (OWASP) and the Web Application Security Consortium (WASC)
publish updates on the latest threats that impair web-based applications. This helps developers, security testers, and
architects focus on better design and mitigation strategies. The OWASP Top 10 has become an industry standard for
assessing web applications.
Methodology
According to the patterns & practices Improving Web Application Security book, a principle-based approach for
application security includes: [1]
• Know your threats.
• Secure the network, host and application.
• Incorporate security into your application life cycle.
Note that this approach is technology- and platform-independent. It is focused on principles, patterns, and practices.
For more information on a principle-based approach to application security, see patterns & practices Application
Security Methodology [2]
Threats, Attacks, Vulnerabilities, and Countermeasures
According to the patterns & practices Improving Web Application Security book, the following terms are relevant to
application security: [1]
• Asset. A resource of value, such as the data in a database or on the file system, or a system resource.
• Threat. A potential negative effect on an asset.
• Vulnerability. A weakness that makes a threat possible.
• Attack (or exploit). An action taken to harm an asset.
• Countermeasure. A safeguard that addresses a threat and mitigates risk.
Application Threats / Attacks
According to the patterns & practices Improving Web Application Security book, the following are classes of
common application security threats / attacks: [1]
Category: Threats / Attacks
• Input validation: buffer overflow; cross-site scripting; SQL injection; canonicalization
• Authentication: network eavesdropping; brute force attacks; dictionary attacks; cookie replay; credential theft
• Authorization: elevation of privilege; disclosure of confidential data; data tampering; luring attacks
• Configuration management: unauthorized access to administration interfaces; unauthorized access to
configuration stores; retrieval of clear-text configuration data; lack of individual accountability; over-privileged
process and service accounts
• Sensitive information: access to sensitive data in storage; network eavesdropping; data tampering
• Session management: session hijacking; session replay; man-in-the-middle attacks
• Cryptography: poor key generation or key management; weak or custom encryption
• Parameter manipulation: query string manipulation; form field manipulation; cookie manipulation; HTTP header
manipulation
• Exception management: information disclosure; denial of service
• Auditing and logging: user denies performing an operation; attacker exploits an application without trace;
attacker covers his or her tracks
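The input-validation class includes SQL injection, which is easiest to see side by side with its standard countermeasure. A minimal sketch using Python's standard sqlite3 module (the table, data, and attacker string are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# UNSAFE: building the query by string formatting lets attacker-controlled
# input rewrite the query itself.
attacker_input = "x' OR '1'='1"
unsafe = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())   # [('alice', 'admin')] - every row leaks

# SAFE: a parameterized query treats the input strictly as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (attacker_input,))
print(rows.fetchall())                   # [] - no user is literally named that
```

The same placeholder mechanism exists in essentially every database driver, which is why parameterized queries (or an ORM that uses them) are the canonical countermeasure here.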
Mobile Application Security
The proportion of mobile devices providing open platform functionality is expected to continue to increase as time
moves on. The openness of these platforms offers significant opportunities to all parts of the mobile eco-system by
delivering the ability for flexible program and service delivery options that may be installed, removed or refreshed
multiple times in line with the user's needs and requirements. However, with openness comes responsibility:
unrestricted access to mobile resources and APIs by applications of unknown or untrusted origin could result in
damage to the user, the device, the network, or all of these, if not managed by suitable security architectures and
network precautions. Mobile application security is provided in some form on most open OS mobile devices
(Symbian OS,[3] Microsoft, BREW, etc.). Industry groups have also created recommendations, including the GSM
Association and the Open Mobile Terminal Platform (OMTP).[4]
Security testing for applications
Security testing techniques scour for vulnerabilities or security holes in applications. These vulnerabilities leave
applications open to exploitation. Ideally, security testing is implemented throughout the entire software
development life cycle (SDLC) so that vulnerabilities may be addressed in a timely and thorough manner.
Unfortunately, testing is often conducted as an afterthought at the end of the development cycle.
Vulnerability scanners, and more specifically web application scanners, otherwise known as penetration testing
tools (i.e., ethical hacking tools), have historically been used by security organizations within corporations and by
security consultants to automate the security testing of HTTP requests and responses; however, this is not a
substitute for actual source code review. Reviews of an application's source code can be accomplished manually or
in an automated fashion. Given the common size of individual programs (often 500K lines of code or more), the
human brain cannot execute the comprehensive data flow analysis needed to completely check all circuitous paths of
an application program for vulnerability points. The human brain is better suited to filtering, interpreting, and
reporting the outputs of commercially available automated source code analysis tools than to tracing every possible
path through a compiled code base to find root-cause vulnerabilities.
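To illustrate what such automated source analysis does at its very simplest, the sketch below uses Python's standard ast module to flag calls commonly associated with injection risk. The rule set is a toy assumption for illustration; real static analysis tools add data flow analysis and far richer rules on top of this kind of syntactic pass:

```python
import ast

# Hypothetical rule set: function calls that often indicate injection risk.
DANGEROUS = {"eval", "exec", "os.system"}

def flag_dangerous_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, call name) for each flagged call in `source`."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):                 # e.g. eval(...)
                name = func.id
            elif isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
                name = f"{func.value.id}.{func.attr}"      # e.g. os.system(...)
            else:
                continue
            if name in DANGEROUS:
                findings.append((node.lineno, name))
    return findings

sample = "import os\nos.system(user_cmd)\nresult = eval(expr)\n"
print(flag_dangerous_calls(sample))   # [(2, 'os.system'), (3, 'eval')]
```

A scan like this operates purely on source, before deployment, which is exactly the White Box position in the testing taxonomy discussed below; it says nothing about how the deployed application behaves under attack, which is the Black Box tools' job.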
The two types of automated tools associated with application vulnerability detection (application vulnerability
scanners) are Penetration Testing Tools (often categorized as Black Box Testing Tools) and static code analysis tools
(often categorized as White Box Testing Tools). Tools in the Black Box Testing arena include HP Quality Center [5]
(through the acquisition of SPI Dynamics [6] ), Nikto (open source). Tools in the static code analysis arena include
Veracode [7] , Pre-Emptive Solutions[8] , and Parasoft [9] .
Banking and large e-commerce corporations have been the early-adopter customer profile for these types of
tools. It is commonly held within these firms that both Black Box and White Box testing tools are needed in
the pursuit of application security. Typically cited, Black Box testing tools (meaning penetration testing tools) are
ethical hacking tools used to attack the application surface to expose vulnerabilities hidden within the source code
hierarchy. Penetration testing tools are executed against the already deployed application. White Box testing tools
(meaning source code analysis tools) are used by either application security groups or application development
groups. Typically introduced into a company through the application security organization, the White Box tools
complement the Black Box testing tools in that they give specific visibility into the root vulnerabilities within the
source code before it is deployed. Vulnerabilities identified with White Box testing and Black Box testing are
typically classified in accordance with the OWASP taxonomy of software coding errors. White Box testing
vendors have recently introduced dynamic versions of their source code analysis methods, which operate on
deployed applications. Given that the White Box testing tools have dynamic versions similar to the Black Box
testing tools, both tools can be correlated in the same software error detection paradigm, ensuring fuller application
protection for the client company.
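The white-box idea can be illustrated with a toy static check. This is a minimal sketch, not any commercial analyzer: the rule list, messages, and function names below are invented for illustration, and real tools perform far deeper data-flow analysis than line-by-line pattern matching.

```python
import re

# Toy white-box (static analysis) check: flag known-risky calls in C source.
# The pattern list is illustrative only, not a real product's rule set.
RISKY_CALLS = {
    r"\bgets\s*\(": "gets() has no bounds check; use fgets()",
    r"\bstrcpy\s*\(": "strcpy() may overflow; use strncpy()/strlcpy()",
    r"\bsprintf\s*\(": "sprintf() may overflow; use snprintf()",
}

def scan_source(source: str) -> list:
    """Return (line_number, warning) pairs for risky calls found in source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_CALLS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

if __name__ == "__main__":
    sample = 'int main(void) {\n    char buf[8];\n    gets(buf);\n    return 0;\n}'
    for lineno, warning in scan_source(sample):
        print(f"line {lineno}: {warning}")
```

A black-box penetration test, by contrast, would never see these lines at all; it would probe the running application from the outside, which is why the two approaches complement each other.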
Advances in professional malware targeted at the Internet customers of online organizations have driven a change in
Web application design requirements since 2007. It is generally assumed that a sizable percentage of Internet users
will be compromised through malware and that any data coming from their infected hosts may be tainted. Therefore,
application security has begun to manifest more advanced anti-fraud and heuristic detection systems in the
back office, rather than within the client-side or Web server code.[10]
Security standards and regulations
• Sarbanes-Oxley Act (SOX)
• Health Insurance Portability and Accountability Act (HIPAA)
• IEEE P1074
• ISO/IEC 7064:2003 Information technology -- Security techniques -- Check character systems
• ISO/IEC 9796-2:2002 Information technology -- Security techniques -- Digital signature schemes giving message
recovery -- Part 2: Integer factorization based mechanisms
• ISO/IEC 9796-3:2006 Information technology -- Security techniques -- Digital signature schemes giving message
recovery -- Part 3: Discrete logarithm based mechanisms
• ISO/IEC 9797-1:1999 Information technology -- Security techniques -- Message Authentication Codes (MACs) --
Part 1: Mechanisms using a block cipher
• ISO/IEC 9797-2:2002 Information technology -- Security techniques -- Message Authentication Codes (MACs) --
Part 2: Mechanisms using a dedicated hash-function
• ISO/IEC 9798-1:1997 Information technology -- Security techniques -- Entity authentication -- Part 1: General
• ISO/IEC 9798-2:1999 Information technology -- Security techniques -- Entity authentication -- Part 2:
Mechanisms using symmetric encipherment algorithms
• ISO/IEC 9798-3:1998 Information technology -- Security techniques -- Entity authentication -- Part 3:
Mechanisms using digital signature techniques
• ISO/IEC 9798-4:1999 Information technology -- Security techniques -- Entity authentication -- Part 4:
Mechanisms using a cryptographic check function
• ISO/IEC 9798-5:2004 Information technology -- Security techniques -- Entity authentication -- Part 5:
Mechanisms using zero-knowledge techniques
• ISO/IEC 9798-6:2005 Information technology -- Security techniques -- Entity authentication -- Part 6:
Mechanisms using manual data transfer
• ISO/IEC 14888-1:1998 Information technology -- Security techniques -- Digital signatures with appendix -- Part
1: General
• ISO/IEC 14888-2:1999 Information technology -- Security techniques -- Digital signatures with appendix -- Part
2: Identity-based mechanisms
• ISO/IEC 14888-3:2006 Information technology -- Security techniques -- Digital signatures with appendix -- Part
3: Discrete logarithm based mechanisms
• ISO/IEC 17799:2005 Information technology -- Security techniques -- Code of practice for information security
management
• ISO/IEC 24762:2008 Information technology -- Security techniques -- Guidelines for information and
communications technology disaster recovery services
• ISO/IEC 27006:2007 Information technology -- Security techniques -- Requirements for bodies providing audit
and certification of information security management systems
• Gramm-Leach-Bliley Act
• PCI Data Security Standard (PCI DSS)
See also
• Countermeasure
• Data security
• Database security
• Information security
• Trustworthy Computing Security Development Lifecycle
• Web application
• Web application framework
• XACML
• HERAS-AF
External links
• Open Web Application Security Project [11]
• The Web Application Security Consortium [12]
• The Microsoft Security Development Lifecycle (SDL) [13]
• patterns & practices Security Guidance for Applications [14]
• QuietMove Web Application Security Testing Plug-in Collection for FireFox [15]
• Advantages of an integrated security solution for HTML and XML [16]
References
[1] Improving Web Application Security: Threats and Countermeasures (http://msdn2.microsoft.com/en-us/library/ms994920.aspx), published by Microsoft Corporation.
[2] http://channel9.msdn.com/wiki/default.aspx/SecurityWiki.ApplicationSecurityMethodology
[3] "Platform Security Concepts" (http://developer.symbian.com/main/documentation/books/books_files/sops/plat_sec_chap.pdf), Simon Higginson.
[4] Application Security Framework (https://www.omtp.org/Publications/Display.aspx?Id=c4ee46b6-36ae-46ae-95e2-cfb164b758b5), Open Mobile Terminal Platform.
[5] Application security: Find web application security vulnerabilities during every phase of the software development lifecycle (https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-201_4000_100__), HP center.
[6] HP acquires SPI Dynamics (http://news.cnet.com/8301-10784_3-9731312-7.html), CNET news.com.
[7] http://www.veracode.com/solutions Veracode Security Static Analysis Solutions
[8] http://www.preemptive.com/application-protection.html Application Protection
[9] http://www.parasoft.com/parasoft_security Parasoft Application Security Solution
[10] "Continuing Business with Malware Infected Customers" (http://www.technicalinfo.net/papers/MalwareInfectedCustomers.html), Gunter Ollmann, October 2008.
[11] http://www.owasp.org
[12] http://www.webappsec.org
[13] http://msdn.microsoft.com/en-us/security/cc420639.aspx
[14] http://msdn.microsoft.com/en-gb/library/ms998408.aspx
[15] https://addons.mozilla.org/en-US/firefox/collection/webappsec
[16] http://community.citrix.com/blogs/citrite/sridharg/2008/11/17/Advantages+of+an+integrated+security+solution+for+HTML+and+XML
Application software
Application software, also known as an application, is computer software designed to
help the user to perform singular or multiple related specific tasks. Examples include enterprise
software, accounting software, office suites, graphics software and media players.

[Image: OpenOffice.org Writer word processor. OpenOffice.org is a popular example of open source application software.]

Application software is contrasted with system software and middleware, which manage and
integrate a computer's capabilities, but typically do not directly apply them in the performance of
tasks that benefit the user. A simple, if imperfect, analogy in the world of hardware would be the
relationship of an electric light bulb (an application) to an electric power generation plant
(a system). The power plant merely generates electricity, not itself of any real use until harnessed to an application
like the electric light that performs a service that benefits the user.
Terminology
In computer science, an application is a computer program designed to help people perform an activity. An
application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or
general-purpose chores), and a programming language (with which computer programs are created). Depending on
the activity for which it was designed, an application can manipulate text, numbers, graphics, or a combination of
these elements. Some application packages offer considerable computing power by focusing on a single task, such as
word processing; others, called integrated software, offer somewhat less power but include several applications.[1]
User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet
templates, word processor macros, scientific simulations, and graphics and animation scripts. Even email filters are a
kind of user software. Users create this software themselves and often overlook how important it is. The delineation
between system software such as operating systems and application software is not exact, however, and is
occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft
antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a
separable piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to
disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In
some types of embedded systems, the application software and the operating system software may be
indistinguishable to the user, as in the case of software used to control a VCR, DVD player or microwave oven. The
above definitions may exclude some applications that may exist on some computers in large organizations. For an
alternative definition of an application: see Application Portfolio Management.
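The user-written software mentioned above can be as small as a few lines. A sketch of the email-filter example, with hypothetical message fields and an invented keyword list:

```python
# A minimal user-written email filter, of the kind mentioned above.
# The message fields ("subject", "body", "from") and the keyword list
# are illustrative assumptions, not any particular mail client's API.

SPAM_KEYWORDS = ("free money", "act now", "winner")

def classify(message: dict) -> str:
    """Return the folder a message should be filed in."""
    text = (message.get("subject", "") + " " + message.get("body", "")).lower()
    if any(keyword in text for keyword in SPAM_KEYWORDS):
        return "spam"
    if message.get("from", "").endswith("@example-employer.com"):
        return "work"
    return "inbox"
```

Such a filter applies the computer's capabilities directly to a task that benefits the user, which is what places it on the application side of the application/system divide.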
Application software classification
There are many types of application software:
• An application suite consists of multiple applications bundled together. They usually have related functions,
features and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business
applications often come in suites, e.g. Microsoft Office, OpenOffice.org, and iWork, which bundle together a
word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.
• Enterprise software addresses the needs of organization processes and data flow, often in a large distributed
environment. (Examples include Financial, Customer Relationship Management, and Supply Chain
Management). Note that Departmental Software is a sub-type of Enterprise Software with a focus on smaller
organizations or groups within a large organization. (Examples include Travel Expense Management, and IT
Helpdesk)
• Enterprise infrastructure software provides common capabilities needed to support enterprise software systems.
(Examples include Databases, Email servers, and Network and Security Management)
• Information worker software addresses the needs of individuals to create and manage information, often for
individual projects within a department, in contrast to enterprise management. Examples include time
management, resource management, documentation tools, analytical, and collaborative. Word processors,
spreadsheets, email and blog clients, personal information system, and individual media editors may aid in
multiple information worker tasks.
• Content access software is software used primarily to access content without editing, but may include software
that allows for content editing. Such software addresses the needs of individuals and groups to consume digital
entertainment and published digital content. (Examples include Media Players, Web Browsers, Help browsers,
and Games)
• Educational software is related to content access software, but has the content and/or features adapted for use
by educators or students. For example, it may deliver evaluations (tests), track progress through material, or
include collaborative capabilities.
• Simulation software is computer software for simulating physical or abstract systems for research,
training or entertainment purposes.
• Media development software addresses the needs of individuals who generate print and electronic media for
others to consume, most often in a commercial or educational setting. This includes Graphic Art software,
Desktop Publishing software, Multimedia Development software, HTML editors, Digital Animation editors,
Digital Audio and Video composition, and many others.[2]
• Product engineering software is used in developing hardware and software products. This includes computer
aided design (CAD), computer aided engineering (CAE), computer language editing and compiling tools,
Integrated Development Environments, and Application Programmer Interfaces.
Examples of application software

Information worker software
• Time and Resource Management
  • Enterprise Resource Planning (ERP) systems
  • Accounting software
  • Task and Scheduling
  • Field service management software
• Data Management
  • Contact Management
  • Spreadsheet
  • Personal Database
• Documentation
  • Document Automation/Assembly
  • Word Processing
  • Desktop publishing software
  • Diagramming Software
  • Presentation software
• Analytical software
  • Computer algebra systems
  • Numerical computing
    • List of numerical software
  • Physics software
  • Science software
  • List of statistical software
  • Neural network software
• Collaborative software
  • E-mail
  • Blog
  • Wiki
• Reservation systems
• Financial Software
  • Day trading software
  • Banking systems
  • Clearing systems

Content access software
• Electronic media software
  • Web browser
  • Media Players
  • Hybrid editor players
• Entertainment software
  • Digital pets
  • Screen savers
  • Video Games
    • Arcade games
    • Emulators for console games
    • Personal computer games
    • Console games
    • Mobile games
• Educational software
  • Classroom Management
  • Entertainment Software
  • Learning/Training Management Software
  • Reference software
  • Sales Readiness Software
  • Survey Management

Media development software
• Image organizer
• Media content creating/editing
  • 3D computer graphics software
  • Animation software
  • Graphic art software
  • Image editing software
    • Raster graphics editor
    • Vector graphics editor
  • Video editing software
  • Sound editing software
    • Digital audio editor
  • Music sequencer
    • Scorewriter
  • Hypermedia editing software
    • Web Development Software

Enterprise infrastructure software
• Business workflow software
• Database management system (DBMS) software
• Digital asset management (DAM) software
• Document Management software
• Geographic Information System (GIS) software

Product engineering software
• Hardware Engineering
  • Computer-aided engineering
  • Computer-aided design (CAD)
  • Finite Element Analysis
• Software Engineering
  • Computer Language Editor
  • Compiler Software
  • Integrated Development Environments
  • Game creation software
  • Debuggers
  • Program testing tools
  • License manager

Simulation software
• Computer simulators
  • Scientific simulators
  • Social simulators
  • Battlefield simulators
  • Emergency simulators
  • Vehicle simulators
    • Flight simulators
    • Driving simulators
  • Simulation games
    • Vehicle simulation games
References
[1] Ceruzzi, Paul E. (1998). A History of Modern Computing. Cambridge, Mass.: MIT Press. ISBN 0262032554.
[2] Campbell-Kelly, Martin; Aspray, William (1996). Computer: A History of the Information Machine. New York: Basic Books. ISBN
0465029906.
Software cracking
Software cracking is the modification of software to remove or disable features which are considered undesirable
by the person cracking the software, usually related to protection methods: copy protection, trial/demo version, serial
number, hardware key, date checks, CD check or software annoyances like nag screens and adware. The distribution
and use of cracked copies is illegal in almost every developed country. There have been many lawsuits over
cracking software.
History
The first software copy protection was on early Apple II, Atari 800 and Commodore 64 software. Software
publishers, particularly of gaming software, have over time resorted to increasingly complex measures to try to stop
unauthorized copying of their software.
On the Apple II, unlike modern computers that use standardized device drivers to manage device communications,
the operating system directly controlled the step motor that moves the floppy drive head, and also directly interpreted
the raw data (called nibbles) read from each track to find the data sectors. This allowed complex disk-based software
copy protection, by storing data on half tracks (0, 1, 2.5, 3.5, 5, 6...), quarter tracks (0, 1, 2.25, 3.75, 5, 6...), and any
combination thereof. In addition, tracks did not need to be perfect rings, but could be sectioned so that sectors could
be staggered across overlapping offset tracks, the most extreme version being known as spiral tracking. It was also
discovered that many floppy drives did not have a fixed upper limit to head movement, and it was sometimes
possible to write an additional 36th track above the normal 35 tracks. The standard Apple II copy programs could not
read such protected floppy disks, since the standard DOS assumed that all disks had a uniform 35-track, 13- or
16-sector layout. Special nibble-copy programs such as Locksmith and Copy II Plus could sometimes duplicate these
disks by using a reference library of known protection methods; when protected programs were cracked they would
be completely stripped of the copy protection system, and transferred onto a standard format disk that any normal
Apple II copy program could read.
One of the primary routes to hacking these early copy protections was to run a program that simulates the normal
CPU operation. The CPU simulator provides a number of extra features to the hacker, such as the ability to
single-step through each processor instruction and to examine the CPU registers and modified memory spaces as the
simulation runs. The Apple II provided a built-in opcode disassembler, allowing raw memory to be decoded into
CPU opcodes, and this would be utilized to examine what the copy-protection was about to do next. Generally there
was little to no defense available to the copy protection system, since all its secrets are made visible through the
simulation. But because the simulation itself must run on the original CPU, in addition to the software being hacked,
the simulation would often run extremely slowly even at maximum speed.
On Atari 8-bit computers, the most common protection method was via "bad sectors". These were sectors on the disk
that were intentionally unreadable by the disk drive. The software would look for these sectors when the program
was loading and would stop loading if an error code was not returned when accessing these sectors. Special copy
programs were available that would copy the disk and remember any bad sectors. The user could then use an
application to spin the drive by constantly reading a single sector and display the drive RPM. With the disk drive top
removed a small screwdriver could be used to slow the drive RPM below a certain point. Once the drive was slowed
down the application could then go and write "bad sectors" where needed. When done the drive RPM was sped up
back to normal and an uncracked copy was made. Of course cracking the software to expect good sectors made for
readily copied disks without the need to meddle with the disk drive. As time went on more sophisticated methods
were developed, but almost all involved some form of malformed disk data, such as a sector that might return
different data on separate accesses due to bad data alignment. Products became available (from companies such as
Happy Computers) which replaced the controller BIOS in Atari's "smart" drives. These upgraded drives allowed the
user to make exact copies of the original program with copy protections in place on the new disk.
On the Commodore 64, several methods were used to protect software. For software distributed on ROM cartridges,
subroutines were included which attempted to write over the program code. If the software was on ROM, nothing
would happen, but if the software had been moved to RAM, the software would be disabled. Because of the
operation of Commodore floppy drives, some write protection schemes would cause the floppy drive head to bang
against the end of its rail, which could cause the drive head to become misaligned. In some cases, cracked versions
of software were desirable to avoid this result.
Most of the early software crackers were computer hobbyists who often formed groups that competed against each
other in the cracking and spreading of software. Breaking a new copy protection scheme as quickly as possible was
often regarded as an opportunity to demonstrate one's technical superiority rather than a possibility of
money-making. Some low-skilled hobbyists would take already cracked software and edit various unencrypted
strings of text in it to change the messages a game would display, often to something not suitable for children, and
then pass the altered copy along in the pirate networks, mainly for laughs among adult users. The cracker groups of
the 1980s started to advertise themselves and their skills by attaching animated screens known as crack intros in the
software programs they cracked and released. Once the technical competition had expanded from the challenges of
cracking to the challenges of creating visually stunning intros, the foundations for a new subculture known as
demoscene were established. Demoscene started to separate itself from the illegal "warez scene" during the 1990s
and is now regarded as a completely different subculture. Many software crackers have later grown into extremely
capable software reverse engineers; the deep knowledge of assembly required in order to crack protections enables
them to reverse engineer drivers in order to port them from binary-only drivers for Windows to drivers with source
code for Linux and other free operating systems.
With the rise of the Internet, software crackers developed secretive online organizations. In the latter half of the
nineties, one of the most respected sources of information about "software protection reversing" was Fravia's
website.
Most of the well-known or "elite" cracking groups make software cracks entirely for respect in the "The Scene", not
profit. From there, the cracks are eventually leaked onto public Internet sites by people/crackers who use
well-protected/secure FTP release archives, which are made into pirated copies and sometimes sold illegally by other
parties.
The Scene today is formed of small groups of very talented people, who informally compete to have the best
crackers, methods of cracking, and reverse engineering.
Methods
The most common software crack is the modification of an application's binary to cause or prevent a specific key
branch in the program's execution. This is accomplished by reverse engineering the compiled program code using a
debugger such as SoftICE, OllyDbg, GDB, or MacsBug until the software cracker reaches the subroutine that
contains the primary method of protecting the software (or by disassembling an executable file with a program such
as IDA). The binary is then modified using the debugger or a hex editor in a manner that replaces a prior branching
opcode with its complement or a NOP opcode so the key branch will either always execute a specific subroutine or
skip over it. Almost all common software cracks are a variation of this type. Proprietary software developers are
constantly developing techniques such as code obfuscation, encryption, and self-modifying code to make this
modification increasingly difficult.
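The branch-patching technique described above can be sketched schematically. The opcode values are real x86 short-jump encodings, but the helper function, the fake binary image, and the offsets are invented for illustration; in practice the target offset is found with a debugger or disassembler, not known in advance:

```python
# Schematic sketch of the key-branch patching technique described above.
# On x86, 0x74 encodes JE (jump if equal), 0x75 encodes JNE (its
# complement), and 0x90 encodes NOP. A short conditional jump is two
# bytes: the opcode followed by a signed displacement.

def patch_branch(image: bytearray, offset: int, mode: str = "complement") -> None:
    """Invert or remove a short conditional jump at `offset` in `image`."""
    JE, JNE, NOP = 0x74, 0x75, 0x90
    opcode = image[offset]
    if opcode not in (JE, JNE):
        raise ValueError("no short conditional jump at this offset")
    if mode == "complement":
        # Always take the other path (e.g. treat a bad serial as good).
        image[offset] = JNE if opcode == JE else JE
    elif mode == "nop":
        # Never jump: fall straight through the check.
        image[offset] = NOP
        image[offset + 1] = NOP  # also overwrite the displacement byte
    else:
        raise ValueError(mode)

# Hypothetical image with a two-byte "JE +5" at offset 3:
image = bytearray([0xB8, 0x01, 0x00, 0x74, 0x05, 0xC3])
patch_branch(image, 3)
assert image[3] == 0x75  # the branch is now inverted
```

The same one- or two-byte change is what a cracker applies with a hex editor or debugger to a real executable; obfuscation, encryption and self-modifying code aim to make that byte hard to locate.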
A specific example of this technique is a crack that removes the expiration period from a time-limited trial of an
application. These cracks are usually programs that patch the program executable and sometimes the .dll or .so
linked to the application. Similar cracks are available for software that requires a hardware dongle. A company can
also break the copy protection of programs that they have legally purchased but that are licensed to particular
hardware, so that there is no risk of downtime due to hardware failure (and, of course, no need to restrict oneself to
running the software on bought hardware only).
Another method is the use of special software such as CloneCD to scan for the use of a commercial copy protection
application. After discovering the software used to protect the application, another tool may be used to remove the
copy protection from the software on the CD or DVD. This may enable another program such as Alcohol 120%,
CloneDVD, Game Jackal, or Daemon Tools to copy the protected software to a user's hard disk. Popular commercial
copy protection applications which may be scanned for include SafeDisc and StarForce.[1]
In other cases, it might be possible to decompile a program in order to get access to the original source code or code
on a level higher than machine code. This is often possible with scripting languages and languages utilizing JIT
compilation. An example is cracking (or debugging) on the .NET platform where one might consider manipulating
CIL to achieve one's needs. Java's bytecode also works in a similar fashion in which there is an intermediate
language before the program is compiled to run on the platform dependent machine code.
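Python's standard `dis` module gives a feel for how readable such intermediate code is compared with native machine code. The registration check below is a made-up example, used only so its bytecode can be shown:

```python
import dis

def check_serial(serial: str) -> bool:
    """A toy registration check, defined only to inspect its bytecode."""
    return serial == "SECRET-1234"

# Disassembling shows the comparison, and the literal key, in plain view.
# This visibility is why bytecode- and JIT-based platforms are
# comparatively easy to decompile and manipulate.
dis.dis(check_serial)
```

.NET CIL and Java bytecode expose program structure at a similarly high level, which is what makes tools that manipulate them practical.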
Advanced reverse engineering for protections such as Securom, Safedisc or StarForce requires a cracker, or many
crackers to spend much time studying the protection, eventually finding every flaw within the protection code, and
then coding their own tools to "unwrap" the protection automatically from executable (.EXE) and library (.DLL)
files.
There are a number of sites on the Internet that let users download cracks for popular games and applications
(although at the danger of acquiring malicious software that is sometimes distributed via such sites). Although these
cracks are used by legal buyers of software, they can also be used by people who have downloaded or otherwise
obtained pirated software (often through P2P networks).
Effects
The most visible and controversial effect of software cracking is the releasing of fully operable proprietary software
without any copy protection. Software companies represented by the Business Software Alliance estimate and claim
losses due to piracy.
Industry response
Apple Computer has begun incorporating a Trusted Platform Module into their Apple Macintosh line of computers,
and making use of it in such applications as Rosetta. Parts of the operating system not fully x86-native run through
the Rosetta PowerPC binary translator, which in turn requires the Trusted Platform Module for proper operation.
(This description applies to the developer preview version, but the mechanism differs in the release version.)
Recently, the OSx86 project has been releasing patches to circumvent this mechanism. There are also industrial
solutions available like Matrix Software License Protection System.
Microsoft sought to reduce common Windows-based software cracking through its Next-Generation Secure
Computing Base initiative, planned for future versions of its operating system.[2]
References
[1] Gamecopyworld Howto (http://m0001.gamecopyworld.com/games/gcw_cd-backup.shtml)
[2] Evers, Joris (2005-08-30). "Microsoft's leaner approach to Vista security" (http://www.builderau.com.au/news/soa/Microsoft-s-leaner-approach-to-Vista-security/0,339028227,339205781,00.htm?feed=pt_windows_7). BuilderAU. Retrieved 2008-12-31.
Article Sources and Contributors
Security Source: http://en.wikipedia.org/w/index.php?oldid=378224140 Contributors: 2beornot57, Abby724, Adamylo, Ahoerstemeier, AlMac, Alansohn, Altermike, Andrewia, Andrewrp,
Andyj00, Angel ivanov angelov, Antandrus, Aranel, Ark2120, ArnoldReinhold, Astharoth1, Backburner001, Beetstra, Bhadani, Biblbroks, BigFatBuddha, Bob Costello, Bosnabosna, Brianjd,
Brockert, Bsroiaadn, Burrettwilce, COMPFUNK2, Cacycle, Caltas, Calvin 1998, Causa sui, CelebritySecurity, ChrisO, Chrisbrown, ChristopheS, Cncs wikipedia, Commander Keane, Common
Man, Corp Vision, Correogsk, Courcelles, Cryptic, DJ Clayworth, DMacks, DanielPharos, Davidfstr, Dbiel, Deagle AP, Derekcslater, Dethme0w, Disooqi, DocWatson42, Docboat,
DocendoDiscimus, Dori, Draeco, Drbreznjev, DreamHaze, Drmies, Dzzl, EFE YILDIRIM, ERobson, Edward, Electronicommerce, Enchanter, Eras-mus, Exegete48, Exegetetheology,
Exit2DOS2000, Fcoulter, Fieldday-sunday, Frap, GB fan, Galoubet, Gerbrant, Gogo Dodo, Greatread, Gurch, H2g2bob, HansWDaniel, Hariva, Havanafreestone, Hephaestos, Heron, HerveB,
Heyta, Ice-Evil, Iceturf, Imnotminkus, Iridescent, Irishguy, Isecom, J.delanoy, JHeinonen, JHunterJ, JRM, Jadams76, Jakejo2009, JamesBWatson, Jeff G., Jim.henderson, Jmabel, Jmax-, JohnLai,
Jose77, Joy, Jpg, Julesd, JungleJym2, Karl-Henner, Kbrose, Kcmoduk, Kku, Klf uk, Kostisl, Koyaanis Qatsi, Kubigula, Kungming2, Kurieeto, Kuru, L Kensington, Laaa200, Lakerfan48,
Latiligence, Levineps, Loganmstarr, Londonlinks, Lucca.Ghidoni, Luk, Luna Santin, M7, MER-C, Mackensen, Mapletip, MathKnight, Matt Britt, Maurreen, Maxlaker, Maziotis, Meaghan,
Mentifisto, Mentmic, Mercy, Mic, Michael Hardy, Minghong, Mlduda, Moonriddengirl, Mozzerati, MrOllie, Muchness, Mysecurecyberspace, Nesberries, Neurolysis, Nick, Nnp, Nothingmore
Nevermore, Oda Mari, OffsBlink, Old Moonraker, Oli Filth, Ominae, Onyx020, Orion11M87, Oxymoron83, PCHS-NJROTC, Pagingmrherman, Patrick, Pearle, Pengo, Peter S., Pinkadelica,
Poli, Portal60hooch, Prolog, Quarl, Qxz, Ragle Fragle 007, Reach Out to the Truth, Reconsider the static, Redeyeglasses, Reedy, Revth, Ronz, Rory096, Roscoe x, RoyBoy, SNIyer12,
SRE.K.A.L.24, ST47, Sander Säde, Sceptre, ShelfSkewed, Shoeofdeath, Shoman93, Sinar, SiobhanHansa, SkyWalker, SpaceFlight89, Spasioc, Spineofgod, Spitfire19, Steeev, Stephenb,
Stopspam4567, Storm Rider, Stupid Corn, Supergreg no1, Suruena, Sushilover boy, T0jikist0ni, Technowonk, Tempshill, The Anome, The Thing That Should Not Be, TheMindsEye, Tide rolls,
Timschocker, Tinidril, Toddgee, Touchingwater, Tqbf, Track1200, Tulocci, UkPaolo, Uncle G, Untchable, Uvainio, Vegaswikian, Veinor, VolatileChemical, Voltov45, Wafulz, Wanne673,
WarthogDemon, Weregerbil, WhisperToMe, Wiki alf, WojPob, Woohookitty, Yaxh, Yidisheryid, Zoom-Arrow, 350 anonymous edits
Security risk Source: http://en.wikipedia.org/w/index.php?oldid=330774118 Contributors: Ash, Reddi, Supergreg no1, 3 anonymous edits
ISO/IEC 27000 Source: http://en.wikipedia.org/w/index.php?oldid=376925700 Contributors: Burt Harris, Claporte, Dravenelle, Elgeebar, Mitch Ames, Nasa-verve, NoticeBored, Pedronet,
RGWZygma, Salinix, Tevildo, Tiggerjay, Veridion, 12 anonymous edits
ISO/IEC 27000-series Source: http://en.wikipedia.org/w/index.php?oldid=376833681 Contributors: A. B., AlephGamma, Burt Harris, Djo3500, GregorB, Nasa-verve, NoticeBored, Nzjoe,
Ron519098, Sagsaw, Star reborn, Tevildo, 30 anonymous edits
Threat Source: http://en.wikipedia.org/w/index.php?oldid=373742971 Contributors: Abb615, Acolyte of Discord, Andycjp, CambridgeBayWeather, Ccacsmss, Cosprings, Davecrosby uk,
Dbachmann, Dingus of hackingnis, Erud, Gilliam, Luna Santin, Missionary, Moonriddengirl, Patrick, PotentialDanger, Rd232, Ronhjones, SharkD, Shinpah1, Signalhead, TheTHREAT, Tisane,
Ufim, 18 anonymous edits
Risk Source: http://en.wikipedia.org/w/index.php?oldid=378223999 Contributors: -oo0(GoldTrader)0oo-, 1ForTheMoney, 4twenty42o, 927, Abheid, Actorstudio, AdjustShift, Ahoerstemeier,
Aixroot, Al001, Alansohn, Aldaron, Alex S, Algorithme, Allanon97, Altenmann, Ancheta Wis, Andman8, Andycjp, Anneaholaward, Anon lynx, Apparition11, Arauzo, ArnoldReinhold,
Avraham, BBird, Baartmns, Backvoods, Bart133, Beland, BenC7, Bhupinder.Kunwar, Bihco, Bill52270, Billy hentenaar, Bjp716, BlairZajac, Bob Burkhardt, Bonus Onus, Bovineone, Brick
Thrower, BrokenSegue, Bruceporteous, Burntsauce, CALR, CDN99, COMPFUNK2, CSWarren, Caissa's DeathAngel, Calliopejen1, CambridgeBayWeather, Capricorn42, CaptainLexicon,
Carax, CarolynLMM, Carrp, Caster23, Cate, Charles Matthews, Charles T. Betz, Cheapskate08, Chinsurance, Christian List, Ciphergoth, Cobaltbluetony, Coldmember, Colignatus, Commander
Keane, Condrs01, Conversion script, Cretog8, CurranH, Cutler, Cyberdog, DCstats, DFLovett, Daguero, David w carraher, Davies69, Dean Armond, Derek Balsam, Dhartung, Dialectric,
Dionisiofranca, Dkeditor, DocendoDiscimus, DomCleal, DonSiano, Doobliebop, DrWorld, Drivi86, Dsp13, Duoduoduo, Dylan Lake, EPM, ERosa, Editor1962, Eeekster, Eep², Emezei, Emvee,
Enchanter, Evil Monkey, Fieldday-sunday, Fjejsing, Former user, Freddie Coster, Frederic Y Bois, Fuhghettaboutit, Gaius Cornelius, Galoubet, Garion96, Geoff Leeming, Geoffr, Giftlite, Glinos,
Gogo Dodo, Gomm, Gregorford, Gsaup, Gurch, Habiibi, Hairhorn, HalJor, HalfShadow, Hayabusa future, He Who Is, Heiyuu, Helgus, Henrygb, Heron, Hgty, Hobsonlane, Hu12, Hubbardaie,
Husond, Iam, Iamaav8r, Iamthesaju, Icewraithuk, Interik, J.delanoy, JForget, JIH3, JRR Trollkien, Jauricchio, Jaxl, Jeff3000, Jeronimo, Jerryseinfeld, Jheald, Jimmaths, Jjeeppyy, John Quiggin,
Johnny Pez, Jon Lackow, Jpbowen, Jtneill, Kenmckinley, Kered77, Kku, Kpmiyapuram, Krakfly, Kuru, Kvdveer, Kwhitten, Larry V, LeadSongDog, Lear's Fool, Lee Daniel Crocker, Levineps,
LilHelpa, Linuxlad, Lowellian, M-banker-212, MCTales, Maashatra11, Mac, Marco Krohn, Martarius, Martinp, Mason3*, Matthew Blake, Maudonk, Max Neill, Mbc362, Mboverload,
Melcombe, MementoVivere, Michael Hardy, Miller52, Mirv, Mnath, Monk Bretton, Moonriddengirl, MrOllie, MrQwag, Msngupta, My Cat inn, Myanw, Myrvin, NHRHS2010, Neelix, Nevcao,
Nikai, Nurg, Oliverwyman, Omegapowers, Osias, Overix, OwenX, Oxymoron83, Pakoistinen, Panybour, Patrick, Pgan002, Pgr94, Pgreenfinch, PhilipMW, Pianoman23, Platothefish, Pm master,
Pointergrl, RSStockdale, Razimantv, Rd232, Requestion, Rescherpa, RevRagnarok, Rgvandewalker, Rhobite, RicCooper, Rich Farmbrough, Rich257, Rimini, Risk Analyst, Riversider2008,
Rjwilmsi, Roelzzz, Roni38, Ronz, SAE1962, SDC, Sam Hocevar, Satori Son, Scalene, Scott Sanchez, SebastianHelm, Seffer, Severa, Shoefly, SilkTork, Simesa, Simoes, Simon123, Sjforman,
Sketchmoose, Snow555, SnowFire, Snowded, Snowmanradio, SocialNeuro, Sonderbro, SpeakerFTD, Spearhead, Spunch, Stemonitis, Stevenjones15, Stijn Vermeeren, Subsolar, Sugarfoot1001,
Supergreg no1, Tedickey, Testbed, Texacali3d, The Anome, The Thing That Should Not Be, The-dissonance-reports, TheGrappler, Thfmax, Tlroche, Tony Fox, Tonytypoon, Treisijs,
Tutmaster321, Tyrol5, UninvitedCompany, Urbanette, Vald, Velho, Visual risk management, WadeLondon, Warpfactor, WaysToEscape, Wemlands, Wikiklrsc, Wikomidia, Wood Thrush,
Woohookitty, Wordsmith, Wouldyou, Wtmitchell, Xyzzyplugh, Yamamoto Ichiro, Z10x, Zaggernaut, Zara1709, Zodon, Zzyzx11, 347 anonymous edits
Authentication Source: http://en.wikipedia.org/w/index.php?oldid=373970667 Contributors: (, Ablewisuk, AbsolutDan, AlephGamma, Almit39, Altenmann, Anavaman, AngoraFish,
Apelbaum, Arnneisp, ArnoldReinhold, Ashdo, Bachrach44, Backburner001, Bdpq, Beland, Bernburgerin, Bertix, Billymac00, Blm, Bludpojke, Bnovack, Boozinf, Bukowski, Ceyockey, Cherlin,
ClementSeveillac, Clymbon, Cryptosmith, DanMS, Dancter, Darth Andy, David-Sarah Hopwood, Davy7891, Dendrolo, Dmg46664, Double Dickel, DreamGuy, Dtodorov, EJNorman, EJVargas,
EconoPhysicist, EdJohnston, Edmilne, Ednar, Edward, Edward Z. Yang, Emarsee, EntmootsOfTrolls, Epbr123, Euryalus, Fakestop, Fedayee, Fish and karate, Gabriel Kielland, Galoubet,
Gbroiles, Goochelaar, GraemeL, H2g2bob, Heavy chariot, Hnguyen322, Homer Landskirty, Hu12, Husky, Id babe, Imran, Imz, Ingvarr2000, Jheiv, John Vandenberg, Johntinker, Jomsborg,
JonHarder, Jorunn, Jose.canedo, Kokoriko, Lakshmin, LeCire, Leotohill, Lindenhills, Lowellian, Ludvikus, Luís Felipe Braga, MarioS-de, Mark.murphy, MartinHarper, Matt Crypto, Mayahlynn,
Mdsam2, Metadigital, Mindmatrix, Mswake, Mywikiid99, Nabeth, Neverquick, Nickshanks, NoticeBored, Nurg, Nuwewsco, Ohnoitsjamie, Oneiros, Pavel Vozenilek, Pde, Pdelong, Pgan002,
Pink!Teen, Pinkadelica, Pkgx, Primasz, Radagast83, Rami R, Rholton, Rich Farmbrough, Rlsheehan, Rodii, Roland2, Ronhjones, Rshoham, Shadowjams, Shonzilla, Skidude9950, Skippydo,
Smack, SpeedyGonsales, Sqying, Stlmh, Syphertext, Szquirrel, THEN WHO WAS PHONE?, Technologyvoices, The Founders Intent, Thepofo, Theshadow27, Tjcoppet, Tom-, Tommy2010,
UncleBubba, User A1, ValC, Versus22, WereSpielChequers, Wikiold1, Wireless friend, Woohookitty, Ww, Xemoi, Yerpo, Zzuuzz, 218 anonymous edits
Authorization Source: http://en.wikipedia.org/w/index.php?oldid=370058405 Contributors: 16@r, Ali farjami, Althepal, ArnoldReinhold, BD2412, BMF81, Betacommand, Cjcollier,
Danbalam, Deville, Edajgi76, Gaius Cornelius, GraemeL, Hnguyen322, Jaap3, Jamelan, Joining a hencing, JonHarder, Jordav, Josang, Lectonar, Lowellian, Luís Felipe Braga, MacTire02, Mark
T, Mark.murphy, Metrax, Michael Hardy, MichaelBillington, Mindmatrix, MrMelonhead, Mulad, NewEnglandYankee, Philip Baird Shearer, Rgrof, Rodii, Ru.spider, Rupert Clayton,
Sean.hoyland, Tagishsimon, Theshadow27, Tjcoppet, 46 anonymous edits
Social engineering (security) Source: http://en.wikipedia.org/w/index.php?oldid=378834890 Contributors: (jarbarf), Aaa111, Abaddon314159, Academic Challenger, Adamdaley, Aldaron,
Alerante, AlistairMcMillan, Anon126, Anton Khorev, Apotheon, Arkelweis, ArnoldReinhold, Arsenikk, Bbarmada, Beagle2, Beland, Bjornar, Bmicomp, Brockert, Brutulf, Buster7, Chahax,
ChangChienFu, Chinasaur, Chipuni, ChiuMan, Chmod007, Chovain, Chrisdab, Chuayw2000, CliffC, Coemgenus, Courcelles, Cryptic C62, Cumulus Clouds, Cutter, Cybercobra, Cynical, D6, Da
nuke, DaedalusInfinity, Dancter, Daniel Quinlan, DanielPharos, DavidDW, Dcoetzee, Ddddan, Dddenton, DevastatorIIC, Dionyziz, Dp76764, Dravir, DuFF, EDGE, ESkog, EVula, Ehheh,
Einsteininmyownmind, Elonka, Emergentchaos, Equendil, Eric1608, Evilandi, Faradayplank, Fenice, Frecklefoot, Frehley, Fskrc1, Gdo01, Gettingtoit, Gizzakk, Gogo Dodo, Greswik, Ground
Zero, Gscshoyru, Guy Harris, H I T L E R37, Haseo9999, Heqwm, Hmwith, Hydrargyrum, I already forgot, IGEL, InfoSecPro, Intgr, J Cricket, JMMING, Jeff Muscato, Jerzy, Jfire, Jkl,
Jlmorgan, Joelr31, John Broughton, Johnisnotafreak, Jumpropekids, Kaihsu, Katanada, Katheder, Khym Chanur, Kimchi.sg, Kleinheero, KnightRider, Knowledge Seeker, Kpjas, Ksharkawi,
Lamename3000, Leonard G., Lexlex, Lightmouse, Lioux, Lord Matt, Lukeonia1, MER-C, Mac Davis, Majorly, Matt Crypto, McGeddon, Mckaysalisbury, Mdebets, MeekMark, MeltBanana,
Midnightcomm, Mild Bill Hiccup, Mns556, Moitio, MrOllie, Myredroom, NTK, Nafango2, Namzie11, Netsnipe, Nirvana888, NoticeBored, Nuno Tavares, Nuwewsco, Oddity-, Olrick,
Omicronpersei8, Omphaloscope, Othtim, Overloggingon, Paquitotrek, Penbat, Pgillman, Ph.eyes, Philip Trueman, Phoenixrod, Pmsyyz, Pol098, Primarscources, Princess Tiswas, RJBurkhart3,
RainbowOfLight, Rebroad, RenniePet, RevolverOcelotX, Rich Farmbrough, Rjwilmsi, RobertG, Rohasnagpal, Rosenny, Rossami, SGBailey, Sephiroth storm, Sesquiped, Shabda, Shirulashem,
Socrates2008, Srikeit, Starschreck, Studiosonic, Sue Rangell, Svick, TXiKi, Taylor Karras, Teemu Maki, The Anome, The Firewall, TheMandarin, Thepatriots, Thesloth, Thingg, Thipburg,
Tmchk, Tomisti, TonyW, Tsnapp, Tunheim, Unyoyega, Uriber, Vary, Ventura, Versus22, Virgil Vaduva, Waldir, WhisperToMe, Wikipelli, Wilku997, WolFStaR, Woohookitty, Wshepp, XL2D,
Xiong Chiamiov, Zarkthehackeralliance, Zomgoogle, 396 anonymous edits
Security engineering Source: http://en.wikipedia.org/w/index.php?oldid=375250509 Contributors: 193.203.83.xxx, 213.253.39.xxx, Ablewisuk, AllThatJazz65, Andreas Kaufmann,
ArnoldReinhold, Arthena, Baldbunny619, CelebritySecurity, Commander Keane, Conversion script, D6, David stokar, Debresser, Dwheeler, El C, Enigmasoldier, Excirial, Exit2DOS2000,
FrankWilliams, Frap, Giftlite, Grutness, Gurch, Imecs, Ingolfson, JA.Davidson, Josh Parris, Jpbowen, Kcordina, Korath, Ljean, Lotte Monz, Manionc, Matt Crypto, Mauls, Mikebar, MrOllie,
Mywikicontribs, Neonumbers, OrgasGirl, Ospalh, Peter Horn, Philip Trueman, Remi0o, Robert Merkel, Ross Fraser, SCΛRECROW, Selket, Shamatt, Shustov, Sjfield, SteinbDJ, Suruena,
Sverdrup, Tassedethe, That Guy, From That Show!, The Anome, Tonyshan, Tpbradbury, TreasuryTag, Vegaswikian, Whiner01, Willowrock, Wimt, Wyatt915, 77 anonymous edits
Physical security Source: http://en.wikipedia.org/w/index.php?oldid=375625283 Contributors: 95j, Advancesafes55, Andyj00, Ani.naran, Arvindn, Beetstra, CCTVPro, CliffC, Coffee,
Conversion script, Corp Vision, Cptmurdok, Dcljr, Dss311, Eastlaw, Egmontaz, EverGreg, Exit2DOS2000, Frap, Gabriel1907, GraemeL, Graham87, Itai, Jazzwick, KellyCoinGuy, Kuru,
LogicX, Lupo, Lwlvii, M0nde, Magog the Ogre, Matt Crypto, MattKingston, Mattisse, Mav, Mazarin07, McGov1258, Mmernex, Mojska, Nateusc, Notinasnaid, Patrick, PatrickVSS, Pinkadelica,
Prari, Quatloo, Qxz, Red Thrush, Requestion, Romal, Ronhjones, Ronz, SecProf, Securiger, Shadowjams, Shiseiji, Shustov, SiobhanHansa, Stephenb, Steven hillard, The Anome, TonyW,
Trav123, Verbal, Weregerbil, Whiner01, William Avery, Willowrock, Wuzzeb, Zapptastic, Zeerak88, ZeroOne, 98 anonymous edits
Computer security Source: http://en.wikipedia.org/w/index.php?oldid=378617236 Contributors: 213.253.39.xxx, 2mcm, =Josh.Harris, ABF, Aapo Laitinen, Aarontay, Access-bb, Acroterion,
Add32, AdjustShift, Adrian, Agilmore, AlMac, Alansohn, Albedo, AlephGamma, Alvin-cs, Alxndrpaz, Amcfreely, Ansh1979, Ant, Antandrus, Arcade, Architectchamp, Arctific, Ark, Arkelweis,
ArnoldReinhold, Arpingstone, Arthena, Arvindn, Autarch, AvicAWB, BMF81, Bachrach44, Bar-abban, Barek, Beetstra, Bertix, Bingo-101a, Blackjackmagic, Bluefoxicy, Bobblewik, Bonadea,
Booker.ercu, Boredzo, Boxflux, Bsdlogical, Bslede, C17GMaster, CSTAR, Caesura, Cameron Scott, Cdc, CesarB, Chadnibal, Chill doubt, Chuq, Clapaucius, ClementSeveillac, CliffC, Clovis
Sangrail, Codemaster32, Condor33, Conversion script, Corp Vision, Cosmoskramer, Cpt, Crazypete101, Cudwin, Cyanoa Crylate, Dachshund, Dan100, Danakil, DanielPharos, Dave Nelson,
David Gerard, David-Sarah Hopwood, DavidHOzAu, Dcampbell30, Dcljr, Ddawson, DerHexer, Devmem, Dictouray, Dkontyko, Dmerrill, Donnymo, Dookie, Dori, Doug Bell, Dr Roots, Dr-Mx,
Druiloor, Dthvt, Dubhe.sk, Dwcmsc, Dwheeler, Edggar, Ehheh, El C, Elassint, Elfguy, Ellenaz, Emantras, Endpointsecurity, Epbr123, Expertour, FT2, Falcon8765, Fathisules, Filx,
Foxxygirltamara, FrankTobia, Frap, Frecklefoot, Fredrik, Fubar Obfusco, Future Perfect at Sunrise, FutureDomain, Gaius Cornelius, Galoubet, Gam2121, Gbeeker, Geni, GeoGreg, Geoinline,
George1997, Gerrardperrett, Gingekerr, Gogo Dodo, Gordon Ecker, Gorgonzilla, GraemeL, Graham Chapman, Graham87, Grammaton, Ground Zero, Gscshoyru, Guyd, Gwernol, H2g2bob,
HRV, Haakon, Hadal, Hall Monitor, Harryboyles, Haseo9999, HenkvD, Heron, Honta, Hubbardaie, I already forgot, Icey, Immzw4, Intelligentsium, Irishguy, Ishisaka, IvanLanin, Ixfd64,
J.delanoy, JA.Davidson, JDavis680, JXS, Jadams76, Jargonexpert, Javeed Safai, Jengod, Jenny MacKinnon, Jensbn, Jesper Laisen, JidGom, JimWalker67, Jimothytrotter, Jmorgan, Joanjoc,
JoanneB, JoeSmack, Joffeloff, JohnManuel, Johntex, JohnyDog, JonHarder, Jonasyorg, JonathanFreed, JonnyJD, Joy, Joyous!, Jreferee, Juliano, Julie188, Ka-Ping Yee, Katalaveno,
KathrynLybarger, Kazrak, Kbrose, Kesla, Khhodges, Kmccoy, KnowledgeOfSelf, Kosik, Kpavery, Krich, Kungming2, Kuru, Kurykh, Kwestin, Lakshmin, Largoplazo, Lcamtuf, Leibniz,
Lightdarkness, Lightmouse, Ligulem, Lxicm, M3tainfo, MER-C, Mako098765, Manionc, Manjunathbhatt, Marc Mongenet, Marcok, Marzooq, Matt Crypto, Mav, MeltBanana, Mentmic, Michael
B. Trausch, Michi.bo, Mike Rosoft, Mike40033, Miquonranger03, Ml-crest, Mmernex, Moa3333, Monkeyman, Moomoo987, Morphh, Mr. Lefty, MrOllie, Mrholybrain, Mscwriter, Mu,
Mydogategodshat, Myria, Nikai, Nitesh13579, Nixeagle, Njan, Oarchimondeo, Oblivious, Ohnoitsjamie, Omnifarious, OrgasGirl, P@ddington, PabloStraub, Pakaran, Palica, Panda Madrid,
Papergrl, Passport90, Pctechbytes, Perspective, Peter Schmiedeskamp, Phatom87, Piano non troppo, PierreAbbat, Ponnampalam, Poweroid, Prashanthns, President of hittin' that ass, Proton,
Ptomes, Public Menace, Pyrop, Quantumseven, QuickFox, Quiggles, Qwert, RJASE1, Raanoo, Raraoul, Raysecurity, Rbilesky, Rcseacord, Red Thrush, RexNL, Rhobite, Rich Farmbrough,
RichardVeryard, Rinconsoleao, Riya.agarwal, Rjwilmsi, Rlove, Rmky87, Robert Merkel, Robofish, Rohasnagpal, Romal, Ronz, Rursus, Rvera, Rwhalb, SWAdair, Salsb, Sapphic, SecurInfos,
SecurityGuy, Seidenstud, Sephiroth storm, Shadowjams, Sharpie66, Sheridbm, Shonharris, SimonP, Siroxo, Sjakkalle, Sjjupadhyay, Skarebo, SkerHawx, Smith bruce, Smtully, Snori, Snoyes,
Socrates2008, Solinym, Solipsys, SolitaryWolf, Soloxide, SpK, Spearhead, Sperling, Squash, StaticGull, Stationcall, Stephen Gilbert, Stephenb, Stor stark7, Stretch 135, StuffOfInterest,
Supertouch, Suruena, SusanLesch, Susfele, Sweerek, Szh, THEN WHO WAS PHONE?, Tanglewood4, Tartarus, Tassedethe, Taw, Taxman, Technologyvoices, TestPilot, Texture, The Anome,
The Punk, Thireus, Timothy Clemans, Tional, Tjmannos, Tobias Bergemann, Tobias Hoevekamp, Tom harrison, Toon05, Touisiau, Tqbf, Tripletmot, Tuxisuau, Ukuser, Vaceituno, Versus22,
Vijay Varadharajan, Vininim, Wfgiuliano, White Stealth, Whitehatnetizen, Wiki-ay, Wikicaz, WikipedianMarlith, Wikiscient, Wimt, Wine Guy, Wingfamily, Wlalng123, Wmahan, Wolf530,
Woohookitty, Wrathchild, XandroZ, YUL89YYZ, Yaronf, YoavD, Yuxin19, Zarcillo, Zardoz, Zarutian, ZeroOne, Zhen-Xjell, Zifert, ZimZalaBim, Zzuuzz, 668 anonymous edits
Access control Source: http://en.wikipedia.org/w/index.php?oldid=378513169 Contributors: Actatek, Advancesafes55, Amire80, Andreiij, Andriusval, Apacheguru, ArielGlenn, BAxelrod,
BD2412, Bderidder, Ben Ben, Binksternet, Borgx, Brankow, Bunnyhop11, CHoltje, Camw, Carlosguitar, Chris0334, CliffC, DEng, David-Sarah Hopwood, Dekisugi, Delfeye, Edward, El C,
Eldub1999, Espoo, EverGreg, Exit2DOS2000, Feptel, FisherQueen, Frap, Gagandeeps117, Gail, Gaius Cornelius, Galar71, George A. M., Gogo Dodo, Grafen, Guinness man, Gurch, H2g2bob,
Hersfold, Hetar, Hu, Hu12, Iancbend, Immunize, Indyaedave, Iwatchwebmaster, Jedonnelley, Jeff3000, Jgeorge60, Jray123, Ka-Ping Yee, Knokej, LadyAngel89, Lightmouse, Lingliu07,
Linkspamremover, Luís Felipe Braga, MER-C, Magioladitis, Mark Renier, McGeddon, McGov1258, Memoroid, MikeHobday, Mindmatrix, NE2, NawlinWiki, Neetij, Nickbernon, Nihiltres, Old
Moonraker, Omassey, Patrick, PeregrineAY, Philip Trueman, Piano non troppo, PrologFan, RISCO Group, Rickfray, Rjwilmsi, Roleplayer, Ruuddekeijzer, RyanCross, SPUI, Scouttle,
SecurityEditor, Sietse Snel, Silly rabbit, Slakr, Soifranc, Stantry, Stephenb, Subverted, Swamy.narasimha, Talsetrocks, Testplt75, Texture, Thatcher, The Anome, The Founders Intent, Therepguy,
Timo Honkasalo, Tobias Bergemann, Tonypdmtr, Trbdavies, Vaceituno, Vrenator, Web-Crawling Stickler, Welsh, Wikiborg, Willowrock, Woohookitty, Wsimonsen, Xaosflux, Yan Kuligin,
Zeeshankhuhro, ‫ينام‬, 245 anonymous edits
Access control list Source: http://en.wikipedia.org/w/index.php?oldid=378718760 Contributors: Abune, Akhristov, AlephGamma, Amit bhen, Apokrif, Bdesham, Bluefoxicy, Bobrayner, Brian
McNeil, Calculuslover, Cburnett, Centrx, Ched Davis, ChristopherAnderson, CyberSkull, DGerman, DNewhall, Daggyy, David-Sarah Hopwood, Dgtsyb, Dpv, Dreadstar, Ems2, Equendil,
Eric22, Excirial, Extransit, Falcon9x5, Fipinaka8, Floridi, Frap, Gardar Rurak, George Burgess, Gjs238, Gutza, Heron, Ignacioerrico, IngisKahn, Isaacdealey, Itai, JamesBWatson, Jarsyl,
Jminor99, JonHarder, Joy, Ka-Ping Yee, Kbolino, Kbrose, Kernel.package, Kinema, Korath, Kubanczyk, Kvng, Lajevardi, LeonardoGregianin, Lime, Lunchscale, Manu3d, Marknew, Marqueed,
Matěj Grabovský, Milan Keršláger, Mindmatrix, Modster, Mormegil, Mulad, Oliver Crow, Omicronpersei8, Ost316, OverlordQ, Oxymoron83, Palpalpalpal, Parklandspanaway, Paul Weaver,
Raul654, Raysonho, RedWolf, Redeeman, Res2216firestar, Retodon8, Robenel, Ronhjones, Samuel Blanning, Shadowjams, Shahid789, Silly rabbit, Swebert, Szquirrel, Ta bu shi da yu, Tablizer,
Tagesk, Tarquin, Tclavel, Template namespace initialisation script, The Rambling Man, Top5a, Trasz, Warren, Winhunter, Xpclient, Yhabibzai, 186 anonymous edits
Password Source: http://en.wikipedia.org/w/index.php?oldid=378819437 Contributors: Aapo Laitinen, Abb615, Abune, Alai, Alansohn, Alexius08, Alksentrs, Allan64, Andrejj, Anshulnirvana,
Antandrus, Arctific, Are2dee2, Armando82, ArnoldReinhold, Arthuran, Arvindn, AugPi, B. Wolterding, Beezhive, Begoon, Bforte, Bihco, Bleh999, Bloosyboy, Bobmarley1993, Bonzo,
Boredzo, Borgx, Brazil4Linux, Brendandonhue, Brianga, Brickbybrick, Brockert, Buddy13, Calebdesoh, Caltas, Camw, CanisRufus, Carlsotr, Cdc, Cflm001, Chamal N, Cheezmeister,
Chocolateboy, Chrislk02, CliffC, Computer Guru, Conversion script, Correogsk, Crazycomputers, D thadd, DRLB, DStoykov, DVD R W, Danelo, DarkFalls, Darkjebus, Darth Andy,
DavidJablon, Davidgothberg, Deb, Den fjättrade ankan, Deor, DerHexer, Dethme0w, Dispenser, Dnas, Donfbreed, Donreed, Dragonboy1200, EasyTarget, Ed Poor, EivindJ, Ejjjido., Elembis,
Em8146, EmilJ, Epbr123, Epktsang, Etu, Evil saltine, FisherQueen, Fram, Frames84, Frap, FreplySpang, Fuhghettaboutit, Furrykef, G7yunghi, GOATSECHIGGIN, Gimme danger, Ginkgo100,
GlenPeterson, Graham87, Gredil, Greenguy1090, Gregzeng, Gsp8181, Gtstricky, H2g2bob, Haakon, Haham hanuka, Hajatvrc, Halo, Handbook3, Helpmonkeywc, Hirohisat, Homer Landskirty,
Homerjay, Huppybanny, Imran, Intgr, Ioeth, Iowahwyman, J.delanoy, Jad noueihed, Jakey3 6 0, Jayjg, Jaypeeroos, Jensbn, Jeremy Visser, Jgeorge60, Jidanni, Joakimlundborg, John Vandenberg,
JohnOwens, Jojhutton, Jonathan Kovaciny, Jons63, Jordav, Joseph Solis in Australia, Joy, Jpgordon, KJS77, Ka'Jong, Karl Dickman, Kbh3rd, Kenguest, Ketiltrout, Khendon, Kieron, Kigali1,
Kirvett, KnowledgeOfSelf, Kransky, Krylonblue83, Kukini, L337p4wn, Landon1980, Ldo, LiDaobing, Lovykar, Lynto008, MC MasterChef, MCR NOOB MCR, MER-C, Machii, Martin451,
Matt Crypto, Matthew Yeager, Mattpw, Mbell, Mckaysalisbury, Mckinney93, Metodicar, Mgentner71, Michael Snow, MichaelBillington, Mikayla123, Mike Rosoft, Mitch Ames, Mizzshekka1,
Mns556, Mnxextra, Moink, Moonradar, Mr Stephen, Mr. Lefty, MrOllie, Mugunth Kumar, Musides, Mysid, Naddy, Nagelfar, Nakon, Nate Silva, NathanBeach, Nddstudent, Neils.place,
Networkspoofer, Nigelj, Nnh, Ntsimp, Nuno Tavares, Nyks2k, Oda Mari, OekelWm, Ohka-, Olegos, OlenWhitaker, Oli Filth, Olivier Debre, Omegatron, OrgasGirl, Oswald07, Ottawa4ever,
Ouizardus, PCcliniqueNet, Pakaran, Paul A, Paul Stansifer, Paulchen99, Perlygatekeeper, Peyna, Phatom87, Phil Holmes, PhilKnight, PhilipO, Piano non troppo, PierreAbbat, Pilif12p, Pilotguy,
Pinethicket, Pit, Prashanthns, ProfessorBaltasar, Proton, PseudoSudo, Pseudomonas, Quackslikeaduck, Quadell, Rangi42, Raven in Orbit, Rcb, Real decimic, Rearden9, RedLinks, RexNL, Rich
Farmbrough, Richard0612, Rjd0060, Rochlitzer, Rotring, Roypenfold, Rror, Sabbut, Saganaki-, SaltyPig, Saqib, Sbb, ScAvenger, Sedimin, Serezniy, Shaddack, Shadowjams, Shanes, Shohami,
Shshme, Shuggy, Slysplace, Snoyes, Spartaz, Spens10, Spoon!, Squids and Chips, Staka, Steven Zhang, Stormie, TGoddard, TUF-KAT, Tangent747, Tcotco, Tedickey, Texture, TheDoctor10,
TheRingess, Thebeginning, Themindset, Thipburg, Tiddly Tom, Tjwood, Tony Sidaway, TonyW, Tra, Trafford09, TransUtopian, TreasuryTag, Unicityd, Utcursch, VBGFscJUn3, ValC,
VampWillow, Veinor, VernoWhitney, Violetriga, Visor, Vocation44, Walle1357, Wavelength, Weregerbil, Wikicrying, Wikieditor06, Wikipelli, Wimt, Wolfsbane, Woohookitty, Wrs1864, Ww,
Xap, Xelgen, Yamamoto Ichiro, Zenohockey, Zoe, Zondor, Zoz, 381 anonymous edits
Hacker (computer security) Source: http://en.wikipedia.org/w/index.php?oldid=379017724 Contributors: 007exterminator, 5 albert square, 7OA, A. Parrot, A1b1c1d1e1, Abb3w,
Acidburn24m, Acroterion, Adambro, Adamryanlee, Addshore, AdjustShift, Adrian, Adrianwn, Agent Smith (The Matrix), Aiden Fisher, Aitias, Alansohn, Alexmunroe, Allen4names,
Allstarecho, Alpha 4615, Ameliorate!, Amirhmoin, Ammubhave, Andponomarev, Angelafirstone, Animum, Anthony62490, Antipode, Apparition11, Armeyno, Army1987, Ash, Asphatasawhale,
Atsinganoi, Atulsnischal, Aubwie, Aude, Aviator702-njitwill, Axonizer, Bachrach44, Badgernet, BananaFiend, BarretBonden, Bburton, Bdorsett, Bedwanimas214, Ben Ben, Benedictaddis,
Beno1000, Bgs022, BigChris044, Blanchardb, BlastOButter42, Bobo192, Bogey97, Bongwarrior, Br33z3r, Brianpie, Brothejr, Bucketsofg, CRYSIS UK, CYRAX, CanadianLinuxUser,
Capricorn42, Cdf333fad3a, Celebere, ChadWardenz, Charles Matthews, Chris the speller, Chuunen Baka, Click23, Cometstyles, Comperr, Computerhackr, Computerwizkid991, Condra,
CosineKitty, Courcelles, Crakkpot, Creepymortal, Cybercobra, Dan Brown456, Danbrown666, Daniel 1992, Danno uk, Dark-Dragon847, DarkFalcon04, Darkspin, Darkwind, DarthVader,
Davewild, David91, Davman1510, Dcb1995, Dejan33, Delivi, Delmundo.averganzado, DerHexer, Deskana, Digital Pyro, Diligent Terrier, Dipu2susant, Discospinster, Doctor who9393,
DogPog1, DoubleBlue, Doug, DragonflySixtyseven, Droll, Dspradau, Duk, Egmontaz, Ehheh, Ejdzej, ElKevbo, Elassint, Encyclopedia77, Epbr123, Erik9, Error -128, Evaunit666,
Evilmindwizard, Excirial, FCSundae, Faradayplank, Farry, Fastily, Fat4lerr0r, FayssalF, Footballfan42892, Fordmadoxfraud, Fran Rogers, Freakofnurture, Frehley, Freqsh0, Frosted14,
Fuhghettaboutit, Gaius Cornelius, GeneralIroh, Ghepeu, Gigor, Glacier Wolf, Gogo Dodo, Graham87, Grandscribe, Grimhim, Gwernol, Gyakusatsu99, H2g2bob, HTS3000, Hackistory, Halod,
Hannibal14, Harryboyles, Haseo9999, Haxor000, Hexatron2006, HistoryStudent113, Holyjoely, Horrorlemon, Howlingmadhowie, Howrealisreal, Hroðulf, Hu12, Hudy23, Hydrogen Iodide,
IAnalyst, Iamahaxor, Icey, Igoldste, Im anoob68, Imanoob69, ImperatorExercitus, Indexum, Intershark, Intgr, Ipwnz, Iridescent, Irishguy, J.delanoy, JForget, Jackollie, Jameshacksu, Japonca,
Jasgrider, JayKeaton, Jclemens, Jebba, Jennavecia, JimVC3, Jmundo, JoeMaster, John of Reading, JohnDoe0007, JonWinge, Joyous!, Juliancolton, Jvhertum, Jwray, K-lhc, KVDP, Kakero,
Katalaveno, KayinDawg, KelleyCook, Kelly Martin, Kendelarosa5357, Ketiltrout, Kewp, Khoikhoi, KillerBl8, Kingpin13, KnowledgeBased, Knowzilla, Kokey, KrakatoaKatie, Krazekidder,
Kreachure, Kubanczyk, Kungming2, Kurykh, Ladanme, Landon1980, Leuko, Lilnik96, Limideen, Lolhaxxzor, Lolimahaxu, Loren.wilton, Lradrama, Luna Santin, MBisanz, MER-C, MHPSM,
MK8, Maelwys, Mahanga, Manticore, Marcika, Mark7-2, Martarius, Martychamberlain, Maziotis, Mccleskeygenius10, Mckaysalisbury, Meeples, Mentisock, Mgiganteus1, Michael Snow,
Michael93555, Mindmatrix, MisterSheik, Ml-crest, Mnbitar, Mos4567, Mounlolol, Mr. Wheely Guy, Mr.honwer, MrFirewall, MrOllie, Mrdifferentadams, Mukesh2006, Mukslove, Mysdaao,
Mzinzi, Nallen20, Nauticashades, Ndenison, NeoChaosX, Nfutvol, Nickgonzo23, Nickrj, Nicopresto, Nivekcizia, Njan, Noctibus, Nokeyplc, Nolan130323, Noq, NorwegianBlue, Nuttycoconut,
OlEnglish, Omicronpersei8, Orphan Wiki, Oscarthecat, Othtim, OverlordQ, Oxymoron83, PL290, Pegua, Pengo, Pharaoh of the Wizards, Philip Trueman, Pigmietheclub, Pogogunner, Poindexter
Propellerhead, PokeYourHeadOff, Porqin, Pre101, Propaniac, Pseudomonas, Publicly Visible, Qlid, Quantumelfmage, Quest for Truth, Qutezuce, Raganaut, RaidX, Randhirreddy, Ranga42,
Raptor1135, Ravyr, Rayzoy, Reach Out to the Truth, RebirthThom, Redmarkviolinist, Rekrutacja, RexNL, Rich Farmbrough, Rising*From*Ashes, Rjwilmsi, Rmosler2100, Robertobaroni, Ronz,
Roo556, Rory096, Rsrikanth05, Rtc, Ryan, RyanCross, SF007, SJP, ST47, Satanthemodifier, Sayros, Sbbpff, Scarian, Script-wolfey, Seb az86556, SecurInfos, Sephiroth storm, Shadowjams,
SheeEttin, Skomorokh, Slumvillage13, Slysplace, SoCalSuperEagle, Someguy1221, SpaceFlight89, SpectrumDT, SpuriousQ, SqueakBox, Staka, Stayman Apple, Steaphan Greene, Storm Rider,
Strobelight Seduction, Surya.4me, Sv1xv, Sweetpaseo, Swotboy2000, T.Neo, THEN WHO WAS PHONE?, Taimy, Tall Midget, Tckma, Tempodivalse, Tense, Terafox, The Anome, The Thing
That Should Not Be, The wub, TheDoober, Thecheesykid, Thedangerouskitchen, Thehelpfulone, Thingg, Thiseye, Tide rolls, TigerShark, Tikiwont, Tommy2010, Toon05, Touch Of Light,
Tpjarman, Tqbf, Triwbe, Twas Now, Tytrain, Tyw7, Ulric1313, UncleanSpirit, Unixer, Unmitigated Success, Useight, Ustad24, VanHelsing23, Vatrena ptica, Versus22, Vic93, Voatsap, Vonvin,
WadeSimMiser, Waldir, Warrington, Waterjuice, Weregerbil, Whaa?, WikiMASTA, Wikichesswoman, WilliamRoper, Winsock, Wisden17, Work permit, Wrs1864, Xena-mil, XeroJavelin,
Xf21, Xiong Chiamiov, Xkeeper, XxtofreashxX, Xython, Yunaffx, Zeeshaanmohd, ZimZalaBim, Zimbardo Cookie Experiment, Zsero, Zvonsully, ‫א"ירב‬, 1076 anonymous edits
Malware Source: http://en.wikipedia.org/w/index.php?oldid=378890556 Contributors: !melquiades, (, 12056, 16@r, 66.185.84.xxx, A Nobody, A.qarta, A3RO, Abune, Adaviel, Akadruid,
Alansohn, Alekjds, Alex43223, AlistairMcMillan, Allen3, Amcfreely, American2, Americasroof, AndonicO, Andreworkney, Andrzej P. Wozniak, Andypandy.UK, Angela, Appelshine, Arcolz,
Arienh4, Ashton1983, Astral9, Atulsnischal, Audriusa, Augrunt, Aussie Evil, Avastik, BalazsH, Bdelisle, Berland, BfMGH, Bgs022, Billymac00, Blurpeace, Bobmack89x, Bobo192,
Bongwarrior, Bornhj, CFLeon, CactusWriter, Calabraxthis, Calltech, Cameron Scott, Camw, Capricorn42, CarloZottmann, CatherineMunro, Cbrown1023, Ccole, Centrx, CesarB, Chan mike,
ChicXulub, Chriscoolc, Chrislk02, Chun-hian, Ciphergoth, Cit helper, Cleared as filed, Clicketyclack, CliffC, Closedmouth, ClubOranje, Colindiffer, Collins.mc, Cometstyles, Compman12,
Connorhd, Conversion script, Copana2002, Coyote376, Crt, Cshay, Ct280, Cuvtixo, Cwolfsheep, Cxz111, D6, DHN, DJPhazer, DR04, DaL33T, DanielPharos, David Gerard, David Martland,
Dawnseeker2000, Dcampbell30, Demizh, DerHexer, Dffgd, Diego pmc, Digita, Dougofborg, Download, Dpv, Dreftymac, Drewjames, Drphilharmonic, Dtobias, Durito, Ecw.technoid.dweeb,
Edward, Ehheh, Ejsilver26, Elison2007, Ellywa, Elvey, Elwikipedista, Enauspeaker, EneMsty12, Entgroupzd, Epbr123, Ericwest, Espoo, Etaoin, EurekaLott, Evercat, Everyking, Evil Monkey,
Evildeathmath, Fan-1967, Father McKenzie, Fennec, Fernando S. Aldado, Fingerz, Finlay McWalter, FleetCommand, Flipjargendy, Floddinn, Fluri, Fomalhaut71, Fragglet, Frankenpuppy,
Fredgoat, Fredrik, Fredtheflyingfrog, Freejason, Frenchman113, Fubar Obfusco, Furby100, GCarty, GL, Gail, Galoubet, Ged UK, Gilliam, Gjohnson9894, Glane23, Gogo Dodo, GraemeL,
Graham france, Groink, Gwernol, Hal Canary, HamburgerRadio, HarisM, Helikophis, Herbythyme, Heron, Hezzy, Hfguide, HistoryStudent113, Hm2k, Howdoesthiswo, Huangdi, Hydrogen
Iodide, IRP, Icairns, Idemnow, Ikester, Ikiroid, Iridescent, J.delanoy, JForget, Jacob.jose, Janipewter, Jaranda, Jarry1250, Javanx3d, Jclemens, JeffreyAtW, Jesse Viviano, Jitendrasinghjat,
JoeSmack, JohnCD, Johnuniq, JonHarder, Jopsen, JosephBarillari, Jtg, Jusdafax, KBi, KP000, Kajk, Kei Jo, Kejoxen, KelleyCook, KellyCoinGuy, Kevin B12, Kharitonov, Khym Chanur,
Kiminatheguardian, Kingpomba, Kinston eagle, Kozuch, Kraftlos, Kris Schnee, Krystyn Dominik, LC, Largoplazo, LarsHolmberg, Larsroe, Laudaka, Ld100, LeaveSleaves, LedgendGamer,
Legotech, Liftarn, Lightmouse, Longhair, Lord of the Pit, LorenzoB, Luminique, Lzur, MC10, MKS, Mac Lover, Mack2, MacsBug, MaliNorway, ManaUser, Mandel, Maniago, Martarius, Matt
Crypto, Mav, Mboverload, Medovina, Mere Mortal, Mgaskins1207, Mhardcastle, Michael Hardy, Mickeyreiss, Microcell, Mifter, Miketsa, Mikko Paananen, Minghong, Mintleaf, Mirko051,
Mitko, Mitravaruna, Mmernex, Monaarora84, Monkeyman, Mortense, Moshe1962, Mr.Z-man, MrOllie, Muhammadsb1, Muro de Aguas, Mzub, Nagle, NativeForeigner, Nburden, NcSchu,
NeilN, Nnkx00, Nnp, Noctibus, Noe, Noozgroop, Nosferatus2007, Nubiatech, Ohnoitsjamie, Oldiowl, One-dimensional Tangent, Optigan13, Otisjimmy1, Ottava Rima, Overkill82, OwenX,
PJTraill, Palica, Panda Madrid, Paramecium13, Pascal.Tesson, Patrick Bernier, Paul, Phantomsteve, Philip Trueman, Philmikeyz, Phoenix1177, PickingGold12, PierreAbbat, Pigsonthewing,
Pilotguy, Pnm, PoM187, Pol098, Porque123, Postdlf, Powerpugg, PrivaSeeCrusade, PrivaSeeCrusader, Pstanton, Ptomes, Publicly Visible, Quantumobserver, Quarl, Quebec99, Quintote, R'n'B,
Radagast83, Radiojon, RainR, RainbowOfLight, RandomXYZb, RattusMaximus, Rdsmith4, RealityCheck, Reisio, Retiono Virginian, Rfc1394, Rich Farmbrough, Rich257, Richard Arthur
Norton (1958- ), Riddley, RiseAgainst01, Rjwilmsi, Robert Adrian Dizon, RodC, RogerK, Romal, Roman candles, Romangralewicz, Rossen4, Rossumcapek, Rrburke, S0aasdf2sf, Sacada2,
Salamurai, Salasks, SalemY, Salvio giuliano, Sam Barsoom, Samker, Sandahl, Sc147, Screen317, Sephiroth storm, Shirulashem, Shoaler, Siddhant, Sietse Snel, Simonkramer, Simorjay, Sky
Attacker, Slakr, Snowolf, Sogle, Sperling, Spikey, Stefan, Stephen Turner, Stieg, Stifle, Sugarboogy phalanx, SusanLesch, Svetovid, T-1000, T38291, Tassedethe, Teacherdude56, Techienow,
Technogreek43, Techpraveen, Tfraserteacher, Tgv8925, The Anome, The Thing That Should Not Be, TheBigA, TheDoober, Thumperward, Thunder238, Tiangua1830, Tiggerjay, Tmalcomv,
Tohd8BohaithuGh1, Tom NM, Tony1, TonyW, Trafton, Trovatore, Tunheim, TurningWork, Tyuioop, URLwatcher02, Uberian22, Ugnius, UncleDouggie, User27091, Uucp, VBGFscJUn3,
Vague Rant, Vary, Vernalex, Versus22, Vikasahil, Visualize, Voidxor, WalterGR, Warren, Wavelength, Wes Pacek, Wewillmeetagain, Whisky drinker, Whispering, Wikid77, Wikipandaeng,
Wikipe-tan, Wikkid, Wimt, Wouldshed, Wwwwolf, Wysprgr2005, Xaldafax, XandroZ, Xelgen, Xgravity23, Xixtas, Xme, Xp54321, Xxovercastxx, Yamamoto Ichiro, Yudiweb, Zhen-Xjell,
Zifert, Михајло Анђелковић, Սահակ, आशीष भटनागर, 693 anonymous edits
Vulnerability Source: http://en.wikipedia.org/w/index.php?oldid=370090164 Contributors: 16@r, Adequate, Alphanis, Andycjp, Anthony, Arthur Fonzarelli, Backburner001, Batmanx, Beland,
Beware the Unknown, CALR, CardinalDan, Christian75, Correogsk, DanielPharos, Dekisugi, Derek farn, Epbr123, Heds 1, Imark 02, Julia Rossi, KimvdLinde, Kwarner, Lova Falk, Mac, Mani1,
Mgiganteus1, Mriya, Nikai, Oli Filth, Penbat, PigFlu Oink, Pigman, Quandaryus, Quintote, Ronbo76, Ronz, Scott Sanchez, SkyLined, Spartan-James, TheRedPenOfDoom, WaysToEscape, 61
anonymous edits
Computer virus Source: http://en.wikipedia.org/w/index.php?oldid=379030289 Contributors: 0612, 129.128.164.xxx, 12dstring, 1nt2, 203.109.250.xxx, 203.37.81.xxx, 2han3, 5 albert square,
66.185.84.xxx, 66.25.129.xxx, ACM2, AFOH, AVazquezR, Abb615, Aborlan, Acalamari, Acebrock, Acroterion, Adam Bishop, Adam McMaster, Adam1213, Addshore, AdjustShift, Adjusting,
Adrian J. Hunter, Agnistus, Ahoerstemeier, Ahunt, Air-con noble, Aishwarya .s, Aitias, Ajraddatz, Akuyume, Alai, Alan Isherwood, Alan.ca, Alansohn, Alegjos, AlexWangombe, Alexd18,
Alexjohnc3, Algont, AlistairMcMillan, Alu042, Alvis, Alxndrpaz, Ambulnick, Amcfreely, Amren, Anaraug, Andrewlp1991, Andrewpmk, Andyjsmith, Andypandy.UK, Anindianblogger, Anna
Lincoln, Anonymous editor, Anonymouseuser, Antandrus, Antimatt, Antonio Lopez, Antonrojo, Artaxiad, Arvind007, Aryeh Grosskopf, Arzanish, Asenine, Ashton1983, Askild, Astral9,
Astronaut, Audunv, Aurush kazemini, Authr, Autoerrant, Avb, Avicennasis, Ayman, Ayrin83, AzaToth, Azkhiri, BOARshevik, Bachrach44, Baloo Ursidae, BauerJ24, Bawx, Bcohea, Beland,
Bemsor, Ben10Joshua, Benbread, Benji2210, Bento00, Bernhard Bauer, Bfigura's puppy, Bfmv-rulez, Big Bird, Bilbobee, Birdfluboy, Birdhombre, Bitethesilverbullet, BitterMan, Bjbutton,
Blanchardb, Bleavitt, BloodDoll, Bluegila, Bmader, Bobo The Ninja, Bobo192, Boing! said Zebedee, Bolmedias, Bones000sw, Bongwarrior, Bookinvestor, Boomshadow, Boothy443,
BorgQueen, Born2cycle, Born2killx, Bornhj, Brainbox 112, BreannaFirth, Brendansa, Brewhaha@edmc.net, Brion VIBBER, Bro0010, Bruce89, Brucedes, Brucevdk, Bryan Derksen,
Bsadowski1, Bubba hotep, Bubba73, Bubble94, BuickCenturyDriver, Bullzeye, CMC, CWii, Calmer Waters, CalumH93, Calvin 1998, CambridgeBayWeather, Cameron Scott, Can't sleep, clown
will eat me, CanadianLinuxUser, Cancaseiro, CanisRufus, Captain Disdain, CaptainVindaloo, Caramelldansener, Carnildo, Carre, Casper2k3, Cassandra 73, Cbk1994, Cenarium, Ceyockey,
Chairman S., Chamal N, Chimpso, Chinakow, ChipChamp, Chmod007, Chris G, Chris55, ChrisHodgesUK, Chrishomingtang, Chrisjj, Chrislk02, Chriswiki, Chroniccommand, Chuunen Baka,
CivilCasualty, Ck lostsword, Ckatz, Clawson, CliffC, Closedmouth, Clsdennis2007, Codyjames7, Coffee Atoms, Collins.mc, CommKing, Commander, Comphelper12, Compman12,
Computafreak, Conversion script, Coopercmu, Copana2002, Courcelles, Crakkpot, Cratbro, Crazyeddie, Crazypete101, Crobichaud, Csdorman, Cuchullain, Cureden, Cyde, Cynical, DHN, DVD
R W, Da31989, Dainis, Daishokaioshin, Damian Yerrick, Dan100, Daniel Olsen, DanielPharos, Danielmau, Dantheman531, DarkGhost89, DarkMasterBob, Darksasami, Darkwind, Darr dude,
DarthSidious, Dartharias, Darthpickle, DataGigolo, David Gerard, David Stapleton, David sancho, Davis W, Dblaisdell, Dcoetzee, Dddddgggg, DeadEyeArrow, Deagle AP, Decltype, Deli nk,
DerHexer, DevastatorIIC, Dfrg.msc, Dialectric, Digitalme, Dino911, DirkvdM, Discospinster, Dlyons493, Dmerrill, Dmmaus, Doczilla, Doradus, Dougmarlowe, DougsTech, Doyley, Dragon
Dan, Dragon Dave, Drahgo, Draknfyre, DreamTheEndless, Drvikaschahal, Dspradau, Dureo, Dysepsion, Dysprosia, Dzubint, ERcheck, ESkog, Eagleal, EatMyShortz, Ecthelion83, Edetic,
EdgeOfEpsilon, Editor2020, Edward, Eeera, Egil, Egmontaz, EhUpMother, El C, Elb2000, EliasAlucard, Elison2007, Eliz81, Elockid, Emote, Eneville, Epbr123, Eptin, Ericwest, Erik9, Escape
Orbit, Euryalus, Evadb, Evercat, Evil Monkey, Evils Dark, Excirial, FF2010, Falcofire, Falcon8765, FatM1ke, Favonian, Fdp, Fedayee, Fennec, Ferkelparade, Fireworks, Firsfron,
FleetCommand, Fletcher707, Flixmhj321, Footwarrior, Fordan, Frap, Freakofnurture, Frecklefoot, Fredil Yupigo, Fredrik, Frenchman113, FreplySpang, Frevidar, Froth, Frymaster, Fubar
Obfusco, Furrykef, GGenov, GRAHAMUK, Gail, Gaius Cornelius, Galoubet, Gdavidp, Ged UK, Gerardsylvester, Ghymes21, Giftiger wunsch, Giftlite, Gilliam, Gimmetrow, Ginnsu,
Gkaukonen, Glane23, Glen, Glennfcowan, Glennforever, GloomyJD, Gofeel, Gogo Dodo, Goldom, Golftheman, Goofyheadedpunk, Gorank4, GraemeL, Graham87, Grandmasterka, Green
meklar, Greg Lindahl, GregAsche, GregLindahl, Grm wnr, Gromlakh, GrooveDog, Gtg204y, Guanaco, Gudeldar, Guitaralex, Gunnar Kreitz, Gurch, Gurchzilla, Hadal, Haemo, Hairy Dude,
HaiyaTheWin, HamburgerRadio, Hammer1980, Hanacy, Harksaw, Haseo9999, Hashar, Hayson1991, Hdt83, Hebrides, Herd of Swine, Heron, Hervegirod, HiLo48, Hitherebrian, Hobartimus,
Hongooi, Hqb, Hu, Huangdi, Huctwitha, Humpet, Hunan131, Hydrogen Iodide, ILovePlankton, Ibigenwald lcaven, Imaninjapirate, Imapoo, Immunize, Imroy, Imtoocool9999, Insineratehymn,
Inspigeon, Intelligentsium, Into The Fray, Inzy, Irixman, IronGargoyle, Isaac Rabinovitch, ItsProgrammable, J.delanoy, J800eb, JFKenn, JFreeman, Jackaranga, Jacky15, Jake Nelson, Jakebob2,
Jakew, Jamesday, Jamesontai, Jarry1250, Java13690, Javeed Safai, Jaxl, Jay.rulzster, Jayron32, Jclemens, Jcvamp, Jcw69, Jdforrester, Jdmurray, Jeff G., JiFish, Jiy, Jjk, Jkonline, Jleske, Joanjoc,
JoanneB, JoeSmack, John, John Fader, John Vandenberg, JohnCD, Johnuniq, Jojit fb, Jok2000, Jolmos, JonHarder, Jonathan Hall, Jono20201, Jorcoga, JoshuaArgent, Jotag14, Joyous!, Jpgordon,
Jrg7891, Jtg, Jtkiefer, Jujitsuthirddan, Jusdafax, Justinpwnz11233, Jy0Gc3, K.Nevelsteen, KJS77, KPH2293, Kablakang, Kamote321, Karenjc, Kate, Kbdank71, KeKe, Kehrbykid, Kejoxen,
Ketiltrout, Kevin, Kevin B12, Khfan93, Khukri, Kigali1, Kingpin13, Kizor, Kkrouni, Kmoe333, Kmoultry, KnowledgeOfSelf, Knucmo2, Knutux, Kokiri, Kostiq, Kostisl, Kpa4941, Krich,
Kukini, Kurenai96, Kuru, KyraVixen, LC, LFaraone, La Corona, La goutte de pluie, LarsBK, Last Avenue, Last5, LeaveSleaves, Lee Daniel Crocker, Leethax, Legitimate Editor, Leithp,
Lelapindore, Leliro19, Lemontea, Leo-Roy, LeonardoRob0t, Lerdthenerd, Lethe, Leujohn, Levine2112, Liftarn, Likepeas, LittleOldMe, LizardJr8, Lloydpick, Lolbombs, Lolroflomg, Lord
Vaedeon, Lowellian, Lradrama, LuYiSi, Luigifan, Luna Santin, Lusanaherandraton, M1ss1ontomars2k4, MER-C, MONGO, MRFraga, Macintosh User, Madhero88, Magister Mathematicae,
MakeRocketGoNow, Malcolm, Malcolm Farmer, Malo, Man It's So Loud In Here, Manhatten Project 2000, Marcg106, Marianocecowski, Mario23, Marius, Mark Ryan, Martarius, MartinDK,
Martinp23, Master2841, Materialscientist, Matt Crypto, MattieTK, Mattinnson, Maurice45, Mav, Maverick1071, Mboverload, Me, Myself, and I, Meekywiki, Meno25, Mentifisto,
Mephistophelian, Merovingian, Mhammad-alkhalaf, Michael Hardy, Michaelas10, Midgrid, Mike Rosoft, Mike4ty4, Mike6271, MikeHobday, MikeVella, Miketoloo, Miketsa, Mild Bill Hiccup,
Mind23, MindstormsKid, Minesweeper, Minghong, Miranda, Miss Madeline, MisterSheik, Mitch Ames, Mminocha, Modemac, Mojibz, Moniquehls, Monkeyman, Mormegil, Morven, Morwen,
Mr. Wheely Guy, MrBell, MrBosnia, MrStalker, Mrbillgates, Muad, Mufc ftw, Mugunth Kumar, Mullibok, Munahaf, Murraypaul, Myanw, Mzub, N419BH, NHRHS2010, Najoj, Nakon, Name?
I have no name., Natalie Erin, Nataliesiobhan, NawlinWiki, Nazizombies!, NerdyScienceDude, Nesky, Never give in, NewEnglandYankee, Nfearnley, Nick, Nikai, Ninetyone, Nivix, Nixdorf,
Nmacu, Noone, NoticeBored, Ntsimp, NumbNull, Nuno Brito, Obradovic Goran, Octernion, Oducado, Ohnoitsjamie, Oldmanbiker, Oliver202, Olof nord, Omegatron, Omicronpersei8,
Omkarcp3, Onebravemonkey, Onorem, Optakeover, Orbst, Oscarthecat, Otolemur crassicaudatus, Oxymoron83, Pablomartinez, Pacific ocean stiller ocean, PaePae, Pakaran, Papelbon467,
Paranoid, Paranomia, Patrickdavidson, Paul1337, Pearle, Pedro, Peter Dontchev, Peter Winnberg, PeterSymonds, Peterl, Petri Krohn, Phearson, PhilHibbs, Philip Trueman, Philippe,
Phillyfan1111, Phocks, Phydend, PiMaster3, Pilotguy, Pinethicket, Pnm, Pol098, Poosebag, Porqin, Posix memalign, Praesidium, Prashanthns, Premeditated Chaos, Prophaniti, PseudoSudo,
Psynwavez, Public Menace, Puchiko, Purgatory Fubar, Pursin1, Qa Plar, Qevlarr, Quintote, Qwerty1234, Qxz, R'n'B, RAF Regiment, RB972, RG2, RHaworth, RTG, Racconish, Radiokillplay,
Raffaele Megabyte, Rahul s55, RainR, Rajnish357, Raven4x4x, Ravi.shankar.kgr, Reconsider the static, RedViking, RedWolf, ReformatMe, Reinoutr, Reisio, Repy, Rettetast, Rex Nebular,
RexNL, Rfcrossp, Rgoodermote, Rhinestone42, Rhlitonjua, Rhobite, Riana, Ricecake42, Rich Farmbrough, RichAromas, Richardsaugust, RickK, Riffic, Rigurat, Rjwilmsi, Rkitko, Rlove, Rob
Hooft, Robert McClenon, RobertG, Rockfang, Romal, Romanm, Ronhjones, Rossami, Rossumcapek, RoyBoy, Rpresser, Rror, Rubicon, Rudy16, RunOrDie, Runefrost, Ryanjunk, Rynhom44,
S0aasdf2sf, SEWilco, SF007, SS2005, Sacada2, Sagaciousuk, Sam Hocevar, Sam Korn, Samuel, Sandahl, Sander123, Sango123, Sanjivdinakar, Savetheozone, Savidan, Sbharris, Scepia, Sceptre,
Sci-Fi Dude, Scoutersig, Secretlondon, Seishirou Sakurazuka, Senator Palpatine, Sepetro, Sephiroth storm, Seriousch, Sesu Prime, Sfivaz, Shanel, Shanes, Shankargiri, Shas1194, Shindo9Hikaru,
Shirik, Sietse Snel, Sigma 7, Siliconov, Sillydragon, SimonP, SingDeep, Sir Nicholas de Mimsy-Porpington, Siroxo, Sjakkalle, Skarebo, Skate4life18, Skier Dude, SkyLined, SkyWalker, Slaad,
Slakr, Slipknotmetal, Slowmover, Smaffy, Smalljim, Smkatz, Sniper120, Snowolf, So Hungry, Soadfan112, Solitude, Sonjaaa, SpLoT, Specter01010, Spellmaster, Spidern, SpigotMap, Spliffy,
Spondoolicks, SpuriousQ, Squids and Chips, Sree v, Srikeit, Srpnor, Sspecter, Stealthound, Steel, Stefan, SteinbDJ, Stephenb, Stephenchou0722, Stereotek, SteveSims, StickRail1, Storkk, Storm
Rider, Suffusion of Yellow, Superdude876, Superm401, T-1000, THEN WHO WAS PHONE?, TJDay, TKD, TableManners, TaintedMustard, TakuyaMurata, Tangotango, Taral, Tarif Ezaz,
Tarnas, Tassedethe, Taw, Taxisfolder, Tbjablin, Techpraveen, TedE, Tengfred, TerraFrost, TestPilot, Tfine80, ThaddeusB, Thatguyflint, The Anome, The Cunctator, The Evil Spartan, The Thing
That Should Not Be, The Utahraptor, The hippy nerd, The undertow, TheNameWithNoMan, TheRedPenOfDoom, Theroachman, Thevaliant, Thingg, Thomas Ash, Tide rolls, Tim Chambers,
TimVickers, Timothy Jacobs, Timpeiris, Tivedshambo, TjOeNeR, Tnxman307, Tobias Bergemann, Tocsin, Tom harrison, Tom.k, TomasBat, Tommy2010, Tpk5010, Trafton, Travis Evans,
TreasuryTag, Tricky Wiki44, Triwbe, TrollVandal, Tumland, Twas Now, Tyomitch, Tyuioop, Ugnius, Ukdragon37, Ultra-Loser, Ultraexactzz, Umapathy, Uncle G, Ur nans, Utcursch, Ute in DC,
VQuakr, Valueyou, Veinor, Versus22, Vice regent, Visor, VladimirKorablin, Voxpuppet, Vtt395, WAvegetarian, Waggers, WakaWakaWoo20, WalterGR, Warren, Washburnmav, Wavelength,
Wayfarer, Wayward, Web-Crawling Stickler, Weird0, WelshMatt, WereSpielChequers, Weregerbil, What!?Why?Who?, Whatcanbrowndo, Wheres my username, WhisperToMe, Widefox,
WikHead, Wiki alf, Wikieditor06, Wikipelli, Willking1979, Wimt, Winhunter, Winston365, Wizardman, Wj32, Wkdewey, Wknight94, Wnzrf, Wraithdart, Wrs1864, Wwwwolf, Wysprgr2005,
Xaraikex, Xbspiro, Xeno, Xeroxli, Xgamer4, Xp54321, Yabba67, Yidisheryid, Yuma en, Zero1328, ZeroOne, Zerotonin, Zoicon5, Zoz, Zrulli, Zyborg, Zzuuzz, రవిచంద్ర, 2681 anonymous edits
Computer worm Source: http://en.wikipedia.org/w/index.php?oldid=378223314 Contributors: A More Perfect Onion, A.qarta, A930913, AFA, Addshore, Aeons, Ahoerstemeier, Aim Here,
Akadruid, AlanNShapiro, Alansohn, Alekjds, Alerante, Amanda2423, Amrishdubey2005, Anabus, Andres, Andrewpmk, Asterion, Augrunt, Avastik, Barefootguru, Barkeep, Beezhive, Bexz2000,
BigDunc, Bitethesilverbullet, Bobrayner, Bongwarrior, Borgx, Brion VIBBER, Burntsauce, CactusWriter, Camw, Can't sleep, clown will eat me, Capricorn42, CardinalDan, Celarnor, Cellorelio,
CesarB, Clharker, Comphelper12, Conversion script, Coopercmu, Crackitcert, Craigsjones, CrazyChemGuy, Crimson Instigator, Crobichaud, Cyp, DJ1AM, DVD R W, Daev, Dan Guan, Daniel
Mahu, DanielPharos, DarkfireInferno, David Gerard, David.Mestel, DavidSky, Dcsohl, DeadEyeArrow, Dekisugi, Demizh, Deor, DerHexer, Dindon, Discospinster, Dj ansi, Dlohcierekim,
Dspradau, DylanBigbear, Dysprosia, EJF, Eeekster, Eequor, Ehheh, Enviroboy, Epbr123, Evercat, Ewlyahoocom, Falcon8765, Fanf, Fennec, Fenwayguy, Fieldday-sunday, Firetrap9254, Fubar
Obfusco, Furrykef, Fuzheado, GCarty, GHe, Gamerzworld, Gamma, Ghatziki, Gilliam, Gogo Dodo, Grunt, Gscshoyru, Gunnar Hendrich, Guy M, Hajatvrc, HamburgerRadio, Hashar,
Herbythyme, Hersfold, Ignatzmice, Imfo, Intelligentsium, Iridescent, Isnow, Itsme2000, Jaimie Henry, James McNally, Jason.grossman, Jclemens, Jdforrester, Jebba, Jeff G., Jenny Wong, JiFish,
Jkelly, JoeSmack, Jonathanriley, Jondel, Joseph Solis in Australia, Jtg, Just Another Dan, Jwkpiano1, KelleyCook, Kerowren, King of Hearts, KneeLess, Koyaanis Qatsi, Kralizec!, Kungfuadam,
LC, LeaveSleaves, Leuko, Lights, Luckyherb, Luigifan, MDfoo, MER-C, MITalum, Makeemlighter, Malcolm Farmer, Martial Law, Matt Crypto, Matt.whitby, Mav, Maximaximax,
Mcmvanbree, Meatabex, Mentifisto, Milo03, Mindmatrix, Minimosher, Miquonranger03, Misza13, Mogh, Monkeyman, MrOllie, Mszegedy, Mygerardromance, Mzub, NYKevin, Naddy,
Nattippy99, Neurolysis, Newnoise, Nguyen Thanh Quang, Nixdorf, Noctibus, Noone, Nsaa, Object01, Oducado, Ohnoitsjamie, Orphan Wiki, PL290, Palica, Patrick, Paul, Pauli133, PhilHibbs,
Philip Trueman, Piano non troppo, PierreAbbat, Pnm, Poeloq, Pomegranite, Powellatlaw, Pstevens, PubliusFL, RJHall, Rahul s55, RainR, Rami R, Reach Out to the Truth, RexNL, Rhobite,
Richard001, Rossd2oo5, Rs232, S3000, ST47, Sam Korn, Sat84, SaveThePoint, Sensiblekid, Sephiroth storm, Seth Ilys, Shuini, Sietse Snel, Smaug123, Snori, Souch3, SqueakBox, Staeiou,
StaticGull, Stephen Gilbert, SteveSims, Subzerobubbles, Syndicate, Tempodivalse, The Anome, The Thing That Should Not Be, Thingg, TomTheHand, Tpk5010, Trafton, Traveler100,
Uberian22, Ulm, Uncle Dick, Useingwere, UserGoogol, VIKIPEDIA IS AN ANUS!, Vernalex, Very cheap, WAS 4.250, WPANI, Waerloeg, WalterGR, Weregerbil, WhisperToMe, Wik, Wiki
alf, Wilinckx, Wimt, Wirbelwind, Wiwiwiwiwiwiwiwiwiwi, Wnzrf, Woohookitty, Ww, Wwwwolf, XXXSuperSnakeXXX, YUL89YYZ, Yidisheryid, Yixin1996, Yyaflkaj;fasd;kdfjk, Zifert,
Zman2000, Zoicon5, ZooFari, 517 anonymous edits
Exploit (computer security) Source: http://en.wikipedia.org/w/index.php?oldid=371807440 Contributors: Abaddon314159, Adequate, Aldie, Alerante, Altenmann, Apokrif, Arunkoshy,
AxelBoldt, Bluefoxicy, Bobo192, Bomac, Boyrussia, Conversion script, Crakkpot, DanielPharos, Dpakoha, Dreaded Walrus, Dreftymac, Ebraminio, Ehheh, El C, Enigmasoldier, Erik9,
ExploITSolutions, Fabio-cots, Fathisules, Galoubet, Georgia guy, Ghostwo, Ground Zero, Guriaz, Guriue, HamburgerRadio, Irishguy, Irsdl, Jamespolco, Jerome Charles Potts, JonHarder, KDK,
Karada, La goutte de pluie, Latka, LebanonChild, Matteh, Mav, Michael Hardy, Mindfuq, Mindmatrix, Nakon, Nikai, Nuno Tavares, Omicronpersei8, PC Master, PabloStraub, Papergrl, Pengo,
Pgk, Pie4all88, Pilotguy, RainR, Raistolo, Ramsey, Rl, Ronz, SWAdair, SimonP, Sionus, Skittleys, SkyLined, SkyWalker, Sloverlord, Smaffy, SpigotMap, Stephenb, Stevertigo, Swwiki, Syp,
TakuyaMurata, Tompsci, Ugnius, Vargc0, Waterloox, Weltersmith, Wolfrock, Yudiweb, Zorro CX, 131 anonymous edits
Computer insecurity Source: http://en.wikipedia.org/w/index.php?oldid=378691778 Contributors: (, Abach, Aclyon, Adib.roumani, AlMac, Alarob, Alejos, Althai, ArnoldReinhold, Arvindn,
AssetBurned, Awg1010, Beland, Binarygal, Bluefoxicy, Bobo192, Brockert, Cacycle, CesarB, Chadloder, Clicketyclack, DanielPharos, Derek Ross, Dfense, DoubleBlue, Dratman, Druiloor,
Edward, Eequor, Efitu, Erik9, Eurleif, Evice, FlyingToaster, Frap, Fredrik, Freqsh0, Furby100, Furlong, Galoubet, Gleeda, GraemeL, Guitarmankev1, Gzuckier, H2g2bob, Haakon,
HamburgerRadio, Harryboyles, Jbeacontec, JeremyR, Jerk.monkey, JoeSmack, JonHarder, Josh Parris, Judzillah, Ka-Ping Yee, KelleyCook, Kubanczyk, Laurusnobilis, Lloydpick, Manionc,
Mark UK, Mark83, Matt Crypto, Maurice Carbonaro, Mitch Ames, Moa3333, MrOllie, Nikai, Nilmerg, Oarchimondeo, Oli Filth, Parijata, Pcap, Praesidium, Ptomes, RJBurkhart, RedWolf,
Rfc1394, Rich257, Robertb-dc, Roger.Light, SS2005, Selket, Sephiroth storm, Shoy, Silsor, SimonP, Sjwheel, Smaug123, Snoyes, Speedeep, Sperling, Stephen Gilbert, Stevertigo, Systemf,
Tailsx1234, Tanath, Tassedethe, The Anome, Tiangua1830, Tim1988, TonyW, Touisiau, TreasuryTag, Tyw7, Unixguy, Victor Fors, W Hukriede, WalterGR, Wombdpsw, Woohookitty, Ww,
ZayZayEM, 154 anonymous edits
Communications security Source: http://en.wikipedia.org/w/index.php?oldid=378694709 Contributors: Alex19568, ArnoldReinhold, Beland, Deville, DocWatson42, Eastlaw, Eclecticology,
Eve Hall, FTAAPJE, Harumphy, Jim.henderson, JonHarder, Kevin Rector, Matt Crypto, MichaelBillington, Mzajac, Nlu, Pmsyyz, Pnoble805, Rearden9, Secureoffice, Spinoff, TonyW,
Woohookitty, 16 anonymous edits
Network security Source: http://en.wikipedia.org/w/index.php?oldid=379050290 Contributors: Aarghdvaark, AdjustShift, AlephGamma, AndonicO, Aqw3R, Arcitsol, AuburnPilot,
Batman2472, Bobo192, Boing! said Zebedee, BorgQueen, CaliMan88, CanadianLinuxUser, CardinalDan, Cbrown1023, CliffC, Coffeerules9999, Cst17, Cyberdiablo, DH85868993, DavidBailey,
Dawnseeker2000, Discospinster, Dmsynx, Dmuellenberg, Donalcampbell, Drmies, Dscarth, Dthvt, E smith2000, E0steven, Ed Brey, Eeekster, Ehheh, Ellenaz, Epabhith, Epbr123, Ewlyahoocom,
Falcon8765, Farazv, FrankTobia, FreshBreak, Ghalleen, GidsR, Ginsengbomb, Gorgalore, Hariva, HexaChord, Impakti, Iridescent, Irishguy, JDavis680, Jcj04864, Jonathan Hall, JonnyJinx,
Jswd, Kbdank71, Krmarshall, Kungming2, Kvng, Leonard^Bloom, Liamoshan, M2petite, ManOnPipes, Mdeshon, Michael A. White, Mild Bill Hiccup, Mitch Ames, MrOllie, Mscwriter,
Nakamura36, Nick C, Nivix, Nsaa, Obscurans, Onevalefan, Pharaoh of the Wizards, Pheebalicious, PhilKnight, Philpraxis, Piano non troppo, Prolog, Quest for Truth, R'n'B, RJaguar3, Raanoo,
Radiosband, Reach Out to the Truth, Res2216firestar, Robofish, Rogger.Ferguson, Rohasnagpal, Ronhjones, Rrburke, Rwhalb, SF007, Seldridge99, Seraphimblade, Sgarchik, Shandon,
Shoeofdeath, Simsianity, Sindbad72, Smyle.dips, Somno, Stationcall, Stephenchou0722, Sundar.ciet, Sweeper tamonten, T38291, Take your time, TastyPoutine, TheFearow, Tide rolls,
Tnspartan, Tqbf, Tremaster, TutterMouse, Ulric1313, Weregerbil, Wiki-ay, Will Beback, Willking1979, Woohookitty, Wrp103, ZimZalaBim, Zzuuzz, 269 anonymous edits
Book:Internet security Source: http://en.wikipedia.org/w/index.php?oldid=364997292 Contributors: 88wolfmaster, Josh Parris, Od Mishehu AWB, Reach Out to the Truth, RichardF,
RockMFR, Wiki-uk, X!
Firewall (computing) Source: http://en.wikipedia.org/w/index.php?oldid=378678906 Contributors: !Darkfire!6'28'14, 9Nak, =Josh.Harris, Abaddon314159, Acrosser, Aejr120, Ahoerstemeier,
Ahunt, Aitias, Alansohn, Ale jrb, AlephGamma, Alexius08, AlistairMcMillan, Alphachimp, Altzinn, Anabus, Anclation, Andem, Andrei Stroe, Android Mouse, Angr, Anhamirak, Anna Lincoln,
Ans-mo, Antandrus, Anthony Appleyard, Apy886, Arakunem, Ash, Asqueella, Aviv007, Avono, Backpackadam, Badgernet, BananaFiend, Bangowiki, Barticus88, Bazsi, Beetstra, Beezhive,
Bencejoful, Berford, Bevo, Biot, Bkil, Black Falcon, Blanchardb, Bobo192, Boomshadow, Booster4324, Borgx, Brianga, Bryon575, Bswilson, Bucketsofg, C'est moi, C.Fred, Calabraxthis, Can't
sleep, clown will eat me, CanadianLinuxUser, Capi, Capricorn42, Captain-tucker, Celarnor, Cellspark, CharlotteWebb, Ched Davis, Chrisdab, Chriswaterguy, Chuck369, Chun-hian, Chzz, Cit
helper, CliffC, Closedmouth, ConradPino, CoolingGibbon, Copsewood, Corvus cornix, Cougar w, CraigB, Creed1928, Cryptosmith, DESiegel, DJ Clayworth, DSatz, DStoykov, Da Vynci,
Danhm, Danshelb, Danski14, David.bar, DavidChipman, Davidoff, Dbrooksgta, Dcampbell30, Dcoetzee, Dean14, Debresser, December21st2012Freak, Deelkar, Deewiant, DemonThing,
DerHexer, DevastatorIIC, Diberri, Discospinster, Djdancy, Dman727, Dmccreary, Dols, DonDiego, DoogieConverted, Dougher, Dse, Dzordzm, E Wing, ENeville, EddieNiedzwiecki, Eequor,
Egil, El C, ElKevbo, Elagatis, Elcasc, Eldraco, EliasAlucard, Elieb001, Emailtonaved, Enric Naval, Epbr123, Eponymosity, Equazcion, Equendil, Escape Orbit, Everyking, Expertour, Fabioj,
Fahadsadah, Femto, Feureau, Fightingirishfan, FleetCommand, Flewis, Frap, FreplySpang, FunkyBike1, Fynali, G7yunghi, GDallimore, Gaiterin, Gascreed, Giftlite, Gilliam, Gogo Dodo,
Gonzonoir, Goodyhusband, Graham87, Grand Edgemaster, Grapht, Greg Grahame, Gstroot, Guitardemon666, Gurch, Haakon, Hadal, Halmstad, Hamzanaqvi, Hans Persson, HarisM, Harland1,
Harryboyles, Hax0rw4ng, Hazawazawaza, Henry W. Schmitt, Hetar, Hokiehead, Hoods11, Hps@hps, Hu12, Hugger and kisser, ILRainyday, Ilpostinouno, Info lover, Interiot, Intgr, Isilanes,
J.delanoy, JDavis680, JForget, JSpung, Jackrockstar, Jaho, Jalara, Jaraics, JasonTWL, Jchandlerhall, Jclemens, Jebba, Jeff G., Jennavecia, Jhi247, Jigesh, Jlavepoze, Jmprtice, Jobeard, JonHarder,
JonnyJinx, Josemi, Joy, Joyous!, Jpbowen, Jpgordon, Jramsey, Jusdafax, Just James, Justin20, KCinDC, Kamathvasudev, Kandarp.pande.kandy, Karnesky, Kbdank71, Kenyon, Kgentryjr, Khym
Chanur, Killiondude, Kjwu, KnowledgeOfSelf, Kralizec!, Kubanczyk, Kyleflaherty, L'Aquatique, L33th4x0rguy, La Pianista, Lakshmin, LeaveSleaves, LegitimateAndEvenCompelling, LeinaD
natipaC, LeoNomis, LeonTang, Leszek Jańczuk, Lets Enjoy Life, Lincolnite, Linkoman, Lolsalad, Loren.wilton, Lubos, Lucy1981, Lukevenegas, MER-C, MMuzammils, Malo, Manop,
Marcuswittig, Marek69, Mashby, Materialscientist, Mattgibson, Matthäus Wander, Mattloaf1, Maxamegalon2000, Mcicogni, Meaghan, Meandtheshell, Megaboz, Mernen, Michael Hardy, Milan
Kerslager, Mindmatrix, Miremare, Mitaphane, Monkeyman, Mortein, MrBenCai, MrOllie, Muheer, Mwalsh34, Mwanner, Mygerardromance, Nachico, NawlinWiki, Nealmcb, NeonMerlin,
Netalarm, Neurolysis, Newone, NightFalcon90909, Njmanson, Nnp, Noctibus, Nposs, Ntolkin, Nuno Tavares, Nuttycoconut, Od Mishehu, Ohnoitsjamie, OlEnglish, OlavN, Oli Filth,
OpenToppedBus, Otisjimmy1, OwenX, Oxymoron83, Pabouk, Paul, Pb30, Peter.C, Peyre, Phatom87, Philip Trueman, Phirenzic, Phoenix314, Piano non troppo, Piet Delport, Pmattos, Pnm,
Possum, Prasan21, Prashanthns, PrestonH, Purpleslog, Quentin X, Raanoo, Rafiwiki, Randilyn, Random name, Rbmcnutt, Rchandra, Rebel, Red Thrush, Regancy42, Richard, Rick Sidwell,
Ricky, Rjwilmsi, Rl, RoMo37, Robbie Cook, Roseurey, RoyBoy, Rror, Rsrikanth05, Rtouret, Rumping, Rwxrwxrwx, SGGH, Sanfranman59, Sceptre, Seb az86556, Seba5618, SecPHD, Seddon,
Sensiblekid, Sephiroth storm, Sferrier, Shawniverson, Sheridp, Shiro jdn, ShyShocker, Simeon H, Skrewz, SkyWalker, Smalljim, Snigbrook, SoCalSuperEagle, Spearhead, Stephenb,
Stephenman882, Stevietheman, Suicidalhamster, Sysy909, T Houdijk, Talyian, Taxman, Tbird1965, Tcosta, Teenboi001, Terronis, TexasAndroid, Thatguyflint, The undertow, Thearcher4,
Theymos, Tide rolls, Tim874536, Timotab, Tobias Bergemann, Tombomp, Tommysander, Topspinslams, Trevor1, Turnstep, Tushard mwti, TutterMouse, Ulrichlang, UncleBubba, Unschool,
Vendettax, Venom8599, Vilerage, Viriditas, Voidxor, WPANI, Wai Wai, Wavelength, Weylinp, Why Not A Duck, WikiLaurent, Wikialoft, Wikiscient, WilliamSun, Wimt, Wk muriithi,
Wknight94, Wmahan, Wordwizz, Wtmitchell, Wyatt915, XandroZ, Xaosflux, YUL89YYZ, Yama, Yorick8080, Zeroshell, ZimZalaBim, Zntrip, Πrate, 1117 anonymous edits
Denial-of-service attack Source: http://en.wikipedia.org/w/index.php?oldid=378644075 Contributors: A.hawrylyshen, Abune, Acdx, Adarw, AdjustShift, Ahoerstemeier, Alan Taylor, Alanyst,
Aldor, Alerante, Alex9788, AlexandrDmitri, Alexius08, Allen4names, Alliekins619, Alphachimp, Ancheta Wis, Andrejj, Andrew Hampe, Andrewpmk, Andrzej P. Wozniak, Anetode, Anna
Lincoln, Anonauthor, Anonymous Dissident, Antisoft, Apankrat, Ariel@compucall, Armoraid, Arnoldgibson, Artephius, Asmeurer, Astt100, Atif.t2, Atif0321, B1atv, Balth3465, BanyanTree,
Bartledan, Baylink, Beatles2233, Becky Sayles, Bemsor, BenTremblay, Berrick, Bhadani, Black Falcon, BlackCatN, Blackeagle, Blackmo, Bloodshedder, Blueboy96, Bluefoxicy, Boborok,
Bongwarrior, Bradcis, Brento1499, BrianHoef, Brianski, Brusegadi, Bsdlogical, Buck O'Nollege, Bucketsofg, Burfdl, Bushcarrot, Butwhatdoiknow, Bwoodring, CWii, Cadence-,
CanadianLinuxUser, Canaima, CanisRufus, Carl Turner, Carlo.Ierna, Carlossuarez46, Catamorphism, Centrx, Charles Matthews, Chicken Quiet, Chowbok, Christian75, Ciaran H, Cinnamon42,
Clapaucius, Clerks, CliffC, Closedmouth, Cmichael, Corporal, CosineKitty, Courcelles, Crakkpot, Csabo, Cwagmire, Cyan, CyberSkull, Da31989, Danc, DanielPharos, Danny247, Danr2k6,
Dark Tea, DarkMrFrank, Darkfred, Darklord 205, David Fuchs, David Gerard, DavidH, Davken1102, Ddcc, DeadEyeArrow, DeanC81, Deathwiki, Delirium, Delirium of disorder, Deltabeignet,
Demonkill101, DerHexer, Derek Ross, Dicklyon, Discospinster, Drennick, Drogonov, DropDeadGorgias, Drrngrvy, Dsarma, Dschrader, Duzzyman, Echuck215, Ed Poor, Edman274, Egmontaz,
Eiler7, ElKevbo, Elecnix, Eliz81, ElizabethFong, Emurphy42, EneMsty12, Equendil, Erianna, Eric-Wester, Escheffel, Evercat, Excirial, Expensivehat, Face, Fang Aili, Fastilysock, Fat&Happy,
Fedevela, Ffffrrrr, Fiddler on the green, Filovirus, FinalRapture, Findepi, Finnish Metal, Fintler, Fireman biff, Flewis, Frankgerlach, Frap, Fredrik, Freekee, Friejose, Furrykef, Fuzheado, Gaius
Cornelius, Galorr, Gardar Rurak, Gareth Jones, Gascreed, GateKeeper, Gauss, Gazzat5, Gblaz, Gdm, Geniac, Getmoreatp, Gevron, Ghewgill, Gmaxwell, Gmd1722, Goarany, Godzig, Gogo
Dodo, Gomangoman, Goncalopp, Gopher23, Gorman, Goto, GraemeL, Graham87, Gwernol, HamburgerRadio, Harry the Dirty Dog, Herbythyme, Hmwith, Hu, Hu12, Huds, Hut 8.5, I already
forgot, IRP, Intgr, Irishguy, Ivan Velikii, Ivhtbr, Ixfd64, J-Baptiste, J.delanoy, JMS Old Al, JOptionPane, JackHenryDyson, Jamesrules90, Janarius, Jaranda, Jcc1, Jcmcc450, Jeannealcid, Jebba,
Jhalkompwdr, JoanneB, Joelee.org, Joemadeus, Joey Eads, Johan Elisson, JonHarder, Josh Parris, Josh3580, Jotag14, Jrmurad, Juanpdp, Julesd, Jwrosenzweig, KVDP, Kaishen, Kakaopor,
Kakurady, Kane5187, Kbdank71, Kbk, Kenyon, Keraunoscopia, Ketiltrout, Kigoe, Killiondude, Klaslop, Korath, Kortaggio, KrakatoaKatie, Krellis, Krimanilo, Kristen Eriksen, Kubanczyk,
Kuru, KyraVixen, LAX, La Parka Your Car, Larrymcp, Lawrence Cohen, Ld100, LeaveSleaves, Lemmio, Life Now, Lightdarkness, Lights, Liko81, LilHelpa, Livefastdieold, Liveste, Looxix,
Lothar Kimmeringer, Lotje, Loudsox, Lradrama, Lupo, MC10, MER-C, Madman, Magnus Manske, Magog the Ogre, Mako098765, Malhonen, Mani1, Mann Ltd, Marco Krohn, Marek69, Mark,
Matt Casey, Mav, Maximus Rex, Mbell, Mboverload, Mckaysalisbury, Mcnaryxc, Melkori, Mgiganteus1, Michael Hardy, Midnightcomm, Mightywayne, Mikeblas, Millisits, Mindmatrix,
Mmoneypenny, Modamoda, Monaarora84, Mpeg4codec, Msamae83, Msick33, Msuzen, Mtcv, MugabeeX, Mwikieditor, N-david-l, N3X15, Naniwako, Ncmvocalist, NeoDeGenero, Neutrality,
Nishkid64, Niteowlneils, Nivix, Nneonneo, No Guru, NotAnonymous0, Nyttend, O.neill.kid, Ocker3, Oddity-, Oducado, Ojay123, Olathe, Oliver Lineham, Omicronpersei8, One, Orpenn, Oscar
Bravo, Oystein, P Carn, PPBlais, Pablo Mayrgundter, Paroxysm, Pchachten, Pearle, Pedant17, Peng, Pengo, PentiumMMX, Persian Poet Gal, Peter AUDEMG, Peter Delmonte, Phantomsteve,
Philip Trueman, Physicistjedi, PierreAbbat, Plreiher, Pol098, Prashanthns, Pwitham, Qrsdogg, R3m0t, RLE64, Raistolo, Ramu50, Randomn1c, Raoravikiran, RattleMan, Raven 1959,
Rchamberlain, Rdsmith4, RedPen, RedWolf, Redvers, RexNL, Rfc1394, Rhlitonjua, Rich Farmbrough, Rjc34, Rjstott, Rjwilmsi, Rmky87, RobLa, Robertvan1, Rofl, Romal, Romeu, Ronhjones,
Rosarinjroy, Ross cameron, Rror, Ryanmshea, Ryanrs, SRE.K.A.L.24, SaintedLegion, Sam Hocevar, SamSim, Samwb123, Scapler, SchnitzelMannGreek, Scientus, Scootey, Sean Whitton,
Sephiroth storm, Shaddack, Shadow1, Shadowjams, Shadowlynk, Shaggyjacobs, Shree theultimate, Shuini, Sierra 1, Signalhead, Silversam, Skiidoo, SkyWalker, Slakr, Sligocki, Snowolf,
Softwaredude, Soma mishra, SonicAD, Sophus Bie, SoulSkorpion, Spaceman85, Sparky132, Spartytime, Splintercellguy, Stephenchou0722, Stphnm, Sunholm, SuperHamster, Swearwolf666,
Synchronism, T23c, THEN WHO WAS PHONE?, TakaGA, Taxman, Tckma, Tempshill, Terryr rodery, The Anome, The Cute Philosopher, TheMightyOrb, Thewrite1, Thexelt, Tide rolls,
Timwi, Titi 3bood, Tmaufer, Toobulkeh, Totakeke423, Tothwolf, Tree Biting Conspiracy, Trigguh, Tuspm, Ulric1313, Ultimus, Uncle Dick, UnitedStatesian, Unschool, Useight, VKokielov,
Vashtihorvat, Vedge, Vegardw, VernoWhitney, Vicarious, Victor, Violetriga, ViperSnake151, Viperdeck PyTh0n, W.F.Galway, WAS 4.250, WadeSimMiser, Warrush, Wayward, WeatherFug,
Web-Crawling Stickler, Wesha, WhisperToMe, Who, Whsazdfhnbfxgnh, WikHead, Wikibert, Wikimol, William Ortiz, Wiredsoul, Wmahan, Wolfkeeper, Woody, Woohookitty, Wrs1864,
Xavier6984, Xawieri, Xee, Xiphiidae, Xitrax, XtAzY, Yaltaplace, Yehaah, Yonatan, Yooden, Yudiweb, Zac439, Zedmelon, Zhangyongmei, Zhouf12, Zsinj, Île flottante, Սահակ, ‫םיאפר לורט‬,
1053 anonymous edits
Spam (electronic) Source: http://en.wikipedia.org/w/index.php?oldid=378389379 Contributors: (, (jarbarf), 123josh, 23skidoo, 24.251.118.xxx, 2ndAccount, 5 albert square, 99DBSIMLR, A.
B., A3RO, AFNETWORKING, Aapo Laitinen, Aaron Rotenberg, Aaronkavo, Abby 94, Abce2, Abecadlo, Acidskater, Addshore, Adw2000, Aeroknight, AgentCDE, Ahoerstemeier, Akamad,
Alansohn, Alarob, Alerante, Alexius08, Alfio, Alina Kaganovsky, AliveUser, Allen1221, Alphachimp, Altenmann, Alusayman, Alyssapvr, Amatulic, Ams80, AnK, Anand Karia, Andrej,
Andrewtscott, Andypandy.UK, Andywarne, AngelOfSadness, Angela, Anger2headshot, Aniboy2000, Anonymous Dissident, Antandrus, Anthony, Anthony Appleyard, AnthonyQBachler,
Arbitrary username, ArchonMagnus, Arichnad, Arm, Arsenal boy, Arvindn, Astudent, Atif.t2, Avalyn, BF, Badlittletrojan, Balix, BalooUrsidae, Barefootguru, Barnabypage, Barneca,
BarretBonden, Barzam, Baylink, Beatlesfan303, Beetstra, Beland, Bemsor, Ben-Zin, Benevenuto, Benjaminhill, Bertilvidet, Bevo, Beyond My Ken, Billgdiaz, BlkMtn, Bloodshedder, Bobo192,
Bogdangiusca, BolivarJonathan, Boogster, Bookofjude, Boringguy, Brentdax, Brettr, Bruns, Btljs, Bucketsofg, Bushytails, Caiaffa, Calmer Waters, Calvin 1998, Can't sleep, clown will eat me,
Canadian-Bacon, CanadianLinuxUser, Canderson7, CapitalR, Carlosp420, Cassandra 73, Cate, Catgut, Celardore, CesarB, ChaTo, Chamal N, Chancemill, Chaprazchapraz, Charles Iliya
Krempeaux, Charles Matthews, Cheesy123456789, Chocolateboy, Chris 73, Chris Newport, ChrisO, Chriswaterguy, Chuckupd, Chveyaya, Ciaran H, Cill.in, Cirilobeto, Citicat, Cjpuffin, Clawed,
Click this username please!, CliffC, Closedmouth, Cmdrjameson, Codewizards, Cognita, Colonies Chris, Commander, Conversion script, CovenantD, Cowshampoo, Cprompt, Crashfourit,
Cribbooky, Crmtm, Crockett john, Crossmr, Crystallina, Cst17, Ctudorpole, Cyp, D-Rock, DKSalazar, Daddy Kindsoul, Daikiki, Damian, Damian Yerrick, Daniel marchev, Darc dragon0, Dark
shadow, Darkspots, Darrell Greenwood, Dat789, David Gerard, David Stewart, David.Mestel, DavidFarmbrough, Dayewalker, Dbrunner, Dcoetzee, Deagle AP, December21st2012Freak,
Dekisugi, Deli nk, Delirium, Demonkey36, Denelson83, DerHexer, Des1974, Deus Ex, Dfense, DickRodgersMd, Dicklyon, Dieall, Discospinster, Divisionx, Diza, Dizzybim, Djhbrown,
Dkavlakov, DocWatson42, Donkennay, Donreed, Dori, DorkButton, Dougofborg, DragonflySixtyseven, Drcarver, Dtobias, Dudesleeper, Dustinasby, Dycedarg, Dynabee, Dynaflow, E!, ESkog,
EagleOne, Ealex292, Eaolson, Earldelawarr, Eastdakota, Ebonmuse, Ed Poor, Eeera, Eehite, Egmontaz, Eiler7, Electricnet, Eloquence, Elvey, EncMstr, Endofskull, EoGuy, Equendil, Er
Komandante, Eric-Wester, EricR, ErikWarmelink, EscapingLife, Ethan01, Everyking, Evil Monkey, Evoluzion, Excirial, Exir Kamalabadi, Eykanal, FF2010, FactChecker6197, FaerieInGrey,
Falcanary, Falcon9x5, Fanf, Faradayplank, Fastily, Favonian, Fearless Son, Fedor, Fedra, Feedmecereal, Fibonacci, Flammifer, Flewis, Florentino floro, Flubbit, FlyingPenguins, Forspamz,
Fosnez, Frecklefoot, Fredb, Fredrik, Freeleonardpeltier, FuadWoodard, Fubar Obfusco, Funky on Flames, Furious B, Furrykef, Fuzheado, G.W., G4fsi, G7yunghi, GCarty, GDonato, Gail, Gaius
Cornelius, Garycolemansman, Gavia immer, Geozapf, Gerbrant, Ghiraddje, Giftlite, Gilliam, GinaDana, Glen, Godalat, Gongor the infadel, Governorchavez, GraemeL, Graham87, Greatigers,
Greenmoss, GregRM, GrooveDog, Grubbmeister, Guanaco, Gurch, Gwernol, Gzkn, Gökhan, H Bruthzoo, Hadal, Haham hanuka, Hajor, HalfShadow, HappyInGeneral, Harryboyles, Hatterfan,
Hdt83, Heatman07, Heerat, Hellogeoff, HexaChord, Hhhjjjkkk, Hierophantasmagoria, Hike395, Hoogli, Hqb, Hserus, Hu12, Hut 6.5, Hut 8.5, Hydrargyrum, I-Love-Feeds, I80and, IRP, Ian
Pitchford, IceUnshattered, Icseaturtles, Ikester, Imran, Inignot, Ino5hiro, Internete, Intersofia, Invisible Friend, Iridescent, Irtechneo, Ivan Bajlo, Ixfd64, J'raxis, J.delanoy, JDiPierro, JForget,
JRawle, JTN, JackLotsOfStars, Jackola, Jagged 85, Jakew, Jamawillcome, Jamesday, Jamesontai, Janarius, Janet13, Jasonpearce, Javert, JayKeaton, Jayden54, Jeannealcid, JediLofty, Jeff G.,
Jeffreythelibra, Jeffro77, Jengod, Jesse Viviano, JesseW, Jfdwolff, Jh51681, Jhfireboy, JimiModo, Jj137, Jkej, Jmlk17, Jmozena, Jmundo, Jnc, Joanjoc, JoeSmack, John Broughton, JohnI,
Johnkirkby, Johnmc, Johnsmitzerhoven, Jojhutton, Jolly Janner, Jonathunder, Jonnabuz, Josh Parris, JoshuaZ, Jotag14, Joyous!, Jtkiefer, Judge Rahman, Julian Mendez, Jusdafax, Jusjih,
Jwrosenzweig, Jyavner, Jyotirmaya, K.lee, Kail212, Karada, Karl, Kartik.nanjund, Kate, Keith D, Kelisi, Kelly Martin, KellyCoinGuy, Kevin.cohen, Kghose, Khym Chanur, Kickakicka,
Kilo-Lima, Kimbayne, Kingofdeath6531, Kingofspamz, Kingpin13, KitAlexHarrison, Klaun, Kmenzel, KnowledgeOfSelf, Knutux, Kotepho, Kotjze, Kowey, Kozuch, Kristen Eriksen, Kubigula,
Kusma, Kylu, L33tj0s, LOL, Lachatdelarue, Lakish, Landon1980, Lapsus Linguae, Larry V, Lassebunk, LedgendGamer, LeeHunter, Lemming, Lenoxus, Leon7, Leopardjferry, Levin, Liftarn,
Lightmouse, Linkspamremover, Litch, Lloydic, Lordmac, LoveEncounterFlow, Lowellian, Lstanley1979, Luckas Blade, Luke C, Lupin, Luxa, Luxdormiens, MBisanz, MER-C, MJN SEIFER,
MPLX, Mac, Madchester, Maddog Battie, Maged123, Magister Mathematicae, Magnus Manske, Maha ts, Malcolm Farmer, Manc, Mani1, Manning Bartlett, Manop, Mapsax, Maqic, Marco
Krohn, Marek69, Martial Law, Martin451, MartinHarper, Marysunshine, Masgatotkaca, Master Deusoma, Matt.whitby, Matteh, Matthewcieplak, Maurreen, Maximaximax, Maximus Rex,
Maxschmelling, Mckaysalisbury, Mcnattyp, Mcsboulder, Meelar, Mendel, Mentifisto, Mephistophelian, Mercury, Merlin Matthews, Metaphorce, Michael Hardy, MichaelBillington,
MichaelScart, Michaelric, Michal Nebyla, Midnightcomm, Mike6271, MikeDConley, Mikeipedia, Mikemoral, Mindspillage, Minesweeper, Missionary, Mlaffs, Mlk, Mnkspr, Mns556,
Modemac, Moink, Monkey Bounce, Moogwrench, Morten, Motor, MrBurns, MrOllie, Mrschimpf, Ms dos mode, Msdtyu, Mudpool, Mufassafan, Mufka, Mwanner, Mydogategodshat,
Mygerardromance, Mysdaao, NHRHS2010, Nachmore, Nakamura2828, Nakon, Nasier Alcofribas, NeilN, Neilf4321, Neilisanoob, Nestea Zen, Neutrality, Newportm, Nickolai Vex, Nikai,
NikkiT, Nikzbitz, Nk, Nonpareility, Ntsimp, Nubzor, Nummer29, Oducado, Ohnoitsjamie, Ojigiri, Ojw, Oklonia, OnBeyondZebrax, OnePt618, Onorem, Opelio, OpenScience, Optakeover,
Orangemarlin, Orlady, Orphan Wiki, Orpheus, Osama's my mama, Ottava Rima, Ottawa4ever, OwenX, Ownlyanangel, Oxymoron83, P Carn, PCHS-NJROTC, PHILTHEGUNNER60, PJDJ4,
PMDrive1061, Pacdude4465, Pack It In!, Pagw, Pakaran, Paranoid, Patcat88, Pattelefamily, Paul Magnussen, PaulGS, Pavlucco, Pedant, Pepper, PeteVerdon, Petenewk, Pgk, Phil Boswell, Picus
viridis, PierreAbbat, Piet Delport, Plison, Poeloq, Policehaha, Ponder, Postdlf, Prapsnot, Pridak123, Profeign, Profgadd, Proxima Centauri, Pti, Quakerman, QueenCake, Quintin3265, Quoth,
Qwanqwa, R.123, R5gordini, RadicalBender, Radnam, Rafreaky13, Raganaut, Ragbin, Rajakhr, Rajkiran.singh, Ral315, Raul654, Raymond arritt, Rbedy, Rcousine, Rdsmith4, Receptacle, Red
Darwin, Red Thrush, Redrose64, Reedy, Reginmund, Rehevkor, Reki, Remember the dot, Reswobslc, RevRaven, Rewster, RexNL, Rhlitonjua, Rhobite, Rich Farmbrough, Richi, RickK,
Rjd0060, Rjw62, Rjwilmsi, Rkr1991, Robert Merkel, RobertG, RobertL, RobertMfromLI, Robma, RockMFR, Rossami, RoyBoy, Rubicon59, Rudá Almeida, Ryulong, SCZenz, Sadharan,
SaltyPig, Samtheboy, Sbluen, Scalene, Schneelocke, SchnitzelMannGreek, SchuminWeb, Scott&Nigel, Sdedeo, Seba5618, Sedheyd, Sendervictorius, Shadowsoversilence, Shantavira, ShaunES,
Shawnc, Shentino, Shigernafy, Shirik, Sick776, Siliconov, Silvestre Zabala, Simetrical, Singularity, SiobhanHansa, Sjkiii, Skwee, Skyshaymin96, Slady, Sleepyvegetarian, Sltrmd,
SniperWolf1564, Snori, Snoyes, Softarch Jon, Softtest123, SolarCat, Someguy1221, SouthernNights, Spamlart, Spangineer, Sparky the Seventh Chaos, Spntr, Spoirier, Spoon08, Spoonpuppet,
SpuriousQ, Starrburst, Stephen Gilbert, Stephen Turner, SterlingNorth, Stevertigo, Stevey7788, Stirling Newberry, Subho007, Suffusion of Yellow, Sunilromy, SuperHamster, Surachit,
Susvolans, Svetovid, T3hplatyz0rz, TNLNYC, Tacosnot2000, Taelus, Tallus, Tannin, TanookiMario257, Tarquin, Taxman, Tcrow777, Techman224, Tedickey, Template namespace initialisation
script, Terzza, Tesseran, Test2345, Tetraedycal, Texture, The Anome, The Thing That Should Not Be, The creator, The pwnzor, The wub, Theatrebrad, Thefattestkid, Thegamingtechie,
TheoClarke, There is no spoon, Therealbrendan31, Theresa knott, They667, Thezoi, Thincat, Thompson.matthew, Tide rolls, TigerShark, Tim Starling, TimVickers, Tinykohls, Tiptoety,
Tissue.roll, Tktru, TomV, Tomahawk123, Tomkoltai, Tommy turrell, Tommy2010, Tompotts1963, Toresimonsen, Toytoy, Tree Biting Conspiracy, Turpissimus, Twinxor, Tyger 2393, Tyrenius,
Ucucha, Ultimus, Uncle G, Useight, User27091, Varmaa, Varunvnair, VederJuda, Versageek, Vespristiano, Violncello, Volcanictelephone, Wafulz, Walter, Walton One, WarthogDemon,
Wayiran, Wayward, Wazok, Weedwhacker128, Wesley, Whaa?, WhisperToMe, Whiterox, WhyohWhyohWhy, Wiki alf, Wiki is screwed by the Gr0up, WikiLeon, WikiMan225, WikiTrenches,
Wikidemon, Wikiklrsc, WikipedianMarlith, Wikipeditor, Wile E. Heresiarch, William Avery, Willking1979, Willsmith, Wknight94, Wmplayer, Wolfcm, Writerjohan, Wrs1864, Wwwwolf,
Xenongamer, Xhad, Y0u, Yama, Yamla, Yardcock, Yekrats, Yidisheryid, Yo yo get a life, Yonghokim, Yoshn8r, Younghackerx, Yworo, Zach8192, ZachPruckowski, Zardoz, Zarius, ZeWrestler,
Zephyr2k, Zigger, ZimZalaBim, Zntrip, Zoe, Zundark, Zvika, Zytron, Zzuuzz, 1404 anonymous edits
Phishing Source: http://en.wikipedia.org/w/index.php?oldid=378920760 Contributors: .::Icy Fire Lord::., 1995hoo, 1RadicalOne, 3210, 42istheanswer, 7Train, A-giau, ABCD, AKismet,
Abc518, Abdullah000, Academic Challenger, AdjustShift, Aesopos, Aftiggerintel, Ahoerstemeier, Aitias, Akamad, AlMac, Alan.ca, Alanraywiki, Alansohn, AlbertR, Alchemist Jack,
Alextrevelian 006, Ali K, Alucard (Dr.), Alxndr, Amh library, Anabus, Andrew Levine, AndrewHowse, Andrewlp1991, Andrewtc, AnnaFrance, Anonymous Dissident, Ant honey, Anthony
Appleyard, Antonio Lopez, Aquillyne, Archanamiya, Archelon, Art LaPella, Arvindn, Asn, Atropos235, AussieLegend, BabuBhatt, Bancrofttech, Barek Ocorah, Bart133, Bassbonerocks,
Bemsor, Bento00, Betacommand, Bhxinfected, Bigzig91090, Biophys, Biscuittin, Bjankuloski06en, Blackrabbit99, Blast73, Bnefriends, Bobblewik, Bobmack89x, Bobo192, Bodybagger,
Bongwarrior, Bookwormrwt, Borgx, Bornhj, Boudville, Brian Kendig, Brighterorange, BrockF5, Browolf, Bsroiaadn, Buggss, Butwhatdoiknow, C'est moi, CMW275, CWY2190, Cactus.man,
CactusWriter, Cain Mosni, Calvin 1998, CambridgeBayWeather, Can't sleep, clown will eat me, Canadian-Bacon, CanisRufus, Capricorn42, CaptainMike, Captainspalding, Carbonite,
CaseyPenk, Castleinthesky, Catdude, Cenarium, Centrx, CesarB, Chaser, Chmod007, Chris 73, Chris Chittleborough, Chrislk02, Chroot, Chuck Adams, Ckatz, ClementSeveillac, CliffC, Cnd,
Cocopopz2005, Colin Keigher, ColinHelvensteijn, Colostomyexplosion, Commander, CommonsDelinker, Conscious, CosineKitty, Courcelles, CrazytalesPublic, Credema, Cunningham,
CuteHappyBrute, D0762, DOSGuy, Dafphee, Dalf, DanBri, Dancter, Daniel Wright, DanielCD, Daven29, David Levy, David Shay, DavidForthoffer, Dawn Bard, Dawnseeker2000,
DeadEyeArrow, Deagle AP, Dearsina, Deathphoenix, Debresser, Debroglie, Deor, DerHexer, Dfense, Digiplaya, Digita, Directorblue, Dirkbb, Discospinster, Dnvrfantj, Doc Strange, Douglas
Ryan VanBenthuysen, Dskushwaha, Dtobias, Duplicity, Dylan620, Dysepsion, Długosz, EagleOne, Eamonster, EdBever, Edward, Einsteininmyownmind, El C, Electrosaurus, ElinorD, Elockid,
Eloquence, EmeryD, Emurph, Epbr123, Equendil, Ergateesuk, EvilZak, Excirial, Exmachina, Face, Falcon8765, Fattyjwoods, Fedkad, Fereve1980, Ferwerto, Finbarr Saunders, Firsfron, Flauto
Dolce, Flewis, Folkor, Frankie1969, Frecklefoot, Freeworld4, Fubar Obfusco, G026r, GCarty, GGreeneVa, Gabr, Gail, Gaius Cornelius, Gboweswhitton, Generic9, George The Dragon, Gerry
Ashton, Gilliam, Gimmetrow, Gjd001, Glenfarclas, Gogo Dodo, Gojomo, Gorgonzilla, Govvy, Greensburger, Grick, Griffind3, Grnch, Groyolo, Gtstricky, HamburgerRadio, Hankwang,
Harmonk, Harryboyles, Hateless, Herbythyme, HexaChord, Hibernian, Homunq, HoodedHound, Hooksbooks, Horologium, Hylaride, I already forgot, I need a name, ITurtle, IamTheWalrus,
Iangfc, Iceflame2539, Ikester, ImaginaryFriend, Imgoutham, Inferno, Lord of Penguins, InfoSecPro, Inomyabcs, Insanity Incarnate, Inter, Intersofia, Iolar, Ismail ngr, Isopropyl, Ixfd64, J Cricket,
J.delanoy, JIP, JLaTondre, JTBX, JTN, JaGa, Jacobratkiewicz, Jake Wartenberg, Janarius, Jaremfan, Jarry1250, Jbergquist, Jeannealcid, Jed 20012, Jeffq, Jehochman, Jenny Wong, Jerome
Charles Potts, Jerryseinfeld, Jersey92, Jesse Viviano, JimVC3, Jliford, Jmlk17, Jnavas, Jobeard, Joelr31, Johann Wolfgang, John254, Johnuniq, JonHarder, Jondel, JonnyJD, Josang, Joseph Solis
in Australia, Josh Parris, Joshdaman111, Jotag14, Joyson Konkani, Julian Diamond, Juliancolton, Juux, K1Bond007, Ka-Ping Yee, Kaare, Kai-Hendrik, Katester1100, Kelaos, KelleyCook,
Kelson Vibber, Kendrick7, Kevin Forsyth, Keycard, Kiand, Kimse, Kinneyboy90, Kizor, Kkrouni, Klemen Kocjancic, Knucmo2, Kozuch, Kraftlos, KrakatoaKatie, L Kensington, LCE1506,
LOL, Lakers3021, Lakshmin, Lamather, LarryMac, Laurusnobilis, Lcawte, Lee Carre, Legoktm, Leon7, Lets Enjoy Life, Lexlex, Lightblade, Lightmouse, Lights, Ligulem, Lilac Soul,
Linkspamremover, Lisamh, Lotje, LovesMacs, Lradrama, Luckas Blade, Lwalt, Lycurgus, MBelgrano, MBlume, MC MasterChef, MC10, MER-C, Macintosh User, Magicheader, Man It's So
Loud In Here, ManiF, Mark7-2, Mark83, Markus-jakobsson, Marskell, Martarius, Master Thief Garrett, Matt Crypto, MattKeegan, Matthew Platts, Matthew Yeager, Maximaximax, Meelar,
Mgway, Mhoulden, Michael Hardy, Michaelorgan, Michal Nebyla, Mike5906, MikePeterson2008, Mindmatrix, Minghong, Mintleaf, Mithridates, Mkativerata, Mns556, Mommy2mack08,
Moncrief, Monkeyman, Morenooso, Mormegil, MrOllie, MrWeeble, Mralokkp, Mrbakerfield, Mrkoww, Ms dos mode, Mspraveen, Mtford, Mugaliens, Muhammadsb1, Muke, Nagle, Nagytibi,
Nathanrdotcom, NawlinWiki, Nealmcb, Neelix, Nepenthes, Netsnipe, Neutrality, Nichalp, Nicknackrussian, NigelR, Nikai, Nintendude, Nishkid64, No more anonymous editing, Nonagonal
Spider, NotAnotherAliGFan, Novakyu, Nposs, NuclearWarfare, Nv8200p, Nyttend, Ocon, Oda Mari, Oducado, Oktober42, Oliver202, OllieGeorge, Olly150, Olrick, Omphaloscope,
One-dimensional Tangent, OnlineSafetyTeam, Optimale, Ori.livneh, Otto4711, OverlordQ, Overseer1113, OwenX, PCHS-NJROTC, PDH, Pakaran, Palica, Passgo, Past-times, Patrick, Paul
August, Paul W, Paul1953h, Pearle, Perfecto, Peskeyplum, Petrichor, Pgk, Philip Trueman, PhilipO, Phils, Piano non troppo, Pie Fan, Piledhigheranddeeper, Pillwords, Pinethicket, Pinkadelica,
Plasticspork, Plastikspork, Poccil, Poli, Pollinator, Pomyo is a man, Ponguru, Postdlf, PotentialDanger, Pratyeka, Profilepitstop, Ps-Dionysius, Pseudomonas, Publicly Visible, Pwitham,
Pyroknight28, R.123, Rabasolo, Ragib, Randybandy, Raul654, RazorICE, Redfarmer, Redfoot, Remember the dot, RenniePet, RexNL, Reyk, Rfl, Rhlitonjua, Rhobite, Rich Farmbrough,
Richwales, Rick Norwood, Rjwilmsi, Rmatney, Roastytoast, Robchurch, RobertG, Robertcathles, Rohasnagpal, Rolandg, Ronaldgray, RossPatterson, RoyBoy, RoySmith, Roymstat, RozMW,
Rprpr, Rrburke, Rror, Ryan Norton, RyanCross, RyanGerbil10, Ryco01, SEWilco, SJP, SMC, ST47, SWAdair, Safemariner, Salaamhuq, SallyForth123, Sam Korn, Sanjay ach, Sanjeevchris,
Saqib, Sbanker, SchfiftyThree, Schneelocke, SchuminWeb, Scootey, Scott Ritchie, Sdhonda, Securus, Seinucep, Sendmoreinfo, Sesu Prime, Shawnhath, Shii, Shoecream, Shoejar, Sigma 7,
Sigrulfr, Simon Shek, SimonD, Sjl, Sladen, Slakr, Snowolf, Socrates2008, Sophus Bie, Southboundon5, SouthernNights, Spamlart, Sparky the Seventh Chaos, Spencer, Splash, SquidSK,
StealthFox, SteinbDJ, Stevenrasnick, Storkk, Storm Rider, Stronginthepost, Stwalkerster, SunDog, SuperHamster, Surreptitious Evil, Symode09, THEN WHO WAS PHONE?, Ta bu shi da yu,
Taejo, Tannin, Tayquan, Telempe, Template namespace initialisation script, TerriersFan, That Guy, From That Show!, The Anome, The Monster, The Nut, The Rambling Man, The Thing That
Should Not Be, The snare, TheHerbalGerbil, TheListener, Thingg, Tide rolls, Tim Chambers, Timtrent, Tomasic, Tomheinan, Tommy2010, Tony1, TonyW, Tpbradbury, Travelbird, Tregoweth,
Treisijs, Tuntable, TwilligToves, Tylerwillis, Unixslug, Vald, Veinor, Versus22, Vikreykja, Vindi293, Vineetcoolguy, Vsmith, Vuo, Wavelength, Wayward, Webrunner69, WhisperToMe,
Whittlepedia, Who, WikHead, Wiki alf, WikiLaurent, Wikibob, Wildwik, William Avery, Wiseguy2tri, Wocky, WolfmanSF, Wpedzich, Wrs1864, X!, Xp54321, YetAnotherAuthor, Yngvarr,
YourEyesOnly, Yuhaian, Z10x, ZEUHUD, Zap Rowsdower, ZeWrestler, Zgadot, Zhen-Xjell, Zigamorph, Ziva 89, Zoicon5, ZonedoutWeb, Zragon, Zyschqual, Zzuuzz, Ævar Arnfjörð
Bjarmason, ‫א רמות‬., 에멜무지로, 1729 anonymous edits
Data security Source: http://en.wikipedia.org/w/index.php?oldid=378697421 Contributors: Ant, BMF81, Cascadeuse, Chrislk02, Cronky, DanielPharos, Datawipe, Discospinster, El C,
Everyking, Finlay McWalter, Fjakovlj, Frap, George1997, Gurch, Gzuckier, Hanskpe, Iqspro, John254, JonHarder, Joseph Solis in Australia, Jusdafax, Kubieziel, Kurt Shaped Box, Ledzep747,
Longhair, Matthew Stannard, Maurice Carbonaro, Megarichkingbob, Mfisherkirshner, Nono64, Nuwewsco, PMG, Peleg72, Rhobite, Schumi555, Seaphoto, Seldridge99, Sfe1, Sjakkalle,
Steinsky, TrainTC, Udo Altmann, Weregerbil, Wiknerd, Willking1979, Wmasterj, Wuhwuzdat, Zzuuzz, 115 anonymous edits
Information security Source: http://en.wikipedia.org/w/index.php?oldid=377132064 Contributors: 193.203.83.xxx, Aapo Laitinen, Abductive, Aclyon, Addshore, Adhemar, Aitias, Ajuk,
Aka042, AlMac, Alanpc1, Ale jrb, AlephGamma, Alexandru47, Ali, Alucard (Dr.), Amatriain, Amorymeltzer, Andycjp, Angela, Ant, Aron.j.j, Awillcox, BD2412, Bburton, Bill.martin, Bobo192,
Bobrayner, Borgx, BrWriter2006, Breadtk, Brian the Editor, Brwriter2, Btyner, Calm reason, Canoruo, Cartermichael, Cascadeuse, CelebritySecurity, Centrx, Cflm001, Choibg, Chris Brown,
Chris Ulbrich, Chrisbrown, CliffC, Closedmouth, Colemannick, Colonies Chris, CommodiCast, CommonsDelinker, Conversion script, Corp Vision, Cpaidhrin, Cupids wings, Danakil, Dancter,
DanielPharos, Davebailey, Dawnseeker2000, Dcressb, DeadEyeArrow, Dekimasu, Delmundo.averganzado, Derekcslater, Dkosutic, Docpd, Donalcampbell, Edcolins, Eddy Scott, Edward, El C,
Elieb001, Enviroboy, Epbr123, Evb-wiki, FTsafe, Falcon8765, Fancy steve, Favertama, Fcoulter, Feldermouse, Fredrik, Fryed-peach, Gaius Cornelius, Ged fi, Gfragkos, Giraffedata, Glavin,
Gomm, Gpdhillon2, GraemeL, Graham87, GroupOne, Grutness, HaeB, HansWDaniel, Happyrose, Hephaestos, Hnguyen322, Hu12, INFOSECFORCE, Imjustmatthew, Infinitesteps, Inthenet,
Iridescent, Irishguy, Ishikawa Minoru, Itai, Itusg15q4user, IvanLanin, JCLately, JaGa, Jdlambert, Jim.henderson, Jmilgram, John Vandenberg, JohnManuel, JohnOwens, JonHarder, Joy, Jpluser,
Khendon, Kieraf, Kimvais, KizzoGirl, Kl4m, Kozmando, Kungming2, Kuru, LiDaobing, Little saturn, MBisanz, MER-C, MK8, Madvenu, Martarius, Matt Crypto, Mattatpredictive, Mattg82,
Mauls, Mcicogni, MeS2135, MerileeNC, Michael Hardy, Mike0131, Mild Bill Hiccup, Mindmatrix, Mitte, Ml-crest, Morpheus063, MrOllie, Nainawalli, Nikai, Nitinbhogan, Noah Salzman,
Nuno Tavares, Nuwewsco, Nyttend, OKAMi-InfoSec, ObscurO, Ohka-, Ohnoitsjamie, Open4Ever, Piano non troppo, Plastikspork, Pnevares, PolarYukon, Portal60, Pramukh Arkalgud
Ganeshamurthy, Prokopenya Viktor, Prowriter16, Ptomes, Puntarenus, Quantumseven, Raed abu farha, RainbowOfLight, Ramfoss, Ranman45, Rearden9, Revmachine21, RexNL, Rgalexander,
Rhobite, Rich Farmbrough, Richard Bartholomew, Riker2000, Rjwilmsi, Robwhitcher, Ronz, Rossj81, Rsrikanth05, Rursus, SDC, Sa ashok, Sam Hocevar, Saqib, SasiSasi, Sbilly, Scapler,
Scgilardi, Scottdimmick, Seaphoto, SebastianHelm, Sephiroth storm, Sfoak, Sfoskett, Shadowjams, Shawnse, Shonharris, Soliloquial, Spearhead, Sperling, Srandaman, Stationcall, SteinbDJ,
Stephenb, Stevedaily, Stuartfost, Suruena, Sutch, Tabletop, TarseeRota, Tenbergen, The Anome, Think outside the box, Thireus, Tqbf, Truestate, Tuxisuau, Umbrussels, Uncle Dick, UriBraun,
V1ntage318, Vaceituno, Vinodvmenon, Violetriga, WalterGR, WideClyde, Wikibob, Wikipediatrix, Wilcho, William Avery, Wingfamily, Wmasterj, Woohookitty, Wrp103, Xinconnu, Xsmith,
Yakheart, ZeWrestler, ‫جردبماك‬, 431 anonymous edits
Encryption Source: http://en.wikipedia.org/w/index.php?oldid=378810433 Contributors: 16@r, 321ratiug, 62.253.64.xxx, ARGOU, Abnerian, Ahoerstemeier, Algocu, Alias Flood, Altenmann,
Andreyf, Annsilverthorn, Anthony Appleyard, ArnoldReinhold, Atanumm, Aude, Austin Hair, Austinmurphy, BanyanTree, BeEs1, Bellhalla, Benn Adam, Bertix, Bikepunk2, Binksternet,
Blahma, Brihar73, Btyner, CBM, CWY2190, Captain panda, Chaos, Chicken Wing, Chris Peikert, Christopher Parham, ClementSeveillac, Cmichael, Colonellim, Constructive editor, Conversion
script, DPdH, Da monster under your bed, Daniel.Cardenas, Danny, DavidJablon, Davidgothberg, Dawnseeker2000, DeadEyeArrow, DibbleDabble, Dicklyon, Digitalone, Dijxtra, Djkrajnik,
Dprust, Duk, Ebraminio, Eequor, Ehheh, Elektron, Emiellaiendiay, Emx, Encryptedmusic, Epbr123, Epeefleche, Etphonehome, FT2, Faithtear, Fastfission, Fastifex, Feezo, Fieldday-sunday,
FireDemon, Fleem, Frap, Fredrik, Fremsley, GatorBlade, Gatsby 88, Giftlite, Gogo Dodo, Gr8fuljeff, Greensburger, Grunt, Haakon, Hadal, Hairy Dude, Hdt83, Heron, Hu12, Hvs, Ianblair23,
ImMAW, Intgr, Isfisk, Itusg15q4user, Ixfd64, J.delanoy, JDspeeder1, Jackhc247, Jarskaaja, Jeffz1, JeremyA, Jevansen, Jimothytrotter, Jobarts, Joemaza, John Fader, Johnteslade, JonHarder,
Kate66705, Keilana, Kendrick7, Knguyeniii, Kokoriko, LP Sérgio LP, Learjeff, Lebha, Lee Carre, Legare, LiDaobing, Linweizhen, Luna Santin, Lunkwill, M.A.Dabbah, M.J.W, MBisanz,
MER-C, Marchuk, Mark Renier, Marygillwiki, Matekm, Matt Crypto, Maxt, Mazin07, MentalForager, Metamorph123, Michael Daly, MichaelBillington, Mickey.Mau, Mike A Quinn, Milaess,
Mindmatrix, Mitsumasa, Mmernex, Modulatum, MrOllie, MtB, Munahaf, Nabolshia, NawlinWiki, NeoDude, Neutrino72, Nikai, Nixdorf, Nk, Nowledgemaster, Nposs, Nuwewsco, Onevalefan,
OrgasGirl, Pabix, Papergrl, Paxsimius, Peleg72, Pharaoh of the Wizards, PhoenixofMT, Pjf, Plastikspork, Player82, Ploxx, PopUpPirate, Pozytyv, Public Menace, PuercoPop, Punk4lifes,
Pvfresno72, Quaestor, Quasar Jarosz, RCVenkat, RHaworth, Raftermast, Ranman45, Raviaulakh, Rdsmith4, Rearden9, Reedy, Rich Farmbrough, RobertG, Rockvee, Romanm, Rory096,
Rossheth, STSS, Sander Säde, Santarus, Savidan, Schzmo, Scircle, Scott Illini, Secrefy, Securiger, Shotmenot, Sinar, SkyWalker, Slgrandson, Slightsmile, SoftComplete, SomeBody, Someone42,
Sonjaaa, Spondoolicks, Squadron4959, Streaks111, SurrealWarrior, Tarquin, TastyPoutine, Taxman, Technopat, TedColes, Tellarite, Thingg, Todd Peng, Todd434, Tombomp, ToonArmy,
Touisiau, TreasuryTag, Triscient Darkflame, Tromer, Uieoa, Useight, Vancegloster, Vegaswikian, Voidxor, W1tgf, Wavelength, Wikidrone, Wolf.alex9, Woohookitty, Wutsje, Ww, Xnquist,
Yonatan, Zeisseng, Zkissane, Zro, 369 anonymous edits
Cryptography Source: http://en.wikipedia.org/w/index.php?oldid=377638160 Contributors: (, 00mitpat, 137.205.8.xxx, 1dragon, 2D, 5 albert square, A bit iffy, ABCD, APH, Aaron Kauppi,
Abach, Abdullais4u, Academic Challenger, Acroterion, Adam7117, AdjustShift, Adjusting, Aeon1006, Agateller, Ahoerstemeier, Aitias, Akerans, Alan smithee, Alansohn, Altenmann, Anclation,
Andres, Antandrus, Antura, AnyFile, Anypodetos, Ap, ArnoldReinhold, Artusstormwind, Arvindn, Auyongcheemeng, Avenue, AxelBoldt, B Fizz, Badanedwa, BeEs1, Beleary, BiT, Bidabadi,
Bill431412, Binksternet, Bkassay, Blackvault, Bletchley, Blokhead, Bobo192, Boleslav Bobcik, Boyette, Brandon5485, Branger, Brentdax, Brighterorange, Brion VIBBER, Browner87, CL,
CRGreathouse, CWY2190, Cadence-, Calcton, Caltas, Capricorn42, Card, Cedrus-Libani, Cferrero, Chandu iiet, Chaojoker, Chaos, Chas zzz brown, ChemGardener, Chris 73, Chris the speller,
Chungc, Church074, Ciacchi, Cimon Avaro, Ckape, ClementSeveillac, Clickey, Closedmouth, Cluckkid, CommonsDelinker, Complex01, Conversion script, Corpx, CrazyComputers,
Crazycomputers, Crazyquesadilla, Crownjewel82, CryptoDerk, Cyde, Cyrius, DPdH, DRLB, DSRH, DVD R W, Dachshund, Daedelus, DanDao, Dante Alighieri, DarkBard, Darkhero77, Darth
Mike, Dave6, David Eppstein, David G Brault, David Nicoson, DavidJablon, DavidSaff, DavidWBrooks, Davidgothberg, Dbenbenn, DeathLoofah, Deeahbz, Delfeye, Delifisek, Demian12358,
Detach, Dgies, Dgreen34, Dhar, Dhp1080, Dienlei, Dlrohrer2003, Dmsar, Dogposter, Donreed, Dori, DoubleBlue, Dr. Sunglasses, Dr1819, DrDnar, Drink666, Dtgm, Duncan.france, Dysprosia,
E=MC^2, Echartre, Ed g2s, Edggar, Egg, Eltzermay, Endymi0n, Episcopus, Euphoria, Evercat, Exir Kamalabadi, FF2010, Farnik, Fatespeaks, Ferdinand Pienaar, Fg, Fieldday-sunday, Fredrik,
FreplySpang, G Rose, GABaker, Gaius Cornelius, Galoubet, Geni, Geometry guy, Georg Muntingh, GeorgeLouis, Ghettoblaster, Gianfranco, Giftlite, Gilliam, GimmeFuel, Glenn, Goatasaur,
Gogo Dodo, Gouldja, Grafikm fr, Graft, Graham87, Greatdebtor, Gubbubu, Gus Buonafalce, H2g2bob, Hagedis, HamburgerRadio, HappyCamper, Harley peters, Harryboyles, Harrymph,
Havanafreestone, Hectorian, Heinze, Hermitage17, Heron, Heryu, Hibbleton, HippoMan, Hollerme, Homunq, Hong ton po, Hotcrocodile, Htaccess, Hut 8.5, IAMTrust, Iago4096, Impaciente,
Imran, InShaneee, Infomade, Ivan Bajlo, Ixfd64, J.delanoy, JIP, JRM, JYolkowski, JaGa, Jacobolus, Jacoplane, Jacopo, Jafet, Jagged 85, Jaho, Jdforrester, Jeffq, Jeremy Visser, Jericho4.0,
Jessicag12, Jimmaths, Jitse Niesen, Jj137, Joel7687, John Vandenberg, John Yesberg, JohnMac777, Jok2000, Jorunn, JoshuaZ, Jpmelos, Jrockley, Jsdeancoearthlink.net, Judgesurreal777, Julesd,
K1Bond007, KFP, Kakofonous, Karada, Kazov, King of Hearts, Komponisto, KooIkirby, Kozuch, Krj373, Ksn, Kubigula, Kuszi, Kylet, LC, Lakshmin, LaukkuTheGreit, Laurentius,
LeaveSleaves, Leranedo, Leszek Jańczuk, Liftarn, Lightlowemon, Lightmouse, LoganK, Logologist, Lotte Monz, Lpmusix, Ludraman, Luna Santin, Lunchscale, M5, Madchester, MagneticFlux,
Mangojuice, ManiF, Manop, Manscher, MarSch, Mark Zinthefer, MasterOfHisOwnDomain, Materialscientist, MathMartin, Matt Crypto, Mauls, Maurice Carbonaro, Maury Markowitz, Mav,
MaxMad, Maxt, Mayevski, Mblumber, Mboverload, Meco, Meelar, Mercurish, Michael Devore, Michael Hardy, Michael miceli, Michi.bo, Mike Segal, Mindmatrix, Minna Sora no Shita,
Miserlou, Misza13, MithrandirAgain, Mmernex, Mmustafa, Mo0, Mobius, Mohdavary, MoleRat, Molerat, Monkeyman, Monkeymonkey11, MooCowz69, Moonriddengirl, Moverton, MrOllie,
Mrstoltz, Msaied75, Msanford, Msh210, Mspraveen, Myria, Mzajac, NHRHS2010, NYKevin, NawlinWiki, Neilc, Nemu, Ner102, Netoholic, Nevilley, Nfearnley, Nightswatch, Nihil novi, Nikai,
No barometer of intelligence, Norwikian, Novum, Ntsimp, ONEder Boy, Oddity-, Oerjan, Oiarbovnb, Oleg Alexandrov, Omicronpersei8, Ortolan88, OutRIAAge, Pakaran, Pale blue dot,
Pallas44, Papergrl, Patrick, Paul August, Pbsouthwood, Pegasus1138, Per Honor et Gloria, Permethius, PerryTachett, Peruvianllama, Peter Delmonte, Peter Isotalo, Peyna, Phr, PierreAbbat,
Pinethicket, PleaseSendMoneyToWikipedia, Plrk, Porkolt60, Powerslide, Praveen pillay, PrimeHunter, Proidiot, Prsephone1674, Qxz, R Math, Radagast3, Raghunath88, Rashad9607, Rasmus
Faber, Raul654, Rbellin, Rdsmith4, RedWolf, Reddy212, Remi0o, Rettetast, Revolver, Rich Farmbrough, Rich5411, Richard Burr, Richwales, Rjwilmsi, Roadrunner, RobertG, Robertsteadman,
Rochdalehornet, Rohitnwg, Ross Fraser, RudyB, Ryanwammons, SCCC, SDC, SJP, Saber Cherry, Sansbras, Saoirse11, Schlafly, Schneelocke, Scircle, Seamusandrosy, Sean.nobles, Secrefy,
Securiger, Serlin, Sfdan, Shenron, Shmitra, Siddhant, Sidmow, Sietse, SimonLyall, Sinar, Sjakkalle, Skippydo, Skunkboy74, Slayemin, Slightsmile, Slipperyweasel, Snowolf, Socialworkerking,
Sonam.r.88, Sovietmah, Speck-Made, Starwiz, Stefanomione, Stephen G. Brown, Stevertigo, StevieNic, Strigoides, Super-Magician, Superm401, Surachit, Sv1xv, Symane, Syphertext, TJRC, Ta
bu shi da yu, TakuyaMurata, Tao, TastyPoutine, Taw, Ted Longstaffe, TedColes, Tempshill, TerraFrost, The Anome, The wub, The1physicist, Thehockeydude44, Theresa knott, Thiseye,
Thumperward, TiMike, Tide rolls, Timwi, Tobias Bergemann, Tomasz Dolinowski, Touisiau, Treisijs, Turnstep, TutterMouse, Tw mama, Udopr, Uncle Lemon, Unmerklich, Uriyan, Vadim
Makarov, Vkareh, Volfy, Vuong Ngan Ha, Vyse, WPIsFlawed, WannabeAmatureHistorian, Wbrameld, Wes!, WhatisFeelings?, Who-is-me, Wiki5d, Wikiborg, Wikiklrsc, WikipedianMarlith,
Wilson.canadian, WojPob, Wolfkeeper, Wonderstruck, Woohookitty, Wrs1864, Wsu-dm-a, Ww, XP105, Xnquist, Xompanthy, YUL89YYZ, Yakudza, Yekrats, Yintan, Yob kejor,
ZachPruckowski, Zeno of Elea, Ziko, Zntrip, Zoeb, Zundark, Zyarb, ‫لیقع فشاک‬, 729 anonymous edits
Bruce Schneier Source: http://en.wikipedia.org/w/index.php?oldid=369562332 Contributors: 213.253.39.xxx, AaronSw, AdamAtlas, Amillar, Andrei Stroe, Arkuat, Arnoldgibson,
Bayerischermann, Bignose, BjKa, Bongwarrior, Boxflux, Brentdax, Ciphergoth, Cybercobra, D6, DavidCary, Decrypt3, Djechelon, DonL, DreamGuy, Edward, Elonka, Emarr, Emijrp, Eric
Shalov, Fbriere, FitzSai, Gaius Cornelius, Gareth Jones, Gavia immer, Gbroiles, Gfbarros, Godsmoke, Goochelaar, Gorgonzilla, Grim Revenant, Gwern, HOT L Baltimore, Hairy Dude,
Hbobrien, Hurrystop, Hut 8.5, Imran, Ingolfson, Intgr, JBsupreme, Jclemens, Jesper Laisen, John K, Jonneroo, Kai-Hendrik, Kaso, Kaszeta, Kenyon, Kraenar, Kwamikagami, LapoLuchini, Leejc,
MarekTT, Matt Crypto, Meelar, MikeVitale, Mister Matt, Mitch Ames, Moonriddengirl, Mulad, Nigelm, Nikita Borisov, Nishkid64, Nowhither, Ntsimp, Olivier, Onorem, Pcap, Piet Delport,
Quelbs, Radagast83, Ravedave, Rfl, Rob T Firefly, Rockk3r, Schneelocke, Ser Amantio di Nicolao, Shaddack, Sheridbm, Siddhant, Sigma 7, SparqMan, Ssd, Stefanomione, SuperJumbo,
Tagishsimon, The Epopt, The enemies of god, Thumperward, Timothylord, Toleraen, Tomasf, Tony1, Tqbf, Tradfursten, TreasuryTag, Ventura, Vespristiano, Vicki Rosenzweig, Whkoh,
WojPob, Ww, 85 anonymous edits
Application security Source: http://en.wikipedia.org/w/index.php?oldid=377327680 Contributors: Aarnold200, Algae, Blackjackmagic, Bookbrad, Charles Matthews, DatabACE, Dcunited,
Dman727, Dosco, Eheitzman, Enric Naval, Felmon, Fhuonder, Frap, Friendlydata, Geofhill, Grabon, Ha runner, Halovivek, Hillel, Hnguyen322, Iridescent, JEMLA, JLEM, JYolkowski, Jnarvey,
JonHarder, Maurice Carbonaro, Maxgleeson, Mindmatrix, NEUrOO, Nickbell79, NielsenGW, Njan, Ohnoitsjamie, OnPatrol, OwenX, Pryderi100, Pseudomonas, Raysecurity, Rhobite, Robina
Fox, Rwwww, Sander Säde, SimonP, Slicing, Stationcall, Super n1c, Swtechwr, Tjarrett, Toutoune25, Tyler Oderkirk, Welsh, Wiscoplay, 59 anonymous edits
Application software Source: http://en.wikipedia.org/w/index.php?oldid=377263250 Contributors: 1776bob, A3RO, ARUNKUMAR P.R, AVRS, Aateyya, Abdull, Acalamari, Acroterion,
Ahoerstemeier, Aim Here, Akendall, Alansohn, Ale jrb, Aleenf1, Alexius08, Alisha0512, AlistairMcMillan, Alksub, Alpha 4615, AnakngAraw, Andreas Kaufmann, Android Mouse, Andy,
ArielGold, Aseld, Atlant, Auntof6, Austinm, Avenue, BRUTE, Ballchin2therevenge, Balloonguy, BarretBonden, Baudline, Bennity, BlueSquadronRaven, Bmanoman, Bogrady, Cacycle,
Caesura, Camw, Chad103, Chnt, Chris 73, Cireshoe, ClosedEyesSeeing, Codemaster, Cohesion, Colfer2, Computerapps, Conan, Connelly, Courcelles, CryptoDerk, D6, Danakil, DanielRigal,
Darth mavoc, Dbolton, Dead sexy, DeadEyeArrow, Dekisugi, Denisutku, Denoir, Der Mann Ohne Eigenschaften, DerHexer, Dmccreary, DocWatson42, Dominicpb, EagleOne, Echuck215,
EdTrist, Eeekster, Eekerz, Ehheh, Elvey, Emrrans, Epbr123, Equazcion, Er Komandante, Eraserhead1, Ewawer, Fabient, Foxandpotatoes, Frap, Freeformer, FreplySpang, Fvw, Gail, Geniac,
Georgecopeland, Giftlite, Gogo Dodo, Granburguesa, Grandscribe, Groovenstein, Gubbubu, Guoguo12, HammerHeadHuman, HappyInGeneral, Hessamnia, High5sw, Hmains, Hmrox, Hulsie,
Hutcher, II MusLiM HyBRiD II, Iamthenewno2, Ienpw III, Igoldste, Ilmari Karonen, Iranway, IvanLanin, J.delanoy, JCLately, JForget, James086, Jamesontai, Jantangring, Jarda-wien, Jeffsurfs,
JeremyA, Jisatsusha, JoshHolloway, Jovianeye, Jpbowen, Jpgordon, Jukcoder, Juliancecil, Juliancolton, Kbdank71, Kbdesai, Kenny sh, Kevin B12, Kevjudge, Khaosworks, Knee427, Kozuch,
Kwekubo, LeaveSleaves, M2Ys4U, MER-C, Machina.sapiens, Madcheater, Madhero88, Maghoul, Maian, Malafaya, Manop, Marclaporte, Mark Renier, Maximaximax, Mexcellent, Mikeo,
Minghong, Mipadi, Misteror, Mmxx, MrOllie, Mschel, Mushroom, Nickmalik, Nurg, Nzd, Octavabasso, Odie5533, Ohnoitsjamie, Oicumayberight, Ozgod, Pankaj.nucleus, Pearrari, PhJ,
PhilHibbs, Philip Trueman, Piano non troppo, Pilotguy, Pinkadelica, Pointillist, PsychoSafari, RJHall, Requestion, Rfc1394, Rgill, Ric4h4ard, Ronz, Rwwww, RxS, Sadalmelik, Seidenstud,
Seryo93, Silverbackwireless, SimonP, Simoneau, SixdegsNew, Sleepyhead81, Socrates1983, SpaceFlight89, Srleffler, StaticGull, Stephenb, Steven Walling, Stevenj, Stevietheman, Suffusion of
Yellow, T@nn, Tangotango, Tdangkhoa, Templarion, The Thing That Should Not Be, Thehelpfulone, ThiagoRuiz, Tom harrison, Tommy2010, Totakeke423, Traxs7, Treygdor, Triona, Twenex,
USPatriot, Universalss, Versageek, Vyvyan, Wernher, Wgoetsch, Wimt, Winkin, Wizard IT, Woodshed, Woohookitty, Xiong Chiamiov, Yahia.barie, Yaron K., Yintan, Yworo, Zquack, Zzuuzz,
‫لیقع فشاک‬, 591 anonymous edits
Software cracking Source: http://en.wikipedia.org/w/index.php?oldid=378211578 Contributors: 16@r, Ahoerstemeier, Akhristov, Alkivar, Andrewpmk, AndyBQ, Anna Lincoln, Antandrus,
Antoin, Artichoker, Avatar, Biasoli, Bithaze, Blanchette, Blazotron, Blx, Brandon, Bruce89, Buttboy666, Buzzert, CWY2190, Cain Mosni, Chris Chittleborough, Clayhalliwell, Clngre, ColinJF,
Commit charge, Conversion script, Cornlad, Craptree, CrazyRob926, Creek23, Crossmr, Cuchullain, Cynical, Cyrius, DLand, DMahalko, Dantheman531, Dark Chili, Davewho2, Dcsutherland,
Decltype, Dethme0w, DocWatson42, DragonHawk, Eiler7, Elsendero, Epbr123, Evil Monkey, Fabrib, Finell, Fintler, Flarn2006, Francs2000, Fratrep, Frecklefoot, Fuqnbastard, Ghettoblaster,
Godcast, Goplat, Gracefool, Grand Edgemaster, Graue, Greenrd, Gregzeng, Gyorokpeter, Hidro, Hm2k, Hoovernj, Horsten, Howardform, IAmAI, Ike9898, Inter, Itai, Ixfd64, J04n, JTFryman,
Jake Lancaster, Jclemens, Jeff3000, Joannna, Jons63, Joyous!, Kangy, Khalid hassani, Kjkolb, Kozuch, Lawl95, Lenoxus, LilHelpa, LittleOldMe, Lumpbucket, Magnus Manske, Marco
Guardigli, Mariah-Yulia, Matthewcieplak, MeteorMaker, Metromoxie, Mike Van Emmerik, Mmdoogie, Mmernex, Mushin, Nar Matteru, NawlinWiki, NewEnglandYankee, Njh, Nnp, Ogat,
Ohnoitsjamie, Ondra.pelech, Oscarthecat, OverlordQ, Oysterguitarist, Patcore, Paulmoloney, Pengo, PetrochemicalPete, Psiphim6, Psychonaut, RainbowOfLight, Rajah, Reisio, Rfc1394, Rhe br,
RickK, Rtc, Ruby.red.roses, Rudá Almeida, Rurik, SDC, Salman06p0020, Salvatore Ingala, Scienceman123, Sculptorjones, Sharebay, Shashankrlz, Sheogorath, Silverfish, Sjledet, Splash,
Splintax, Stephen Gilbert, Stephenb, Suncloud, Swedish Meatball, SyntaxError55, Taidawang, Taimy, Tempshill, The Cunctator, The Dark, Thechamelon, Thumperward, Toahan, Tombomp,
Tommy2010, Tsbay, Tucker001, Twilight Helryx, Ugur Basak, Varano, Vary, Viznut, Warren, Wereon, Wernher, Wiknerd, Worldmaster0, Yamamoto Ichiro, Yath, Zacatecnik, Zeerus, ‫א"ירב‬,
342 anonymous edits
Image Sources, Licenses and Contributors
File:Flughafenkontrolle.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Flughafenkontrolle.jpg License: GNU Free Documentation License Contributors: Art-top, BLueFiSH.as,
Bapho, Cherubino, Edward, JuergenL, MB-one, Voyager
File:Security spikes 1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Security_spikes_1.jpg License: Public Domain Contributors: User:Edward
File:Delta World HQ - entrance with security station.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Delta_World_HQ_-_entrance_with_security_station.JPG License: Creative
Commons Attribution-Sharealike 3.0 Contributors: User:Mav
File:Kate-at-fleshmarket.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Kate-at-fleshmarket.JPG License: Creative Commons Attribution-Sharealike 2.5 Contributors: Original
uploader was 2005biggar at en.wikipedia
Image:Wiktionary-logo-en.svg Source: http://en.wikipedia.org/w/index.php?title=File:Wiktionary-logo-en.svg License: unknown Contributors: User:Brion VIBBER
File:GatewayTracingHologramLabel.jpg Source: http://en.wikipedia.org/w/index.php?title=File:GatewayTracingHologramLabel.jpg License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:Newone
Image:Canadian Embassy DC 2007 002.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Canadian_Embassy_DC_2007_002.jpg License: Public Domain Contributors:
User:Gryffindor
Image:Security spikes 1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Security_spikes_1.jpg License: Public Domain Contributors: User:Edward
Image:1-Wire lock.jpg Source: http://en.wikipedia.org/w/index.php?title=File:1-Wire_lock.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Stan Zurek
Image:OxfordCCTV2006.jpg Source: http://en.wikipedia.org/w/index.php?title=File:OxfordCCTV2006.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Nikolaos S.
Karastathis
Image:Private factory guard.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Private_factory_guard.jpg License: Creative Commons Attribution 2.0 Contributors: ALE!,
FlickrLickr, FlickreviewR, Zolo, 1 anonymous edits
File:Encryption - decryption.svg Source: http://en.wikipedia.org/w/index.php?title=File:Encryption_-_decryption.svg License: GNU General Public License Contributors: user:odder
File:NewYorkCitySubwayEntranceInterior.jpg Source: http://en.wikipedia.org/w/index.php?title=File:NewYorkCitySubwayEntranceInterior.jpg License: Creative Commons Attribution 3.0
Contributors: User:Canadaolympic989
File:Access control door wiring.png Source: http://en.wikipedia.org/w/index.php?title=File:Access_control_door_wiring.png License: Public Domain Contributors: User:Andriusval
File:Intelligent access control door wiring.PNG Source: http://en.wikipedia.org/w/index.php?title=File:Intelligent_access_control_door_wiring.PNG License: Public Domain Contributors:
User:Andriusval
File:Access control topologies serial controllers.png Source: http://en.wikipedia.org/w/index.php?title=File:Access_control_topologies_serial_controllers.png License: Public Domain
Contributors: User:Andriusval
File:Access control topologies main controller a.png Source: http://en.wikipedia.org/w/index.php?title=File:Access_control_topologies_main_controller_a.png License: Public Domain
Contributors: User:Andriusval
File:Access control topologies main controller b.png Source: http://en.wikipedia.org/w/index.php?title=File:Access_control_topologies_main_controller_b.png License: Public Domain
Contributors: User:Andriusval
File:Access control topologies terminal servers.png Source: http://en.wikipedia.org/w/index.php?title=File:Access_control_topologies_terminal_servers.png License: Public Domain
Contributors: User:Andriusval
File:Access control topologies IP master.png Source: http://en.wikipedia.org/w/index.php?title=File:Access_control_topologies_IP_master.png License: Public Domain Contributors:
User:Andriusval
File:Access control topologies IP controller.png Source: http://en.wikipedia.org/w/index.php?title=File:Access_control_topologies_IP_controller.png License: Public Domain Contributors:
User:Andriusval
File:Access control topologies IP reader.png Source: http://en.wikipedia.org/w/index.php?title=File:Access_control_topologies_IP_reader.png License: Public Domain Contributors:
User:Andriusval
File:Access control door wiring io module.png Source: http://en.wikipedia.org/w/index.php?title=File:Access_control_door_wiring_io_module.png License: Public Domain Contributors:
User:Andriusval
Image:Computer-eat.svg Source: http://en.wikipedia.org/w/index.php?title=File:Computer-eat.svg License: Creative Commons Attribution-Sharealike 2.5 Contributors: User:Slady
File:Morris Worm.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Morris_Worm.jpg License: Creative Commons Attribution-Sharealike 2.0 Contributors: Go Card USA from
Boston, USA
File:Conficker.svg Source: http://en.wikipedia.org/w/index.php?title=File:Conficker.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Gppande
Image:PersonalStorageDevices.agr.jpg Source: http://en.wikipedia.org/w/index.php?title=File:PersonalStorageDevices.agr.jpg License: GNU Free Documentation License Contributors:
User:ArnoldReinhold
Image:PD-icon.svg Source: http://en.wikipedia.org/w/index.php?title=File:PD-icon.svg License: Public Domain Contributors: User:Duesentrieb, User:Rfl
File:Picture Needed.svg Source: http://en.wikipedia.org/w/index.php?title=File:Picture_Needed.svg License: Public Domain Contributors: Original uploader was Anthony5429 at en.wikipedia.
Later version(s) were uploaded by Headbomb, Zain.3nov, Flyingidiot, Biblionaif at en.wikipedia.
File:Firewall.png Source: http://en.wikipedia.org/w/index.php?title=File:Firewall.png License: GNU Free Documentation License Contributors: Bruno Pedrozo
Image:Gufw 9.04.PNG Source: http://en.wikipedia.org/w/index.php?title=File:Gufw_9.04.PNG License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Gaiterin
Image:Stachledraht DDos Attack.svg Source: http://en.wikipedia.org/w/index.php?title=File:Stachledraht_DDos_Attack.svg License: GNU Lesser General Public License Contributors:
User:Compwhizii
Image:spammed-mail-folder.png Source: http://en.wikipedia.org/w/index.php?title=File:Spammed-mail-folder.png License: GNU General Public License Contributors: Ascánder, Bawolff,
KAMiKAZOW, LordT, RJaguar3, 7 anonymous edits
Image:PhishingTrustedBank.png Source: http://en.wikipedia.org/w/index.php?title=File:PhishingTrustedBank.png License: Public Domain Contributors: User:Andrew Levine
Image:Phishing chart.png Source: http://en.wikipedia.org/w/index.php?title=File:Phishing_chart.png License: Public Domain Contributors: Original uploader was ZeWrestler at en.wikipedia
Image:CIAJMK1209.png Source: http://en.wikipedia.org/w/index.php?title=File:CIAJMK1209.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:JohnManuel
Image:DefenseInDepthOnion.png Source: http://en.wikipedia.org/w/index.php?title=File:DefenseInDepthOnion.png License: Public Domain Contributors: WideClyde
Image:Lorenz-SZ42-2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Lorenz-SZ42-2.jpg License: unknown Contributors: Avron, Dbenbenn, Sissssou
Image:Skytala&EmptyStrip-Shaded.png Source: http://en.wikipedia.org/w/index.php?title=File:Skytala&EmptyStrip-Shaded.png License: GNU Free Documentation License Contributors:
User:Chrkl
File:16th century French cypher machine in the shape of a book with arms of Henri II.jpg Source:
http://en.wikipedia.org/w/index.php?title=File:16th_century_French_cypher_machine_in_the_shape_of_a_book_with_arms_of_Henri_II.jpg License: Creative Commons Attribution-Sharealike
3.0 Contributors: User:Uploadalt
File:Encoded letter of Gabriel Luetz d Aramon after 1546 with partial deciphering.jpg Source:
http://en.wikipedia.org/w/index.php?title=File:Encoded_letter_of_Gabriel_Luetz_d_Aramon_after_1546_with_partial_deciphering.jpg License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:Uploadalt
Image:Smartcard3.png Source: http://en.wikipedia.org/w/index.php?title=File:Smartcard3.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Channel R
Image:International Data Encryption Algorithm InfoBox Diagram.svg Source:
http://en.wikipedia.org/w/index.php?title=File:International_Data_Encryption_Algorithm_InfoBox_Diagram.svg License: Public Domain Contributors: Original uploader was Surachit at
en.wikipedia
Image:Diffie and Hellman.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Diffie_and_Hellman.jpg License: GNU Free Documentation License Contributors: ArnoldReinhold,
MichaelMaggs, Phr, William Avery, Ö
Image:Firefox-SSL-padlock.png Source: http://en.wikipedia.org/w/index.php?title=File:Firefox-SSL-padlock.png License: GNU Free Documentation License Contributors: BanyanTree, Phr,
Sissssou, Stephantom
Image:Enigma.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Enigma.jpg License: Public Domain Contributors: user:Jszigetvari
Image:2008-09 Kaiserschloss Kryptologen.JPG Source: http://en.wikipedia.org/w/index.php?title=File:2008-09_Kaiserschloss_Kryptologen.JPG License: GNU Free Documentation License
Contributors: User:Ziko
Image:Bruce Schneier 1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Bruce_Schneier_1.jpg License: Creative Commons Attribution-Sharealike 2.0 Contributors: sfllaw
Image:OpenOffice.org Writer.png Source: http://en.wikipedia.org/w/index.php?title=File:OpenOffice.org_Writer.png License: GNU Lesser General Public License Contributors:
http://hacktolive.org/
License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/