Information Security Paradigms
by Vicente Aceituno Canal
Information security is complex, isn’t it? Confidentiality, Integrity, Availability, Non Repudiation, Compliance, Reliability, Access Control, Authentication, Identification, Authorization, Privacy, Anonymity, Data Quality and Business Continuity are some concepts that are often used.
It is very difficult to define security. Why? The reasons are manifold:
Information systems are very complex: they have both structural and dynamic aspects. Unix abstracts these aspects with the file/process dichotomy. Generally speaking, information systems are structured as information repositories and interfaces, connected by channels both physical and logical. Interfaces interconnect information systems, facilitate input and output of information, and interact with users. Repositories hold information either temporarily or permanently. The dynamic aspect of an information system is made up of the processes that produce results and exchange messages through channels.
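To make this decomposition concrete, here is a minimal sketch in Python; the names Repository, Interface, Channel, Process and InformationSystem are mine, chosen only to illustrate the structural and dynamic aspects described above, and are not taken from any standard.

    # Illustrative model of the structural aspects (repositories, interfaces,
    # channels) and the dynamic aspect (processes) of an information system.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Repository:           # holds information, temporarily or permanently
        name: str
        permanent: bool

    @dataclass
    class Interface:            # interconnects systems, handles input/output and users
        name: str

    @dataclass
    class Channel:              # physical or logical link between components
        name: str
        logical: bool

    @dataclass
    class Process:              # produces results and exchanges messages through channels
        name: str
        reads_from: List[str] = field(default_factory=list)
        writes_to: List[str] = field(default_factory=list)

    @dataclass
    class InformationSystem:
        repositories: List[Repository]
        interfaces: List[Interface]
        channels: List[Channel]
        processes: List[Process]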
Information systems process data, but data is not information. The same information can be rendered as binary data using different formats, with very different ratios of data to information. The importance of a single bit of data therefore depends on how much information it represents.
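A small sketch of this point, with a made-up example: the same piece of information (“the alarm is on”) rendered as data of very different sizes.

    # The same information encoded three ways; the data-to-information
    # ratio changes, the information does not.
    import json

    as_bit  = b"\x01"                                  # 1 byte
    as_json = json.dumps({"alarm": "on"}).encode()     # ~15 bytes
    as_xml  = b"<status><alarm>on</alarm></status>"    # 34 bytes

    for label, blob in (("bit", as_bit), ("json", as_json), ("xml", as_xml)):
        print(label, len(blob), "bytes for the same information")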
Security is not a presence, but an absence. As long as there are no known incidents, organisations can confidently say that they are safe.
Whether an incident that goes totally undetected is still an incident is a matter of debate.
Security depends on the context. An unprotected computer connected directly to the Internet in 1990 was not as safe as one connected to a company’s network in 2005, or one totally isolated. We can be safe when there are no threats, even if we don’t protect ourselves. So security depends on the context.
Security costs money. Any realistic definition must consider the cost of protection, as there is a clear limit on how much we should spend protecting an information system. The expenditure depends both on how much the system is worth to us and on the available budget.
Finally, security depends on our expectations about our use of systems. The higher the expectations, the more difficult they are to meet. A writer who keeps everything he has ever written on a computer and someone who has just bought a computer will have totally different expectations. The writer’s expectations will be harder to meet: he might expect his hard drive to last forever, so a crash can mean catastrophe, while the newly bought computer’s hard drive can be replaced with little hassle. The writer’s expectations are independent of his knowledge of the system. The system is sold as an item that holds data, and the writer expects the data to stay there as long as he wants it to. Expectations about the use of a system are not technical expectations about the system. We expect a car to take us places in summer or winter, and no matter how much you know about cars, they usually do. In the same way, users expect systems to serve a purpose, and their expectations can’t be dismissed as unrealistic or as based on ignorance of how reliable computer systems are.
A good security definition should assist in the processes related to protecting an information system, for example:
1. Find what threats are relevant to us.
2. Weigh the threats and measure the risk.
3. Select security measures we can afford that reduce the risk to an acceptable level at the lowest cost.
Unfortunately, the definitions currently in use are not up to this task and, worse still, they are not helpful for advancing information security knowledge. Ideally, a security definition should be compatible with the scientific method, as it is the best tool for the advancement of empirical knowledge. Scientific theories are considered successful if they:
• Survive every falsification experiment tried.
• Explain an ample spectrum of phenomena, becoming widely usable.
• Facilitate the advance of knowledge.
• Have predictive power.
The demarcation criterion used to distinguish scientific from pseudo-scientific theories is based on Karl Popper’s falsifiability. If a theory is falsifiable, it is possible to think of an experiment that refutes or confirms it. For example, the theory that Koch’s bacillus causes tuberculosis is falsifiable, because it is possible to design an experiment exposing one of two different populations of animals to the germ. If only the exposed animals get infected, the theory seems confirmed, but what makes the experiment valid is that, if both populations got infected, the theory would be shown to be false, because the cause of the illness would have to be something other than the germ.
The definitions in use normally don’t state their scope and point of view. From now on I will assume an information technology point of view, within the scope of a company.
Let’s have a look at the four main approaches to defining security:
1. Security as the set of security measures.
2. Security as keeping a state.
3. Security as staying in control.
4. CIA and its derivatives.
The first approach is easy to debunk. If security were the set of security measures, a bicycle with a lock would be just as safe in the English countryside as in Mogadishu, but it is not. It is interesting that Bruce Schneier has so often been misquoted. “Security is a process, not a product” doesn’t mean that security is impossible to achieve, a reading favoured by those who think that being secure is the same as being invulnerable. Read in context, the quote means that security is not something you can buy; it is not a product. Security is NOT the set of security measures we use to protect something.
The second approach states that security is a state of invulnerability, or the state that results from protection. Examples of proponents of this approach are:
• Gene Spafford: “The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts.”
• RFC2828 Internet Security Glossary:
o Measures taken to protect a system.
o The condition of a system that results from the establishment and maintenance of measures to protect the system.
o The condition of system resources being free from unauthorised access and from unauthorised or accidental change, destruction, or loss.
The approach that equates security with invulnerability is purely academic and can’t be applied to real systems, because it neglects to consider that security costs money. Invulnerability leads to protection from highly unlikely threats at a high cost. It is related to very uncommon expectations, and it focuses on attacks, neglecting protection from errors and accidents.
The third approach, staying in control, is akin to keeping Confidentiality, defined as the ability to grant access to authorised users and deny access to unauthorised users, so this approach can be considered a subset of the CIA paradigm. It states that security is “to stay in control” or “protecting information from attacks”. Examples of proponents of this approach are:
• William R. Cheswick: “Broadly speaking, security is keeping anyone from doing things you do not want them to do to, with, or from your computers or any peripherals”
• INFOSEC Glossary 2000: “Protection of information systems against access to or modification of information, whether in storage, processing or transit, and against the denial of service to authorised users, including those measures necessary to detect, document, and counter such threats.”
• Common Criteria for Information Technology Security Evaluation - Part 1: “Security is concerned with the protection of assets from threats, […] in the domain of security greater attention is given to those threats that are related to malicious or other human activities.”
Some access control mechanisms used to achieve Confidentiality are often taken as part of security definitions (a minimal sketch of how they differ follows this list):
• Identification is defined as the ability to identify a user of an information system at the moment he is granted credentials to that system.
• Authentication is defined as the ability to validate the credentials presented to an information system at the moment the system is used.
• Authorisation is defined as the ability to control what services can be used and what information can be accessed by an authenticated user.
• Audit is defined as the ability to know what services have been used by an authorised user and what information has been accessed, created, modified or erased, including details such as when, from where, etc.
• Non repudiation is defined as the ability to assert the authorship of a message or information authored by a second party, preventing the author from denying his own authorship.
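As promised above, here is a minimal sketch of how these mechanisms differ, using a hypothetical in-memory example; the user names, passwords and actions are invented for illustration only.

    # Identification: alice was given credentials when her account was created.
    # Authentication, authorisation and audit are separate steps on each use.
    import datetime

    users = {"alice": {"password": "s3cret", "roles": {"reader"}}}
    audit_log = []

    def authenticate(user, password):
        """Authentication: validate the credentials presented to the system."""
        return user in users and users[user]["password"] == password

    def authorise(user, action):
        """Authorisation: control which services an authenticated user may use."""
        required_role = {"read_report": "reader", "delete_report": "admin"}
        return required_role.get(action) in users[user]["roles"]

    def audit(user, action, allowed):
        """Audit: record who did what, when, and whether it was allowed."""
        audit_log.append((datetime.datetime.now(), user, action, allowed))

    if authenticate("alice", "s3cret"):
        allowed = authorise("alice", "delete_report")   # False: alice is not an admin
        audit("alice", "delete_report", allowed)
    print(audit_log)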
This has led to different mixes of CIA and these security mechanisms (ACIDA, CAIN, etc.). As these definitions mix the definition of security with the protection mechanisms used to achieve it, I won’t bother debunking them any further.
CIA is the fourth and most popular approach to defining security: “keeping confidentiality, integrity and availability”, defined as:
• Confidentiality, already defined, sometimes mistaken for secrecy.
• Integrity, defined as the ability to guarantee that some information or message hasn’t been manipulated.
• Availability is defined as the ability to access information or use services at any moment we demand it, with appropriate performance.
Examples of proponents of this approach are:
• ISO17799: “Preservation of confidentiality, integrity and availability of information”
o Confidentiality: Ensuring that information is accessible only to those authorised to have access.
o Integrity: Safeguarding the accuracy and completeness of information and processing methods.
o Availability: Ensuring that authorised users have access to information and associated assets when required.
• INFOSEC Glossary 2000: “Measures and controls that ensure confidentiality, integrity, and availability of information system assets including hardware, software, firmware, and information being processed, stored, and communicated”
This popular paradigm classifies incidents and threats by their effects, not their causes, and is therefore not falsifiable. For example, a classification of illnesses into fevergenic, paingenic, swellgenic and exhaustiongenic is complete, but not falsifiable, because what illness doesn’t produce fever, pain, exhaustion or swelling?
It is telling that, in this example, a change in skin colouration doesn’t fit any of the categories. A doctor using that paradigm will incorrectly classify it as fevergenic (“It’s a local fever”) or swellgenic (“It’s a micro-swelling”). In the same way, professionals who don’t question the CIA paradigm classify a loss of synchronisation as an integrity problem (“time information has been changed”), while it is clear that only stateful information, like a file or a database, can have the property of integrity.
It is impossible to think of an experiment that shows an incident or a threat not to belong to one of the confidentiality, integrity or availability categories. Therefore the CIA paradigm is unscientific.
There are several examples of incidents that are not well treated using CIA, yet appear to fit within the paradigm. Uncontrolled permanence of information can lead to Confidentiality loss. Copying information in violation of authorship rights can lead to Confidentiality loss, as someone who is not authorised is getting access; copying in violation of privacy rights can lead to Confidentiality loss for the same reason. Now, what are these CIA classifications good for? To prevent “confidentiality” incidents, our controls will be very different depending on whether we want to limit access, prevent the breach of authorship rights, or guarantee the erasure of information. So why are we classifying at all, if the classification doesn’t help with something as simple as selecting a security measure? Other examples of incidents that don’t fit CIA are operator errors and fraud. To neutralise a threat, a control that regulates the causes of the threat will normally be needed; therefore, for control selection, it would be far more useful to classify by causes than by effects, which is exactly what CIA doesn’t do.
CIA doesn’t consider the context at all. This is why small and medium-sized organisations are intimidated by the demands of Confidentiality, Integrity and Availability and give up on devoting enough resources to security. Only big organisations aim for Confidentiality, Integrity and Availability.
CIA doesn’t consider our expectations about our information systems. You can’t demand confidentiality of public information, like www.cnn.com news; you can’t demand integrity of low-durability information, as it is too easy to reproduce; and you can’t demand availability of low-priority services.
Many practitioners who use the CIA definition take the stance of “we want to prevent attacks from succeeding”; in other words, for them being safe is equivalent to being invulnerable. Under this light, the definition of an incident is totally independent of the context and considers attacks only, neglecting accidents and errors. Disaster Recovery Plans show that the need to protect a company from catastrophes is well known, yet many accidents are treated as a reliability issue rather than a security issue.
So, if no current information security definition or paradigm is satisfactory, what can replace them? An interesting alternative is the use of an operational definition. An operational definition uses the measuring process as the definition of the measured quantity. For example, a meter is defined operationally as the distance travelled by a beam of light in a certain span of time. An example of the need for operational definitions is the collapse of the West Gate Bridge in Melbourne, Australia in 1970, which killed 35 construction workers. The subsequent enquiry found that the failure arose because engineers had specified the supply of a quantity of flat steel plate. The word “flat” in this context lacked an operational definition, so there was no test for accepting or rejecting a particular shipment or for controlling quality.
Before detailing the operational definition, a few words about probability. Probability has predictive power under the following conditions:
• As long as systems and the environmental conditions don’t change, the future is similar to the past.
• Probability applies to sets of phenomena, not to individual phenomena.
• A sufficiently large set of historical cases must be available for probability calculations to be significant.
Probability is often misunderstood. If you toss a coin nine times and get nine tails, the probability of getting tails on the tenth toss is still ½, not lower as intuition suggests. Quite the opposite: the more tails we get, the higher our confidence should be that the next toss will be tails too, unless we have previously tested the coin with several runs of ten tosses and got tails about five times out of ten overall, meaning the coin is “mathematically fair”.
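A quick sketch of the first point: simulating a fair coin shows that, even among runs that start with nine tails, the tenth toss comes up tails only about half the time.

    # Simulate runs of ten tosses of a fair coin and keep only the runs
    # that start with nine tails; the tenth toss is still tails ~50% of the time.
    import random

    random.seed(0)
    tenth_tails = kept_runs = 0
    while kept_runs < 1_000:
        run = [random.random() < 0.5 for _ in range(10)]   # True means tails
        if all(run[:9]):                                   # nine tails in a row
            kept_runs += 1
            tenth_tails += run[9]
    print("P(tails on 10th toss | nine tails) ~", tenth_tails / kept_runs)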
An operational definition for information security is: “The absence of threats that can affect our expectations about information systems equivalently protected in equivalent environments”. Security is something that you get, not something that you do.
In practice, threats are always present. This is the reason perfect security is not possible, which is perfectly consistent with the operational definition. It also shows how invulnerability and security differ: the definition, put into practice, shows invulnerability to be unfeasible.
Expectations about a system are expectations about the use of the system, not expectations about how it would respond to an attack; they therefore remain the same even if new vulnerabilities are discovered.
This operational definition is not only falsifiable; it is expectations-dependent and deals cleanly with the difficulties of context. It is helpful for determining what threats are relevant, weighing the threats, measuring the risk and selecting security measures.
Operational means “working definition”, in the same sense as the meter example above.
The following definitions of incident and threat follow from the operational definition:
• Incident: “Any failure to meet our expectations about an information system.” This definition makes our expectations the pivotal point of what should be protected.
• Threat: “Any historical cause of at least one incident in an equivalent information system.” This implies that the probability is not zero, and it brings in the context. This is an operational, “working” definition. Zero-days are considered threats, as they belong to the category of malicious code, which is known to have caused incidents in the past. Whether a “threat” that never causes an incident is a threat at all is a matter of debate.
The threats relevant to an information system will be the causes of historical incidents in information systems protected equivalently in equivalent environments. Insecurity can be measured by the cost of historical incidents over a span of time for every information system equivalently protected in an equivalent environment.
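A minimal sketch of that measurement, using made-up incident figures for a five-year span:

    # Insecurity measured as the average yearly cost of historical incidents
    # for systems protected equivalently in equivalent environments.
    incident_costs = {2001: 300, 2002: 0, 2003: 800, 2004: 450, 2005: 950}  # euros per year

    span_in_years = len(incident_costs)
    insecurity = sum(incident_costs.values()) / span_in_years
    print(f"Insecurity: {insecurity:.0f} euros of incident cost per year")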
Many companies have these general expectations about their information systems and the way they are used:
• Comply with existing legal regulations.
• Control the access to secrets and information or services protected by law, like private information and copyrights.
• Identify the authors of information or messages and record their use of services.
• Make the users responsible for their use of services and acceptance of contracts and agreements.
• Control the physical ownership of information and information systems.
• Control the existence and destruction of information and services.
• Control the availability of information and services.
• Control the reliability and performance of services.
• Control the precision of information.
• Reflect the real time and date in all their records.
Every organisation will have a different set of expectations, which leads to different sets of incidents to protect against and different sets of threats to worry about, depending on the environment. The more specifically the expectations are defined, the easier it becomes to determine the threats to them and the security measures that can protect them.
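As an illustration only, with hypothetical entries, the mapping from specific expectations to the historical threats against them and the candidate controls could look like this:

    # Each expectation maps to the threats that have historically defeated it
    # and to the controls that address those causes (entries are invented).
    expectations = {
        "Control access to secrets": {
            "threats":  ["credential theft", "misconfigured permissions"],
            "controls": ["access control lists", "periodic permission reviews"],
        },
        "Control the availability of services": {
            "threats":  ["hardware failure", "denial of service"],
            "controls": ["redundant servers", "capacity planning"],
        },
    }

    for expectation, detail in expectations.items():
        print(expectation, "->", ", ".join(detail["controls"]))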
To determine how relevant the threats are, it is necessary to gather historical data about incidents in equivalent systems in equivalent environments. Unfortunately, whereas the insurance industry has been doing this for years, information security practitioners lack this statistical information. It is possible to know the likelihood and causes of a car accident, but there is not enough data to know how likely an information security incident is, nor what would cause it. Quantitative risk measurement without proper historical data is useless. Some practitioners even feed estimated figures into complex formulae, which is equivalent to mixing magic and physics.
Even if there is no accurate data about risk, it is possible to follow a risk assessment process similar to OCTAVE to identify the expectations about the information systems and the significant threats that can prevent those expectations from being met.
With the operational definition, every identified threat can be controlled using suitable security measures. If quantitative risk information is available, the most cost-efficient security measures can be selected.
Previously unknown threats can be controlled using impact-reduction security measures, such as backups, which are effective against a wide spectrum of threats.
The operational definition of an incident helps to focus on what is relevant to our context. If there is no expectation of secrecy, no matter what is revealed, there is no incident. The operational definition of a threat helps to focus on threats that are both relevant and likely. It doesn’t make much sense to consider meteors a threat if no information system has ever been destroyed by a meteor. Measuring insecurity by the cost of incidents helps to gauge how much to invest in information security. If our expenses protecting information systems for the last five years were 10,000 euros a year, and our losses were 500 euros a year, it probably doesn’t make sense to raise the budget to 20,000 euros, but to 10,500 at most. Of course this is a gross estimate, but it gives an idea of what could be achieved if statistics on the cost of incidents and their causes were available.
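The budget reasoning above, written out with the article’s illustrative figures; the ceiling is simply current spend plus yearly losses, since spending more than that cannot pay for itself.

    # Gross estimate: yearly losses cap how much extra protection is worth buying.
    yearly_spend  = 10_000   # euros spent on protection per year
    yearly_losses = 500      # euros lost to incidents per year

    reasonable_ceiling = yearly_spend + yearly_losses
    print(f"Raising the budget beyond ~{reasonable_ceiling} euros/year is hard to justify")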
The operational definition is richer than the other paradigms: it addresses expectations, context and cost, and it makes it far easier to determine what security measures to take to protect the expectations placed on an information system. The adoption of a falsifiable definition should enable some progress in information security theory.