Ask Smart Questions to Set Security Service Levels

If one does not know to which port one is sailing, no wind is favorable. - Seneca

CISOs face three key challenges that are specific to information security:

  • Security has proven nearly impossible to measure, and without objective, observer-independent measurement, CISOs find it difficult to show the Board the value of information security, link it to business objectives, or reach an agreement on “how much security is enough”. The most powerful levers for increased investment remain the need for regulatory compliance, highly publicized security incidents, or raising fear about the consequences of a security incident.
  • Increasing security requires an increase in investment, but the relationship is not linear; twice the investment does not produce twice the security or half the risk. The role of the CISO would be seen as more valuable if it could contribute to the bottom line of the organization by optimizing the investment in security.
  • Risk and Compliance are not enough; as popular as they are, a risk approach or a compliance approach, by themselves, cannot directly inform benefits realization or help tune the investment in security to the needs of the organization. Risk approaches are not objective, as different practitioners and different methods render results that are not comparable. Compliance approaches are not flexible enough to adapt to the needs of every organization.

THE SCIENTIFIC METHOD SHOWS A NEW WAY TO THINK ABOUT AND MEASURE SECURITY
The objective measurement of a certain attribute of a certain object should always render the same result, regardless of who makes the measurement, when the measurement is made, or what method is used to measure the attribute. For that reason, the scientific method uses Operational Definitions, as they provide almost complete independence from the observer, method, and timing. While nonscientific definitions define an attribute by its essence or nature, operational definitions define attributes by the method used to measure them. For example: “The weight of an object is the number and units that appear when that object is placed on a weighing scale”. Definitions of weight that are not operational, like “the amount of mass an object has”, are easier to understand intuitively, but do not enable objective measurement.
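As a small illustration (the names below are invented for this article, not taken from any standard), an operational definition can be written down as a measurement procedure that always returns a value together with its units:

    # Sketch only: an operational definition expressed as a measurement procedure.
    from dataclasses import dataclass

    @dataclass
    class Measurement:
        value: float
        unit: str  # the unit is part of the result, which is what makes results comparable

    class WeighingScale:
        """Stand-in for a real instrument; it simply returns a fixed reading here."""
        def read_kg(self, obj) -> float:
            return 72.5  # placeholder reading

    def measure_weight(scale: WeighingScale, obj) -> Measurement:
        """Operational definition: 'weight' is whatever this procedure returns."""
        return Measurement(value=scale.read_kg(obj), unit="kg")

    # "The amount of mass an object has" names no procedure, so two observers
    # have no way to obtain the same number from that definition.
    print(measure_weight(WeighingScale(), obj="parcel"))  # Measurement(value=72.5, unit='kg')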

HOW PROFESSIONALS CAN MEASURE SECURITY REQUIREMENTS OBJECTIVELY
A Security Requirement is an emergent property [1] (or attribute) that arises from a user using an information system. If the user or the information system does not exist, or the user does not use the information system, the security requirement does not arise.
Using this observation, we can measure the security requirement by asking the user Smart Questions that reduce our uncertainty [2] about the security requirement. The user who can provide the most relevant answers is the owner of the information system, who is normally an internal customer of the CISO. Every smart question (measurement procedure) becomes the operational definition of the security requirement. The difference between a common question and a Smart Question is that the latter naturally renders an answer with units of measurement. [3][5]
We can move between different levels of abstraction to unearth the security requirements that arise at different levels: the organization using all the information it handles, a user using a single application, or a department using a set of key systems will each have interrelated but distinct security requirements that can be measured.
CISOs can’t use traditional definitions like Confidentiality, Integrity, Availability, Possession, Utility, Risk, Authentication, Authorization, Audit, Non-Repudiation or Accountability for measuring security objectively, as the definitions of these concepts found in the most widely used standards are never operational.

BENEFITS PROFESSIONALS CAN GAIN FROM STARTING TO USE SMART QUESTIONS
There are several categories of Smart Questions: Secrecy (4 questions), Privacy (15), Availability (6), Expiration (3), Retention (4), Quality (4), Intellectual Property (8), and Technical Objectives (4).
For example, let’s consider three Secrecy scenarios. If we were to use traditional definitions of security, or any risk analysis method, the analysis of these scenarios would be open to interpretation by different practitioners. For the sake of example, we might get results like:

  1. Confidentiality: High / Risk: 25 (High)
  2. Confidentiality: Medium / Risk: 12 (Medium)
  3. Confidentiality: Low / Risk: 5 (Low)

This type of analysis does not render results with units or actionable lists, and we do not get clear success criteria that can drive management. When security requirements are analyzed using traditional methods and traditional definitions, what counts as an incident and what counts as success stays open to interpretation, which does not help to manage security. On the other hand, we can ask the owner of the secret the following Smart Questions related to information use:

  • a) Who would you want to share this information with, and for how long?
  • b) Who would you not want to share this information with?

We will obtain two lists: the audience that can be trusted with the secret, and everyone else. A very short list of possible answers for the same three scenarios could be:

  1. A: The Board / B: No one else.
  2. A: Internal users / B: External users.
  3. A: Everyone / B: No one.

In the first example, if the Board can’t access the information, that is an incident; if someone who is not on the Board can access the information, that is an incident as well. On the other hand, every time the Board can access the information, we have succeeded, and when someone who is not on the Board fails to access the information, again we succeed. In the same way that incidents are defined from success criteria, threats, vulnerabilities, weaknesses, and risks can be defined in relation to security requirements. The third example shows how these Smart Questions can be used universally. Every kind of system and every use of that system, no matter how restrictive or public, will give us success criteria that indicate both when we are providing value and when an incident has occurred.
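A minimal sketch of how the two answer lists translate directly into success and incident criteria (the function name and the example data are invented for illustration):

    # Sketch: evaluating observed access events against the owner's two lists.
    def classify_access(trusted_audience: set, actor: str, access_granted: bool) -> str:
        """Success and incident follow directly from the answers to the two questions."""
        if actor in trusted_audience:
            # The trusted audience must be able to access the information.
            return "success" if access_granted else "incident"
        # Everyone else must not be able to access it.
        return "incident" if access_granted else "success"

    # Scenario 1: "A: The Board / B: No one else."
    board = {"chair", "cfo", "ceo"}
    print(classify_access(board, "cfo", access_granted=True))             # success
    print(classify_access(board, "cfo", access_granted=False))            # incident
    print(classify_access(board, "external_user", access_granted=True))   # incident
    print(classify_access(board, "external_user", access_granted=False))  # success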
Let’s examine another use case. A company is evaluating a DLP solution. We use traditional questions and determine that the system has a Very High Risk. Should we use DLP or not? The answer will not be determined by the risk measurement or the confidentiality assessment, as this type of measurement cannot point to any solution as particularly suitable for the use case. Using Smart Questions, we might find that only three people use the system. This may drive the conversation towards a more economical solution, like air-gapping the system and relying on physical access control instead.

HOW PROFESSIONALS CAN AGREE SERVICE LEVELS FOR SECURITY
In connection with all the measured security requirements, we can proceed to agree on service levels for security. A Service Level for Security is the rate and cost of successes and incidents that are acceptable to the users or owners in a given period. Depending on the service level requested, we can find the investment that will be necessary to achieve it, and initiate a discussion in which we reach an agreement on the right balance between the investment made and the service level for security that we can commit to. Unrealistic service levels would require disproportionate investment [4]. After a period with an agreement in place, we will be able to show the value that information security brings, if we meet the security requirements repeatedly and within the agreed service level (both successes and incidents), and this can steer the subsequent period of investment in security towards greater value for the business and optimized investment.
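As an illustration of what such an agreement could look like in practice (a hedged sketch; the thresholds and counts are invented), a service level for security can be expressed as acceptable success and incident rates over a period and checked against the observed events:

    # Sketch: checking an observed period against an agreed Service Level for Security.
    from dataclasses import dataclass

    @dataclass
    class SecurityServiceLevel:
        period_days: int
        max_incidents: int       # incidents acceptable to the owner in the period
        min_success_rate: float  # fraction of observed events that must be successes

    def meets_service_level(sl: SecurityServiceLevel, successes: int, incidents: int) -> bool:
        total = successes + incidents
        success_rate = successes / total if total else 1.0
        return incidents <= sl.max_incidents and success_rate >= sl.min_success_rate

    # Example: at most 2 incidents per quarter, and 99.5% of events must be successes.
    agreed = SecurityServiceLevel(period_days=90, max_incidents=2, min_success_rate=0.995)
    print(meets_service_level(agreed, successes=12480, incidents=1))  # True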

HOW PROFESSIONALS CAN BENEFIT FROM USING SMART QUESTIONS
The benefits of using smart questions are multiple:

  1. Security requirements can be measured objectively, eliminating the variability that other methods suffer depending on how, by whom, or when the measurement is taken.
  2. Security requirements are relevant to their context. This greatly helps direct security efforts towards controls that are relevant to the business.
  3. Security requirements are immediately actionable, as the success criteria measured are specific enough to readily determine which security efforts will contribute, directly or indirectly, to meeting them.
  4. The communication barrier between security professionals and users disappears, as there is no need to explain specialist concepts to non-specialists.
  5. Agreeing service levels for security becomes easier, as we can link investment, controls, and security requirements.
  6. Demonstrating the value of security becomes easier, as the service levels make the expected return (meeting the security requirements) for the investment obvious.
  7. Information owners, who are often internal customers, can classify information in a way that is more significant for the business, as there are objective security requirements to use as a reference.

ENDNOTES
[1] Emergent properties in complex systems are attributes that their constituent parts don’t exhibit. For example, ripples in the sand on a beach are an attribute that individual sand grains and air don’t exhibit by themselves. If there were no sand, or no air, the ripples would not exist. Attributes of ripples are, for example, their height or separation.
[2] A Measurement is the procedure of obtaining a reduction in the uncertainty about a number that characterizes an attribute.
[3] Using questions for measurement is a well-known method in science whenever the human factor is present. Questions are used extensively in polls, and by the Delphi method, developed by the RAND Corporation in the late 1950s.
[4] Mayfield’s Paradox states that to keep everyone out of an information system requires an infinite amount of money, and to get everyone onto an information system also requires infinite money, while costs between these extremes are relatively low.
[5] Level of measurement is a classification that describes the nature of the information within the values assigned to variables. The best-known classification has four levels, or scales, of measurement: nominal, ordinal, interval, and ratio.
[6] Some examples of Smart Questions. This is a small subset of all the known Smart Questions:

  • Under what circumstances should the information be destroyed?
  • When should the information be destroyed? When does this length of time start counting?
  • Who should control personal information, and for how long?
  • Who should not control personal information?
  • What are the valid uses of the personal information?
  • Should it be possible to identify the owner of the personal information?
  • How recent does the information need to be in order to be valid?
  • When is the information system supposed to be up and working?
  • What is the minimum acceptable performance of the information systems measured in outputs per input per unit of time?
  • What is the longest downtime of the information systems that would be acceptable?
  • What is the shortest uptime of the information systems that is acceptable?
  • In the event of information system downtime, how many transactions can be lost?

Future ransomware will attack Trust in your Data, not Access

I knew it. It is too late to say it now, but I knew a ransomware worm attack was going to happen. Really. And I feel so bad about not writing about it that I need to make forecasts of other things to come in the world of malware attacks. I am sure I was not the only one who knew.

No, the recent WannaCry attack was not the largest infection in history. Conficker, Slammer, and ILoveYou infected more computers and perhaps created more damage. Why did WannaCry have to happen? Because it could.

For the last few years we have seen ransomware distributed using phishing and drive-by downloads. It was just a question of time before someone connected the dots and thought of creating a ransomware worm.

Many have now learnt something that had been forgotten: vulnerabilities need to be patched. As the consequences of not patching are not immediately apparent, and the consequences of not testing the restore of backed-up data are not immediately apparent either, for many IT teams it became acceptable not to patch and not to test. For the next few months, this will no longer be the case. After that, managers will have new worries or will follow new fads, IT personnel will move on to new jobs, and in two or three years a new worm will shake the world.

Just as IT learns how to prevent worm attacks, attackers will learn from their mistakes. The WannaCry writers made several mistakes:

  • The infection spread to companies that were not the original target.
  • The infection spread too fast: This attracted attention and the response was relatively fast and effective.
  • There was a bug in the code that was supposed to prevent the code from being sandboxed and analyzed. It was used, albeit unintentionally, as a kill switch for additional infections.
  • The number of bitcoin accounts was too low to track who made an individual payment. This clearly indicates that they were not aiming for multiple targets.

The interest of the ransomware attackers is that the infection is discovered quickly after some useful data has been encrypted, but not before. It is in their best interest that the ransom claimed is low enough to entice payment, and that a sense of urgency is created by adding a time limit for the payment. It is in their interest that antivirus measures don't detect them, and that whether a system is patched or not does not stop the attack. How will they achieve these goals?

  • Future ransomware attacks will trigger out of business hours instead of upon infection. As the data is not being actively used, the amount of data encrypted will be larger.
  • Future ransomware worms will spread using multiple channels: Mail, Bluetooth, LAN, drive-by downloads, social networks.
  • Future ransomware will target narrower and narrower targets more and more accurately, exploiting known vulnerabilities that have not been patched according to information collected by "malware scouts".
  • Future ransomware will stop encrypting data. Instead, file names and contents, and especially database contents, will be subtly changed over several days. This will render backup copies useless, and will diminish trust in the information so much that payment will be inevitable. Remember that data encryption is only used to prevent access to the data. Destroying the TRUST in the data will be even more damaging.

I would not be surprised if the change log is recorded, encrypted, in a blockchain-based ledger.

What gives data value is the cost of data acquisition, storage and processing, and the quality of the data (to what degree can you trust it?). Young data is of relatively little value if it can be acquired again. Very old data may have become obsolete. Business-quality data can be very expensive to replicate or validate. This is where the future ransomware will hit. Among all data types, dates are particularly vulnerable, as you can change them without losing credibility. Think of the damage of not knowing whether the contract renewal dates of your clients are correct or not. What about the appointments of all your patients?

Inevitably, when this attack becomes common, companies will get ransom claims when no data has been changed. Will this be called bluffware?

And finally, attackers may stop using bitcoin. They may move on to the stock market and demand that the attack be published, trusting to profit from the predictable changes in the stock value caused by the company being in the news.

What can you do to prevent being a victim of this future ransomware?

  • Implement highly mature security processes that stay in place after changes in management or personnel.
  • Educate your users.
  • Keep backup copies. Check periodically that restores work.
  • Keep your systems up to date with security patches.
  • Keep your systems protected with updated antivirus.
  • Monitor and log all changes to your business-grade data (a minimal sketch of this kind of change monitoring follows this list).
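A minimal sketch of that kind of change monitoring, assuming records can be hashed and compared against a previously stored baseline (all names and data below are illustrative; a real deployment would also protect the baseline itself, for example in append-only storage):

    # Sketch: detecting silent tampering by comparing record hashes to a stored baseline.
    import hashlib
    import json

    def fingerprint(record: dict) -> str:
        """Stable hash of a record's contents."""
        canonical = json.dumps(record, sort_keys=True).encode("utf-8")
        return hashlib.sha256(canonical).hexdigest()

    def detect_changes(baseline: dict, current_records: dict) -> list:
        """Return the IDs of records whose contents no longer match the baseline."""
        return [rid for rid, rec in current_records.items()
                if baseline.get(rid) != fingerprint(rec)]

    # Example with invented data: a contract renewal date silently altered.
    records = {"c-1001": {"client": "ACME", "renewal": "2024-03-01"}}
    baseline = {rid: fingerprint(rec) for rid, rec in records.items()}
    records["c-1001"]["renewal"] = "2025-03-01"  # a subtle, plausible-looking change
    print(detect_changes(baseline, records))      # ['c-1001']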

I sincerely think that this is the future of ransomware, but as a professional, I hope this time I am wrong.


WannaCry or Conficker: How to prevent worms in real life

There is plenty of published info about WannaCry; I am not replicating any of it here. How can your company avoid being hit? It is simple and it is complicated. First, we need to understand why companies don't apply patches:

  1. They don't know it should be done.
  2. They feel they are too busy to do it.
  3. They feel it creates issues, with no obvious benefit.
  4. They don't do it often enough.
  5. There are no immediate drawbacks to stopping patching; eventually it becomes normal not to do it.
  6. The people responsible for it move on to new jobs, and the new ones don't get promoted or rewarded for doing it. Why bother?

Preventing worms is a team effort between the Systems team and the Security team. The Security team is responsible for monitoring new vulnerabilities and patches, and for handing that information over to the Systems team.

Back in 2008, my team and I stopped Conficker from affecting Bankia's systems.

(From Wikipedia): Conficker, also known as Downup, Downadup and Kido, is a computer worm targeting the Microsoft Windows operating system that was first detected in November 2008. It uses flaws in Windows OS software and dictionary attacks on administrator passwords to propagate while forming a botnet, and has been unusually difficult to counter because of its combined use of many advanced malware techniques. The Conficker worm infected millions of computers including government, business and home computers in over 190 countries, making it the largest known computer worm infection since 2003.

The Systems team applied patches in periodic batches, for servers and workstations. It is the only reasonable way to do it in a large estate. What lazy Security teams do is forward everything immediately to Systems, and shift the blame to them if the patches are not applied. This is the Cry Wolf approach. We forwarded nothing. We simply requested inclusion in the next batch of security patches, with one exception: remotely executable vulnerabilities affecting the most used OS in the bank.

We requested urgent application of patches once or twice a year. Because we did not request it often, the Systems team listened to us when we did. When the patch that prevented Conficker came along, we asked for it to be applied immediately. And it was.
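A sketch of that triage rule (the field names and data structures are invented for illustration):

    # Sketch of the patch-triage rule described above; field names are invented.
    from dataclasses import dataclass

    MOST_USED_OS = {"windows"}  # the dominant OS in this hypothetical estate

    @dataclass
    class Vulnerability:
        cve_id: str
        remote_code_execution: bool
        affected_os: str

    def triage(vuln: Vulnerability) -> str:
        """'urgent' is requested out of band; everything else joins the next batch."""
        if vuln.remote_code_execution and vuln.affected_os.lower() in MOST_USED_OS:
            return "urgent"
        return "next-batch"

    print(triage(Vulnerability("CVE-2008-4250", True, "Windows")))  # urgent (the MS08-067 class exploited by Conficker)
    print(triage(Vulnerability("CVE-0000-0000", False, "Linux")))   # next-batch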

Bankia was never affected by Conficker. This did not make the news.

Patching should be done. And it should be boring.

Avoid getting your organization in the news. Find a way to collaborate with your Systems team.
