Two of the attributes traditionally used in assessing risk are likelihood/probability or frequency* (P) and impact/consequence (I). Some limit themselves to evaluating the level of risk based on a single value of P x I. That is a mistake (see this earlier post on risk levels) and I will touch on an issue or two here.
Let's look at (P) and (I).
I have seen reports that predict that 80%-90% of organizations will suffer a breach in the next 12 months (based on the level of breaches in the last 12 months). But some will have a breach that affects only non-sensitive information and causes nothing worse than embarrassment – such as changes to their web page – while others will suffer very serious intrusions with significant damage.
How can you estimate which consequence your organization will suffer (going on the 90% likelihood that your organization will be breached)? How do you know that you won't have multiple breaches, by different actors, with different impacts in the next twelve months?
I think, if I were doing it, I would ask the information security professionals to consider the assets we are trying to protect, assess the strength of the defenses, and then estimate the likelihoods (plural) of severe, moderate, and lower impact (but still at least embarrassing) sets of consequences.
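That multi-scenario approach can be pictured in a few lines of code. This is a minimal sketch with entirely hypothetical numbers – the scenario names, likelihoods, and impact figures below are illustrative assumptions, not data from any report – showing why a single P x I number hides the shape of the risk:

```python
# Hypothetical risk profile: several consequence scenarios, each with its
# own annual likelihood (P) and estimated business impact (I).
# All numbers are invented for illustration only.
scenarios = [
    # (description, annual likelihood, estimated business impact in $)
    ("severe intrusion with significant damage", 0.05, 10_000_000),
    ("moderate breach of sensitive data",        0.25,  1_000_000),
    ("embarrassing but low-impact incident",     0.60,    100_000),
]

# Collapsing the profile into one number loses the distinction between
# a frequent nuisance and a rare catastrophe.
expected_annual_loss = sum(p * i for _, p, i in scenarios)

for name, p, i in scenarios:
    print(f"{name}: P={p:.0%}, I=${i:,} -> P*I=${p * i:,.0f}")
print(f"Combined expected annual loss: ${expected_annual_loss:,.0f}")
```

Note that the severe scenario dominates the combined figure even at 5% likelihood, which is exactly the information a single blended P x I score obscures.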
The estimation of 'damage' must be based on the impact to the business, not simply on some IT valuation of the information assets 'at risk'. How will the ability of the business to continue with its planned activities, including new initiatives, be affected? Can a value be placed on any reputation damage?
A troubling and complicating factor in the assessment is the duration of the breach and, possibly, the continuing damage it may be causing while it goes undetected.
According to several reports, many breaches are not detected until months after they occur – and often detected by third parties, not by the breached organization!
Further, it can take months to expel the invader and repair the defenses. I understood it took something like 6 months for JP Morgan Chase to get the intruders out of its system.
A new report, discussed in SC Magazine, has this to say:
"On average, nearly half a year passes by the time organizations in the financial services industry and the education sector remediate security vulnerabilities, according to new research from NopSec."
"According to the findings, organizations in the financial services industry and the education sector remediate security vulnerabilities in 176 days, on average. Meanwhile, the healthcare industry takes roughly 97 days to address bugs, and cloud providers fix flaws in about 50 days."
This has to be taken into account when assessing cyberrisk.
So, I would not limit the risk assessment to a single possible level of impact: there are multiple, each with a different likelihood/frequency. The impact level can be seriously affected by the duration of the intrusion and continuing damage to the enterprise – which needs to be built into the (I).
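The effect of duration on (I) can be sketched with a toy model. The per-day damage rate, base impact, and dwell times below are illustrative assumptions of mine, not figures from the reports cited above; only the ~176-day remediation average echoes the NopSec finding:

```python
def duration_adjusted_impact(base_impact, damage_per_day, dwell_days):
    """Toy model: total impact (I) = one-off breach cost plus ongoing
    damage accruing for each day the intruder remains undetected and
    unexpelled. All parameters are hypothetical."""
    return base_impact + damage_per_day * dwell_days

# Compare a breach contained within a week against one that lingers
# for roughly the 176-day average remediation time noted above.
quick = duration_adjusted_impact(500_000, 10_000, 7)    # $570,000
slow = duration_adjusted_impact(500_000, 10_000, 176)   # $2,260,000
print(f"7-day dwell:   ${quick:,}")
print(f"176-day dwell: ${slow:,}")
```

Even in this crude model, the lingering breach is roughly four times as costly – which is why dwell time belongs inside the (I) estimate rather than alongside it.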
I don't know whether it is possible to place a precise value on either (I) or its (P). The likelihood and severity of a breach are constantly changing.
What should not change, however, is the level of cyberrisk that an organization is willing to take. Since cyberrisk cannot be eliminated, and business has to continue, management and the board must recognize that some level of risk will remain and must be accepted. This needs to be known so that management can determine (a) whether the current level of risk requires treatment, and (b) how much investment should be made in prevention and detection.
Two points come immediately to mind when it comes to treating cyberrisk:
- It is essential to beef up the ability of the organization to detect an intruder who has succeeded in breaching the defenses
- It is critical to have response processes that can work promptly to limit any damage (including the duration of the breach and its effect), expel the intruder, understand what damage has occurred and how the defenses were breached, and communicate with all necessary and appropriate parties
ComputerWeekly.com published a piece on "the cyber security outlook for 2015" in which it identified this as a serious mistake organizations are making:
"Over-focusing on prevention and not paying enough attention to detection and response. Organisations need to accept that breaches are inevitable and develop and test response plans, differentiating between different types of attacks to highlight the important ones."
When the organization does not have effective, tested response capabilities, the (I) increases significantly.
An article on ZDNet got me thinking. It talks about a software product that helps with the response by searching for corporate data that has made its way onto the "dark web."
Once an organization has identified the information it wants to protect, should it proactively monitor the dark web to see if any of it appears – even before they are aware of a breach?
Do you have thoughts on this topic of assessing and treating cyberrisk?
*Frequency is used when an event is likely to occur multiple times a year.