I was reading my copy of the Spring 2017 issue of Enterprise Risk, the official magazine of the Institute of Risk Management, when I started getting annoyed.
This is usually a good product, representing a fine association that focuses on enterprise risk management (as opposed to insurance, contingency planning, and other forms of risk management).
But this time, it said some things that I would call "mistakes."
The magazine has a generally useful "Trending" section with infographics. This issue covered four topics, one of which was very useful (on cyberattacks and their consequences) and one of which made no sense to me at all.
"It's getting more difficult to forecast risk," according to the magazine.
You can't forecast risk!
It's always an educated guess, at best. At worst, it's a gut feeling. But the idea that you can accurately and confidently assess the likelihood of an event or situation with a specified set of consequences is nonsense.
"Forecast" is not a sound word.
If they had talked about assessing the level of risk with an acceptable degree of confidence, that would have meant something.
In fact, that idea, that there is a level of confidence in your assessment, is something I address in World-Class Risk Management.
The issue has an article on "Seeing the bigger picture." The idea is that visualization tools can enhance the value of a heat map. I like the idea of showing the interrelationships among different risks, but there's a huge and perhaps insurmountable problem: They are "putting lipstick on a pig."
Heat maps are a problem! They assume that there is a single point for a risk level, where one axis is probability and the other is impact (using the terms in the article).
But that is wrong. It's a common mistake, but it's still a mistake.
If you tried to plot a risk on a heat map, you would not get a point — you would have a range.
When you consider any potential event or situation, there are multiple possibilities and not one.
For example, let's take the possibility of a fluctuation in the rate of exchange between the Euro and the U.S. dollar.
It's almost certain that there will be some level of change between the opening and closing levels. But the change could be anywhere from 0.0001 percent to 2 percent, or from -0.0001 percent to -2 percent (assuming the likelihood of any larger move is infinitesimal, an assumption that may not always be valid). There are different levels of likelihood for each degree of change.
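To make that concrete, here is a tiny sketch that treats a day's EUR/USD change as a random draw. Everything in it is invented for illustration: the normal distribution and the 0.5 percent standard deviation are assumptions, not figures calibrated to any real market data. The point is simply that plotting this "risk" gives you a range of outcomes, each with its own likelihood, rather than a single point.

```python
import random

# Assumed model for illustration only: a day's EUR/USD change drawn
# from a normal distribution (mean 0, standard deviation 0.5 percent).
random.seed(42)
changes = [random.gauss(0.0, 0.5) for _ in range(100_000)]

# Bucket the simulated changes: each degree of change has its own
# likelihood -- a range of outcomes, not a single point on a heat map.
buckets = {"under 0.5%": 0, "0.5% to 1%": 0, "1% to 2%": 0, "over 2%": 0}
for c in changes:
    magnitude = abs(c)
    if magnitude < 0.5:
        buckets["under 0.5%"] += 1
    elif magnitude < 1.0:
        buckets["0.5% to 1%"] += 1
    elif magnitude < 2.0:
        buckets["1% to 2%"] += 1
    else:
        buckets["over 2%"] += 1

for label, count in buckets.items():
    print(f"{label}: {count / len(changes):.1%}")
```

Under these assumed numbers, small moves are common and moves beyond 2 percent are vanishingly rare, which is the shape of the "infinitesimal likelihood" assumption in the text.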
Some take that range of values and convert it to a single number. But that also has problems, as it is possible that different ranges might convert to the same number.
In addition, while the range overall might appear acceptable, it is also possible that one or two points within the range are not.
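Both problems can be shown in a few lines. In this sketch (all loss amounts, probabilities, and the tolerance threshold are invented for illustration), two risks with different outcome ranges collapse to the same expected loss, and only one of them contains a point beyond an assumed tolerance.

```python
# Two hypothetical risks, each a mapping of loss amount -> probability.
# The figures are invented; probabilities are chosen to be exact in
# binary floating point so the expected values compare cleanly.
risk_a = {0: 0.5, 100: 0.5}
risk_b = {30: 0.875, 190: 0.125}

expected_a = sum(loss * p for loss, p in risk_a.items())
expected_b = sum(loss * p for loss, p in risk_b.items())
print(expected_a, expected_b)  # → 50.0 50.0 -- the single number hides the difference

# Suppose any single loss of 120 or more is unacceptable (an assumed
# tolerance). The overall "score" is identical, but only risk_b
# contains an unacceptable point within its range.
tolerance = 120
print(any(loss >= tolerance for loss in risk_a))  # → False
print(any(loss >= tolerance for loss in risk_b))  # → True
```

This is why reducing a range to one number, or judging only the range as a whole, can both mislead.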
Another mistake that I see from time to time is thinking that a surprise (a loss or other adverse event) indicates that risk management failed.
Risk management is not perfect.
It is not a precise science where the future can be predicted with confidence.
It's about doing your best to consider what might happen, assess whether that's okay, and then do something about it if it's not.
Surprises are inevitable. Risk management only fails when it should have been able to provide more insight about what might happen — but for some reason did not. There are many possible reasons for this, which I cover in detail in the book.
I will close with one final common mistake: the belief that reviewing a list of risks every so often is risk management.
It's not. It's simply list management.
Going further, risk management is not even about risks! It's about the achievement of objectives!
When the focus is on a set of risks, it is not on whether there is an acceptable likelihood of achieving (or exceeding) objectives.
It sounds odd that risk management is not about risks — but this is essential to understand for it to be effective.
I welcome your comments.