Clear thinking and “risk appetite”
Make good decisions under risk and uncertainty
by Matthew Leitch; first appeared on www.irmi.com in April 2007
Reviews the challenges of making decisions about controls.
Explains some very common techniques that introduce systematic biases into this decision making.
Focuses attention instead on what people would be willing to pay for mitigation.
The logic by which we should choose the best way to deal with risks seems obvious:
If the risk is trivial then only the cheapest and easiest controls have a chance of being worthwhile.
If the risk is important then more elaborate and perhaps costly controls can be considered if necessary.
If no control is cheap and effective enough to compare with doing nothing then doing nothing is what we will select.
This is nothing more or less than the most established decision making principle in decision science as it is routinely applied in business.
Some less obvious but still important finer points are as follows:
Weighing potential outcomes may be hard, particularly those that are uncertain or not immediately expressible in money terms.
We may have to add up the value of a control over more than one risk, or weigh more than one control against a risk.
Often the cost of complete mitigation is enormous but partial mitigation has a much better net value. Compromise is frequently cost effective.
Actions have to be selected in combinations.
We may not have sufficient resources now to implement all the controls that would be worthwhile and so must do the best we can with our limited resources (typically limited time and expertise).
The more quickly and easily we can invent good controls, the better our risk control choices will be, the more often we will choose to act rather than do nothing more, and the greater the benefit of risk control.
In this decision we usually have an option called “try again to think of better controls.” A common experience is to spend some time looking at risks and responses but feel that the conclusions are not satisfactory. In other words, we’re not happy and believe more thinking about possible controls will be a good investment.
Over time we will have new ideas for controls and revised views about risk. Controls design is not a one-off act.
Something distinctive about risk-control decisions is that the outcomes involved are uncertain and may involve some rare but potentially extreme impacts. Weighing these in decision making is particularly difficult.
A major difficulty is that the consequences of some future loss depend on our position at the time the loss occurs. The unexpected appearance of a £10,000 tax liability is a minor irritation if you are a wealthy person but calamitous if that £10,000 is all you have.
Extreme impacts that are bad, like bankruptcy and death, are particularly difficult to weigh but somehow deserve more consideration than less extreme outcomes.
Extreme impacts that are good don’t usually deserve the same special weighting. As Arnold Schwarzenegger said on UK television many years ago “Money doesn’t make you happy. I’m no happier now with 51 million dollars than I was when I only had 50 million dollars.”
All this is common sense once you think about it, but not necessarily feasible to carry out accurately, in detail, and with strong empirical support for estimates.
In practical applications we have to take short cuts. Ideally these short cuts will save us a lot of time but have minimal impact on the quality of decisions. The biggest worry is that a short cut will introduce systematic bias, leading to consistently wrong decisions and a significant overall error.
Each approach to risk management/internal control needs to be adapted to the circumstances in which it is to be used so that effort is focused sensibly and systematic biases are avoided.
Unfortunately, there are many short cuts that introduce biases.
For example, the common practice of rating each risk on a risk register for its “probability” of occurring and then for its “impact” if it did occur leads to a systematic understatement of risk. This is because most risk register items have various possible impact levels and these are uncertain. Hence there is a chance of impact in the “high” range, a chance of impact in the “medium” range, and a chance of impact in the “low” range, as well as a chance of no impact at all. The usual grid technique almost always means that only one range of impacts gets considered.
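The understatement is easy to demonstrate with a toy calculation (the figures below are hypothetical, chosen only to make the arithmetic visible). Rating the risk with a single "most likely" impact band discards the chance of medium and high impacts:

```python
# Hypothetical risk: 20% chance of occurring at all; if it occurs, the
# impact is uncertain and may fall in a low, medium, or high band.
p_occur = 0.20
impact_bands = [            # (probability given occurrence, impact in £)
    (0.70, 10_000),         # low
    (0.25, 100_000),        # medium
    (0.05, 1_000_000),      # high
]

# The usual grid shortcut: rate the risk with ONE impact figure, typically
# the most likely band, then combine it with the probability of occurrence.
grid_estimate = p_occur * 10_000            # only the "low" band survives

# Full expected loss: weight every impact band by its chance.
expected_loss = p_occur * sum(p * x for p, x in impact_bands)

print(f"grid estimate: £{grid_estimate:,.0f}")   # £2,000
print(f"expected loss: £{expected_loss:,.0f}")   # £16,400
```

Here the single-rating shortcut reports only about an eighth of the true expected loss, because the rarer medium and high bands carry most of the exposure.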
Another example is that when we make a spreadsheet model of future gains and losses we usually ignore the fact that we have some ability to react to events as they unfold. This means we systematically underestimate the value of courses of action.
Who knows if this is offset by another common spreadsheet shortcut, which is to assume that plugging "average" values into uncertain inputs will give an output value that is itself an "average." This has been dubbed the Flaw of Averages by Professor Sam Savage and usually leads to overestimates of value.
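A minimal sketch of the Flaw of Averages, with hypothetical numbers: stock a product, then sell no more than you stocked. Because the profit function is not a straight line, profit at the average demand is not the average profit:

```python
import statistics

# Hypothetical one-product model: we can sell at most what we stocked.
price, unit_cost, stocked = 10.0, 4.0, 100

def profit(demand):
    # Sales are capped at stock on hand; the whole stock is paid for.
    return price * min(demand, stocked) - unit_cost * stocked

# Uncertain demand: three equally likely scenarios (hypothetical figures).
demand_scenarios = [40, 100, 160]

avg_demand = statistics.mean(demand_scenarios)                  # 100
profit_at_avg = profit(avg_demand)                              # 600
avg_profit = statistics.mean(profit(d) for d in demand_scenarios)  # 400

print(profit_at_avg, avg_profit)   # 600.0 400.0
```

Plugging in the average demand says the plan is worth 600; averaging over the actual scenarios says 400, because the downside scenario hurts while the upside is capped. The shortcut overstates value, just as the article describes.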
The idea of “risk appetite” as it is commonly applied is another short cut that leads to systematic biases.
Risk appetite, as often described, is an upper amount of risk a person or organisation is prepared to “accept” and is often viewed as something relatively fixed and driven by personality or goals.
The idea is that each risk is rated in some way and then plotted on a graph where there is a line representing the risk appetite. If the risk falls below the line then no action is needed. If the risk falls above the line then controls must be added until it is below the line.
In this approach there is no consideration of the cost of the controls and no consideration of other reasons why we might think further thought worthwhile. Other good reasons for putting in design time include:
Not having thought about the risk and its controls before.
Learning about an exciting new idea or tool that may be applicable to our situation.
Some versions of the risk appetite theory mix cost benefit trade-offs with an appetite line, but then it is unclear what function the line serves. For example, this happens in “A risk management standard” published jointly by AIRMIC, IRM, and ALARM.
Ignoring the cost of controls, and the other reasons for spending time on design, is clearly contrary to the basic principles of decision making, so it is no surprise that systematic bias results. The risk appetite approach systematically undervalues cost-effective controls that address less important risks and drives over-investment in controls for the more important ones. The distortion is worst for risks near the risk appetite line.
Furthermore, since the breakdown of risks in most risk registers is uncontrolled, the extent of control depends on how aggregated the risks are. If an area of risk is broken down into sufficiently small subcategories then all of them will fall below the line and, apparently, no controls are needed at all!
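The aggregation problem can be shown in a few lines (hypothetical figures and a hypothetical appetite threshold): the same total exposure triggers action when written as one register entry, and triggers nothing when split into six:

```python
# Hypothetical "risk appetite" rule: act only on risks rated above a line.
appetite_line = 50_000          # rating above which action is "required"

# One aggregated risk register entry.
aggregated_risk = 120_000       # above the line, so action is required

# The same exposure written as six finer-grained register entries.
sub_risks = [20_000] * 6        # each one sits below the line

needs_action = [r > appetite_line for r in sub_risks]
print(aggregated_risk > appetite_line)   # True: act
print(any(needs_action))                 # False: no action "needed" anywhere,
print(sum(sub_risks))                    # yet total exposure is still 120,000
```

Nothing about the risks has changed between the two presentations; only the granularity of the register has, which is exactly why a fixed line gives arbitrary answers.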
The tendency to think that willingness to bear risks is a fixed personal characteristic (or some corporate analogy) could lead to failure to respond to changing circumstances. If you think risk taking is a fixed personal characteristic then you may not respond to situations such as these:
Becoming wealthier, which should lead to less special weighting of losses you used to think of as catastrophically large.
Competitive situations where only winning is valuable and consistently mediocre performance is the same as utter failure.
Situations where we are poor now but expect to be wealthy and secure by the time the potential loss could crystallise.
The “risk appetite” idea aims to guide decisions about where to focus thinking about potential controls, and decisions about which controls are worth implementing and operating. What could do that easily but better?
From the point of view of a controls designer what is really helpful is an idea of how much an organisation would be prepared to spend to mitigate a particular risk (along with other constraints on the design such as avoiding impact on customers or inconvenience to the chief executive).
Any technique that gets to an unbiased estimate of this willingness to pay is potentially appropriate. A risk appetite line adds nothing useful to this.
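As a minimal sketch of the decision rule this implies (all names and figures are hypothetical, not from the article): a control is worth implementing when it costs less than the value of the risk reduction it buys, with no appetite line anywhere in the test.

```python
# Hypothetical candidate controls:
# (name, annual cost in £, expected annual loss removed in £)
candidates = [
    ("cheap control on a minor risk",    500,  4_000),
    ("costly control on a major risk", 90_000, 60_000),
]

# Willingness-to-pay test: implement a control when it costs less than
# the loss it removes (the most anyone should rationally pay for it).
chosen = [name for name, cost, saved in candidates if cost < saved]

print(chosen)   # the cheap control on the minor risk is selected; the
                # costly one is not, despite addressing a "bigger" risk
```

Note how this reverses the appetite-line bias described above: the cheap control on the minor risk is accepted because it is clearly worthwhile, and the expensive control on the major risk is rejected because it costs more than the protection it provides.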
One good way to devise an approach to managing risk and control is to start with a good decision making principle and find short cuts to use it in practice that are unbiased.
Unfortunately short cuts that introduce systematic biases are common and applying a “risk appetite” line is one of them.
Working without a line is better than having one. If you need a clearer idea of how people feel about a risk, one alternative is to ask how much, as a maximum, they would be willing to pay to mitigate it.
About the author: Matthew Leitch is an independent consultant, researcher, and author specialising in internal control and risk management. He is also the author of the website www.WorkingInUncertainty.co.uk and has written two books. Intelligent internal control and risk management is a powerful and original approach including 60 controls that most organizations should use more. A pocket guide to risk mathematics: Key concepts every auditor should know is the first to provide a strong conceptual understanding of mathematics to auditors who are not mathematicians, without the need to wade through mathematical symbols. Matthew is a Chartered Accountant with a degree in psychology whose past career includes software development, marketing, auditing, accounting, and consulting. He spent 7 years as a controls specialist with PricewaterhouseCoopers, where he pioneered new methods for designing internal control systems for large scale business and financial processes, through projects for internationally known clients. Today he is well known as an expert in uncertainty and how to deal with it.