New website, new perspective: www.WorkingInUncertainty.co.uk  Related articles  All articles  The author  Services 
What can auditors do when they have read 'A pocket guide to risk mathematics'? by Matthew Leitch, 14 May 2010 
Audit topics that can be tackled using a good conceptual understanding of risk mathematics.
The limitations on what can be done.
Today it is increasingly expected that internal auditors will audit 'risk management'. The mandatory standards of the Institute of Internal Auditors include performance standard 2120, Risk Management, which says:
The internal audit activity must evaluate the effectiveness and contribute to the improvement of risk management processes.
The standards do not explicitly say so, but presumably it would be a problem if risk assessments were significantly inaccurate due to logical, mathematical, or arithmetic errors, or because the data used were faulty. We do not expect most risk assessments to be 'accurate', but grossly misleading numbers or ratings are surely a problem.
It would also be a problem if bad decisions about which controls to implement were taken because of faulty mathematics, logic, or arithmetic.
Such errors are frighteningly common, as you will read below, so auditors need to be able to perform audits that in some way address these potential problems effectively.
A superficial process audit will find some of the issues, but even the thickest and most impressively signed-off risk documentation provides only moderate protection against mistaken quantitative risk analysis. If auditors are to do a satisfactory job they need to be prepared to study some details.
There are problems in doing this. At their most sophisticated, quantitative risk assessments can involve mathematics combined with extensive software and lots of data. This puts a limit on what auditors can do because most do not have the mathematical or programming skills to understand the details or spot errors at that level. Knowing this, people tend to assume that they can do very little by way of audit when risk mathematics is involved.
This is not correct. It is true that slips in mathematics and program code are extremely hard to find. It is also true that if the experts are deliberately hiding something they know to be an issue then the auditor will be lucky to find them out. However, many large errors and undisclosed limitations of risk analyses are conceptual. They arise from faulty understanding of risk mathematics and from taking short cuts.
These errors are within reach of an auditor who has a conceptual understanding of risk mathematics, knows what problems to look for, and knows how to find them. There is usually much more useful auditing to be done than has been in the past, provided auditors have at least a solid conceptual understanding of risk mathematics.
If you are an auditor reading this and wondering if it is true, then please make a special effort to be objective and open minded. How did you feel about mathematics last time you studied it? How long ago was that? If you have ever taken an examination in mathematics do you think you could pass that examination now? No? Me neither.
Even if you liked mathematics (and many do not) the trouble with the symbolic details is that they are very hard to master and then quickly fade from our memories the moment we stop using them!
As you weigh up the pros and cons of doing more to audit risk analyses please keep in mind that I am talking about acquiring and using a conceptual understanding of risk mathematics. This understanding is vastly easier to acquire than the symbolic skills. It also stays in your memory much longer.
The conceptual understanding of risk mathematics provided by my book, A pocket guide to risk mathematics: key concepts every auditor should know, is not sufficient to allow you to spot technical errors in the detail of algebra or program code. However, it does allow you to go after the conceptual errors and limitations, and you may find that simple formulae are also within reach if you remember some algebra from school or university. It will also change the way you feel about risk mathematics, giving you the self-confidence to have a go.
Here, in ascending order of challenge, are the main things you should be able to tackle:
Unquantified risk analyses
Forecasts lacking proper risk analysis
Pseudo-quantified risk analyses
Quantified risk modelling of low to moderate complexity (excluding technical and deliberately hidden problems)
Reviews of quantified risk modelling of high complexity
Now for some more detail on each.
Some risk analyses are unquantified. Typically, risks on a list are rated as 'high', 'medium', or 'low' and there is no attempt to define quantitatively what these terms mean.
At first sight it looks like a knowledge of risk mathematics is useless here because risk mathematics is not involved. However, there are two reasons for looking more closely:
Even though numbers have not been used, the logic of events is still relevant. Risk descriptions, for example, are often so vague that even the roughest, least quantified ratings are undermined: the likelihood of 'fraud' is vastly different for an individual sale than for a whole company over a year, or perhaps even over the life of a long project. People often do not realise the mistakes they have made.
Many of these risk assessments should have been quantified. A variety of excuses may be given for avoiding numbers, but the most common reason is ignorance: people think that to use numbers you need data. This is not correct. Data are useful, but numbers alone still help us get the best from gut feeling.
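As a sketch of this point: even with no data at all, a gut-feel range elicited from an expert can be turned into a quantified picture. The figures below (a fraud-loss range of 10,000 to 200,000) are invented purely for illustration, and the triangular distribution is just one simple way of representing such a judgement.

```python
import random

random.seed(1)

# Hypothetical elicitation: an expert's gut feeling about annual fraud
# losses, expressed as a low / most-likely / high range. No data needed.
low, mode, high = 10_000, 40_000, 200_000

# Monte Carlo draws from a triangular distribution over that range.
draws = sorted(random.triangular(low, high, mode) for _ in range(100_000))

best_guess = sum(draws) / len(draws)   # mean of the distribution
p90 = draws[int(0.9 * len(draws))]     # 90th percentile

print(f"mean loss ~ {best_guess:,.0f}; 90th percentile ~ {p90:,.0f}")
```

Even this crude quantification says more than a 'medium' rating: it makes clear that losses well above the 'most likely' figure are entirely plausible.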
Forecasts lacking proper risk analysis are also common. Examples include forecasts made to support budgeting or to predict end of year results, and forecasts made to inform investment decisions, including decisions on whether or not to go ahead with a proposed project.
The conceptual errors that can be made here are, again, largely a result of ignorance. People think that their sensitivity analysis is meaningful, offer 'optimistic' and 'pessimistic' scenarios, and imagine that if they put in best guesses for uncertain inputs the spreadsheet will give them the best guess for its outputs. These are all red flags the auditor should respond to if gross errors in forecasts are to be avoided.
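The last of these red flags, best guesses in giving best guess out, can be shown with a minimal simulation. The three-task project and its numbers below are hypothetical; the point is that for a nonlinear model (here, a maximum) plugging best guesses into the inputs does not yield the best guess for the output.

```python
import random

random.seed(42)

# Hypothetical mini-model: a project finishes when the slowest of three
# parallel tasks finishes. The best guess for each task is 10 days.
def finish_time(a, b, c):
    return max(a, b, c)

# Plugging best guesses into the model gives a 10-day forecast...
plug_in = finish_time(10, 10, 10)

# ...but simulating the uncertainty in each task gives a higher best
# guess, because a delay in any one task pushes out the whole project.
sims = [finish_time(*(random.gauss(10, 2) for _ in range(3)))
        for _ in range(100_000)]
expected = sum(sims) / len(sims)

print(plug_in, round(expected, 1))  # the simulated mean sits well above 10
```

The spreadsheet version of this error looks perfectly tidy, which is why it needs to be hunted for deliberately.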
Pseudo-quantification is depressingly common for risk analyses based on risk registers. It has numbers, but there is no model to pull them together. Typically, risks are rated for 'probability' and 'impact' (or words to the same effect) and these ratings are categories whose upper and lower limits are defined with numbers.
The common problems include all those that apply to unquantified risk analyses, plus a load of other errors that occur when people try to do things with their numbers.
For example, imagine a project is underway and a forecast for the eventual cost is required. Someone takes the latest forecast, which is on an optimistic best guess basis, but then says that in addition to this forecast cost there is a risk of some additional costs. This risk number is a total taken from a risk register and has been calculated by multiplying the probability of each risk by its money impact, which is itself a best guess number. It would have been better to admit that the main forecast plus the risk number together are the best guess for the project. In other words, what has been presented as 'risk' is actually a best guess for costs not previously considered. The eventual costs could be considerably higher, but the risk assessment method has thrown away information about this.
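A small sketch of this pattern, using an invented risk register, shows why the single 'risk number' is just an expected value: simulating which risks actually occur reveals the spread of outcomes, and in particular the tail, that the method throws away.

```python
import random

random.seed(7)

# Hypothetical risk register: (probability, best-guess impact) pairs.
register = [(0.30, 100_000), (0.10, 500_000), (0.05, 2_000_000)]

# The usual 'risk number': sum of probability x impact. This is just an
# expected value, i.e. a best guess for costs not previously considered.
risk_number = sum(p * impact for p, impact in register)

# Simulating which risks actually materialise shows the spread of
# outcomes hidden behind that single number.
totals = sorted(
    sum(impact for p, impact in register if random.random() < p)
    for _ in range(100_000)
)
p90 = totals[int(0.9 * len(totals))]

print(f"risk number {risk_number:,.0f}; 90th percentile {p90:,.0f}")
```

The 90th percentile of simulated outcomes is far above the 'risk number', which is exactly the information a decision maker loses when only the expected value is reported.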
Quantified risk modelling of low to moderate complexity includes work described as risk analysis, and also forecasts that provide prediction intervals (i.e. give information about the likely error of the forecasts).
At this level, mathematics starts to become a visible feature of the approach people are taking. Unfortunately, this does not necessarily mean the thinking or numbers are any more reliable. The analysts may have copied their model from a book or journal article and misunderstood its limitations. They may be using software they don't fully understand. We cannot assume they know what they are doing just because they seem to know far more than anyone else in the company.
Auditors with a good conceptual understanding of risk mathematics should be able to find many problems that are common in:
formulation of models
fitting models to data; and
communicating results.
Most of the time the impact of flaws and limitations in quantitative risk assessment is a systematic understatement of uncertainty/risk. Some of the problems come from thoughtless use of well known mathematical techniques. For example, some methods of fitting models to data assume that, in a particular sense, the data are distributed 'Normally'. The usual statistical test for checking whether data really are 'Normally' distributed assumes they are unless the evidence clearly says otherwise. This means that if you do not have much data the test will give them the benefit of the doubt and say they are 'Normally' distributed!
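The article does not name a specific test, but the benefit-of-the-doubt behaviour is easy to demonstrate. The sketch below uses the Jarque-Bera statistic, one common normality test, on small samples drawn from a clearly non-Normal (exponential) distribution; with only ten observations the test usually fails to reject Normality.

```python
import random

random.seed(3)

def jarque_bera(xs):
    """Jarque-Bera normality statistic (large values suggest non-Normal)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

# Clearly non-Normal data: samples of 10 observations from an
# exponential distribution. How often is Normality actually rejected
# at the 5% level?
CRITICAL = 5.99  # approximate 5% critical value (chi-squared, 2 df)
trials = 1000
rejections = sum(
    jarque_bera([random.expovariate(1.0) for _ in range(10)]) > CRITICAL
    for _ in range(trials)
)
print(f"rejected Normality in {rejections} of {trials} trials")
```

In most trials the test accepts badly skewed data as 'Normal', not because the data are Normal but because ten observations carry too little evidence to prove otherwise.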
Once risk models get very sophisticated, the people doing the modelling are likely to be highly expert. Even Nobel Prize winners can lose billions, but expert thinking and language are likely to be very hard to follow and, realistically, auditors need more than a conceptual understanding of risk maths to audit such modelling directly.
In practice, the banks and insurance companies where this kind of modelling takes place have analysts in a review role whose job is to scrutinise models in detail.
What internal auditors can and should do is look at the technical reviews to see if they are as effective as they are supposed to be. People who become experts in a field and then take a role as a reviewer can easily fall into the mistake of judging everything by comparison with what they would have done. If the reviewers and modellers have similar backgrounds then perhaps they will make the same conceptual mistakes.
For example, misunderstandings can happen when mathematicians explain their results to people who are not mathematicians. Some key words in risk mathematics have more than one interpretation. What seems clear to the reviewers may still lead to mistakes when the results are explained to less technically knowledgeable business people.
Another example is the use of assumptions. The traditional mathematical writing style for explaining a model usually does not distinguish clearly between introducing notation and introducing assumptions that could be wrong. Assumptions are not itemised or scrutinised, and the tendency is to focus on the extent to which predictions match reality. Mathematicians are exposed to hundreds of examples of this during their education, so when they see it alarm bells may not ring. They should do!
Another potential problem is that reviewers may judge modelling efforts against what they believe is achievable rather than against what would be ideal. For example, they may make no comment on the use of Maximum Likelihood Estimation (MLE) thinking that, for a particular model, alternatives require too much computer power to be practical. (MLE is often used but involves ignoring all but one possible set of model parameter values, even though others may possibly be correct.) It may be true that alternatives would be impractical, but MLE is still a compromise and leads to systematic understatement of uncertainty so some effort should be made to establish how much difference it makes and perhaps also explain this to users of the model's results.
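As an illustration of the compromise, consider a hypothetical credit example (the numbers are invented): predicting defaults in the next ten loans after observing two defaults in ten. The plug-in (MLE-style) approach fixes the default rate at its single best estimate, while keeping the parameter uncertainty, here via a uniform prior and the resulting beta-binomial predictive distribution, gives a visibly wider prediction.

```python
# Hypothetical data: 2 defaults observed in 10 loans. Predict the number
# of defaults in the next 10 loans.
k, n = 2, 10
m = 10  # loans in the next period

# MLE plug-in: pick the single 'best' default rate and ignore all the
# other parameter values that might also be correct.
p_hat = k / n
var_mle = m * p_hat * (1 - p_hat)  # binomial variance with p fixed

# Keeping parameter uncertainty: a uniform prior over the default rate
# gives a Beta(k+1, n-k+1) posterior and a beta-binomial predictive
# distribution, whose variance has a closed form.
a, b = k + 1, n - k + 1
var_full = m * a * b * (a + b + m) / ((a + b) ** 2 * (a + b + 1))

print(round(var_mle, 2), round(var_full, 2))  # prints: 1.6 3.17
```

The full predictive variance is roughly double the plug-in figure, a concrete example of MLE-style shortcuts leading to systematic understatement of uncertainty.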
While reading the book as intended will dramatically change your view of what you should cover in audits and increase your ability to find important problems, there are limits. As I've already mentioned, you can't expect to find symbolic errors in calculus or in software code through detailed review. Also, if clever modellers are hiding something you will be lucky to find it.
Furthermore, limitations imposed by other gaps in your knowledge may be crucial. For example, your ability to see where assumptions are unrealistic depends in part on having a good understanding of what is being modelled. If you have that gap then you can do no more than check whether assumptions have been handled in a controlled way.
Therefore, as with all audits, you need to make sure the scope and limitations of your review are clear to everyone. I also recommend extending the scope and depth of your reviews gradually. Start well within your limitations and take things a step at a time, learning as you go.
The book, A pocket guide to risk mathematics: key concepts every auditor should know, was written to change what auditors can do and so help prevent another Credit Crunch. It's also a potential career boost for any auditor who puts in the effort to read it properly.
To my knowledge it is the first book to do this and I believe it fundamentally changes what can be expected of auditors in future.
This article has explained a range of audit topics that come within reach and mentioned just a few of the issues that can be found.
If you found any of these points relevant to you or your organisation please feel free to contact me to talk about them, pass links or extracts on to colleagues, or just let me know what you think. I can sometimes respond immediately, but usually respond within a few days. Contact details 
About the author: Matthew Leitch is a tutor, researcher, author, and independent consultant who helps people to a better understanding and use of integral management of risk within core management activities, such as planning and design. He is also the author of the new website, www.WorkingInUncertainty.co.uk, and has written two breakthrough books. Intelligent internal control and risk management is a powerful and original approach including 60 controls that most organizations should use more. A pocket guide to risk mathematics: Key concepts every auditor should know is the first to provide a strong conceptual understanding of mathematics to auditors who are not mathematicians, without the need to wade through mathematical symbols. Matthew is a Chartered Accountant with a degree in psychology whose past career includes software development, marketing, auditing, accounting, and consulting. He spent 7 years as a controls specialist with PricewaterhouseCoopers, where he pioneered new methods for designing internal control systems for large scale business and financial processes, through projects for internationally known clients. Today he is well known as an expert in uncertainty and how to deal with it, and an increasingly sought after tutor (i.e. one-to-one teacher).
