Measuring and managing risk register quality
by Matthew Leitch, 12 December 2007
Risk registers, while never the best way to work with uncertainty, are still common in some areas of risk management in some countries. In many of these cases writing a risk register is an imposed requirement so, like it or not, it has to be done until the requirements are reformed.
Only consider investing more time in improved risk register text if your objective is one or more of the following:
To minimise the time wasted on risk registers by keeping them short and clear.
To tidy up the thinking before progressing to more structured analyses that provide direct support for decision making.
To show that what has been produced is so defective that continuing with it is not worthwhile, so that the register is either killed altogether or cut back down to size, with some structure imposed to keep it from turning into a monster again.
This article offers suggestions on how to measure quality in risk registers and how to use those measurements to improve quality. If you want to do this you should consider engaging me to provide some individual technical tutoring or teletutoring sessions. These are a time-efficient way to master the skills involved.
In most risk registers there is scope for worthwhile improvement in content quality. Low quality content means that time is wasted on the following activities:
confused, protracted discussions in risk register meetings that go nowhere because the thinking is just too vague;
unnecessary mental effort every time someone has to read or revise any of the text because of the ambiguous wording and defective logic;
enquiries to establish what risk and control descriptions mean so that they can be summarised for reporting;
additional work needed to reassure senior people after a confusing presentation on risk, or a blatant logical mistake, has left them more worried than they were before;
sorting out problems resulting from control weaknesses being obscured by wishy-washy wording and ubiquitous logical flaws.
If this seems a rather negative assessment, let me assure you that the low level of quality that is typical in risk registers fully justifies this criticism.
Early risk registers were often very high level and based on the idea that it is enough to know your 'top ten' risks. However, as people found that this gave rather bland risk lists that did not change much from one year to the next, they tended to push for more detailed, larger risk lists. For some, the top 10 became the top 100, the top 1,000, and even the top 10,000.
Another reason for the spread of risk registers in some organizations has been the desire to involve more people and 'embed' risk management.
Where risk registers become more widespread and numerous, more and more people are pulled in to write content for them. Where a short list of 'risks' might once have been written primarily by one person (with suggestions from others perhaps), it becomes more likely that several people will be writing items, perhaps separately and from their different perspectives.
Confusing and inconvenient inconsistency is a potential problem when this happens. Even putting people into one room for a workshop does not entirely eliminate the problem of different perspectives because people are usually asked just to suggest 'risks', not to explain the perspective that led to their suggestion. Besides, even if the perspectives were explained, why should people agree to look at the world in the same way?
In addition, the writers and contributors of 'risk' ideas today are less likely to be risk specialists, which can lead to technically mistaken content such as risk factors confused with 'risks' and inaccurate risk-to-control mapping.
Top-quality risk register content may not be a priority for many writers and contributors. Most likely, many of the 'risks' currently on their list were first written in the early days when they were much less experienced. Now that their skills have improved what they lack is the time and impetus needed to reorganise and rewrite the existing material.
Another major factor is that writing risk register content is intrinsically difficult to do well. There are many alternative ways to structure our uncertainties into a list or other risk model. When you try to do it rigorously things can quickly get too complex to manage.
Although it is clear to most people that alternative risk lists are possible in the same situation and some will be more useful than others, this has rarely been recognised in published guidance. Typically, people are asked to write down lists with little if any help on how to choose a good way of dividing up their uncertainties. There are a number of alternative techniques that can be used and their value depends on what you are trying to use risk analysis for.
With large amounts of material being fed into databases by a growing number of people, often not specialists at risk thinking, often with more important things in their lives, and without the skill and guidance needed to do something that is not easy, it is not surprising that things sometimes go wrong.
Sometimes they go so badly wrong that the risk register becomes almost unusable and loses credibility. In some cases, months or even years of work are needed to reorganise and clean up the material, then rebuild its reputation.
To see the scale of the problem for yourself, try a simple experiment. All you need is a few minutes and an extract from a real risk register. There are many aspects of quality that you could look at, but for this experiment just focus on clarity. Read a short extract from the text very slowly, holding a red pen. Every time you see a word, phrase, or sentence that seems even slightly unclear to you, make a mark next to the problem. It's a defect.
For example, here's a short sentence with its defects written out in full to show the level of detail to aim for. Remember, you only need to make a mark next to each defect, not explain it.
Extract: "Lack of research funding could lead to reduction in quality of research output."
'lack' - complete or partial lack of funds?
'lack' - for how long?
'funding' - but is this from particular sources? Don't we have the opportunity to fund research ourselves?
'reduction' - by how much?
'quality' - but what about quantity? Is that left out deliberately?
'output' - only output? What about the research behind the papers?
When you've done it as carefully and mercilessly as you can, start again and read through one more time even more critically. Push yourself and keep at it. Finally, count your defects and extrapolate from your tiny sample to estimate the number of clarity defects for the whole register.
Bear in mind that you will not have noticed everything - nobody does - and that this is just one aspect of quality. How much scope for improvement have you found?
Most people who do this are shocked by what they discover.
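If it helps to make the extrapolation step concrete, here is a minimal sketch of the arithmetic in Python. All the counts and the register size are invented for illustration; substitute your own.

    # Minimal sketch: extrapolating clarity defects from a small sample.
    # All figures here are illustrative assumptions, not real data.
    defects_found = 14      # red-pen marks made in the sample
    sample_items = 5        # risk register items actually read
    register_items = 120    # items in the whole register

    defects_per_item = defects_found / sample_items
    estimated_total = defects_per_item * register_items

    print(f"About {defects_per_item:.1f} clarity defects per item,")
    print(f"so roughly {estimated_total:.0f} across the whole register.")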
Risk register quality is one of those things that can be managed more easily if it is measured. Without a rigorous approach it is difficult to resolve debates about whose work is better, whether things have improved, and what simply is not good enough.
Imagine the difference it would make to you to be able to resolve these debates objectively.
Another benefit of measuring risk register quality, at least when done as I suggest below, is that it can provide detailed, educational feedback to authors of risk registers that enables them to improve their text and write better material in future.
Measuring the quality of risk registers is not a well established practice yet. However, quality measurement for other documents that are difficult to write is well established in software engineering and there is a body of useful research about it.
This research has been pulled together brilliantly by Tom Gilb and the main conclusions are as follows:
Slow reviews identify many more defects. The slower you review the more defects you will find, and as you get near to one page per 30 to 60 minutes the number of defects found shoots up rapidly.
One person will not find everything. Even a slow review by one person will not find all defects, and if other people review too they will find some new ones.
The main benefit is educational. Finding and remediating every defect is usually not feasible. However, it isn't usually necessary either. Most of the benefit comes from teaching writers to produce text with far fewer defects.
Reviewing samples is enough. To get the educational benefit and test quality we only need to review samples of the text.
Defects are cut in half at each cycle. To be more precise, after an author has received detailed feedback from a review the number of defects in subsequent writing is reduced by around 50%. This applies to later versions of the document reviewed and to other documents of a similar type produced by that author. The next review cycle cuts this by a further 50% and so on. Graphs of individual progress typically show radical improvements.
Tom also points out that reviews should be against an agreed set of rules. Each time a rule is broken it counts as a defect. Also, to keep people motivated to improve, Tom suggests not accepting documents until some prespecified level of quality (i.e. defect density) has been achieved.
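As a rough illustration of how the halving pattern interacts with such an acceptance threshold, here is a small Python sketch. The starting defect density and the acceptance level are my own illustrative assumptions, not figures from Gilb.

    # Sketch of the halving model: each review cycle roughly halves
    # an author's defect density. Starting density and acceptance
    # level are illustrative assumptions.
    density = 40.0            # defects per page before any feedback
    acceptance_level = 2.0    # defects per page at which work is accepted

    cycles = 0
    while density > acceptance_level:
        density *= 0.5        # ~50% reduction per review cycle
        cycles += 1
        print(f"After cycle {cycles}: about {density:.1f} defects per page")

    print(f"Roughly {cycles} review cycles to reach the acceptance level.")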
Over the years, the engineers who use this kind of inspection have ironed out the practical problems that you might expect. For example, a lot has been written about how to give such detailed, grinding feedback without demoralising writers. It helps that what they are trying to write is inherently hard to write well and that the inspection is against specified rules that everyone can see and that (typically) are common sense.
Often, initial versions of documents are unclear, which prevents reviewers from considering if what is being said is suitable. Reviews for clarity may need to precede reviews for suitability.
Although a number of different techniques are useful for measuring different aspects of the quality of risk registers, slow inspection of text against rules helps measure many aspects of quality that would otherwise be difficult to assess objectively.
The next section describes various aspects of risk registers that could be included in quality measurement and you will see that there are many. Therefore, it is sensible to start off by selecting only certain aspects to include in quality measurement, with the option of adding other aspects in future.
You also need to decide how you intend to use the measurements, at least initially. Suggestions are given below.
Where samples are to be used, such as where the risk registers are large, some decisions are needed on how large the samples should be. Again, this initial decision can be adjusted in light of experience.
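One rough way to size a sample, if you are willing to assume that defects occur at a fairly steady rate per page, is to work out how many defects you need to observe for a given precision and divide by a guessed defect rate. The figures below are illustrative assumptions.

    import math

    # Rough sample-size sketch under a steady-rate (Poisson-style)
    # assumption: the relative error of an estimated defect rate is
    # about 1 / sqrt(observed defects). Figures are illustrative.
    guessed_rate = 8.0     # guessed defects per page, from a quick pilot read
    rel_precision = 0.25   # want the estimate within about +/-25% (95% confidence)

    defects_needed = (1.96 / rel_precision) ** 2
    pages_needed = math.ceil(defects_needed / guessed_rate)

    print(f"Review about {pages_needed} pages "
          f"(enough to observe roughly {defects_needed:.0f} defects).")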
Who does the reviews will depend on how important it is to keep them consistent and objective. What is found by inspecting documents depends on how much effort is put into finding defects, so the results are vulnerable to bias. If the measurements are to be used to compare the performance of different people or teams, or to confirm improvements over time, it is best to have them done by someone whose performance is not being judged on the numbers.
Here are some suggested aspects of risk register quality that could be included. (Detailed suggestions for inspection rules and other techniques for measurement are not shown because they are part of a commercial consulting service.)
Content: The quality of the text in the register.
Clarity: This is the most basic requirement but often a major problem area. Is it clear which events are included within each risk register item, and which actions are referred to?
Suitability: Is the content saying the right things? There are many points that can be looked at. For example:
Internal / external focus: Does the content focus too much on internal mistakes and not enough on outside influences, or vice versa?
Management / learning focus: Do the actions include learning more about the risks or is it usually assumed that there is nothing more to know?
Upside / downside focus: Is the coverage objective or is it biased towards potential problems, or towards potential opportunities? Does it swing from one to the other for no particular reason?
Taboo topics: Does the coverage avoid topics that contributors thought might be career limiting to mention?
Accuracy of controls versus reality: Is it clear even without audit work that the controls described are not an accurate reflection of reality?
Good ideas generated: Are good new ideas for risk control flowing from the process of writing the register?
Circularity: Have authors slipped into the ploy of writing words that amount to "Our objective is to X; the risk is that we fail to achieve X; so our action is to do X"?
Calibration: Are probability judgements reasonable or do they show poor calibration?
Amount of risk captured: How much of total risk is captured by the register?
Accuracy of risk-control mapping: Are controls mapped to risks appropriately? Many people find this difficult to do.
Risk-driven actions: Would the actions still be needed even in a world without uncertainty? If so, this often suggests that risk management is not opening minds to future possibilities or generating new actions.
Consistency/compatibility: Do registers from different teams make sense together?
Technical: Are the risk register formats used similar, or at least designed so that their outputs are compatible with each other and/or with requirements for central summarising?
Categories: Are risk items categorised consistently between different registers or sub-sections of a database?
Terminology: Is terminology consistently used?
Evidence supporting risk assessments: How much evidence supports ratings?
Design of the register and process: This is another area with many possible aspects to look at. These include:
Capture of evidence: Is there somewhere that the evidence supporting risk assessments can be referred to?
Correlations and causality: Is there a way to capture links between risks and understand how risks could occur together?
Frequency of review and update: Higher frequency is usually better.
Coverage of guidance: Does the guidance cover important topics such as how to write good quality risk items?
With so many potential aspects it makes sense to concentrate on those thought to be most important in your particular circumstances, including known problem areas. A good way to start might be to do a wide-ranging, perhaps slightly informal review to find what seem to be the most common and important problems.
Assuming you can identify efficient methods of measuring a selection of aspects of quality, what can you do with those measurements? Four obvious uses are:
To raise the quality of a large risk register.
To reduce the size of a risk register.
To raise the skills of people who produce a large number of risk registers, for example, for projects.
To push text into more constrained formats in preparation for restructuring the register into something more useful.
If you are increasing the size of an existing risk register, it may be a good idea to do it gradually, and keep monitoring quality throughout.
Here are some more suggestions:
Share the rules to guide risk register writers and contributors: Many quality measurements will be based on comparisons with quality rules so it is only fair to let people know what the rules are or will be, and even to get input where there might be controversy.
Share summary scores: If you think that competitiveness in your organization will generate motivation to improve, then consider sharing all the scores.
Communicate intended improvement: The scores give you a way to express intended improvement. This is particularly useful if all the risk registers are currently weak but their authors think what they have done is good enough (because it is the same as others).
Provide detailed feedback to authors and other contributors: For the educational impact, go through the details of defects found.
Establish minimum standards: Seek agreement that scores below certain minimum levels are unacceptable and that risk registers will need to be revised until they reach the required standard before the work is accepted as complete.
Graph scores over time: Use this to prove progress - or lack of it. A minimal plotting sketch follows below.
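Here is that sketch, using Python with matplotlib; the dates and scores are invented for illustration.

    import matplotlib.pyplot as plt

    # Sketch: plotting defect density over successive review cycles.
    # The dates and scores below are invented for illustration.
    reviews = ["2007-01", "2007-04", "2007-07", "2007-10"]
    defects_per_page = [38, 21, 9, 5]

    plt.plot(reviews, defects_per_page, marker="o")
    plt.xlabel("Review")
    plt.ylabel("Clarity defects per page")
    plt.title("Risk register quality over successive reviews")
    plt.show()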
With experience you may find that some measures are more important than others, that some are time-consuming to take, that measurement gets faster with practice, and that new aspects emerge as important. These are just some of the reasons you might want to change the aspects of quality that you use.
To make sure that anyone seeing quality metrics is not confused, be sure to explain any changes, show clearly where numbers are no longer comparable, show old scores for a period of time alongside new ones, and give some idea of how much of the total measurement of quality is being done.
If you have any doubts at all about the quality of risk registers you work with then I hope this article has encouraged you to do something about it. At least try the quick and simple experiment described above.
If you end up measuring and managing even one aspect of risk register quality you will be taking a step forward.
Don't forget to consider some individual technical tutoring sessions with me. This is a time-efficient way to resolve questions you may have about what rules to use, how slowly to do reviews, how to feed back results, how to organize reviews, etc.
Tom Gilb's major book on inspection is "Software Inspection", co-written with Dorothy Graham and published in 1993. However, since writing that book he has moved towards the 'agile' approach described above. An excellent article on this is "Agile Specification Quality Control", which appeared in Cutter IT Journal, Vol. 18, No. 1, 2005, and is available on Tom's website.
About the author: Matthew Leitch is a tutor, researcher, author, and independent consultant who helps people to a better understanding and use of integral management of risk within core management activities, such as planning and design. He is also the author of the new website, www.WorkingInUncertainty.co.uk, and has written two breakthrough books. Intelligent internal control and risk management is a powerful and original approach including 60 controls that most organizations should use more. A pocket guide to risk mathematics: Key concepts every auditor should know is the first to provide a strong conceptual understanding of mathematics to auditors who are not mathematicians, without the need to wade through mathematical symbols. Matthew is a Chartered Accountant with a degree in psychology whose past career includes software development, marketing, auditing, accounting, and consulting. He spent 7 years as a controls specialist with PricewaterhouseCoopers, where he pioneered new methods for designing internal control systems for large-scale business and financial processes, through projects for internationally known clients. Today he is well known as an expert in uncertainty and how to deal with it, and an increasingly sought-after tutor (i.e. one-to-one teacher).