Designing intelligent internal control systems
by Matthew Leitch, 7 September 2004
Most of us involved in internal controls work can have a greater positive impact if we expand our knowledge of techniques for managing uncertainty beyond our traditional mainstays of sign-offs, reconciliations, and access restrictions. There's no need to invent new techniques; all we need to do is assimilate them from the branches of management where they developed.
This paper examines management techniques we can add to our tool kit of potential recommendations, the trend towards using these techniques in our work, and the special issues when we come to design internal control systems that use these more "intelligent" controls.
Intelligent control techniques improve management performance, so the potential gains are great. Intelligent controls give people a more realistic and open-minded view of the future and the ability to make flexible plans in the face of multiple possible outcomes. Without this help managers tend to have a blinkered view of the future and make plans as if they know what will happen. This is made worse when the blinkers are institutionalised in management methods. That need not happen. We can institutionalise flexibility, open-mindedness, and learning instead.
The intelligent controls include a variety of techniques for designing internal control systems. Making lists of risks and responses to them is just one of the alternatives, and often not the best.
Just like the traditional controls such as checks and reconciliations, the intelligent controls are only necessary because things don't always work out as we plan, desire, or expect. It is this uncertainty that makes them important, and it is the fact that uncertainty is being responded to that distinguishes internal controls from other management activities. There is work that would be necessary even in a certain world, on top of which there are internal controls that are useful because the world is not certain.
However, whereas low level controls tend to be there to "make sure" something happens as intended, intelligent controls can respond to uncertainty in a wider variety of ways. Sometimes, their purpose is to help us exploit pleasant surprises and that may mean doing something we did not intend to do initially. Whatever the response, the purpose of these intelligent controls is still to keep us in a good position despite the pressure exerted by surprises.
These activities stretch from the most mundane, everyday decisions under uncertainty made by supervisors right up to once-in-a-decade strategic reviews by top management. However, in practice it is the more mundane management activities that we are usually concerned with because they are plentiful and more likely to be within our grasp.
Along the way I mention some management buzz phrases like "scenario planning" and "real options", but only to show how ideas in these can be brought down to earth and applied in everyday situations, without complicated methodologies and consultants.
The single most important factor in getting good control techniques used is knowledge of them. I urge you to read about the many clever mechanisms for managing risk that have been invented. The material in this paper, and the links and references, are designed to get you started. These techniques have inspired me and I hope they do the same for you.
The trend towards intelligence
A practical example
The intelligent part of internal control systems
Some research on intelligent controls
Improving intelligent controls
Some intelligent control techniques
- Flexible plans
- Cause and effect interventions
- Learning and adaptability
- Portfolio effects: diversification and rebalancing
- Design by flexing and detailing a generic starting point
- Listing areas of uncertainty
- Explicit quantification of uncertainty
- Reporting with uncertainty
- Evolutionary project management
- Managing schedule uncertainty in projects with the Critical Chain method
- Statistical Process Control
- Story telling about the future
- Process management control
- Fault Tree Analysis and Event Tree Analysis
- Negative feedback control loops
We each experience major trends from a personal perspective. Here's my story. Perhaps your own is similar in some ways.
In the early 1990s, when I started a career in accountancy and external audit, internal control seemed a fairly mundane and old-fashioned area. Boring even. When it came to making recommendations to clients I tried to "add value" like you're supposed to but, with rare exceptions, my output was based on a few simple control ideas, relentlessly applied. My recommendations were either to stop people from doing something, check before something was done, check and correct afterwards, or to create more documentary evidence of having done these things. Sign-offs, restrictions, reconciliations - like a serious muesli, my output was good for you but not very appetising.
Later, when I got a job as a controls specialist in one of the big audit firms, internal control changed from being mundane to being confusing. COSO's definition of internal control seemed to bring any activity into scope! The audit programmes I was working with asked about policies and plans - a far cry from the bank reconciliations I had focused on before. My client work branched into designing internal control systems with clients, where I was getting more closely involved with management and their thinking.
The firm started to talk a lot about "risk management" and I began to research various topics in this area intensively as my interest grew. But what was the relationship between risk management and internal control, and how much of this stuff was I supposed to know?
How many of these themes do you recognise from your own career? Our concept of "internal control" has expanded and mingled with risk management, while the variety of controls and topics we are expected to work with has expanded to include more that involve management's thinking. Although I don't do internal audit work I know that this has been even more pronounced for internal auditors, especially in the public sector.
The skills needed to make good recommendations have changed and our repertoire is expanding. The implications of these changes are greater than you might think because they overturn some of our established assumptions.
Here's an example to show the typical features of intelligent controls and how they can lead to very different control systems compared to the traditional mainstays of internal control.
The Institute of XYZ, a membership organisation, offers a range of training courses to its members and the general public. The courses are promoted and administered by the IXYZ but presented by trainers from various companies.
A printed catalogue of courses is produced each calendar year and a great deal of thought goes into deciding what courses to offer in it. (No other public courses are run.)
Companies wanting to present courses submit course proposals for consideration in May the preceding year and the submissions are sifted by a committee that meets several times before final decisions are made about what to include, when, and how many times.
The IXYZ's course selection committee uses its accumulated experience of past courses, knowledge of trends, and a points system for evaluating submissions. It is clear that the right people are on the committee, that they consider each course carefully and consistently, and the financial commitment the catalogue represents is well understood.
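As a concrete illustration, a points system of this kind usually amounts to a weighted sum of ratings. The criteria, weights, and ratings below are invented for illustration; the scenario does not specify them:

```python
# Hypothetical points system for sifting course proposals. The criteria
# and weights are invented, not taken from the IXYZ scenario.

WEIGHTS = {
    "past_demand": 0.4,       # sales of similar courses in past years
    "presenter_record": 0.3,  # track record of the presenting company
    "topicality": 0.2,        # fit with known trends
    "margin": 0.1,            # expected contribution per delegate
}

def score_proposal(ratings):
    """Combine 0-10 ratings on each criterion into a weighted score out of 10."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

proposal = {"past_demand": 8, "presenter_record": 7, "topicality": 3, "margin": 6}
print(round(score_proposal(proposal), 1))  # prints 6.5
```

Notice how a scheme like this rewards careful, consistent evaluation of each proposal, yet does nothing at all to shorten the learning cycle, which is where the real problem lies.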
But is this how you would want to do it?
Much effort has gone into managing the risk of choosing unpopular courses by careful evaluation of each proposal. There is no shortage of sign offs and precautions. By normal standards of internal control this is squeaky clean, even though it is difficult to think of a worse way to manage a training programme.
The quickest they can react to a topical issue arising is 7 months, and that is only possible if the issue happens to arise in May and someone immediately proposes a relevant course. If something happens in June, just after the deadline for submissions, it will be at least 18 months before any response is possible.
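The arithmetic behind these lags can be sketched in a few lines. The figures quoted work out if we assume the catalogue appears in December, ready for the new calendar year; that timing is an assumption, not stated in the scenario:

```python
# Months before the IXYZ could respond to an issue arising in a given
# month, assuming proposals are due in May and the catalogue appears in
# December (the December timing is an assumption for illustration).

DEADLINE_MONTH = 5    # May: course proposals due
CATALOGUE_MONTH = 12  # December: assumed publication of next year's catalogue

def months_until_response(issue_month):
    """Months from an issue arising until a relevant course could be offered."""
    wait_for_deadline = (DEADLINE_MONTH - issue_month) % 12
    return wait_for_deadline + (CATALOGUE_MONTH - DEADLINE_MONTH)

print(months_until_response(5))  # issue in May: 7 months
print(months_until_response(6))  # issue in June: 18 months
```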
The long cycle time means that learning from experience takes years when it should take just weeks or a few months. It is very difficult for them to experiment with new ideas.
Consequently, their training catalogue will probably remain devoid of topical and leading edge courses, instead featuring old favourites, year after year, with falling returns. There is also the risk that they will fail to adapt to unexpected changes in customer requirements and be displaced by competitors who have. All this because they happen to print an annual catalogue.
The search for potential improvements
This is an imaginary scenario (though based on reality), so let's imagine we are risk managers or internal auditors sent to review the activity and make suggestions. Our first step is to learn how things are done now and why. We need to know all this to have a chance of making useful suggestions, and the managers won't listen unless they feel we really understand how good they are and what constraints they face.
Let's imagine that, without much prompting, the managers involved say that the reasons for printing an annual catalogue include customers who like it, a track record of sales using it, higher compilation, printing, and postage costs if more editions are printed, and the extra work that would be created by having to manage more than one catalogue a year. Furthermore, they point out that their track record of choosing courses is good, as evidenced by the consistently acceptable sales of the courses offered.
They've missed the point, which is that they are probably failing to present great new courses that would sell even better. Besides, who said anything about printing more editions of the catalogue?
You ask "Is there any way to learn more about what people want and might buy through actual experience or at least contact with them?" A bit more conversation leads to the discovery that they occasionally present tailored in-house courses based on those in the catalogue. They have quite a good database of past and potential customers and the ability to e-mail most of them. There's even a website, though it just shows what is in the catalogue. Now we're getting somewhere.
After more discussion the team decides to take forward the following ideas:
- Changing the course feedback form to ask people about other things they would like to learn about.
- Quarterly meetings to review the tailored courses requested during the year, concentrating on what people wanted that was different from the standard course, and why.
- Shortening the lead time for the annual catalogue from over 6 months to around two, with the proviso that they will still encourage course proposals early in the year to allow the consideration to be spread out over time as usual.
- Adding short modules to existing courses to try out new themes. The feedback form can ask if people would like to know more about topics in the course.
- Offering a limited number of courses that are not in the catalogue on a trial basis, sharing the risk with the presenters, and concentrating on short events. These to be promoted by e-mail and on the website, with feedback scrutinised intensively.
Points to note
- The best recommendations may not be the traditional favourite control mechanisms of checking and restriction.
- The potential impact of the changes is dramatic. In this case it may be possible to turn a moribund training programme into something much more lively, responsive, and able to adjust to changing demands.
- Management are more personally engaged in this kind of change.
- Where an outsider is involved they have to be patient and work along with the managers.
Let's do this in two stages. (1) What is an internal control? (2) What is an intelligent internal control?
There have been many definitions of "internal control" and you may have a favourite. For the purposes of this paper I would like to restrict the term to activities that are only necessary because we face uncertainty. Not everything managers do is an internal control even though it is hard to think of something that does not affect the risk of success or failure, however we define it.
The practical point is that we are not trying to tell managers how to do their whole job. We are specialists in the principles of dealing with uncertainty, whether it is the risk of someone typing in the wrong price on a product record, or the more perplexing uncertainties around predicting demand for that product.
Even in a world without uncertainty plans would still be needed to get things done. Pay would still have to be negotiated and agreed. Products would have to be bought and sold. There's a lot that is outside the scope of internal control.
[If you are unhappy about imagining a world without uncertainty, try this alternative. Imagine reducing the amount of uncertainty around an activity and see which parts of the manager's task reduce and which remain unaffected. The work that reduces with reducing uncertainty is the internal control element.]
In effect this is saying that "internal control system" and "risk management system" mean the same thing. However, note that the most common technique of risk management systems, which is to list risks and responses to them, becomes just one more internal control mechanism. It's not even the main method of developing the control system.
I don't know if it is possible to draw a clear line between "intelligent" controls and other controls. However, intelligent controls can be characterised on the basis of who carries them out and the mechanisms used.
First, intelligent controls are much more likely to be carried out by people with management roles (at any level), though they may be helped by computer systems and other people.
Second, intelligent controls rely on a different set of control mechanisms. Here is a comparison:
Other controls:
- View of outcomes: things are either right (what we wanted/planned) or wrong, and our objectives are fixed.
- Typical mechanisms: preventing things from happening at all; checking before something is done and giving a go-ahead if appropriate; checking something after it is done and correcting if it is wrong.

Intelligent controls:
- View of outcomes: there may be different levels of performance, outcomes may be evaluated in more ways than just right vs wrong, and our objectives may change.
- Typical mechanisms: doing things to learn, learning, and adapting in future actions; exploiting statistical laws; adapting internal control plans dynamically (especially in projects).
As in the COSO framework, internal control includes activities to plan and implement internal controls. The system is continually adapting itself, which is particularly important for projects and other business initiatives where many activities are not repetitive.
In the summer of 2004 I ran an online survey to find out how people viewed intelligent controls. The actual purpose of the study was carefully concealed because respondents were self-selected. They were faced with eight imaginary reviews of management activities and for each there were five potential recommendations for improving controls. The results showed that respondents thought recommendations of intelligent controls were as likely to be good ones as other controls were, but that the controls were much less likely to be in place already.
My personal observation is that there is vast scope for worthwhile improvements in the way managers deal with uncertainty. Almost any activity can be reformed, sometimes dramatically, and the reasons for this lie in human psychology and the way its weaknesses are exaggerated by management theory.
We humans tend to have an overly narrow view of what might happen in the future. We have mental blinkers on. This is a feature of our individual cognition, but is greatly exaggerated by social pressures that usually push us towards pretending to be more certain of things than we really are. For example, a person with a promising idea may gloss over doubts in presenting it for approval to his bosses. The approval is given in writing on the basis that the plan will not need to be reconsidered or flexed, and that the objectives really ought not to change. A few months later some nasty surprises have happened and now it is hard to react because the approval given did not allow for flexibility. The project manager carries on, hoping that a bit of good luck and hard work will snatch victory from the jaws of defeat and there will never be a need to admit to a problem.
This tendency to ignore uncertainty seems to be a feature of human psychology and culture. It is a widely overlooked weakness. Consequently, intelligent controls that expand our perception of the future and help us act in accordance with that view, have much to offer.
This section offers suggestions on how to set about improving intelligent controls in the most common situations, which are:
Manager doing it yourself: The most common and important situation, even when there is some corporate improvement programme running.
Internal auditor making a visit: I'll tackle this from the point of view of the auditor, trying to be helpful.
As part of a centrally facilitated project: The project might be one to document controls for control self assessment or for Sarbanes-Oxley compliance, for example, or as a follow on from such a project. Alternatively, it might be a project to implement a "management system" such as enterprise risk management or six sigma quality.
Get started by pumping your brain with inspiring techniques for managing uncertainty. Indulge your curiosity. Cram the knowledge in until your head is bursting. (A lot of the rest of this paper is about techniques so start there.)
Having primed yourself, I suggest dreaming big but starting small. Keep within your ability to try things out and adapt.
That's it. No more advice.
Again, I suggest starting by pumping your brain full of inspiring techniques for managing uncertainty. Really push yourself to understand them and get a feel for the implications. Get good at spotting when people are acting as if they know what the future holds.
But of course your opportunities for unrestrained creative gushing will be few. As every good auditor knows, most of our work is done by asking thought-provoking questions. When intelligent controls are discussed it can feel to your interviewees as if the conversation is about their thinking abilities, so be particularly diplomatic. Make sure you have properly understood the circumstances and constraints of the people you are trying to help, and have given them credit for their achievements. Be patient even if the problem and its solution seem obvious early on.
Sometimes all people need is evidence that it's OK not to be certain of things, OK to see what happens, and OK to change your mind.
The objections people raise can be different with intelligent controls. You can still expect the usual objections that things are too bureaucratic and not worthwhile given lack of resources. You can also expect objections that techniques involving multiple possible futures, frequent revisions to plans, and adjustments to priorities are likely to be confusing.
As always, the objections could be correct but they could also be the result of a blinkered view of the future.
This is a big subject because internal control projects (e.g. for Sarbanes-Oxley or Basle II compliance) can be long and expensive, there are many different approaches, and none of them is perfect! Here are some angles specific to intelligent controls:
The mechanisms for updating controls: Intelligent controls are inevitably involved to some extent, because there is always the problem of keeping controls up to date for new circumstances, and that requires some kind of intelligent control mechanism to be part of the system. However, other intelligent controls may or may not be relevant to the objectives of the programme.
The risk of overlooking intelligent controls: If the project has the traditional focus on checks and restrictions and does not mention intelligent control mechanisms it is likely that these mechanisms will not be dealt with properly.
The need to keep flexibility in the control system: The circumstances around intelligent controls often vary so much that it is not possible to regard them as fixed (even allowing for occasional revisions). In practice the controls are continuously generated, and frequently tailored each time they are used. This is particularly true for projects and similar initiatives where many uncertainty-managing actions are taken just once.
The resources needed to implement intelligent controls: Getting people to think differently can take time, though if it's a reform people are only too pleased to make it could be easier than getting a computer system changed!
The central problem is resource. Getting managers to think differently can be time consuming and most internal auditors and risk managers are outnumbered by other employees at least 1,000 to one. What can you do that will have any impact under these intimidating constraints?
How about defining a vision, some goals, and a plan, then building a powerful economic case and getting support from top management before setting up a programme office and driving the project through to completion? Just kidding, so stop feeling cynical and annoyed. This kind of thing only works in management theory. In practice even a newly appointed CEO, buoyed by high expectations and empowered to bring about transformation, would be very lucky to get results this way, as research by George Binney and others showed in 2003.
Even the most attractive improvement programmes have to compete with many other priorities and the simplest changes can run into a quagmire of objections and confusion. To make a difference we have to work on what is possible and be realistic. Here are some ideas to make the most of limited resources and sporadic support. Throughout I will assume that top management have a generally favourable attitude towards improving risk management/internal control but their enthusiasm and interest wax and wane because of competing priorities. Although they are generally supportive in principle they occasionally introduce awkward restrictions you don't entirely agree with, and delay decisions for reasons that are not entirely clear but probably political.
Ride the waves of support: Opportunities to make progress in improving controls come and go and some may be more obvious than others. Sarbanes-Oxley or other regulatory changes, an embarrassing failed investment, a fraud, a new Board member, a project to improve customer service or introduce rolling forecasts - do not waste periods when you can get resources and attention. It's easier to do this if you have some basic design principles and an idea of what needs to be done overall, even if you have to change your ideas and can't implement changes in what would seem to be the logical sequence.
Team up with people in a similar position: Risk managers are outnumbered thousands to one by other employees, but the odds look different if you can work with other groups whose role involves changing the way the organisation works. Look for relationships with people responsible for audit, performance management, quality, compliance, legal matters, safety management, design of management reports, statistics, risk analysis, customer service improvement projects, and indeed any project that seeks improvement. These people all have frustrations that you can help with. Virtually all feel ignored and worry about how to get more support and attention. Most have frustrations that relate directly to the human tendency to have an overly narrow view of the future. They spend their lives designing templates, databases, procedures, and training, and collecting and reporting information about their progress. When they have good support, get some of your stuff included in their designs and get copies of their reports. When you have good support, return the favour.
Focus on reporting with explicit risk information: One challenge is to keep reminding people to consider uncertainty in their decision making and planning. A way to do this is to try to get more and more management information reported with indicators of uncertainty alongside. For example, if there is to be a monthly measure of customer satisfaction, ask how reliable it is and suggest that the error range be shown with the figures. (For more on this see below.)
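A minimal sketch of the idea, using invented survey figures and a simple normal-approximation confidence interval (other interval methods would do just as well):

```python
# Sketch of reporting a monthly KPI with an explicit error range rather
# than a bare point estimate. The survey ratings below are invented.
import math

def mean_with_interval(scores, z=1.96):
    """Return the mean of the scores and an approximate 95% confidence
    interval based on the standard error of the mean (needs len >= 2)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    se = math.sqrt(var / n)
    return mean, mean - z * se, mean + z * se

# e.g. 1-5 satisfaction ratings from this month's survey responses
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]
mean, low, high = mean_with_interval(ratings)
print(f"Customer satisfaction: {mean:.1f} (95% interval {low:.1f}-{high:.1f})")
```

Even a crude range like this reminds readers each month that the figure is an estimate, which is the real point of the exercise.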
Example of an uncertain KPI: A company providing medical insurance used "quality reviews" to help control the most complex and error prone part of their work, which was ascertaining the details of medical problems from customers wanting to claim against their insurance. The quality reviews involved members of the team taking turns to spend a week inspecting samples of claims processed by their colleagues. Monthly statistics on the numbers of errors found were shown on a graph as part of a pack of information for senior management on process performance. The graph appeared to show a gradual improvement in quality - about 5% over several months. Then an internal auditor reperformed some quality reviews to see if they were reliable and found twice as many errors as had been noted originally. In other words, the KPIs were about 100% wrong.
Promote eye-catching management techniques: Ideas about how to do things circulate in the minds of people in organisations. Reality TV shows, RAG reporting, variance analysis, visions, brainstorming workshops, and personal organiser programs are some ideas that just seem to pop up all the time, even though none of them works very well. They are "memes" and some are more catchy than others. Use any opportunity to promote technical ideas for internal controls that you like and think others will find interesting, memorable, and useful.
Don't tell people how to think when you don't have to: Since intelligent controls tend to involve intelligent thinking, documenting them in detail can be difficult and, occasionally, counter productive. There are alternatives - see below.
Concentrate on the controls, not the risk analysis: Internal control projects tend to increase in power when they get beyond an initial stage of risk assessment and people start to focus on the internal controls themselves. However, getting to that stage takes work. For ideas on minimising that work see below.
Concentrate on people who can help you and the organisation through their uncertainty management skills: In principle, everyone has a role in internal control. In practice, some people are far more important than others, so influencing a few of the key people can make a big impact on overall performance. Look for people who are interested in managing uncertainty better because of their intellectual preferences and because it will make a big difference to their job performance. These people give the chance to make some demonstrable improvements, and that is helpful in influencing others.
Whether people have to identify controls they already use, or think of new ones, a list of likely mechanisms is a valuable memory aid. Do not rely on stating risks, control objectives, or high level control requirements (all of which amount to the same thing) to elicit all the controls you need. Also give a prompt list of potential mechanisms.
This isn't just for beginners and non-specialists. I find a prompt list helps me think of controls and I've been doing this work for more than a decade.
This is particularly important for intelligent controls because they usually come to mind less easily and because people often have a poor understanding of how they really behave at work. Ask them what they do and they tend to tell you what their intellect says they ought to do. There has been some fascinating research on this phenomenon.
Back in 1975 Henry Mintzberg wrote "The Manager's Job: Folklore and Fact", which opened with a section contrasting real life with theory. Did managers plan, organise, coordinate, and control? Were they systematic planners who delegated repetitive duties to give themselves time to scientifically study aggregated information from formal information systems? Oddly, they often said they were, but observation and diary studies showed they were not. What really happens is that managers are bombarded with things to deal with every few minutes, get nearly all their information from conversations with their focus on the latest news and hottest gossip, and have many regular duties too.
A lot can be drawn from this but I would like to point out two things:
Managers tend to say they do what theory advises, but their real behaviour is different.
Managers spend a lot of time looking for the latest information and reacting to it. This is characteristic of some of the intelligent controls described below. It may be that people are much better at intelligent control than management theory suggests, and better than they themselves appreciate.
In 1992 David Boddy and David Buchanan's book "Take the lead" reported the results of diary studies and a questionnaire looking at people managing change projects. Once again, systematically making plans and tracking progress against them barely figured. What did take up time and attention was the endless pressure to deal with conflicting interests, fluctuating support, and changing circumstances and goals. For example, one project manager was asked to build a refinery, but it was not known what products the refinery should produce or in what quantities.
These findings contrast dramatically with the focus and advice in nearly all books and courses on how to manage projects. Uncertainty in change projects is far greater than usually imagined and means project managers spend a lot of time doing things they feel they ought not to be doing, like coping with loss of resources that had been promised and trying to clarify goals they feel should have been clear and certain from the start. In fact they are doing the job that needs to be done and should get appreciation for it.
In 2003, George Binney and others reported on their study of eight newly appointed CEOs in "Leaders in transition - the dramas of ordinary heroes." Their summary reads like a blend of poetry and psychotherapy, but despite this there are some useful points. Their CEOs often referred to the expectation of others (and their own expectation) that they would create a vision and then inspire people to work towards that vision. In reality the leaders were almost entirely controlled by their context and the ones who worked along with the flow, looking for movements that were in a favourable direction and taking advantage of them, did best.
All three studies point out the conflict between management theory and real life. The conflict means people are not good at describing their own behaviour and tend to feel guilt and frustration that they do not live up to the theory. Yet, real behaviour contains elements of intelligent controls (mainly responsiveness to events) and we should try to identify those elements and reinforce them.
Here's a method of developing a set of documented control standards for intelligent controls that lets you start quickly with lightweight documentation and grow it gradually as experience permits.
Begin by identifying a handful of basic management processes that you would like to cover with your control standards. For example, you might start with project management, operations management, and investments.
Then, for each process, write down what a typical set of control mechanisms would be, and write about how to tailor those mechanisms to different situations. I call these generic standards with flexing guides. Issue both and help people to use them. At this stage you are relying more on tailoring than is ideal, but that is to allow you to get started quickly.
When the standards and flexing guidance have been applied a few times in different departments to different activities it is time to start producing more specific generic standards. For example, in place of a generic standard for "project management" you might have one for "construction projects" and one for "IT projects", with the original scheme still applied to any other type of project. In this way, the natural process of learning is assimilated into your documentation and you rely less on tailoring. However, an element of tailoring will always be necessary with intelligent controls, so flexing guides remain even though they can be a little more prescriptive.
Compared to just asking people to follow a process of risk analysis and then think of risk responses, the above combination of standards with flexing guides has more teeth. In effect, you are saying to people that they should use the standard approach or have a good reason for doing something else.
Compared to just issuing control standards this method has the flexibility needed to prevent it being rejected as rigid, inappropriate, and generally unworkable.
Finally, compared to trying to go straight for detail on the first pass this method offers a much easier starting point and a natural way to incorporate learning over time.
A lot of management books reinforce two ideas: (1) Managers can and should think through decisions, plans, etc. by following a sequence of thinking steps as prescribed in the book. (2) This has to be done only once, or infrequently, e.g. for an annual budget.
Of course we all know that, except when people take the books literally, management is really a lot of thinking going on in parallel by different people, not in the theoretical sequence, and usually continuous over a long period of time. Knowing this, authors often join the end of their process to its beginning and call it a cycle, or include a paragraph saying that in practice the methodology may not work out exactly as drawn, or point out that the process they have shown is just the start and that it has to be continued. None of these ploys goes far enough. Here are two techniques for ejecting the myth of linear sequences from our management models:
Structure the documentation: Define the documentation and how the thinking in each part links to others so that revisions will ripple through. Having structured documentation, with a sense of sequence and cross referencing, is helpful. Do not confuse this with the sequence of thinking, which is more likely to be lots of people thinking in parallel on an "easiest-first" basis, simultaneously top-down and bottom-up.
Define channels of communication: A channel of communication in this sense is (a) between two people or teams, (b) directional, and (c) for a specified type of information. For practical purposes it helps to say what physical form the channel takes. In an organisation there are lots of these channels and they are operating in parallel. Usually there is no sense that certain communications have to take place before others. For example, within a management system designed to monitor the health of a billing process and improve it there might be a channel like this: "John (IT support) updates Jill (billing operations) weekly by email of the latest progress and position on billing-related bug fixes." Draw channels using diagrams showing all the channels relevant to your scope, with circles for people and arrows for channels, annotated to show their content.
This helps clarify who should be talking to whom about what, but it does not prescribe anything more, which can be helpful when managers do not want to be told what to think.
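As an illustrative sketch (not a prescribed tool), a channel definition like this can be captured in a few lines of Python and printed as a crude textual version of the diagram. The people, content, and frequencies below extend the hypothetical billing example; the second channel is invented for symmetry.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    sender: str     # person or team sending the information
    receiver: str   # person or team receiving it
    content: str    # the specified type of information
    form: str       # physical form the channel takes
    frequency: str  # how often it operates

# Hypothetical channels for monitoring the health of a billing process.
channels = [
    Channel("John (IT support)", "Jill (billing operations)",
            "progress and position on billing-related bug fixes",
            "email", "weekly"),
    Channel("Jill (billing operations)", "John (IT support)",
            "newly observed billing errors", "email", "as they occur"),
]

# A plain-text stand-in for the annotated diagram: one arrow per channel.
for c in channels:
    print(f"{c.sender} --[{c.content}; {c.form}, {c.frequency}]--> {c.receiver}")
```

Even this minimal listing makes gaps visible: a channel with no return channel, or a team that receives nothing, stands out immediately.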
In the following sub-sections I describe some intelligent control techniques, with suggestions on where and how to use them, and links to further information. They are not in any particular order other than that I have tried to start with elementary techniques and work up towards techniques that apply more than one of the elementary techniques in combination.
None of these techniques is an excuse to just get started and see what happens. We are looking for more control, not less, through a broader view of the future.
Narrow view of the future:

Expected outcome is clearly defined, and planned for.
Control attempted by motivating people to stick to the plan.
Surprises frequent and lead to confusion and recriminations.
Surprises hard to respond to because positions are locked in.

Realistic view of the future:

Potential outcomes systematically analysed.
Control attempted through learning and responsiveness within a supple yet resilient plan.
Surprises rare and responded to calmly and quickly as normal business.
Unfolding events easily accommodated within flexible positions.
Delays are so common in real life that we tend to assume an implicit flexibility in dates of actions, but this is not as good as explicitly planning with flexibility. Flexibility in all aspects of plans, not just dates, is usually necessary and there are various ways to achieve it.
Leaving things unstated in the plan: You could leave unstated any combination of details (who, what, when, etc). Usually plans do not go down to complete detail on how things are to be done so this form of flexibility is to be expected; the only question is how much of it is best in a given situation.
Stating a plan with alternative paths: In the extreme such a plan could look like a computer programme in a parallel programming language, with communication between parallel processes, and within each process any combination of sequence, selection, and iteration of actions. In practice we might just suggest some alternative actions and decision rules in plans to increase their flexibility, or insert some contingency actions.
Use policies: Sometimes it helps to give up on the programming style and construct an approach that is a collection of policies rather than a recipe. Some have suggested that an organisation's policies are its DNA and that experience can be used to select the DNA/policies that work.
Planning to do more planning: This is simple and effective. How many plans should have planning activities within them but don't?
One common group of risk management actions involves thinking about cause-effect relationships and how they can be exploited. Although this kind of thinking is extremely common we need to be careful, because not everything that looks like risk management really is.
We often think of the world as operating by cause and effect, often with the details of how cause leads to effect being unknown to us. We could imagine a vast network of cause and effect links, but in practice we usually only think of a very few at a time. If we put our microscope over one event in the network its situation looks like this.
events (cause) > [unknown causal links] > event > [unknown causal links] > events (effect)
(The square brackets just mean that there may not be any unknown causal links.)
Since events include things that we could do we have various ways to influence the world shown:
Add events (our actions) that will change the cause events, directly or indirectly.
Add events (our actions) that will change the central event, directly or indirectly.
Add events (our actions) that will change the effect events, directly or indirectly.
Learn more about the unknown causal links and then reconsider our options.
The idea is simply to think about the possible causes and effects of something and then think of ways to manage them to advantage. It may be that some event is desirable, so you want to increase the likelihood of it happening and magnify the resulting benefits if it does. Alternatively, the event may be undesirable, so you want to decrease the likelihood of it happening and reduce the impact if it does.
This is one of the most commonly described approaches to managing risk, but is surprisingly difficult to separate from other planning. We tend to see acts as managing risk only when:
they change the likelihood of something happening, but only slightly;
they change the knock-on impact of something that is unlikely to occur; or
they influence something that is only a loose probabilistic cause of something we are interested in.
In each of these cases there are actions that look very similar but are something more than risk management. (1) If we take an act that radically changes the likelihood of something happening we would say we were making it happen (or not happen), not managing the risk of it happening (or not happening). (2) If we take an act that changes the effects of something that we are expecting to happen we would say we were just managing the impact, not managing the impact of a risk. (3) If we take an act that changes a cause that strongly determines the event of concern we would just say we were managing, not managing risk.
Obviously, the actions that are most interesting are those that do have a radical influence on the likelihood of something happening, or that change the impact of something we are expecting to happen. The message of risk management is that we shouldn't leave it there, but should go on to consider the weaker influences and less likely outcomes. The psychology of this is the same as for other intelligent controls. We have a tendency to view the future with blinkers and that includes assuming actions we plan will be effective and failing to plan for outcomes that are not the most likely.
Hiking example: Imagine you are planning to go hiking in the countryside at the weekend. You will need to wear your walking boots when hiking so your objective is to be wearing them. To reduce the risk of hiking without your walking boots on you plan to find and put on your walking boots. Is this managing risk or just an odd way of justifying putting your boots on? Obviously this is not risk management. Finding and putting on your boots is an action that takes you from not having boots on, with near certainty, to having boots on, again with near certainty. It makes a radical difference to the probability of your having boots on. This is the main action you plan to take, not risk management for it.
Now imagine you suddenly remember that the last time you used your walking boots was months ago before you moved to a new flat and you're not sure where they are or even if you kept them. In the light of that you decide to have a look for the boots before the weekend so you have time to buy some more if you can't find them. Now, that's risk management!
Why am I going on about this distinction? The answer is that it allows me to make two important practical points. Firstly, it is easier to consider weaker influences and unexpected outcomes as part of initial planning than do it separately later. (The same point could be made about most intelligent controls.) Secondly, risk management sessions are too often wasted writing down actions that are not risk management.
Hiking example continued: Suppose I wrote out a "risk register" for your hiking trip and on there I wrote in the Objective column "Wear walking boots", and in the Risk column "Failure to wear walking boots", and then in the Action column "Find and put on walking boots." On the face of it I seem to have fulfilled the requirements of the risk register template. There is no tip off in the wording that reveals I kept my mental blinkers on the whole time and have done no risk management whatsoever.
The following sub-sections are examples of simple patterns of cause-effect intervention.
This is where we set up a contract with another party so that, if some event happens, they will exchange something of value with us that tends to compensate for the impact of the event. Usually we think of this as compensation for something bad that happens to us but it could also be something good that happens leading to us paying the other party. Examples of this include taking out insurance, reinsurance, laying off bets by placing opposite bets, hedging contracts, penalty clauses, choosing to deal with organisations who compensate, and profit sharing (i.e. sharing our profit with others).
Here, we make a contract with another party for them to deliver some result to us for a price that is fixed to us. If the cost of achieving the result is more or less than the price we pay that is the other party's concern, not ours. Examples include sub-contracting at a fixed price and factoring debt.
Imagine there are two actions we need to take and the outcome overall depends on whether those actions are effective. Three of the possible connections between effectiveness and overall success are:
IF ( action 1 is effective AND action 2 is effective ) THEN result is success.
IF ( action 1 is effective ) THEN result is success.
IF ( action 1 is effective OR action 2 is effective ) THEN result is success.
Clearly if we have a choice our least favourite rule will be the first one. Both action 1 and action 2 have to be effective for us to reach success. In the second rule action 2 is irrelevant to success, which is helpful, but we only have one chance to succeed, and that is by making action 1 effective. The final rule gives us two chances for success, albeit at the cost of potentially having to attempt both actions.
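If we assume, purely for illustration, that the two actions succeed or fail independently with the probabilities below (invented figures), the difference between the three rules is elementary probability:

```python
# Illustrative probabilities that each action turns out to be effective.
p1 = 0.8
p2 = 0.7

p_and  = p1 * p2                  # rule 1: both actions must be effective
p_only = p1                       # rule 2: only action 1 matters
p_or   = 1 - (1 - p1) * (1 - p2)  # rule 3: either action is enough

print(round(p_and, 2))   # 0.56 -- our least favourite rule
print(round(p_only, 2))  # 0.8
print(round(p_or, 2))    # 0.94 -- two chances of success
```

Even with quite reliable actions, the AND rule drags the chance of success well below either action's individual reliability, while the OR rule lifts it above both.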
This analysis gives a number of techniques for making plans that cope with uncertainty better.
Once you recognise that there is a chance of some actions not being as effective as you had imagined the value of reducing dependencies is clear. Sometimes an existing plan includes unnecessary dependencies and it is possible to change rule 1 into rule 2 and still achieve the same result. However, more often it is only by slightly modifying our view of success that we can make it dependent on fewer effective actions. In other words, by reducing dependencies we can go from:
IF ( action 1 is effective AND action 2 is effective ) THEN result is success.
to:
IF ( action 1 is effective ) THEN result is success'.
One way of getting from rule 1 to rule 2 is to break success into two or more parts (i.e. increments), each of which is dependent on fewer effective actions. Sometimes achievement of each increment of success is dependent on achieving previous increments, but sometimes it is not, giving two variants on this technique. In other words, from the all-or-nothing:
IF ( action 1 is effective AND action 2 is effective ) THEN result is success.
to the incremental:
IF ( action 1 is effective ) THEN result is part of success.
IF ( action 2 is effective AND action 1 was effective ) THEN result is the other part of success.
to the fully independent:
IF ( action 1 is effective ) THEN result is part of success.
IF ( action 2 is effective ) THEN result is the other part of success.
If you can find other actions that achieve the success desired this opens up two other tactics, based on changing rule 2 to rule 3. If there is scope for trying one action and then using the second only if the first fails, then the second action is a contingency plan. If this is not possible it may still be possible to try both actions in parallel. Acting in parallel is more costly but may still be worthwhile.
It may be that we need to take other actions in order to put in place resources needed for a contingency action, in the form of contingency funds, redundant systems, and so on.
In all these situations it may be that there are many actions and many increments of success, not just two.
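Under the same independence assumption, and with invented costs and probabilities, the two tactics can be compared numerically. With no deadline pressure both give the same chance of success; the contingency plan is cheaper on average, while acting in parallel pays a certain cost to save time.

```python
# Illustrative figures only: probability each action is effective, and the
# cost of attempting it.
p1, p2 = 0.8, 0.7
c1, c2 = 100, 120

# Either tactic succeeds if at least one action is effective (rule 3).
p_success = 1 - (1 - p1) * (1 - p2)

# Contingency plan: attempt action 2 only if action 1 turns out to fail.
cost_contingency = c1 + (1 - p1) * c2

# Parallel: attempt both actions regardless, trading cost for speed.
cost_parallel = c1 + c2

print(round(p_success, 2))         # 0.94
print(round(cost_contingency, 2))  # 124.0
print(cost_parallel)               # 220
```

The gap between the two costs is what you pay for speed; whether that is worthwhile depends on how costly delay is in the particular situation.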
Consider the humble inbox of your e-mail program. It is a buffer between events and your actions. It gives you time to respond. Another example is the store room of a corner shop. When a product has sold out in the shop the shopkeeper can get some more from the store room and perhaps order more from a supplier. Again, the buffer stock gives the shopkeeper time to respond, but in this case it is by taking a rapid response first (refilling the shelf from stock in the storeroom) that he has the ability to take the slower response of ordering more from the supplier.
In principle, a buffer is an action we can take quickly, in response to some event, to gain time to take some other action. You could have a sequence of buffer actions, each gaining time to take the next one.
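The corner shop can be sketched as a toy simulation; the stock levels and reorder point are invented for illustration. The fast buffer action (refilling the shelf) gains time for the slow one (ordering from the supplier).

```python
from collections import deque

# Invented figures: a small shelf, a store room, and a reorder point.
REORDER_POINT = 3

shelf = deque(["tin"] * 2)
store_room = deque(["tin"] * 6)
orders_placed = []

def sell_one():
    shelf.popleft()                       # a customer buys a tin
    if not shelf and store_room:          # fast response: refill from stock
        for _ in range(min(2, len(store_room))):
            shelf.append(store_room.popleft())
    if len(store_room) <= REORDER_POINT and not orders_placed:
        orders_placed.append("supplier order")  # slow response, time in hand

for _ in range(4):
    sell_one()

print(len(shelf), len(store_room), orders_placed)  # 2 2 ['supplier order']
```

Note that the supplier order is triggered while the shelf is still full: the buffer means the slow response starts before any customer sees an empty shelf.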
This is something we look for when our actions could give rise to a variety of outcomes and some are particularly unpleasant. Under the circumstances we will try to plan towards outcomes comfortably dissimilar from the unpleasant ones.
Example: Imagine you are preparing to take a professional examination that is important to you and that failure would be a very serious outcome indeed. Would you do just enough preparation to pass, provided nothing untoward happens, or would you aim to pass comfortably? When I was in this situation many years ago it surprised me that there were people I studied with who aimed to do "just enough". I would imagine coming down with a bad cold the day before the exam, mis-reading my watch and doing a question too few, then having a page of my answers lost by the examiner. I was relieved to pass every time, though I once came within one mark of failure, but many people failed.
The situation here is that we are operating some process that must respond to events, but the process has its limits. To keep within those limits we may be able to take actions that influence the events and use them to manage that demand.
Lack of exercise is thought to be a risk factor driving many ailments, so taking exercise is a way to manage the risk of those ailments. Notice that if the connection between a cause and the effect is a close one (e.g. if you do not exercise you are virtually guaranteed to get the ailment, but will not get it otherwise) we would not say this was managing the risk. There isn't enough uncertainty involved.
This is a huge area, but based on the simple idea that one way to tackle uncertainty is to learn faster and be more flexible. That can involve getting set up to learn faster, doing things to generate learning, getting more feedback faster, making use of learning more often, and creating flexibility to respond to what you learn. Although this is common sense it is surprising how often organisations do none of this, instead carrying on as if they know everything from the outset and do not need to learn or adapt.
Getting set to learn: If you promise your boss something will happen in a certain way and then you start revising your priorities and methods as you go along that looks out of control. In contrast, if you say at the start that you will be monitoring events and progress frequently and adjusting to take advantage of new information that looks like good, responsible management! More seriously:
Documents like approved plans and agreed contracts need to be written in a way that gives room for learning and adjustment or, better still, requires them.
Plans need to allocate time and resources to learning and adjusting.
People need to understand that this is expected of them.
Often, the whole structure of a plan needs to be designed to create a situation in which you can learn quickly and have the flexibility to adapt instead of being stuck with the consequences of old decisions.
Finally, allocate responsibilities to people in a way that allows flexibility. For example, if you have a set of five products to manage, then allocating a manager to each product creates a defender for each product who may well block attempts to manage the products as a portfolio. You may want the product management team to pull support for products that are failing and give it to the successes, but the managers will resist. In contrast, if managers are responsible for the portfolio they will be more willing to change things.
Doing things to learn more, faster: Although much can be achieved by retrospectively trying to make sense of what has been happening it is hard to learn how the world works and what you can do to influence it by monitoring trends. You have to experiment. In other words, try different things and see what happens. Making lots of small, quick experiments is a common management strategy that you can see, for example, guiding the development of web-sites and the selection of new products. Our ability to spot potential winners is so poor that trial and error turns out to be a good strategy, provided you can do it quickly and cheaply. There's a fascinating literature on "design of experiments" which is all about designing efficient experiments. One interesting technique is EVOP, which involves making small changes to input parameters of a live process to see what effect it has on the output. Provided those changes are small enough the customer receiving the output won't mind, but the tiny changes can be enough to guide continuous optimisation.
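A minimal EVOP-style sketch, with everything hypothetical: suppose a live process's yield peaks at a temperature setting of 80, but we only ever see noisy measurements. Nudging the setting by small steps and keeping only the changes that improve measured output drifts towards the optimum without any disruptive experiment.

```python
import random

random.seed(0)

# Hypothetical process: yield peaks at a temperature setting of 80, but we
# only observe it with measurement noise.
def observed_yield(temp):
    return 100 - (temp - 80) ** 2 / 10 + random.gauss(0, 0.5)

temp, step = 70.0, 1.0          # small steps the customer will not notice
best = observed_yield(temp)
for _ in range(40):
    trial = temp + random.choice([-step, step])
    result = observed_yield(trial)
    if result > best:           # keep the change only if output improved
        temp, best = trial, result

# The setting has drifted towards the (unknown) optimum of 80.
print(temp)
```

Real EVOP designs are more careful than this greedy loop (they average repeated runs to separate signal from noise), but the principle is the same: tiny live experiments, continuously.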
Other fact finding and research techniques can be useful, and even just thinking more is an important option.
Sometimes in a project you have a choice about when you do tasks whose outcome is very uncertain but important to the overall outcome of the project. Unless there are special circumstances, do them as early as you can.
Get more feedback, faster: Include performance measures that give fast feedback. For example, if you are trying to improve educational standards in a school then exam results are an important measure but cannot be used alone because they are only available once a year, which is far too infrequent. We really need to know next week if something seems to be working.
Adapt often and in a quick but controlled way: We're used to discussions about objectives, priorities, resources, and plans being tough and often political. They are something we would perhaps prefer to do no more than annually. There's no room for that kind of discussion if you are reviewing everything monthly! If the monthly reviews lead to violent fluctuations in approach as warring factions struggle for the upper hand then adaptation will not work. Fortunately, if reviews are more frequent they are naturally easier and quicker. People get used to the routine. (You never get used to an annual review process, particularly as they're always reinventing it.) People know the issues. They've discussed many of the alternative actions before. The more frequent the reviews the easier they are and the smaller the adjustments.
Build flexibility: Learning and thinking about what you have learned give no advantage if there's nothing you can do differently as a result. Options for doing things differently might be devised after the learning takes place, but it is helpful to have given yourself flexibility beforehand. Here are some types of flexibility to aim for:
Flexibility as to extent
Scalability: Look for ways to make it possible to increase or decrease the investment, to any extent required, and at low cost.
Incremental delivery: Look for ways to deliver in small increments, with the opportunity to assess the situation after each. (See Evolutionary Project Management.)
Flexible timing: Look for flexibility on when you do things, including when you make decisions.
Easy termination: Look for ways to get out quickly and easily if necessary.
Flexibility as to purpose
Commonality and multifunctionality: Do things that will be useful in all or many anticipated futures. This may be because the action has several helpful effects.
Reconfigurability: If the investment or product cannot be made multifunctional, can it be made configurable? This might take the form of last minute customisation.
Reusable components: If the investment or product cannot be made multifunctional or reconfigurable, can it be made so that it is possible to reuse parts of it? Make it modular, remanufacturable, or recyclable.
Balancing cost and flexibility
Gaining information and flexibility often involves extra costs. There's a fine line between intelligent flexibility and being weighed down by dabbling in fringe activities, and that line tends to move with expectations of the economy. In good times people are happy to expand and try ideas, encouraged by optimistic expectations of the future. In bad times the new management cuts away what is now described as "non-core activities".
As mentioned earlier, the usual human tendency is to see the future too narrowly. In good times, when our expectations are high, we are too confident that our investments will be successful and tend to make too many of them. In bad times, when our expectations are low, we are too confident that investments will fail and tend to make too few of them. In both cases, we tend to make too little effort to maximise the learning and flexibility in our plans.
Most of the time the right balance is best struck by judgement, with the hard thinking going into finding efficient ways to build flexibility and learning into our plans. Occasionally a big decision has to be made that justifies quantitative methods, such as those discussed below.
Techniques and theory developed to help investors make money from shares and other securities have wider application, at least in principle.
Diversification is simply not putting all your eggs in one basket. If you invest your money in a mix of different securities the variance of returns from those investments will be lower than if you had invested the same money in just one security, provided the returns of the securities in the diversified portfolio are not perfectly positively correlated.
In the language of finance, the mix of "risk" (i.e. variance of returns) and returns is better if you diversify.
Curiously enough, if you could divide your money between two securities that were perfectly negatively correlated the variance of returns from your portfolio would be zero. Rises in one share would always be tracked by falls in the other. In practice, this is not achievable in real stock markets so investors usually hold between 15 and 20 securities and cannot diversify away the variance resulting from overall economic cycles.
The diversification principles apply to other situations, such as portfolios of projects or the performance of sales people. (1) The variation of performance of the whole portfolio is less than the variation in individual items, in percentage terms. (2) Combinations that tend to be affected in opposite ways by the same factors have results that are more predictable overall, though not necessarily higher on average.
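The two-security arithmetic behind these statements can be written out directly; the equal weights and 20% volatilities below are illustrative only.

```python
# Portfolio variance for two securities:
#   var_p = (w1*s1)^2 + (w2*s2)^2 + 2*w1*w2*rho*s1*s2
# Illustrative figures: equal weights, both securities with 20% volatility.
w1 = w2 = 0.5
s1 = s2 = 0.20

def portfolio_sd(rho):
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return var ** 0.5

print(round(portfolio_sd(1.0), 3))   # 0.2   (perfect +ve correlation: no benefit)
print(round(portfolio_sd(0.3), 3))   # 0.161 (partial correlation: volatility falls)
print(round(portfolio_sd(-1.0), 3))  # 0.0   (perfect -ve correlation: variance gone)
```

Any correlation below +1 reduces the portfolio's volatility below that of the individual securities, which is the whole of the diversification benefit in one line of algebra.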
Rebalancing is the inevitable consequence of trying to maintain a portfolio with the same balance of different types of security. From time to time, securities whose price has risen are sold and securities whose price has fallen are bought, so that the total value of securities of each type returns to the original ratios. This tends to increase returns through buying low and selling high, even though it may involve buying more of securities that are falling for good reasons and will fall further.
In other portfolio situations, like projects, rebalancing is still necessary but your policy will be different. Whereas in securities markets it is not known which direction the price will go next, in real projects you know that a failing project is not something you want to invest more into, so your rebalancing policy should be different. Typically, although you need to be experimenting with new items in your portfolio, ones that are going well should get more investment while ones that are not should get less.
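Here is a toy illustration of securities-style rebalancing, with invented period returns that happen to mean-revert, which is exactly when rebalancing shines. It is not a claim about real markets; with trending returns the comparison can go the other way.

```python
# Invented period returns for two securities; they happen to mean-revert.
returns_a = [0.20, -0.15, 0.10]
returns_b = [-0.05, 0.10, -0.02]

# Rebalanced portfolio: restore a 50/50 money split after each period,
# selling whichever security rose and buying whichever fell.
value_a = value_b = 50.0
for ra, rb in zip(returns_a, returns_b):
    value_a *= 1 + ra
    value_b *= 1 + rb
    value_a = value_b = (value_a + value_b) / 2

# Buy-and-hold comparison: same returns, no trading.
hold_a = hold_b = 50.0
for ra, rb in zip(returns_a, returns_b):
    hold_a *= 1 + ra
    hold_b *= 1 + rb

# With these particular returns the rebalanced portfolio ends ahead.
print(value_a + value_b, hold_a + hold_b)
```

The "buy low, sell high" effect is visible in the numbers, which is why the project-portfolio policy described above has to be different: project returns are not as unpredictable as security prices, so mechanically topping up the fallers would mean pouring money into failures.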
If you have the job of designing all or part of an internal control system forget about doing a risk analysis and writing internal controls against each risk. That's an auditor's technique and useful mainly for checking your coverage. The smart, natural way to design is to:
Design top down: Early on you rarely have full details of how systems and processes will work, but this is no barrier to sketching in at a higher level the architecture of the control system, what types of control you expect to rely on most, and even to identifying some specific controls that are clearly going to be part of the design. This allows you to plan further controls design work. Aim to have a good idea of what you are building after just 10% of the design work is done.
Adapt generic schemes of control: Don't start from scratch. Begin with a generic scheme of controls and tailor it to the circumstances. These circumstances will relate not only to risk, but also to economics, time available, strategy, and cultural fit.
Put pre-fabricated elements together: Often one generic scheme is not available or appropriate as a starting point and it is better to take bits of schemes and assemble them.
Refine the design in the light of experience and measurement: It's hard to predict error rates in advance and there are always problems with controls when you use them in practice. Anticipate this and be ready to measure and analyse problems and potential wasted effort during the first few months of live operation.
This refers to management techniques that involve making lists of "risks", "potential opportunities", "assumptions", "uncertainties", or any combination of the foregoing. For example, a construction company doing a project might use several of these techniques, including:
a list of the assumptions underlying the calculations used to reach its overall project cost estimate;
a list of safety risks affecting the site and the project; and
a detailed analysis of the project plan leading to a list of risks to its successful completion.
These techniques can range from a five minute exercise by someone working alone to a six month programme of workshops. Longer, however, is not necessarily better.
They are useful for opening minds to the possible futures and for deciding where to direct risk management resources. They are not the only way to think of new internal controls but they are one of the best known. This way the internal control system is self generating because it includes activities that extend or adapt the control system itself.
Many of the lists that are used could be much better than they are. Here are some things to aim for in your designs:
Prefer the phrase "areas of uncertainty": The most common method is to ask people for "risks" but this almost always causes them to think about things that are currently unsatisfactory. Even if you can get past that there is still a strong tendency to focus only on negative things. It is also common to think of "risks" as single things, when in fact they are almost always sets of things and as such need careful definition and different technical treatment.
Plan for many iterations: Don't expect people to get their list perfect first time, or for the list to remain current for very long. The ideal process is quick and easy, and each iteration provides useful analysis of what to do next. For example, in building a marketing plan the very first list of uncertainties will help identify what research and analysis would most help to develop the plan. By the sixth iteration the plan may be ready for approval and its uncertainties list has become a key part of the proposal.
Start early: People tend to look at the future as if wearing blinkers. To offset this it helps to generate awareness of the uncertainties involved as early as possible, before people are seduced by their own over-confidence into thinking they know what the future holds.
Have a rationale for the way the uncertainty is broken down: Areas of uncertainty don't define themselves. Their boundaries are our choice and we should try to choose a breakdown that works well for our purposes. One of the most common ways to ensure this is to base the risk analysis on an explicit causal model. (See below for more on this.)
Use logical ratings: If you are going to rate the significance of each area in some way then do it logically. One approach to expressing the significance of an area of uncertainty is to define a probability distribution for its impact. An approximation to this that works for subjective estimates is to ask for impact levels for which the cumulative probability is equal to a standard set of values e.g. "Give me a level of impact such that you are 90% sure the actual impact will be less." Alternatively, you can keep the impact levels fixed and ask for confidence ratings. Never rate a set of risks by giving a rating for "probability" and another rating for "impact." It's meaningless.
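One simple way to use such percentile judgements is to interpolate between them and simulate. The elicited figures below are hypothetical (impacts in £000s), and the linear interpolation is a deliberately crude way to turn a few percentile judgements into a distribution.

```python
import random

random.seed(1)

# Hypothetical elicited impact levels (in £000s) at fixed cumulative
# probabilities, e.g. "an impact such that you are 90% sure the actual
# impact will be less".
elicited = {0.10: 20, 0.50: 50, 0.90: 120}

def sample_impact():
    """Draw one impact by linear interpolation between the elicited points.
    The tails beyond the 10% and 90% points are ignored for simplicity."""
    u = random.uniform(0.10, 0.90)
    points = sorted(elicited.items())
    for (p_lo, x_lo), (p_hi, x_hi) in zip(points, points[1:]):
        if p_lo <= u <= p_hi:
            return x_lo + (u - p_lo) / (p_hi - p_lo) * (x_hi - x_lo)

mean_impact = sum(sample_impact() for _ in range(10_000)) / 10_000
print(mean_impact)  # noticeably above the 50% point, because of the skew
```

Notice that the mean sits well above the median judgement of 50: the long upper tail pulls it up, which a single "impact" rating would have hidden entirely.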
Make clear the empirical support for ratings: Similarly, if you are going to rate the significance of each area, don't confuse gut feel with statistics. Ratings of the likely impact of uncertainties tend to be supported by a mixture of gut feel and hard evidence. For example, there is a big difference between believing, from gut feel, that the probability of losing a particular customer in a year is 0.1, and being able to say that over the last 10 years an average of 12% of customers have been lost annually, though things have improved, and therefore you think the probability should be estimated at 0.1. Clearly, if we are relying on gut feel there is more likely to be something to gain from getting some hard information.
Risk analyses are based on causal models (but not necessarily explicit models). How else would we have any sense of the probabilistic impact of an area of uncertainty? Sometimes those models are confused and implicit. Ask a group of people to "brainstorm some risks" and what you get is a jumble that arises from the many different mental models used by people in the group. That's not necessarily a bad thing because those models tend to be richly detailed and up to date. At the opposite end of the spectrum the analysis might be based on an explicit computer model of a business, fully quantified. Somewhere in between are many different types of model with different styles and levels of explicit detail in a sea of implicit models (i.e. judgement).
Here's an example to illustrate the principles at work. Imagine we have a model that says that if our employees smile at customers then more hot dogs will be sold. Consequently, we can see that we have three areas of uncertainty from this tiny fragment of model:
Degree of smiling: How many will our employees manage? How convincing? Can they keep it up all day and all night?
Sales of hot dogs: How many and at what price?
The link, if any, between smiles and dog sales: Our causal model could be wrong. This is the "model risk" and is often ignored. In this case ignoring model risk could be a big mistake. Maybe the secret is for the employee to match the customer's mood more closely, perhaps being careful to be just a little less ebullient with customers who would rather wallow in their misery.
Often, models do not itemise individual causes but just name groups of them e.g. "Regulation", "Competition."
Listing areas of uncertainty doesn't stop with listing your own. In negotiations it is useful to examine the other side's areas of uncertainty. If they see the future differently from you that can be the basis for what each side considers a good deal, and you might realise that the other side will put a value on information you can offer them.
Links and references
Our society expends a great deal of time and effort teaching quantitative methods for making decisions. We learn mathematics at school that goes far beyond the basic arithmetic needed for keeping score. Accountants sit tough exams on building discounted cash flow models to support business decisions. Many organisations have policies that require financial modelling to support business cases.
Yet, despite this, we are often unwilling to be guided by these calculations and frequently feel they are wrong. We think "I don't understand this and the results seem wrong. I don't trust it." A big part of the explanation for this wasteful situation lies in how we deal with uncertainty. The reality is that many calculations about the future are wrong.
If quantification is to play a useful role, as it can, it needs to leave people thinking "I understand this, and the results seem right - in fact they've improved my intuition. I trust this."
The controls designer does not need to understand the details of all the modelling techniques that can be used, but does need to know the main principles and techniques so that he/she can guide others towards techniques that are simple yet reliable, and overcome the inevitable objections.
This section explains the main reasons calculation is ignored, the need for more explicit quantification of uncertainty in quantitative predictions, and the techniques that are most likely to be useful (i.e. they are simple yet reasonably accurate). This is done in the form of responses to common objections.
"We don't really take any notice of calculations now, so surely putting more effort into calculation is a waste of time?"
One of the main reasons people don't take much notice of calculated predictions is that they often think the predictions are wrong, and the reason for that is often because they are wrong for reasons related to uncertainty.
Explicitly quantifying uncertainty in calculations, such as cash flow forecasts, discounted cash flow models, and valuations, is vital if big mistakes are to be avoided. The three main problems that result if we don't do this are:
False impression of certainty: Precise, single-figure valuations, forecasts, etc. imply a level of certainty that simply is not believable, undermining their credibility (rightly). However, they also perpetuate the myth of our own forecasting ability and encourage us to keep our blinkers on.
The Flaw of Averages: This is the belief that there is no need to show uncertainty explicitly because if we put average values in for things we don't know for sure the result will also be the average value. In fact this is not so unless the model is linear, which models rarely are.
Example: A publisher might think the likely sales of a book are 4,000 on "average" and print 4,200 (just to be safe). Consequently, even if true demand is more than 4,200 that is all the books that will be sold, which drags the average down. Average inputs do not lead to average outputs.
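The publisher example can be sketched as a small simulation. The demand distribution and the exact figures below are invented for illustration; the point is that feeding the average demand into the model gives a different answer from averaging the model's output over the whole demand distribution.

```python
import random

random.seed(1)

# The publisher's problem (all figures invented for illustration).
PRINT_RUN = 4200       # copies printed "just to be safe"
MEAN_DEMAND = 4000     # the "average" demand estimate
SD_DEMAND = 800        # assumed spread of demand

def copies_sold(demand):
    # You can never sell more copies than you printed.
    return min(max(int(demand), 0), PRINT_RUN)

# Plugging the average demand into the model...
sales_at_average_input = copies_sold(MEAN_DEMAND)

# ...versus averaging the model's output over the demand distribution.
trials = [copies_sold(random.gauss(MEAN_DEMAND, SD_DEMAND))
          for _ in range(100_000)]
average_sales = sum(trials) / len(trials)

print(sales_at_average_input)    # the "average in" answer: 4000
print(round(average_sales))      # the true average: noticeably lower
```

Because sales are capped at the print run but can fall well below it, the model is non-linear and average inputs do not produce average outputs.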
Failure to value learning and flexibility: Most spreadsheet business models work on the assumption that all the decisions have to be taken at the start of whatever it is. The fact that we can, and will, make decisions later that will benefit from things we learn on the way is not reflected in the model and so it under-values (often greatly) the business idea.
Example: A telecoms company is looking at a number of suspected billing errors to decide which it should investigate further. Past experience shows that most suspected errors turn out to be false alarms, or so small in value that they are not worth pursuing. However, a few turn out to be big and worthwhile. On the basis of the expected financial returns from going ahead with a full project none of the suspected errors appears worth investigating. However, if the projects are structured as a series of small investigation steps, with checkpoints at which the investigation can be dropped or continued, then several of the errors look worthwhile.
Unfortunately, it can also lead to over-valuation because some business ideas involve reducing flexibility, and that has a cost that needs to be considered.
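The telecoms example above can be reduced to a toy calculation. All the figures below are invented; they just show how building in a cheap checkpoint can turn a negative expected value into a positive one.

```python
# Staged versus one-shot investigation (all figures invented, in £k).
P_BIG = 0.10       # chance a suspected error is big and worthwhile
RECOVERY = 500     # amount recovered if it is
FULL_COST = 100    # cost of a full investigation project
STEP_COST = 10     # cost of a cheap first investigation step
REST_COST = 90     # cost to finish once the step says "continue"

# One-shot project: commit the whole cost up front.
ev_full_project = P_BIG * RECOVERY - FULL_COST

# Staged project: pay for the first step, then drop or continue.
# Assume here that the cheap step reveals whether the error is big.
ev_staged = -STEP_COST + P_BIG * (RECOVERY - REST_COST)

print(ev_full_project)   # negative: looks not worth doing
print(ev_staged)         # positive: worth doing after all
```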
"We do sensitivity analysis already."
Sensitivity analysis, done properly, helps remind people that the prediction is not exact and can highlight situations where a decision is unduly influenced by one parameter that is hard to know. A Tornado diagram is a useful device for summarising sensitivity analyses over many variables.
However, even when done well sensitivity analysis only considers sensitivity to one variable at a time so it is vulnerable to situations where combinations of variables work together (especially if they are correlated). It does not deal with the flaw of averages or with the value of flexibility.
Sensitivity analysis is not always done well. Sometimes people do it by asking what effect an x% change in each variable would have, where x is some constant chosen for the analysis. This is not helpful as some variables are much more likely to vary by x% than others. Better techniques are (a) to find out how much each variable has to change to affect the overall decision (e.g. to produce an NPV of zero), and (b) to estimate a range for each input variable such that you are, say, 80% sure that its value will lie within the range, then calculate the effect on the overall result of varying each input value across its 80% range.
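Technique (b) can be sketched in a few lines. The model, input ranges, and all figures below are invented; sorting the swings by size gives the bar order of a Tornado diagram.

```python
# Sensitivity analysis over 80% ranges. The model and all figures are
# invented: a 5-year NPV with three uncertain inputs.
def npv(price=10.0, volume=1000, unit_cost=6.0, discount=0.10, years=5):
    margin = (price - unit_cost) * volume
    return sum(margin / (1 + discount) ** t
               for t in range(1, years + 1)) - 15_000

# For each input, a (low, high) range such that you are 80% sure the
# true value lies within it.
ranges = {
    "price":     (9.0, 11.0),
    "volume":    (800, 1200),
    "unit_cost": (5.0, 7.5),
}

# Vary each input across its 80% range and record the swing in NPV.
swings = {name: npv(**{name: high}) - npv(**{name: low})
          for name, (low, high) in ranges.items()}

# Sorting by absolute swing gives the bar order of a Tornado diagram.
tornado_order = sorted(swings, key=lambda n: abs(swings[n]), reverse=True)
print(tornado_order, {n: round(swings[n]) for n in tornado_order})
```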
"We do a high and low forecast already."
If this is done properly it almost always requires uncertainty to have been represented explicitly in the model, which is what we want, and in any case it does help point out that a single value prediction is unrealistic. However, there are two ways to make high and low forecasts unhelpful.
One problem is that we sometimes have no idea how likely the high and low forecasts are. Are they really the highest and lowest possible values? Or are they a range such that the forecaster is 80% sure the result will be within the high-to-low range? Or some other degree of confidence? This needs to be clear.
More often people pick high and low values for each input variable and calculate the result. This means that the overall low output is the result of all your worst nightmares coming true at once, while your highest output is the result of all your wildest dreams being fulfilled. Both are very unlikely - especially if there are many input variables - so the range is unduly wide.
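A quick simulation shows how much too wide the all-low/all-high range is. The five revenue lines and their ranges below are invented for illustration.

```python
import random

random.seed(42)

# Total profit = sum of five revenue lines (all figures invented).
# Each line has an 80% range of 80 to 120, modelled as Normal with
# mean 100 and sd chosen so that P(80 < x < 120) = 0.8.
MEAN, SD = 100.0, 15.6     # 15.6 ~ (120 - 100) / 1.2816
N_LINES = 5

# Naive range: every line at its low value, or every line at its high.
naive_low, naive_high = 80 * N_LINES, 120 * N_LINES     # 400 to 600

# Simulated 80% range for the total: much narrower, because it is
# very unlikely that all five lines hit their extremes together.
totals = sorted(sum(random.gauss(MEAN, SD) for _ in range(N_LINES))
                for _ in range(20_000))
sim_low = totals[int(0.10 * len(totals))]
sim_high = totals[int(0.90 * len(totals))]

print(naive_low, naive_high)
print(round(sim_low), round(sim_high))
```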
"We always make our assumptions clear."
Making assumptions clear is a good thing and, in theory, it offers a way to make single predictions without asserting unreasonable certainty. You can say "IF the following assumptions are correct, THEN you can expect the result to be ..."
But let's be realistic. If you have more than a couple of assumptions for people to consider in this way it is too hard for users to cope with. Besides, they just want a prediction and expect you to have made estimates, not assumptions.
"I'm worried that we'll spend ages doing this model and never finish."
Explicitly representing uncertainty in quantitative models is a good thing, but it all takes extra work and only some uncertainties justify detailed, sophisticated modelling. A practical approach to this has been offered by Chris Chapman and Stephen Ward in "Managing Project Risk and Uncertainty." They call it constructive simplicity.
The principle is to develop the model in iterations of increasing sophistication. Uncertainty is included in the model from the start so that the approximate size of the impact of each area of uncertainty can be judged. Uncertainties with a big impact justify more sophisticated modelling and more data gathering in subsequent iterations. Uncertainties that don't have much impact on the overall results and decisions do not need to be developed further.
"The calculations sound really complicated, and what is Monte Carlo simulation?"
The main way to make uncertainty explicit in models is to replace exact input values with probability distributions. For example, instead of estimating that sales next July will be £93m, you might say you think they are normally distributed with a mean of £92m and a standard deviation of £6m. When you've done this for all the uncertain variables your model, somehow, has to work out what that implies for the probability distributions of the output variables, like profit and net cash.
Trying to do this by calculus and algebra ("analytically" as the mathematicians say) is often impossible even for top mathematicians so it's a great comfort to know that there's no need to. Instead you can rely on a technique that is easy to understand and simple to perform, thanks to the power of spreadsheets and some spreadsheet add-ins now available.
The technique is called Monte Carlo simulation, because it resembles heavy gambling. You can do this with an ordinary spreadsheet like Excel, but there are several spreadsheet add-ins that make it easier, such as XLSim, @RISK, and Crystal Ball. Begin with an ordinary model that doesn't show uncertainty, then, wherever you have an input variable whose value is uncertain you enter a probability distribution instead of a single value. The tool then runs the model thousands of times and records the results of each run. On each run it generates random values for your uncertain variables according to the distributions you chose. This ability to compute the implications of your uncertainty about input variables goes far beyond unaided judgement.
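A minimal Monte Carlo simulation, using plain Python rather than a spreadsheet add-in, might look like this. The model and its distributions are invented; the pattern is the technique described above: replace point inputs with distributions, run thousands of trials, and summarise the output distribution.

```python
import random

random.seed(0)

# A tiny model: profit = sales - costs, with uncertain inputs (in £m).
# The distributions are invented: sales Normal(92, 6), costs Normal(80, 4).
def one_run():
    sales = random.gauss(92, 6)
    costs = random.gauss(80, 4)
    return sales - costs

# Run the model thousands of times and record each result...
results = sorted(one_run() for _ in range(10_000))

# ...then summarise the distribution of the output variable.
mean_profit = sum(results) / len(results)
p10 = results[int(0.10 * len(results))]
p90 = results[int(0.90 * len(results))]

print(f"mean profit {mean_profit:.1f}, 80% range {p10:.1f} to {p90:.1f}")
```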
The main weakness of Monte Carlo simulation is that it is hard to get accurate results for extreme situations that happen very rarely. This is because, just as in real life, the extreme situations happen very rarely in the simulation.
"We don't have enough data to quantify our uncertainty."
Another reason people shy away from quantification is that they think they need data to support their quantities. It is true that empirical support greatly increases the value of quantitative analysis, and quantification usually underlines how little we know and how helpful it would be to learn more. However, quantification is valuable even if it is just based on gut feel. Consider bets on horse races. The risk is quantified, but not using statistics. Numbers help us communicate our judgements, and calculate their implications.
There's some odd logic at work here. It often feels easy to guess average numbers for things, but hard to guess their spread. In part this is because we are often asked to state the spread in terms of statistical parameters of which we have no experience, such as the standard deviation. However, it may go beyond that. We judge the likelihood of things by how easily they come to mind, but there is no easy method for judging spread directly. There has to be some explanation for our bizarre intuitive feeling that it is easier to predict a number exactly than to give probabilistic statements about its range!
"Some of these parameters are too difficult to estimate."
Some variables are easier for people to estimate than others. For example, the chances of sales being over £1m next month, over £2m, and over £3m could be easy for people to answer subjectively. In contrast, supplying a number for the standard deviation of the growth rate of sales over the next 6 months is not easy. What does it mean? Who has personal experience of that sort of number? Obviously these are a challenge.
What you need to find are questions that people can answer; then you use the link between those answers and the difficult-to-think-about number. There are two methods. (1) Take the easy answers and work out what the difficult-to-think-about number should be. (2) Try guessing the difficult-to-think-about number and then showing people the implications of each choice for numbers they can more easily relate to. Obviously the first approach is more desirable, but it happens that working back to the number you want isn't always easy.
Example of working backwards: Imagine you are trying to get to the standard deviation of sales next July. People don't usually have a feel for standard deviations but they may be able to say what the chances are of sales being more or less than numbers you suggest. If you already have the expected average sales and are confident that the probability distribution to use is the Normal distribution it is easy to work out the standard deviation using a spreadsheet function. If you don't know the mean and want to use more ratings an alternative is to set up a worksheet that uses Excel's Solver to calculate the best fitting values for the mean and standard deviation of the distribution.
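Here is a sketch of the working-backwards example using Python's standard library rather than a spreadsheet function. The figures are invented.

```python
from statistics import NormalDist

# Working backwards to a standard deviation (figures invented).
# Expected average sales for July: £92m. The estimator also says:
# "I'm 80% sure sales will be below £97m."
mean = 92.0
p, level = 0.80, 97.0

# If sales are Normal then level = mean + z * sd, where z is the
# standard Normal quantile for p, so sd falls out directly.
z = NormalDist().inv_cdf(p)    # about 0.84
sd = (level - mean) / z

# Sanity check: the fitted distribution reproduces the judgement.
fitted = NormalDist(mean, sd)
print(round(sd, 2), round(fitted.cdf(level), 2))
```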
Example of working forwards: Imagine you are trying to get estimates for alpha and beta for a variable that has a Beta distribution. Who has a feel for those numbers? It happens that it is also rather difficult to work out alpha and beta from other information about the distribution. The easiest approach is to let people try different values for alpha and beta on a spreadsheet that instantly displays the resulting graph and also statements about it such as "52% chance of sales greater than £1m" so that people can see what these bizarre parameters really mean.
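A sketch of the working-forwards approach. Python's standard library has no Beta CDF, so the probability statement is estimated by sampling; the scale and parameter values are invented.

```python
import random

random.seed(7)

# Working forwards: let people try alpha and beta values and see what
# each choice implies. Suppose sales = SALES_MAX * X where X has a
# Beta(alpha, beta) distribution. All numbers are invented.
SALES_MAX = 2.0   # £m: the scale applied to the Beta variable

def chance_over(alpha, beta, threshold=1.0, n=50_000):
    """Estimate P(sales > threshold) for trial alpha, beta values."""
    hits = sum(SALES_MAX * random.betavariate(alpha, beta) > threshold
               for _ in range(n))
    return hits / n

# The user tries parameter pairs and sees statements they can judge:
p_symmetric = chance_over(2, 2)   # Beta(2,2) is symmetric: about 50%
for a, b, p in [(2, 2, p_symmetric), (5, 2, chance_over(5, 2)),
                (2, 5, chance_over(2, 5))]:
    print(f"alpha={a}, beta={b}: {p:.0%} chance of sales over £1m")
```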
"Everything depends on everything else."
Numbers often vary in a correlated way. For example, when the sun is shining there is less chance of rain. Sunshine and rainfall are linked, not independent. Years ago it was thought that links between probabilities made probability theory too hard to apply to many real problems. Then causal networks (otherwise known as "Bayesian nets" or "Bayesian networks") came on the scene and showed that dependencies need not be as confusing as had been thought.
There's even a development of these, based on Influence Diagrams, that can model decisions as well.
"We don't know what model to use."
If you're not sure what your model should be then identify the set of models that seem possibly correct and decide how likely it seems that each is the correct model. Calculate the prediction based on each model. You can either view the results as a set of possible answers, each with a probability of being correct attached, or combine them into one answer by averaging the results, weighted by the probability of its model being correct. The technical name for this is Bayesian Model Averaging and it gives better predictions than just choosing one potential model and going with it.
Making predictions from multiple models is easy provided you don't try too many. [In contrast, trying to update your views on the likelihood of each model being right using a database of empirical data is not easy and there are some ferociously complicated examples of it by academics and researchers.]
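A minimal sketch of Bayesian Model Averaging, with invented models and weights:

```python
# Bayesian Model Averaging with invented models and weights. Each
# candidate model predicts next year's sales (£m) from this year's,
# and we judge how likely each model is to be the right one.
models = {
    "steady_growth":  (lambda s: s * 1.05, 0.5),
    "market_squeeze": (lambda s: s * 0.90, 0.3),
    "breakout":       (lambda s: s * 1.30, 0.2),
}

current_sales = 100.0

# A set of possible answers, each with a probability of being right...
predictions = {name: model(current_sales)
               for name, (model, _) in models.items()}

# ...or one combined answer: the probability-weighted average.
bma_prediction = sum(model(current_sales) * prob
                     for model, prob in models.values())

print(predictions)
print(bma_prediction)    # 0.5*105 + 0.3*90 + 0.2*130 = 105.5
```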
"Isn't that like Real Options valuation, with calculus and stuff? Way too complicated for our people."
Years ago the value of learning something during a project that you could use to make a decision was calculated as the "value of information." Today the fashionable phrase is "real options".
The idea of real options is that flexibility can be valued by using the same methods that have been developed for valuing financial options. Individual options are identified and formulae or models developed to work out a value. There is a lot of enthusiasm for this, particularly among academics, but after very modest popularity in practice it has dropped out of the top ten most used management techniques again. The main reasons seem to be that (a) it is too hard for most people to understand the calculations and therefore they don't trust them, and (b) the assumptions are not valid in non-financial settings. In particular, the real options approach gives value to options even when in fact the organisation is unlikely to pay enough attention to the option to exercise it when they should. Research is beginning to show that organisations are not as vigilant as they should be so real options tend to be overvalued. With financial options this isn't such an issue because you can set a computer up to monitor share prices 24 hours a day. In general management situations it may take months of work to get an update on the information needed to review a decision.
My personal experience has been that most people are so wary of mathematics of any kind that only the simplest quantitative methods have any chance of being taken up unless there is an unusual person involved or a trend in the particular field towards a particular technique. With ease of understanding as our top priority, what are the practical alternatives for putting a value on information and flexibility? These can be divided into methods that try to produce an exact result by calculation, and methods that, in effect, use a simple model and computer spreadsheet power.
The simplest exact method is to use decision trees. Compare the tree with and without the parts of your plan designed to increase information and flexibility. Decision trees can be drawn easily for presentation and also laid out as tables on a spreadsheet for calculation. There are two main problems with decision trees that affect people in ambitious applications of the technique.
Firstly, the values you put on different outcomes are likely to be calculated as net present values of cash flows and theoretically it is hard to choose the right risk adjusted discount rate. However, the point of discount rates is to compare an investment with alternative uses of the money. If your financing is very simple and your backers will never know or care about the details of your business then risk adjusted discount rates don't come into it.
Secondly, decision trees can get rather large if there are lots of alternative outcomes and lots of decisions. In particular, what seems like a single decision or outcome may have to be modeled as many if it can occur/be taken at different times. It is usually worth trying to work around this by judicious simplification because the alternatives to decision trees (lattice models and equations based on calculus) are complicated to explain and may be inaccurate anyway. If you don't model every point at which a decision could be taken the result will be an under-valuation of flexibility, but you will still get a more reliable result than you would have if flexibility was ignored.
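A minimal decision-tree comparison, with invented numbers, showing the value of flexibility as the difference between the tree with and without the option to learn first:

```python
# Decision tree with and without flexibility (all figures invented, £k).
# A launch: 40% chance the market is "good" (payoff 300), 60% "poor"
# (payoff -100), with the launch cost already netted into the payoffs.
P_GOOD = 0.4
PAYOFF_GOOD, PAYOFF_POOR = 300.0, -100.0

# Without flexibility: commit now and take whatever comes.
ev_commit = P_GOOD * PAYOFF_GOOD + (1 - P_GOOD) * PAYOFF_POOR

# With flexibility: a pilot costing 20 reveals the market state first,
# so we launch only if the market turns out to be good.
PILOT_COST = 20.0
ev_with_pilot = -PILOT_COST + P_GOOD * PAYOFF_GOOD + (1 - P_GOOD) * 0.0

value_of_flexibility = ev_with_pilot - ev_commit
print(ev_commit, ev_with_pilot, value_of_flexibility)
```

The same comparison can be laid out as a table on a spreadsheet; the tree form simply makes the decision points visible.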
The alternative to exact methods is to use the awesome computing power sitting on your desk to apply Monte Carlo simulation to models in which decisions you might take in the future are built into the model. This takes less brain-work than decision trees and can cope with more sources of uncertainty, but you still have the task of thinking about how you might make decisions in future.
"It's too complicated. Senior management won't understand it. We don't have anyone who knows how to do it."
That may be true, but before concluding that it is, take a closer look at what would really be involved and test a few specific ideas. The implications of not modelling uncertainty explicitly are usually underestimated while the difficulty of the work needed is often over-estimated. If people think that they are being asked to master techniques from calculus and algebra they may be surprised to learn that in fact they just need a spreadsheet, some random numbers, and a cheap tool that can repeat the calculation thousands of times in a few seconds and collate the answers.
I particularly like the spreadsheet simulation methods because they let me think in very simple, concrete terms, without having to introduce sophisticated mathematical approximations. That also makes it easier to explain what I'm doing.
Example of simple ideas: One way to model billing errors in a large company would be to assume their impact was Normally or, perhaps, Lognormally distributed. Doing that immediately leads to the question "Why?" and the other problems of estimating parameters for these distributions. Functions like the Normal distribution are the result of making idealised assumptions and calculating their consequences. The Normal distribution is based on assuming a vast number of causal factors are driving the variable of interest. We can go back to the original assumptions in our spreadsheet.
A simple alternative to using a Normal distribution in this situation is to imagine that there are lots of things that could potentially go wrong with billing in a period, and that each has a chance of occurring, and will give rise to zero or more bills being too high and zero or more bills being too low. This can be modeled by setting up a spreadsheet with a table of, say, 50 things that might go wrong and a probability of occurrence for each one. You could set those probabilities by letting the spreadsheet generate them using random numbers, perhaps within a range. Similarly, the impacts might be set by generating them at random within constraints. The simulation runs by deciding at random which things have gone wrong on each trial and how much their impact was. Different distributions of errors arise from different assumptions, but the results don't seem to be Normally distributed.
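The 50-things-that-might-go-wrong model translates directly into a few lines of code. All the parameter ranges below are invented:

```python
import random

random.seed(3)

# A table of 50 things that might go wrong with billing in a period.
# All parameter ranges are invented: each potential problem gets a
# random probability of occurring and a random impact if it does.
N_PROBLEMS = 50
problems = [(random.uniform(0.01, 0.20),      # chance of occurring
             random.uniform(1_000, 50_000))   # impact in £ if it does
            for _ in range(N_PROBLEMS)]

def one_period():
    """Total billing-error impact for one simulated period."""
    return sum(impact for prob, impact in problems
               if random.random() < prob)

results = sorted(one_period() for _ in range(10_000))
median = results[len(results) // 2]
p90 = results[int(0.90 * len(results))]

print(f"median impact £{median:,.0f}, 90th percentile £{p90:,.0f}")
```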
Even if you decide you can't use models with explicit uncertainty to make predictions about the future it can still be useful to build them. Simulations people can use to explore uncertainty phenomena, particularly when the graphics are good, can help people improve their intuitive grasp of how things happen. They can "connect the seat of the intellect to the seat of the pants" as Professor Sam Savage puts it.
Links and references
Here's a simple idea with a subtle but crucial role. Get people to give information about the uncertainty inherent in management information they provide. Do this with as much information as you can. Even audited financial statements are not above such caveats (internally at least!) because it can be helpful to understand how much the results rested on accounting judgements and estimates.
Do this and suddenly the illusion of certainty is shattered and the value of learning more is clearer. There may also be useful but unreliable information that has not previously been used because a satisfactory way to show the uncertainty was not thought about.
Here are some specific techniques that can be useful:
State the source of information: Even if the source is "John Smith's gut feeling" that is a source.
State assumptions: This is not an ideal approach because lists of assumptions tend to convey the impression that the conclusions are worthless, but it's better than nothing.
Quantify the empirical support: For example, if the information is based on a survey, how big was the survey?
Show confidence limits: For example, what is the upper level such that you are 90% sure the result is less, and the lower level such that you are 90% sure it is more?
Show the whole probability distribution: The previous idea can be taken further by showing a full probability distribution for the number in question.
List sources of uncertainty: This is another simple technique that can be applied to any type of management information.
Analyse out components that have higher uncertainty: If some parts of a number are subject to more uncertainty than others then analyse them out. For example, a company's profits for a period may be influenced by a number of accounting estimates and judgements. Show how much those are worth.
Links and references
Almost anything that can be thought of as a form of project is an opportunity to apply Evolutionary Project Management (Evo for short), and the more difficult and risky the project the greater the benefit of doing so.
Although we're used to the idea that a project is a big, dramatic change, the problem is that investing resources for months or even years without having anything useful delivered until the end is taking a big chance. What if our initial idea for what we needed turns out to be wrong?
It is much better to rethink our project as a series of smaller deliveries, and rethink the rest of our project each time we deliver something and see how it performs.
The idea that a project is a long build up to a single delivery comes from the roots of project management itself in construction and space projects. It's hard to see how you can incrementally deliver a bridge or a man on the moon. Until you have a bridge that you can use or a man on the moon you have nothing but a lot of expenditure.
However, computer projects are usually different. It's quite possible to deliver a system in stages, or to roll it out gradually. Indeed, this happens to just about all systems because at some point in their lives they move from "development" to "maintenance". "Maintenance" means that the pace of changes and enhancements has slowed down and become business as usual.
Not surprisingly it is in the world of IT projects that the philosophy of incremental delivery has blossomed over the last few decades. For example, the Dynamic Systems Development Method (DSDM) is based on the simple idea that delivering what was required is not the objective of the project. The true objective is to deliver what is required at the time of delivery, which of course may be rather different from the original requirements.
Evo, devised by Tom Gilb, is the form of incremental delivery that has gone furthest in spreading from the IT world to be applied to any project. Three key ideas in Evo are these:
Incremental delivery: The project is divided up into (ideally) 50 deliveries of value to at least one stakeholder, which are delivered in sequence so that the delay between each delivery is kept small (a matter of several weeks at most). Delivering something to the next stage of a project (e.g. a technical specification) does not count as a delivery of value. There is usually a need for some "back room" work to set up an open ended architecture that makes incremental delivery efficient.
Evolving requirements and plans: One of the major advantages of delivering something is that you can learn from people using it. Another is that you learn quickly what it takes to make a delivery in the project. To capitalise on this learning it is important to review and revise the remaining project plan after each delivery.
Performance characteristics: The improvements we want as a result of the project are defined in terms of measurable scales called performance characteristics. The idea is that these are scales of degree, not simple success/fail criteria. This mental shift makes it possible to think of just about any project in terms of incremental improvements. For example, the objective "Put a man on the moon" is not helpful to incremental delivery because until a man gets on the moon (a tough thing to do) we've delivered nothing. In contrast improving on "Knowledge of space travel" and "Knowledge of the moon" is easy to think of in incremental terms, and while we're about it why not aim for more "Useful applications of space travel" as well. This kind of objective is perhaps not so good for election year speeches but it is good for the space programme.
Tom Gilb's Impact Estimation tables help to judge the impact of potential deliveries on the performance characteristics and resources.
Evolutionary project management does not mean plunging in and making it up as you go along. It is a highly disciplined method of incremental delivery that drastically improves the risk profile of most projects.
Evo or variations on it is used by a number of top companies and is the preferred approach to acquisition projects for the United States Department of Defense.
Links and references
This is a good example of a risk analysis based on a model. The model is the project plan. The risks are the uncertain durations of each task on the plan, and the uncertain completion dates of tasks and the project as a whole. How can these uncertainties be managed?
One approach is to pretend you know how long things should or will take and try to keep to that schedule by making adjustments of some kind when things go awry. This has a number of drawbacks and doesn't work very well. It encourages blinkered thinking about the future and the schedule becomes a target rather than a reliable projection. People try to catch up when they fall behind (with only partial success), but ease up if they happen to get ahead of schedule. The result is that it is very rare to finish ahead of schedule.
A better approach is to explicitly represent the uncertainty in plans, for example by showing the duration range such that you are, say, 80% confident that the task will be finished in the time (neither earlier nor later). Simulation can show the implications for the overall schedule of these uncertainties, summarising them as the distribution of finish dates for the project as a whole.
The Critical Chain approach to this has these elements:
Careful analysis of dependencies, considering resource constraints: The starting point is to draw up the project with tasks and dependencies shown. The Critical Chain method emphasises the importance of searching for dependencies created by scarce resources as well as those that arise when an activity cannot be started until another is finished.
Durations estimated as a range: Durations are estimated as a range, and from the range a 50% confidence duration is derived, i.e. the duration such that you are 50% confident of finishing within that time.
Plans based on 50% estimate: Plans are drawn initially with times set at the 50% confidence level, but with the understanding that dates are uncertain.
Buffers at the end of the project and before tasks that feed into the critical chain: Some tasks will be done in less time than expected, and some will take more time. A time buffer is inserted after the final task in the project plan to show the overall buffer. Where tasks feed into the critical chain and, if delayed sufficiently, could delay the overall project, a time buffer is inserted between the tributary and the critical chain. The intention is to cut the risk of delaying the overall project.
Starting tasks as early as possible, not simply when the schedule says they should start: Tasks start as soon as possible. If tasks only start on schedule or later (due to delays) then the overall project is almost certain to be delayed. If it is hard to get flexibility in timing then more analysis may be needed to identify where the flexibility to start early would be most useful and worth pushing for.
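To make the buffer idea concrete, here is a sketch using one commonly described sizing heuristic: the root-sum-of-squares of the safety removed from each task (the difference between a high-confidence estimate and the 50% estimate). This is just one of several heuristics in the Critical Chain literature, and the task figures are invented.

```python
import math

# Hypothetical critical-chain tasks: (50% confidence estimate,
# 90% confidence estimate), in days. The plan is built from the 50%
# figures; the project buffer absorbs the aggregate uncertainty.
tasks = [(5, 9), (10, 18), (4, 8)]

plan_length = sum(p50 for p50, p90 in tasks)

# Root-sum-of-squares buffer: pools the safety removed from each task.
# Because independent over-runs partly cancel out, this is much smaller
# than simply padding every task to its 90% estimate.
buffer = math.sqrt(sum((p90 - p50) ** 2 for p50, p90 in tasks))

print(f"Plan length: {plan_length} days, project buffer: {buffer:.1f} days")
```

Padding each task individually would add 16 days of safety; pooling it in one buffer needs only about 10, which is the statistical argument for buffering at the chain level rather than the task level.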
Links and references
Another set of ideas and tools for dealing with uncertainty comes from the field of quality management - Statistical Process Control (SPC). The usual set up is a process that is producing some output whose characteristics are measurable and can be compared with what customers are looking for. The element of uncertainty comes in because, in practice, processes do not produce exactly the same output every time. If the output strays too far from the ideal it won't be acceptable.
The usual roles of SPC tools are (1) to help reduce the variation, and (2) to identify when the factors driving the variation in output have changed significantly. The usual assumption is that drivers of variation that have changed significantly recently are more interesting and deserving of management attention than drivers that have not changed. Here's an example to make things clearer.
Typical SPC set up: Imagine we are selling hot dogs in a very hi-tech way and have become obsessed with serving the dogs at the right temperature. We have installed a device that measures the internal temperature of the hot dog at the instant it is handed to the customer and records it on a computer. As you might expect the temperature will vary a little depending on many different factors, e.g. the air temperature, how long the customer spends looking for his money and whether the hot dog is held in the server's hand during this period or put on the counter, the cooking temperature, the properties of the hot dog, the properties of the roll, the extent to which the roll is compressed by handling. (Many independent factors causing variation like this usually result in variation that is approximately normally distributed, though not necessarily.)
Some SPC tools are concerned with pinpointing the most influential causes of variation so that they can be controlled. This may take experiments to see what effect deliberate variations seem to have.
Other techniques, such as the famous control charts, are designed to pin-point when something important has changed. For example, a failure of the thermostat in the cooker could produce a significant change to the average temperature and the variation of temperature. Control charts are designed to pick that up as quickly as possible without raising too many false alarms in respect of variation that is not due to some special cause.
The usual design of control charts is that each measurement is plotted on a graph, moving left to right. There are also two horizontal lines called the Upper and Lower Control Limits. Various rules are applied to decide whether there has probably been a significant change to the drivers of variation. These rules relate to things like the number of consecutive measurements outside the control limits, the number of consecutive measurements that move in one direction, and the moving average of the absolute difference between successive measurements. By varying the parameters of these rules it is possible to adjust the probability of missing a genuine change in conditions, and of having false alarms.
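A minimal sketch of two classic control-chart rules, using invented hot-dog temperatures from the example above; real SPC packages implement many more rules and chart types, and the 3-sigma limits and eight-point run rule are just common conventions.

```python
import statistics

# Hypothetical baseline of hot-dog serving temperatures (deg C), recorded
# while the process was believed to be stable.
baseline = [62.1, 61.8, 62.4, 62.0, 61.9, 62.3, 62.2, 61.7, 62.0, 62.1]

centre = statistics.fmean(baseline)
sigma = statistics.stdev(baseline)
ucl = centre + 3 * sigma   # Upper Control Limit
lcl = centre - 3 * sigma   # Lower Control Limit

def alarms(measurements):
    """Return indices of measurements breaking two classic rules:
    a single point outside the 3-sigma limits, or eight consecutive
    points on the same side of the centre line."""
    flagged = set()
    for i, x in enumerate(measurements):
        if x > ucl or x < lcl:
            flagged.add(i)
    for i in range(len(measurements) - 7):
        window = measurements[i:i + 8]
        if all(x > centre for x in window) or all(x < centre for x in window):
            flagged.update(range(i, i + 8))
    return sorted(flagged)

# A thermostat failure in the cooker pushes later temperatures up:
new_data = [62.0, 61.9, 63.5, 63.8, 63.6, 63.9, 63.7, 63.5, 63.8, 63.6]
print(alarms(new_data))
```

Tightening the limits (say, to 2 sigma) catches genuine changes sooner but raises more false alarms; that is exactly the trade-off the rule parameters control.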
Links and references
One way to combat the tendency to see the future too narrowly is to take outcomes that seem very unlikely, or even impossible, and try to work out stories that tell how they might come about. This is sometimes used routinely in procedures for eliciting subjective probabilities from experts. The facilitator might say, "I know that it seems very unlikely that X will happen, but let's just assume for a moment that it has happened. What could be the explanation?" Once we've thought of a way for something to happen we tend to see it as more likely.
This is also a useful way to get people thinking about how events might unfold over time.
The disadvantage is that it doesn't necessarily mean that perceptions of probability are any more accurate. We've just countered a bias with another bias.
A well known management technique that uses this principle is scenario planning. If the scenarios cover all possible outcomes this extra analysis can help clarify the likelihood of each scenario. However, if the scenarios do not cover all possible outcomes the apparent likelihood of different outcomes is distorted by the story telling. Even so, scenario planning is still a great way to open minds to possibilities that otherwise would have been missed, and to prepare those minds to react more quickly to events as they unfold.
A typical approach is to start with some scoping and then spend time on analysis of the way the business and its environment work, and what is going on. This leads to a clearer model. Next, the analysis tries to separate what is certain about the future ("trends") from what is not ("uncertainties").
The two most important uncertainties are then chosen and used to generate four scenarios.
Example of everyday scenario planning: Imagine you are due to speak at a conference and have decided that the top two uncertainties for your speech are the size of the audience and the quality of the sound system and projector. Take the highest and lowest credible values for each uncertainty and generate the four permutations for the speech, with inspirational names:
Audience = hundreds, AV = clear and big: "Professional".
Audience = hundreds, AV = not working: "Out of earshot".
Audience = 3, AV = clear and big: "Booming".
Audience = 3, AV = not working: "Cosy chat".
These need to be thought about to eliminate mutually incompatible combinations, and then to generate stories about how these scenarios might come to pass. For "Out of earshot" the story will probably involve a series of last minute calamities, or perhaps abysmal organisation at the venue.
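The mechanical part of the method, combining the extremes of the top two uncertainties into four candidate scenarios, can be sketched as follows; the inspirational naming, the weeding out of incompatible combinations, and the story telling remain human judgements.

```python
from itertools import product

# Hypothetical top two uncertainties for the conference-speech example,
# each reduced to its highest and lowest credible values.
uncertainties = {
    "audience": ["hundreds", "3 people"],
    "AV": ["clear and big", "not working"],
}

# The four permutations are the raw candidate scenarios.
scenarios = [dict(zip(uncertainties, combo))
             for combo in product(*uncertainties.values())]

for s in scenarios:
    print(s)
```

With three uncertainties the same approach yields eight permutations, which is one practical reason the method usually stops at two.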
After that the problem is to generate strategies that work well across a wide range of scenarios, and more specialised options that are nevertheless worthwhile. A number of methods are possible.
The main benefits of this kind of work are thought to be in expanding management's view of the future, leading to plans that deal with uncertainty better, and preparing them to recognise and respond to scenarios or parts of them if they actually occur. Consequently, the benefit is mainly with the individuals who participate in the story telling.
Most examples of scenario planning show it being used to look far into the future at big questions, like the future of whole industries and even the human race itself. However, it is just as applicable to the everyday planning problems we face. Perhaps more so.
One of the method's big contributions is in preparing our minds to recognise and react quickly to future events as they unfold. Isn't it ironic that so many applications of the technique concentrate on events that will play out over years or even decades? Surely we'll have plenty of time to think! By contrast, everyday challenges that are over in a few minutes, hours, or days give us very little time to think, so mental preparation is more important.
Links and references
Process management control reaches its most developed form when applied to large scale business processes. These processes usually cut across organisational boundaries and mistakes made in one department tend to generate problems and work for departments later in the process. The high volume and need for low cost mean it is vital to minimise the number of things that go wrong as this is much more efficient than detecting and correcting errors, however early you do it. The elements of this form of control are:
A management group with end-to-end responsibility: Because the process cuts across organisational boundaries and errors in one department often cause problems for others it is important to get representatives together who, collectively, can take responsibility for the process, end-to-end.
Process measures and summarised reporting: To focus their conversations and reveal what is happening across the whole process it is important to collect data on process performance (including volumes, resources used, errors, and backlogs) and summarise it as time series and as an end-to-end picture.
Work to improve inherent reliability: The team needs to understand what types of error and delay are occurring and why, then initiate actions to remove causes of errors. These may include system bugs, ergonomic problems, and the skills of individuals.
Proactive management of risk factors and adaptation of controls: Reacting to past problems is not enough. The team needs to look ahead for demands that may be beyond the current configuration of the process, and initiate actions to adapt the process in good time. It is also important to reduce future challenges if possible, for example by spreading change over time.
Links and references
Sometimes risk analyses can be done entirely in terms of binary outcomes, success or failure. This is common in analysing the reliability or safety of machines, for example. The simple, success / failure nature of events makes very elaborate modelling and sophisticated computer analysis possible. Leading examples of this are Fault Tree Analysis and Event Tree Analysis.
Fault Tree Analyses (FTA) look like logic circuit diagrams in a hierarchical tree shape. FTA works top down from a defined failure event, such as an explosion in an engine, fire in a building, or injury at the doors of a lift. The analyst has to think what could cause the top event and in what way. Potential causes are placed lower on the diagram and linked to the top event using a "gate", which is a piece of logic about how the events combine. (The terminology comes from the design of electronic logic circuits.)
For example, the top level event "gas explosion" could be the result of "gas leak" AND "naked flame". The gate would be an AND gate. The causal events can themselves have causes and so on. A big model may have thousands of events and gates in it.
Computer tools can analyse the model once it is created to find out how much influence on eventual reliability each event has, if there are any events that, on their own, can lead to overall failure, and which sets of events happening together could cause the failure.
Probabilities can be added to the basic events (i.e. the ones not caused by anything shown on the diagram) and the probability of other failures worked out.
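A toy evaluator shows how probabilities propagate through the gates, assuming independent basic events. The tree, its events, and their probabilities are invented for illustration; real FTA tools also find cut sets and importance measures.

```python
# Hypothetical fault tree for the gas explosion example:
#   "gas explosion" = "gas leak" AND "naked flame"
#   "gas leak"      = "pipe corrodes" OR "joint fitted badly"

def and_gate(*probs):
    """Probability that all independent input events occur."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    """Probability that at least one independent input event occurs."""
    p_none = 1.0
    for x in probs:
        p_none *= (1.0 - x)
    return 1.0 - p_none

# Invented basic event probabilities (per year, say).
p_gas_leak = or_gate(0.01, 0.02)          # pipe corrodes OR joint fitted badly
p_explosion = and_gate(p_gas_leak, 0.05)  # gas leak AND naked flame
print(f"P(gas explosion) = {p_explosion:.6f}")
```

The independence assumption matters: if a single cause (say, an earthquake) could produce both a leak and a flame, the simple gate arithmetic understates the risk.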
Event Tree Analysis (ETA) looks more like a decision tree. ETA starts with a single defined event, such as an explosion in an engine, fire in a building, etc. The analyst then has to think about what could happen as a result, including how safety systems like alarms might affect the impact. The possible combinations of circumstances branch out from the starting event on the left hand side. Again, by applying probabilities the likelihood of various different impacts can be estimated.
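The corresponding calculation for an event tree multiplies probabilities along each branch. This sketch uses invented events and figures, with two safety systems considered after a fire starts.

```python
# Hypothetical event tree: after "fire starts", the sprinklers either work
# or fail, then the alarm either works or fails. Multiplying probabilities
# along each branch gives the likelihood of each combination of outcomes.
p_fire = 0.01          # probability of the initiating event
p_sprinklers_ok = 0.9  # probability sprinklers work, given a fire
p_alarm_ok = 0.95      # probability alarm works, given a fire

branches = {
    ("sprinklers work", "alarm works"): p_sprinklers_ok * p_alarm_ok,
    ("sprinklers work", "alarm fails"): p_sprinklers_ok * (1 - p_alarm_ok),
    ("sprinklers fail", "alarm works"): (1 - p_sprinklers_ok) * p_alarm_ok,
    ("sprinklers fail", "alarm fails"): (1 - p_sprinklers_ok) * (1 - p_alarm_ok),
}

for path, p in branches.items():
    print(path, "->", p_fire * p)
```

The conditional branch probabilities must sum to one, which is a useful sanity check on any event tree.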
Links and references
A thermostat is a negative feedback control loop. You simply set the temperature you want and the device compares the actual temperature with the target to decide on an action to reduce the difference. ("Negative" just means that the feedback reduces the difference.) The same idea is often relied on in organisations and many management textbooks state that management control is no more or less than setting clear targets, holding them fixed for a year or so, and motivating subordinates to reduce the difference between actual and target values.
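The whole of the thermostat's "intelligence" fits in a couple of lines, which is the point: the loop simply assumes an effective corrective action always exists. The gain value and the simplified room model below are invented for illustration.

```python
def thermostat_step(target, actual, gain=0.5):
    """One cycle of a negative feedback loop: compare actual with target
    and return an action proportional to the difference."""
    error = target - actual
    return gain * error  # positive means heat, negative means cool

# Simplified room: each cycle the action shifts the temperature directly.
temp = 15.0
for _ in range(20):
    temp += thermostat_step(20.0, temp)
print(round(temp, 2))
```

The loop converges because adding heat reliably raises the temperature; the rest of this section is about why organisations rarely enjoy such a dependable corrective action.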
Negative feedback control loops have often been seen as a way of managing uncertainty about the future in organisations, and they are, but they are far less effective than generally thought and usually other techniques are preferable. Problems with negative feedback as a control mechanism in organisations include the following:
Assumption that an effective action exists: The logic of negative feedback control seems persuasive until you realise that it rests on the assumption that there is some action that will get you "back on track" when something unexpected happens. In practice this is rarely the case in business situations. Typically we are constrained on quality, deadline, and resources so if we get into trouble we can usually do no better than limit the damage by trading off our priorities.
Example from auditing: During my training as an auditor all audit work was to a predetermined budget, usually derived by taking the agreed overall fee/cost and dividing it between tasks in what seemed, at the outset, to be a reasonable way. If progress seemed slower than expected, so that a budget over-run had occurred or was feared on a part of the audit, the team leader would scold the offending person for getting "bogged down", warn everyone else not to get "bogged down", and then say we would have to "find some efficiencies."
Audit work is fully constrained and usually if things start to go badly it is because the work that needs to be done is messier and more complex than originally thought. If you fall behind the chances are that you will fall further behind as the work progresses unless you are prepared to accept less evidence or blow the budget. Since team leaders do not want to overspend the true meaning of "find efficiencies" is "convince ourselves we don't need so much evidence."
Purely reactive: One reason there is rarely a way to get back on track is that negative feedback loops are purely reactive. We have to wait for a difference between desired and actual results before anything different happens. By that stage it is often too late. Other methods for coping with future uncertainty that involve looking ahead at potential future events are better in this respect than negative feedback loops.
Failure to adjust targets: One characteristic of negative feedback control is that the target is held constant. This is an artificial constraint and people usually find it easier to adjust targets as well as action plans as new information is obtained. There has to be a compelling reason for holding to a fixed target if its disadvantages are to be worth accepting.
Effect on behaviour: When people are motivated to achieve some agreed and fixed target, rather than to act in the best interests of the organisation, they close their minds to uncertainties and focus on their personal interests. People play games to meet targets. They begin to confuse targets with what will actually happen. They stop questioning whether the target is still appropriate.
In short, negative feedback loops are fine in situations where the task they perform is simple and unchallenging and the loop itself has very little intelligence. A thermostat is an ideal example, but a division of a company is not.
Links and references
Henry Mintzberg's paper, "The Manager's Job: folklore and fact" has been reprinted and updated since 1975 and is available through Harvard Business Online. Summaries of it are all over the place on the internet.
"Take the Lead" by David Boddy and David Buchanan was first published in 1992 by Prentice Hall.
The report on eight newly appointed CEOs by George Binney, Gerhard Wilke, and Colin Williams is called "Leaders in transition: dramas of ordinary heroes". It costs £50 but the first chapter is a summary and freely available on their website at http://www.ashridgeconsulting.com/web/acl.nsf/w/LeadersInTransition/$FILE/LeadersInTransisionChapter1.pdf.
The idea of having well defined and cross referenced documentation to support a more chaotic thought process comes from the 1980 edition of J Christopher Jones's book, "Design Methods", where it appears in his review of new topics. It looks like the current edition still contains the review.
The US Marine Corps offers a range of online books including two brilliant chapters on planning. As you can imagine, uncertainty and the folly of thinking you are in control are huge themes in this guide. Here's a typical quote:
"We should not think of a plan as an unalterable solution to a problem, but as an open architecture that allows us to pursue many possibilities. A good plan should maximize freedom of action for the future. It is a mistake to think that the goal is to develop plans and then implement them as designed. We will rarely, if ever, conduct an evolution exactly the way it was originally developed."
"Implementation of Opportunity & Risk Management in BAE SYSTEMS Astute Class Limited – A Case Study" by Andrew Moore, Alec Fearon, and Mark Alcock offers an approach to risk and opportunity management that uses causes as an entity in their database.
"Learning more from experience" discusses practical learning in business and how it differs from science. It appears on my website, www.dynamicmanagement.me.uk.
A simple illustration and explanation of portfolio effects is given by Jerry Miccolis in one of his articles for www.IRMI.com, "The Alchemy of Enterprise Risk Management: Examples from the Investment World". Jerry has written several outstanding articles for IRMI and this was the last.
The approach is described in detail in "Designing internal control systems".
A simple, practical guide is "How to run a risk management meeting", which is on my website, www.managedluck.co.uk. A longer and more advanced alternative is "Risk modeling alternatives for risk registers", which is on www.internalcontrolsdesign.co.uk. I've also written a punishing questionnaire that searches out flaws in risk registers and it's called "The crisis in management control and corporate governance".
The idea that differences in perceived probabilities between negotiating parties can be used in deal making appears in Howard Raiffa's brilliant book "The Art and Science of Negotiation", first published in 1982.
Professor Sam Savage has devoted a page to the Flaw of Averages, as he calls it. His work is important, but also fun as you can see from the page. There's more good stuff at his Stanford homepage and commercial website for AnalyCorp. I laughed out loud at this, which talks about connecting the seat of the intellect with the seat of the pants.
The people who sell Crystal Ball have provided a collection of models to show what their product can do. All you need is Excel to read the explanations and see how the models are set up.
"Managing project risk and uncertainty" by Chris Chapman and Stephen Ward, published by Wiley in 2002, explains their "constructively simple" approach to developing quantitative models and illustrates it with a series of detailed examples that go way beyond projects.
I'm not aware of anything more on this topic. The vast majority of publications in this area concern scientific reporting and are extremely detailed and technical.
"Why is Evolutionary Project Management so effective?" starts with an overview of Evo and gives some useful links.
A good summary of the method is given in a review of Dr Goldratt's book, "Critical Chain", here.
If you Google for "SPC", "control chart", or "design of experiments" you will get scores of good explanations. I quite like the NIST Sematech Handbook of Engineering Statistics, which covers SPC in chapter 6, Monitor.
I liked the book "Profiting from Uncertainty" by Paul J H Schoemaker, 2002.
A great collection of Scenario resources is provided by Martin Börjesson.
This is discussed in a bit more detail in "Designing internal control systems", here.
The IEE offers a very short introduction to FTA and ETA using simple examples: Fault Tree Analysis and Event Tree Analysis.
More detailed, but still accessible, explanations are given by Relex, a company that offers supporting software. At the bottom of this page they list three other pages that are interesting and helpful.
The case against negative feedback control loops in business is discussed in "Risk Management and Beyond Budgeting". The conclusion is that negative feedback control mechanisms, such as budgetary control systems and management by objectives, work against the objectives of risk management, which are to get people to be more open minded and realistic about the future, and plan accordingly. The Beyond Budgeting Round Table is at www.bbrt.org.
|If you found any of these points relevant to you or your organisation please feel free to contact me to talk about them, pass links or extracts on to colleagues, or just let me know what you think. I can sometimes respond immediately, but usually respond within a few days. Contact details|
About the author: Matthew Leitch is a tutor, researcher, author, and independent consultant who helps people to a better understanding and use of integral management of risk within core management activities, such as planning and design. He is also the author of the new website, www.WorkingInUncertainty.co.uk, and has written two breakthrough books. Intelligent internal control and risk management is a powerful and original approach including 60 controls that most organizations should use more. A pocket guide to risk mathematics: Key concepts every auditor should know is the first to provide a strong conceptual understanding of mathematics to auditors who are not mathematicians, without the need to wade through mathematical symbols. Matthew is a Chartered Accountant with a degree in psychology whose past career includes software development, marketing, auditing, accounting, and consulting. He spent 7 years as a controls specialist with PricewaterhouseCoopers, where he pioneered new methods for designing internal control systems for large scale business and financial processes, through projects for internationally known clients. Today he is well known as an expert in uncertainty and how to deal with it, and an increasingly sought after tutor (i.e. one-to-one teacher).