Basic risk management is a topic that most people understand easily. The steps are straightforward, the need is obvious and the underlying concepts are familiar.
However, that familiarity poses its own significant risk. The basics may be simple to grasp, but this is another case of “simple is not the same as easy”. Each of the fundamental steps of risk management demands careful attention. They are, in truth, rather tricky.
One part of the process has always seemed to me a particular challenge. It rarely bothers people who are new to risk management, so I take pains, when speaking about risk, to emphasise it. It is the problem of probability.
Or, to be more precise, the problems …
Let me start by going back to basics, briefly. At the heart of Risk Management is the need to analyse your risks (at the heart, not just because it is the third of my five steps, but because it rounds off the first two steps and sets up the last two). When we analyse risk, we look at a whole raft of information, but two things stand out as essential.
First, a risk represents an uncertain event. How uncertain it is, is measured by its probability, or likelihood.
Second, if the risk manifests, it will be a “bad thing”. How bad it will be, is measured by the consequences, or severity, or impact.
There are lots of scales on which to estimate the potential impact of a risk, and many of them are robust and easy to apply. What is far harder to work with is likelihood.
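To make the two scales concrete, here is a minimal sketch of how a qualitative likelihood estimate and an impact estimate combine into an overall risk rating. The band names and the combination rule are illustrative assumptions for this sketch, not scales the author prescribes:

```python
# Illustrative qualitative risk matrix. The category names and the
# scoring rule are assumptions for this sketch, not a standard.
LIKELIHOOD = ["low", "medium", "high"]      # deliberately coarse bands
IMPACT = ["minor", "moderate", "severe"]

def risk_rating(likelihood: str, impact: str) -> str:
    """Combine two coarse estimates into one overall rating."""
    score = LIKELIHOOD.index(likelihood) + IMPACT.index(impact)
    return ["low", "medium", "high", "critical"][min(score, 3)]

print(risk_rating("high", "severe"))   # -> critical
print(risk_rating("low", "moderate"))  # -> medium
```

The point of a matrix like this is that nobody has to defend a precise number: each estimate is just a coarse band, and the combination rule is agreed up front.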
Quite simply, human beings are extremely poor at estimating the likelihood of uncertain events. This is because we are equipped with a set of biases that get in the way and rapidly lead us astray. I will examine some of my favourites, and invite readers to use the comments to add some of their own.
We tend to think bad things are less likely to occur when we are in control – which makes sense. The problem arises when we are not in control, but somebody else is. Our own “passenger” status then leads us to over-estimate the risk, even when the person who is in control is very capable. Most people feel safer when they are driving than when somebody else is – regardless of the other person’s safety record. Many people feel safer driving than on public transport, despite the overwhelming weight of statistics and the simple fact that the person who is in control is a professional (driver, pilot, …).
”But what” some people say, “about the pilot who was drinking in the cockpit, or the bus driver who was using a mobile phone?” These are examples of how recent newsworthy events take over our consciousness and lead us towards new mistakes. Strangely, the millions of bus journeys in which no incident occurs are never reported, and neither is each safe aeroplane landing. Perhaps our attitudes may change if every road traffic accident were on the news.
The worse the outcome, the higher we will rate it on the Impact scale. What also seems to happen is that we unconsciously adjust our estimate of Likelihood according to perceived impact: higher impact events seem more likely than they really are. In the real world, most uncertain events cluster around a diagonal line on our familiar Impact versus Likelihood graph: highly likely events tend to be low impact, whilst high impact events tend to be rare. What we often do is distort our estimates away from that line.
Another distortion comes when we consider the context of the risk. We judge a bad outcome to be more likely when it is associated with something we consider to be, in itself, bad. So catching a dread disease feels more likely in a “hostile” country, and industrial accidents seem more likely in a “bad” industry. What matters is not an objective assessment of the context (you can decide for yourself which countries or industries feel bad to you), but our subjective assessment of it.
Unless you are an expert in something, you will never have sufficient data and analytical tools to estimate probabilities accurately. So the solution is simple:
If you try to estimate likelihood with great precision, you risk falling into another trap: the Precision Trap. This is where you mistake precision for accuracy; the more precise your estimate, the more convincing it seems. To avoid the Precision Trap, the safest route is to stick to low-precision estimates. Resist the temptation to use too many categories on the likelihood scale of your risk assessment, and reject probability-based likelihood estimates altogether unless you have real data on which to base them and you understand statistics and probability theory.
The “so what?”
Treat risk analysis with care: be aware of the risk of bias in your likelihood estimates, keep your estimates simple, and avoid being too precise.
My favourite scale for likelihood:
The sort of things that seem to happen a lot – most of us have experienced them and we all know people who have.
The sort of things that seem to happen from time to time – a few of us have experienced them and most of us know someone who has.
The sort of things that do happen, although few of us know someone who has actually experienced them.
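The three bands above can still drive consistent decisions, precisely because they are coarse. Here is a minimal sketch of how such a scale might be used in practice; the band labels and the escalation rule are illustrative assumptions, not the author's method:

```python
# A deliberately coarse, three-band likelihood scale, following the
# descriptions in the text. Labels and the escalation rule below are
# illustrative assumptions for this sketch.
BANDS = {
    "common":     "most of us have experienced these",
    "occasional": "a few of us have; most of us know someone who has",
    "rare":       "few of us know anyone who has experienced these",
}

def needs_escalation(likelihood_band: str, impact: str) -> bool:
    """Escalate anything common, and anything severe however rare."""
    if likelihood_band not in BANDS:
        raise ValueError(f"unknown band: {likelihood_band}")
    return likelihood_band == "common" or impact == "severe"

print(needs_escalation("rare", "severe"))       # -> True
print(needs_escalation("occasional", "minor"))  # -> False
```

A rule this blunt is easy to argue about, agree, and apply – which is exactly what a low-precision scale is for.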
If all this sounds imprecise, it is supposed to be. I don’t want project managers believing their risk estimates: I want them acting on best evidence and good judgement.
Dr Mike Clayton is one of the most successful and in-demand project management trainers in the UK. He is author of 13 best-selling books, including four about project management. He is also a prolific blogger and contributor to ProjectManager.com and Project, the journal of the Association for Project Management. Between 1990 and 2002, Mike was a successful project manager, leading large project teams and delivering complex projects. In 2016, Mike launched OnlinePMCourses.