8 April, 2024

Risky Analysis: Pitfalls and Good Practices in Estimating Likelihood

By Mike Clayton

Basic risk analysis is a topic that most people understand easily.  The steps are straightforward, the need is obvious and the underlying concepts are familiar. But there’s one pitfall that too many Project Managers fail to avoid: estimating likelihood.

Familiarity Brings Risk


However, familiarity poses its own significant risk. If the basics are simple to grasp, we have a case of “simple is not the same as easy”. The four fundamental steps of risk management each need a lot of careful attention. They are, in truth, rather tricky.

One part of the process has always seemed to me to be a particular challenge.  It rarely bothers people who are new to risk management, so I take pains, when speaking about risk, to emphasize it.  It is the problem of probability.

Or, to be more precise, the problems…

In this article, I will go back to basics, look at why estimating likelihood is so hard, and suggest some simple ways to avoid the pitfalls.

The Basics of Risk Management

Let me start by going back to basics, briefly. If you want a full introduction, take a look at this video:

At the heart of Risk Management is the need to analyze your risks.  When we analyze risk, we look at a whole raft of information, but two things stand out as essential.

These two things define risk

Risk is uncertainty that can affect your outcome.

So, first, risk represents an uncertain event.  How uncertain or certain it is, is measured by the probability, or likelihood.

And second, if the risk manifests, it will have an effect on the outcome. This can be negative (a ‘bad thing’) or positive (a ‘good thing’).  How good or bad that effect is, is measured by the severity, or impact. I prefer to use ‘impact’ because the term ‘severity’ suggests a scale. The word ‘impact’ can suggest a scale too. But it can also suggest a qualitative measure; a particular type of consequence.

There are lots of scales on which to estimate the potential impact of a risk, and many of them are robust and easy to apply.  What is far harder to work with is the likelihood.
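As a minimal sketch of these two essentials in code (the scale names and structure here are my own illustration, not a standard):

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative three-point qualitative scales – the labels are an
# assumption for this sketch, not a prescribed standard.
class Likelihood(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

class Impact(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Risk:
    description: str
    likelihood: Likelihood  # how uncertain the event is
    impact: Impact          # how big its effect on the outcome would be

# A hypothetical project risk, recorded with its two defining dimensions.
risk = Risk("Key supplier delivers late", Likelihood.MEDIUM, Impact.HIGH)
```

The point of the structure is simply that no risk record is complete without both dimensions: an uncertain event (likelihood) and a consequence for your outcome (impact).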

Why is Estimating Likelihood so Tricky?

Quite simply, human beings are extremely poor at estimating the likelihood of uncertain events.  This is because we are equipped with a set of biases that get in the way and rapidly lead us astray. I will examine some of my favorites, and invite you to use the comments, below, to tell us about your own.

Control Bias

We tend to think bad things are less likely to occur when we are in control – which makes sense. 

The problem arises when we are not in control, but somebody else is. Our own ‘passenger’ status then leads us to overestimate the risk, even when the person in control is very capable. 

Most people feel safer when they are driving than when somebody else is – regardless of the other person’s safety record. Many people feel safer driving than on public transport, despite the overwhelming weight of statistics – and the simple fact that the person in control is a professional (driver, pilot, and so on).


‘But what,’ some people say, ‘about the pilot who was drinking in the cockpit, or the bus driver who was using a mobile phone?’

These are examples of how recent newsworthy events take over our consciousness and lead us toward new mistakes. Strangely, the millions of bus journeys in which no incident occurs are never reported, and neither is each safe airplane landing. Perhaps our attitudes would change if every road traffic accident were on the news.


Saliency Bias

The worse the outcome, the higher we will rate it on the Impact scale. That’s what impact is all about.

What also seems to occur is that we unconsciously adjust our estimation of Likelihood, according to perceived impact. Higher-impact events seem more likely than they really are. This is because of ‘saliency’. We are more likely to notice high-impact events, because they are more meaningful (or ‘salient’) to us.

In the real world, most uncertain events cluster around a diagonal line on our familiar Impact versus Likelihood graph: 

  • Highly likely events tend to be low impact: there are always a lot of little things going wrong.
  • High-impact events, however, tend to be rare.

What we often do is distort our estimates.


Context Bias

Another distortion comes when we consider the context of the risk. We judge a bad outcome to be more likely when it is associated with something we consider to be, in itself, bad.

So, we feel we are more likely to catch a dreaded disease in a ‘hostile’ country, and industrial accidents seem more likely in a ‘bad’ industry. What matters is not an objective assessment of the context (you can decide for yourself which countries or industries feel bad to you), but our subjective assessment.

Beyond Bias: The Fundamental Problems of Estimating Likelihood

These biases are fundamental – built into the way our brains have evolved to work. But there is a more important fact: we just aren’t good at estimating likelihood, because ‘probability’ is not a ‘thing’ in our environment. As a result, our ancestors did not evolve a reliable process for estimating it.

Compare this, for example, with distance or direction. Hunters learn to throw projectile weapons with great accuracy – and even young children quickly acquire proficiency.

There are, however, a group of people in our society who are excellent at estimating likelihood. They do it for a living.

These are actuaries.

It is the job of an actuary to analyze data from past and present events, to forecast future trends. They deploy advanced mathematical and statistical skills to estimate risk and uncertainty. The commonest place to find them is in the insurance industry.

However, actuaries depend on two things that most Project Managers lack:

  1. A sophisticated understanding of mathematics, statistics, and probability theory.
    Usually, they are math graduates with significant additional training.
  2. Access to a vast array of statistical data from which they can calculate probabilities.
    In the insurance industry, there are thousands of car crashes, accidents, home invasions, and illnesses from which to compile data tables and statistics. These allow very narrow confidence intervals on many actuarial estimates.
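The effect of data volume on those confidence intervals is easy to see with a standard formula. This sketch uses the Wilson score interval (a textbook method for a binomial proportion; the numbers are invented for illustration) to contrast actuarial-scale data with project-scale data:

```python
import math

def wilson_interval(events: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for an event probability
    estimated from a count of events in a number of trials."""
    p = events / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))) / denom
    return (centre - half, centre + half)

# Actuarial-scale data: 1,200 claims across 100,000 policies (invented numbers)
lo, hi = wilson_interval(1200, 100_000)   # a narrow interval around ~1.2%

# Project-scale data: 3 overruns in 10 comparable projects (invented numbers)
lo2, hi2 = wilson_interval(3, 10)         # a very wide interval
```

With 100,000 data points the interval spans a fraction of a percentage point; with 10, it spans tens of percentage points. That gap is exactly why actuaries can quote probabilities and Project Managers usually cannot.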

But, Project Managers rarely have advanced math training. And, because of the nature of projects as unique (or small batch) events, there is little data for us to draw upon, even if we had the statistical sophistication to do so.

Things May Change

Artificial Intelligence is (needless to say) changing everything.

It seems likely to me that, as organizations increasingly use tools like PowerBI and Tableau to gather and aggregate real data, the AI and Project Management software businesses will respond. A combination of machine learning and traditional programming could allow software tools to:

  • interpret data
  • understand the project
  • identify potential risks
  • make estimates of likelihood (with confidence limits)

Solving the Problem of Estimating Likelihood

Unless you are an expert in something, you will never have sufficient data and analytical tools to estimate probabilities accurately. So, until the AI tools become available and the data sets become sufficient, the solution is simple:

Don’t try!

If you try to estimate likelihood with accuracy, you risk falling into another trap: The Precision Trap. And this means that, in all but the most sophisticated project environments, the best approach is to keep it simple.

The Precision Trap

This is where you mistake precision for accuracy: the more precise your estimate, the more convincing it seems – but extra precision does not make it any more correct.

To avoid the Precision Trap, the safest route is to stick to low-precision estimates. Avoid the temptation to use too many categories on the likelihood scale of your risk assessment.

Absolutely reject the use of probability-based likelihood estimates unless you have real data on which to base your probabilities and you also understand statistics and probability theory.

And definitely avoid risk likelihood categories like ‘possible’, ‘plausible’, and ‘probable’ – different people interpret these words very differently. Remember, of course, that ‘certain’ means 100% likelihood – a probability of 1 – and ‘impossible’ means a probability of 0.

The ‘So What?’

Treat risk analysis with care, be aware of the risks of bias in your likelihood estimates, keep your estimates simple, and avoid being too precise.

My favorite scale for estimating likelihood is really this simple:

  • High: The sort of things that seem to happen a lot – most of us have experienced them, and we all know people who have.
  • Medium: The sort of things that seem to happen from time to time – a few of us have experienced them, and many of us know someone who has.
  • Low: The sort of things that do happen, although few of us know anyone who has experienced them.

Estimating likelihood with a five-point scale

If you really do need a five-point scale for estimating likelihood, then I recommend:

  1. Very Low
  2. Low
  3. Medium
  4. High
  5. Very High

If all this sounds imprecise, it is supposed to. I don’t want project managers to believe their risk estimates: I want them to act on the best evidence and good judgment.

But, what do you think?

Share your thoughts about the craft of estimating likelihood in the comments below.

More about Risk Management

There is a lot of material on this site about risk management. Here are some highlights:

Never miss an article or video!

Get notified of every new article or video we publish, when we publish it.

Mike Clayton

About the Author...

Dr Mike Clayton is one of the most successful and in-demand project management trainers in the UK. He is author of 14 best-selling books, including four about project management. He is also a prolific blogger and contributor to ProjectManager.com and Project, the journal of the Association for Project Management. Between 1990 and 2002, Mike was a successful project manager, leading large project teams and delivering complex projects. In 2016, Mike launched OnlinePMCourses.
