- PM Formulas: Understanding the Math of Project Management

Project Management is a structured discipline, and its basics are rigorous and logical. So it’s no surprise that mathematical formulas appear from time to time. In this article, I’ll round up all the PM formulas you need to know as a working project manager.

The PM formulas we’ll look at will cover:

- Fundamental Statistics
- Commercial and Procurement formulas
- Investment Appraisal
- Network Diagrams: CPM and PERT method
- Risk Management
- Earned Value Management (EVM)
- Some important financial concepts

*Math, maths, or mathematics?*

Well, mathematics is just a bit of a mouthful. So it’s math or maths. I’m British, so ‘math’ just sounds wrong to me. Worse: it sounds ugly. But over 40% of my readers come from North America. So, ‘math’ it is.

*Formulas or formulae?*

My Oxford-educated A level maths teacher hated ‘formulas’. But even then (early 1980s), the Oxford English Dictionary allowed both forms of plural – as does Fowler. Apologies if you like a good Greek plural ending on a dull Latin stem. I’m going with the easier version: formulas.

In the opening of his book, *‘A Brief History of Time’*, Stephen Hawking wrote that someone had told him that each equation he included would halve sales.

The authors and editors of the PRINCE2 guide and the APM’s Body of Knowledge have taken this advice very much to heart. I can find no formulas in either – though that’s not to say they aren’t there. Let me know if you spot any.

The team behind the 6th Edition of the PMI’s Project Management Body of Knowledge (the PMBOK Guide) has been rather more generous. And the syllabuses for both the CAPM and PMP qualifications are littered with PM formulas. I think I cover all of them – but certainly do cover most of them – here. Again, let me know if I miss any.

And that leaves my own experience and training. I’ve added in a few more formulas I think a working project manager needs under their belt.

You don’t need a detailed understanding of statistics, as a project manager. But you do need a basic understanding of two things:

- Averages, or measures of the central tendency of data
- The Spread of Data

Averages measure where the middle is in a distribution of data. But sadly, there is no unique central point that is useful in all contexts. Hence the term ‘central tendency’ that leaves it somewhat vague.

However, the statistical measures are not, themselves, vague. And there are three that are most commonly used. It’s important to:

- understand each
- represent your data with the most appropriate
- state which measure you are using

The mean, m, is the measure most people infer when you use the term *‘average’*. We take the sum of all the data points and divide it by the number of data points.

A problem with the mean is that a few extreme scores at either end of the range can produce a rather skewed measure of the central tendency. The median is, therefore, a better measure in this case.

There is no formula – it is just the middle term if you arrange the data in ascending order. Strictly, if there are n terms, count to the (n+1)/2 term; if n is even, take the mean of the two middle terms.

The mode is the most frequently occurring value in the data. For continuous data, where every value is going to be slightly different, you need to specify a level of precision.

The modal number of legs for a person is 2 (as will be the median, but not the mean).

Some data sets will have more than one mode – we refer to them as multi-modal.
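To make the three averages concrete, here is a minimal sketch using Python’s standard `statistics` module. The duration data is made up for illustration:

```python
# A minimal sketch of the three averages, using Python's standard
# statistics module and made-up task durations (in days).
import statistics

durations = [4, 5, 5, 6, 7, 9, 22]  # note the extreme value, 22

mean = statistics.mean(durations)      # sum / count - pulled up by the 22
median = statistics.median(durations)  # middle value of the sorted data
mode = statistics.mode(durations)      # most frequently occurring value

print(mean, median, mode)  # 8.285..., 6, 5
```

Notice how the single extreme value (22) skews the mean, while the median and mode stay put – exactly the effect described above.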

While averages represent where the center of a data set lies, variance and standard deviation measure how spread out the data is.

The simplest measure of spread is the range – the difference between the greatest data point and the smallest. Because it depends only on the two extreme values, it is extremely sensitive to a single, extreme, measurement.

Variance measures how spread out the data is. For reasons you’ll see in a moment, it often gets the symbol s^{2}. You calculate it by adding together the squares of the differences between each data point and the mean, and then dividing by one less than the number of data points.

The problem with the variance is that it does not measure the spread of the data in the same units as the data. The numbers look huge. Whilst statisticians use the variance, most of us don’t ever need it.

It’s far better to use the standard deviation, s, which is the square root of the variance. This gives an intuitive measure of the spread of the data from the mean, in units that match the units of the data and of the mean.
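As a quick sketch, the `statistics` module computes both measures directly (it divides by n − 1, as described above); the data values are illustrative:

```python
# A sketch of sample variance and standard deviation, using the
# statistics module (which divides by n - 1, as described above).
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative values

variance = statistics.variance(data)  # in squared units of the data
stdev = statistics.stdev(data)        # back in the data's own units

print(variance, stdev)
```

The standard deviation is simply the square root of the variance, which is why it comes out in the same units as the data.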

Much of the data you will use will be distributed in a symmetrical bell curve, known as a **‘Normal Distribution’**. The wider the range we allow around the mean, the more confidence we can have that the ‘true’ value of an estimate lies within it.

So, around 68 percent of the time, the true value will lie within one standard deviation in either direction from the mean. But 95 percent of the time it will lie within two standard deviations.

| Standard deviations from the mean | Confidence the true value lies within the range |
| --- | --- |
| 1 sd | 68.3% |
| 2 sd | 95.5% |
| 3 sd | 99.7% |
| 4 sd | 99.994% |
| 5 sd | 99.99994% |
| 6 sd | 99.9999998% |
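If you want to see where these confidence levels come from, a short sketch: for a Normal distribution, the probability of lying within k standard deviations of the mean is erf(k/√2), available in Python’s standard library:

```python
# Where the confidence levels come from: for a Normal distribution,
# the probability of lying within k standard deviations of the mean
# is erf(k / sqrt(2)).
import math

def coverage(k: float) -> float:
    """P(value lies within k standard deviations of the mean), Normal data."""
    return math.erf(k / math.sqrt(2))

for k in range(1, 7):
    print(f"{k} sd: {coverage(k):.7%}")
```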

Regression to the mean is not a statistical measure, but it is an important idea.

If you make a small number of observations, natural variability can mean that your observations sit more at one end of the range of possible outcomes than the other. The more observations you make, the more the average represents a true measure of the phenomenon.

What this means in practice is that you need enough data before you can confidently state a useful average.

This isn’t a statistical formula, but it doesn’t fit well elsewhere – and it does talk about the impact of big numbers.

The formula is about the increase in complexity of communication as the number of people increases. Indeed, it also talks about the increase in complexity of a project as the scale increases.

Two people can only communicate with each other. But three people can communicate in three different pairs. For four, it’s six possible pairs, and for five, it’s ten.

The numbers grow rapidly, and the formula looks like this (a little thought will show you why): the number of communication channels between n people is n(n − 1)/2.
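A one-liner makes the growth easy to see – each person pairs with every other person, and each pair is counted once:

```python
# Pairwise communication channels: n people form n * (n - 1) / 2
# distinct pairs.
def channels(n: int) -> int:
    """Number of distinct communication pairs among n people."""
    return n * (n - 1) // 2

for n in (2, 3, 4, 5, 10, 20):
    print(n, channels(n))  # 2->1, 3->3, 4->6, 5->10, 10->45, 20->190
```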

This is sometimes (in the context of telecoms networks) known as Metcalfe’s Law: the effect of a telecommunications network is proportional to the square of the number of connected users of the system (n^{2}).

Contracts are usually either Fixed price or Cost plus. That is, they are either based on a firm quote from the contractor, or are based on the actual costs the contractor incurs. The balance of risks for each is very different. For more information, please see our article: Project Procurement Management [All the basics you need to know]

Here are three common variants on a fixed price contract:

**FFP: Firm Fixed Price**

No formula. The price is the price! This is regardless of circumstances unless there is some form of breach of contract.

**FPIF: Fixed Price plus Incentive Fee**

In many jurisdictions, penalties are not lawful. So we let contracts where, if the contractor meets an agreed performance standard, they pick up a bonus, or incentive fee.

Contract Cost = Fixed price plus an agreed sum if the contractor meets agreed performance targets, as an incentive

**FPEPA: Fixed Price Economic Price Adjustment**

For a long contract period, costs may rise due to inflation, or movements in foreign exchange rates. The contract can allow for this by documenting a mechanism for adjusting payments accordingly.

Contract Cost = Fixed price, subject to agreed variation (up or down) according to economic circumstances

The formulas here can be complex and link to documented bank rates for things like:

- inflation
- interest rates
- exchange rates

Now let’s look at what we mean by various types of ‘Cost plus’ contracts:

**CPPC: Cost Plus Percentage of Cost**

Contract Cost = Contractor’s cost plus a percentage of that cost, as a fee

**CPFF: Cost Plus Fixed Fee**

Contract Cost = Contractor’s cost plus a fixed sum agreed at the outset, as a fee

**CPAF: Cost Plus Award Fee**

Contract Cost = Contractor’s cost plus a fixed sum determined by the client, as a fee

**CPIF: Cost Plus Incentive Fee**

Contract Cost = Contractor’s cost plus an agreed sum if the contractor meets agreed performance targets, as a fee

Of course, hybrids are possible.

When you are putting together a Project Proposal or a Business Case, you need to demonstrate the value to the sponsoring organization of making an investment in the project.

So, we’ll take a look first at the PM formulas for some basic measures of project investment performance, and then at the more sophisticated Discounted Cash Flow (DCF) method, which takes into account the ‘time value of money’. That is, a dollar today is worth more (due to inflation) than the promise of a dollar next year.

The simplest formula calculates the Payback from an investment as:

Payback = Total returns – Total investment

It’s therefore equivalent to calculating a revenue profit.

This is a measure of how long a project takes to recoup an investment, simply in cash terms. For a regular revenue return, R (say, $R per year), then:

Payback Period = Investment / R

Return on Investment (ROI) is usually calculated as a percentage. It measures the ratio of the profit, or return, to the cost, or investment:

ROI = (Return / Investment) × 100%
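As a quick sketch, here are the three simple appraisal measures side by side; all figures are made up for illustration:

```python
# Simple investment-appraisal measures, as described above.
# All figures are illustrative.
def payback(total_returns: float, investment: float) -> float:
    """Cash surplus: total returns minus total investment."""
    return total_returns - investment

def payback_period(investment: float, annual_return: float) -> float:
    """Years needed to recoup the investment at a regular annual return."""
    return investment / annual_return

def roi(profit: float, investment: float) -> float:
    """Return on Investment, as a percentage of the investment."""
    return 100.0 * profit / investment

print(payback(150_000, 100_000))        # 50000
print(payback_period(100_000, 25_000))  # 4.0 (years)
print(roi(50_000, 100_000))             # 50.0 (%)
```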

There are two ratios here: one common, the other less so.

This is an estimate of the whole-life cost of a project, including:

- Design and construction
- Day-to-Day operation
- Planned and reactive maintenance
- End-of-life decommissioning

This is a measure of the return a project needs to make, to cover all of the investment costs. In the formula are two terms:

- The first term draws on the cost of equity funding (capital available from reserves)
- The second term draws on the cost of debt (capital you borrow)

Here, Cost of Equity and Cost of Debt are the nominal interest rates applied to each.
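A minimal sketch of the standard WACC weighting, with illustrative figures. Note that practitioners often apply tax adjustments to the cost of debt, which this sketch (following the simple description above) omits:

```python
# Weighted Average Cost of Capital, weighting each funding source by
# its share of total capital. All figures are illustrative.
def wacc(equity: float, debt: float,
         cost_of_equity: float, cost_of_debt: float) -> float:
    """WACC as a decimal rate: E/(E+D)*Ke + D/(E+D)*Kd."""
    total = equity + debt
    return (equity / total) * cost_of_equity + (debt / total) * cost_of_debt

# 60% equity at 8%, 40% debt at 5%:
print(wacc(600_000, 400_000, 0.08, 0.05))  # 0.068, i.e. 6.8%
```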

What would you rather have; $100 today, or $100 a year from now?

Clearly, you want it today. In a year, inflation will erode the spending power of your $100 – it may only be worth $97. And, if you invest your $100 today, it could earn interest, preserving – or even increasing – its spending power.

How do we account for this in our calculations? We use a Discount Factor and Discount Rate, to calculate the present value (PV) of money in the future, or the future value (FV) of money today.

If we assume a discount rate, r, then the discount factor, D, to apply in Year n is:

D = 1 / (1 + r)^{n}

The present value of a sum A received in n years’ time, at Discount Rate r, is given by:

PV = A / (1 + r)^{n}

The future value in n years, of a sum A received today, at Discount Rate r, is given by:

FV = A × (1 + r)^{n}
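A short sketch of both calculations, with illustrative figures; note that discounting and compounding are inverses of each other:

```python
# Present value and future value at a discount rate, as described above.
def present_value(amount: float, rate: float, years: int) -> float:
    """PV of `amount` received `years` from now, at discount rate `rate`."""
    return amount / (1 + rate) ** years

def future_value(amount: float, rate: float, years: int) -> float:
    """FV in `years` of `amount` received today, at discount rate `rate`."""
    return amount * (1 + rate) ** years

print(round(present_value(100, 0.05, 1), 2))  # 95.24
print(round(future_value(100, 0.05, 1), 2))   # 105.0
```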

The Net Present Value (NPV) is the sum of the present values of every payment (negative) and receipt (positive). It represents the same idea as the simple payback but takes into account the time value of each payment or receipt.

If each is represented as a Cash Flow (CF) that is either positive (income) or negative (cost), and appears at a time n units from now, then:

NPV = Σ CF_{n} / (1 + r)^{n}
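A minimal sketch of the NPV sum, with illustrative cash flows (index 0 is today, so it is not discounted):

```python
# NPV: each cash flow discounted by (1 + r)^n, where n is how many
# periods from now it occurs.
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net Present Value; cash_flows[n] falls n periods from now."""
    return sum(cf / (1 + rate) ** n for n, cf in enumerate(cash_flows))

# Illustrative: spend 1000 today, receive 600 at the end of each of two years.
print(round(npv(0.10, [-1000, 600, 600]), 2))  # 41.32
```

A positive NPV means the project returns more than the discount rate demands.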

The Internal Rate of Return is the discount rate, r, that makes the NPV exactly equal to zero. If the IRR is greater than the project discount rate (the cost of capital to the organization, allowing for inflation), then the project makes a positive rate of return and is therefore beneficial.

All things being equal, if the IRR exceeds the discount rate by a sufficient margin, then we should undertake the project. But, of course, there are always other factors, like:

- risk
- availability of funds
- availability of resources

NPV and IRR are two different measures of the same thing.
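There is no closed formula for the IRR in general; we find it numerically. Here is a hedged sketch using simple bisection, which assumes the NPV is positive at the low bracket rate and negative at the high one:

```python
# Finding the IRR by bisection on the NPV function. Assumes NPV(low) > 0
# and NPV(high) < 0, i.e. the IRR lies between the two bracket rates.
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** n for n, cf in enumerate(cash_flows))

def irr(cash_flows, low=0.0, high=1.0, tol=1e-9):
    """Discount rate at which the NPV is (approximately) zero."""
    while high - low > tol:
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid   # NPV still positive: the IRR is higher
        else:
            high = mid  # NPV negative: the IRR is lower
    return (low + high) / 2

# Illustrative project: -1000 now, +600 at the end of each of two years.
print(f"{irr([-1000, 600, 600]):.1%}")  # about 13.1%
```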

There are two formal approaches to creating a Network Diagram:

- Critical Path Method (CPM)
- Program Evaluation and Review Technique (PERT)

The two formulas for Float are:

- Total Float = Late Start – Early Start (or, equivalently, Late Finish – Early Finish)
- Free Float = Early Start of the next activity – Early Finish of this activity

The Program Evaluation and Review Technique, or PERT, uses three-point estimates for durations, rather than the single point estimates used in the Critical Path Method (CPM).

For each activity, we estimate a duration, the Expected Activity Duration (EAD), based on:

- Optimistic likely duration (O)
- Most likely duration (M)
- Pessimistic likely duration (P)

Note that O and P are not the best and worst cases, but the best likely and worst likely cases.

There are two ways to get an EAD from these estimates.

- The ‘triangular’ distribution
- The ‘beta’ distribution

We can use them to calculate duration, cost, and resource estimates.

The formula for the PERT triangular distribution is the simple average of the three estimates:

EAD = (O + M + P) / 3

The formula for the PERT beta distribution is a variant on the simple triangular version that puts more weight on your ‘most likely’ estimate:

EAD = (O + 4M + P) / 6

A 4x weighting is most commonly used, although you may see other weights. In the CAPM and PMP exams, the weighting factor is 4.

Note that the divisor of 6 is because we add 1 x O, 4 x M, and 1 x P. 1+4+1=6.
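Both EAD calculations, as a quick sketch with made-up estimates (O = 4, M = 6, P = 11 days):

```python
# The two PERT expected-duration formulas, with illustrative estimates.
def ead_triangular(o: float, m: float, p: float) -> float:
    """Expected Activity Duration: simple average of the three estimates."""
    return (o + m + p) / 3

def ead_beta(o: float, m: float, p: float) -> float:
    """Expected Activity Duration with a 4x weight on the most likely value."""
    return (o + 4 * m + p) / 6

print(ead_triangular(4, 6, 11))  # 7.0
print(ead_beta(4, 6, 11))        # 6.5
```

The beta version sits closer to the most likely estimate, because of the 4x weighting.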

The PERT approach also allows us to make simple estimates of the variation in the EAD, by calculating a Standard Deviation (s).

A low value of standard deviation (SD) indicates that the data points are close to the Average and we can have a higher confidence in it. A high value of SD indicates the data points are spread out over a large range. The formulas for Standard Deviation are:

…for the triangular distribution

s = (P – O) / 6 …for the beta distribution

The range of an activity duration is, therefore, EAD ± s. This gives us a measure of the risk to a task. We can use Monte Carlo modeling to calculate the combined effect of the statistics of every activity on the finish date (or cost).

As with the basic statistics we saw above, the variance for an activity is the square of the standard deviation.
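Putting the beta-distribution pieces together – expected duration, the exam-standard standard deviation (P − O)/6, and the variance as its square – with illustrative estimates:

```python
# Exam-standard PERT beta estimates: EAD = (O + 4M + P) / 6,
# s = (P - O) / 6, and variance = s squared. Figures are illustrative.
def pert_beta(o: float, m: float, p: float) -> tuple[float, float, float]:
    """Return (expected duration, standard deviation, variance)."""
    ead = (o + 4 * m + p) / 6
    sd = (p - o) / 6
    return ead, sd, sd ** 2

ead, sd, variance = pert_beta(4, 6, 11)
print(ead, round(sd, 3), round(variance, 3))  # 6.5 1.167 1.361
```

The activity variances are what you would sum along a path before taking a square root – which is where Monte Carlo tools start.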

A lot of the math in basic risk management is actually bogus. This includes the commonest formula, which gives an Expected Monetary Value, V, to a risk, based on its impact, I, and likelihood, L:

V = I × L

It’s bogus, for two reasons:

- We mostly see numerical values for impact and likelihood that are on over-simplistic scales like 1, 2, 3, rather than on numerically meaningful scales.
- Since each of impact and likelihood is an estimate, we should really consider ranges for each, and a means of combining them that gives a range of values, rather than a single point estimate.

However, this is the formula that you would need, for example, for the CAPM or PMP exams.
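The exam-style calculation is a one-liner; the figures below are illustrative, with likelihood expressed as a probability between 0 and 1:

```python
# Expected Monetary Value of a risk: impact times likelihood.
# Figures are illustrative.
def expected_monetary_value(impact: float, likelihood: float) -> float:
    """EMV: monetary impact times probability of occurrence (0-1)."""
    return impact * likelihood

# A $50,000 impact with a 20% chance of occurring:
print(expected_monetary_value(50_000, 0.2))  # 10000.0
```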

The Root Mean Square, or RMS, is a statistical way to combine estimates for multiple risks into a single value.

Let’s say I have two tasks, each with:

- an estimate
- a worst-case risk
- a best estimate risk

These can be estimates of duration or cost, but we’ll assume cost here.

- Task 1
  - Estimate: 8 days
  - Worst-case risk: +6 days
  - Best estimate risk: +2 days
- Task 2
  - Estimate: 12 days
  - Worst-case risk: +7 days
  - Best estimate risk: +5 days

Adding these together, we get:

- Base estimate: 20 days
- Estimate + Worst-case risk: 33 days
- Estimate + best estimate risk: 27 days

But let’s look at the spreads between worst-case risks and best estimate risks:

- Task 1 risk spread: 4 days
- Task 2 risk spread: 2 days

We calculate the RMS by calculating the square root of the mean of the squares of these spreads.

- Task 1 risk spread squared: 16
- Task 2 risk spread squared: 4

The mean of 4 and 16 is (4+16)/2 = 10. And the square root of 10 is approximately 3.2. So the most likely risk level is the sum of the best estimate risks (5+2=7) and the RMS value (3.2).

So, our estimate plus most likely risk is 20 + 7 + 3.2 = 30.2 – which I would round up to 31 days.
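The worked example above can be reproduced in a few lines, which also makes the method easy to reuse with more tasks:

```python
# Reproducing the worked RMS example from the text.
import math

tasks = [
    {"estimate": 8, "worst": 6, "best": 2},   # Task 1
    {"estimate": 12, "worst": 7, "best": 5},  # Task 2
]

base = sum(t["estimate"] for t in tasks)                     # 20 days
best_risk = sum(t["best"] for t in tasks)                    # 7 days
spreads_sq = [(t["worst"] - t["best"]) ** 2 for t in tasks]  # [16, 4]
rms = math.sqrt(sum(spreads_sq) / len(spreads_sq))           # sqrt(10) = ~3.16

print(round(base + best_risk + rms, 1))  # 30.2
```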

In Failure Mode and Effects Analysis (see below) we allocate a Risk Priority Number in a similar way to how we calculated the Expected Monetary Value of a risk. But, here we also take into account a measure of how easy or hard it is to detect the risk.

For each failure mode, we give a numeric score to quantify:

- Likelihood that the failure will occur (O)
- Likelihood that the failure will *not* be detected (D)
- The severity of the adverse impact of the failure mode (S)

We multiply these three scores to get the Risk Priority Number (RPN) for that failure mode. The sum of the RPNs for all failure modes is the RPN for the whole process.
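A sketch of the RPN arithmetic, using hypothetical 1-to-10 scores for two failure modes:

```python
# FMEA Risk Priority Numbers: RPN = O x D x S per failure mode,
# summed for the whole process. Scores are hypothetical 1-10 values.
failure_modes = [
    {"occurrence": 3, "detection": 2, "severity": 8},
    {"occurrence": 5, "detection": 4, "severity": 6},
]

rpns = [f["occurrence"] * f["detection"] * f["severity"] for f in failure_modes]
process_rpn = sum(rpns)

print(rpns, process_rpn)  # [48, 120] 168
```

Ranking the individual RPNs tells you which failure modes to address first.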

We shan’t discuss the all-important Earned Value and Earned Schedule PM formulas here. We’ve already covered them in our article, Earned Value Primer: The Basics of EVM.

If you aren’t familiar with Earned Value Management, here’s a brief introduction, followed by a summary of the Earned Value Analysis formulas.

Finally, let’s take a look at a few valuable financial concepts. They don’t come with formulas, but they are essential ideas for every project manager to understand.

‘Sunk Cost’ is money you have already spent and can no longer recover. It is important because we often justify (at least, emotionally) continuing with a failing or no-longer valuable project because of the huge investment we have already committed.

But, that investment is gone, whether we continue or not. The only question that makes sense is this:

If we make the investment needed to complete the project, will we recoup a sufficient return on that extra investment?

‘Opportunity Cost’ is a valuation of the benefit of an opportunity we can no longer pursue, because of the alternative option we have chosen. Typically, this is the difference between:

- The actual value we have attained from our chosen option, and
- The whole anticipated value of its next best alternative, which we did not select.

This is a process for reducing costs without materially affecting the project scope or quality. Here, we interpret *‘materially’* in terms of what the users actually need.

If there are any formulas I have missed, please do let me know in the comments, below, and I can include them in an update of this article.

Dr Mike Clayton is one of the most successful and in-demand project management trainers in the UK. He is author of 14 best-selling books, including four about project management. He is also a prolific blogger and contributor to ProjectManager.com and Project, the journal of the Association for Project Management. Between 1990 and 2002, Mike was a successful project manager, leading large project teams and delivering complex projects. In 2016, Mike launched OnlinePMCourses.
