Wednesday, March 27, 2013

Estimate Confidence Buffers, Risk Buffers and Management Reserve

"If confusion is the first step to knowledge, I must be a genius.”
                                                                                                - Larry Leissner

We’re nearing the conclusion of this series that started with the basics of scheduling.  I want to tie up a few loose ends, maybe clarify some inaccuracies and address some confusion I may have introduced.
As I developed this series on scheduling, leading up to the previous post on estimate buffers, I hope I offered convincing arguments for including appropriate buffers.  In this post, I want to compare and contrast the various types of buffers that are common.

I think the discussion I offered on risk buffers is consistent with other authors and industry practices, so I won’t belabor that discussion.  I will point out that risk buffers are included in the project (time and cost) budget and are managed by the Project Manager.
Literature and established practices for improving estimate confidence are much rarer.  Therefore, there may be some stakeholder pushback against including these buffers in the schedule.  However, you can demonstrate through logic, reason, science and ethics that a realistic schedule must include these confidence buffers, just as it must include the risk buffers.  Like risk buffers, confidence buffers are included in the project budget and are managed by the Project Manager.

However, you don’t want to “double count” your risks.  Let’s say you do a lot of similar projects.  For these, there will probably be a common pool of risks.  If your estimates are based on historical results, and those risks sometimes occurred on past projects, then your Pessimistic estimates may already account for them.  You will need to adjust either the estimates or the risk buffers so that each risk is in the schedule only once.
In contrast to the confidence and risk buffers, some PMs Pad their estimates.  Some project managers, after over-running the project schedule a few times, learn that their estimates are never sufficient and start adding arbitrary amounts:  “We go over by 20% every time, so I’ll just start adding 20% to the schedule.”  This may sound logical on the surface, but it is actually bad for their credibility and for the PM profession.  (Look for more on this topic in a future post on The Toyota Way.)  Stakeholders learn that the PM is padding and, in response, start cutting the budget or otherwise compensating.  Pad is Bad.

In contrast, neither confidence buffers nor risk buffers are arbitrary.  Hopefully, if you’ve followed the discussion this far, the reasonable, logical and scientific justification for including them in the project is thoroughly explained and documented.
One last buffer that is sometimes seen in large, third-party project contracts is Management Reserve.  Management Reserve is separate from, and distinctly different from, estimate confidence buffers, risk buffers and pad.  Management Reserve is a contingency fund allocated for use by the contract owner (sponsor).  These funds are not part of the project budget and are not available to the Project Manager except through approved project change control that authorizes their allocation to the project.

I hope this series has been interesting and informative for you.  One last remaining post and then we can move on to new topics.
Was this discussion of scheduling and estimating clear and thorough?  Is there anything I missed that you would like me to address, or anything that is still not clear that I should expand on?

© 2013 Chuck Morton.  All Rights Reserved.

Thursday, March 21, 2013

Estimates & Buffers

“A clever person turns great troubles into little ones and little ones into none at all.”

If you’ve been following the series on the well-formed schedule and the subsequent series of posts, you are aware that from the WBS you create estimates and iteratively refine those estimates by, for example, improving the probability of success and compensating for risk.  One of the PM challenges is how to balance motivating task owners, setting appropriate stakeholder expectations, reporting honestly and doing all of this in a forthright and ethical manner.

To demonstrate, let’s use our example task from Risk Buffers – An Example.  You might estimate that your Most Likely time for the commute is 30 minutes and the Optimistic time is 20 minutes.  However, the Expected (Mean) time might be calculated as 35 minutes; this is the 50% probability of success, where you’re as likely to be early as late.  If you add one standard deviation (raising the probability of success to roughly 84%) or two standard deviations (roughly 98%), the estimate could balloon to 45 or 55 minutes.  On top of this, you then add those risk buffers, which might add 10-15 more minutes to the estimate.
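For readers who want to see the arithmetic, here is a minimal Python sketch of the classic PERT three-point calculation behind those numbers.  The Pessimistic value of 70 minutes is my assumption for illustration (the example doesn’t state one), which is why the one- and two-standard-deviation results come out a little under the rounded 45 and 55 minutes above.

```python
# A minimal sketch of a PERT three-point estimate for the commute example.
# O (Optimistic) and M (Most Likely) come from the example; P (Pessimistic)
# is an assumed value chosen only for illustration.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Return the PERT Expected (mean) duration and standard deviation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

O, M, P = 20, 30, 70          # minutes; P = 70 is an assumption, not from the post
expected, sd = pert_estimate(O, M, P)

print(f"Expected (about 50% confidence): {expected:.0f} min")
print(f"Plus one standard deviation:     {expected + sd:.0f} min")
print(f"Plus two standard deviations:    {expected + 2 * sd:.0f} min")
```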

Let’s look at the conflicts this creates with several of the stakeholders, starting with the task owner:  Physics and Theory X management say that if you put that high estimate on the task, the task owner will take all of it.  In order to motivate the task owner, it is necessary to use, for example, either the Most Likely or the Expected estimate.  In fact, I’ve managed projects in organizations that insisted that I use the Optimistic estimate and “stay on those team members” or they would just slack off.

Now look at this from the perspective of your relationship with the project owner or sponsor.  If you fill the schedule with fully bloated tasks (“What do you mean it’ll take 65 minutes to drive to the office!  I know full well it only takes 30 minutes.”), you’ll lose all credibility.  However, if you use only Optimistic (!) or even Most Likely or Expected estimates, the end (date or cost) is not realistic and you’ve set yourself up for failure (as documented in The Schedule – Probability of Success).

Some stakeholders want to know when it will really be done and how much it will really cost.  And they want to have realistic progress updates on these realistic completion values.  If you are not using fully weighted estimates, your reporting to these stakeholders will not be accurate.

Finally, another stakeholder you have to be true to is yourself (and the PM community).  How you address these conflicts must be ethical.

The conflict, then, is how to have individual task estimates that are under-weighted while the overall project estimate includes the probability-of-success and risk adjustments.  The solution, of course, is to have “tasks” in the schedule that represent these estimating and risk buffers.  So let’s say you create these buffers and put them all at the end of the schedule.  That creates reporting problems: at the start of the project there will appear to be a large gap between the end of the last deliverable and the end of the project, and as you progress and report completion of deliverables, it will appear that you are (most likely) late on every one of them.

At a minimum, then, you need to distribute these buffers into each deliverable, spreading them over the life of the project.  You, as project manager, own these buffers, and you have to deplete them as you go, continually reassessing how much has been consumed by actual task overage and underage.
Since these buffers are included in the project (cost and time) budget, they cannot be arbitrarily reduced, eliminated, increased or created except through formal project change control.  They can (depending on the rigor of project change control) be moved between deliverables, but this should be documented.
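To make “owning and depleting” the buffers a little more concrete, here is a rough sketch of tracking buffer consumption deliverable by deliverable.  The deliverable names, estimates and buffer sizes are made up for illustration, not taken from any real schedule.

```python
# Sketch: tracking buffer consumption per deliverable.
# Deliverable names, estimates and buffer sizes are hypothetical.

deliverables = {
    # name: {"estimate": planned days, "buffer": buffer days, "actual": days spent}
    "Requirements": {"estimate": 10, "buffer": 3, "actual": 12},
    "Design":       {"estimate": 15, "buffer": 4, "actual": 14},
    "Build":        {"estimate": 30, "buffer": 8, "actual": None},  # not finished yet
}

for name, d in deliverables.items():
    if d["actual"] is None:
        print(f"{name}: in progress, {d['buffer']} buffer day(s) still held")
        continue
    overage = d["actual"] - d["estimate"]   # positive = overran, negative = underran
    remaining = d["buffer"] - overage       # an underrun gives buffer back
    print(f"{name}: overage {overage:+} day(s), buffer remaining {remaining} day(s)")
```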

Before closing I want to mention that the book Critical Chain by Eliyahu M. Goldratt describes a formulaic approach to determining the appropriate amount and placement of the buffers.  This approach has been used successfully in many organizations and should be considered as an alternative to the more custom, complex approach to buffers I’ve presented.  Further, he goes into much more detail on the operational practices.
In my next post I’ll conclude this series on buffers and then we can move on to some new topics.

What challenges have you had convincing stakeholders to use realistic schedules?  Do you get pushback that you are just padding your estimates?  How do you handle it?
© 2013 Chuck Morton.  All Rights Reserved.

Wednesday, March 20, 2013

Where to Find Risks

"Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.”

                                                                                                - Douglas Adams
As I mentioned in my previous post,  Risk Buffers – An Example, this post is dedicated to identifying risks.  The only real way to find risks is by experience, and if you don’t have the experience yourself, then you must rely on the experience of others.  I’ll discuss three reliable sources for finding risks (but this is not exhaustive).

A taxonomy is a schema for classifying things.  The library’s Dewey Decimal System and the kingdom-phylum-class-order-family-genus-species hierarchy that biologists use for classifying organisms are both taxonomies.  Risk taxonomies have been developed for classifying and organizing risks.  Because a good taxonomy is exhaustive, you can use a risk taxonomy as a checklist to make sure you consider all areas of risk.
For IT and service projects, the Software Engineering Institute’s (SEI’s) Capability Maturity Model (CMM) developed a risk taxonomy (tr06.93.pdf – Taxonomy-based Risk Identification) twenty years ago that is still applicable.  I have Appendix page A-1 with me on every project and refer to it often.  For each entry, I ask the question:  Are there any xxx risks?  For example, for requirements stability, I ask:   Are the requirements stable?  Other industries have appropriate risk taxonomies available via the popular search engines.

These risk taxonomies are appropriate for finding product and service delivery risks, but they are not optimized for finding project delivery risks.  For those, PMI’s Organizational Project Management Maturity Model (OPM3) is a better source, though it’s cumbersome.  Basically, areas where the organization is not mature in project management are sources of risk.
These taxonomies and maturity models draw on others’ experience to help identify sources of risk for projects in general.  Getting specific to your project, another useful technique for identifying risks (opportunities and threats) is to work with your team or task owners, go through the WBS at the Deliverable, Activity, or Task level, and ask them to estimate the Optimistic, Most Likely, and Pessimistic schedule for each.  It’s really amusing: I can ask the project team if they can identify any risks and they’ll consistently say “No.”  But they can confidently give me O, ML, and P estimates, and when I follow up with “What about the Pessimistic estimate could make this late?” (or “What has to happen for us to achieve the Optimistic estimate?”), they can give me both threats and opportunities.  Amazing.

Finally, a reliable source of risk that you never want to miss is your assumptions.  An assumption can be defined as a risk without an owner.  Project estimating assumptions, task estimating assumptions, all assumptions need to be formalized and documented in the risk register.
In the previous post with the example risks, did you notice anything special about “risks” #3 and #4?
3.       Ice could be so bad that your office closes for the day
4.       You are low on gas and need to refill before you can make it all the way to work

Did you recognize that neither of these is a risk?  I’m being somewhat pedagogical to demonstrate my point here, but to complete our task, you have to go into the office.  For “risk” #3, since the office is closed, you can’t complete the task.  Part of the definition of a risk is that you have to be able to “control” the risk (mitigate, avoid, transfer or accept it).  This is why civilization-destroying asteroid collisions are not project risks.  “Risk” #4 is not a “might happen in the future.”  It already exists and must be dealt with, so it is not a risk; it is an issue.
To summarize my comments from today’s post:
  • Use taxonomies and maturity models to identify sources of risk
  • Look for Product/Service risks
  • Look for Project Delivery risks
  • Use your project team to identify risks (opportunities and threats)
  • Formalize assumptions into risks

Do you have suggestions or techniques for finding risks?

© 2013 Chuck Morton.  All Rights Reserved.

Friday, March 15, 2013

Risk Buffers – An Example

“President Bush has said that the economy is growing, that there are jobs out there. But you know, it's a long commute to China to get those jobs.”
                                                                                                - Tom Daschle

In The Schedule – Risk Buffers, the concluding post of a series on developing the well-formed schedule, I glossed over the complexity and details of planning project risk buffers.  I’d like to revisit the topic in more depth over the next few posts.
However, before I go into the complexities, I would like to present a very simple example, one I hope everyone can relate to, to demonstrate the concepts.  You are probably quite familiar with your commute:  you’ve driven it many times and you know what time you have to leave to generally get to work on time.  Even so, most commuters are occasionally surprised by unexpected traffic conditions – weather, wrecks, and road work, for example.  To demonstrate how to determine risk buffers, this exercise will touch on four of the six PMBoK Project Risk Management processes:  Identify Risks, Perform Qualitative Risk Analysis, Perform Quantitative Risk Analysis, and Plan Risk Responses.

For this example “project” we’ll only have one task:  Drive to work.  My next post will focus on identifying risks, so I won’t dwell on that step here.  For this exercise, let’s say you have these risks:
1.       There could be rain
2.       There could be snow or ice
3.       Ice could be so bad that your office closes for the day
4.       You are low on gas and need to refill before you can make it all the way to work
5.       There could be a wreck that causes a slow down
6.       There could be road work that causes a slow down
7.       There could be road work that causes a detour

(Do you see anything noteworthy about “risks” three and four?  I’ll have more to say about these in my next post.)
Something that is often missed or glossed over is dealing with Opportunities (the positive counterpart of threats).  If your task is estimated to a 50% probability, there is an equal chance of it coming in early as coming in late, so you need to factor in and exploit (or enhance, share or accept) the opportunities.  For example:

8.       Traffic could be very light
9.       There could be a wreck in a location that causes your commute to be lighter than normal
10.   All the traffic lights could hit perfect for you today

The next step is to qualitatively assess the risks.  The objective of this step is to determine which risks you are going to seriously pay attention to: those that you will quantify, plan a response for, and monitor.  It is generally done by assigning subjective Low, Medium and High values to probability (the likelihood that the risk event will occur) and impact (how the project is affected if the risk event occurs).  For example, if it’s August, the likelihood of snow or ice (risk #2) is Low.  From this analysis you generally then disregard the Low-Low risks (though note that these values can change over time, so you must periodically re-assess your risks).
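As a simple illustration of that screening step, here is a small sketch using a few of the commute risks.  Other than the August snow example, the Low/Medium/High ratings are hypothetical.

```python
# Sketch: qualitative risk screening.  Ratings are subjective and,
# except for the snow-in-August case, hypothetical.

risks = [
    # (risk, probability, impact)
    ("Rain",                       "Medium", "Low"),
    ("Snow or ice",                "Low",    "High"),   # it's August, so probability is Low
    ("Wreck causing a slowdown",   "Medium", "Medium"),
    ("Road work causing a detour", "Low",    "Low"),
]

# Carry everything except the Low-Low risks forward to quantitative
# analysis and risk response planning.
watch_list = [name for name, prob, impact in risks
              if not (prob == "Low" and impact == "Low")]

print("Risks to carry forward:", watch_list)
```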
The list is now winnowed down to the select risks that will receive attention.  Determine a specific (time, money or both) cost to the project for each remaining risk (including opportunities) if it occurs.

Most of what I read at this point talks about mitigating the risks, but there’s a lot more to risk response than just mitigation.  For example, you can add tasks to the project (such as checking the weather or turning on the radio for a traffic update).  Risk responses can also be to transfer (ask a colleague to cover for you in case you can’t get there in time), avoid (reschedule to another day when it won’t rain) or accept (add the time and cost to the schedule).
Finally, and this is where it has all been leading, you add a risk buffer to the project as part of the risk response.  For example, if there is a 25% chance of a risk occurring that would add twenty minutes to the commute, then you would add a five-minute risk buffer to the project (in our example, you would plan to leave five minutes earlier).  Sum these values for each remaining risk and opportunity and add a buffer for the total amount.
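A back-of-the-envelope version of that calculation might look like the sketch below.  Apart from the 25%-chance, twenty-minute wreck from the example, the probabilities and impacts are hypothetical, and opportunities are entered with negative impact.

```python
# Sketch: sizing a risk buffer as the sum of probability-weighted impacts.
# Apart from the wreck figures, the values below are hypothetical.

risk_register = [
    # (risk or opportunity, probability, impact in minutes)
    ("Wreck causing a slowdown", 0.25,  20),   # 25% chance of +20 minutes (from the example)
    ("Rain",                     0.30,  10),
    ("Road work detour",         0.10,  15),
    ("Unusually light traffic",  0.20, -10),   # an opportunity reduces the buffer
]

buffer_minutes = sum(prob * impact for _, prob, impact in risk_register)
print(f"Risk buffer: about {buffer_minutes:.0f} minutes (leave that much earlier)")
```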

With this step, you have again increased the probability of getting to the office on time – er, that is, of completing your project on schedule.
I’ll be delving deeper into risks over the next few posts using this example to demonstrate the concepts.  What about risks – threats and opportunities – would you like to understand better?

© 2013 Chuck Morton.  All Rights Reserved.

Saturday, March 2, 2013

Estimate Confidence – A Demonstrative Example

"Each time we face our fear, we gain strength, courage, and confidence in the doing”

                                                                                                - Theodore Roosevelt
In my previous post, The Schedule – Improving Estimate Confidence, the conclusion likely leaves the impression that getting reasonably high schedule confidence is impractical for any meaningful project.  In fact, PERT teaches that you can get high confidence in the schedule with task confidence of only 50%.

It is true, as I said, that the probability of completing the project (a series of critical path tasks) on the expected date is determined by the product of the individual task probabilities.  But this isn’t really meaningful.  For example, a series of 10 critical path tasks, each estimated at 10 days with a confidence level of 50%, has a probability of less than 1 in 1,000 of completing in 100 days.  But our stakeholders are generally not that rigorous – we are usually allowed to complete within 5-10 days either way of that date for acceptable success, and this dramatically changes the results.
At a task confidence level of 50%, PERT assumes that there is an equal balance between tasks finishing early and tasks finishing late, which at project end will balance out the noise of individual task variance.

An example will better illustrate the situation.  As shown in Table 1, here is a project with ten critical path tasks.  For simplicity, I’ve given them all the same estimates, but the specific estimate values aren’t what matters.
In Table 2, the Expected (statistical mean) and Variance values are plugged into a standard normal z-table so I can show the duration ranges for selected probabilities of success for this hypothetical project.  StdDev is just the square root of Variance and is in the same units of measure as the estimates.  The appropriate multiplier is applied to StdDev and subtracted from (added to) the Mean to get the Low (High) value; Range is the difference between Low and High.  I have taken liberties with rounding throughout this table.  Read the table this way: for a 50% project probability of success (for the hypothetical project in Table 1), one would schedule 109-121 days of duration for the project, a range of 12 days.  If a sponsor needed a higher confidence level, they could get 90% confidence with 101-129 days (a range of 28 days).
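For anyone who wants to recreate the spirit of Table 2, here is a minimal sketch of the roll-up using assumed per-task estimates (O=5, ML=10, P=24 days; not necessarily the Table 1 values).  With those assumptions the results land in the same ballpark as the figures above; the exact numbers depend on the real Table 1 estimates, the z values used, and rounding.

```python
# Sketch: rolling up PERT estimates over a critical path of 10 identical tasks.
# The per-task O/ML/P values are assumptions for illustration.
import math

O, M, P = 5, 10, 24
n_tasks = 10

expected = (O + 4 * M + P) / 6          # per-task mean
variance = ((P - O) / 6) ** 2           # per-task variance

project_mean = n_tasks * expected
project_sd = math.sqrt(n_tasks * variance)   # variances add; standard deviations do not

# The 1-in-1,000 point: the chance that every one of 10 tasks comes in
# at or under its 50%-confidence estimate.
print(f"P(all 10 tasks hit their 50% estimate) = {0.5 ** n_tasks:.4f}")

# Two-sided duration ranges from a standard normal z-table.
for confidence, z in [(0.50, 0.674), (0.90, 1.645)]:
    low = project_mean - z * project_sd
    high = project_mean + z * project_sd
    print(f"{confidence:.0%} confidence: {low:.0f} to {high:.0f} days "
          f"(range {high - low:.0f} days)")
```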


There are a number of gleanings from this exercise:
  • Specifying a fixed completion date sets you up for failure, with a less than 0.1% probability of delivering on that date with just 10 critical path tasks.
  • If that is bad, Optimistic scheduling is a disaster.  If you planned to deliver in the optimistic duration above (50 days), your odds drop into the billionths.
  • According to PERT, it is possible to deliver a project successfully within a statistically sound range that should be acceptable to most stakeholders.
But all is not champagne and roses – after all we are project managers and the gods do laugh at us.  So what can go wrong in this model?  Let’s review the validity of the assumptions:
  • That there is a correlation between PERT and statistical analysis.  I know of no research that supports or confirms this assumption.
  • The data points (project estimates) present as a Normal Distribution.  In fact, we know this is not true and that the values skew to the right (meaning that results are more likely to be late rather than early).
  • To be statistically meaningful, you need a large number of data points.  Most projects will not have the volume of critical path tasks needed to validate this assumption.
  • That the results of one task are independent of any other task.  This is statistical dependence rather than project task dependence.  For this to be true, variance (overage or underage) in one task must not influence the variance of any other task.  Earned Value Management (EVM) results tend to invalidate this assumption, because EVM experience shows that variance early in a project is a strong predictor of comparable variance later in the project.
To round out this discussion on estimate confidence, I will note that the PMBoK and the Practice Standard for Project Estimating both consider PERT an analogous estimating technique (though I question that classification when PERT is applied at the task level by the task owner).  The only recognized bottom-up estimating technique is the Delphi Method.

Has this series on advanced scheduling techniques provided any insight on weaknesses in the techniques you use?