
Friday, April 12, 2013

How Should Project Managers Be Measured?

"Being in power is like being a lady. If you have to tell people you are, you aren't”

                                                                                                - Margaret Thatcher
“How should I measure my PMs?” is a frequent question (in a number of variants) on various LinkedIn discussion boards. The classical answer, of course, is that the Project Manager is measured on their success at delivering the contracted/chartered scope on time and within budget. The practical answer is not so simple.

The traditional response is based on the purist PMI definition of a project and project manager: a Sponsor funds a specific project (agreed-on scope, time, and cost) and the PM has complete authority and autonomy to deliver it. In this scenario, it is proper to measure the PM’s performance against those criteria.
I don’t know about you, but I’ve never worked on a project like that. So let’s look at a couple of scenarios more consistent with what I’ve experienced. In the first, the PM is assigned to a VP or senior manager and acts as that executive’s agent for delivering the project. (By the purist PMI definition, the executive is the PM, but that’s not how the industry sees it.) In this scenario, the executive makes the strategic and significant controlling decisions, while the PM is responsible for tactical and operational delivery on the executive’s behalf and for monitoring and reporting. The key PM skill here is proactive stakeholder communication.

This PM should not be measured on the traditional criteria: they don’t have the authority to make significant or strategic resource reallocations to meet the project objectives. However, this PM should be measured on the quality of the plan (relative to the organization’s delivery capability), the quality of the schedule, delivery to plan, and the frequency, appropriateness, and quality of communications. On this last point, for example, the PM should be measured on how well they keep the stakeholders (particularly the executive) informed about variances from plan, but not on the magnitude of the variances, the fact that variances exist, or the response to them.
Another common scenario is that the PM is assigned to a PMO, or that the organization has a formal, well-defined project delivery process. Here the organization, not the PM, takes on the responsibility for delivering scope within time and cost by specifying the process. The PM should therefore be measured on adherence to the process, the quality of the deliverables, and their contribution to improving the process.

There is an important lesson in this scenario: if the PM follows the process and the project fails, the PM was successful; if the PM doesn’t follow the process, regardless of whether the project is a success, the PM has failed. This may sound heretical, but it is the difference between how GM and Toyota manufacture cars. For GM, delivering a car on schedule is paramount, regardless of the process disruption. Thus, each car becomes a one-off craftsman product with inconsistent quality (results are not in statistical control and quality is tested in). In contrast, for Toyota the process is paramount. If the process is not working properly, they will stop the line until the problem is corrected. That is a true assembly line, and the product quality is consistent.
If your organization has a project delivery methodology, the only way to know if it works is to apply it absolutely and allow it to succeed or fail.  If it fails, then do the appropriate root cause analysis and problem resolution to improve the process.  Thus, a PM who shortchanges the process is not benefiting the long-term quality of the organization.

The bottom line is that the PM should be evaluated on their performance within their span of control and against the organization’s delivery objectives. Using any other criteria creates dissonance between what you say you want and what you are rewarding the PM for. To paraphrase Sgt. Esterhaus, “Let’s be consistent out there.”
What experience do you have with inconsistencies between the reality of what the PM does and how the PM is measured?

© 2013 Chuck Morton.  All Rights Reserved.

Friday, March 15, 2013

Risk Buffers – An Example

“President Bush has said that the economy is growing, that there are jobs out there. But you know, it's a long commute to China to get those jobs.”
                                                                                                - Tom Daschle

In The Schedule – Risk Buffers, the concluding post of a series on developing the well-formed schedule, I glossed over the complexity and details of planning project risk buffers.  I’d like to revisit the topic in more depth over the next few posts.
However, before I go into the complexities, I would like to present a very simple example, one I hope everyone can relate to, to demonstrate the concepts.  You are probably quite familiar with your commute:  you’ve driven it many times and you know what time you have to leave to generally get to work on time.  Even so, most commuters are occasionally surprised by unexpected traffic conditions – weather, wrecks, and road work, for example.  To demonstrate how to determine risk buffers, this exercise will touch on four of the six PMBoK Project Risk Management processes:  Identify Risks, Perform Qualitative Risk Analysis, Perform Quantitative Risk Analysis, and Plan Risk Responses.

For this example “project” we’ll only have one task:  Drive to work.  My next post will focus on identifying risks, so I won’t dwell on that step here.  For this exercise, let’s say you have these risks:
1. There could be rain
2. There could be snow or ice
3. Ice could be so bad that your office closes for the day
4. You are low on gas and need to refill before you can make it all the way to work
5. There could be a wreck that causes a slowdown
6. There could be road work that causes a slowdown
7. There could be road work that causes a detour

(Do you see anything noteworthy about “risks” three and four?  I’ll have more to say about these in my next post.)
Something that is often missed or glossed over is dealing with opportunities (the opposite of threats). If your task estimate sits at the 50% probability point, there is an equal chance of it coming in early as coming in late, so you need to factor in and exploit (or enhance, share, or accept) the opportunities. For example:

8. Traffic could be very light
9. There could be a wreck in a location that causes your commute to be lighter than normal
10. All the traffic lights could hit perfectly for you today

The next step is to qualitatively assess the risks. The objective of this step is to determine which risks you are going to seriously pay attention to – those that you will quantify, determine the risk response for, and monitor. It is generally done by assigning subjective Low, Medium, and High values to probability (the likelihood that the risk event will occur) and impact (how the project is affected if the risk event occurs). For example, if it’s August, the likelihood of snow or ice (risk #2) is Low. From this analysis you generally then disregard the Low-Low risks (though note that these values can change over time, so you must periodically re-assess your risks).
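To make the screening concrete, here is a minimal sketch in Python; the ratings below are invented for the commute example, not a prescribed scale:

RATINGS = {"Low": 1, "Medium": 2, "High": 3}

risks = [
    # (description, probability, impact) -- illustrative ratings only
    ("Rain",                        "High",   "Low"),
    ("Snow or ice",                 "Low",    "Medium"),  # it's August
    ("Office closed by ice storm",  "Low",    "High"),
    ("Low on gas, must refill",     "Medium", "Low"),
    ("Wreck causes a slowdown",     "Medium", "Medium"),
    ("Road work causes a slowdown", "Low",    "Low"),
    ("Road work causes a detour",   "Low",    "Low"),
]

def worth_watching(probability, impact):
    # Keep any risk that is not rated Low on both dimensions.
    return not (RATINGS[probability] == 1 and RATINGS[impact] == 1)

watch_list = [r for r in risks if worth_watching(r[1], r[2])]
for name, probability, impact in watch_list:
    print(f"{name}: probability={probability}, impact={impact}")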
The list is now winnowed down to the select risks that will receive attention.  Determine a specific (time, money or both) cost to the project for each remaining risk (including opportunities) if it occurs.

Most of what I read at this point talks about mitigating the risks, but there’s a lot more to risk response than just mitigation. For example, you can add tasks to the project (such as checking the weather or turning on the radio for a traffic update). In addition, a risk response can be to transfer (ask a colleague to be there for you in case you can’t get there in time), avoid (reschedule to another day when it won’t rain), or accept (add the time and cost to the schedule).
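As a small illustration, a risk register might record the chosen response strategy alongside each risk. The entries below are hypothetical, continuing the commute example:

from enum import Enum

class Response(Enum):
    AVOID = "avoid"
    TRANSFER = "transfer"
    MITIGATE = "mitigate"
    ACCEPT = "accept"

# Hypothetical register entries for the commute example.
plan = {
    "Rain slows traffic":          (Response.MITIGATE, "check the weather before leaving"),
    "Wreck causes a slowdown":     (Response.ACCEPT,   "cover with the schedule buffer"),
    "Can't arrive in time at all": (Response.TRANSFER, "ask a colleague to stand in"),
    "Heavy rain forecast":         (Response.AVOID,    "reschedule to a dry day"),
}

for risk, (strategy, action) in plan.items():
    print(f"{risk}: {strategy.value} - {action}")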
Finally, and this is where it has all been leading, add a risk buffer to the project as part of the risk response. For example, if there is a 25% chance of a risk occurring that would add twenty minutes to the commute, then you would add a five-minute risk buffer to the project (in our example, you would plan to leave five minutes earlier). Sum the expected values for each appropriate risk and opportunity and add a buffer for the total amount.
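A minimal sketch of that arithmetic, assuming the invented probabilities and impacts below (opportunities enter with negative impacts, so they shrink the buffer):

# Each entry: (description, probability of occurring, impact in minutes).
# All numbers are invented for the commute example.
quantified = [
    ("Wreck causes a slowdown", 0.25,  20.0),  # the example above: 0.25 * 20 = 5
    ("Rain slows traffic",      0.30,  10.0),
    ("Low on gas, must refill", 0.50,   5.0),
    ("Traffic is very light",   0.20, -10.0),  # opportunity: reduces the buffer
]

# The buffer is the sum of the expected values (probability * impact).
buffer_minutes = sum(probability * impact for _, probability, impact in quantified)
print(f"Plan to leave {buffer_minutes:.1f} minutes early.")  # 5 + 3 + 2.5 - 2 = 8.5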

With this step, you have again increased the probability of getting to the office on time – er, that is, of completing your project on schedule.
I’ll be delving deeper into risks over the next few posts using this example to demonstrate the concepts. What would you like to understand better about risks – threats and opportunities?

© 2013 Chuck Morton.  All Rights Reserved.

Sunday, June 26, 2011

Project Failures & The Chaos Reports

Diane, the Chief Financial Officer, had invited the partner of the local PwC office for this meeting.  She had only done that once before, so he knew this discussion was special.  “How do I guarantee project success?” she asked.  He didn’t need it explained that this was probably a make-or-break project for the company.  He grasped all of the nuances of this question.  After carefully considering his thoughts, he responded “There are many ways to improve the probability of project success, but none of those guarantee success.  In fact, you really can’t guarantee project success.  But there is another way:  Call the activity an experiment or research or a prototype.  Regardless of the outcome, you can claim success.  Just don’t call it a project.”

I want to take a couple of posts and discuss project failures. In this post I’ll discuss the often-quoted Chaos reports, which apply to all types of projects, and in the next post I’ll discuss IT project failures more specifically.

Let me start this discussion by stating up front that I’ve never read a Chaos report.  The Chaos Report is published by The Standish Group and costs hundreds of dollars (or more) to purchase.  My guess is that most people who reference the Chaos Report are, like me, only familiar with the press releases, headlines, and secondary references.

Because I haven’t read the report, I can’t criticize it.  I can, however, criticize all the people who reference it to make a point about project failure.  Allow me to present a couple of examples of why the headline number of project failures is misleading and misrepresentative.

In the first example, I will use an analogy. Assume that most people who get a dog want it trained, and that most of those people choose to do it themselves rather than hire a professional. Further, most people who go it alone don’t dedicate enough time, don’t have the experience, don’t follow through, and thus don’t succeed. Now, let’s survey all the people who bought dogs last year and ask them about the success of their dog training efforts. The results, of course, will show that most dog training efforts ended in failure, even though virtually all professional training was successful. The headline (most failed) completely misrepresents the success of professional dog trainers.

My second concern is with the definitions of success and failure. “Strategies for Learning From Failure” (Amy C. Edmondson, Harvard Business Review, April 2011) describes a spectrum of failures ranging from blameworthy to praiseworthy. Just because a project didn’t meet its original (optimistic) objectives does not mean it was a failure. In addition, different perspectives on the project can produce completely different opinions. For example, both the Sydney Opera House and the Ford Taurus went far over schedule and budget, but both are regarded today as tremendous market successes. In another example, a project could be started to introduce a new product into the market. During the exploratory phase, the team determines that it cannot meet the business objectives, and the project is cancelled. The brand manager considers the project a failure because the product did not launch; the CEO, however, considers it a success because the company did not over-invest in a failing proposition and could redirect the funds to other, more promising opportunities. So how can I accept the Chaos report headlines when the same project can elicit either success or failure results depending only on who I ask?

There are many other reasons why the headlines should be ignored:

· If the organization has failed projects, what is its success rate with operational efforts (i.e., what is the organization’s success baseline in general)?
· How does organizational project management maturity correlate with project success?
· How do the respondents differentiate between project vs. product success and failure?
· How does The Standish Group address different viewpoints of success vs. failure on the same project?
· How do project manager experience, autonomy, and authority correlate with project success?

Taken together: if a barely profitable organization with poor manufacturing processes and no experience delivering projects asked one of its middle managers to lead an effort to bring out a revolutionary new product, told the manager how much was budgeted, how long it would take, and who would comprise the project team (with the team members continuing to report to their current units), who would be surprised when, after a few months, the project was cancelled and called a failure? How many of the projects in the Chaos Reports fit this model? If The Standish Group reported that most projects with these criteria failed, would it be newsworthy?

I have every expectation that The Standish Group conducts their research professionally and competently.  Again, I’m not criticizing The Standish Group or their Chaos Report.  I am saying that we can’t use the headlines to understand project success or failure rates;  we need the demographic details that are contained in the detailed report.  Those details, I am sure, would provide clarity on something much more important than the frequency of project failure – they would provide insight into conditions for project success.

Wouldn’t you bet those details show that projects in organizations with mature PM processes, run by qualified project managers, succeed at a much higher rate?