
Friday, April 12, 2013

How Should Project Managers Be Measured?

“Being in power is like being a lady. If you have to tell people you are, you aren’t.”

                                                                                                - Margaret Thatcher
“How should I measure my PMs?” is a frequent question (in a number of variants) on various LinkedIn discussion boards.  The classic answer, of course, is that the Project Manager is measured on their success at delivering the contracted or chartered scope on time and within budget.  The practical answer is not so simple.

The traditional response is based on the purist PMI definition of a project and project manager:  a Sponsor funds a specific project (agreed-on scope, time and cost) and the PM has complete authority and autonomy to deliver the project.  In this scenario, it is proper to base the PM’s performance on those criteria.
I don’t know about you, but I’ve never worked on a project like that.  So let’s look at a couple of scenarios more consistent with what I’ve experienced.  In the first, the PM is assigned to a VP or senior manager and acts as that executive’s agent for delivering the project.  (By the purist PMI definition, the executive is the PM, but that’s not how the industry recognizes it.)  In this scenario, the executive makes the strategic and significant controlling decisions;  the PM is responsible for tactical and operational delivery on the executive’s behalf and for monitoring and reporting.  The key PM skill is proactive stakeholder communication.

This PM should not be measured on the traditional criteria:  they don’t have the authority to make significant or strategic resource reallocations to meet the project objectives.  Instead, this PM should be measured on the quality of the plan (relative to the organization’s delivery capability), the quality of the schedule, delivery to plan, and the frequency, appropriateness and quality of communications.  On communications, for example, the PM should be measured on how well they keep the stakeholders (particularly the executive) informed about variances from plan, but not on the magnitude of those variances, the fact that they occurred, or the response to them.
Another common scenario is that the PM is assigned to a PMO or that the organization has a formal, well-defined project delivery process.  In this scenario, the organization, not the PM, takes on the responsibility for delivering scope within time and cost by specifying the process.  In this scenario, the PM should be measured on adhering to the process, the quality of the deliverables and the contribution to improving the process.

There is an important lesson in this scenario:  if the PM follows the process and the project fails, the PM was successful;  if the PM doesn’t follow the process, then regardless of whether the project succeeds, the PM has failed.  This may sound heretical, but it is the difference between how GM and Toyota manufacture cars.  For GM, delivering a car on schedule is paramount, regardless of the process disruption.  Thus, each car becomes a one-off craftsman product with inconsistent quality (results are not in statistical control and quality is tested in).  In contrast, for Toyota the process is paramount.  If the process is not working properly, they stop the line until the problem is corrected.  That is a true assembly line, and the product quality is consistent.
If your organization has a project delivery methodology, the only way to know if it works is to apply it absolutely and allow it to succeed or fail.  If it fails, then do the appropriate root cause analysis and problem resolution to improve the process.  Thus, a PM who shortchanges the process is not benefiting the long-term quality of the organization.

The bottom line is that the PM should be evaluated on their performance within their span of control and against the organization’s delivery objectives.  Using any other criteria creates dissonance between what you say you want and what you are rewarding the PM for.  To paraphrase Sgt. Esterhaus, “Let’s be consistent out there.”
What experience do you have with inconsistencies between the reality of what the PM does and how the PM is measured?

© 2013 Chuck Morton.  All Rights Reserved.

Thursday, January 13, 2011

A Philosophy of Projects and Products (Part 3 of 3)

“We’re on schedule for development and plan to start testing next week,” an upbeat Dave updated his manager, John.  John answered, “That’s good, but you’ve pulled developers from support to work OT to make the schedule, our support backlog is climbing, and now no one is available for the scheduled maintenance upgrade this weekend.  We need to take a closer look at how you are deciding the team’s priorities.”
Over the past two blog entries, on P&SD PM and Consultancy PM, I discussed two distinct project management philosophies.  This entry will discuss why this is important to a project manager, to a company hiring a project manager, to a company engaging a vendor of PM services, and to a supplier of PM services.
One area where the distinction is important is academia.  In some circles there are efforts to create a distinct School of Project Management separate from the School of Management.  Existing Management leadership resists this effort, arguing that PM is just a variation of Management, not a distinct discipline.  Looking at P&SD PM, one could accept that argument:  after all, P&SD PM is generally found in organizations where PM is not a core competency, it serves the delivery of the organization’s primary function, and general managers direct and utilize the PM services.  Consultancy PM, on the other hand, is a distinct model of management;  it doesn’t support an organization, it is the core competency of the organization.
Another area, closely related, is PM research.  Little of the published research on project management, such as that in the Project Management Journal, distinguishes among the types of project management being practiced when reporting results.  But what can be accurately inferred from the research when it lumps together practices as distinct as those I’ve described in the two previous entries?
More relevant to the practicing project manager, especially one considering a career change, are the styles, cultures, practices, and expectations within organizations with the different PM environments.  I’ve worked in both types and management expectations are crucially different.  Further, the people – your managers and peers – in one are not likely to recognize the differences and prepare you for the changes.
As Peter M. Senge discusses in The Fifth Discipline: The Art & Practice of the Learning Organization (Doubleday, 1994), our mental models can artificially restrict our performance.  Be prepared when moving from a P&SD PM culture to a Consultancy PM culture, or vice versa, to let go of restrictive workplace assumptions.
Finally, the PM Best Practice blog endeavors to advance organizational maturity and continuous improvement.  The concepts explored in the P&SD PM environment and the Consultancy PM environment will be regularly revisited in future columns.  PM best practices are different in these environments and trying to force a set of best practices appropriate for one onto the other is, at best, frustrating.
Do you see other significant ways that the P&SD PM and Consultancy PM environments are relevant to you as a project manager?  If you’ve encountered any of the cultural differences that I describe, how did you adjust to them?