ATD Blog

Performance Management for the World of Now

Thursday, December 10, 2015

If you were my manager and you watched my performance for an entire year, how accurate do you think your ratings of me would be on attributes such as my “promotability” or “potential”? How about more specific attributes, such as my customer focus? Do you think that you’re one of those people who, with enough time spent observing me, could reliably rate these aspects of my performance on a 1-to-5 scale?

These are critically important questions, because in the vast majority of organizations we operate as though the answer to all of them is yes: with enough training and time, people can become reliable raters of other people. We have constructed our entire edifice of HR systems and processes on this answer.

Likewise, when, as part of your performance appraisal, we ask your boss to rate you on the organization’s required competencies, we do it because of our belief that these ratings reliably reveal how well you are actually doing on these competencies. The same applies to the widespread use of 360-degree surveys. We use these surveys because we believe that other people’s ratings of you will reveal something about you that can be reliably identified, and then improved. 

We’re wrong. Research reveals that neither you nor any of your peers are reliable raters of anyone. As a result, virtually all of our people data is fatally flawed. Over the last 15 years a significant body of research has demonstrated that each of us is a disturbingly unreliable rater of other people’s performance. The effect that ruins our ability to rate others has a name: the Idiosyncratic Rater Effect, which tells us that my rating of you is driven not by who you are, but by my own idiosyncrasies. This effect is large and resilient. No amount of training seems able to lessen it, and on average, 61% of my rating of you is a reflection of me. Bottom line: when we look at a rating, we think it reveals something about the ratee, but it doesn’t. Instead, it reveals a lot about the rater.
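To make the size of this effect concrete, here is a minimal simulation sketch (my own illustration, not data from the underlying studies). It models each rating as the sum of the rater’s idiosyncrasy, the ratee’s true performance, and noise, with the variance shares chosen to echo the roughly 61% figure above:

```python
import random
import statistics

# Hedged illustration: the variance shares (0.61 rater / 0.30 ratee / 0.09 noise)
# are assumptions chosen to mirror the ~61% figure, not measured values.
random.seed(1)
RATERS, RATEES = 200, 200

rater_bias = [random.gauss(0, 0.61 ** 0.5) for _ in range(RATERS)]  # var 0.61
true_perf = [random.gauss(0, 0.30 ** 0.5) for _ in range(RATEES)]   # var 0.30
noise_sd = 0.09 ** 0.5                                              # var 0.09

# Every rater rates every ratee: rating = rater bias + true performance + noise.
ratings = [[rater_bias[i] + true_perf[j] + random.gauss(0, noise_sd)
            for j in range(RATEES)] for i in range(RATERS)]

total_var = statistics.pvariance([r for row in ratings for r in row])

# In this fully crossed design, a rater's mean across all ratees is roughly
# that rater's own bias, so the variance of those means estimates the rater share.
rater_means = [statistics.mean(row) for row in ratings]
print(f"variance explained by the rater: "
      f"{statistics.pvariance(rater_means) / total_var:.0%}")
```

Run it and the rater’s share comes out near 61%: the score tells you far more about who did the rating than about who was rated.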

Despite the repeated documentation of the Idiosyncratic Rater Effect in academic journals, in the world of business we appear unaware of it. We have yet to grapple with what this effect does to our people practices. We take these ratings—of performance, of potential, of competencies—and we use them to decide who gets trained on which skill, who gets promoted to which role, who gets paid which level of bonus, and even how our people strategy aligns to our business strategy. All of these decisions are based on the belief that these ratings actually reflect the people being rated. After all, if we didn’t believe that, if we thought for one minute that these ratings might be invalid, then we would have to question everything we do to and for our people. How we train, deploy, promote, pay, and reward our people: all of it would be suspect.

Is this really a surprise? You’re sitting in a year-end meeting discussing a person and you look at their performance ratings, and you think to yourself “Really? Is this person really a ‘5’ on strategic thinking? Says who—and what did they mean by ‘strategic thinking’ anyway?” You look at the behavioral definitions of strategic thinking and you see that a “5” means that the person displayed strategic thinking “constantly” whereas a “4” is only “frequently,” but still, you ask yourself, “How much weight should I really put on one manager’s ability to parse the difference between ‘constantly’ and ‘frequently’? Maybe this ‘5’ isn’t really a ‘5’. Maybe this rating isn’t real.”

So, perhaps you begin to suspect that your people data can’t be trusted. If so, these last fifteen years have proven you right, and this finding must give us all pause. It means that all of the data we use to decide who should get promoted is bad data; that all of the performance appraisal data we use to determine people’s bonus pay is imprecise; and that the links we try to show between our people strategy and our business strategy—expressed in various competency models—are spurious. It means that, when it comes to our people within our organizations, we are all functionally blind. 

So what are we to do? 

Here are four considerations for you as you leave your legacy system behind and adopt an approach that maps to how we work today.


It’s All About the Team Leader

This system must be built for team leaders. We know that performance and engagement happen (or fail to happen) in a team. The organization can encourage the right climate and provide the right tools, but it’s up to every team leader to create a microclimate on his team, which then drives both performance and engagement. We all know this. Work for a rotten boss inside a great company, and the experience of the boss trumps the experience of the company. 

Yet our current systems are not built for the team leader at all—they are built for the organization and for HR. Our PM systems require our team leaders to do a host of things that the best team leaders don’t actually do. The best leaders don’t set goals and then ask people to track their “completion percentage” on each goal. They don’t rate people on prescribed lists of competencies. Nor do they write detailed performance reviews once or twice a year. 

It’s as if our performance and engagement systems live in a parallel universe, cut off from the real world where actual team leaders grapple with the challenge of helping actual team members get actual work done. 

Radically Frequent Check-Ins

The most powerful ritual of great team leaders is a radically frequent check-in about near-term future work. 

These check-ins aren’t laborious, preparation-filled conversations about feedback or to-do lists. No, they are 1-to-1 meetings about the work that the team member is about to do right now, and how the team leader can help. In fact, the two questions he asks in these check-ins are simply “What are your priorities this week?” and “How can I help?” He does this because he knows the goals set at the beginning of the year are irrelevant by the third week of the year, and so every week he’s got to check in with each team member to course-correct in real time.


The weekly cadence is very important. For the high-performing team leaders we study, a year is not a marathon but 52 weekly sprints. Held only once every six weeks, a check-in becomes backward-looking and vague about the future. Held once a week, it can stay future-focused and specific to the work at hand.

Coaching, Not Feedback

These days, I’m always hearing that managers should learn how to give feedback, that they should be better at receiving it, and specifically that “Millennials love feedback.” None of this is true. Millennials don’t love feedback. No one does. Indeed, a growing body of research shows that feedback sends us into “fight or flight” mode—even when the feedback is delivered with great skill and caring. 

The best team leaders know that what people want more than feedback is attention—in particular, coaching attention. They don’t tell a team member where he stands—no one wants to be on the receiving end of that. Instead, they help him know how to get better. They give strengths-based coaching. Instinctively they know the best way to help a person grow is to challenge him to identify and then leverage his strengths intelligently. His strengths—not his weaknesses—are his “areas for improvement,” those areas where he will learn the most, grow the fastest, and be most resilient. 

Reliable Performance Ratings

To see the performance of each team member, all an organization needs is a short survey that asks team leaders, at least four times a year, a few carefully worded questions about how they feel about each of their team members. Asking team leaders for their opinions produces a starkly different—and more accurate—result than asking them to assign haphazard scores on specific attributes. Given the Idiosyncratic Rater Effect, we need to shift the kinds of questions we ask so that we stop spawning more bad people data in our organizations. Over time, these data can be aggregated across the organization, quarter by quarter, so that the organization can have better real-time information about what to do with each team member.
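To picture how such responses might be rolled up, here is a small hypothetical sketch; the question IDs, the 1-to-5 scale, and the field names are my own stand-ins, not the author’s actual survey instrument:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical quarterly responses from a team leader about each team member.
# Question IDs and the 1-5 scale are illustrative assumptions.
responses = [
    # (quarter, team_member, question_id, score)
    ("2015-Q1", "alex", "always_want_on_my_team", 5),
    ("2015-Q1", "alex", "would_award_top_pay_increase", 4),
    ("2015-Q2", "alex", "always_want_on_my_team", 5),
    ("2015-Q2", "alex", "would_award_top_pay_increase", 5),
]

# Group scores per (team member, quarter) so trends emerge quarter by quarter.
by_member_quarter = defaultdict(list)
for quarter, member, _question, score in responses:
    by_member_quarter[(member, quarter)].append(score)

for (member, quarter), scores in sorted(by_member_quarter.items()):
    print(f"{member} {quarter}: mean={mean(scores):.2f} (n={len(scores)})")
```

The point of the design is the cadence, not the math: four or more lightweight snapshots a year give the organization a moving picture rather than a single year-end verdict.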

As talk of performance management pervades the mainstream press and a growing crop of marquee brands denounces the traditional performance review, there is certainly a hunger to bring the way we evaluate talent into the real world of how we work today. This means many of our comfortable rituals—the year-end performance review, the nine-box grid, the consensus meeting, our use of 360s—will be forever changed. For those of us who want HR to be known as a purveyor of good data—data on which you can actually run a business—these changes cannot come soon enough.

 

About the Author

Marcus Buckingham is founder and CEO of TMBC. Marcus first conquered the bestseller lists in 1999 with First, Break All the Rules. While the title may imply an iconoclastic streak, his continuing plea for managers to break with tradition has nothing to do with rebellion; instead, he argues, rules must be broken and discarded because they stifle the originality and uniqueness—the strengths—that can enable all of us to achieve our highest performance. His latest book, StandOut, has launched not just a new strengths assessment but an entire productivity platform based on a new research methodology to reveal your top two “strength roles”—your areas of comparative advantage. Marcus has worked with the world’s most prestigious companies, including Facebook, Toyota, Coca-Cola, Wells Fargo, Microsoft, and Disney, to name just a few. His compelling message has also drawn attention from numerous media outlets. He has appeared on “The Oprah Winfrey Show,” “Larry King Live,” “The Today Show,” “Good Morning America,” and “The View,” and has been profiled in The New York Times, The Wall Street Journal, USA Today, Fortune, Fast Company, and Harvard Business Review.
