August 2013
TD Magazine

Prepare for Impact

Thursday, August 8, 2013

Overcome the conundrum of measuring interpersonal skills training, and demonstrate its positive impact on your organization.

By Bud Benscoter
Wikipedia defines "conundrum" as "a logical postulation that evades resolution, an intricate and difficult problem."

So what's the problem we're trying to resolve? It's this: we promote the importance of interpersonal skills (IPS) in the care and feeding of our leaders, yet at the same time we fail to discover whether IPS training has any impact on our organizations.

For this article, "interpersonal skills" includes such topics as communication, decision making, emotional intelligence, and negotiation—what many people today unfortunately label "soft skills." They're "soft" because we view them as either innate or open to subjective definitions. Although we give lip service to their importance, we've generally neglected to evaluate their impact on our organizations, which only lowers their status in the eyes of decision makers.

Fortunately, the impact of IPS training can be measured efficiently and at little or no cost, and the process begins long before training is ever delivered.

IPS training is everywhere

Do a Google search on "interpersonal skills training" and it will return more than 12 million hits. The area that has consumed a large portion of my own professional career is communication skills training in graduate schools and business environments. I find it interesting that when leaders are asked which interpersonal skill is most critical to success in business, they rank communication at or near the top. Many validated and popular leadership competency models contain several "soft" competencies, including communication and decision making.

Despite the emphasis put on the importance of IPS, ASTD's 2012 State of the Industry Report notes that IPS training ranks among the bottom three content areas for the Fortune Global 500 grouping and is absent from the top three content areas in its remaining two industry classifications (Consolidated and BEST). In the same report, "processes, procedures, [and] business practices" are among the top three content areas for all industry classifications.

If IPS are (or should be) important, why aren't they at the top of organizations' request lists? In the absence of impact evaluation, we're left with "satisfaction" and "learning" measures that don't do much to create support and confidence in the value of this training. Even for many training specialists, measuring IPS in others is seen as too complicated or too costly (business processes, procedures, and practices are perceived to be much easier to quantify and, therefore, measure).

How should the training profession respond?

Our profession hasn't done a good job of measuring the impact of most of its programs. In The Training Measurement Book, Josh Bersin makes the point that we've saddled ourselves with an evaluation model (the four-level Kirkpatrick model) that doesn't adequately reflect the needs of today's business professionals. The fact that we've followed this model for so long that even our line-of-business people know what we mean by Level 1 or Level 2 is a case in point.

Although we can debate the correlation or cause-and-effect relationship among the four levels of this model, or even the five-level model put forward by Jack Phillips, it remains a fact that organizations rarely go beyond measuring participant satisfaction (Level 1) and learning (Level 2) within the training setting. In addition to time and cost, we may be avoiding the higher levels because they're too risky (we may not like what we find) or no one's asking us to do it in the first place.

And while it's still impractical to measure the return on investment of anything less than a strategic and costly training or performance improvement initiative, other approaches to moving beyond Level 3 (transfer of skills to the job setting) of the Kirkpatrick model have entered the picture. This is a function of a slow but steady march toward greater accountability for results.

Alignment of the training function with business strategy is a hot topic precisely because many in the C-suite are challenging the value that training organizations provide. If we're going to achieve alignment, however, we need to demonstrate the value of our offerings beyond learner satisfaction and knowledge acquisition. That's where impact evaluation enters the picture.

What follows are suggestions for how we can demonstrate the impact of our programs, including the IPS training that has generated so much skepticism in the past. Success depends on what we do before the evaluation and on choosing an approach that meets our resource, budget, and time constraints. In other words, we have to prepare for impact.

Set up training for impact

Let's tackle the first challenge: linking IPS training with business outcomes. Begin by taking a hard look at how we (and many of our clients) view IPS. We've helped to dig our own hole by treating IPS training as content rather than context. Read some IPS training descriptions and ask yourself:

  • Does this course describe topics rather than outcomes?
  • Are the objectives (if there are any) clear about what the learner will be able to do in the context of a work environment?

For example, does a course on decision making list topics such as "what is decision making?"; "how we make decisions"; or "defining the decision-making process"? Sounds pretty academic, doesn't it?

Here's another question: Do the objectives describe what participants will understand or learn rather than what they will do? Although this alone doesn't necessarily signal a content-driven program, it means the prospective attendee needs to take a deeper dive before deciding whether to purchase or attend.

Here's a personal example. Several years ago I was working with an automotive client's training developers on ways to increase enrollment in their sales training courses. One program was titled "Dealership Sales Training Level 3."

When I asked about the focus of the program, I was told that it mainly dealt with techniques for welcoming a prospective buyer who enters the dealership for the first time. After some discussion, we changed the name of the course (and made some content modifications) to "How to Welcome a Prospective Buyer" because it reflected the skills taught in the course.

The original title was academic and said nothing about the course's purpose. The new title reflects on-the-job context and, thanks to the design of the program, is more likely to encourage salespeople to enroll.

The process of evaluating impact begins in the needs assessment and design phases of a program. The two approaches to evaluating impact that I'll examine later take this fact into account.

The message, then, is to design IPS training programs with the initial focus on context (what the work environment requires of the participants) rather than content (what topics should be in my training program).

In addition to the questions asked earlier, you should also ask:

  • What problem, challenge, or change am I trying to address?
  • What roles do IPS play in addressing the problem? (Be careful to avoid theoretical language; use actionable and practical descriptions.)
  • How will I know if I've succeeded (that is to say, what's my model for success)?

Doing this makes it more likely that your program will have a positive business impact. But how can we measure business impact effectively without breaking the budget?

Two approaches to evaluating impact

Robert Brinkerhoff's Success Case Method remains a quick and simple approach to determining the factors that facilitate (called "best case") and impede (called "worst case") the impact of newly acquired skills and knowledge on the job. As Brinkerhoff points out, the model is flexible when it comes to time required and budget.

Dave Basarab applied some of the principles from the Success Case Method in developing an approach he calls "predictive evaluation." This approach requires management and training participants to collaborate in establishing their intentions for a training program, and it provides a method for measuring the adoption of the new skills and knowledge.

Even more complex approaches to measuring impact have made their way onto the stage. Firms such as Capital Analytics are known for applying sophisticated statistical procedures to judge the effects of performance initiatives. By contrast, the Success Case Method and predictive evaluation represent relatively quick and effective ways of introducing impact evaluation to your organization.

The strength of both models is that they require the evaluator to first establish an operational model or picture of success—if my program is successful, what will the impact on my organization be and how will I measure it? By completing this step before rolling out the learning solution, you're better able to design your approach to achieve maximum impact. It also makes the data collection and analysis phases of your evaluation process more efficient since you know what you're looking for and what to measure.

My intention is not to promote one approach over the other. Your choice should be based on many factors, including:

  • what questions you're trying to answer
  • your client's support for evaluation
  • your level of evaluation expertise
  • the resources you have available to focus on evaluation
  • your evaluation budget
  • time constraints.

The key takeaway is that you're better prepared to evaluate the impact of your IPS training programs than you may think. Neither model is difficult to implement, particularly considering the return you'll generate in credibility and alignment with your organization's business plan and strategy.

Prepare for change

If you've decided to proceed with an impact evaluation, be prepared to move out of your comfort zone. Not only will the focus on measuring impact require additional skills and planning, but you'll also need to prepare your client organization for the process.

First you'll have to shed the traditional consulting arrangements that Peter Block described in Flawless Consulting as "pair of hands" (I don't have time to do this, so you do it for me) and "expert" (I don't know how to do this; you're the expert, so do it for me), and replace them with a collaborative working arrangement.

Also be ready to discover that, despite your best efforts, the skills you taught either weren't used on the job or, if they were, had limited impact on business measures beyond a few isolated examples. If that happens, it's back to the drawing board for you and your client.

The point, of course, is that effectively measuring the impact of a program requires a great deal of thought and preparation on your part, as well as a managed change effort with your client. Even though clients may see limited value in IPS training, they'll probably admit that these skills are hallmarks of strong leaders. It's time to put these beliefs into practice by measuring the impact that these skills have on the success of our business and its leaders.

About the Author

Bud Benscoter is the owner of GMB Performance Group, a company specializing in organizational development, human performance improvement, and training. He has managed departments in the computer and financial services industries. His consulting clients include Siemens Corporation, IBM, Toyota Motor Sales, Constellation Energy, Campbell Soup, Pepperidge Farm, AstraZeneca, Wyeth Pharmaceuticals, Merck, The Bentley Institute, and Godiva Chocolatier. He has done extensive training and coaching in presentation skills and communication in the private and public sectors, in nonprofits, and in MBA programs at Wharton, Duke University, and Penn State. He received the Excellence in Teaching Award from Penn State's Smeal College of Management in 1997 and 2005. Bud was co-editor of The Encyclopedia of Human Resource Management, published in 2012 by Pfeiffer, and is currently co-authoring a graduate textbook on performance consulting to be published in 2013. He is a member of the Triangle South Workforce Development Board of Directors, past president of the Great Valley (PA) and Carolinas chapters of the International Society for Performance Improvement (ISPI), and a member of the American Society for Training and Development (ASTD). He has published numerous articles and is a frequent speaker at regional and international conferences.
