June 2016
Newsletter Article

How to Analyze Training Evaluation Data

In the TD at Work issue “The Four Levels—An Update” (ATD Press, 2015), we present a new model for creating an effective training evaluation plan for any program so that you can show the organizational value of your work.

By employing the ideas presented in the issue, you will gather a robust data set. The next step is to analyze it and take appropriate action. If you are following the recommendations, you already know not to wait until the program is complete to gather data, and then wait longer still to analyze it and see what happened. Instead, gather and analyze data along the way so that, rather than merely measuring what happened, you can influence what happens and maximize results.

Three Key Data Analysis Questions

To maximize program results, ask these three key questions as you analyze the data:

  1. Does . . . meet expectations?
  2. If not, why not?
  3. If so, why? 

To analyze Level 3 data, a key question is, “Does the level of on-the-job application of the new skills meet expectations?” A key assumption is that expectations were agreed upon before the start of the initiative so that there is a basis for evaluating the findings. For mission-critical programs, create a four-levels evaluation plan during the program design and development phase. Present the plan to stakeholders, discuss both your expectations and theirs, document what each party finds acceptable, and use this information when the data analysis begins.

As preliminary program results begin to come in, a key question could be, “Does the level of customer satisfaction meet expectations?” If so, see if the data indicate the contributors to success, because there are typically several. Document them as possible organizational best practices or items to propagate. If you are not sure what caused the success, this is a good time to conduct some interviews. If the data show that the level of customer satisfaction is not acceptable, find out what has gone wrong and put together an intervention plan. Keep in mind that the majority of the causes will be issues in the on-the-job application environment. Timely, proactive data analysis and response maximize program outcomes because issues are surfaced and addressed while there is still time to fix them.
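
To make these questions concrete, here is a minimal sketch, in Python, of how they might drive a first pass over in-flight program data. All metric names, values, and thresholds are hypothetical stand-ins for whatever expectations you documented with your stakeholders.

    # Minimal sketch: check in-flight program metrics against the
    # expectations agreed upon with stakeholders before the program began.
    # All metric names, values, and thresholds are hypothetical.
    expectations = {
        "on_the_job_application_rate": 0.80,  # Level 3 expectation
        "customer_satisfaction": 0.85,        # a preliminary program result
    }
    observed = {
        "on_the_job_application_rate": 0.72,
        "customer_satisfaction": 0.88,
    }

    for metric, expected in expectations.items():
        actual = observed[metric]
        if actual >= expected:
            # Question 3: if so, why? Identify and document contributors
            # to success as possible organizational best practices.
            print(f"{metric}: {actual:.0%} meets the {expected:.0%} expectation; "
                  "document the contributors to success.")
        else:
            # Question 2: if not, why not? Interview stakeholders and put
            # together an intervention plan while there is time to fix it.
            print(f"{metric}: {actual:.0%} misses the {expected:.0%} expectation; "
                  "investigate causes and plan an intervention.")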

Data analysis resources should be focused on Levels 3 and 4 data, much as overall program resources are allocated to favor these more important levels. Streamline analysis at Levels 1 and 2 by having trainers conduct it formatively, as they teach the program. For example, they can mentally assess whether the level of interaction during the program meets their expectations. If not, they can conduct a pulse check, in which they stop teaching momentarily and ask the class open-ended questions to determine whether something is inhibiting participation. If the level of interaction is good, the trainer can note in a post-program evaluation form which techniques or program activities seemed particularly successful. The training department can review these Levels 1 and 2 findings to improve the quality of future programs. As an added bonus, this analysis requires little or no time from training participants and business stakeholders.

Expectation Standards Are Unique for Each Program or Organization

There is no universal standard for program expectations at any of the levels. Expectations are specific to each program and organization. For instance, the compliance standard for airline pilots following safety procedures is probably 100 percent. A shipping company may consider a 95 percent on-time shipment record acceptable. A call center supervisor may be satisfied if 85 percent of callers rate their experience positively. Organizations that wish to benchmark as a way of knowing whether their performance is acceptable can consider internal benchmarking: competing against their own past performance for continual improvement.
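
As an illustration of internal benchmarking, here is a minimal sketch in Python in which each program is held to its own standard and also compared against its own prior period. The program names, standards, and figures are hypothetical, loosely echoing the examples above.

    # Minimal sketch of internal benchmarking: each program is measured
    # against its own standard and its own prior period.
    # All program names, standards, and figures are hypothetical.
    program_standards = {
        "pilot_safety_compliance": 1.00,   # 100 percent; no tolerance for misses
        "on_time_shipment_rate": 0.95,
        "positive_caller_ratings": 0.85,
    }
    prior_period = {
        "pilot_safety_compliance": 1.00,
        "on_time_shipment_rate": 0.93,
        "positive_caller_ratings": 0.82,
    }
    current_period = {
        "pilot_safety_compliance": 1.00,
        "on_time_shipment_rate": 0.94,
        "positive_caller_ratings": 0.86,
    }

    for program, standard in program_standards.items():
        now, before = current_period[program], prior_period[program]
        verdict = "meets" if now >= standard else "misses"
        trend = "improved" if now > before else "did not improve"
        print(f"{program}: {now:.0%} {verdict} its {standard:.0%} standard "
              f"and {trend} versus last period ({before:.0%}).")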


How to Share Your Story of Value

By now, you likely understand that not all training evaluation data are of equal value, particularly to different audiences. In addition to considering where to spend training evaluation resources, consider with whom the resulting data will be shared. 

Every group that uses training evaluation data should see all results in a high-level summary. However, each group will find different results most interesting. Limit detailed reports to the information that will be most useful and compelling to each of your key stakeholder groups, along with representative decisions they can make using that information.

As you move farther up the corporate ladder, move up the Kirkpatrick levels as well in terms of what type of information is appropriate and most meaningful to emphasize. Focusing your presentation of data in this way will show both your sensitivity to limited resources and your business acumen.

For example, an executive report might include an aggregate of participant satisfaction scores and a few representative testimonials (Level 1), average retrospective pretest and post-test score comparisons (Level 2), a scorecard of implementation and support findings during the prior 90 days (Level 3), and a more detailed discussion of how the training has supported key organizational outcomes and contributed to the bottom line or mission accomplishment (Level 4). 

Conversely, the information you bring to an internal training department meeting would likely include the raw participant evaluation data and all comments, training presenter input related to the program design, and feedback from participants and managers about the practicality of the content for their job responsibilities. 
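
Taken together, these two examples suggest a simple pattern: maintain one shared set of findings and filter it per audience. Here is a minimal sketch in Python; the audience names, report fields, and figures are hypothetical.

    # Minimal sketch: one shared set of evaluation findings, filtered so
    # that each audience sees the detail most useful to it.
    # Audience names, fields, and figures are hypothetical.
    findings = {
        "satisfaction_summary": "Aggregate participant satisfaction: 4.4 of 5",  # Level 1
        "raw_comments": ["Great pacing.", "More practice time, please."],        # Level 1 detail
        "pre_post_scores": "Average scores rose from 62% to 87%",                # Level 2
        "application_scorecard": "78% applying key skills at 90 days",           # Level 3
        "business_outcomes": "Repeat-purchase rate up 6 points since launch",    # Level 4
    }
    audience_fields = {
        "executives": ["satisfaction_summary", "pre_post_scores",
                       "application_scorecard", "business_outcomes"],
        "training_department": ["satisfaction_summary", "raw_comments",
                                "pre_post_scores"],
    }

    def build_report(audience):
        # Return only the findings designated for the given audience.
        return {field: findings[field] for field in audience_fields[audience]}

    print(build_report("executives"))
    print(build_report("training_department"))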

Refer to the “Results Most Important to Each Key Group” sidebar for a guide showing the results that are generally of most interest to each key group involved in training evaluation and the organization it serves.

For in-person presentations, it is very powerful to bring a business partner who has personally benefited from applying what was learned in training or who manages a team that has accomplished results in part due to implementing new learning. This testimonial supporting the data provides credibility and a human connection. 

Editor’s Note: This article is excerpted from the TD at Work issue “The Four Levels—An Update” (ATD Press, 2015). This issue of TD at Work will help you create an effective training evaluation plan for any program to ensure that your valuable, limited resources are dedicated to the programs that will create the most impact.

About the Authors
James Kirkpatrick is a thought leader and change driver in training evaluation and the creator of the New World Kirkpatrick Model. Using his 15 years of experience in the corporate world, including eight years as a corporate training manager, he trains and consults for corporate, government, military, and humanitarian organizations around the world. He is passionate about assisting learning professionals in redefining themselves as strategic business partners to become a viable force in the workplace. His latest book, co-authored with Wendy Kirkpatrick, is Kirkpatrick’s Four Levels of Training Evaluation (ATD Press).
Wendy Kirkpatrick is a global driving force in the use and implementation of the Kirkpatrick Model, leading companies to measurable success through training and evaluation. She is a recipient of the 2013 Emerging Training Leaders Award from Training magazine. Together, Jim and Wendy are co-owners of Kirkpatrick Partners.