
Making Measurement-Based Decisions

As you gather formative and summative data, ask and get answers to these three key questions: 

  1. Does . . . meet expectations?
  2. If not, why not?
  3. If so, why?

Does . . . Meet Expectations?

The first question to ask is, “Does the level of . . . meet expectations?” This question can and should be asked when evaluating the data collected at each of the four levels. Here are some sample questions: 

Level 1: Reaction

  • Does participant engagement during the program meet expectations?
  • Does relevance of the program to participant job responsibilities meet expectations?
  • Does participant satisfaction with the program meet expectations?

Level 2: Learning

  • Does participant knowledge obtained/demonstrated during the program meet expectations?
  • Does participant skill demonstrated during the program meet expectations?
  • Does participant attitude about performing new skills on the job meet expectations?
  • Does participant confidence to apply knowledge and skills on the job meet expectations?
  • Does participant commitment to apply knowledge and skills on the job meet expectations?

Level 3: Behavior

  • Does performance of (insert critical behavior) on the job meet expectations?
  • Does level of on-the-job learning meet expectations?
  • Does the quality and amount of performance monitoring meet expectations?
  • Does reinforcement of critical behaviors meet expectations?
  • Does encouragement to perform critical behaviors meet expectations?
  • Does the alignment of reward systems and performance of critical behaviors meet expectations?

Level 4: Results

  • Does movement of (insert leading indicator) meet expectations?
  • Does movement of (insert desired outcomes) meet expectations?

For Levels 1 and 2, the range of acceptable expectations is largely determined by the training department. For Level 3, critical behaviors are, by definition, expected to be performed consistently. Expectations for program outcomes in the form of leading indicators, shown in Level 4, should have been defined during the program planning phase, and they are unique to each program. 

Organizations that wish to benchmark as a way to gauge whether their performance is acceptable can consider internal benchmarking: competing against themselves for continual improvement. It is also possible that specific target metrics cannot realistically be determined up front. In that case, it is prudent to make an educated guess and then use pilot data to fine-tune expectations. 
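
To make that concrete, here is a minimal Python sketch of one way to fine-tune an expectation with pilot data. The median-based policy and all numbers are illustrative assumptions, not part of the Kirkpatrick Model.

    # Fine-tune an expectation with pilot data: start from an educated
    # guess, then adjust the target toward what the pilot group actually
    # achieved. All numbers here are invented for illustration.
    from statistics import median

    initial_guess = 90.0                          # pre-pilot educated guess (% passing)
    pilot_scores = [72, 78, 81, 84, 86, 88, 90]   # pilot group post-test scores, %

    # One simple policy: set the target at the pilot median, so it is
    # realistic but still stretches half of the group.
    tuned_target = median(pilot_scores)
    print(f"Initial guess: {initial_guess}%, tuned target: {tuned_target}%")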

There is no need to make this complicated. For each level, you are simply asking yourself if the data you collected indicates that you have or have not met the requirements for each component.
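
For illustration, here is a minimal Python sketch of that check, with hypothetical components and thresholds standing in for whatever expectations your team has set.

    # Compare collected results against the expectation set for each
    # component at each level. Metric names and thresholds are
    # hypothetical examples, not prescribed values.
    expectations = {
        "L1 engagement (1-5 scale)": 4.0,
        "L2 skill demonstration (% passing)": 90.0,
        "L3 critical behavior performed (% of checks)": 85.0,
        "L4 leading indicator (repeat sales, %)": 75.0,
    }

    collected = {
        "L1 engagement (1-5 scale)": 4.3,
        "L2 skill demonstration (% passing)": 82.0,
        "L3 critical behavior performed (% of checks)": 88.0,
        "L4 leading indicator (repeat sales, %)": 71.0,
    }

    for component, target in expectations.items():
        actual = collected[component]
        verdict = "meets expectations" if actual >= target else "does not -- ask why not"
        print(f"{component}: {actual} vs. {target} -> {verdict}")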

If Not, Why Not?

As the program progresses and you are analyzing related data, it is likely that for at least one component, the outcomes will not meet expectations. If they do not, you need to identify and correct the issue before the targeted results are jeopardized. 

Data analysis resources should be focused on Levels 3 and 4, much the same way as overall program resources should be allocated to favor these more important levels. However, this process also works well for Levels 1 and 2.

Data Analysis During a Training Program

Streamline analysis at Levels 1 and 2 by having trainers conduct it formatively, as they teach the program. For example, they can mentally assess whether the level of interaction during the program meets their expectations, based on their experience. If it doesn’t, they can conduct a pulse check, in which they stop teaching momentarily and ask the class open-ended questions to determine if something is inhibiting participation. For example, “I see some confused looks out there. What thoughts do you have?”

Data Analysis After the Training Program

After training, the data analysis process continues and becomes arguably more critical as on-the-job performance is monitored for acceptable levels. When monitoring Level 3, it is uncommon for a single variable to cause substandard performance. As a learning and performance consultant, at this stage you can help uncover barriers to performance in the workplace and participate in creating solutions. 

This underscores the importance of creating a good post-training plan; if you do not have monitoring methods lined up in advance, getting and responding to the data will be awkward, or even impossible. 

Here are a few examples of how to identify possible root causes of substandard data: 

All Levels

  • Include conditional questions in surveys and interviews. (e.g., If you rated this item 3 or below, please indicate the reason(s).)
  • Ask program instructors for their input.
  • Ask managers and supervisors of the training graduates for their observations and input.
  • Drill down into data to determine if the problem is global or isolated. (e.g., Is the issue specific to one department, geography, or job title? See the sketch after this list.)
  • Conduct training participant interviews or a focus group and ask open-ended questions.
  • When indicated, ask follow-on questions.
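
Here is a minimal Python sketch of that drill-down, asking whether low Level 1 relevance ratings are global or isolated to one department. The response data and field names are invented for illustration.

    # Group survey ratings by department to see whether a low overall
    # score is a global problem or isolated to one group. The response
    # data below is hypothetical.
    from collections import defaultdict
    from statistics import mean

    responses = [
        {"department": "Sales", "relevance": 2},
        {"department": "Sales", "relevance": 3},
        {"department": "Operations", "relevance": 5},
        {"department": "Operations", "relevance": 4},
        {"department": "Finance", "relevance": 4},
    ]

    by_department = defaultdict(list)
    for response in responses:
        by_department[response["department"]].append(response["relevance"])

    for dept, ratings in by_department.items():
        avg = mean(ratings)
        flag = "  <-- isolated problem?" if avg < 3.5 else ""
        print(f"{dept}: average relevance {avg:.1f}{flag}")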

Level 1: Reaction and Level 2: Learning

  • Integrate formative evaluation into the training program.

Level 3: Behavior

  • Observe on-the-job behavior and watch for obstructions to critical behaviors and required drivers.
  • Survey or interview training graduates and their supervisors, customers, co-workers, and/or direct reports, and ask them why they think critical behaviors and required drivers are not occurring reliably. Ask what would make them occur.

Level 4: Results

  • Survey or interview training graduates and their supervisors, customers, co-workers, and/or direct reports, and ask them why they think leading indicators and/or desired results are not moving in the right direction. Ask what behaviors or circumstances would make them move in the right direction. 

Depending upon the culture in your organization, being a learning and performance consultant may require some courage. Some stakeholders may be less than enthusiastic to hear bad news when you share that the implementation is off track, even if you have suggestions to get it back on target. Some managers or supervisors may tell you to “go back to where you belong.” 

When Wendy worked for a large corporation, a sales manager once asked her what she was doing poking around trying to get sales statistics by sales rep. This is not uncommon, and if your organization does not already have a culture of business partnership, it will likely happen to you, too. 

It is important to keep in mind that it is your job to first seek the truth through assessment and analysis. Then, you need to speak the truth about the suspected root causes and recommended interventions to remedy the situation. In the long run, truth leads to trust. 

Timely, proactive data analysis and response maximize program outcomes because issues are revealed and addressed when there is still time to fix them. There is far less value in measuring and reporting only after an initiative is over; that merely documents the program outcomes and, at best, generates ideas to enhance future initiatives.

If So, Why?

As you probe to find out whether various program outcomes are meeting expectations, sometimes the answer will be “yes.” If this is the case, study these pockets of success to see if they can be propagated, expanded, publicized, or celebrated.

Propagating Positive Findings From the Training Program

During the training program, the instructor may experience an enthusiastic, engaged participant group. If so, it might be appropriate to say, “All of a sudden the room seems to have come to life. Would you kindly share where you think that might have come from?”

The training department can review these Level 1 and Level 2 findings to continually improve the quality of future programs. As an added bonus, this analysis requires little or no time on the part of training participants and business stakeholders.

Propagating Positive Findings on the Job

Sometimes program success is linked directly to certain training graduates who are performing better than others on the job. We call these individuals Bright Lights™ because they embody success and inspire and lead the way for others. Identify these Bright Lights, capture what they do that is successful, and then determine what factors have contributed to their success. 
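
As a simple illustration, here is a Python sketch of one way to flag candidate Bright Lights from performance data: graduates whose metric stands at least one standard deviation above the group mean. The metric, names, and threshold are invented assumptions, not a prescribed method.

    # Flag graduates whose on-the-job metric stands well above the group,
    # as candidates to interview about their success factors. The data
    # and the one-standard-deviation threshold are hypothetical.
    from statistics import mean, stdev

    units_sold_per_rep = {
        "Avery": 48, "Blake": 52, "Casey": 47,
        "Devon": 71, "Emery": 50, "Finley": 69,
    }

    values = list(units_sold_per_rep.values())
    threshold = mean(values) + stdev(values)

    bright_lights = [rep for rep, n in units_sold_per_rep.items() if n >= threshold]
    print(f"Threshold: {threshold:.1f} units")
    print("Candidate Bright Lights to interview:", ", ".join(bright_lights))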

Much like with barriers to implementation, successful implementation is typically the result of multiple success factors, rather than just one. When you are trying to capture these success factors, do not rely on surveys alone. Surveys may indicate where to begin looking for success factors, but ultimately, to get a complete picture, you will need to talk to training graduates, their supervisors, and possibly their peers and customers. 

Once these success factors have been identified, you may be able to add them to a learning and performance architecture as items that tend to drive success in your organization. This architecture can then help both predict and maximize future mission-critical program outcomes.

Bottom Line

Do not wait until after a program is complete to gather data, and then wait further to analyze it and see what happened. Instead, gather and analyze data along the way so that, rather than merely measuring what happened, you can influence what happens and maximize current and future program results. 

For more advice, check out Kirkpatrick's Four Levels of Training Evaluation. Use this book to discover a comprehensive blueprint for implementing the model in a way that truly maximizes your business's results. Using these innovative concepts, principles, techniques, and case studies, you can better train people, improve the way you work, and, ultimately, help your organization meet its most crucial goals.

About the Author
James Kirkpatrick is a thought leader and change driver in training evaluation and the creator of the New World Kirkpatrick Model. Using his 15 years of experience in the corporate world, including eight years as a corporate training manager, he trains and consults for corporate, government, military, and humanitarian organizations around the world. He is passionate about assisting learning professionals in redefining themselves as strategic business partners to become a viable force in the workplace. His latest book, co-authored with Wendy Kirkpatrick, is Kirkpatrick’s Four Levels of Training Evaluation (ATD Press).
About the Author
Wendy Kirkpatrick is a global driving force of the use and implementation of the Kirkpatrick Model, leading companies to measurable success through training and evaluation. She is a recipient of the 2013 Emerging Training Leaders Award from Training magazine. Together Jim and Wendy are co-owners of Kirkpatrick Partners.