
L&D’s Struggle With Learning Evaluation

Friday, December 6, 2019

Four years have passed since ATD published its 2015 research report on learning measurement and evaluation, Evaluating Learning: Getting to Measurements That Matter. To provide an up-to-date look at how talent development professionals are evaluating learning programs, ATD’s new report, Effective Evaluation: Measuring Learning Programs for Success, publishes this fall.

About the Research

Earlier this year, ATD surveyed 779 talent development professionals about their organizations’ learning evaluation efforts. Half of the respondents were managers, directors, or executives. Forty-six percent were from midsize organizations employing between 500 and 9,999 people, while 31 percent were from large organizations with 10,000 or more employees. The remaining 23 percent of respondents were from organizations employing fewer than 500 individuals.

Use of Evaluation Levels

The report categorizes evaluation levels using the Kirkpatrick and Phillips models. The report found that use of the five levels has not changed significantly since 2015.

The most widely used levels were Levels 1 and 2. Level 1 measures learners’ reactions (such as through smile sheets) and Level 2 measures the skills and knowledge learners have acquired, which can be done through knowledge quizzes administered at the end of a class or e-learning module. About eight in 10 organizations used these levels to measure learning.

Just more than half of the organizations (54 percent) measured learning at Level 3, which focuses on the extent to which learners are using their new skills on the job. This is typically measured through follow-up surveys or on-the-job observation.

Levels 4 and 5 were less commonly used. Level 4, which measures effects on the organization’s business or mission (such as sales quotas met or customer satisfaction ratings), was used by 38 percent of organizations. Just 16 percent of organizations used Level 5, which measures financial results (return on investment, or ROI). It is not surprising that fewer organizations use these levels; talent development professionals may not have access to the data they need to evaluate learning at these higher levels.
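For readers unfamiliar with Level 5, the standard Phillips ROI calculation expresses a program’s net monetized benefits as a percentage of its costs. The sketch below illustrates the arithmetic; the dollar figures are hypothetical and are not drawn from the report.

```python
# Illustrative sketch of a Phillips Level 5 ROI calculation.
# The benefit and cost figures below are made up for illustration,
# not taken from the ATD report.
def roi_percent(program_benefits: float, program_costs: float) -> float:
    """ROI (%) = (net program benefits / program costs) * 100."""
    return (program_benefits - program_costs) / program_costs * 100

# A program costing $50,000 that yields $80,000 in monetized benefits:
print(roi_percent(80_000, 50_000))  # -> 60.0
```

The hard part in practice is not this formula but producing defensible numbers for `program_benefits`, which is exactly the data-access barrier the survey respondents describe.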


Barriers and Effectiveness

Organizations continue to struggle to use evaluation to meet learning and business goals. Just half of respondents (50 percent) said their learning evaluation efforts helped them meet their organization’s learning goals, only a slight increase from 2015, when 44 percent of respondents said the same. The 2019 report found that even fewer respondents (40 percent) said their learning evaluation efforts helped them meet their organization’s business goals. (In 2015, this figure was 36 percent.)

It is understandable that organizations struggle to meet these goals with their learning evaluation efforts given the barriers they face. The top barrier, reported by 41 percent of respondents, was difficulty isolating a learning program’s influence on results. A similar number of respondents (39 percent) reported that their TD staff did not have access to the data needed for higher-level evaluations, while one-third of respondents (33 percent) said it costs too much to conduct higher-level evaluations.

Recommendations

Considering these and other challenges, the report features interviews with talent development professionals about how to improve evaluation efforts. Based on their insights and key findings from the report, some recommendations are offered below.

Use Control Groups or Estimates to Isolate the Effects of Training
Control groups can help isolate these effects while taking advantage of the staggered schedule many training programs have. “A little planning can create control groups,” says John Coné, principal at the Eleventh Hour Group. “If one-third of the people who work in the same place for the same manager doing the same job with the same resources get training this month and another third next month, you can create a month’s worth of comparative data that controls for pretty much every factor but the training.” If control groups are not an option, TD professionals can interview individuals (such as managers or participants) to gather estimates of the difference a training has made to on-the-job behaviors.
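Coné’s staggered-rollout idea can be reduced to a simple comparison: measure the cohort trained this month against the comparable cohort that has not yet been trained, and treat the gap as an estimate of the training effect. The sketch below uses entirely hypothetical performance scores to show the mechanics.

```python
# Hedged sketch of the staggered-rollout comparison described above:
# one cohort is trained this month, a comparable cohort next month.
# All scores are hypothetical and are for illustration only.
from statistics import mean

trained = [82, 78, 88, 91, 76]          # post-training performance scores
not_yet_trained = [71, 69, 80, 74, 72]  # comparable cohort awaiting training

difference = mean(trained) - mean(not_yet_trained)
print(f"Estimated training effect: {difference:.1f} points")  # -> 9.8 points
```

Because both cohorts share the same location, manager, job, and resources, the mean difference controls for most factors other than the training itself, which is what makes this an inexpensive approximation of a true control group.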

Identify Programs That Would Benefit From Higher-Level Evaluation
Results showed that Levels 4 and 5 were not widely used even though they were significantly associated with greater learning and business effectiveness. If talent development professionals are interested in evaluating at higher levels and are working with limited evaluation budgets, they should keep business goals in mind.

“Organizations should determine which training programs contribute most to achieving business goals and allocate evaluation resources to them,” says Kristopher Newbauer, chief human resources officer and head of global people and talent at Rotary International.

Learn More

The full report, sponsored by Profiling Online, is available for purchase. ATD members can download the whitepaper for free.

To learn more about the report, join us for a free webcast (for members and nonmembers) on November 25 at 2 p.m. ET.

About the Author

Shauna Robinson is a research analyst at the Association for Talent Development (ATD), where she prepares surveys, analyzes data, and writes research reports and short case studies. Her previous positions at ATD include human capital specialist and communities of practice coordinator.

Prior to working for ATD, Shauna was a senior editorial assistant at Wiley in San Francisco, California. Shauna received a bachelor’s degree in English from UC Berkeley, and she is currently attending the University of Connecticut remotely to obtain a master's degree in survey research.

2 Comments
Thanks for sharing the results of the ATD national survey on evaluation and measurement, Shauna. One trend is very familiar - that L&D professionals tend to be relatively strong in measuring reaction and learning (Kirkpatrick levels 1 and 2), but measurement trails off at levels 3 (behavior) and 4 (results). Business leaders, however, see much greater value in the higher levels. While measuring on-job application and results can be challenging, we do have the tools for "raising the bar"!
Absolutely, Tom! That's exactly why it was so important to include examples and advice from experts like you on how organizations can overcome the barriers to evaluating at higher levels. Hopefully in the next report update we do, we'll find that organizations are struggling less with this!