
Where Are Organizations Focusing Their Measurement Efforts?

Recognizing the need for more effective evaluation efforts that drive the success of learning programs and the business overall, Evaluating Learning: Getting to Measurements That Matter takes a closer look at the state of evaluation in organizations today and at what distinguishes organizations with successful evaluation efforts. In particular, the study explores where organizations are focusing their evaluation efforts.

Evaluating Learning categorizes evaluation efforts using the most widely referenced framework: the Kirkpatrick and Phillips evaluation models. Donald Kirkpatrick, author of Evaluating Training Programs: The Four Levels, developed Levels 1 through 4. Level 1 focuses simply on the reactions of employees participating in a program. Level 2 gauges the employees’ acquisition of skills and knowledge. Level 3 involves follow-up studies to find out whether employees are actually changing their behavior or applying the skills gained in training. Level 4 measures the actual business (or mission, in the case of mission-driven organizations) impact or results. Level 5, which Jack Phillips added to Kirkpatrick’s original model, determines return on investment (ROI). Although Levels 4 and 5 are associated with increased evaluation effectiveness, only a minority of organizations use them.
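For reference, the Phillips ROI Methodology expresses ROI as net program benefits divided by fully loaded program costs. As a purely hypothetical illustration (the figures below are not from the study):

\[
\text{ROI} = \frac{\text{Net Program Benefits}}{\text{Program Costs}} \times 100\%
\]

A program with fully loaded costs of \$50,000 that produces \$80,000 in monetary benefits would yield an ROI of \((80{,}000 - 50{,}000)/50{,}000 \times 100\% = 60\%\).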

Among the organizations that use higher evaluation levels, the research found that typically only a few learning programs out of the entire portfolio of offerings are measured to impact or ROI. Not surprisingly, programs that cover certain content areas are more likely to be evaluated than others. Additionally, programs delivered using the traditional live classroom are more likely to be evaluated at every level than those using technology.

Content Areas and Evaluation

At least 20 percent of respondents’ organizations evaluate leadership development, sales, and technical programs to Level 4. On the other hand, only 11 percent and 13 percent reported using Level 4 for mandatory and compliance programs and coaching programs, respectively. The differences between the former group (leadership development, sales, and technical) and the latter group (mandatory and compliance, and coaching) in the use of Level 4 are statistically significant (mandatory and compliance and coaching also had the lowest rates for Level 5).

The low use of Levels 4 and 5 for mandatory and compliance training could be due to organizations viewing it as a “check the box” activity and simply focusing on meeting regulatory requirements. In the case of coaching, as with many soft skills, identifying appropriate business metrics and isolating the impact of individual learning programs can be particularly challenging. For example, individuals participating in a coaching program may have better promotion rates, but it would be difficult to isolate coaching’s impact from the influence of these individuals’ managers, other development opportunities, and differences in skills. In addition, participation in soft skill offerings may be voluntary, so self-selection issues may arise.

Although sales falls in the group that is more likely to be evaluated using the higher levels (perhaps due to the availability of business metrics related to sales performance, such as sales volume or percent of quota), it is the least likely of all content areas to be evaluated using Level 1. Only 40 percent of respondents reported that their organizations evaluated the reaction of participants in sales training, compared with 55 percent for mandatory and compliance, which had the next lowest rate. The difference is statistically significant.

Figure 1: Percent of Respondents Evaluating Programs by Content Area

Content Area               Level 1   Level 2   Level 3   Level 4   Level 5
Leadership Development       67%       57%       42%       21%        7%
Sales                        40%       34%       26%       20%        8%
Mandatory and Compliance     55%       54%       20%       11%        3%
Technical                    64%       57%       36%       23%        5%
Coaching                     59%       45%       35%       13%        3%

Delivery Methods and Evaluation

The study also asked participants to report the percentage of all programs delivered in the traditional live instructor-led classroom and through technology-based or electronic methods that were evaluated at each of the levels. Technology-based learning can be delivered through online virtual classrooms and self-paced online modules, as well as other computer-based methods, mobile devices, and noncomputer technology (such as a DVD). 

As Figure 2 illustrates, only a small percentage of either type of program is ever evaluated using Level 4 or 5. However, programs delivered in the traditional instructor-led classroom are more likely to be evaluated at all five levels than those delivered using technology. On average, organizations evaluate 80 percent of live classroom programs using Level 1, but less than 60 percent of the technology-based portfolio is subject to Level 1. At Level 4, evaluation rates fall to 22 percent and 13 percent for live classroom and technology-based programs, respectively. On average, only a small share (less than 8 percent) of programs of either type is subject to ROI analyses. The differences between live classroom and technology-based programs are statistically significant for all levels except Level 5.

Figure 2: Percent of Programs Evaluated Using Each of the Five Levels by Delivery Method

Delivery Method              Level 1   Level 2   Level 3   Level 4   Level 5
Traditional Live Classroom     80%       59%       33%       22%        7%
Technology-Based               58%       51%       18%       13%        6%

Live classroom programs may be more likely to be evaluated because they have been in existence longer, or because it is easier to survey or interview participants. Another reason for this difference may be the high ongoing cost of traditional, live instructor-led classroom programs. Continuing to offer live instructor-led programs to new cohorts may be pricey because of the cost of the instructor’s time (and, in some cases, the cost of classroom facilities and travel). As a result, organizations may have greater incentives to evaluate such a program’s effectiveness. By contrast, the cost of continuing to offer an e-learning course or mobile learning application that has already been developed can be quite low.
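To make that cost incentive concrete with purely hypothetical numbers (not drawn from the study): if each additional live cohort costs roughly

\[
c_{\text{live}} \approx \text{instructor time} + \text{facilities} + \text{travel},
\]

then running \(n\) more cohorts adds about \(n \times c_{\text{live}}\) to the budget, whereas the marginal cost of enrolling another cohort in an already-built e-learning course is close to zero. The larger the recurring outlay, the stronger the organization’s incentive to confirm that the program is actually paying off.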

Want to Learn More?

Check out Evaluating Learning: Getting to Measurements That Matter for a complete rundown of how organizations are evaluating their learning efforts. Other sections of the report cover:

  • How evaluation is conducted—approaches and tools.
  • Evaluation’s value and effective evaluation.
  • Barriers, funding issues, and staff skills.

 

About the Author
ATD Research tracks trends, informs decisions, and connects research to performance. By providing comprehensive data and insightful analysis, ATD Research Reports help business leaders and workplace learning and performance professionals understand and more effectively respond to today's fast-paced workplace learning and development industry. Our research reports offer an empirical foundation for today's data-driven decision-makers, containing both quantitative and qualitative analysis about organizational learning, human capital management, training, and performance.