
Recommendations for Getting to Measurements That Matter

While the need for evaluation is at the forefront of many talent development professionals’ minds, organizations often struggle to launch rigorous evaluation efforts. These efforts involve the often daunting tasks of selecting research designs, collecting and managing data, conducting analyses, and developing actionable recommendations based on the results. Data points can be drawn from such sources as financial records, personnel files, customer complaints, and surveys of employees, managers, customers, and potential customers. Ideally, evaluation efforts help inform improvements in the effectiveness of learning programs and the success of the business as a whole. 

However, only 35 percent of the 199 talent development professionals surveyed for Evaluating Learning: Getting to Measurements That Matter (hereafter, the Study) reported that their organizations evaluated the business results of learning programs to any extent. Given these rates, it is not surprising that less than half of participants (44 percent) thought their evaluation efforts were helping greatly with reaching organizational learning goals, and 36 percent thought their efforts were helping to a large extent with meeting their employer’s business goals. 

Furthermore, the state of evaluation has not changed considerably since ATD, along with the Institute for Corporate Productivity (i4cp), last took its pulse in 2009. In The Value of Evaluation: Making Training Evaluations More Effective, ATD and i4cp found that 37 percent of respondents reported that their organizations evaluated the business results of any learning programs. The percentage of respondents who thought their evaluation programs helped achieve organizational learning and business goals in 2009 was almost identical to the current Study. 

Recognizing the need for effective evaluation efforts that drive the success of learning programs and the business overall, this Study delves deeper into the types of measurements organizations are taking, the programs that are being evaluated, and the specific approaches and tools being used.

Closer Look at the Participants 

ATD distributed a survey in mid-2015 to talent development professionals. The survey was designed with input from Wendy Kirkpatrick, president and founder of Kirkpatrick Partners; Jim Kirkpatrick, senior consultant at Kirkpatrick Partners; and Ken Phillips, founder and chief executive officer of Phillips Associates. These subject matter experts also helped distribute the survey. 

The findings rely on responses from 199 participants who had knowledge of their employer’s use of evaluation. More than 80 percent of these participants were at the manager, director, or executive level (Figure 1). In terms of organization size, 13 percent of participants’ organizations had fewer than 100 employees, 30 percent had 100 to 999 employees, 28 percent had 1,000 to 9,999 employees, and 29 percent had 10,000 or more. Participants represented a wide variety of industries, with manufacturing (12 percent of participants), healthcare (10 percent), education (10 percent), and finance (9 percent) being the most heavily represented.

Snapshot of Evaluation and Measurement 

Evaluation involves measuring—gathering and assessing or analyzing information—to provide feedback. Ideally, this feedback informs decisions around learning programs and the business as a whole, which then helps accomplish goals in these areas. 

In the Kirkpatrick and Phillips models, each level involves measuring and basing feedback on a different type of information. Level 1 information (such as “smile sheets” that ask participants how engaging they found an instructor) and Level 2 information (such as the results of a quiz administered at the end of a class) are typically easier to gather and connect to learning programs. On the other hand, while companies may collect Level 4 data (the business impact of learning programs, such as sales volume) and Level 5 data (return on investment), talent development professionals may have difficulty gaining access to those numbers and isolating the learning program’s impact on them.
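
As a hypothetical illustration of the full ladder, consider a single sales training course: a Level 1 evaluation might survey participants’ satisfaction at the end of the class, Level 2 might quiz them on product knowledge, Level 3 might ask managers to observe whether new selling behaviors show up on the job, Level 4 might track changes in sales volume over the following quarters, and Level 5 might compare the monetized sales gains against the full cost of the course.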


Efforts at the lower evaluation levels are commonplace. Although nearly nine out of 10 participants reported that their organizations used Level 1 to any extent, the rate drops to six in 10 for Level 3, which looks at actual behavior change. By contrast, only 35 percent measure business impact (Level 4), and a mere 15 percent calculate ROI (Level 5). Only a small number (4 percent) do not use any level of evaluation.

Ideally, evaluation efforts should provide insights that can be acted upon to inform and drive improvements in the effectiveness of learning programs and the success of the business as a whole. However, only 44 percent of talent development professionals believe that their evaluation efforts are helping meet learning goals to a major extent, and only 36 percent believe they greatly help meet business goals. 

This is not surprising given that Levels 1 and 2, which are what the vast majority of organizations measure, do not cover the actual application of skills to work activities or the business results of learning programs. Levels 4 and 5, which focus on business results and ROI, respectively, are not used in most organizations.

Review of Recommendations from the Research 

Use of each of the five levels has not changed considerably since 2009, when ATD last took the pulse of evaluation efforts. Although nearly 90 percent of participants reported that their organizations used Level 1 evaluations (which measure the reaction of participants) and about 80 percent used Level 2 evaluations (which look at learning), the rate drops to 60 percent for Level 3 evaluations (which consider actual behavior change and application of skills to work). Only slightly more than a third measure Level 4 (the business or mission impact of learning programs), and a mere 15 percent use Level 5 (ROI). It follows that more than two-thirds of the already limited funding for evaluation efforts is directed toward Levels 1 and 2.
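
For readers unfamiliar with the Level 5 calculation, the Phillips methodology expresses ROI as net program benefits over program costs:

ROI (%) = [(monetized program benefits – program costs) ÷ program costs] × 100

With hypothetical figures, a program that costs $80,000 and produces $200,000 in monetized benefits yields an ROI of ($200,000 – $80,000) ÷ $80,000 × 100, or 150 percent. As noted above, the difficulty lies less in the arithmetic than in monetizing the benefits and isolating the program’s contribution to them.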

While the use of Levels 1 and 2 is not associated with higher learning or business effectiveness, application of the higher levels is. Level 3 evaluations are associated with greater learning effectiveness. Levels 4 and 5, which are used less frequently, are associated with jumps in both learning and business effectiveness. Talent development functions should therefore allocate their evaluation funds wisely: those that use the higher levels only sparingly and sporadically should apply them more consistently, and those that do not use them at all should start.

Jim and Wendy Kirkpatrick suggest that “to have sufficient resources to implement a quality Level 3 plan, [you need to] streamline evaluation at Levels 1 and 2. Carefully consider what information is useful to the training department to ensure that training is of sufficient quality, and what information is required by stakeholders, if any, at these levels. If you do not plan to use a particular piece of data, save resources by choosing not to gather it. Reserve those saved resources for your Level 3 plan” (Kirkpatrick and Kirkpatrick 2015). 

Talent development professionals should proceed thoughtfully with their higher-level efforts, especially Levels 4 and 5. They must carefully choose the programs to focus on, as well as the metrics and research designs for impact and ROI studies, to maximize the learning and business effectiveness of these efforts. In such cases, it may be to talent development’s advantage to identify the critical metric first, design a learning program around improving it, and then measure again. This strategy may also encourage other departments or functions to partner with the talent development function before the program is launched and to share data and evaluation resources.

Besides considering the payoff, talent development professionals must also assess the hurdles that potential studies present. Indeed, isolating the connection between learning programs and business or mission outcomes is one of the top barriers to evaluation identified in this Study. Another major roadblock is that higher-level evaluation efforts may take considerable staff time, investment in tools, or collaboration with other departments. In other words, before embarking on the evaluation process, practitioners should ask what the business impact of a business impact study is, or what the ROI of an ROI study is.
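
A hypothetical back-of-the-envelope check illustrates the point: if a full impact study would consume roughly $25,000 in staff time, tools, and data collection, but the program being evaluated has a total budget of $40,000, the study costs more than half of what it measures and is hard to justify. The same $25,000 spent evaluating a $2 million enterprise-wide program, by contrast, is small relative to the decisions it informs. Reserving Level 4 and 5 studies for high-cost, high-visibility programs keeps the evaluation itself cost-effective.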

Another reason to plan ahead and plan carefully when designing evaluation efforts is the shortage, in today’s job market, of talent development professionals with the skills needed to conduct rigorous evaluations. More than half of the participants in the Study cited difficulty in attracting and retaining qualified staff. Given this shortage, talent development professionals should consider investing in their own evaluation skills, knowledge, and capabilities.

Editor’s Note: This article is excerpted from a new report from ATD Research, Evaluating Learning: Getting to Measurements That Matter (ATD Press, April 2016). This research was sponsored by the International Association for Continuing Education and Training (IACET). IACET’s mission is to advance the global workforce by providing the standard framework for quality learning and development through accreditation. IACET believes this research will contribute to an existing body of knowledge that will improve standards and practices for classroom, online, and mobile learning.

About the Author
Maria Ho is the manager of ATD research services. She serves as ATD’s senior research program strategist and designer, providing oversight and direction for all of ATD’s internal and external, industry-specific, and market research services. Prior to joining ATD, Maria was a public policy researcher, data analyst, and writer at the Pew Charitable Trusts in Washington, D.C.