ATD Blog

Find the Levers with Metrics

Wednesday, February 4, 2015


Photo courtesy of Kris Carillo; Flickr


You have a course that’s underperforming. Is it not generating the buzz you’d like? Are satisfaction ratings low? Or, do you have a new assessment system that tells you people aren’t learning enough?

Let’s use metrics to guide your improvements.

A measure of whether your course is succeeding or failing is one kind of metric—typically referred to as a key performance indicator (KPI). KPIs are handy for reporting overall results to management. However, to turn the insight that a course needs improvement into action, you need operational metrics that delve deeper into the mechanics of the course. These metrics will help you discern which course features offer potential adjustment levers.
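
To make the distinction concrete, here is a minimal sketch of the two kinds of numbers, computed in Python from invented learner records; the field names and the specific metrics are illustrative assumptions, not a prescribed schema.

    # Hypothetical per-learner records exported from a learning platform.
    records = [
        {"learner": "ana", "passed": True,  "video_seconds_watched": 310, "quiz_attempts": 1},
        {"learner": "bo",  "passed": False, "video_seconds_watched": 45,  "quiz_attempts": 3},
        {"learner": "cy",  "passed": True,  "video_seconds_watched": 280, "quiz_attempts": 2},
    ]

    # KPI: one headline number suitable for reporting to management.
    pass_rate = sum(r["passed"] for r in records) / len(records)

    # Operational metrics: finer-grained numbers tied to specific course features.
    avg_video_time = sum(r["video_seconds_watched"] for r in records) / len(records)
    avg_quiz_attempts = sum(r["quiz_attempts"] for r in records) / len(records)

    print(f"pass rate (KPI): {pass_rate:.0%}")
    print(f"avg video time: {avg_video_time:.0f}s, avg quiz attempts: {avg_quiz_attempts:.1f}")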

Finding these levers through metrics is a three-step process:

  1. Propose levers—using your common sense as a starting point.
  2. Test the levers with data—find which ones really have an impact.
  3. Revise your deductions—as you learn from the data, keep trying new ideas.

Step 1: Propose Levers

The starting point for finding levers is your own experience. What do you think is a likely candidate? Brainstorm a list, and pick the ones that seem the most promising. In other words, you’re forming hypotheses about the course performance.

Training has a number of components that can affect outcomes, from videos to quizzes. For example, when using video, ask which video features are measurable and which could plausibly affect results. A clear priority would be to test whether videos matter at all. If they do, you can test whether a video’s length has an impact or whether including live actors has a measurable effect.

Step 2: Test the Levers

The next key step is to test the effect of a training feature that you think is important. For example, let’s say that the inclusion of videos in training is considered important for a high-quality outcome. Because videos are expensive to produce, there’s a lot at stake in deciding whether to include them.

The simplest and most direct way to test the value of video is to make two versions of one course: Version A with videos and Version B without videos. Keep everything else within the training exactly the same. Then randomly assign people to participate in the two versions of training. This is called “A/B Testing” (or “Experimental Design”), and it’s been proven highly effective in testing websites.
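
If you manage the assignment yourself rather than relying on a learning platform, a minimal sketch of the random split might look like the following; the learner names are invented and the version labels are only for illustration.

    import random

    # Hypothetical roster of learners enrolled in the course.
    learners = ["ana", "bo", "cy", "di", "ed", "fay"]

    def assign_ab(learners, seed=42):
        """Randomly split learners into Version A (with videos) and Version B (without)."""
        shuffled = list(learners)
        random.Random(seed).shuffle(shuffled)   # seeded so the split is reproducible
        half = len(shuffled) // 2
        return {
            "A (with videos)": shuffled[:half],
            "B (without videos)": shuffled[half:],
        }

    for version, people in assign_ab(learners).items():
        print(version, people)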

You can find some terrific examples of A/B Test results at http://whichtestwon.com. For example, does a happy face appeal more to an audience, or a serious face? When the audience is boaters, apparently it’s the serious face.

An important caveat: Test results will depend on who’s taking the test. Engineers may respond differently from sales staff, and people in the German branch may respond differently from the United Kingdom branch, for instance. To avoid any issues:

  • Be sure to test the right group of people for your needs.
  • Be sure to have people with the same background looking at the A and B versions—this is why random assignments are so important.

You don’t need to make test materials that are separate from your regular courses. Make two versions of a real course, and really deliver them to your regular audience. You’ve created very little extra work, and in the process answered a question.
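
One way to honor the second bullet above is to randomize within each background group rather than across the whole roster, so both versions see a similar mix of people. Here is a minimal sketch; the roles and names are invented.

    import random
    from collections import defaultdict

    # Hypothetical roster tagged with a background attribute (role, region, and so on).
    roster = [
        ("ana", "engineering"), ("bo", "engineering"), ("fay", "engineering"),
        ("cy", "sales"), ("di", "sales"), ("ed", "sales"),
    ]

    def stratified_ab(roster, seed=7):
        """Shuffle within each background group, then alternate assignments so
        Version A and Version B each get a similar mix of backgrounds."""
        by_group = defaultdict(list)
        for name, group in roster:
            by_group[group].append(name)

        assignments = {"A": [], "B": []}
        rng = random.Random(seed)
        for group, names in by_group.items():
            rng.shuffle(names)
            for i, name in enumerate(names):
                assignments["A" if i % 2 == 0 else "B"].append((name, group))
        return assignments

    print(stratified_ab(roster))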

Step 3: Revise Your Deductions—and Keep Testing

Your test of video impact may demonstrate that the course performs roughly the same with and without videos—by whatever overall measure you’re using: satisfaction, knowledge transfer, and so on. This may mean that videos aren’t a lever after all. But like any good scientist, you’ll want to repeat the test at least a few times, with different courses and different audiences, before you embrace that idea.
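
When you compare the two versions, a simple check is whether the difference in your overall measure is larger than chance alone would explain. Here is a minimal sketch using a two-sample t-test from SciPy (a third-party library); the satisfaction scores are invented, and whether a t-test is the right tool depends on your measure and sample size.

    from statistics import mean
    from scipy import stats

    # Hypothetical post-course satisfaction scores (1-5 scale) from each version.
    scores_a = [4.2, 3.8, 4.5, 4.0, 3.9, 4.1]   # Version A: with videos
    scores_b = [4.0, 3.9, 4.3, 3.8, 4.1, 4.0]   # Version B: without videos

    t_stat, p_value = stats.ttest_ind(scores_a, scores_b)
    print(f"mean A = {mean(scores_a):.2f}, mean B = {mean(scores_b):.2f}, p = {p_value:.3f}")

    # A large p-value means the difference could easily be chance, so videos may
    # not be the lever you hoped for; repeat the test with other courses and
    # audiences before settling on that conclusion.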

If, on the other hand, the course with videos does substantially better with various courses and audiences, you can move on to testing specifics of the videos, including: 

  • Is there an ideal length for videos?
  • Is there a specific type of content that performs best in videos?

Remember that A/B testing will be your friend. More importantly, share your insights. Imagine what we can accomplish as a profession if we have our own compendium of which test won.

 

About the Author

John Osborne is a knowledge manager at the intersection of learning and development, online publishing, and data analytics. Based in the tech industry, his bias is towards harnessing technology to transfer knowledge—from social media to business intelligence, and from the enterprise to the global online audience. John’s mission is to create positive change in training methods by applying insights from the wealth of data generated by emerging tools.  His background includes statistical analysis of knowledge transfer, application of web analytics to user intent, and delivering training in classrooms from Jakarta to Redmond.

 
