
ATD Blog

Curiosity and Critical Thinking: Unveiling the Science of Measuring Learning Impact

October 27, 2023


Brought to you by GP Strategies

Too often, we learning and development (L&D) professionals struggle to prove the efficacy of our learning programs because we don’t know how to measure our impact. To do so, we must approach measurement the way scientists do.

Let’s say your sales department wants to improve sales volume. After a thorough needs analysis, you develop and deliver a rigorous sales skills training program. Now you want to know if it’s working. You look at the sales of your trainees and are thrilled to report that the trained salespeople are selling, on average, 10 more units the month after training. Are these results credible? (Spoiler alert: probably not.)


Embrace the Science of Measurement

When we measure the impact of our learning programs, we are conducting a science experiment. And like any good science experiment, we must start with a hypothesis.

In our sales training example, our hypothesis could be, “Employees attending the new sales skills program will sell more units.”

But we need to be careful. Simply showing an increase in unit sales might not be the most credible metric to share with your stakeholders. There are almost always other variables at play.

Measure More Accurately With an Observational Design Approach

One of the best ways to account for other plausible explanations is to use test and control groups: the test group receives the new training, and the control group does not. This is the same approach used in other disciplines, from researchers conducting clinical drug trials to marketers evaluating an advertising campaign’s effectiveness.

When comparing test and control groups, we should expand our hypothesis to reflect them: “Employees attending the new sales skills program show greater sales improvement than those not attending.”


By framing the hypothesis this way, we set up our experiment. We identify how we will show success (or failure) by comparing the improvement in sales performance between our two groups.

  • Original results: On average, trained salespeople increased sales by 10 units the month after training.

  • Revised results: On average, trained salespeople increased sales by 10 units the month after training. Untrained salespeople averaged a sales increase of six units during the same period.

  • Refined results: On average, trained salespeople showed an incremental sales increase of four units over the untrained salespeople the month after training (the calculation is sketched below).
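
To make the arithmetic concrete, here is a minimal sketch in Python. The per-salesperson numbers are hypothetical, chosen only to reproduce the averages in the example above; with real data you would pull each group’s before-and-after sales figures from your sales reporting system.

```python
# Minimal sketch (hypothetical data): calculating the incremental lift of the
# trained (test) group over the untrained (control) group.
from statistics import mean

# Change in monthly units sold (month after training minus month before),
# one value per salesperson. These figures are illustrative only.
test_group_lift = [12, 9, 11, 8, 10]       # attended the new sales skills program
control_group_lift = [7, 5, 6, 6, 6]       # did not attend

test_avg = mean(test_group_lift)           # 10 units
control_avg = mean(control_group_lift)     # 6 units
incremental_lift = test_avg - control_avg  # 4 units associated with the training

print(f"Test group average lift:    {test_avg:.1f} units")
print(f"Control group average lift: {control_avg:.1f} units")
print(f"Incremental lift:           {incremental_lift:.1f} units")
```

With real data, you would also want to confirm that a four-unit difference is larger than normal month-to-month variation, for example with a simple significance test, before presenting it as training impact.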

Get Curious About What You Can Uncover

Taking prior performance into account is critical, but often it is not enough. With an observational study, we need to rule out plausible reasons, other than our training, that someone’s performance may have improved.

Consider questions like these:

  • Did newer salespeople benefit more from training?

  • Did results vary by region?

  • Did results vary by type of customer (for example, size or industry)?

Each of these questions becomes another hypothesis to prove or disprove. As we test these alternatives, we are likely to uncover some exciting findings, and we will better understand what other factors drive performance.
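
One lightweight way to explore questions like these is to break the same lift metric out by each factor you suspect matters. Below is a hedged sketch assuming the data sits in a pandas DataFrame; the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical data: one row per salesperson, with the post-training sales
# lift plus attributes that might also explain performance differences.
df = pd.DataFrame({
    "trained": [True, True, True, False, False, False],
    "tenure":  ["new", "veteran", "new", "veteran", "new", "veteran"],
    "region":  ["east", "west", "east", "west", "east", "west"],
    "lift":    [12, 8, 11, 5, 7, 6],   # change in monthly units sold
})

# Each breakdown is, in effect, a quick check of one alternative hypothesis:
# does the trained-vs-untrained gap hold up within each subgroup?
print(df.groupby(["trained", "tenure"])["lift"].mean())
print(df.groupby(["trained", "region"])["lift"].mean())
```

If the gap shrinks or disappears within a subgroup, that subgroup’s characteristic, not the training, may be doing some of the work.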

3 Steps to Improve Your Learning Measurement

1. Start With a Solid Hypothesis


Think about what you want to know and frame it as a hypothesis. Remember, a hypothesis is a well-thought-out proposition. We don’t hypothesize that “our new onboarding program will be useful.” We hypothesize that “newly hired employees completing our new onboarding program will have a lower 90-day turnover rate than new hires completing our old program.”

2. Test Your Hypothesis

Assume that an onboarding program posted good results. It showed a 90-day new hire turnover reduction from 21 percent to 12 percent. Can the onboarding program take all the credit? Let’s also say that your organization launched a mentorship program shortly after the onboarding program. You need to explore whether the onboarding or the mentoring (or a combination) led to the reduction. This requires adding another hypothesis or two and digging deeper.
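
A minimal way to start digging is to split the new hires by whether they also received a mentor and compare 90-day turnover across the cohorts. The sketch below uses hypothetical counts purely to illustrate the comparison.

```python
# Hypothetical cohorts: new hires who completed the new onboarding program,
# split by whether they were also assigned a mentor.
cohorts = {
    "new onboarding only":        {"hires": 120, "left_within_90_days": 18},
    "new onboarding + mentoring": {"hires": 110, "left_within_90_days": 9},
}

for name, counts in cohorts.items():
    turnover = counts["left_within_90_days"] / counts["hires"]
    print(f"{name}: {turnover:.1%} 90-day turnover")
```

If both cohorts sit well below the old 21 percent baseline but the mentored cohort is lower still, the data suggests onboarding and mentoring each contribute, which is itself a new hypothesis to test.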

3. Plan to Act

At the outset of a measurement project, clarify your intent. Ask yourself:

  • Am I measuring to prove it worked?

  • Am I measuring because I want to know how to improve the program?

  • What will I do if the results are unfavorable?

  • What actions am I prepared to take based on the results?

Asking these questions up front ensures the insights you gather become catalysts for action.

The Journey Toward Continuous Improvement

Embrace your measurement journey with a scientific mindset by checking out GP Strategies’ Measurement Academy and learning how we help organizations uncover valuable L&D insights daily.

