ATD, Association for Talent Development

ATD Blog

Curiosity and Critical Thinking: Unveiling the Science of Measuring Learning Impact

Fri Oct 27 2023

Too often, we in learning and development (L&D) struggle to prove the efficacy of our learning programs because we don’t know how to measure our impact. To do so, we must approach measurement the way scientists do.

Let’s say your sales department wants to improve sales volume. After a thorough needs analysis, you develop and deliver a rigorous sales skills training program. Now you want to know if it’s working. You look at the sales of your trainees and are thrilled to report that the trained salespeople are selling, on average, 10 more units the month after training. Are these results credible? (Spoiler alert: probably not.)

Embrace the Science of Measurement

When we measure the impact of our learning programs, we are conducting a science experiment. And like any good science experiment, we must start with a hypothesis.

In our sales training example, our hypothesis could be, “Employees attending the new sales skills program will sell more units.”

But we need to be careful. Simply showing an increase in unit sales might not be the most credible metric to share with your stakeholders. There are almost always other variables at play.

Measure More Accurately With an Observational Design Approach

One of the best ways to account for other plausible explanations is to use test and control groups: the test group gets the new training, and the control group does not. This is the same scientific method used in other disciplines: researchers rely on it in clinical drug trials, and marketers use it to evaluate an advertising campaign’s effectiveness.

When comparing test and control groups, we should expand our hypothesis to reflect them: “Employees attending the new sales skills program show greater sales improvement than those not attending.”

By framing the hypothesis this way, we set up our experiment. We identify how we will show success (or failure) by comparing the improvement in sales performance between our two groups.

  • Original Results: On average, trained salespeople increased sales by 10 units the month after training.

  • Revised: On average, trained salespeople increased sales by 10 units the month after training. Untrained salespeople averaged a sales increase of six units during the same period.

  • Refined Further: On average, trained salespeople showed an incremental sales increase of four units over the untrained salespeople the month after training.
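
The arithmetic behind these refinements is simple enough to sketch. Here is a minimal Python illustration; the per-salesperson figures are made up (the post reports only the group averages), but they reproduce the same 10-unit, 6-unit, and 4-unit results:

```python
from statistics import mean

# Hypothetical per-salesperson changes in monthly unit sales after the training launch.
trained = [12, 8, 11, 9, 10, 10]   # test group: attended the new program
untrained = [7, 5, 6, 6, 7, 5]     # control group: did not attend

# Incremental lift: trained average minus untrained average.
lift = mean(trained) - mean(untrained)
print(mean(trained), mean(untrained), lift)  # 10 6 4
```

The "Refined Further" framing reports only the last number, because the control group's six-unit gain would have happened anyway.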

Get Curious About What You Can Uncover

Taking prior performance into account is critical, but often it is not enough. In an observational study, we need to rule out plausible reasons, other than our training, that someone’s performance may have improved.

Consider questions like these:

  • Did newer salespeople benefit more from training?

  • Did results vary by region?

  • Did results vary by type of customer (for example, size or industry)?
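
Questions like these amount to segmenting the comparison. A hedged sketch, again with invented records (the region names and figures are illustrative only), might group the incremental lift by region:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (region, attended_training, change_in_units_sold).
records = [
    ("East", True, 12), ("East", True, 10), ("East", False, 7), ("East", False, 6),
    ("West", True, 8),  ("West", True, 9),  ("West", False, 6), ("West", False, 5),
]

# Collect sales changes by (region, trained) cell.
groups = defaultdict(list)
for region, trained, change in records:
    groups[(region, trained)].append(change)

# Incremental lift per region: trained average minus untrained average.
lift_by_region = {
    region: mean(groups[(region, True)]) - mean(groups[(region, False)])
    for region in sorted({r for r, _, _ in records})
}
print(lift_by_region)  # {'East': 4.5, 'West': 3.0}
```

A spread like this would itself become a new hypothesis to investigate: why did one region benefit more than another?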

Each of these questions becomes another hypothesis to test. As we work through these alternatives, there is a good chance we will discover something exciting, and we will better understand what other factors drive performance.

3 Steps to Improve Your Learning Measurement

1. Start With a Solid Hypothesis

Think about what you want to know and frame it as a hypothesis. Remember, a hypothesis is a well-thought-out proposition. We don’t hypothesize that “our new onboarding program will be useful.” We hypothesize that “newly hired employees completing our new onboarding program will have a lower 90-day turnover rate than new hires completing our old program.”

2. Test Your Hypothesis

Assume that an onboarding program posted good results. It showed a 90-day new hire turnover reduction from 21 percent to 12 percent. Can the onboarding program take all the credit? Let’s also say that your organization launched a mentorship program shortly after the onboarding program. You need to explore whether the onboarding or the mentoring (or a combination) led to the reduction. This requires adding another hypothesis or two and digging deeper.
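
To gauge whether a drop from 21 percent to 12 percent could be chance, one common technique is a two-proportion z-test. The cohort sizes below are assumptions (the post gives only percentages), and note the test's limit: it can tell you the drop is real, but it cannot by itself split the credit between onboarding and mentorship; that requires comparing hires exposed to only one of the two programs.

```python
import math

def two_proportion_z(x_a, n_a, x_b, n_b):
    """Z statistic for the difference between two proportions, pooled standard error."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical cohorts of 200 hires each: 42 of 200 (21%) left within 90 days
# under the old program, 24 of 200 (12%) under the new one.
z = two_proportion_z(42, 200, 24, 200)
print(round(z, 2))  # 2.42, above the conventional 1.96 cutoff for significance
```

With smaller cohorts the same percentages could easily fall below the cutoff, which is another reason to state your hypothesis and sample before looking at results.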

3. Plan to Act

At the outset of a measurement project, clarify your intent. Ask yourself:

  • Am I measuring to prove it worked?

  • Am I measuring because I want to know how to improve the program?

  • What will I do if the results are unfavorable?

  • What actions am I prepared to take based on the results?

Asking these questions provides you with insights that are catalysts for action.

The Journey Toward Continuous Improvement

Embrace your measurement journey with a scientific mindset by checking out GP Strategies’ Measurement Academy and learning how we help organizations uncover valuable L&D insights daily.
