ATD Blog

Science of Learning 101: What Happens After Training

Friday, September 25, 2015

This summer, the Science of Learning Community began discussing the essential work by Eduardo Salas and his fellow authors, The Science of Training and Development in Organizations: What Matters in Practice, which used meta-analyses (statistical methods for contrasting and combining results from multiple research studies) to identify the best ways to design, deliver, and implement training so that performance outcomes are as good as possible.

In July, I reviewed their analyses on the critical elements before training. Last month, I reviewed critical elements during training. This month, I’ll complete this series by discussing what the meta-analyses uncovered about the most effective practices after training.

Why Post-Training Is So Important

My mother-in-law used to say this about cooking: “You put good things in; it’ll taste good.” While this maxim sounds logical, it doesn’t always work in practice. For instance, chocolate and mustard don’t taste good when you mix them together. Likewise, you can ruin good foods by cooking them for too long.

What does this have to do with training? Simply this: Good ingredients don’t ensure good training, and training “tastes” best when L&D pros “cook” it the right way. Certainly, it’s nice when learners earn a good score on a post-instruction test, but it’s inadequate if their performance doesn’t improve. In other words, transfer of training is the ultimate goal.

In their research, Salas and his fellow authors come to the same conclusion. They explain that although organizations spend billions of dollars on training each year, even good training doesn’t always transfer to the workplace. Therefore, they assert that what happens after training “can have as great an impact on training effectiveness as what happens during training….” What’s more, lack of transfer happens for a variety of reasons, but their meta-analyses found that follow-up activities can have a positive impact on this challenge.

Figure 1 shows Table 4 from the research, which is a checklist of steps L&D practitioners can take following training experiences to ensure maximum effectiveness. 

Figure 1. Checklist of steps to take after training (Table 4 from the research).

Support on the Job

The analyses show that the post-training environment has a significant impact on whether training will transfer to the workplace. For example, is the supervisor supportive of the skills covered in the training? Will the work environment allow people to use these skills? These factors have a huge impact on whether the skills practiced in training continue or die out.

The paper also points to research showing that only 7 to 9 percent of skill acquisition comes from formal training. Instead, leaders (both formal and informal) are key factors in learning because they greatly influence what people actually do on the job. Think about how people perform where you work and you’ll realize the truth of this statement. Salas et al. state that organizations must also provide tools, training, and support to help these leaders coach others and reinforce the desired training.

Debriefing 

The authors also say that organizations can institute debriefing as a powerful (but often underused) tool for reinforcing learning after training. A common and effective tool in the military, debriefing enables learners to reflect on the training experience and identify what went well and what did not. In addition, when used after training, debriefing can help workers create performance agreements that tie measurable actions to their performance goals.

Evaluation

Training evaluation typically means collecting data to determine whether the training met learning objectives. The Salas paper provides a good discussion of how best to do this so the organization can determine whether accomplishing the learning objectives resulted in improved job performance.

Training, Systems, and the Science of Learning 

The Science of Training and Development in Organizations: What Matters in Practice provides a list of important questions that L&D practitioners can ask about training programs, both in general terms and about specific aspects of individual programs. (See Figure 2, which is Table 6 from the research.)

Figure 2. Key questions to ask about training programs (Table 6 from the research).

The authors suggest that readers consider the analogy of investing large sums of money, such as retirement funds, into mutual funds without ever researching the fund’s past performance or reviewing how the investment does over time. Doing so would be akin to throwing away your money, right? Similarly, not trying to obtain the best outcomes from your training is irrational. That’s why they tell readers that “decisions about what to train, how to train, and how to implement and evaluate training should be informed by the best information science has to offer.” 

Indeed, that is exactly what this paper’s meta-analyses do for training. They use the science of learning to outline how to gain the best outcomes from training. For instance, investments in training must use adequate analyses to ensure that the organization is addressing the right training needs, using the right training methods, and taking the right actions before, during, and after training. Simply stated, like retirement investments, we must analyze, monitor, and follow up on our training investments.

To be sure, the science of learning provides evidence-based training design principles, and training developed with these principles provides better outcomes for our organizations. But design principles are not always enough. Training is a system issue. If learners don’t feel confident, even good training may not take hold. If learners return to an environment that doesn’t support the training, even good training will fail. Bottom line: L&D professionals involved in designing training need to understand the entire learning system.

Sources 

Salas, E., Tannenbaum, S. I., Kraiger, K., and Smith-Jentsch, K. A. (2012). The Science of Training and Development in Organizations: What Matters in Practice. Psychological Science in the Public Interest, 13(2), 74–101.

About the Author

Patti Shank, PhD, CPT, is a learning designer and analyst at Learning Peaks, an internationally recognized consulting firm that provides learning and performance consulting. She is an often-requested speaker at training and instructional technology conferences, is quoted frequently in training publications, and is the co-author of Making Sense of Online Learning, editor of The Online Learning Idea Book, co-editor of The E-Learning Handbook, and co-author of Essential Articulate Studio ’09.

Patti was the research director for the eLearning Guild and an award-winning contributing editor for Online Learning Magazine, and her articles are found in eLearning Guild publications, Adobe’s Resource Center, Magna Publication’s Online Classroom, and elsewhere.

Patti completed her PhD at the University of Colorado, Denver, and her interests include interaction design, tools and technologies for interaction, the pragmatics of real-world instructional design, and instructional authoring. Her research on new online learners won an EDMEDIA (2002) best research paper award. She is passionate and outspoken about the results needed from instructional design and instruction, and is engaged in improving instructional design practices and instructional outcomes.
