Learning transfer is the true measure of a program's effectiveness.
Organizations invest in learning with the goal of getting new employees up to speed quickly and helping experienced employees improve their performance. Robert Brinkerhoff called this "the fundamental logic of training."
Training is a success if the trainees achieve the expected levels of performance. It generates a positive return on investment when the value of the performance gains exceeds the cost of achieving them. Training is a failure when the performance goals are not met, regardless of how much employees liked the training or think they learned.
Newly acquired knowledge and skills must be applied to the work of the individual and firm for training to have value. As Don and Jim Kirkpatrick noted: "If the trainees do not apply what they learned, the program has been a failure even if learning has taken place."
Formula for success
A simple way to remember the vital role of learning transfer is the equation: Learning x Transfer = Results
The on-the-job results, which are what business leaders care about, are the product of the amount learned and the amount transferred. So, as the Kirkpatricks pointed out, even if the training is a 10 out of 10, the results will be zero if transfer is zero. To maximize value, therefore, learning professionals and business managers need to work together to optimize both learning and learning transfer. And to improve transfer, we need to measure and manage it.
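To make the multiplication concrete, here is a minimal sketch; the 0-to-10 learning score, the 0-to-1 transfer rate, and the example values are illustrative assumptions, not scales defined in this article.

```python
# A minimal sketch of Learning x Transfer = Results.
# The 0-10 learning score and 0-1 transfer rate are illustrative assumptions.

def results(learning: float, transfer: float) -> float:
    """Results are the product of how much was learned and how much was transferred."""
    return learning * transfer

print(results(learning=10, transfer=0.0))  # 0.0 -- a "perfect 10" program with zero transfer produces zero results
print(results(learning=10, transfer=0.6))  # 6.0 -- the same program with partial transfer produces real results
```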
Defining learning transfer
Learning transfer is sometimes confused with knowledge transfer, which is the transmittal of information from an instructor to a student. That is not what we mean here. In corporate training and development, the pressing issue is learning transfer, which is defined in the second edition of The Six Disciplines of Breakthrough Learning as "the process of putting learning to work in a way that improves performance."
Three words in that definition warrant special mention:
- Process—transfer takes place over time; it is not a one-time event.
- Work—learning adds value only when it is applied to the employees' tasks and responsibilities.
- Performance—what is actually accomplished.
Likewise, learning transfer is sometimes mistakenly defined as the percentage of the program content trainees can recall. That is virtually impossible to measure and not useful. No one can remember, let alone apply, more than a fraction of all the material in a typical training program. Performance is what counts, not recall.
The metric that really matters for assessing learning transfer is the percentage of trainees who achieve the desired standards of on-the-job performance in the expected timeframe.
Performance is achieved only through effective transfer of great training to real-time job tasks. Therefore, meaningfully assessing learning transfer requires assessing outcomes on the job.
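As a rough illustration of how that metric might be tracked, the sketch below computes the percentage of trainees who met a performance standard by an agreed deadline; the record structure, field names, and sample data are hypothetical, not part of any instrument described here.

```python
# A hedged sketch of the transfer metric described above: the percentage of
# trainees who meet the on-the-job performance standard within the expected
# timeframe. Field names, dates, and sample data are hypothetical.

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class TraineeOutcome:
    name: str
    met_standard: bool           # judged by an informed observer against the program's standard
    achieved_on: Optional[date]  # date the standard was met, if it was met at all

def transfer_rate(outcomes: List[TraineeOutcome], deadline: date) -> float:
    """Percentage of trainees who met the standard on or before the deadline."""
    on_time = [o for o in outcomes
               if o.met_standard and o.achieved_on is not None and o.achieved_on <= deadline]
    return 100.0 * len(on_time) / len(outcomes) if outcomes else 0.0

outcomes = [
    TraineeOutcome("A", True, date(2024, 3, 1)),
    TraineeOutcome("B", False, None),
    TraineeOutcome("C", True, date(2024, 5, 15)),
]
print(f"{transfer_rate(outcomes, deadline=date(2024, 4, 1)):.0f}% met the standard on time")  # 33%
```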
Learning transfer deserves more attention
A 2017 study by Lever Learning confirms the need for much greater attention to learning transfer. Only 8 percent of respondents felt that their learning transfer support was "highly effective," while 66 percent felt that it "could be improved." A majority of respondents felt that training produced long-term job improvement in fewer than 30 percent of the trainees. Similar findings have been reported by others.
That suggests that as much as 70 percent of corporate training ends up as "learning scrap," a term coined in 2005 by the authors of The Six Disciplines to describe training that employees attend but never use in a way that improves their performance. The term was chosen to draw a parallel to manufacturing scrap: producing manufacturing scrap wastes time, materials, and opportunity, and so does producing learning scrap.
Whereas companies have made great strides in reducing manufacturing scrap to negligible levels, the amount of learning scrap is still unacceptably high. In a survey conducted by McKinsey & Company, only 25 percent of business managers said that training and development contributed measurably to improved business performance. Similarly, half of business managers surveyed by CEB believe that completely eliminating L&D would not adversely affect performance.
How can that be? We have never had better tools with which to teach or a better understanding of instructional design and how people learn; most instruction is pretty good. The problem is a lack of transfer.
More than a decade ago, Jack Zenger wrote: "Talk to any group of laymen or professionals about what's broken in the current learning and development process, and most will tell you it's the lack of serious post-training follow-through." That still seems to be the case today. The overwhelming majority of training professionals in our 6Ds workshops agree that "inadequate structure, support, and accountability after training" is the most common reason that training fails to improve performance.
Improving learning transfer
Many factors affect learning transfer and training effectiveness (see figure). Improving transfer therefore requires a multifaceted approach, focused primarily on the post-training environment, that addresses factors such as:

- content validity
- transfer design
- personal capacity
- opportunity to use
- motivation to transfer
- learner readiness
- self-efficacy
- performance expectations
- outcome expectations
- supervisor support
- supervisor sanctions
- peer support
- positive personal outcomes
- negative personal outcomes
- performance coaching
- resistance to change.
Understanding the relative strengths and weaknesses of these transfer factors in your organization, and taking steps to correct those that impede the use of new learning on the job, is essential to the ultimate success of any learning initiative.
Learning transfer must be measured
Given the importance of transfer to training's overall success, it must be managed; leaving it to chance and individual initiative invites failure. The great management guru Peter Drucker is often credited with the remark that "you cannot manage what you don't measure." To manage and improve transfer, you must measure it by evaluating performance on the job. The extent to which training does or does not improve performance is a composite measure of all the elements of the training operating system, which include:
- the appropriate selection of candidates and timing of the training
- the effectiveness of the instruction
- the health of the learning transfer climate (learning transfer system)
- the support (or lack thereof) from direct supervisors.
The performance you should evaluate is the very performance that the program was designed to achieve. Depending on the business needs and program goals, that might be to improve leadership, deliver better customer service, increase sales, and so on.
Because such outcomes also are affected by many nontraining factors, it is difficult to assess the relative contribution of training. Therefore, focus on the critical behaviors that are needed to produce the desired results since those must change first, are more directly influenced by training and coaching, and are the leading indicators of improved performance.
How do you assess a change in behavior? Through observation. Meaningful behavioral assessment requires an informed observer who knows what to look for (checklists, rubrics) and who can observe a representative sample of performance.
If the trainees achieve the standards of performance, then you can be confident that all the components of the training operating system are working as planned. If the performance falls short of the desired goals, then additional analysis is needed to determine where the breakdown occurred. Did the instruction fail to impart the needed skills? Or, more likely, was the breakdown in the transfer step?
If employees are able to perform the skills satisfactorily in a test environment, but fail to do so on the job, then the breakdown is in the transfer step rather than the training itself.
How do you identify where the breakdown occurred? Ask the employees questions based on the Learning Transfer System Inventory (LTSI), such as:
- To what extent did you have opportunities to apply what you learned in the program?
- Did your manager encourage you to use your new skills?
- Did your manager help you?
- Did your co-workers encourage or discourage the use of what you learned?
- Were your efforts to apply your learning recognized or rewarded?
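One way to make the answers actionable is a simple tally by factor, flagging the weakest areas for follow-up. The sketch below is purely illustrative; the factor labels, five-point scale, sample scores, and cutoff are assumptions, not the LTSI's actual items or scoring method.

```python
# A hypothetical tally of answers to LTSI-style questions, used to flag where
# transfer may be breaking down. Labels, scale, scores, and threshold are
# illustrative assumptions only.

from statistics import mean

responses = {
    "opportunity to use":       [4, 3, 4, 5],
    "supervisor encouragement": [2, 1, 2, 3],
    "peer support":             [3, 4, 3, 4],
    "recognition and rewards":  [2, 2, 1, 2],
}

THRESHOLD = 3.0  # illustrative cutoff below which a factor needs attention

for factor, scores in responses.items():
    avg = mean(scores)
    status = "needs attention" if avg < THRESHOLD else "ok"
    print(f"{factor:26s} {avg:.1f}  {status}")
```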
Assessing the level of managerial support is imperative. Numerous studies have shown a direct correlation between the level of engagement of the learner's manager and the degree of performance improvement. Robert Brinkerhoff summed it up this way: "When managers support learners and learning, it works. When they do not, it does not." That is why Steelcase routinely polls its employees after training to determine the level of managerial support, and then posts the results on the company's learning portal.
Actionable data
From the business's point of view, training is successful only when trainees achieve performance goals. Doing so requires high-quality instruction as well as effective learning transfer. The most effective organizations optimize both.
As numerous authors in this Prove It series have noted, the purpose of measurement is to provide actionable data. Regardless of whether the training was a success or fell short of expectations, you need to assess both on-the-job performance and the transfer climate to gain insights on how to improve the process and create even greater value from training in the future.