Workplace learning and performance professionals are ever seeking to prove training's business value. Read on for a pragmatic approach to this endeavor.

In a previous role, when I updated my vice president on training projects, she often answered professionally, yet curtly: "John, so what?" I took her matter-of-fact response as a career-critical reminder that executives do not care about training activity (such as number of class participants or course satisfaction scores). Reporting how many emails I sent each day or how many meetings I attended in a week would deservedly provoke the same response—so what?

These C-level discussions drove me up the evaluation chain to Kirkpatrick's Level 4: Did the training trigger business results? Executives do and should care about revenue and expenses, the mark of an organization's success. They also are appropriately concerned about customer satisfaction, quality, productivity, employee engagement, and turnover because such leading indicators ultimately feed revenue and expenses.

My previous organization tracked business results through data from customer satisfaction scores, productivity and quality ratings, employee engagement scores, and employee turnover figures. The same organization's learning management system tracked training activities by measuring the number of sessions conducted; number of employees who completed training; and participant names, locations, groups, and levels.

The training challenge

To communicate a training program's business results accurately to the C-suite, workplace learning professionals must first address several questions:

  • Is there a connection between business results and training activities?
  • Is this connection due to cause—was training a cause for the measurable effect?
  • Is this connection due to coincidence—did training and the improved results coincide by accident?
  • Is this connection due to correlation—did some aspect of the training map to a positive business result?

Cause

In the experimental lab, a single cause can be isolated relatively easily. In the workplace, however, it is difficult—and sometimes impossible—to prove that training is the sole or most significant cause of a change in business results. In an organization there are too many variables, too many unknown factors, and—therefore—too many potential causes.

In their book Human Capital Analytics: Measuring and Improving Learning and Talent Impact, Kent Barnett and Jeffrey Berk list training as one of seven possible contributors to any business result. These factors are people, process, technology, culture, externalities, measurements, and a learning and development program.

The quality improvement movement introduced several techniques, including root cause analysis, or RCA. The premise behind RCA is that when addressing business problems, it is useless to target the symptoms; instead, it is critical to isolate the root cause.

We live by this concept in our personal lives: We know that aspirin can suppress virus symptoms, but it does not provide a cure for the sickness. Yet we often forget the difference between symptoms and causes in the workplace, where there is seldom a singular cause-effect relationship. Thanks to Kaoru Ishikawa, a pioneer in the quality improvement movement, we have the "fishbone diagram" (see below) to depict the combination of potential causes.

While it is difficult and sometimes impossible to isolate training as a significant cause, there is a second problem with the cause-effect approach: What if, after training, the results deteriorated? Would we want to take responsibility for causing poor results?

Coincidence

Using coincidence to explain training's impact is even more problematic than citing cause. It will guarantee a "so what?" reply from the C-suite.

Explaining that sheer luck allowed training to coincide with positive business results strongly conveys that there is no real connection between the two. The organization could likely obtain the same results without the training—and save some money, too. Clearly, this is not a route to help solidify stakeholder support.

Correlation

While there certainly are challenges with correlation, of the three approaches it is the most plausible. In the workplace, many factors contribute to business results, and executives know this. The key is to acknowledge the many contributing and perhaps unidentified factors, and to show enough evidence that there is a positive correlation between the training initiative and a business result.

In The Extraordinary Leader: Turning Good Managers into Great Leaders, John Zenger and Joseph Folkman describe their study of competencies and results. In 20-plus years of research, they identified 16 leadership competencies that tracked positively with business results. In one example, leaders who rated high in leadership competencies reported lower employee turnover.

Zenger and Folkman do not draw cause-effect conclusions, such as claiming that poor leadership causes employee turnover. Instead, they shrewdly report that there is a positive correlation between effective leaders and lower turnover.

Put it into practice

According to the ASTD research report The Value of Evaluations, only 37 percent of organizations surveyed conduct evaluations at Level 4 (business results), even though 75 percent say Level 4 has high or very high value. To make the connection between results and training, pull relevant business metrics from your organization's operations or finance functions, and pull training activity data from the learning management system or other data management system.

Next, look at your training program's objective: Was it to improve sales, increase customer satisfaction, or reduce errors? Finally, look for positive correlations between the two sets of data. The key is to disclose the realistic limitations proactively and confidently. For example:

  • While multiple factors contributed to the results, there is evidence that training played a major role.
  • Strong evidence points to a connection between the training completed and the productivity increase.
  • There is a significant positive correlation between those who completed the training and their sales figures.
  • The preponderance of data indicates that when the skills are applied, the results improve; the data also point to a number of factors beyond training, such as management's support.
  • Pre- and post-training audits reveal significantly higher quality scores post-training.
  • While ultimate proof is not realistic in our dynamic environment, there is considerable data to indicate a positive correlation between the training and the results.
  • The data seem to indicate a connection between those completing the training and the recent improvement in customer satisfaction scores (with a confidence level of 80 percent or an error range of 10 percent).

Supplementing data with participant comments, when positioned effectively, increases credibility. For example, the participants estimated an average 10 percent productivity improvement when applying the new skills. Keep in mind that participants tend to overestimate training's impact by as much as 30 to 50 percent. You can mitigate this bias by discounting survey estimates accordingly. In this example, decrease the 10 percent estimate to 5 percent. When extrapolated across the global organization, a significant business impact remains evident.
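
The discounting arithmetic above can be sketched as follows. Only the 10 percent estimate and the 50 percent discount come from the example in the text; the headcount and output-value figures are hypothetical, used purely to show the extrapolation step.

```python
# Illustrative discounting of participant self-reported impact, per the
# rule of thumb that such estimates run 30 to 50 percent high.
reported_gain = 0.10   # participants' average estimated productivity gain
discount = 0.50        # conservative: assume estimates are 50 percent high

adjusted_gain = reported_gain * (1 - discount)   # 10 percent becomes 5 percent

# Hypothetical extrapolation across the wider workforce.
employees = 1_000
avg_annual_output_value = 80_000   # dollars per employee, illustrative
estimated_impact = employees * avg_annual_output_value * adjusted_gain

print(f"Adjusted productivity gain: {adjusted_gain:.0%}")
print(f"Estimated annual impact: ${estimated_impact:,.0f}")
```

Even after halving the self-reported gain, the extrapolated figure remains material, which is the point of presenting the discounted, more defensible number.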

Remember that executives are not expecting 100 percent proof of training's impact. Moreover, if the C-suite has unrealistic requirements, it is the job of workplace learning professionals to manage—that is, lower—those expectations. There is no bulletproof path to evaluation, so it is a challenging yet critical responsibility for workplace learning professionals, as business partners, to show evidence of training's value.