
Debunking 2 Myths About Learning Evaluation and Alignment

Wednesday, November 23, 2016

Research shows that 96 percent of CEOs want to see business impact from learning; however, only 8 percent actually do. CEOs turn to their learning leaders to close this gap by better aligning learning with business strategy. Learning leaders, though, face several challenges both in aligning learning with strategy and in demonstrating business impact through learning evaluation.

The complexity and elusiveness of learning evaluation have often been likened to the quest for the Holy Grail: a quest fraught with inconsistencies and myths. Let's debunk two such myths.

Myth 1: There Are Only Two Ways to Evaluate Learning 

When it comes to learning evaluation, most L&D leaders typically think of just two models: the Kirkpatrick Model, which has dominated the learning evaluation narrative, and the Phillips ROI Model. But according to research conducted by the Institute for Employment Studies, several other training evaluation models have been developed throughout the past 50 years. These include the Hamblin Model, the Organizational Elements Model (OEM), the Indiana University Model, the IS Carousel of Development Model (Carousel), the Phillips ROI Model, the Kearns and Miller Model (KPMT), and the Context, Input, Reaction, and Outcome (CIRO) Model.

Most of these models expand upon the Kirkpatrick model and try to address some of its limitations, including criticisms by researchers that the interdependence and implied causality of each of the four levels has not been proven by empirical research. The table below provides a high-level view of these models and shows the various components of each level of evaluation.

[Figure 1: Learning evaluation models compared, showing the components of each level of evaluation]

As you can see, four of these models (Indiana University, IS Carousel, KPMT, and CIRO) add a stage prior to Kirkpatrick's "Reaction" level in order to evaluate the business-need context. In addition, the Hamblin, OEM, Indiana University, and Phillips ROI models add a fifth level after Kirkpatrick's final "Results" stage to evaluate broader societal value and impact.


So the next time your team is discussing learning evaluation models, you can debunk the myth that only two exist. 

Myth 2: In the Real World, Learning Alignment Cannot Be Achieved 

Learning alignment can be complex. Evidence shows that organizations that evaluate learning consistently are better able to align their learning goals with their strategic goals and move the needle. One such organization is the Defense Acquisition University (DAU), which during the past decade has made significant strides in learning evaluation.

DAU serves the defense industry with a mission to “provide a global learning environment to develop qualified acquisition, requirements and contingency professionals who deliver and sustain effective and affordable warfighting capabilities.” In 2015, DAU graduated 173,773 professionals, delivering more than 7 million hours of learning and 310 online learning modules with 700,000 completions.

Today, DAU is recognized as a best-in-class organization for learning alignment. It has won 26 awards, including Best Overall Corporate University in the World from the Global Council of Corporate Universities (GCCU); the Corporate University Award of the Year for North America from CUBIC (Corporate University Best in Class); Elearning Top 100 Best Government Organization; and Brandon Hall and CLO LearningElite awards. DAU focused on aligning learning with strategy not to increase profits, as most other learning organizations do, but to ensure the safety of the nation.

“In this business, we cannot afford to be second place. As our workforce is successful, so are the men and women of our Armed Forces. Their success in training and on the job ultimately translates to the safety of our nation and the achievement of our national interests,” says Dr. Christopher Hardy, director of strategic planning and analytics at DAU.

DAU uses the Kirkpatrick Model to evaluate learning, deploying Metrics That Matter (CEB) surveys immediately following a course to evaluate the first two levels of the model (Level 1: Reaction and Level 2: Learning), which it defines as consumptive metrics. After 60 days, DAU sends out another survey to learners and their supervisors to evaluate Level 3 (Application) and Level 4 (Business Impact) of the Kirkpatrick Model.

DAU deploys text mining on some 50,000 surveys to identify patterns in learner responses to particular courses and to analyze root causes in low-performing courses. This approach ultimately drives learning content improvement and strengthens the alignment of that content with the strategic goals of the Department of Defense. The DAU example shows that learning can be evaluated effectively and aligned to strategy, clearly debunking Myth 2.
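To make the idea of pattern mining on survey comments concrete, here is a minimal sketch of one common technique: counting recurring terms in free-text responses to surface likely root causes. The comments and stop-word list below are hypothetical illustrations, not DAU data, and DAU's actual tooling is not described in this article.

```python
import re
from collections import Counter

def top_terms(comments, stop_words, n=3):
    """Return the n most frequent non-stop-word terms across free-text comments."""
    words = []
    for comment in comments:
        # Lowercase, split into word tokens, and drop common filler words.
        words += [w for w in re.findall(r"[a-z']+", comment.lower())
                  if w not in stop_words]
    return Counter(words).most_common(n)

# Hypothetical comments from a low-scoring course.
comments = [
    "The pacing was too fast and the examples were outdated",
    "Outdated examples; pacing needs work",
    "Good instructor but outdated course materials",
]
stop = {"the", "was", "and", "were", "but", "too", "needs"}

print(top_terms(comments, stop))  # "outdated" surfaces as the top recurring term
```

A real pipeline over tens of thousands of surveys would add stemming, phrase detection, and per-course grouping, but even this simple frequency pass shows how recurring language points to a candidate root cause.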

Bottom line: By debunking these two myths, we have provided some guidance for evidence-based reflection and analysis in your organization—just in time for your 2017 talent development strategy planning meetings with your team and CEO.

About the Author
Marina Theodotou, EdD, is a learning faculty member at the Defense Acquisition University. She is a learning and development leader with global business experience across private, government, and nonprofit sectors. She helps organizations increase productivity, profitability, and performance by optimizing their talent through the design and development of relevant, game-changing, and measurable learning curricula. Marina is a certified Lean Six Sigma Black Belt.