ATD Blog

Are Smile Sheets Supported by Research?

Thursday, September 17, 2015

Sally is the chief learning officer for a large, well-known bank. She was recently brought in by her CEO to “professionalize” the Learning and Performance function within the company. Sally inherited a very extensive learning measurement system that rigorously cataloged the results of each training course using Likert-like questions and numerical averaging.

As Sally sifts through the data, she notices something curious. Instructors whose smile sheet ratings were consistently low were still teaching, even after several years of poor results. Sally, of course, is thrilled! She’s been mandated to professionalize the learning function and already she’s found a problem she can fix. She makes a plan. She’ll bring her team together to improve—or fire—the low-performing trainers.

Brilliant, right? Maybe! It depends on whether Sally’s assumptions are accurate:

  • Instructors who get the best smile sheet ratings will help their learners achieve the best learning results and the best on-the-job performance. 
  • Instructors who get the worst smile sheet ratings will produce learners who achieve the worst learning results and the worst on-the-job performance.

Interestingly, Sally’s predecessors never thought to correlate their smile sheet ratings with learning results or on-the-job performance. Sally, like most L&D professionals, doesn’t think of doing this either. She just assumes that smile sheet results are related to learning outcomes. Indeed, as an industry, we tend to assume that smile sheets are telling us something important about the successes and failures of our training efforts. Are we right? Is Sally right? And does it really matter, anyway?
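
If Sally wanted to run that check, the analysis itself is simple. Here is a minimal sketch in Python, using hypothetical, illustrative numbers; in practice you would pull each course’s average smile sheet rating and a matching learning measure (post-test scores, say, or on-the-job performance ratings) from your own records:

    # Correlate per-course smile sheet averages with a learning measure.
    # The numbers below are hypothetical -- a real analysis would draw on
    # the organization's LMS and assessment data.
    from scipy.stats import pearsonr

    smile_sheet_avg = [4.6, 3.1, 4.8, 2.9, 4.2, 3.7, 4.9, 3.3]  # 5-point scale
    post_test_score = [72, 81, 68, 77, 85, 70, 74, 79]          # percent correct

    r, p = pearsonr(smile_sheet_avg, post_test_score)
    print(f"r = {r:.2f} (p = {p:.3f})")

If r comes back near zero, the smile sheet ratings are telling you little or nothing about learning.
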
In 2013, ATD and i4cp teamed up to examine how organizations measure the effectiveness of their learning programs. The research, reported in The Value of Learning: Gauging the Business Impact of Organizational Learning Programs, found that Level 1 smile sheets were by far the most common method of measuring learning solutions. One implication of these findings is that we, as workplace learning professionals, get most of our feedback from smile sheet data.

Consequently, because feedback enables improvement, the validity of smile sheet data is critical. In other words, if we’re not getting good data from our smile sheets, we’re not going to be able to create maximally effective learning.

Research on Smile Sheets

So, what does the research say? Are smile sheets effective in giving good feedback? The short answer is no. Two meta-analyses in the scientific literature, covering more than 150 research studies, found that smile sheets are virtually uncorrelated with learning results (see Further Reading below). Specifically, both meta-analyses found a correlation of r = .09 between smile sheet ratings and learning results.

Statisticians tell us that correlations below r = .30 should be considered weak correlations. Therefore, a correlation of r = .09 is practically no correlation at all! It would be like correlating smartphone usage with hair color.
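
To put that number in perspective, squaring a correlation tells us the proportion of variance two measures share:

    r² = .09 × .09 = .0081

That is, smile sheet ratings account for less than 1 percent of the differences we see in learning results, leaving more than 99 percent unexplained.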

Bottom line: Sally’s assumptions were faulty. Our assumptions, as an industry, have been faulty for decades. We’re all caught in the delusion that smile sheets tell us something important, when in fact they may regularly mislead us.


A Smarter Smile Sheet

Wow! What should we do then? Should we throw out our smile sheets? Should we collect data to be polite and then ignore it? I don’t think so. After thinking about the smile sheet problem for a decade or so, I’ve come to the conclusion that if we, as L&D professionals, redesign our smile sheets from the ground up—based on the science of learning and on guidelines from the measurement field—we can build effective smile sheets.

In Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form (more info available at www.SmileSheets.com), I describe how to build a much more effective smile sheet. There are two critical aspects.

First, questions should be based on the science of learning. Because we’re asking our smile sheets to tell us about learning, it makes sense to base our smile sheet designs on learning science.

Second, we have to do a better job of designing our smile sheet questions. Most of us use Likert-like scales or numerical items, both of which are fundamentally inappropriate for smile sheet use. We also have to help our learners make good smile sheet decisions. According to research from Brown, Roediger, and McDaniel (2014), as well as Kirschner and van Merriënboer (2013), learners don’t always know their own learning. Because of this, asking them questions about their learning may give us faulty information—unless we design our questions to support their smile sheet decision making. Too many of our questions ask learners for judgments they are not prepared to make.

Here are some of the major areas that smile sheet questions should target:

  • ability of learners to successfully apply what they learned to their work 
  • learners’ full comprehension of what was presented to them 
  • learners’ capability to remember what they’ve learned 
  • level of motivation that learners will bring to implement what they’ve learned 
  • availability of post-training resources that will guide and enable learners to persevere in putting their learning into practice.

Most smile sheets do a poor job of targeting these issues, which is unfortunate, because it is precisely these research-inspired factors that allow learning effectiveness to be discerned.

Bottom Line


Most of our smile sheets are ineffective and provide poor feedback. Fortunately, we can build better ones. Of course, smile sheets will never give us all the information we need. It’s critical that we also measure learning by focusing on learner comprehension, decision making, and memory, and that we measure application by examining on-the-job performance and the factors that produce it.

While smile sheets are never enough, most organizations will continue to use them, so we ought to make them as effective as possible. Scientific research makes it clear that our current smile sheet efforts are ineffective. Fortunately, the science of learning and wisdom from the measurement field give us hope for better smile sheets.

Like what you read? Join us at the ATD 2016 International Conference & Exposition in Denver, CO, May 22-25 to hear Will Thalheimer speak.

Further Reading

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50(2), 341-358.

Brown, P. C., Roediger, H. L. III, & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: Belknap Press of Harvard University Press.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169-183.

Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93(2), 280-295.

About the Author

Will Thalheimer is a learning expert, researcher, instructional designer, business strategist, speaker, and writer. He has worked in the learning and performance field since 1985. In 1998, Will founded Work-Learning Research to bridge the gap between research and practice, compile research on learning, and disseminate research findings to help chief learning officers, instructional designers, trainers, e-learning developers, performance consultants, and learning executives build more effective learning and performance interventions and environments. He speaks regularly at national and international conferences. Will holds a BA from the Pennsylvania State University, an MBA from Drexel University, and a PhD in educational psychology: human learning and cognition from Columbia University.
