TD Magazine, February 2016

It’s Time for a New Type of Smile Sheet

Monday, February 8, 2016

Traditional smile sheets are ubiquitous, but do they really give us the feedback we need?

Smile sheets are everywhere. In our classrooms. In our e-learning. In our conference sessions. And yet, nobody seems to trust them—even though we've been using smile sheets the same way since Plato handed Aristotle a smile sheet back in 333 BC (date is approximate). Smile sheets go by many names: feedback forms, response forms, reaction forms, and even happy sheets. No matter what they're called, they often bring trouble. Consider the following scenarios.

Organizational paralysis

John is the chief learning officer for a major government agency. He knows that the subject matter experts his agency hires to train employees aren't using effective learning designs. They're well-intentioned and they know their stuff, but they tend to lecture and cram too much content into their sessions.

John has tried everything. He's gotten his instructional designers to critique their courses and offer help in redesign. He's put more stringent guidelines into contracts. He's provided educational sessions for the trainers. But regardless of what he tries, he hears the same refrain: "We must be doing something right; we're getting good marks on our smile sheets."

Garbage-in garbage-out problem

Sandra is a learning-evaluation expert working for a global pharmaceutical giant. Every year she spends weeks developing a beautiful and sophisticated report to send to senior management. She compiles smile sheet data for every course and every e-learning program.

She continually receives kudos for her learning-evaluation designs and her gorgeous report. Yet she knows, deep in her heart, that her report is practically meaningless, suffering from the garbage-in garbage-out problem. The questions she's allowed to use on the smile sheets simply aren't good enough to capture meaningful data about learning effectiveness.

Smile sheet data as ornamentation

Hamid was recently hired as a training director for a midsize manufacturing company. The first week on the job, Hamid's boss Ralph—the head of talent development—calls a meeting to show Hamid the evaluation system they've been using for the past seven years. Ralph clearly loves the way they've collected data. He beams as he explains the minutia to Hamid.

Toward the end of the meeting, Hamid asks, "So what have you been able to do with the data?" Ralph shows him the report they've created. Paging through the spiral-bound document, Hamid asks, as innocently as he can, "Have you been able to use the data to improve our training or e-learning?" Ralph responds in a choked whisper, "No, we just do this for the execs. Nobody on the learning side uses the data." Hamid swallows hard, worrying that maybe he's joined the wrong company.

Benchmarking blindness

Frankie, known for her optimism and innovation, has learned the hard way how difficult it can be to create good smile sheets and deploy them wisely. That's why, in her role as her company's chief talent development officer, she was thrilled when she was approached by a supplier that had developed a smarter smile sheet and had done the difficult work of collecting benchmark data on learners in companies throughout the world.

Frankie immediately hired the supplier. She couldn't wait to get the year-end report so that she and her team could analyze the data and share it with senior management.

As Frankie convened her team and began digging into the data, one of the newly hired instructional designers asked, "How do we know that these other companies—the ones we're being benchmarked against—are creating effective learning designs?" Frankie immediately saw the implication and, after dropping her eyes, said, "Well, we don't, of course. This just helps us see if we're really off track. We've still got a ton of work to do to improve our own effectiveness."

Learning from experience

I've been around the talent development field for 30 years. In that time, I've looked at smile sheets from many angles—as an instructional designer, a trainer, a workshop provider, a conference speaker, and a consultant.

As a novice leadership trainer, I used smile sheets to improve my hauntingly mediocre performance. Later, as I provided workshops for my research and consulting company, I wanted a smile sheet to help me improve my workshops. But I found that most smile sheet designs were unworthy—they just didn't help me see what was good and what was bad.

As a consultant, I saw clients struggle with learning measurement. I saw wicked-smart CLOs paralyzed by smile sheets. Their trainers were getting good ratings, so they had no incentive to improve. Too often smile sheets kill the political will to push for training-design improvements. I've seen organizations hoodwinked into believing that they should benchmark themselves against other companies—even where the benchmark data were practically meaningless. I've also seen instructional designers buying into the use of poorly constructed smile sheet questions because the questions were put forth by so-called measurement experts.

I've made tons of mistakes in designing my own smile sheets and I'm sure I'll make more, but I've learned a few things along the way. First, I learned that smile sheets are very important. In the talent development industry, they are the number one way we get feedback about the success or failure of our learning programs. The Association for Talent Development, partnering with the Institute for Corporate Productivity, estimates that smile sheets are the most-used feedback mechanism, employed far more often than tests of learning, application, or organizational results.

I've also learned that there is no perfect smile sheet and no perfect smile sheet question. There are trade-offs in every measurement decision we make. Our goal, then, is not to create perfection, but to create an instrument that measures information relevant to learning effectiveness and provides reasonably good data about that targeted information.

Smile sheets have the potential to give good feedback; the written comments are especially valuable. Many learners want to help us when they answer our smile sheet questions. Of course, they're more likely to engage our smile sheets when we give them good questions to answer.

Depending on our smile sheet design, we can support or harm learners' decision making. For example, one of the things we know about memory is that people forget. Well, people forget in our learning programs too.

When we give them a smile sheet at the end of a three-day program, we hold the incredible conceit that our learners are remembering every aspect of the program. They probably remember clearly the last hour of the program and maybe the first hour of the first day, but certainly a lot of what went on in the interim is rather fuzzy. To help learners make well-informed smile sheet decisions, we may have to remind them of what they've learned.

What's wrong with traditional smile sheets

One of the perennial questions in the scientific research is whether smile sheets are correlated with learning results. Presumably, we hope that our smile sheets are telling us something about the success or failure of our learning initiatives in creating beneficial outcomes.

As it turns out, traditional smile sheets are virtually uncorrelated with learning results. Two meta-analyses—one done in the 1990s and one done in the 2000s—examined more than 150 scientific studies and found that smile sheets correlate with learning results at only r = 0.09.

To put this 0.09 finding in perspective, statisticians tell us that correlations below 0.30 are weak. A correlation of 0.09, then, indicates practically no relationship at all. It would be like correlating hair color with flu symptoms. And let me be clear about what the research says: If we get high marks on our smile sheets, we could have a very effective training program, or a very poor one. If we get low marks, we could have a poor learning design, or a very good one. With traditional smile sheets, we just can't tell.
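
To make the size of that number concrete, here is a minimal sketch in Python (purely illustrative, using simulated data) of what a near-zero correlation looks like in practice:

    # Illustrative only: simulated ratings and test scores with essentially
    # no relationship, mimicking the near-zero correlation the meta-analyses found.
    import numpy as np

    rng = np.random.default_rng(2016)
    n = 500

    ratings = rng.integers(1, 6, size=n)   # 1-5 smile sheet ratings
    scores = rng.normal(70, 12, size=n)    # learning-test scores

    r = np.corrcoef(ratings, scores)[0, 1]
    print(f"Pearson r between ratings and scores: {r:.2f}")  # close to 0.00

    # High smile sheet ratings coexist with both weak and strong learning.
    high_raters = scores[ratings >= 4]
    print(f"Scores among happy raters: min {high_raters.min():.0f}, "
          f"max {high_raters.max():.0f}")

Whatever the training quality behind those scores, the ratings reveal almost none of it.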

When I first heard these research results, my knee-jerk reaction was that we should just throw out our smile sheets. If they tell us nothing, we're better off without them. But then I remembered that smile sheets helped me learn. Even more importantly, I came to the obvious realization that many organizations won't get rid of smile sheets. There are benefits, but there is also the weight of tradition and the value we get in honoring learners by asking for their input.

I knew from looking at the scientific research on learning that some learning factors were more important in learning design than others. If we could just channel what we know about learning into our smile sheet designs, we would create better smile sheet questions.

I also began looking at the practical realities around how smile sheets are delivered and how their results are analyzed. One thing I knew from running a product line in the leadership development space was that numeric averages did not do enough to guide action in improving our learning designs.

Advertisement

The trouble with Likert-like scales

One of the oddest things we do in the training field is use Likert-like scales on our smile sheets. We provide learners with statements and then ask them to respond as follows:

  • strongly agree
  • agree
  • neither agree nor disagree
  • disagree
  • strongly disagree.

These fuzzy response terms have two major downsides. First, they make it hard for learners to calibrate their responses. Second, when we report out the data, we create a formidable problem. What we tend to do, without a hint of regret, is turn the answer choices into numbers. Statisticians tell us that treating this ordinal data as interval data is a nonstarter, but we do it anyway: strongly agree equals 5, agree equals 4, and so on.

Then we take an average of learner responses. When we take these averages, we lose information about how widely dispersed the responses are and about whether the data are skewed up or down.

When we analyze our smile sheet results, we typically use only one question of the many that are asked. Our course gets categorized based on that one question. "My course is a 4.1!" What do we do with a number such as this? Is it good? Is it bad? It's impossible to be definitive.
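
A quick sketch (hypothetical response data, in plain Python) shows why a lone average misleads: two courses can earn an identical mean while telling completely different stories.

    # Two hypothetical courses with the same mean rating but very
    # different response distributions.
    from collections import Counter
    from statistics import mean, stdev

    course_a = [4] * 10                          # everyone mildly satisfied
    course_b = [5, 5, 5, 5, 5, 5, 3, 3, 2, 2]    # polarized responses

    for name, responses in (("Course A", course_a), ("Course B", course_b)):
        print(name,
              "mean:", mean(responses),
              "stdev:", round(stdev(responses), 2),
              "counts:", dict(sorted(Counter(responses).items())))

Both courses are "a 4.0," yet one of them has a cluster of disaffected learners that the average completely hides.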

Note that numeric scales are just as bad, if not worse, because learners' numeric responses are characteristically unclear in their meaning.

Trusting learner responses

Inherent in the idea of the smile sheet is that we are trusting learners to tell us what they think of the learning experience. Yes, of course, learners are not experts on learning, but they should know their own learning, shouldn't they? Research is pretty clear that learners don't always know their own learning.

This doesn't mean that we should avoid asking learners questions, but it does mean that we should ask them questions that they're good at answering and that are related to learning effectiveness. It also means that we should avoid fuzzy response terms like those in Likert-like scales because the fuzziness itself can enable bias to sneak into a learner's decision making.

Improving smile sheets

To improve smile sheet questions, we have to follow two rules. First, we have to focus on true learning effectiveness, targeting the factors that the learning research shows matter most. Second, we have to help learners make good smile sheet decisions while also creating data that are meaningful and actionable.

To focus on learning effectiveness, take the following four critical goals into account. I call them the "four pillars of training effectiveness."

  • Do the learners understand?
  • Will they remember?
  • Are they motivated to apply?
  • Are there after-training supports in place?

Smile sheet questions should address these goals or focus on overall effectiveness in helping people perform better on the job. The questions ought to prevent the garbage-in garbage-out problem by being free of bias and supporting learners in smile sheet decision making. They also ought to create data that are actionable—more actionable than numeric averages. Here are some guidelines from my book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form:

  • To support their decision making, remind learners of the learning experience and learning content.
  • Keep learners attentive as they work through the smile sheet: persuade them of its value, avoid asking too many questions, and use good question design to entice them to engage.
  • Avoid biasing questions, which may seem easy, but is quite tricky—tricky enough to warrant the use of a measurement expert.
  • Ask clear, relevant questions that will be used in later decision making.
  • Ask questions that learners can answer knowledgeably.
  • Provide descriptive, easily understood answer choices rather than numeric or Likert-like response scales (see the sketch after this list).
  • Provide delayed smile sheets in addition to end-of-training smile sheets, where warranted, to gain insight into whether learners are able to put what they've learned into practice.
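
To illustrate the answer-choice guideline above, here is a minimal sketch (hypothetical question wording and answer labels) of reporting descriptive answer choices as percentages rather than collapsing them into an average:

    # Hypothetical descriptive answer choices for a single smile sheet
    # question, reported as percentages rather than a numeric average.
    from collections import Counter

    CHOICES = [
        "I am unable to put these skills into practice",
        "I need more guidance before I can apply these skills",
        "I can apply these skills, but with some effort",
        "I can apply these skills fully and immediately",
    ]

    # Simulated responses, recorded as the descriptive labels themselves.
    responses = ([CHOICES[3]] * 14 + [CHOICES[2]] * 9 +
                 [CHOICES[1]] * 5 + [CHOICES[0]] * 2)

    counts = Counter(responses)
    for choice in CHOICES:
        print(f"{100 * counts[choice] / len(responses):5.1f}%  {choice}")

A finding that one in six learners still needs guidance points directly at a fix, in a way a 4.1 average never can.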

Making the case for performance-focused smile sheets

Smile sheets do have benefits and can be used for many reasons. Here’s a short list, which I’ve modified slightly from learning-measurement expert Rob Brinkerhoff:

  1. Red-flagging training programs that are not sufficiently effective.
  2. Gathering ideas for ongoing updates and revision of a learning program.
  3. Judging strengths and weaknesses of a pilot program to enable revision.
  4. Providing instructors with feedback to aid their development.
  5. Helping learners reflect on and reinforce what they learned.
  6. Helping learners determine what (if anything) they plan to do with their learning.
  7. Capturing learner satisfaction data to understand—and make decisions that relate to—the reputation of the training program and the instructors.
  8. Upholding the spirit of common courtesy by giving learners a chance for feedback.
  9. Enabling learner frustrations to be vented—to limit damage from negative back-channel communications.

About the Author

Will Thalheimer is a learning expert, researcher, instructional designer, business strategist, speaker, and writer. He has worked in the learning and performance field since 1985. In 1998, Will founded Work-Learning Research to bridge the gap between research and practice, compile research on learning, and disseminate research findings to help chief learning officers, instructional designers, trainers, e-learning developers, performance consultants, and learning executives build more effective learning and performance interventions and environments. He speaks regularly at national and international conferences. Will holds a BA from the Pennsylvania State University, an MBA from Drexel University, and a PhD in educational psychology: human learning and cognition from Columbia University.
