November 2013
TD Magazine

Psychological Biases May Influence Feedback on Evaluations

Friday, November 8, 2013

A new study takes a closer look at the reliability of Level 1 feedback.

Intelligence
Trainers often use participant feedback from Kirkpatrick Level 1 evaluations to make significant changes to their training content or delivery—or both. But a recent study I led suggests that certain psychological biases can skew the accuracy of Level 1 feedback, rendering it less reliable.

The study simulated a typical training session, and out of 934 total participants, 804 (86 percent) rated the training better than average. Additional sessions were then altered in "negative" ways (such as the facilitator speaking in a monotonous tone, purposely stumbling or halting while speaking, or ignoring audience questions), yet 82 percent of participants still rated the training better than average.

There are a few possible reasons for participants responding more positively than expected on these Level 1 evaluations.

First, participants may be subject to response bias, a cognitive bias in which an individual is led to answer a question in a particular way. In training contexts, participants who are given feedback surveys are primed to assume that the presenter wants positive answers, and that assumption alone can push them to respond favorably.

Person-positivity bias also may be at play. It causes people to think more favorably of individuals than of groups. Studies have found that a person is rated more highly when judged on his own than when viewed as part of a group. Training evaluations may be positive simply because participants sympathize with the trainer as an individual.


The goal-gradient hypothesis is another factor that might skew Level 1 feedback. In a training context, it suggests that as the participant nears the goal (completing the training), she is likely to rate the training more favorably than she would have at an earlier point in the course.

This study suggests that Level 1 evaluations should not be taken at face value. As trainers, we need to exercise caution in how we use this feedback to make adjustments to our programs. Level 2 through 4 evaluations may provide more reliable feedback on what truly has been learned, transferred, and retained.

About the Author

Dr. Ben Locwin has been a frequent collaborator with ATD, including as a member of the Advisory Board for the Healthcare Community of Practice.

He is a behavioral neuroscientist and author of a wide variety of scientific articles for books and magazines, as well as an acclaimed speaker. He is an expert media contact for the American Association of Pharmaceutical Scientists and a committee member of the American Statistical Association. He also provides expertise to organizations on human learning and performance, and advises on a range of business, healthcare, clinical, and patient concerns.

He is an author and an international speaker, and he has hired more than 1,000 people for high-criticality roles. Says Dr. Locwin, "I have refined my approaches empirically, using deep humility to challenge long-held beliefs and preconceptions that plague these aspects of the people-centered discipline."

Follow him on Twitter: @BenLocwin.
