A new study takes a closer look at the reliability of Level 1 feedback.
The study simulated a typical training session, and out of 934 total participants, 804 (86 percent) rated the training better than average. Additional sessions were then altered in "negative" ways (such as the facilitator speaking in a monotonous tone, purposely stumbling or halting while speaking, or ignoring audience questions), yet 82 percent of participants still rated the training better than average.
There are a few possible reasons why participants respond more positively than expected on these Level 1 evaluations.
First, participants may be subject to response bias, the tendency to answer a question in the way the respondent believes is expected. In training contexts, participants who are given feedback surveys are primed to assume that the presenter wants them to answer positively, and that assumption alone can lead them to do so.
Person-positivity bias also may be at play: people tend to think more favorably of individuals than of groups. Studies have found that a person is rated more highly when evaluated alone than when viewed as part of a group. Training evaluations may be positive simply because participants sympathize with the trainer as an individual.
The goal-gradient hypothesis is another factor that might skew Level 1 feedback. In a training context, it suggests that as a participant nears the goal (completing the training), she is likely to rate the training more favorably than she would have earlier in the course.
This study suggests that Level 1 evaluations should not be taken at face value. As trainers, we need to exercise caution in how we use this feedback to adjust our programs. Levels 2 through 4 evaluations may provide more reliable feedback on what truly has been learned, transferred, and retained.