First identified by Don Kirkpatrick in his seminal works on training evaluation, Level 1 evaluation helps an organization assess participants' reactions to a learning event, facilitator, setting, materials, and learning activities. Like a canary in the proverbial coal mine, Level 1 is an early indicator of whether a participant valued the experience enough to speak positively about it to others.
It can also flag a negative experience, one likely to inhibit learning and on-the-job application and to prompt the learner to speak poorly of the training to others. Level 1 should not, however, be the only training evaluation tool you use.
What is it?
Level 1 evaluation measures reaction: the trainees' perceptions of the learning experience. The intent is to determine whether trainees liked the training and found it relevant to their work, so you can make improvements that positively affect learning. There are several areas of consideration: the program content, sequencing, and materials; the logistics of the training and the training environment; the instructor's facilitation, interaction, and organization of the content; and the expectation for applying the training back on the job.
1| Determine how the data will be gathered.
- Use a paper or electronic questionnaire immediately after the learning event.
- Use interviews and focus groups as a follow-up to the reaction questionnaires to gather more information.
- Provide in-session polling with an audience response system tool.
- Gather in-session data with wall charts, index cards, and other implements.
- Connect with the learner using social media such as Twitter or Facebook to find out what they are really saying about the training.
2| Choose your timing. Decide when you'd like to gather the data. Will you wait until the end of a session or unit? Depending on the length of the program, will you collect reactions at the end of the program, each day, or each module? If you are running a pilot program, what do you want to know during the session so that you can make adjustments?
3| Align to your desired outcomes. Determine the business outcomes with your stakeholders first and design your Level 3 and 4 evaluations. Determine the learning objectives and the KSAs (knowledge, skills, and abilities) for the Level 2 evaluation. Write your Level 1 evaluation to assess delivery and the learning experience.
4| Write questions that ask for quantified responses. Most questions are subjective, including Likert-scale items such as "strongly agree" through "strongly disagree." Decide what you want to know: "Did the training include practice opportunities?" with numeric response options tells you something different than "Did you like the practice opportunities?" If you use Likert-scale questions, provide a comments section so respondents can explain their ratings.
5| How will you use the evaluation information? If you ask how the learner rates the usefulness of the training on a scale of 1 to 5, what will you do with that information? Think about what you want to know, and ask questions that give you those answers. Pay attention to the answers. Asking the right questions helps you make sure the training is doing what it's supposed to do.
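To act on quantified responses like these, you need a simple way to roll them up and spot weak areas. The sketch below is one illustrative approach, not part of any formal Kirkpatrick method: the questions, ratings, and the 3.5 flag threshold are all hypothetical, assuming 1-to-5 Likert ratings collected per question.

```python
from statistics import mean

# Hypothetical Level 1 data: each question maps to the 1-5 Likert
# ratings gathered from participants (illustrative values only).
responses = {
    "The content was relevant to my job": [5, 4, 4, 5, 3],
    "The training included practice opportunities": [2, 3, 2, 1, 2],
    "The facilitator was well organized": [4, 5, 4, 4, 5],
}

THRESHOLD = 3.5  # assumed cutoff: flag items below this average


def summarize(responses):
    """Return (question, average, flagged) tuples sorted low to high."""
    summary = []
    for question, ratings in responses.items():
        avg = round(mean(ratings), 2)
        summary.append((question, avg, avg < THRESHOLD))
    return sorted(summary, key=lambda row: row[1])


for question, avg, flagged in summarize(responses):
    marker = "NEEDS ATTENTION" if flagged else "ok"
    print(f"{avg:.2f}  {marker:15}  {question}")
```

Sorting lowest-first puts the items that need follow-up, such as a lack of practice opportunities, at the top of the report, which is exactly the "what will you do with that information?" question step 5 asks you to answer before the survey goes out.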
Myth 1. Level 1 is just a "smile sheet" and offers no relevant or useful information.
Fact. Level 1 evaluation has its place in helping uncover content sequencing problems, delivery and facilitation problems, and training environment problems, allowing you to quickly rectify them. It also identifies what the learner perceives to be going well.
Myth 2. If the learners are happy, then the training is a success.
Fact. Happy and unhappy learners alike may or may not have acquired knowledge and skills. You must conduct a Level 2 evaluation to determine whether they have learned the content.
Myth 3. Learners can accurately assess the training's value to their actual job.
Fact. Learners may not know how they will use the training or whether the job environment is conducive to applying new knowledge. This validation should happen during the instructional design phase, with subject-matter expert input and manager support.
Learners provide their subjective perceptions about the pace, delivery, organization, and activities of the program. They can be biased by the difficulty of the material, the personality of the trainer, and other factors. The Level 1 evaluation should be one tool in your evaluation arsenal, identifying improvements that will strengthen the program. Make sure that the Level 1 reaction form includes
- reaction to the content
- reaction to the effectiveness of the trainer or facilitator
- reaction to the materials, such as handouts, audiovisuals, case studies, and activities.
Why it works
Level 1 evaluation, used correctly, has a significant place in understanding the satisfaction of the learner. Immediate feedback helps the facilitator and organization make needed adjustments to the program. Level 1 is so much more than a smile sheet!