ATD Blog
Fri Oct 16 2015
The sole purpose of a formative evaluation is to improve the draft training program and increase the likelihood that it will achieve its objectives when it becomes available. As a result, you conduct a formative evaluation while the training program is under development and use the findings to revise the training program before it becomes generally available to learners.
Specifically, the formative evaluation assesses these characteristics of the training program:
Clarity: Learners should be able to comprehend content on the first explanation and follow exercises with no additional assistance, other than that provided in the instructions. Learners should not be slowed by inconsistencies in content or terminology, nor by grammatical errors or awkwardly presented content.
Accuracy: The material should be current, correct, and complete.
Performance: Printed pages should match what appears on screen in the word processor. Slides should appear on the projector as they do on the computer screen. Addresses of websites shown in class should be accurate and currently working. Hands-on exercises should work as intended.
To make sure your training program is clear, accurate, and performs properly, conduct these three types of formative evaluations:
pilot tests, in which you run the training program for the first time with people who represent the intended learners
technical reviews, in which SMEs verify the accuracy of the content
production reviews, in which editors assess the completeness and style of the content, and production specialists make sure that the printed and projected output matches your intentions.
A pilot test is one in which you take the training program for a trial run—that is, you deliver it for the first time to assess which parts work and which ones need improvement. When identifying parts of the training program that need further work, identify as specifically as possible what is not working and suggest possible improvements. You generally conduct a pilot test with the second draft of student materials and instructor’s materials and with participants who have characteristics similar to those of the intended learners.
Because the draft training program has not yet been demonstrated to be effective, and because the pilot test is intended only to assess whether the training program works, do not use the test results to assess the performance of learners. The pilot test might reveal errors in teaching sequences, tests, or similar assessment activities; such errors, in turn, affect how well learners perform the skills covered by the objectives.
To conduct a pilot test, follow this suggested procedure, adjusting it to the situation in your organization.
#1: Reserve a physical or virtual classroom for the pilot well in advance.
For a physical classroom, make sure that it has the audiovisual and computer equipment needed (such as a projector for slides and a computer for every learner if the training program includes computer-based activities). Also make sure that the classroom can be set up to meet your needs (for example, you might want learners to sit at tables so they can work in groups) and can accommodate the number of learners in the pilot.
For a live virtual classroom, make sure that it can accommodate the desired number of learners (most virtual classrooms limit the number of people who can participate at one time). Also make sure that it has the technical capabilities needed to present the training program, such as the ability to display slides with animations (most virtual classroom software can show slides, but some packages do not display animations), show the desktop of the instructor's computer, conduct polling questions, chat and speak with participants, and let participants enter the class from their smartphones and tablets, rather than just computers.
#2: Recruit between eight and 15 learners to participate in the pilot.
If you recruit fewer than eight, you might not receive a sufficiently broad perspective on the training program. If you recruit more than 15, however, you might not be able to debrief each learner. Learners recruited for the pilot should be supportive of the program or, at the very least, not openly hostile toward it. If you are a course developer and someone else will be teaching the training program when it becomes generally available, also recruit an instructor to teach the pilot so you can assess the effectiveness of the instructor’s materials.
#3: Send a reminder.
Be sure to send a reminder (typically via email) to all of the participants between two and five working days before the pilot of the training program is scheduled.
#4: Prepare the materials.
For face-to-face classroom pilots, print and copy student materials, including copies of slides, so you can distribute the materials when the training program starts. For live virtual classroom pilots, send the student materials before the program starts.
#5: Remind participants that this is a test.
At the beginning of the pilot, remind learners that this is a test of the training program, not of them. Reinforce that if learners do not understand something or feel that instructions could be clarified, they should assume that the problem is with the training materials, not themselves. Ask learners to mark any issues on their copies of the materials, describe their concerns, and offer suggestions (if they have any) for improvement. Also inform learners that you will pause to solicit their feedback at several points in the pilot.
#6: Run the training program and, at appropriate intervals, pause and ask for feedback.
This request for feedback is called a debriefing. Some instructors like to debrief a pilot after every unit, while the comments are still fresh in learners' minds. Other instructors like to debrief at the end of each day, to avoid interrupting the flow of the class. Choose an interval that feels comfortable to you. Begin the debriefing by reminding learners again that the pilot is a test of the training program, not of them. Then ask the learners questions that solicit their feedback. For example: What material works well? What specific information or instructions were unclear? Do you have specific suggestions for improving them?
#7: Conduct a debrief.
At the end of the pilot, conduct an end-of-program debriefing that considers the entire training program, not just a single unit or day. Ask learners about their overall impressions of the program, and then ask them to identify parts of it that were effective and parts that need revision. Encourage learners to provide specific suggestions on ways to fix the problems they identified; the more specific their feedback, the better you can address their concerns. Conclude the debriefing by asking learners whether they plan to apply the skills in their jobs and, if they do not plan to do so, why not and what would help them do so.
#8: Assess the results.
Assess learners’ performance on tests and assessment activities to make sure:
that questions and activities really address the objectives
that learners understand the instructions and questions
that learners have really been taught the material so they have an opportunity to perform successfully on the assessments.
#9: Review your notes.
After completing the pilot, review your notes. Categorize the comments as A (showstoppers: design and development should not continue until you address these issues), B (must change: design and development of the program can continue, but you must address these issues before making the program generally available), and C (nice to change: comments to address if time permits). Using these comments and their priorities as a guide, revise the training program, addressing category A and B issues before completing the program.
Next week, we will take a look at technical reviews.
Want to learn more about the most essential topics in learning and development and learning technologies? Check out our brand new event, Core 4, in New Orleans this September.
This post is excerpted from Training Design Basics, 2nd Edition (ATD Press, 2015). In this book you will learn best practices for designing and developing training programs in the real world, tactics to successfully launch and run training programs you’ve designed, and how to adjust design practices along three tiers of effort in platinum, silver, and bronze scenarios.
For more advice, join Saul Carliner November 4, 2015, for the webcast “Training Design Basics: The Platinum, Silver, and Bronze Approaches to Instructional Design.”