More and more organizations are looking to formal mentoring programs to deliver high-impact results at a fraction of the cost of traditional classroom training. Building sustained, relationship-driven learning is clearly one of the most effective methods practitioners have to solve tough talent development issues. However, when it comes to measuring the value of mentoring programs, things are less clear. Here are a couple of ideas to help you tackle the challenge.
Use Leading Indicators to Develop a Story of Mentoring Success
Building a clear picture of the impact of mentoring programs requires multiple measurements over the life cycle of the program. A common mistake practitioners make is waiting until the program is over to determine how successful it was. The problem, of course, is that those results arrive six months or even a year too late to act on. Mentoring programs are sustained learning processes in which talent development occurs within conversations and is applied on the job over time.
The AXLES Model
TERP Associates has created a framework for building mentoring programs, called the AXLES model. During the Evaluating Effectiveness component of the AXLES model, having several checkpoints creates a view of progress and impact. At TERP, we use the New World Kirkpatrick approach to evaluate all of our talent development solutions, including mentoring programs. Here is a brief overview:
- Level 4: Results: An evaluation plan begins with determining what to measure at the strategic level. What organizational results would indicate success for the program? What tools and reports already exist for measuring those results? How often will you measure?
- Level 3: Behavior: The next step is to determine which behaviors contribute to achieving Level 4 results. Identify the performance factors that create success at the strategic level. How will you measure behavior on the job? How often will you gather data?
- Level 2: Learning: Define the skills that are needed for learners to demonstrate the Level 3 behaviors on the job. These skills, and the knowledge and context that go with them, are what you measure at Level 2 to determine learning.
- Level 1: Reaction: What expectations did the learners have before entering the program, and how well were those expectations met? What is the perceived value of the mentoring relationship? What obstacles do learners expect to encounter?
Sample Evaluation Methods
- Level 1: Monthly focus groups alternating between learners and mentors; surveys sent to 25 percent of participants each month; learning management system checklists
- Level 2: Observation forms (completed by mentors); multi-rater assessments; individual development plans (IDPs)
- Level 3: Observation forms (completed by supervisors); multi-rater assessments; IDP follow-ups; peer mentoring sessions
- Level 4: Engagement surveys; retention numbers; productivity reports; sales numbers; profitability; time to success; 9-box results
Have more questions about evaluating the effectiveness of mentoring programs? Join TERP’s Jenn Labin for an ATD webcast, Empowering Mentors to Supercharge Results, on September 26 at 2 p.m. ET. Hope to see you there!