Some people will say that isolating the effects of your programs when measuring and evaluating their results is a defensive move. The same can easily be said of evaluation at large. Anytime the ball is being run toward your goal, you are on defense, protecting what is yours. The key is to take the offense and address tough questions before they are asked.
The Tough Questions
Mike Swan is the training manager at a large tire retail company. He piloted a new training initiative in five stores. The purpose of the training was to reduce customer wait time and increase the number of cars serviced per day. Upon completion of the pilot, data showed that customer wait time had gone down and the number of cars serviced per day had increased. Mike shared these data with his Chief Learning Officer (CLO) as well as the Chief Financial Officer (CFO), hoping to receive enough funding to implement the initiative in other stores. The CFO, impressed that there had been improvement in the two measures, asked:
“How much of that improvement is actually due to the program?”
Mike responded that he could not say with any certainty, but that he knew the improvement would not have occurred without the training. The CFO asked a second question:
“How do you know?”
When Mike could not answer, the CFO suggested that he find out before he received additional funding. Mike is now playing defense.
The Emotional Debate
Had Mike addressed the isolation issue during the evaluation and presented the positive results so that answers to the tough questions were evident, he might have received funding on the spot. All the executives wanted to know was how much of the improvement was due to the program, a fair question.
Those who argue that you cannot or should not isolate the effects of a program are often uninformed or misinformed. While long a part of the research process, this important step of measurement and evaluation was first brought to light in the training industry in the late 1970s, when Jack Phillips developed the ROI Methodology. It was later incorporated into the first Handbook of Training Evaluation and Measurement Methods published in the U.S. by Gulf Publishing and authored by Jack Phillips (1983). The book, now going into its fourth edition, is used by training managers and academics worldwide. Despite the wide application and acceptance of this important step by executives and researchers, the topic of isolating the effects of the program stirs up such emotion that one has to wonder whether there is a fear that the training does not make a contribution.
It is because of this debate and the need for more information that this topic is covered in the ASTD Handbook of Measuring and Evaluating Training. In this chapter, author Bruce Aaron, Ph.D., capability strategy manager for Accenture, describes the importance of isolating the effects of your programs through the evaluation process. He describes some of the approaches often used by organizations. As you read the chapter, you will find there are a variety of techniques available.
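To make the general idea concrete, one commonly cited isolation technique, comparing a pilot group against a similar control group that did not receive the program, can be sketched in a few lines. The sketch below is illustrative only; the store figures are hypothetical numbers invented for this example, not data from the chapter or from Mike's pilot.

```python
# Hypothetical sketch of a control-group comparison (a simple
# difference-in-differences). All numbers are invented for illustration.

def isolated_effect(pilot_before, pilot_after, control_before, control_after):
    """Change in the pilot group minus change in the control group.

    The control group's change estimates what would have happened without
    the program (seasonality, marketing, staffing changes), so subtracting
    it isolates the portion of the improvement attributable to the program.
    """
    return (pilot_after - pilot_before) - (control_after - control_before)

# Average cars serviced per day, before and after the pilot period.
pilot_before, pilot_after = 38.0, 46.0      # stores that received the training
control_before, control_after = 37.5, 40.5  # similar stores that did not

effect = isolated_effect(pilot_before, pilot_after, control_before, control_after)
print(f"Raw improvement in pilot stores: {pilot_after - pilot_before:.1f} cars/day")
print(f"Improvement attributable to training: {effect:.1f} cars/day")
```

In this made-up scenario the pilot stores improved by 8.0 cars per day, but the control stores improved by 3.0 on their own, so only 5.0 cars per day can be credited to the training. That difference is the kind of answer that would have let Mike respond to the CFO's question on the spot.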
The End of the Debate
Will this debate over isolating the effects of the program ever end? That is like asking whether the need for evaluation will ever end. Hopefully the answer to both is no. Without debate there is no research; without research, no grounding; and without grounding, no sustainability.
Fortunately, more than ever, individuals responsible for training measurement and evaluation are taking the offense. They are pursuing good evaluation, including isolating the effects of their programs. They plan ahead and can answer the tough questions before they are asked.