Planning for Effective Evaluation

Evaluation, like everything else in life, takes careful planning to reap what one sows. The need to plan an evaluation before conducting it has always been recognized, but the scope and complexity of evaluation projects today demand even better planning than in the past. Without proper planning, evaluations often run into trouble. Data may become contaminated and conclusions drawn may be invalid. Evaluations may also fall behind schedule, go over budget, and be abandoned altogether. Thus, launching an evaluation without a plan is similar to taking a trip without an itinerary.

Project alignment

One key purpose of evaluation is to support stakeholder decision making. Vital to planning an evaluation, then, is deciding who will use the findings and for what purposes; this establishes the scope of the evaluation and the kinds of data that will need to be collected. Also essential is encouraging stakeholders to base their decisions on facts, which means defining the kinds of decisions they are likely to make and determining the types of data and information that would be most useful to that decision-making process.

A second key purpose of evaluation is to determine the effectiveness of performance improvement solutions and hold accountable those responsible for designing and implementing human resource development programs. To determine program effectiveness and accountability, the right kinds of data must be collected and analyzed, and the effect of the solution must be isolated from other influences that may affect program outcomes.

Both of these key purposes—decision making and accountability—are driven by the objectives of the program being evaluated. Powerful objectives that are clearly measurable position programs for success, whereas weak objectives without measures set up a program evaluation for difficulty, if not failure.

Data collection planning

Evaluation data may consist of many kinds of information, from the numerical to the attitudinal. Because the types of evaluation data vary with the nature of programs and the needs of stakeholders, it is important to have a variety of data collection methods available to address a wide range of learning and performance improvement solutions. 

To put together an effective data collection plan, the following questions are key:

  • What objectives are being evaluated?
  • What measures are being used to evaluate the objectives?
  • What are the sources of data?
  • How should data be collected?
  • When should data be collected?
  • Who should be responsible for collecting the data?

The answers to these six questions can then be assembled into a worksheet that will become the final work plan driving the data collection phase of evaluation.
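
For teams that keep this work plan in a spreadsheet or a simple script, the six questions map naturally onto a tabular structure. The sketch below is one hypothetical way to represent a data collection plan row in Python; the field names and sample entries are illustrative assumptions, not a reproduction of the handbook’s worksheet.

```python
# A minimal sketch of a data collection plan, assuming one row per program
# objective. Field names and the sample row are illustrative only.
from dataclasses import dataclass

@dataclass
class DataCollectionPlanRow:
    objective: str          # What objective is being evaluated?
    measures: str           # What measures are used to evaluate the objective?
    data_sources: str       # What are the sources of data?
    collection_method: str  # How should data be collected?
    timing: str             # When should data be collected?
    responsibility: str     # Who is responsible for collecting the data?

plan = [
    DataCollectionPlanRow(
        objective="Apply the new call-handling process on the job",
        measures="Observed use of process steps; call quality scores",
        data_sources="Participants, supervisors, quality-monitoring records",
        collection_method="Follow-up survey plus a review of performance records",
        timing="60 to 90 days after training",
        responsibility="Evaluation lead",
    ),
]

for row in plan:
    print(f"{row.objective}: {row.collection_method} ({row.timing})")
```
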
Here’s a brief introduction to the major data collection techniques, their uses, and the advantages and disadvantages of each.

Surveys and questionnaires. Among the most widely used data collection techniques, surveys and questionnaires allow evaluators to collect data from large numbers of people with relative ease, and the responses can be summarized and analyzed quickly.

Tests and assessments. Tests are the oldest form of educational evaluation and are still considered the best gauge of learning. Though we typically think of paper-and-pencil tests, assessments for evaluation may include any of the following: written tests and quizzes, hands-on performance tests, or project/portfolio assessments.

Interviews. One of the most widely used data collection methods, the individual interview is also the most flexible tool available and among the easiest to deploy: all it takes is an interviewer armed with a list of questions and a subject willing to answer them. In evaluation, interview subjects are often drawn from the project sponsor or client, senior managers responsible for business decisions related to the evaluation, training participants, the managers of those participants, instructors, and the instructional designers responsible for the training.

Focus groups. Long a staple of market researchers, focus groups, or group interviews, have also become an important tool for evaluators. Although more widely used in needs analysis, focus groups of training participants and key stakeholders conducted after a training program can yield rich data that helps evaluators understand how training affects learners and their organizations.

Action plans. An action plan is a great tool to get learners to apply new skills on the job. It is usually created at the end of training and is meant to guide learners in applying new skills once they return to work. It may also involve their supervisors and become a more formal performance contract. For evaluation, action plans can be audited to determine if learners applied new skills and to what effect.

Case studies. Case studies are one of the oldest methods known to evaluators. For many years, evaluators have studied individuals and organizations that have gone through training or performance improvement and written up their experiences in a case study format. This method is still widely used and forms the basis of entire evaluation systems, such as Robert Brinkerhoff’s Success Case Method.

Performance records. This category includes any existing performance data the organization already collects, often in computer databases or personnel files. Organizations now measure a massive amount of employee activity that is often relevant to training evaluators, including the following kinds of performance records: performance appraisals, individual development plans, safety, absenteeism and tardiness, turnover, output data (quantity and time), quality data (acceptance, error, and scrap rates), customer satisfaction, labor costs, and sales and revenues.

Data analysis planning 

Once data have been collected, it is critical to analyze them properly to draw the correct conclusions. Many techniques exist, depending on the type of data collected. Here’s a review of the most common data analysis techniques used in evaluation.

Statistical analysis. Statistical analysis is appropriate whenever data are in numeric form. This is most common with performance records, surveys, and tests. Statistics has three primary uses in evaluation:

  • summarizing large amounts of numeric data, including frequencies, averages, and variances
  • determining the relationships among variables, including correlations
  • determining differences among groups and isolating effects, using techniques such as the t-test, analysis of variance (ANOVA), and regression analysis
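
As a concrete, hypothetical illustration of these three uses, the sketch below summarizes made-up post-test scores, correlates pre- and post-test results, and runs a t-test comparing a trained group with a comparison group. It assumes the pandas and SciPy libraries; the numbers are invented.

```python
# Illustrative only: summary statistics, a correlation, and a group
# comparison (t-test) on invented evaluation data.
import pandas as pd
from scipy import stats

scores = pd.DataFrame({
    "group": ["trained"] * 5 + ["comparison"] * 5,
    "pre":   [62, 70, 68, 75, 66, 64, 71, 69, 73, 65],
    "post":  [85, 88, 84, 92, 83, 68, 74, 70, 76, 67],
})

# 1. Summarize: frequencies, averages, and variances by group
print(scores.groupby("group")["post"].agg(["count", "mean", "var"]))

# 2. Relationships: correlation between pre- and post-test scores
print("pre/post correlation:", round(scores["pre"].corr(scores["post"]), 2))

# 3. Differences between groups: independent-samples t-test on post-test scores
trained = scores.loc[scores["group"] == "trained", "post"]
comparison = scores.loc[scores["group"] == "comparison", "post"]
t_stat, p_value = stats.ttest_ind(trained, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```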

Qualitative analysis. Qualitative analysis examines people’s perceptions, opinions, attitudes, and values—all things that are not easily reduced to a number. It addresses the subjective and the intangible, drawing on data from interviews, focus groups, observations, and case studies. Although difficult to master, this form of analysis gives a more complete, in-depth understanding of how stakeholders think and feel about training and performance improvement. The data, once summarized in some form, are then analyzed to discover the following:

  • Themes: common, recurring facts and ideas that are widely expressed and agreed upon
  • Differences: disparate views and ideas expressed by different individuals and groups of people under study and the reasons for these differences
  • Deconstructed meaning: the underlying values, beliefs, and mental models that form the cultural foundation of organizations and groups. 

Isolating program effects. Just because we measure a result does not mean that training caused it. Organizations are complex systems subject to the influences of many variables, and isolating the effects of an individual program can be confusing and difficult. Yet, it is essential to identify the causes of increased knowledge and performance if we intend to properly evaluate training outcomes. 
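
The excerpt does not prescribe a particular isolation technique, but one widely used approach is a comparison (control) group: the improvement seen in a similar group that did not receive the program is subtracted from the trained group’s improvement, and the remainder is attributed to the program. The arithmetic below is a hypothetical sketch of that idea with invented numbers.

```python
# Hypothetical comparison-group arithmetic for isolating a program effect.
# All figures are invented for illustration.
trained_before, trained_after = 100.0, 130.0        # e.g., units produced per week
comparison_before, comparison_after = 100.0, 110.0  # similar group, no training

trained_gain = trained_after - trained_before            # 30 units
comparison_gain = comparison_after - comparison_before   # 10 units (other influences)

isolated_effect = trained_gain - comparison_gain         # 20 units attributed to the program
print(f"Improvement attributed to the program: {isolated_effect:.0f} units per week")
```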

Financial analysis. When evaluation is taken to the fifth level—ROI—financial analysis becomes important. This includes assembling and calculating all the costs for the program and converting the benefits to monetary values wherever possible. The primary use of financial analysis in evaluation is to calculate a return-on-investment at the end of the program. Secondary uses include forecasting potential paybacks on proposed training and determining if business goals related to financials have been achieved.
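
As a hedged illustration of the fifth-level calculation, the sketch below applies the commonly cited ROI formula, net program benefits divided by fully loaded program costs and expressed as a percentage. The cost and benefit figures are invented; a real analysis would first assemble all costs and convert benefits to monetary values as described above.

```python
# Illustrative ROI calculation with invented figures.
program_costs = 80_000.0        # fully loaded: design, delivery, participant time, evaluation
monetary_benefits = 180_000.0   # annualized program benefits converted to money

net_benefits = monetary_benefits - program_costs
roi_percent = (net_benefits / program_costs) * 100
benefit_cost_ratio = monetary_benefits / program_costs  # a related summary measure

print(f"Net benefits: ${net_benefits:,.0f}")             # $100,000
print(f"ROI: {roi_percent:.0f}%")                        # 125%
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")   # 2.25
```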

Comprehensive evaluation planning tool

To manage the many details of evaluation planning, a comprehensive tool is a must. The planning tool is broken into four phases so that it can be used throughout the program evaluation process to plan and capture key evaluation data. 

  • Phase 1: Establish the evaluation baseline. During the needs analysis phase, begin planning the evaluation and collecting baseline information that will establish measures for the program’s objectives and allow comparison with the final results.
  • Phase 2: Create the evaluation design. During the design of the training program, create a detailed evaluation design, including the evaluation questions to be answered, the evaluation model to be used, and the methods and tools for data collection. At this time, also decide what kinds of data analysis will be conducted, based on the types of data to be collected and the nature of the evaluation questions to be answered.
  • Phase 3: Create the evaluation schedule. During the evaluation design process, create a separate schedule for evaluation and incorporate it into the overall training plan. This will ensure that evaluation tasks are scheduled and milestones are met.
  • Phase 4: Create the evaluation budget. During the evaluation design process, develop a separate budget, or at least separate line items, for evaluation. This will ensure that evaluation work has the necessary resources to achieve its goals.

Figure 3-1 is a sample of a comprehensive evaluation planning worksheet that can be used to plan an evaluation of training or performance improvement solutions.
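
For teams that want to track these four phases alongside the Figure 3-1 worksheet, the sketch below is one hypothetical way to outline the planning record in Python. The structure and field names are assumptions made for illustration, not a reproduction of the handbook’s tool.

```python
# Hypothetical four-phase evaluation planning record; fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    # Phase 1: baseline measures captured during needs analysis
    baseline_measures: dict = field(default_factory=dict)
    # Phase 2: evaluation design decisions
    evaluation_questions: list = field(default_factory=list)
    data_collection_methods: list = field(default_factory=list)
    analysis_methods: list = field(default_factory=list)
    # Phase 3: schedule milestones (task -> due date)
    schedule: dict = field(default_factory=dict)
    # Phase 4: budget line items (item -> estimated cost)
    budget: dict = field(default_factory=dict)

plan = EvaluationPlan(
    baseline_measures={"average handle time (minutes)": 9.5},
    evaluation_questions=["Did participants apply the new process on the job?"],
    data_collection_methods=["follow-up survey", "performance records"],
    analysis_methods=["statistical analysis", "qualitative analysis"],
    schedule={"collect follow-up data": "90 days after the program"},
    budget={"survey administration": 2_500},
)
print(plan)
```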

With this plan as a guide, evaluation becomes more manageable. It is also a great communication vehicle to share with key stakeholders so they can see the proposed scope and cost of the evaluation, along with its likely benefits and the potential payback if implementation occurs as planned.

This article is excerpted from the ASTD Handbook of Measuring and Evaluating Training, which provides insight into all aspects of measurement and evaluation, with sections on planning, data collection, and data analysis, and several case studies of practical applications. Edited by measurement expert Patricia Pulliam Phillips, the handbook brings together 34 proven practitioners who convey their know-how on the topic, providing sample tools, practitioner tips, knowledge checks, references and resources, and more. In addition, interviews with gurus provide insight into the past and future of evaluation and measurement.

About the Author
Donald J. Ford is a training and performance improvement consultant specializing in instructional design and process improvement. He has worked in the field of human resource development for twenty years, including training management positions at Southern California Gas Company, Magnavox, Allied-Signal, and Texas Instruments. His consulting clients include Toyota, Nissan, Rockwell International, Samsung Electronics, Orange County Transportation Authority, Glendale Memorial Hospital, and others. For these and other clients, he has developed custom classroom, self-study and web-based training, conducted performance and needs analyses, facilitated groups, managed improvement projects, taught courses, and evaluated results. He teaches graduate courses in human resource development for Antioch University, Los Angeles, and Cal State University Northridge. He has published 35 articles and three books on topics in training, education, and management, including Bottom-Line Training: How to Design and Implement Training Programs that Boost Profits (Gulf Publishing, 1999), In Action: Designing Training Programs (Editor, ASTD, 1996), and The Twain Shall Meet: The Current Study of English in China (McFarland, 1988). Don holds a bachelor's degree and master's degree in history and a doctorate in education, all from UCLA.