ATD Blog

Science of Learning 101: Can We Really Assess Learning?

Friday, February 6, 2015

This is the second post of the Science of Learning 101 series. I am writing this for L&D practitioners and very much want your comments. Is this helpful to you? Does it answer your questions? Your comments can change the course of this series.

In order to assess (measure) something, we need to know what we want to measure and how to measure it. For example, there are specific ways to measure points scored in different team sports, how fast different objects are moving, and the volumes and weights of ingredients used in cooking.

How do we measure learning? In last month’s post, I defined learning as “changes in behavior that result from experience.”

To assess learning, we need to know how to measure the changes in behavior that are most important to us. For example, what do you measure when you need to assess the learning experience of a class on using Google mail? One likely measurement is whether the learner can now sign up for a Google mail account and use it to send and receive email.

People can and do learn on their own. All. The. Time. But when we want to help people learn, we build instruction—the creation of special environments specifically to promote learning. So the obvious reason for assessing learning is to see if the special environment we built changed what the learner is doing as a result of what we did.

Eek! Research shows that we don’t tend to find out whether training had an impact on what people are doing on the job or in the real world. Rather, we tend only to assess what they can recall from the training. That, my friends, means we are not regularly doing what most professionals do: assess their work. As you will see from the discussion below, this is not rocket science.

Different Ways to Assess Learning

Testing with recall questions about the content really isn’t much of an assessment at all. Here are a few examples of real-life learning assessments.

  • Chen demonstrates that he is a good candidate for the vacant account executive position through his sophisticated product knowledge and excellent dealings with clients.
  • Nola creates unique jewelry out of twisted strands of sterling silver. Her students eagerly await her new classes.
  • Kia helps Penn with the more difficult statistics coursework for their classes. Both are hoping to move into more advanced and lucrative positions within the company.

All of these people learned their craft through informal and formal learning. Stop for a moment and consider what they might have done to improve their expertise.

Traditionally, educational classroom assessment provides grades and placements, helps determine where teachers need to give additional support, finds where to address misunderstandings, and targets feedback.

Too often, in traditional corporate classroom and online learning, we don’t determine whether people need more support or help resolve misunderstandings, so it may become difficult for people to continue learning. Instructional feedback is often too shallow (simply marking answers correct or incorrect). Likewise, too many designers don’t realize that feedback is also meant to be instruction!


Direct and Indirect Inferences About Learning

We make inferences (a conclusion made on the basis of evidence and reasoning, an educated guess) about learning by collecting direct or indirect evidence.

Direct evidence includes samples of what people actually do, and it tends to be the strongest evidence. Chen’s answers to product knowledge questions are direct evidence, as are Kia’s statistics skills. But we can’t measure everything directly; perceptions and feelings, for example, can’t be sampled this way.

Indirect evidence includes perceptions and rankings. These can come from supervisors, peers, customers, the person, and others. For example, we might ask Kia if Penn seems to be struggling with statistics. A survey of Nola’s teaching skills by her students would be indirect evidence (observations of her classes would be more direct). Indirect evidence tends to be less strong than direct evidence because it’s more subjective.

An ideal assessment might combine direct and indirect measures. We call this “triangulation” of assessment evidence. Table 1 below lists possible direct and indirect evidence of learning. There are more types of evidence, but these are the main types used in training.

Table 1: Direct and Indirect Evidence of Learning

Direct Assessment

  • Scenario question results
  • Observations
  • Actual performance (simulation)
  • Actual performance (on the job)
  • Work products (such as reports)

Indirect Assessment

  • Checklists of desired behaviors completed by supervisors, self, peers, and customers
  • Ratings by supervisors, self, peers, and customers
  • Interviews with supervisors, self, peers, and customers
  • Graduation from training programs

Measuring learning is difficult because we first have to agree on exactly what we are measuring. And we could spend hours (or more) arguing over the very definition of learning! One thing is certain: we are not measuring something directly observable, such as speed or weight; we are measuring evidence that stands in for learning.

Triangulating data (by looking at both direct and indirect evidence) is often worthwhile. It increases the chance that you are getting a decent picture of what is happening. It’s really hard to reduce this picture to a single number unless you are statistically proficient, and even then you are reducing a complex initiative to a number. I’m not usually a fan of that, because you often lose sight of what the number really represents. I recommend reading about Dr. Brinkerhoff’s Success Case Method (SCM) for evaluating the success of a training initiative.

What Is the Purpose of Assessment?

We may believe that assessment serves to tell the person taking instruction how they are doing, but that’s only one of many reasons for assessment. Next month I’ll discuss other reasons for assessment, and why leaving them out actually leaves L&D practitioners in peril.


About the Author

Patti Shank, PhD, CPT, is a learning designer and analyst at Learning Peaks, an internationally recognized consulting firm that provides learning and performance consulting. She is an often-requested speaker at training and instructional technology conferences, is quoted frequently in training publications, and is the co-author of Making Sense of Online Learning, editor of The Online Learning Idea Book, co-editor of The E-Learning Handbook, and co-author of Essential Articulate Studio ’09.

Patti was the research director for the eLearning Guild, an award-winning contributing editor for Online Learning Magazine, and her articles are found in eLearning Guild publications, Adobe’s Resource Center, Magna Publication’s Online Classroom, and elsewhere.

Patti completed her PhD at the University of Colorado, Denver, and her interests include interaction design, tools and technologies for interaction, the pragmatics of real world instructional design, and instructional authoring. Her research on new online learners won an EDMEDIA (2002) best research paper award. She is passionate and outspoken about the results needed from instructional design and instruction and engaged in improving instructional design practices and instructional outcomes.
