
How to Effectively Evaluate E-Learning

Monday, August 15, 2016

Being able to ascertain the quality of a learning module helps ensure the highest return on investment and reinforces a positive perception of the value of online learning.

Imagine you're asked by your senior leadership to review an existing online learning program (it could have been created in-house, purchased from an external supplier, or packaged with your learning management system). Your goal: determine whether it's "good" for your organization. How would you start? What criteria would you examine? How would you define "good"?

This scenario is common for many of us in the learning and development (L&D) field (and one I have repeatedly found myself in with clients, as both an internal consultant and an external supplier). And it's no surprise that these requests are increasing, given that the L&D field continues to use learning technologies as a vital component of the delivery mix for our employees.

ASTD's 2011 State of the Industry Report found that in the United States alone $171.5 billion was spent on employee learning in 2010, an increase of more than 26 percent from 2009. In addition, technology-delivered training (especially mobile learning) continues to grow. The report notes that Fortune Global 500 companies focused more than 40 percent of their formal learning hours on this type of training delivery. The economy may be sluggish, but organizations clearly are spending money on training and development and on learning technologies.

Because there's a need for more online content, the focus for developers of this type of learning is on rapid development: "Create it fast!" "Be timely!" "Use template-driven solutions!" While there's plenty of value in rapid development and deployment, there's also the risk that speed comes at the cost of quality. Yet the L&D field has not clearly defined what constitutes a good online training program.

Today there's a greater focus on ATF, or "after the fact," results post-initiative (and rightfully so), but little discussion about ABD, or "already been developed," learning—that point between final development and actual deployment.

There are two reasons why we should care about these issues. First, if we're spending this much money on in-house or external online learning development (as the data from the State of the Industry Report suggest), don't we want to be sure our money is getting the highest return on investment? Second, when learners are exposed to "not so good" online learning, it does a disservice to all online learning because it taints learners' perceptions of the value of this delivery method.

Think about this last point for a moment. How many people have gone through a poorly designed online learning course and, therefore, made the assumption that all online training programs are a terrible way to learn? Both corporate employees and college students have told me that they believe online learning is inferior to face-to-face or that it's a poor way to learn. I argue that if the L&D field continues to accept not-so-good online learning as a solution for our clients, then we are perpetuating a negative stereotype within our profession.

To combat this, here are eight factors to examine when evaluating online learning. They will help to determine whether the program is worth your time and effort within your organization.

Instructional design

The first area to consider is the instructional design of the content. Regardless of delivery method, a good learning initiative follows some instructional process or model. It may have been the popular ADDIE model (initially developed in 1975 by Florida State University for use by the U.S. Armed Forces), the Dick and Carey model (a bit more sophisticated and complex than ADDIE), or the ASSURE model (more popular with the K-12 academic set).

Regardless of which model was (hopefully) used, one of the common, and critical, components is the identification of learning objectives (the analysis phase of ADDIE, Stage 1 of Dick and Carey, and the S in ASSURE). Look at the learning objectives and evaluate them. What does this e-learning program claim it will do for the learners? Can the objectives the online learning sets out to meet truly be measured, or are they weak (using phrases such as "Learners will know ..." or "Participants will learn ...")? A surefire way to know whether an instructional design model has been followed is to evaluate the strength of the learning objectives.
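
This check lends itself to a quick first pass in code. Below is a minimal sketch in Python (hypothetical, with an illustrative rather than exhaustive list of weak phrases) that flags objectives worded in ways that can't be measured; treat it as a screening aid, not a replacement for an instructional designer's judgment.

    # Flag learning objectives that use weak, unmeasurable wording.
    # The phrase list is illustrative; tailor it to your own standards.
    WEAK_PHRASES = {"will know", "will learn", "will understand", "appreciate"}

    def flag_weak_objectives(objectives):
        """Return the objectives whose wording can't be measured directly."""
        return [o for o in objectives
                if any(phrase in o.lower() for phrase in WEAK_PHRASES)]

    objectives = [
        "Learners will know the five steps of the safety protocol.",
        "Given a sample invoice, learners will identify three billing errors.",
    ]
    for weak in flag_weak_objectives(objectives):
        print("Rework this objective:", weak)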

Think about this area of focus on a scale of 1 to 10, with 1 being "terrible instructional design" and 10 being "fantastic instructional design." Where would you rank the online learning program you are evaluating?

Level of interactivity

Another factor in determining an online learning module's perceived quality is its level of interactivity (at least for asynchronous online learning). One way to interpret interactivity is the combination of ways in which the learner engages with the content, from passive page turning to the much more engaging situation-based scenario.

While there's no set formula or minimum threshold, a good online learning program should incorporate many of these instructional delivery strategies. The more strategies that are used, the richer the interactivity is for the learner. And the more the learner is engaged with the content, the better the learning experience and, potentially, the higher the retention. Keep in mind that using more interactive strategies caters to more learning preferences, but it also means more development time and higher costs.

Visual impact

It's not good to judge a book by its cover, but do you think your learners will anyway? Yes. Today, with so many messages competing for our attention, learning content must look good enough to engage learners from the beginning. Otherwise they will tune out before giving the content a chance. Is this fair? Of course not, but it's a reality. In training (whether online or instructor led), if your visuals look bad, the learner is more likely to disengage, even if the content has a great message.

Examine the look and feel of the learning and determine whether it is engaging and professional. In addition, even if the graphics are engaging, ask yourself if they are right for the audience. Do they reflect the brand of the learning program, the module, or the organization overall? Are the graphics and text relevant? "The medium is the message" is an often-used phrase in the communication world: does the look of the learning support what's being taught? There's nothing more disconcerting for a learner than a serious message (such as safety training or risk compliance topics) paired with juvenile graphics or "fun" fonts, creating a jarring disconnect between content and presentation.

Language

As with any learning, clear language is key, but in a face-to-face situation a good facilitator can see when students don't understand a word or are confused by a concept, and can elaborate as needed for comprehension. A simple "does that make sense?" from the facilitator can immediately surface and clarify the learners' needs.

This is not the case with asynchronous online learning, so the clarity of the message and the words used have to be chosen with great care. Approach the online learning's language and tone from two different perspectives: target learners' knowledge and target learners' demographics.

Target learners' knowledge. Is the jargon used appropriate for the target audience? Are the examples and scenarios universal to the group, or are they too specific to the experiences of some? Is the learning well written?


Target learners' demographics. Is the tone of the learning appropriate for the age of the learners? What is the perceived language proficiency of the learners relative to the content? For example, if English is the language used, what is the learners' perceived comprehension level? Are the examples universal to this audience, or do they exclude some? For instance, if sports analogies are used, are they appropriate for the audience? Finally, if humor is used in the learning, is it appropriate, or could it be misinterpreted by some audiences? Humor is a great strategy for keeping audience attention, but if used incorrectly it can greatly distance a learner from the learning.

Technical functions

The technology facet of the learning can be broken down into five areas.

Course interface and navigation. Do the buttons take the learner where they're supposed to and function as intended? Are icons clear and used consistently? Is the e-learning intuitive to use for learners who are new to online learning? If not, does it include a how-to section on maneuvering through the online learning?

Content display and sound. Do the font, text, and images look as intended? If content isn't displayed correctly, is it due to a plug-in and are the needed plug-ins available for easy download and updating? Does audio sound as it should through the organization's infrastructure, or does it sound distorted or jumbled?

Accessibility. Is the module Section 508 compliant? In other words, does it meet the criteria of "accessibility" identified in the U.S. Rehabilitation Act, which mandates that learners with differing abilities be able to access the content in an equitable way? In addition, is the online learning technically accessible to all potential learners? What if a learner can't access the Internet? Can they still take the learning somehow?

Hyperlinks and files. Do the links take the learner to where they're supposed to? If there's a link to a file, is that file (such as a PDF) there? Do external hyperlinks work as expected?

LMS and help. If the online learning connects to your organization's learning management system, is it sharing data as it's supposed to? Are help screens available to learners? Does the learning identify where learners can turn should they run into technical or content-related issues?

In some cases, these areas overlap. For example, LMS functionality may be constrained by an organization's intranet capabilities, or the audio may sound terrible because of the sound capabilities of the organization's computers. The point is to determine whether the learning isn't providing the expected experience because of the limitations of the organization or the limitations of the learning module itself. In either case, if it doesn't work well for you as an evaluator, it won't work well for your learners, either.
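
Some of these checks can be partially automated. As one example, here is a minimal sketch in Python (standard library only; the URLs are placeholders) that confirms a course's external hyperlinks at least respond. A manual click-through is still needed to verify that each link lands on the content the course intends.

    # Quick reachability check for a course's external hyperlinks.
    # A successful response means the server answered; it does not
    # prove the page still shows what the course expects.
    import urllib.error
    import urllib.request

    links = [
        "https://www.example.com/policy.pdf",  # placeholder URLs:
        "https://www.example.com/reference",   # substitute your course's links
    ]

    for url in links:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                print(url, "->", response.status)
        except (urllib.error.URLError, ValueError) as err:
            print(url, "-> FAILED:", err)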

Time

Another area of focus should be related to the length of the learning module. First, how long does it take a learner to complete the learning? Some experts look at attention span to determine a "good" length of time for an online module; research suggests between 15 and 30 minutes for each topic or module as a good guideline.

Putting attention span and timing aside for a moment, answer this question: Does the learning meet the stated learning objectives? If so, the overall length of the learning program should be as long as it takes to meet those objectives.

These two concepts may seem contradictory, but they're not. If the online learning is good overall but longer than the suggested timeframe for keeping learners engaged, you could simply separate the content into pieces. That preserves the integrity of the learning while better fitting the 15- to 30-minute delivery suggestion. However, if timing is only one of several areas where the learning falls short, the chunking approach may not be worth the effort.
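
To make the chunking arithmetic concrete, here is a hypothetical sketch in Python; the 25-minute target is an assumption, and natural topic boundaries should always win over the math.

    # Split total seat time into modules near a target length.
    # The 25-minute target is an assumption, not a standard.
    import math

    def plan_modules(total_minutes, target_minutes=25):
        count = max(1, math.ceil(total_minutes / target_minutes))
        return count, total_minutes / count

    count, length = plan_modules(90)
    print(f"{count} modules of roughly {length:.0f} minutes each")
    # -> 4 modules of roughly 22 minutes each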

Cost

Even if the online learning scores brilliantly on all the criteria noted above, what if it's too costly to purchase or maintain? There are many ways to examine the costs of running any training program, but the best approach is to be consistent. Does your organization already calculate a cost-per-learner metric or have some other way to determine the cost of running a learning program, online or not, on an annual basis? If not, you should.

First, determine the costs of running an existing program by adding up all the costs of developing the course (instructional designer costs, time, travel, purchasing costs, and any annual fees for maintaining the course, such as an LMS, conference center rental, or annual licenses). Then divide this number by the number of learners who have taken or will take the course in the calendar year. Now you have your annual cost-per-learner metric.
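
As a concrete (and entirely hypothetical) illustration, here is that arithmetic in a few lines of Python; every figure is a made-up placeholder, not a benchmark.

    # Annual cost per learner: sum every cost of running the course,
    # then divide by the number of learners served in the year.
    # All figures below are hypothetical placeholders.
    annual_costs = {
        "instructional_design": 30_000,
        "travel": 2_500,
        "purchases_and_licenses": 12_000,
        "lms_hosting": 5_000,
    }
    learners_per_year = 400

    cost_per_learner = sum(annual_costs.values()) / learners_per_year
    print(f"Annual cost per learner: ${cost_per_learner:,.2f}")
    # -> Annual cost per learner: $123.75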

Once you calculate the cost per learner for existing programs, calculate it for the online program you are evaluating. You probably will have to estimate some of the figures in the formula (for example, how many learners will go through the program during the first year). Where does the online program fall within the distribution of all your programs? This gives you a good way to compare the potential program with existing ones based on operational costs.

Any e-learning endeavor also has some nonfiscal benefits that could be considered part of its value, chief among them reusability. While upfront development costs (or purchase costs, if it's off-the-shelf) can be higher than those of creating instructor-led training, the return on investment increases each time the learning is reused. Conversely, instructor-led training costs tend to remain the same or increase over time. So when discussing value, consider the annual cost and management burden, but also determine whether the reusability, consistency of message, and other advantages of the online learning are worth the investment by your organization.

Rankings

The final factor to consider when determining whether an online program is good is to revisit the previous seven areas of focus and ask, "Out of these seven areas, which one(s) are the most important to me and my organization?" Weighting the areas by importance clarifies which of the scales matter most to your organization.

For example, if the graphic design and look and feel of the learning are critical to your learners and the learning ranks high in that area, then there's the potential for a good fit. So consider placing a 1-5 ranking on each of the seven scales, with 1 being "not really important" and 5 being "mission critical to our organization."
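
One way to combine the two scales is a simple weighted score: multiply each area's 1-10 quality rating by its 1-5 importance weight, then compare the total against the maximum possible. The sketch below shows the mechanics in Python; all of the ratings and weights are invented examples.

    # Weighted evaluation: each area gets a 1-10 quality score and a
    # 1-5 importance weight. All numbers are invented examples.
    scores = {   # 1 = terrible, 10 = fantastic
        "instructional_design": 8,
        "interactivity": 6,
        "visual_impact": 9,
        "language": 7,
        "technical_functions": 5,
        "time": 8,
        "cost": 4,
    }
    weights = {  # 1 = not really important, 5 = mission critical
        "instructional_design": 5,
        "interactivity": 3,
        "visual_impact": 4,
        "language": 3,
        "technical_functions": 5,
        "time": 2,
        "cost": 4,
    }

    weighted = sum(scores[area] * weights[area] for area in scores)
    max_possible = sum(10 * w for w in weights.values())
    print(f"Weighted score: {weighted}/{max_possible} "
          f"({weighted / max_possible:.0%})")
    # -> Weighted score: 172/260 (66%)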

Team effort

This is just one approach to evaluating the quality of an online learning program: seven areas plus a weighted look at their importance. You may know another approach or develop a different one for your organization. Regardless of the methodology you use, it's best to take a team-based approach to evaluation. Get a team together and compare notes using the same criteria (a simple roll-up sketch follows the list below):

  • Determine what is important to each reviewer (a 1-5 ranking of the level of importance for each of the seven scales).
  • Independently review the online learning in question, rating it on the seven areas of focus noted above.
  • Compare the top-scoring areas across the seven scales, discuss the differences, and decide together whether the learning is worth it to your learners.
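
Here is a minimal sketch of that roll-up in Python, assuming each reviewer rates the same areas on the 1-10 scale (reviewer names and numbers are invented, and only two of the seven areas are shown for brevity). Averages surface the consensus; a wide min-max spread flags an area worth talking through.

    # Combine several reviewers' 1-10 ratings per area. Averages show
    # consensus; the min-max spread flags areas worth discussing.
    reviews = {
        "reviewer_a": {"instructional_design": 8, "visual_impact": 6},
        "reviewer_b": {"instructional_design": 7, "visual_impact": 9},
        "reviewer_c": {"instructional_design": 9, "visual_impact": 5},
    }

    areas = sorted({area for ratings in reviews.values() for area in ratings})
    for area in areas:
        values = [ratings[area] for ratings in reviews.values()]
        average = sum(values) / len(values)
        print(f"{area}: avg {average:.1f}, spread {min(values)}-{max(values)}")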

By taking a group approach you help to minimize rater bias and get a more holistic view of the impact and potential effectiveness of the online learning for your organization. As the maxim often attributed to Aristotle goes, "Quality is not an act, it is a habit." Instill and evaluate quality in your learning, whether it's delivered online or off.

About the Author

Steve Yacovelli is owner and principal of TopDog Learning Group, LLC, a learning and development consulting firm based in Orlando, Florida, that provides guidance and solutions in change management, instructional design, leadership and organizational development, learning strategies, and custom e-learning creation.
