ATD Blog

Equating Real World Performance with Mobile Learning

Tuesday, May 28, 2013

Chad Udell is the managing director at Float Learning. Float guides industry-leading companies to understand and leverage the power of mobile learning.

Chad is the author of Learning Everywhere: How Mobile Content Strategies Are Transforming Training, from Rockbench and ASTD Press.

Chad facilitates ASTD’s Mobile Learning Certificate Program course along with others from the Float team. He will be blogging here this month, focusing on mobile learning as a tool for sales enablement training. You can catch up with Chad and the Float team over on their blog at floatlearning.com/blog.

Join Chad as he explores topics this month on mobile learning strategy and how to get started on the road to mobile learning with your sales organization.

Stop measuring percent complete. Start measuring percent closed.

Mobile Learning as a Sales Training Tool – post 4 of 4

Click here for post one, two, and three

 

Evaluating training effectiveness is a long-running challenge for our industry. How do we know what we are doing is having a real and lasting effect on our audience’s behavior and, in turn, improving our company’s performance?

An entire subset of the training industry is focused around this. Assessment tools and specialists, learning management systems and many other products out there exist to make this easier or more effective. These vendors are doing what they can to help you determine if your learners have achieved what you have set out to do. Common metrics used with these systems include whether the learners have attended class, completed courses, and achieved some level of mastery of the content you have designed and deployed for them to consume and internalize.

Does tracking all of these mean that learning has occurred? From a certain point of view, yes, it has.

The industry-benchmark Kirkpatrick taxonomy of evaluation (currently being revised) devotes its first two levels to “Did they like it?” and “Did they learn something?” Not everyone accepts this taxonomy as the be-all and end-all of training effectiveness evaluation, but its devotees are widespread. There is little in those first two levels to affirm success or to equate the training intervention with real-world performance. Just because learners were there, liked it, and learned something doesn’t mean they will sell more or be more productive.

Measuring Results


In sales, we are a results-driven culture. Coffee is for closers, right? What if Levels 1 and 2 were viewed as a means to an end rather than a valued form of measurement in themselves? In our community of practice, we are seeking more meaningful relationships between our interventions and sales activity.

How do we move past this view of measurement to get to the real questions: “Are our training materials affecting behavior?” and, most importantly, “Are our materials affecting organizational performance or results?”

A key will be to reexamine the deliverables we deploy to our learners, and reframe their points of measurement to align more closely with the processes and tools in place to help our sales team make the sale.

With mobile being new for many of us, it offers a place to do a "soft reset." What and how will we measure, and how will we tie measurement to real-world activities? The key components of mobile experiences that can be measured differ dramatically from those of traditional learning experiences. In our ahead-of-time learning materials, we are concerned with things like measuring competency or grasp of subject matter. Just as usage data, goal/funnel conversion, time spent on task, and task success rates are more important for marketing activities, these same sorts of metrics are useful for mobile and just-in-time learning content.
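To make those just-in-time metrics concrete, here is a minimal sketch of computing task success rate and average time on task from a hypothetical event log. The field names and sample events are illustrative assumptions, not any real analytics platform's schema.

```python
# Sketch: just-in-time usage metrics from a hypothetical mobile event log.
# Field names ("user", "task", "seconds", "completed") are assumptions.
from statistics import mean

events = [
    {"user": "A", "task": "spec-lookup", "seconds": 42, "completed": True},
    {"user": "B", "task": "spec-lookup", "seconds": 95, "completed": False},
    {"user": "C", "task": "pricing",     "seconds": 30, "completed": True},
    {"user": "A", "task": "pricing",     "seconds": 55, "completed": True},
]

# Task success rate: share of logged attempts that finished the task.
task_success_rate = mean(e["completed"] for e in events)

# Time on task: average seconds spent per attempt.
avg_time_on_task = mean(e["seconds"] for e in events)

print(f"Task success rate: {task_success_rate:.0%}, "
      f"avg time on task: {avg_time_on_task:.1f}s")
```

The same log could be grouped by task, product line, or region to surface the behavior patterns discussed below.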

Tying the sales training delivery to an activity in the company CRM or other associated business process platforms directly is a powerful area to explore as you reexamine your mobile learning strategy for sales enablement. By doing so, you will uncover a few things:

  • Do sales people who are stuck need refreshers while reviewing cases?
  • Do high performers access training materials before closing?
  • Are our lowest performers’ pipelines dying at points that correspond with completion and competency numbers for existing content?
  • Are there ways to directly insert job aids or performance tools in problem spots of the pipeline that help pull a potential client through it to make the sale?
  • Are there behavior patterns that emerge between groups of learners? Product lines? Regions? Managers?

Another aspect to examine would be post-support or sales-incident logging. If your sales personnel need to update a CRM or application after a meeting, visit, or call, perhaps there is an opportunity to add fields to the ticketing screen that ask if they used mobile help before or during the call. A follow-up field could ask what content they accessed or whether it was helpful.

Make these fields a required entry prior to logging the meeting notes and you instantly have data points to track to see if your mobile learning is helping those you designed it for. Use this information to inform your revisions and additions, and continue to check in to see if the overall evaluation of your software improves.
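Once those required fields are in place, the analysis can be as simple as comparing close rates between meetings where mobile help was used and meetings where it wasn't. This is a rough sketch under that assumption; the field names and sample records are hypothetical, not a specific CRM's schema.

```python
# Sketch: correlating required CRM fields with meeting outcomes.
# "used_mobile_help", "content_helpful", and "closed" are hypothetical fields.
meeting_logs = [
    {"rep": "A", "used_mobile_help": True,  "content_helpful": True,  "closed": True},
    {"rep": "B", "used_mobile_help": True,  "content_helpful": False, "closed": False},
    {"rep": "C", "used_mobile_help": False, "content_helpful": None,  "closed": False},
    {"rep": "D", "used_mobile_help": True,  "content_helpful": True,  "closed": True},
    {"rep": "E", "used_mobile_help": False, "content_helpful": None,  "closed": True},
]

def close_rate(logs, used_help):
    """Close rate among meetings where mobile help was (or wasn't) used."""
    subset = [m for m in logs if m["used_mobile_help"] == used_help]
    return sum(m["closed"] for m in subset) / len(subset) if subset else 0.0

with_help = close_rate(meeting_logs, True)
without_help = close_rate(meeting_logs, False)
print(f"Close rate with mobile help: {with_help:.0%}, without: {without_help:.0%}")
```

A real CRM export would have far more rows, but the comparison is the same: segment on the new field, then compare outcomes.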

Covering Other Aspects of Learning

These are all tips for the formal learning process and content, but social and informal activity is also an area to explore.

When was the last time you reviewed your Salesforce Chatter or Yammer usage to correlate sharing with performance, or usage with closed sales? Are your best-performing sales people active in Yammer or Chatter? Are their posts favorited or shared? The best chefs are on TV (a simplistic assumption, I know), but those chefs share their recipes with everyone to improve us all. Hopefully your best performers do the same.


Measuring event intervention access and assessing if the spacing matches with product releases and other real-world events is also worth exploring. Do new product rollouts require more accesses of just-in-time information? Are job aids for older products available in a format that helps sales people make quick decisions?

The lifecycle of your products and services may have some bearing on how well your sales people remember the details. Just because something is near end of life or is not the newest offering doesn’t mean that sales tools should be unavailable to help close the deal.

Putting This Into Action

Tying the intervention delivery to the results is a bit of a mythical beast in the training world. Mobile learning doesn’t guarantee it, but it certainly gets us closer. This has been a point of disconnect previously, but it’s testable now.

Pick one of your low-hanging fruit projects and insert real measurement into the process. Look at mLearning as a performance improvement tool, not a “have to have” for this test.

Since it's new and possibly not fully replacing your other methods yet, view mobile as an additional means to reach your audience instead of the only one. It's more content than you might have had before – an augmentation of sorts.

Take your learning and measurement hypotheses and do an A/B test. Give the new content to one group and not the other. Do the ones that get the new tools do better? Make more sales? Move leads through their pipeline more quickly? These metrics shouldn’t be vanity metrics (non-actionable, low value numbers), but rather real-world indicators based on the results.
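A simple way to check whether a difference between the two groups is more than chance is a two-proportion z-test on close rates. This is an illustrative sketch with made-up counts; treat the numbers and the one-sided test as assumptions, and use whatever statistical method your organization trusts.

```python
# Illustrative A/B sketch: close rates for a control group (A) and a group
# given the new mobile content (B). All counts are made-up numbers.
from math import sqrt, erf

closed_a, reps_a = 18, 60   # control: 18 of 60 opportunities closed
closed_b, reps_b = 27, 60   # mobile-content group: 27 of 60 closed

p_a, p_b = closed_a / reps_a, closed_b / reps_b
p_pool = (closed_a + closed_b) / (reps_a + reps_b)

# Two-proportion z-test: is the difference larger than chance would explain?
se = sqrt(p_pool * (1 - p_pool) * (1 / reps_a + 1 / reps_b))
z = (p_b - p_a) / se
p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided: B better than A

print(f"Close rate A: {p_a:.0%}, B: {p_b:.0%}, z = {z:.2f}, p = {p_value:.3f}")
```

With real pipeline data, the same comparison applies to lead velocity or deal size; the point is that the metric is an outcome, not a completion percentage.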

Once you know a bit of what works and what doesn’t with this smaller effort, you can then expand it. Pick larger projects, embed your content directly in the CRM, move to real sales enablement tools for mobile, and start measuring metrics that matter.

Stop measuring percent complete. Start measuring percent closed.

-

I enjoyed connecting with many of you at ASTD 2013 in Dallas. The sales enablement community of practice for ASTD is vibrant, and I look forward to seeing what is around the bend for the coming year.

Thanks for following this series of posts. For more great information on how to mobilize your sales enablement program, connect with me on LinkedIn, and come check out the blog at Floatlearning.com. If mobile is new to you and you want to learn more about what kinds of mobile learning options are out there, download our free introduction to mobile learning, The Float Mobile Learning Primer.

About the Author

As managing director of Float Mobile Learning, Chad Udell strategizes with Fortune 500 companies and their learning departments to help deliver mobile learning to employees. Chad also works with universities and other learning organizations to develop their unique visions of where and how to use mobile learning. Chad's focus is on understanding an organization's business drivers and goals and then creating the strategy that can best deliver solutions. Chad is recognized as an expert in design and development, and he speaks regularly at national conferences on design, development and mobile learning. He has been a faculty member of Bradley University for more than five years.
