It used to be that we inferred the efficacy of training by measuring learning outcomes. We still do. Most often that means having SCORM content report into a learning management system (LMS), record what happened, and provide an audit trail. SCORM still does this; it’s not broken.
You don’t need to replace your SCORM content with xAPI content, nor will you replace your LMS with a learning record store (LRS). Instead, the relationship between xAPI and SCORM is an evolution, a bit like the Homo sapiens vs. Neanderthal debate. At first glance they seem similar. There’s even the occasional love child, but they are different species with the same ancestors.
The Experience API gives us the opportunity to move beyond measuring learning outcomes and focus instead on performance outcomes. Isn’t that what we wanted to do with learning outcomes anyway? We just couldn’t always measure performance, and we needed a proxy.
xAPI does this by leveraging all sorts of data from all kinds of platforms. Where SCORM was limited to highly specified packages of content launched from specific platforms, xAPI does away with this, enabling us to track the more social and informal interactions that learners have with content and platforms. It extends this support to activities outside the browser: think apps, or even real-world activities. And most importantly, it lends itself not to tracking learning, but to tracking experiences, whatever they may be.
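The unit of tracking in xAPI is the "statement": a simple actor, verb, object triple expressed as JSON, which is what lets it describe almost any experience. A minimal sketch follows; the verb identifier is a real ADL verb, but the actor and activity are made-up placeholders:

```python
import json

# A minimal xAPI statement: who (actor) did what (verb) to what (object).
# The verb id is from ADL's published verb list; the actor and activity
# identifiers below are illustrative only.
statement = {
    "actor": {
        "name": "Sally Glider",
        "mbox": "mailto:sally@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://example.com/widgets/product-guide-march",
        "definition": {"name": {"en-US": "March Widget Product Guide"}},
    },
}

print(json.dumps(statement, indent=2))
```

Because the object can be a course, an app screen, or a real-world activity, the same three-part shape covers formal, informal, and off-line experiences alike.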
Case in point
A business is going to open new retail stores and sell widgets. These widgets will be fast-changing, with a new widget being produced by a manufacturer and sent to market nearly every month. As a result, product knowledge is going to be an issue, but it will be tough to do face-to-face training on such a regular basis.
The business commissions an online project to provide resources to stores, delivering product knowledge as soon as it can get it out to them. How does the organization measure the efficacy of this training?
The traditional approach is for training to define learning outcomes that it believes translate into improved product knowledge. That means information, then a test that must be passed with a score of "X" percent or higher. But how does the business know whether that actually translates into improved sales? Learning outcomes only suggest that it will.
What if the business could track not only the learning resources it created, but also the sales that rang through the point-of-sale (POS) devices? It could then correlate the two data sources to suggest which resources were most useful in driving sales of particular widgets. To dig deeper, the business can analyze not just product knowledge, but behavior and skills:
- How often do sales reps check resources?
- What are the habits of really effective sales people?
- Do high performers access the training?
- Does training make a difference at all?
The business can even look at tracking the physical activity of its sales people (Fitbit has an API, for example) and matching that against performance to suggest whether physically active sales people are more effective than their peers.
These are the sorts of questions that enable the training function to shape strategy and tweak performance. But they aren’t the sort of questions you would ask your LMS. Your LMS gives you completion stats and audit trails. These are the questions you ask an analytics package.
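As a sketch of the kind of analysis an analytics layer makes possible, assume we have exported two streams of statements, resource views from the training portal and sales from the POS. (The data below is invented for illustration; a real system would query an LRS rather than hard-code triples.) Simply counting both per rep is enough to start asking whether the people who check resources are the people who sell:

```python
from collections import Counter

# Hypothetical exported statements as (actor, verb, object) triples,
# one stream from the training portal, one from the POS.
training_stmts = [
    ("sally@example.com", "experienced", "widget-guide-march"),
    ("sally@example.com", "experienced", "widget-guide-april"),
    ("bob@example.com", "experienced", "widget-guide-march"),
]
sales_stmts = [
    ("sally@example.com", "sold", "widget-x"),
    ("sally@example.com", "sold", "widget-x"),
    ("sally@example.com", "sold", "widget-y"),
    ("bob@example.com", "sold", "widget-x"),
]

views = Counter(actor for actor, verb, _ in training_stmts if verb == "experienced")
sales = Counter(actor for actor, verb, _ in sales_stmts if verb == "sold")

# Crude join on actor: resource views alongside sales, per rep.
for rep in sorted(set(views) | set(sales)):
    print(rep, views[rep], sales[rep])
```

The join key is the actor, which is exactly why xAPI insists on stable actor identifiers: two systems that have never heard of each other can still be correlated.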
Enter the LRS
A learning record store is a database that can store and retrieve data produced in xAPI format. In an ecosystem where training portals, point-of-sale devices, and other technology are all submitting “statements” on a moment-by-moment basis, the LRS is a key piece of infrastructure that requires high availability and high reliability.
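Those systems all talk to the LRS the same way: an HTTP POST of a statement to the LRS’s statements resource, with a version header. The `/statements` path and the `X-Experience-API-Version` header come from the xAPI specification; the endpoint URL and credentials below are placeholders, and the request is built but deliberately not sent:

```python
import base64
import json
import urllib.request

# Placeholder endpoint and credentials; a real LRS issues its own.
endpoint = "https://lrs.example.com/xapi"
auth = base64.b64encode(b"key:secret").decode("ascii")

statement = {
    "actor": {"mbox": "mailto:sally@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/widgets/product-guide-march"},
}

# The /statements resource and X-Experience-API-Version header are
# defined by the xAPI spec; Basic auth is one common option.
req = urllib.request.Request(
    endpoint + "/statements",
    data=json.dumps(statement).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.1",
        "Authorization": "Basic " + auth,
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted so this
# sketch runs without a live LRS.
print(req.get_method(), req.full_url)
```

Note that nothing in the request cares what kind of system is sending it; a POS device and a training portal submit statements identically.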
That’s not to say an LMS might not be both. Rather, it means that the load, and the style of data which must be stored in the LRS, is generally different to that stored in a typical LMS.
It’s important to note that the term “LRS” is quite specific. Where “LMS” can refer to a range of vastly different products, “LRS” refers to a distinct piece of software that adheres strictly to the xAPI specification for an LRS. There is essentially a recipe card for building an LRS; if a product omits part of the spec or tweaks it in some non-standard way, it’s not an LRS. And while the recipe is well-documented, executing it isn’t for the faint-hearted. It takes months of work to build a standards-compliant LRS.
Should my LRS be part of my LMS?
No one wants to buy a new piece of infrastructure; it’s a difficult decision that will have time and costs associated with it. Therefore, it’s attractive to think that an existing system can be upgraded to include the new functionality—for instance, adding an LRS component to your LMS.
However, I’m not convinced this is such a good idea. (Readers beware: I’m part of a team building an open source LRS, so I’m somewhat biased). There are many reasons for this, but at the core of the argument sit the issues of connectivity, analytics, and scale.
- Connectivity. An external LRS will come complete with features like OAuth 2.0, a standard that allows external applications to connect to data within a system and to insert new data from external systems. You don’t need to be working in the LRS to insert data. In fact, most users will never even see the LRS; they won’t have accounts, and they won’t log in to it. Accessing everything through an LMS remains a huge barrier to measuring performance.
- Analytics. Although analytics are not a part of the xAPI standard per se, they are the reason you might want an LRS. The part of the system that stores data is what allows it to conform to the specification. The ability to interpret this data is what makes an LRS a useful piece of software. An LMS is fundamentally not built around data analysis; it is built around courses and content. It’s not enough to have your LMS “store” xAPI data. That’s just the start. You are going to need some really powerful analytics to start answering performance questions.
- Scale. LRSs will need to store and process vast amounts of data. Where we used to get a handful of data points from a learner to the LMS, we’ve seen xAPI generate 60,000 statements from just a handful of individuals. This is an order of magnitude larger, and it dictates a different technology stack: typically a NoSQL data store, something no mainstream LMS currently available is built upon.
If you are interested in measuring the performance impact of learning, you will need an LRS. I believe that an LMS simply won’t cut it. Starting off isn’t always easy, though, and xAPI is a bit “chicken and egg” in nature.
You won’t need an LRS until you’ve got at least one system producing xAPI statements. But the second you have something producing xAPI statements, you’ll need an LRS to store them. The good news is you don’t need to get this right first time. LRSs are designed to be interoperable. You can run more than one at once and share data, or readily transfer data out from one to another.
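That interoperability rests on the spec’s statements resource, which pages its results: a GET returns a batch of statements plus a “more” URL for the next batch. Moving everything from one LRS to another can be sketched as a loop over those pages. Here `fetch` and `send` are stand-ins for real HTTP calls against the two LRSs, faked with an in-memory dict so the sketch is runnable:

```python
def transfer(fetch, send, start_url="/statements"):
    """Copy all statements from a source LRS to a destination LRS.

    fetch(url) returns a StatementResult as defined by the xAPI spec:
    {"statements": [...], "more": "<next-page-url, or empty when done>"}.
    send(statements) posts a batch to the destination LRS.
    """
    url = start_url
    copied = 0
    while url:
        result = fetch(url)
        send(result["statements"])
        copied += len(result["statements"])
        url = result.get("more", "")  # empty string: no more pages
    return copied

# A tiny in-memory stand-in for the source LRS's paged responses:
pages = {
    "/statements": {"statements": [{"id": 1}, {"id": 2}], "more": "/page2"},
    "/page2": {"statements": [{"id": 3}], "more": ""},
}
destination = []
n = transfer(pages.__getitem__, destination.extend)
print(n, destination)
```

Because every compliant LRS exposes the same paged resource, the same loop works regardless of which vendors sit at either end.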
Indeed, I believe you will need an LRS. Fortunately, there’s already a market developing for this technology. The ADL provides an open source LRS for development purposes. Learning Locker, a free open source LRS, is now available in pre-release for testing. You can use a Wax LRS developer account for free. And Rustici’s SCORM Cloud will convert existing SCORM content to xAPI and store it for you. You can even use some of their free tools to get started straight away.