Imagine you logged into your computer this morning and found a new widget flashing in the corner of the monitor. You don't see an obvious way to interact with this little box-like widget, as it doesn't have any fields or buttons.
After a few clicks, a message appears that says the widget is your new personal information assistant. You're not really sure what that means, but the message assures you that you'll love having an assistant and information whenever you want them. So, you proceed.
First, you open your email. The box in the corner begins explaining a new feature in your mail program that was released last night after you quit working. After reading a message about the price of wheat in China, you open your search engine. Before you can type anything, the box has filled with the market prices of wheat in all of Asia. Although it's possible that's not the information you were going to search for, it's a good assumption.
This would be amazing, right?
I mean, we're talking about a “magical” box that would tell you all of the things it suspects you need to know. It would do this before you consciously realize the need for supporting information, and it would prioritize finding that information over other tasks immediately at hand. In short, it would be a tool that follows along with your work and provides the right contextual information just when you need it.
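To make the idea concrete, here is a toy sketch of that "follow along and surface the right content" behavior. Everything in it is invented for illustration: the content library, the keywords, and the matching rule are stand-ins, and a real adaptive system would use far richer signals than keyword overlap.

```python
# Toy sketch of the "magical box": watch the words in the user's
# current activity and surface the most relevant content chunk
# before they ask. All names and data here are hypothetical.

from collections import Counter

# A tiny "content library" of granular chunks, keyed by topic keywords.
CONTENT_CHUNKS = {
    ("wheat", "price", "asia"): "Current wheat market prices across Asia",
    ("email", "feature"): "What's new in your mail program",
    ("payroll", "steps"): "How to complete your payroll setup",
}

def suggest(context_words):
    """Score each chunk by keyword overlap with the user's current
    context; return the best match, or None if nothing overlaps."""
    words = Counter(w.lower() for w in context_words)
    best, best_score = None, 0
    for keywords, chunk in CONTENT_CHUNKS.items():
        score = sum(words[k] for k in keywords)
        if score > best_score:
            best, best_score = chunk, score
    return best

# Reading an email about the price of wheat in China...
print(suggest(["price", "of", "wheat", "in", "China"]))
# -> Current wheat market prices across Asia
```

The point of the sketch is not the matching logic, which is deliberately trivial, but the shape of the problem: the assistant is only as good as the granularity of the content library it draws from, which is exactly the hurdle the rest of this post is about.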
The importance of this type of technology is backed up by Alex Kirlik's work in human-computer interaction. Kirlik says in his book, Adaptive Perspectives on Human-Technology Interaction, "…due to inherent uncertainty in the performer's task environment... training is insufficient to improve judgment performance. Here, performance can only be improved by enhancing the overall reliability of the proximally displayed information (e.g. by improving or adding sensor or display technology)…."
Obstacles to adaptivity
We’re years away from this kind of “proximate” support being a reality. Sure, there are technology companies working on adaptivity that could support tools like this today, but the bigger hurdle is that the typical organizational learning group is not ready to take advantage of it.
There’s a lot of work that goes into feeding the algorithms that lead to really smart recommendations. This hurdle is a little lower in formalized education than other learning environments.
In formalized education, there's a predetermined set of things that children are supposed to know, and there's a good amount of research pushing people toward a particular way of teaching them. This leads to a large amount of content that consistently supports particular learning objectives. I'm not stating that formal education has the right learning objectives to prepare a child for the world they are going to enter, but I am saying that the consistency in content requirements makes adaptivity easier.
In real work, the complexity multiplies. There are two huge differences between formal education and the working world: the variation in job roles and the variation in the work to be done by people in those roles. Learning technologies will be continuously developed to get smarter about accounting for such fuzziness, but until then we can make it easier to work with many different new technologies as they arrive.
Here are three steps we all need to take to get ready for better, smarter support by the technology we choose to use.
Step #1: Who could benefit from knowing what? The content required to support a person in each role varies drastically. There are a few pieces of information that help people move forward with their work and need to be widely known, like the company mission, the steps each employee must complete in the payroll system to ensure they get a check, the performance review process, or details of the product or service roadmap (and yes, compliance).
This type of companywide information is generally documented and has a generous amount of content available to support it. Outside of that, what do people in each role really need to know? This is a solid research question that has to be answered in depth before working on the next two items.
Step #2: Separate content from technology. The amount and structure of content need to be carefully reviewed, and clear content strategies need to be put in place. There's a lot more to this, and I'll be talking about it in my presentation at ASTD 2014 TechKnowledge in January. But the bottom line is that most content libraries are not granular enough to deliver the right bits of information effectively.
Consider the example at the beginning of this post with the email about wheat prices. Imagine that the box delivered a one-hour e-learning course on the history and current state of wheat pricing, instead of just a list of current prices. Now, arguably, you may not want either, but you're definitely more likely to read the list of prices than you are to flip through an hour-long course.
Step #3: Assessment is not done in an LMS or assessment system. Adding to the complexity, performance assessment is rarely clearly and consistently defined in real work. That's hard to do, and even when it is defined, we still need to check whether the assessment is "meaningful." By this, I mean asking whether the performance being assessed, and the way it's being assessed, makes sense in the context of both the work being done and how the assessment is happening.
It would be silly to deliver traditional assessments to people while they’re doing their jobs. It is (cynically) funny to imagine a “test your knowledge” box opening up in the middle of driving a tractor. In corporate learning, if we want people to keep working, we need to rely on meaningful measures of actual work performance as an assessment.
We're on the edge of some radical and exciting changes. Even if you have no intention of using adaptive technology in your organization, there are universal truths in the three items above that will help prepare your learning group for the next challenge you encounter.