ATD Blog

Readying Global Learning Leaders for Artificial Intelligence

Thursday, September 7, 2017

Artificial intelligence (AI) is the buzz phrase du jour. Both the general and business press are touting new advances while circulating horror stories. Corporate leaders are debating the merits. Every supplier is talking about how AI is being baked into their products. However, we've seen such hype before. The real question, to paraphrase Gertrude Stein, is: Is there any "there" there?

To evaluate that question, we need to understand what AI is, what it can (and can't) do, and what it means for work and for L&D. To start, however, we need a clear understanding of intelligence.

Intelligence has two major components: doing and learning. If a system (human, animal, or otherwise) can evaluate complex situations and make correct choices, that's observably intelligent behavior. Equally, the ability to learn, modifying behavior based upon past experience, is a mark of intelligence. Thus, intelligent behavior can be built in or acquired.

AI Overview

AI has two main approaches. Given that AI is artificial intelligence, we had to build it based upon what we knew of intelligence. Our early models came from the cognitive revolution, a reaction to the then-dominant behaviorist paradigm. That paradigm held that we couldn't know what goes on in the brain, so it treated the mind as a black box with inputs and outputs and simply tried to predict the outputs.

The cognitive revolution posited that we could, in fact, understand how our brains worked. The models we created were formal logical systems—the way we believed our own brains worked. There were explicit representations of the world and rules that operated on those representations. In fact, the rules themselves could be operated on by rules. So these systems could be crafted to do smart things, and they could learn.
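
To make this concrete, here's a minimal sketch of a symbolic system in Python. The facts, rules, and medical flavor are purely illustrative; real expert systems were far richer, but the shape is the same: explicit representations plus rules that operate on them.

```python
# A minimal symbolic system: explicit facts, plus hand-crafted rules
# that draw conclusions from them.
facts = {"temperature": 101.5, "cough": True}

rules = [
    # Each rule pairs a condition over the facts with a conclusion.
    (lambda f: f["temperature"] > 100.4, "patient has a fever"),
    (lambda f: f.get("cough") and f["temperature"] > 100.4, "possible flu"),
]

def infer(facts, rules):
    """Return every conclusion whose condition holds for the facts."""
    return [conclusion for condition, conclusion in rules if condition(facts)]

print(infer(facts, rules))  # ['patient has a fever', 'possible flu']
```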

Such systems worked in well-defined domains. However, ongoing research suggested that the emergent behavior from our cognitive architecture wasn’t quite as logical as we thought. There are consistent errors in the ways our brains work that are clearly different from formal logical reasoning, such as stereotyping, functional fixedness, set effects, confirmation bias, and limited working memory. In short, we are much more a product of the current context than formal logic would support. We needed a better model.

In several flavors (connectionist networks, genetic algorithms, fuzzy logic), efforts were made to generate more human-like behavior from programmed systems. These systems were termed sub-symbolic, as they seemed to operate in ways that weren't easily ascribed to clean semantics. In small applications, their behavior was indeed more human-like. They weren't handcrafted, however; instead, they learned by being exposed to sets of data and receiving feedback.
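
As an illustration of the contrast, here's a toy sub-symbolic learner: a single perceptron that acquires the logical AND behavior purely from data and feedback. Nothing here comes from a particular system; the data, learning rate, and epoch count are arbitrary choices for the sketch.

```python
# A toy sub-symbolic learner: behavior comes from exposure to data
# and feedback, not from hand-written rules.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                         # repeated exposure to the data
    for (x1, x2), target in data:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output             # feedback on success or failure
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

# The learned "rule" lives in the numeric weights, not readable symbols.
print(weights, bias)
```

Note that the result is just three numbers; the cleanly stated rule is nowhere to be found, which foreshadows the interpretability problem discussed below.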

Today, both areas offer complementary capabilities. In domains where we can create formal models to explain behaviors, we can build symbolic systems that represent appropriate responses. However, these systems tend to be limited to specific domains, and don’t rise to the ultimate status of “general intelligence.”

Sub-symbolic models, on the other hand, learn from sets of data with feedback, and develop the ability to deal with patterns in such data. They can successfully handle domains we may not fully understand, though their responses can be equally opaque to human oversight. They can be trained to recognize emotion and other behaviors that appear more general, but they are limited by which factors are chosen as important.

A downside of the increasingly prevalent sub-symbolic models is that whatever rules they develop aren't easily comprehensible. We have to make up explanations for their behavior, just as we do for our own reasoning.

With increasing processing power, we’re able to run both systems side by side. Indeed, one of the most promoted systems, IBM’s Watson, integrates separate systems and determines which output makes the most sense from each. There are arguments that such systems are just search. The question then becomes one of how these systems can (and should) be used.

AI and Work

The benefits of artificial intelligence systems are that when they operate, they can do so without fatigue, bias, or error. When they are trained and validated, their performance can be predictable, which cannot be said about humans. On the other hand, they have little capability to go beyond what they’ve been trained to do. They can learn from feedback, and even incorporate some randomness, but they are limited in how far they can stretch.

Handcrafting logical rules, however, is on the downswing; increasingly, we're training computers equipped with learning algorithms to do tasks. With large sets of data, if we can characterize the inputs and provide feedback on success or failure, these systems can learn to perform the task on new data. For instance, we can train systems to read visual patterns in things like X-rays and detect anomalies, and they can achieve human-level ability or greater.
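
The train-then-validate pattern looks roughly like this sketch, which uses scikit-learn and synthetic data as a stand-in for real image features and labels (no actual X-ray data is involved):

```python
# A sketch of training on labeled data and checking performance on
# data the model has never seen. Features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))      # stand-in for extracted image features
noise = rng.normal(scale=0.5, size=1000)
y = (X[:, 0] + X[:, 1] + noise > 0).astype(int)  # noisy "anomaly" labels

# Hold out data the model never saw, so performance is measurable.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)  # learn from feedback
print("accuracy on new data:", model.score(X_test, y_test))
```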

The real way to look at it is that computers can automate many tasks that previously required humans. They can do things well that we can't: process complex calculations quickly and reliably, and remember arbitrary details with perfection. On the other hand, it's hard to get insight from them, and they can't act usefully outside their training.

AI is being used for natural language recognition and response, as in chatbots. It's also helping with troubleshooting, and robot control is another common application. AI systems are becoming able to comprehend streams of information, such as news feeds, and either select the relevant bits or summarize them. And they are being used to make decisions in finance, medicine, aviation, and more.

That, of course, is a worry for people whose jobs could be replaced. These systems are also quite brittle, in that they can't handle what's outside their training. Unlike humans, they have no broad background of experience to apply. Efforts to build such background knowledge have taken decades and have yet to produce a complete set.

Another view is that rather than replace people, we need to look at how to get the best out of each. An effort called intelligence augmentation is focused on just this: incorporating the best from each into systems that work better than either alone.

AI and L&D

So what does this mean for L&D? Several things. For one, AI is a tool to assist the organization in executing optimally. Pairing humans with smart tools is a task for someone, and given that L&D is supposed to be the part of the organization that solves performance problems, there's a strong argument that the task belongs to L&D. Here are some basic ways in which AI can be used:

Tools like chatbots that answer questions are one opportunity. They can help employees or customers with questions about policies and procedures, letting learners help themselves or assisting the support staff who serve those customers.
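
Even a crude sketch shows the shape of such a tool. The policies and keywords below are invented for illustration; production chatbots use far richer language models, but the core loop of matching a question to an answer (and escalating to a human when unsure) is the same:

```python
# A minimal policy-question bot: keyword matching over a hand-built FAQ.
faq = {
    ("vacation", "leave", "pto"): "Submit leave requests in the HR portal.",
    ("expense", "reimbursement"): "File expenses within 30 days of purchase.",
}

def answer(question: str) -> str:
    words = question.lower().split()
    for keywords, reply in faq.items():
        if any(k in words for k in keywords):
            return reply
    return "I'm not sure; let me route you to a person."

print(answer("How do I get an expense reimbursed?"))
```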

AI can assist in adaptive learning. Certainly for rote knowledge and well-defined domains, intelligent tutoring systems have proven effective. If you can build a model of expert performance, it can be used to counsel learners when they're off track. This could (and should) be applied to learning skills as well!
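
Here's a stripped-down sketch of that idea: compare a learner's steps against an expert model and offer counsel at the first divergence. The domain (two-digit addition) and the hints are invented for illustration:

```python
# Compare a learner's steps to an expert model; coach at the first gap.
expert_steps = ["add ones", "carry", "add tens"]
hints = {
    "add ones": "Start with the ones column.",
    "carry": "Carry when the ones sum exceeds 9.",
    "add tens": "Add the tens column, including any carry.",
}

def coach(learner_steps):
    for expected, actual in zip(expert_steps, learner_steps):
        if actual != expected:
            return f"Off track at '{actual}': {hints[expected]}"
    return "On track so far."

print(coach(["add ones", "add tens"]))  # learner skipped the carry step
```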

AI can look for patterns. If you're tracking rich data, such as through xAPI, it can surface correlations that humans might miss. These patterns can also be used to make categorizations, discriminate between situations, and tie actions to those decisions.
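
For instance, with xAPI-style records in hand, even a one-line statistic can surface a relationship worth investigating. The records and field names below are made up for illustration:

```python
# Correlate practice activity with assessment scores across learners.
from statistics import correlation  # available in Python 3.10+

records = [
    {"learner": "a", "practice_attempts": 2, "score": 55},
    {"learner": "b", "practice_attempts": 5, "score": 70},
    {"learner": "c", "practice_attempts": 9, "score": 88},
    {"learner": "d", "practice_attempts": 4, "score": 62},
]

attempts = [r["practice_attempts"] for r in records]
scores = [r["score"] for r in records]
print("correlation:", round(correlation(attempts, scores), 2))
```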

These are in play now, but new abilities are likely on the horizon:

  • parsing material and generating questions or being able to answer them
  • reviewing complex work products and providing a learning assessment
  • systems that can self-explain and make recommendations for use.

There are issues to be considered as well. Using AI in these roles will likely cross the boundaries of the organization, and responsibilities will need to be shared. As an ethical consideration, should machines replace humans, and if they do, what do we do to help those who are replaced? These systems are also highly dependent on their training sets; how do we validate what they do? And when does the human touch supersede accuracy? Not every situation requires the answer so much as a willing ear.

We're at an inflection point, where AI can replace people or augment them. There are business issues, and moral ones as well. We must decide soon what path we're going to follow. I think we should work backward from the task and figure out what can be in the world (in AI and non-AI tools) and what should be "in the head"; that is, what humans do. That is a role L&D should take the lead on. Are you ready?

Want to Learn More?

For a closer look at the impact AI is having on L&D, join me October 26-27 at the ATD 2017 China Summit. We will review AI technologies, discuss the implications and trade-offs, and explore paths forward. Come see the future of organizational learning.

About the Author

Clark Quinn, Ph.D., is the Executive Director of Quinnovation, Co-Director of the Learning Development Accelerator, Chief Learning Strategist for Upside Learning, and a co-author of the Serious eLearning Manifesto. With more than four decades of experience at the cutting edge of learning, Dr. Quinn is an internationally recognized speaker, consultant, and author of seven books. He combines a deep knowledge of cognitive science with broad experience in technology to create strategic design solutions that achieve innovative yet practical outcomes for corporations, higher education, not-for-profits, and government organizations.
