Fall 2022
CTDO Magazine

Make Way for AI

Friday, October 14, 2022

Navigate the uncertainty surrounding this expanding technology to tap into it for TD good.

Autonomous vehicles, diagnostic medical imaging, financial fraud detection—those are just a few of the realistic, attainable use cases that recent advances in artificial intelligence have made possible. A multitude of AI-driven TD applications are already in the marketplace as well, and many organizations are eager to implement AI to engage employees, identify skills gaps, personalize learning, automate processes, and make better data-informed decisions.

AI has the potential to address some of the biggest challenges in TD, innovate workplace learning, and ultimately optimize human potential. However, the rapid, ever-changing development of AI inevitably brings confusion and anxiety to decision makers and practitioners alike. Without the relevant knowledge and skill set, we are left struggling to understand the marketplace and evaluate the various AI products on offer.

AI in practice

What makes AI difficult to understand is that there is no single agreed-upon definition. AI is an ever-evolving term that could mean different things in different domains to different people.

For the purpose of this article, I am defining it as technologies with the ability to perform tasks that would otherwise require human intelligence and capabilities, such as visual perception, speech recognition, and language translation.

Within the AI field, the consensus is that there are two types of AI: general and narrow. In the former, which remains a concept of science fiction, machines are envisioned to have broad cognitive abilities, including the ability to think, or at least convincingly simulate, all of a human's intellectual capacities.

Narrow AI refers to any system that focuses on performing one specific task or limited range of tasks that would require human intelligence. This form of AI may even surpass human abilities in those specific areas.

Current AI applications fall into this category. Keeping that in mind will help you manage expectations and ask relevant questions when engaging with AI vendors.

It is also important to recognize that AI's promise lies in the fact that it can not only simulate human intelligence in performing a specific task but also do so on the fly, at scale, and without overt user input or intervention.

For example, when a user interacts with a product recommender system, such as the music streaming service Spotify, the algorithms analyze millions of user preferences and song characteristics to provide in-the-moment song recommendations to that user without explicitly asking for their preferences.
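
To make the idea concrete, the Python sketch below shows one simple way a content-based recommender can work: represent each song as a small feature vector, infer a taste profile from what the user already plays, and surface the closest unplayed match. It is an illustration only; the songs, features, and scores are invented, and this is not Spotify's actual algorithm.

# Minimal sketch of content-based recommendation, assuming each song is
# described by a small feature vector (for example, tempo, energy, acousticness).
# Illustrative only; not Spotify's actual system.
import numpy as np

songs = {
    "Song A": np.array([0.8, 0.9, 0.1]),
    "Song B": np.array([0.2, 0.3, 0.9]),
    "Song C": np.array([0.7, 0.8, 0.2]),
}

# Build a taste profile from what the user already plays,
# without asking any explicit preference questions.
listening_history = ["Song A"]
profile = np.mean([songs[s] for s in listening_history], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Recommend the unplayed song most similar to the inferred profile.
candidates = {s: cosine(profile, v) for s, v in songs.items()
              if s not in listening_history}
print(max(candidates, key=candidates.get))  # -> "Song C"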

AI in talent development

Broadly speaking, AI applications for TD fall into four categories. However, those categories are not clear-cut, and there are many crossover functions.

Chatbots and conversational agents. Chatbots are software applications that mimic written or spoken human speech for the purpose of simulating a conversation with a real person. Businesses commonly use them for customer support and general customer communication.

In a TD context, chatbots can provide conversational answers and serve as a quick reference guide. Increasingly, they are also being used for coaching and performance support, presenting learning concepts through a series of conversations.

Chatbots can also potentially tap into various sources of information that are distributed across an organization and serve as a knowledge management tool.
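
As a simple illustration, a quick-reference chatbot can be as basic as matching keywords in a question against a curated knowledge base, as in the Python sketch below. The questions and answers are invented, and production chatbots typically rely on natural language models rather than keyword rules.

# Minimal sketch of a keyword-matching reference chatbot.
# The knowledge base entries here are invented examples.
faq = {
    ("vacation", "pto", "leave"): "Submit leave requests through the HR portal.",
    ("expense", "reimbursement"): "File expenses within 30 days via the finance system.",
    ("onboarding", "new hire"): "New-hire checklists live on the intranet under Onboarding.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keywords, reply in faq.items():
        if any(k in q for k in keywords):
            return reply
    return "I don't know yet. I'll route this to a human expert."

print(answer("How do I request vacation days?"))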

Content curation and recommendations. A lot of learning content people consume is personalized. Many learning platforms offer the ability to select, organize, and recommend material based on a learner's or group of learners' particular attributes.

Those attributes include knowledge levels, media preferences, prior experiences, job roles, and locations. Taking into account one or several attributes, the platform can push curated content to each learner or group.

Manual content curation is a time-consuming and tedious process. By using AI algorithms, a TD team can source, process, and combine relevant content in many ways to provide a personalized experience for different learners.
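
A minimal sketch of that idea, assuming a catalog whose items are tagged with the same attributes as the learner profile: score each item by how many attributes match and push the best matches to the learner. The field names and the simple unweighted scoring are illustrative assumptions, not any specific platform's schema.

# Sketch of attribute-based content curation: rank catalog items by how
# well their tags match a learner's profile. Fields and data are invented.
learner = {"role": "sales manager", "level": "beginner", "media": "video"}

catalog = [
    {"title": "Coaching Conversations", "role": "sales manager",
     "level": "beginner", "media": "video"},
    {"title": "Advanced Forecasting", "role": "sales manager",
     "level": "advanced", "media": "article"},
    {"title": "Python for Analysts", "role": "data analyst",
     "level": "beginner", "media": "video"},
]

def score(item, profile):
    # One point per matching attribute; a real system would weight these.
    return sum(item.get(k) == v for k, v in profile.items())

recommended = sorted(catalog, key=lambda item: score(item, learner), reverse=True)
for item in recommended[:2]:
    print(item["title"])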

Adaptive learning. Previously known in the academic world as intelligent tutoring systems, adaptive learning systems have evolved into more commercial platforms with learning paths and assessments. 

A learning path is a sequence of courses or learning material that enables learners to progressively build their knowledge. Personalized learning paths are either pre-generated based on aspects such as job role, place in the organization chart, and required competencies, or they change dynamically as AI algorithms respond to learners' progress (their successes, misconceptions, and misses), their interests, or other criteria. The system then provides step-by-step personalized instruction based on the learning path.
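
The sketch below illustrates the dynamic case in miniature: the next module a learner sees depends on their latest assessment score. The thresholds, module names, and remediate-or-accelerate rules are assumptions for illustration, not a particular vendor's logic.

# Sketch of dynamic path adaptation driven by assessment results.
path = ["Basics", "Practice Scenarios", "Advanced Applications"]

def next_module(current_index: int, score: float) -> str:
    if score < 0.6:
        return path[current_index]          # remediate: repeat the module
    if score > 0.9 and current_index + 2 < len(path):
        return path[current_index + 2]      # accelerate: skip ahead
    return path[min(current_index + 1, len(path) - 1)]  # normal progression

print(next_module(0, 0.95))  # -> "Advanced Applications"
print(next_module(0, 0.45))  # -> "Basics"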

Learning analytics. A relatively new yet growing field, this is the measurement, collection, analysis, and reporting of data about learners, learning experiences, and learning programs for the purpose of understanding and optimizing learning and its impact on an organization's performance.

AI technologies can analyze patterns, create models, and predict learner behaviors and their performance outcomes. In some instances, AI-powered analytics offer an effective way to analyze learner engagement data, identify patterns that suggest where content should be rewritten or completely redesigned, and flag learners who are failing to complete a course or learning activity so that they can receive additional support.
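
As an illustration of the predictive side, the sketch below trains a simple logistic regression (using the scikit-learn library) on invented engagement data to flag learners at risk of not completing a course. A real model would need far more data, validation, and bias checks than this toy example suggests.

# Sketch of predictive learning analytics: flag at-risk learners from
# engagement data. The data and thresholds here are invented.
from sklearn.linear_model import LogisticRegression

# Features: [logins per week, share of activities done, average quiz score]
X = [[5, 0.9, 0.85], [1, 0.2, 0.40], [4, 0.7, 0.75], [0, 0.1, 0.30]]
y = [1, 0, 1, 0]   # 1 = completed the course, 0 = dropped out

model = LogisticRegression().fit(X, y)

# Score a current learner whose engagement looks low, for early support.
at_risk_probability = model.predict_proba([[1, 0.25, 0.5]])[0][0]
print(f"Risk of non-completion: {at_risk_probability:.0%}")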

Challenges and limitations

Although there are many solutions in the market, vendor-supplied AI technologies remain opaque and difficult to evaluate. Moreover, potential impacts and consequences of AI applications in TD are not yet fully researched and understood. To fully embrace AI's potential, we must examine some of its challenges and limitations.

Costs and time. In designing adaptive learning, granularity is an issue. Learning designers and developers need to figure out how much adaptation to provide to the learners.

Will you adapt at the curriculum, course, or module level? Will you adapt per activity or scenario?

A learning content repository requires rigor: content must be constantly updated and its usage monitored, even more so when you factor in multiple pathways. You must also regularly check rules and assumptions to make sure they are still valid.

Often, we get caught up in thinking adaptivity is always the way to go, but is it worth the opportunity cost? What are other alternatives?

Black box algorithm. That refers to the general inability to look inside a system and see how it arrives at a decision.

Many AI platforms do not provide the ability for users to access key algorithms and evaluate whether a particular decision—such as materials recommendation or content adaptation—is appropriate. That is exacerbated by the use of proprietary algorithms and the difficulty for average users to understand them.

What's worse, users often have no way to opt out or edit those choices. What happens if the recommendation or prediction is inappropriate or inaccurate? That could potentially demotivate users and harm the organization's credibility.   

Trust. In the process of measuring learning and understanding how and what to adapt to individual learners, companies and platforms are collecting a huge amount of personal data. At the same time, ownership and governance of data are often ill-defined or not defined at all for many organizations, which may instill a feeling of mistrust among employees.

Companies are collecting data without clear and conscious consent from learners. In today's surveillance culture, people don't trust machine learning algorithms, especially when the decision-making process behind the scenes lacks transparency and accountability.

What happens when the prediction goes wrong? Should you always make recommendations to learners even when they don't prefer it? Should you predict learning behavior just because you can? Who owns the data?

Those are questions that you need to consider and clarify, especially when workplace learning records are potentially linked to performance reviews.

Privacy and lack of regulations. There are few consistent guidelines or regulations for addressing the privacy and ethical issues that use of AI in the workplace raises.

Minimally, people should have the right to access, manage, and control the data they generate. At a national and international level, the European Union's General Data Protection Regulation is a well-known example of a recent data privacy regulation.

However, at an organization level, efforts to safeguard user privacy and mitigate unintentional bias are much more ad hoc. Companies need to seek informed input from a range of stakeholders to create and enforce centralized policies and software implementation guidelines that include safeguarding data, communicating with appropriate transparency, and giving users opportunity for consent and control over their information.   

TD involvement

To leverage AI at the system level, TD professionals need to actively engage in the decision-making process, seek diverse input, and take concrete actions. To start, take the following steps.

Learn about AI and data. First and foremost, expand your AI knowledge. Learn about the commonly used techniques, terminology, and data literacy basics.

Can you identify and examine the type of data available at your organization, especially data collected from learning activities and on-the-job performance? Do you know how AI products make use of the data set? How is the data collected and analyzed? And who owns and controls the data?

Advocate for data policy, guidelines, and governance. For AI to realize its promise of enhancing learning, it will require people to trust the technologies and know that they can mitigate bias and errors.

Become aware of issues relating to the ethical and responsible use of data, and advocate for a framework for AI ethics and data privacy in your company. Ask questions such as: What kind of bias-detection mechanisms are in place to mitigate the risk of data bias? Can users view, edit, and update the algorithm or, in some instances, opt out of the data-collection process altogether?

Engage with the procurement process. When your organization evaluates AI applications, get involved; do your market research; and don't be afraid to ask critical questions, such as: What kind of data is required? How can we use it?

Determine whether your organization has a solid business case—not a learning case—for articulating how you will get and use the AI-produced results.

It takes a significant amount of data to get real value from AI applications. Will you be able to source that volume of data to make it worth using AI?

With more data availability, better algorithms, and faster computing power, AI will continue to mature and will have a significant impact on TD in the coming years. As a TD leader, you play a critical role in shaping and guiding AI tool development. Begin that work now to ensure that AI applications are equitable, ethical, and effective while mitigating potential harm, risks, and bias.

About the Author

Stella Lee is the chief learning strategist at Paradox Learning. She has more than 20 years of progressive international experience consulting on digital learning initiatives with higher education, government, nongovernmental organizations, and the private sector. Today her focus is on enterprise-wide learning strategy and governance, digital ethics for learning, artificial intelligence and e-learning applications, learning management system design, evaluation, and learning analytics.

Stella has served as subject matter expert in evaluating e-learning standards for the United Nations’ International Atomic Energy Agency and conducted postdoctoral research with iCore Research Lab at Athabasca University, Canada’s Open University.

She has a doctorate in computer science with a focus on adaptive learning technology from University of Hertfordshire. Stella serves as Canada’s startup advisor and is the technology columnist for Training Industry Magazine.
