
ATD Blog

The Ethical Tightrope: Navigating AI in the Training Industry

Wed Nov 13 2024



Imagine a world where your next job interview is conducted by artificial intelligence (AI) that analyzes your every blink, twitch, and vocal inflection. Now picture that same AI deciding whether you’re fit for a promotion based on your learning style. Does this sound like science fiction? Not quite. Welcome to the brave new world of AI in the training industry, where the line between innovation and ethical quagmire is as thin as a microchip.

As AI seeps into every corner of our lives, from weather apps to our social media feeds, one industry stands at a critical crossroads: learning and development. Here, the promise of personalized learning experiences collides head-on with concerns about privacy, bias, and the very nature of human learning. Are we on the brink of an educational revolution, or are we unknowingly coding the biases of today into the learners of tomorrow?

AI systems are only as unbiased as the data they’re trained on and the humans who design them. In 2019, HSBC teamed up with Talespin to create a VR program for soft skills training, but it hit some bumps when it rolled out globally. The AI, primarily trained on Western expression datasets, consistently misinterpreted common nonverbal cues:

  • In Hong Kong, the AI was confused by indirect Chinese communication styles. When an employee said, “We might want to consider another approach,” the AI read it as uncertainty when it was actually a polite way of disagreeing.

  • In the Middle East, the AI didn’t recognize the importance of the right hand in greetings and gestures. Using the left hand, which is considered impolite in many Middle Eastern cultures, wasn’t flagged as a faux pas.

  • In the UK, “That’s not bad” means something is quite good. However, the AI interpreted it as lukewarm approval rather than positive feedback.

The VR scores didn’t match up with real-world performance, and HSBC didn’t just shrug it off. They brought in cultural experts and threw in some extra training on cross-cultural communication. They also made sure humans were keeping an eye on things, just in case the AI missed some cultural nuances.

HSBC’s story shows how tricky it can be to use AI for soft skills training across different cultures. But it also proves that with some tweaks and a willingness to learn, you can turn those challenges into some valuable insights for global business.

Privacy is another major concern.

In the training industry, AI systems often collect vast amounts of data on learners’ behavior, preferences, and performance. While this data can be used to improve learning outcomes, it also poses significant privacy risks if not handled properly.

To address concerns:

  • Strict data protection measures must be put in place to safeguard learners’ privacy.

  • Organizations should give learners control over their data, including the right to access, correct, and delete their information.

  • Regular audits should be conducted to ensure ongoing compliance with these protections.
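To make the second point concrete, here is a minimal sketch of what learner data control could look like in code. All the names are hypothetical and not tied to any particular learning platform; the point is simply that access, correction, and deletion are explicit operations a learner can invoke.

```python
# Minimal sketch of learner data controls: access, correct, delete.
# All names are hypothetical, not tied to any specific LMS or vendor.

class LearnerDataStore:
    def __init__(self):
        self._records = {}  # learner_id -> dict of stored attributes

    def collect(self, learner_id, **attributes):
        """Record training data about a learner."""
        self._records.setdefault(learner_id, {}).update(attributes)

    def access(self, learner_id):
        """Right of access: return a copy of everything held on a learner."""
        return dict(self._records.get(learner_id, {}))

    def correct(self, learner_id, field, value):
        """Right to rectification: let learners fix inaccurate data."""
        if learner_id in self._records:
            self._records[learner_id][field] = value

    def delete(self, learner_id):
        """Right to erasure: remove all data held on a learner."""
        self._records.pop(learner_id, None)
```

In a real system each of these operations would also be authenticated and logged, which is exactly what the audit bullet above is meant to verify.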

Next is the issue of transparency.

Many AI algorithms, particularly those using deep learning, make it difficult to understand how they arrive at their decisions. If an AI system recommends a certain learning path or makes an assessment of a student’s abilities, both educators and learners should be able to understand the reasoning behind these decisions.

To improve transparency:

  • AI developers should prioritize creating interpretable models that can provide clear explanations for their decisions.

  • Organizations using AI in training should provide clear documentation on how their AI systems work and make decisions.
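As a toy illustration of the kind of explainability these bullets describe, here is a sketch of a rule-based learning-path recommender that returns its reasoning alongside every decision. The thresholds and track names are invented for illustration, not drawn from any real product.

```python
# Toy sketch of an interpretable learning-path recommendation.
# Rules, thresholds, and track names are invented for illustration;
# the point is that every decision carries a human-readable explanation.

def recommend_path(quiz_score, modules_completed):
    """Return (recommendation, explanation) so both learners and
    educators can see exactly why a path was suggested."""
    if quiz_score < 60:
        return ("foundations review",
                f"Quiz score {quiz_score} is below the 60-point threshold "
                "for advancing, so foundational material is recommended.")
    if modules_completed >= 5:
        return ("advanced track",
                f"Score {quiz_score} meets the threshold and "
                f"{modules_completed} modules are complete (5 required).")
    return ("core track",
            f"Score {quiz_score} meets the threshold, but only "
            f"{modules_completed} of 5 required modules are complete.")
```

A deep-learning model would need separate explanation tooling to produce anything comparable, which is why interpretable-by-design models are often preferred in educational settings.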

To further improve AI in training, organizations should ensure greater diversity in AI development teams, bringing a wider range of perspectives that can reduce bias in AI systems. Those systems should also be rigorously tested for bias before they are deployed in educational settings, and continuously monitored afterward.
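One concrete form such pre-deployment bias testing can take is a disparate-impact check on assessment outcomes. The sketch below compares pass rates across demographic groups and flags any group whose rate falls below four-fifths of the best-performing group's, a common heuristic borrowed from US employee-selection guidance. The data and group labels are invented example values.

```python
# Minimal sketch of a pre-deployment bias audit for an AI assessment.
# Pass rates are compared across demographic groups; a ratio below 0.8
# (the common "four-fifths rule" heuristic) flags possible disparate
# impact. Group labels and outcomes are invented example data.

def audit_pass_rates(outcomes):
    """outcomes: list of (group, passed) pairs.
    Returns (pass_rate_by_group, flagged_groups)."""
    totals, passes = {}, {}
    for group, passed in outcomes:
        totals[group] = totals.get(group, 0) + 1
        passes[group] = passes.get(group, 0) + int(passed)
    rates = {g: passes[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = [g for g, r in rates.items() if best > 0 and r / best < 0.8]
    return rates, flagged
```

A flagged group does not prove the system is biased, but it is a signal that the model and its training data deserve a closer look before rollout, the kind of check HSBC's experience above argues for.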

By addressing these concerns, we can harness AI’s power to improve training outcomes while protecting learners’ rights and ensuring fairness. As educators, technologists, and lifelong learners, we have a collective responsibility to shape an AI-driven future where technology augments human intelligence.

Ultimately, this journey isn’t just about smarter machines, but about nurturing smarter, more capable humans. Let’s embrace this challenge with open minds and an unwavering commitment to ethical progress.



Copyright © 2025 ATD
