ATD Blog

Why AI-Generated Art Is Missing the Mark for People With Disabilities

Tuesday, September 19, 2023

Artificial intelligence (AI), particularly in the realm of instructional design, has revolutionized how we access and create resources. With the advent of image generators, visuals can be summoned almost instantly, enriching our educational content. But like any tool in its infancy, AI image generation has its pitfalls, especially regarding inclusivity.

The Problem

For a community already navigating the challenges of proper representation, it's discouraging to find that AI-generated images often struggle to accurately represent people with disabilities. The quality of these images lags well behind that of their able-bodied counterparts, likely because the training data contains comparatively few images of people with disabilities for the AI to learn from.

A request for images of deaf individuals, for example, can yield an array of inaccurate and, frankly, insulting depictions: individuals with exaggerated and bewildered facial expressions, seemingly mimicking sign language. Similarly, a prompt for people with hearing aids or cochlear implants might produce images of individuals with overly futuristic cybernetic appendages or, just as bewilderingly, ordinary headphones. Although cochlear implants can technically be classified under cybernetics, the exaggeration in AI-generated images is both inaccurate and misleading.

Likewise, generating images of blind individuals presents its own set of challenges. Instead of accurate representations, AI tools tend to overuse sunglasses, perpetuating the stereotype that all blind people wear them. Imagery of unusual "seeing-eye" canes that appear fused into the person's limb or body further illustrates the AI's misunderstanding. The same misrepresentation appears in AI-generated images of individuals in wheelchairs. Too often, these images depict a bizarre amalgamation of human and wheelchair, resulting in half-human, half-wheelchair hybrids.


Even seemingly straightforward requests, like generating younger individuals with canes, produce inconsistent results. Older people with canes are depicted somewhat more accurately, suggesting that the AI's training data heavily favors them over younger individuals.


The Solution

The solution lies in understanding the mechanics behind image generation. Central to this process are "tokens," the units into which an image generator breaks down a prompt; each request has a limited token budget. An overabundance of keywords can confuse the AI, producing the "mishmash monstrosities" that sometimes result. This contrasts starkly with text-based AI models, like ChatGPT, which thrive on detailed instructions. In the world of AI-generated images, less is more. Some image generators, such as Midjourney, even provide commands like "/shorten" to pare a prompt down to its most influential tokens.

In my own experience, understanding tokens has played a pivotal role. By generating more than 10,000 diverse clipart images, I discovered the strategy of separating the creation of characters from the image backgrounds. Around the 6,000th image, I shifted to generating characters against a plain white background. Dedicating all the tokens solely to the character’s creation resulted in images, particularly those of people with disabilities, of a significantly higher quality. Later, these images could be seamlessly integrated with separately generated backgrounds.
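The separation strategy described above can be sketched in a few lines of code. This is a minimal illustration of the idea, not the author's actual pipeline: images are modeled here as grids of (R, G, B) tuples, and any near-white pixel in the character image is assumed to belong to the plain backdrop. A real workflow would use an image library such as Pillow, but the compositing logic is the same.

```python
WHITE_THRESHOLD = 240  # channels at or above this count as the plain backdrop


def is_backdrop(pixel, threshold=WHITE_THRESHOLD):
    """Treat near-white pixels as the plain background behind the character."""
    return all(channel >= threshold for channel in pixel)


def composite(character, background):
    """Overlay a character (generated on a plain white backdrop) onto a
    separately generated background.

    Both arguments are equal-sized 2-D grids of (R, G, B) tuples. Character
    pixels replace background pixels unless they are near-white, in which
    case the background shows through.
    """
    return [
        [bg_px if is_backdrop(ch_px) else ch_px
         for ch_px, bg_px in zip(ch_row, bg_row)]
        for ch_row, bg_row in zip(character, background)
    ]
```

Because the character is rendered against a uniform backdrop, no manual masking is needed: the same two images can be recombined with any number of backgrounds after the fact.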

While AI’s current representation of people with disabilities leaves much to be desired, understanding the mechanics behind it can guide us toward more accurate and inclusive results. As the industry evolves, it’s imperative to advocate for richer, more diverse training data sets. In the interim, optimizing our use of current tools, as illustrated with the token approach, can help bridge the gap.

For instructional designers and educators, the accuracy of content, both textual and visual, is paramount. AI, with all its promise, must be wielded with understanding and care. As we move forward, let’s ensure that every individual, irrespective of their abilities, is represented with the dignity and respect they deserve.

About the Author

Leo Rodman, a seasoned LMS specialist, serves as the LMS lead administrator at Impact Networking. Spearheading the launch of a cutting-edge Docebo LMS, he’s prioritized user experience and strategic deployment, working with advanced AI tools to make his whole team more efficient. Before this, at Leading Real Estate Companies of the World, Leo managed the migration of over 136,000 users and thousands of courses and learning objects, showcasing expertise in branded content and systems integration. He also has extensive experience in web design and IT system administration.
With proficiency in tools like Articulate 360 and Adobe Creative Suite, Leo stands at the intersection of technology and education.

Comments
Hi Leo - love this absolutely fascinating discussion of what AI can and does get wrong (it's not the unproblematic good that some people think it is), and some good learning for anyone wanting to use such tools. And as ever with most new ways of doing things, it's the human element (in this case, the fact that you as a user of AI do have to do some work to get the best out of generative AI) that's key.
Thank you Simon I really appreciate that! I've also written a bit about issues with skin color diversity and other forms of diversity in AI.