ATD Blog

Why Subtitles Often Hinder Learning


Wed Feb 28 2024


If you’ve ever heard several excited people simultaneously shout out responses to a question, then you know that human beings can meaningfully comprehend only one person speaking at a time. The reason for this is a particular neurological bottleneck.

In order to understand oral speech, the brain relies on the Wernicke/Broca Network: a small chain of cells processing the meaning of auditory words. Unfortunately, the brain only has one of these networks. This means we can only funnel one voice through this network at a time and comprehend only one speaker at a time: a neurological bottleneck.


Surprisingly, when we silently read, the Wernicke/Broca Network activates to the same extent as when we listen to someone speak. This means our brain processes the silent reading voice in the same manner it does a speaking voice. Accordingly, just as human beings can’t listen to two people speaking at once, neither can we read text while simultaneously listening to someone speak.

This is the basis for the “redundancy effect,” “cognitive load,” and other theories; research in these traditions has long demonstrated that learning and memory decrease when students are presented with simultaneous text and speech.

This issue is highly relevant to on-screen captioning. When captions are present during a video narration, people tend to understand and remember less than people who watch the same video without captions. Even when captioning is identical to spoken narration, the bottleneck is activated.

With that said, there are several circumstances when combined captions and narration will not clash and can improve learning.

The first concerns people learning a new language. For the bottleneck to activate, both reading and listening comprehension skills must be fluent. When individuals are new to a language and not fluent in both (or either), captions can help them make better sense of narration they might otherwise miss.


The second concerns degraded or hard-to-understand speech. In some documentaries and video lessons, the audio quality is incredibly poor. This means viewers must expend a lot of cognitive energy simply deciphering the words spoken; this is cognitive energy not spent on deep comprehension or thought. In these instances, captions can ease the decoding of narration and boost learning.

The third concerns heavy accents. When a narrator or teacher has a heavy accent, this (again) forces viewers to expend much cognitive energy simply deciphering speech. Again, in this case, captions can ease decoding and boost memory and transfer.

In the end, once we recognize the underlying mechanism driving many cognitive or learning theories, many seemingly discrepant academic theories work themselves out. Very few research studies are at odds—they simply tap into different aspects of the same basic mechanisms.

For a deeper dive into the issue of transfer, join me at ATD24 International Conference & Expo for the session The Transfer Dilemma: Applying Skills Across Contexts.

Copyright © 2024 ATD
