July 2018
Issue Map
TD Magazine

The Learning Myths That Plague Us

Myths, superstitions, and misconceptions are eating unnecessary time and money.

Organizations all too frequently follow approaches that are unjustified or even contrary to their best interests. Yet we see continuing investment in these misguided exercises, which runs counter to our responsibilities as practitioners.

As an industry, should we be concerned with myths about learning, superstitions about design, and misconceptions around common terms? We seem to be progressing, so is this a concern? Surely, they are just a relatively small problem, right?

The short answer is yes, such beliefs are problematic. What's more challenging is dispelling them, yet it's critical that we do. What are these beliefs, why do they persist, and what can we do about them? First, let's look at the problem.

The problem

The evidence is strong that these beliefs are among us. Researchers exploring the prevalence of such beliefs find robust evidence of misconceptions permeating practice. One study in the United Kingdom found that 76 percent of teachers were using learning styles in their instruction. Another study in the United States found that 93 percent of the public believes in learning styles, as do 76 percent of teachers. More surprising, and worrying, is that 78 percent of those with some neuroscience education believe in learning styles.

Other persistent myths include left-right brain dominance (believed by 64 percent of the public, 49 percent of teachers, and fortunately only 32 percent of the neuroscience-educated) and the notion that humans use only 10 percent of their brains (36 percent of the public, 33 percent of teachers, and 14 percent of those with neuroscience education). Those numbers are still too high.

There are several problems with this persistence of beliefs. For one, if we design learning to accommodate these beliefs, we are likely wasting money. To accommodate learning styles, for example, we'd generate differentiated content unnecessarily. A second problem is that designing around these beliefs may not just be wasteful; it could be actively harmful. We could be prolonging learning, and also hindering or even preventing the very outcomes we intend. We need to get smarter.

Types of beliefs

As suggested, some of the beliefs we have are myths, some are superstitions, and some are misconceptions. These differ in their nature, but all can be problematic.

Myths, in this case, are beliefs that have been proved wrong. For instance, research on learning styles indicates several problems: first, we can't reliably identify learning styles, and second, there's no evidence that accommodating learning styles in design makes a difference. Other such myths include that individuals differ by generation, that men and women learn differently, and that there are digital natives. The list goes on.

A second category is that of design practices that are contrary to what we know about learning. They're not specifically disproved so much as they're simply known to run counter to empirical outcomes. These beliefs manifest in tools and work products. In this case, I am talking about practices such as believing that clicking is a sufficient driver of engagement or that presenting knowledge will lead to its acquisition. While these beliefs aren't completely wrong, they're simplistic. And we can do better.

The last category is misconceptions, the ideas that some people strongly support and others dismiss. One problem is that the ideas are not adequately laid out, so others can interpret them in various ways. In other cases, the ideas are useful in some situations but not in others. The problem here is that they can lead to practices that are contrary to good outcomes and valuable approaches. Examples include the term microlearning, the idea of problem-based learning, and the Kirkpatrick model of evaluation. Each of these has adherents and real results—and also misinterpretations and problematic interpretations.

What we need is to get to the bottom of why they're infectious, find alternative approaches, and finally inoculate ourselves with learning science.

Beliefs and practices

The beliefs have distinct roots, but several commonalities guide their emergence. For one, they tap into our own experiences. Learning styles, for instance, reflect our experience that learners differ. They do, but not systematically and not in any way that we can address. For another, there are imperfect inferences from data. For instance, the belief in left- versus right-brained thinkers came from early research on split-brain patients, but our knowledge has expanded since then. Other reasons include heavy promotion and a lack of contrary data.

So, one source of problems is people misinterpreting the results of others. For example, the attention-span myth arose from research on webpage behavior, which is quite different from trying to learn something. Other times, folks will extend particular data beyond its applicability.

In short, we believe myths because they appeal to simplifications of the world that appear to make our lives easier or more comprehensible. Unfortunately, the unscrupulous will prey upon those beliefs, and the naïve will be victimized. The ultimate result is misguided investment. We should be wary.


A core problem is focus. It's too infrequent that we see sufficient attention on learning design (or learning engineering). If we are further distracted by the latest shiny object or slogan that promises to meet all our needs, we may well layer unworthy hype on top of bad design. Gold-plated bad design is still bad design.

What can we do? We need to have a bit of a background in science methods in general and then learning science in particular. We need to be able to comprehend the methods and the theoretical background. Then we can evaluate claims and the associated causal stories.


A learning and science background

Properly, science advances through systematic methods. Sometimes we run purely exploratory studies to see what we find. More commonly, we create explanations for what's been observed and then conduct targeted experiments to validate or undermine those stories.

There are standards for what constitutes valid experimentation. Studies need to be designed to answer the questions posed. Statistics are used to assess the likelihood that the results arose by random chance rather than reflecting a reliable effect. The method must have sufficient power to detect the difference, either through a sufficient number of subjects (in the case of human studies) or a strong analytical method. The results need to be appropriately qualified as to how broadly they can be generalized, based on the diversity of the study's subjects. And, finally, the methods need to be described in enough detail to be replicable. Results really aren't considered fully validated until someone else has obtained similar ones.
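The role of statistical power can be made concrete with a small simulation. The sketch below (mine, not from the article; the function names and numbers are illustrative assumptions) compares two groups that differ by a real but modest effect, and estimates how often a study of a given size would detect it:

```python
import random
import statistics

def t_stat(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def estimated_power(n, effect=0.5, trials=200, rng=random.Random(42)):
    """Fraction of simulated studies whose |t| clears ~1.96
    (roughly p < .05 for large samples)."""
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0, 1) for _ in range(n)]
        treatment = [rng.gauss(effect, 1) for _ in range(n)]
        if abs(t_stat(treatment, control)) > 1.96:
            hits += 1
    return hits / trials

# A genuine half-standard-deviation effect is usually missed with
# 10 subjects per group, and usually found with 100 per group.
print(estimated_power(10))   # low power: most studies miss the effect
print(estimated_power(100))  # high power: most studies detect it
```

The point of the sketch: an underpowered study that "finds nothing" says little about whether the effect exists, which is exactly why sample size belongs on the checklist when evaluating a claim.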

Research in learning science has established some robust results. At the neural level, meaning is distributed in patterns of activations across neurons, so different patterns of activation represent different ideas, events, actions, etc. Thus, learning is about activating patterns in conjunction and strengthening the associations between them. At a higher level, learning is a cycle of action and reflection. We do something, observe the outcomes, and reflect on what this means (and, importantly, what we should do differently).

Instruction, for the purposes of training and development, is designed action and guided reflection. Critically, learning doesn't happen by information presentation. Instead, learners need repeated, spaced, and varied practice, applying the knowledge to solve problems like the ones they'll need to be able to accomplish after the learning experience. It also needs to be the right practice, addressing current gaps with targeted feedback.

Instruction varies by the nature of what's being learned. We can also adjust the learning experience based on how the learner is doing: We can simplify it if learners are struggling or increase the complexity if they're systematically succeeding. What doesn't matter are factors that distinguish between learners: gender, age, or learning preferences.

To put it another way, the basis upon which we're designing learning is for the learning outcome, not the learner. Yes, we need to understand the audience but then develop the best design, not different designs for different learners. This specifically includes sufficient reactivation—reconceptualization, recontextualization, and reapplication (read: new models, examples, and most importantly practice)—to support retention until needed and all appropriate (and no inappropriate) transfer situations.

Becoming an aware consumer

Going forward, there are some steps you should take. In When Can You Trust the Experts? Daniel Willingham suggests four:

  • Strip and flip it (cut it down to the core and test by reversing the claim).
  • Trace it (check out the legitimacy of the claimant).
  • Analyze it (determine whether the research passes credible standards).
  • Ask, "Should I do it?" (determine whether to use it).

It's useful to unpack these further.

For one, being clear on the claim is important. What should you be doing differently, and, if you do, what will occur? There should be clear implications of what's being suggested. Are there examples to suggest the outcomes? I'd add: What's the basis for the claim? Is there sound theory behind it? For instance, Jungian psychology isn't a robust basis; it's just one man's intuition.

In addition, you should trace back and investigate the claim's source. What are her qualifications? What sources is she citing, and is she citing them accurately? (The latter, in particular, is a common source of problems.) Many such claims trace back to an inappropriate inference or a vested interest.

It helps to look at the study itself. Is the methodology sufficiently rigorous? Did researchers use representative subjects, have sufficient numbers of them, and eliminate sources of bias and noise? Did they run so many tests that they were bound to get one seemingly significant result by chance? Are there simpler rival explanations for the results? These are all questions to ask if you are going to drill down.

Of course, the easier path is to find and use time-tested methods and experts. Some folks have established a reputation for quality. Similarly, there are well-regarded resources that provide valuable guidance.

Rely on sound research and theories

As learning professionals, it is incumbent on us, out of responsibility to the learners in our care, to use only evidence-based methods. If we're following pseudoscience, we have no right to complain when our budget is cut. If we can't justify what we're doing with sound theory, we deserve to be looked at askance by our executives.

There's a time and a place to be experimental, but it's not where we already have established principles. The people who look to your expertise deserve to have you on top of what's known in your field and applying it. We can do it, and we should.

About the Author
Clark N. Quinn is the executive director of Quinnovation. He provides strategic performance technology solutions to Fortune 500, education, government, and not-for-profit organizations. He earned a PhD in applied cognitive science from the University of California, San Diego, and has led the successful design of mobile, performance support, serious games, online learning, and adaptive learning solutions. Clark is an internationally known speaker and author; his most recent title is Revolutionize Learning & Development: Performance and Information Strategy for the Information Age. He has held executive positions at Knowledge Universe Interactive Studio, Open Net, and Access CMC, and academic positions at the University of New South Wales, the University of Pittsburgh's Learning Research and Development Center, and San Diego State University's Center for Research in Mathematics and Science Education. He blogs and tweets as @quinnovator.
Great topic, but given the focus on relying on sound data and research it seems ironic that there are no citations or references or links to research that supports the article.
I would be interested to know if the author thinks that qualitative research is as valid as quantitative research, or less so.
I'm so glad someone with credibility is challenging the fads in L&D rather than trying to sell another one. I can very much relate to the author's claims that you have to take new ideas with skepticism and compare them with what real science has shown to be fact. I'm not advocating every L&D professional be a neuroscience expert, but becoming familiar with learning research periodicals and reading them regularly seems a large step in the right direction.