January 2022
TD Magazine

True or False: Your Test Questions Could Use Some Improvement

Thursday, December 30, 2021

How you word and structure learner assessment questions can make the difference in whether you capture reliable, meaningful data.

To this day, it is likely I am the only student in Southern Illinois University history to take the same class four times. (For those of you thinking that may not be the best statement to instill confidence that I can help you write better assessment questions, please stay with me here.)

At that time, I was a broadcasting student taking a required broadcast law class that would reveal itself to be the most challenging class in my 10 years of collegiate studies. I took the class once but dropped it due to poor performance. I took it again, earning an F instead of a withdrawal. I took the class yet again, finally achieving a D, which was considered passing.

Prior to my third attempt, the professor called me to his office and told me that he knew I knew the material, but he couldn't understand why I could not pass his tests. I did not have an answer. I knew that, even after my initial half semester of his class, students who had earned an A observed that I could apply broadcast law concepts better than they could.

It did not dawn on me that the test itself may be the problem until, in pursuing my second bachelor's degree, I enrolled in a special topics class focused on teaching and learning. In that class, I was introduced to Sharon Shrock and William Coscarelli's book Criterion-Referenced Test Development, which was foundational in building my understanding that there are right and wrong ways to assess learning.

Death by matching

Using that knowledge, and with the broadcast law professor's permission, I audited his class, which was the only way, by university policy, I would be allowed to take it a fourth time. I looked for anything that pointed to a cause other than the test-design problems Shrock and Coscarelli describe, but I found nothing. My conclusion was that the nature of the tests themselves contributed to so many students failing the class each semester.

One hundred percent of the class grade derived from three tests, each of which entailed what I colloquially call "death by matching." Matching tests can create anxiety in test takers even when they consist of small groupings of five terms in one column and five phrases in another column. Ours comprised 50 court cases in the left column, numbered 1 through 50, and 52 phrases in the right column, lettered A through ZZ. Each phrase in the right column listed outcomes of court cases that were similar to the others but ever so slightly different in nuanced ways. The professor essentially was testing us on how well we recalled the nuances as opposed to how well we understood the general concepts or, more importantly, how well we could apply the rulings in a corporate broadcast environment.

What's best for the six levels of cognitive learning

Matching tests are inadequate because they can only be effectively used to test recall. They cannot be used to test the higher Bloom's taxonomy levels of apply, analyze, evaluate, and create.

Which testing approach best measures the higher Bloom's taxonomy levels? Arguably, only essay questions offer the flexibility to test across all six levels—remember, understand, apply, analyze, evaluate, and create. Short-answer questions provide nearly the same flexibility but fall short of testing the top levels of evaluate and create. Both are open-ended styles that are easy to construct but must be graded manually.

The more essay questions there are to grade, the less reliable the scoring becomes, because results can differ due to scoring sequence, scorer fatigue, and halo effects if the essays are not somehow anonymized. Also, much like matching questions, essay questions can cause testing anxieties among many participants.

This is where the oft-maligned multiple-choice (and multiple-selection) test question can be seen in a positive light. Test creators can word multiple-choice questions in ways that effectively test the first four levels of Bloom's taxonomy, much like short-answer questions. However, unlike short-answer questions, multiple-choice questions are easy (and fast) to grade electronically or by hand while remaining the most flexible of the closed-ended question types.
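To see how little machinery electronic grading actually requires, here is a minimal Python sketch. The answer-key structure and the sample responses are hypothetical, invented for illustration rather than drawn from any particular assessment tool.

# A minimal sketch of machine-grading a multiple-choice test. The answer-key
# format and the sample responses here are hypothetical, for illustration only.
ANSWER_KEY = {1: "B", 2: "D", 3: "A", 4: "C"}  # question number -> correct choice

def score_test(responses: dict[int, str], key: dict[int, str]) -> float:
    """Return the score as a percentage of questions answered correctly."""
    correct = sum(1 for question, choice in responses.items() if key.get(question) == choice)
    return 100 * correct / len(key)

# A test taker who answers two of the four questions correctly scores 50 percent.
print(score_test({1: "B", 2: "A", 3: "A", 4: "D"}, ANSWER_KEY))  # 50.0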

So why do multiple-choice questions get a bad rap? The speed and ease of grading are offset by the extra energy and effort needed to craft questions so that test takers cannot easily guess them, so that they do not create unnecessary test anxiety, and so that they do not inadvertently measure an individual's test-taking ability rather than their knowledge.

The ability to reverse engineer test questions is something I also tested during my undergraduate studies. For two classes whose grades derived solely from multiple-choice tests, I made the unfortunate, ill-advised decision to use the information in Criterion-Referenced Test Development to make educated guesses throughout my midterm and final exams. While I did not earn an A in either class, I likewise didn't earn a D or F.

Since that time, I have only used these superpowers for good. To that end, I would like to share some of the methods crafty students often use to reverse engineer multiple-choice assessment questions and how test creators can guard against them.

Guessing game

When I present on this subject at conferences, I like to have participants take my "mercilessly unfair quiz." It features questions for which participants could not possibly know the answer, yet most manage to score 80 percent or higher. Let's look at each question and its answers in greater detail.

Mercilessly Unfair Quiz
1) How much money, with tax included, did Kevin spend on breakfast this morning?
A. $13.21
B. $13.22
C. $13.23
D. $13.24

2) How much money, with tax included, did Kevin spend on dinner last night?
A. $28.00
B. $29.00
C. $30.00
D. $31.19

3) How often does Kevin take the cat for a walk on a leash?
A. Once a day
B. The cat gets really incensed if you try to constrain and limit her free roaming in any way, so never.
C. Once a week
D. Four or five times a month

4) Which of the following are names of stars?
A. Epsilon Eridani
B. Tau Ceti
C. Wolf 359
D. Procyon
E. All of the above

5) The sun will never burn out.
A. True
B. False

Question 1 provides a list of responses that span a mere three cents. There is no way to know the answer, which leaves only one option: to guess. When I ask the question during a presentation, more than half the room will select answer C, though few can explain why they answered the way they did.

C is indeed the correct answer. That parallels the thought process we follow when we build our own questions and answers. We usually reserve A for our best distracter and would rarely put the correct answer toward the top. What, after all, would happen if a test taker read answer A and knew it was the correct answer without reading the other painstakingly created incorrect answers? (They would simply get the question right, but that is not how we often think while drafting the possible responses.)

Answer B is usually, though not always, a slightly different wording of answer A. We often are reluctant to place the correct answer as D for similar reasons why we avoid A: What if the learner reads through all the possible answers and, not knowing which to choose, selects the last in the list? D, then, often becomes a distracter—and because we often use our best distracters earlier in the list, D frequently becomes the weakest distracter. Sometimes we make it even weaker by making D a funny answer (for example, "Kevin is a dine-and-dasher so he paid nothing for breakfast"). While humorous distracters can relieve test anxiety, they become a throwaway because they no longer distract, making it easier for learners to guess the correct answer and also creating risk that learners will not take the assessment seriously.

If we are not careful and conscious of the correct answer's placement among the distracters, we tend to place the correct answer most often as C and second most often as B. Fortunately, some authoring software provides the ability to scramble the order of answers in multiple-choice questions. If your authoring software lacks that capability, consider listing answers in a logical order (either alphabetically or chronologically) to minimize that effect.
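If you build or deliver items programmatically, scrambling the answer order yourself is straightforward. The sketch below uses Question 1 from the quiz and assumes a simple item structure of my own invention rather than any particular authoring tool's format.

import random

# A minimal sketch of scrambling answer order while tracking where the correct
# answer lands. The item structure is illustrative, not any real tool's format.
def shuffle_item(stem, correct, distracters, rng=random):
    options = [correct] + distracters
    rng.shuffle(options)                          # randomize the correct answer's position
    letters = "ABCDEFGH"[:len(options)]
    key_letter = letters[options.index(correct)]  # letter that now holds the correct answer
    return stem, dict(zip(letters, options)), key_letter

stem, options, key = shuffle_item(
    "How much money, with tax included, did Kevin spend on breakfast this morning?",
    "$13.23",
    ["$13.21", "$13.22", "$13.24"],
)
print(options, "correct:", key)  # the correct answer is equally likely to be A, B, C, or D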

The standouts

Question 2 is similar to Question 1, though the answers reveal a key difference: All except one of the answers are approximations, or rounded off. The one answer that provides an exact dollar amount tends to stand out. When a test taker has no idea what the answer could possibly be, they will frequently focus on any answer that stands out, and it is that extra focus or second glance that will often cause them to select that answer as their guess. The correct answer is D.

Such differences can appear in other ways. Sometimes an inadvertent extra space will grab a test taker's attention, or it may be a typographical error or a slight mismatch in font type or size. Any of those factors will cause that extra glance, which will—rightly or wrongly—often prompt a learner to select that answer. To avoid that, always proofread for consistency, and consider using all general or all specific answers without mixing and matching.

Imbalanced responses

Question 3 has multiple issues. Most people will be drawn immediately to the longest answer. We tend to put more effort into the correct answer's wording than we put into the distracters. We also often consider assessments not as an event that comes after the formal learning process but rather as part of it. As a result, we often attempt to pack more information into the correct answers as a teachable moment. Test takers can pick up on that. Most will select answer B, which is the correct answer, solely on that principle.

However, recall what I mentioned earlier about using humorous answer choices. In this instance, the humorous choice is the correct answer rather than a distracter. Any answer choice that draws extra attention can result in a lack of balance and may ultimately reduce the validity of the assessment. Similarly, validity is negatively affected when two or more distracters, worded differently, are essentially the same answer, as seen in answer choices C (once a week) and D (four or five times a month). Even if B did not stand out as the correct answer, easily dismissing answers C and D converts this question into a 50/50 coin flip.

The key takeaways with Question 3 are to keep the answers similar in length as much as possible, ensure your answer choices are equally believable but not identical in meaning, and avoid humorous answer choices regardless of whether they are distracters or correct answers.

All of the above

Question 4 offers answer choices that many astronomy and sci-fi fans could probably answer. But there is one thing that keeps everyone else from reaching for their smartphones and searching the answer choices: "All of the above." We tend to use that response infrequently in tests, to the extent that when it appears, learners' attention is drawn directly to it. It is not uncommon to experience an assessment in which every appearance of "All of the above" is the correct answer. That, along with the observation that this question has a fifth answer choice, leads most respondents to correctly guess E.

In general, it is best to avoid using "All of the above" as an answer choice. Once a test taker identifies two of the answer choices as being correct, they will not consider the remaining answer choices, making the question ineffective. To get around that, consider using a multiple-selection question, prompting learners to select any, all, or none of the answers as correct.

In the rare instance that "All of the above" is unavoidable, consider using it for all questions within a single assessment. It is also best to avoid using "None of the above" as an answer choice, as well as confusing answers such as "Only A, B, E, and G."

Coin toss

Question 5 is technically a multiple-choice question, even though there are only two choices. The word never makes this 50/50 coin toss even easier to guess. True-false questions that use always and never are usually false because both words establish an absolute condition. Most things in life are less like a light switch and more like a dimmer switch, so a situation where always or never applies is rare. The correct answer is B.

While true-false questions are great for use as ungraded knowledge check questions, they rarely have a place in graded assessments, particularly because it is difficult to create a true-false question reaching higher than the remember level on Bloom's taxonomy. That said, there are rare instances where only two plausible answers exist—in which case it may be more effective to create a two-answer multiple-choice question than a true-false question.

Avoid the pitfalls

This article provides a few observations and recommendations that hopefully will guide you in creating assessment questions that truly measure what you want to measure. Looking back, some of the examples may appear overly simple, but they illustrate common pitfalls we often experience as test takers and, if we aren't careful, as test builders as well.


The Most Common Test Question Types

True-False

  • Easy to write and easy to score
  • Easy to guess
  • Best used to measure the remember level on Bloom's taxonomy

Matching

  • Easy to write and easy to score
  • Creates testing anxiety in some
  • Best used to measure the remember and understand levels on Bloom's taxonomy

Multiple choice/selection

  • Easy to score
  • More difficult to write
  • Can test all levels of Bloom's taxonomy except evaluate and create

Fill in the blank

  • Easy to write
  • Measures recall, not recognition
  • Can test the remember, understand, and apply levels of Bloom's taxonomy

Short answer

  • Easy to write
  • Measures recall, not recognition
  • Can test all levels of Bloom's taxonomy except evaluate and create

Essay

  • Measures recall, not recognition
  • Must be graded manually
  • Can be used for all levels of Bloom's taxonomy

Additional Test-Writing Tips and Tricks

  • Nearly all question types can create test anxiety, so variety is good; mix up the question types.
  • Highlight negative words such as no, not, and none.
  • Only use negative words when they are essential.
  • Preface multiple-choice questions with "Choose the best answer."
  • Alert test takers to multiple-selection questions (for example, "There may be more than one answer" or "Choose all the answers that apply").
  • Make instructions clear and thorough yet succinct.
  • Keep point totals evenly divisible into percentages: divisible by 10 for a 90 percent passing rate; divisible by five for an 80 percent passing rate; and divisible by four for a 75 percent passing rate.
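
As a quick illustration of that last tip, the arithmetic check below (a hypothetical helper, not part of any testing tool) shows why divisibility matters: 80 percent of a 25-point test is exactly 20 points, while 80 percent of a 23-point test is 18.4 points and forces an awkward rounding decision.

# An illustrative check that a passing percentage lands on a whole number of
# points for a given point total.
def clean_cutoff(total_points: int, passing_percent: int) -> bool:
    return (total_points * passing_percent) % 100 == 0

print(clean_cutoff(25, 80))  # True  -> 80 percent of 25 points is exactly 20 points
print(clean_cutoff(23, 80))  # False -> 80 percent of 23 points is 18.4 points
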
About the Author

Kevin Lange is a learning consultant with Capital One. Since 2005, he has worked as a facilitator, instructional designer, project and program manager, and learning strategist in academia and at Fortune 500 and Fortune Global 500 companies such as Citi, Expedia, American Express, and Sony. Lange holds bachelor’s degrees in radio-television and speech communication from Southern Illinois University, an MBA from Indiana State University, and an MEd in educational technology from the University of Florida.

6 Comments
Thank you, really informative.
Thank you for sharing your experiences! I served on the CPLP item writing team several years ago, and that was a major eye-opener! After my experience item writing, I passed a state insurance exam with no study, simply because the questions and multiple choice answers were so poorly written. Most had two distractors that were obviously incorrect, and then it was at worst a 50/50 choice. Most times, the correct answer was so obvious as to be painful. Yet, most people took three attempts to pass.
Thanks for this, Kevin. In addition to this, do you know why or how instructional designers came up with the "80% passing rate" as a standard? I was wondering if there is any research around why it's used. Thanks.
Thanks, Kevin. I tried to give you a thumbs up
Hi Teri! This is a really good question. My guess (and this is only a guess) is that we sometimes equate success with the "70% is a C," "80% is a B," "90% is an A" standard seen in many schools. I have seen each of these grade levels used to represent a passing score, depending on the organization. That said, this begs a larger question: the grade levels of A, B, and C represent more than success ("passing"); they represent levels of success. Measuring multiple levels of success falls more in line with the adaptive/personalized learning approaches more organizations seek. Dr. Patti Shank recently released her book "Write Better Multiple-Choice Questions to Assess Learning," noting the pitfalls of only having a "passing" score (representing the lowest level of skill as the cutoff) when there is benefit in measuring additional levels that represent proficiency and mastery.