ATD Blog
Tue Mar 26 2013
Can the number of “experience points” a student earns in a “gamified” learning experience be used to predict their achievement in final exams? This was the question a recent research project conducted at the University of Warwick in the United Kingdom set out to answer.
Does More Experience Equal Exam Achievement?
For the purposes of the experiment, we used an executive-level course at the Business School as our benchmark. Students would access a gamified learning platform, in which viewing, commenting, and contributing back to the platform would help them earn experience points and permissions to progress (or “level up”) to the next stage. We could then compare these results with the final assignment grades they achieved, which were completed offline and double-blind marked by accredited examiners.
The results confirmed what we believed might be true: students who earned more experience points went on to get higher marks in the final assignments. The relationship between experience points and exam results, however, wasn’t entirely straightforward. Earning the most experience points didn’t guarantee you the top grade, but it was a fair indicator that you would pass. Interestingly enough, the most consistent result was seen in those students who didn’t play the game. This small group of students who failed to engage with the platform went on to struggle with the final assignment.
How Does Contribution Quality Relate to Performance?
Experience points gave us a general pointer as to performance, but the relationship wasn’t always clear. We needed to investigate the quality of students’ contributions further to learn more about the effect of gamification.
We started a new experiment to account for quality. We used the same audience, doing the same course as our benchmark. For this analysis we used a model called Cognitive Presence. Developed by Randy Garrison in 2001, Cognitive Presence is part of what Garrison calls the Community of Inquiry framework, and it gives us a tool by which we can measure the quality of a contribution to an online learning environment.
The model runs from level 1 to level 4, with level 4 being the highest level of Cognitive Presence, signifying that the student has reached a point of critical thought that is demonstrative of higher-order learning. Not all contributions make the scale. For example, a simple “nice article” doesn’t add much to the experience from a cognitive point of view.
For this part of the research, we split the class into two separate experiences. For one group, gamification encouraged participation; for the other group gamification demanded participation. For the second group to progress, a student would have to contribute and comment to earn enough experience points to “level-up.” At intervals throughout each experience, we asked students to respond critically to discussion questions. It was these answers that we put through the Cognitive Presence content analysis.
The results were compelling. In the first group, where participation was encouraged, students contributed 734 times in class discussion and submitted 182 responses to discussion questions. Of these answers, 53 (29 percent) reached level 4—the highest level of critical thought. In the second group, where participation was mandatory, students contributed 2239 times in class discussion, and submitted 178 responses to discussion questions. Of these answers, zero reached the highest level of critical thought. Not one. Only three answers made it to level 3.
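The headline figures above can be sanity-checked with a few lines of arithmetic. This is a quick sketch, not part of the original study; the numbers come straight from the paragraph above, and the percentage and ratio are derived from them.

```python
# Participation figures reported in the article.
encouraged = {"contributions": 734, "responses": 182, "level4": 53}
mandatory = {"contributions": 2239, "responses": 178, "level4": 0}

# Share of discussion-question answers reaching level 4
# (the highest level of critical thought) in the encouraged group.
level4_share = encouraged["level4"] / encouraged["responses"]
print(f"Encouraged group: {level4_share:.0%} of answers reached level 4")
# → Encouraged group: 29% of answers reached level 4

# How much more discussion did mandatory participation generate?
ratio = mandatory["contributions"] / encouraged["contributions"]
print(f"Mandatory group contributed {ratio:.1f}x as much discussion")
# → Mandatory group contributed 3.1x as much discussion
```

The check confirms the article’s figures: roughly 29 percent of the encouraged group’s answers reached level 4, while the mandatory group produced about three times the volume of discussion with none of its answers reaching that level.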
What Does All This Mean?
Our “hard” gamification generated a large amount of discussion; in fact, three times as much as when our gamification had been less demanding. But the quality of that discussion was severely lacking; the setup seemed to hinder critical thought rather than enhance it. Students anecdotally reported being exhausted by the whole experience. They played the game, but frankly, it was too much.
The irony is that if we’d simply measured the participation figures from our experiment, we would have come to the conclusion that our gamification was having a huge positive impact. We did triple the amount of contributions, after all!
But as the saying goes, the devil is in the details. Gamification can be good for learning, but you have to get the game setup right. When the gamification aspect is overly demanding, and when progress is contingent on participating in everything, quality tends to drop severely.
These findings tally with previous research into the effect of intrinsic and extrinsic motivators on learning activities. Bottom line: when a user autonomously chooses to pursue an objective, an extrinsic motivator can increase engagement; when the goal is forced upon the user and progress is contingent on participation, motivation often suffers.
Additional information about this research can be found in a forthcoming article in the April 2013 issue of Inside Learning Technologies & Skills magazine or in the peer-reviewed paper, “Gamification as a Tool for Increasing the Depth of Student Understanding Using a Collaborative E-Learning Environment,” included in the April special edition of the International Journal of Continuing Engineering Education and Life-Long Learning.