How Highlighting the Best and the Brightest Can Backfire

Research finds exposing students in a massive open online course to the best of their peers’ work lowers their grades and increases dropout rates.

Recognizing and rewarding individuals’ hard work makes sense: it gives people something concrete to strive for. But, new research suggests, encouraging comparisons with the best and the brightest can easily backfire: When students think their peers’ successes are unattainable, the comparisons leave them feeling unmotivated and disengaged.

“Leaders and organizations often expose people to selective information about their peers as a motivational tool (e.g., photos on lobby walls of exemplary employees’ smiling faces, graphs showing customers that they are less energy efficient than their neighbors),” write public policy researchers Todd Rogers and Avi Feller. But “such practices can backfire when they lead people to perceive that the level of performance of their exemplary peers is out of reach.”


It is, of course, true that peer comparisons can motivate people. Recent experiments suggest that comparing one’s electricity usage with that of others can get people to save energy, and that showing people how often their neighbors vote can increase voter turnout, to name just two examples. But those experiments involve relatively attainable goals: voting more often and cutting electricity usage are easier than improving one’s writing or mathematics ability. So what happens when the standards aren’t so easily achievable?

To find out, the researchers conducted an experiment with 5,740 students in a massive open online course. As part of the course, students had to write essays and then read and evaluate at least three other, randomly chosen students’ essays; each essay, in turn, was graded by multiple students. For each student, Rogers and Feller then compiled an “essay portfolio quality” score from the other students’ evaluations. For example, if Alice graded Bob, Carol, and Dave’s essays, Alice’s essay portfolio quality is the average of every grade Bob, Carol, and Dave received on their essays, excluding the grades Alice herself gave. The question was this: Would Alice get a lower grade, or be less likely to complete the class, if Bob, Carol, and Dave wrote better essays?
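For readers who want to see the arithmetic spelled out, here is a minimal sketch of that portfolio-quality calculation. The data layout, names, and scores are illustrative assumptions for this example, not the authors’ actual code or dataset.

```python
# Sketch of the "essay portfolio quality" measure described above.
# grades[grader][author] = the score that grader gave to that author's essay.
# The layout and numbers below are assumptions made for illustration only.
grades = {
    "Alice": {"Bob": 80, "Carol": 90, "Dave": 70},
    "Bob":   {"Alice": 85, "Carol": 88, "Dave": 72},
    "Carol": {"Alice": 78, "Bob": 82, "Dave": 75},
    "Dave":  {"Alice": 90, "Bob": 79, "Carol": 91},
}

def portfolio_quality(grader: str, grades: dict) -> float:
    """Average grade received by the essays this student graded,
    excluding the grades the student themselves assigned."""
    authors_graded = grades[grader]  # the essays this student evaluated
    received = []
    for author in authors_graded:
        for other_grader, their_grades in grades.items():
            if other_grader == grader:
                continue  # drop the grader's own scores
            if author in their_grades:
                received.append(their_grades[author])
    return sum(received) / len(received)

# Alice's portfolio quality: the average of all grades Bob, Carol, and
# Dave received from everyone except Alice.
print(round(portfolio_quality("Alice", grades), 1))
```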

In short, yes, though only if Bob, Carol, and Dave wrote very good essays. Among students who completed the course, those whose essay portfolios ranked in the top 100 scored about six percentage points worse than their peers on average. The effect on course completion was more severe: For most students, about two-thirds finished the course regardless of the quality of the essays they graded; for those assigned top-100 essay portfolios, however, the completion rate was just 45 percent.

A follow-up experiment with 361 participants showed that simply reading high-quality, rather than low-quality, essays could make people think they were less capable of high-quality work themselves. Far from motivating students, then, peer comparisons could backfire, and that has important practical consequences. “Peer assessment is a popular practice in both online and offline educational settings,” the authors write, and educators must be careful to use it wisely, perhaps by ensuring that students are exposed to average work in addition to exceptionally well-written work.

Quick Studies is an award-winning series that sheds light on new research and discoveries that change the way we look at the world.
