Once every four years, Americans gather around their television screens to partake in a national pastime, delivering brash, passionate opinions on a topic they barely understand—Olympic figure skating.
Of all the Olympic sports, figure skating is particularly attractive to the armchair judge because the scoring is more subjective than reading a measuring stick or declaring that a ball went into the net. Artistry ostensibly garners more points than a lifeless, mechanical performance.
Judging creativity, especially in high-stakes situations like the Olympics, is a contentious topic. Who is truly qualified to judge the creativity of a work of art or performance? And can you, as a casual, quadrennial ice-skating observer, ever rise to the ranks of a professional Olympic figure-skating judge?
A new research paper suggests that amateurs can, indeed, be trained to be better judges of creativity—at least when it comes to children’s paintings.
Study participants in the training group were given a brief lesson about the "subcomponents" of creativity, as determined by previous research. They then completed a practice round in which they rated the creativity of paintings from one (not creative at all) to seven (very creative). Afterwards, the participants were told each painting's "actual" creativity rating, as provided by a panel of expert judges.
The control group, on the other hand, performed an unrelated task with the set of paintings. These subjects did not learn about the components of creativity.
The results showed that participants in the training group were more likely to excel at judging the creativity of children’s paintings in two distinct ways.
First, those who received training were more likely than the control group to give creativity ratings similar to those of the panel of expert judges.
Additionally, when the training group returned to the lab four weeks later to rate the exact same paintings, its members proved more reliable, giving the paintings the same ratings they had in the first trial. The control group's ratings were less consistent.
Interestingly enough, while the control group participants' ratings did not match the experts', they did tend to agree with one another. In fact, they agreed with each other just as much as the experts and the trained participants agreed among themselves. The authors write, "Non-trained judges seemed to agree on something else than the real creativity of the drawings."
One contributing factor, they hypothesize, could be that the pool of participants was relatively homogeneous: mostly young female university students.
The researchers say there's no way to know what the untrained group was measuring. They emphasize that agreement with expert opinion should be the baseline for true creativity; after all, they research creativity for a living.
Still, four years from now when you’re loudly debating the lack of spark in Ashley Wagner’s triple lutz—fifth beer of the night in hand—it’s good to know that while your opinion might not vibe with the experts, it’ll likely be the same as that of your barroom compatriots.