What’s the Right Way to Talk About Psychology?

Frontiers in Psychology just published a damning list of terms that researchers abuse. What can journalists learn from it?

Here are some words and phrases you have probably been misusing: comprise, fulsome, foundering, begging the question.

Here are some others: comorbidity, latent construct, hierarchical stepwise regression, principal components factor analysis.

That second list comes from a review titled “Fifty Psychological and Psychiatric Terms to Avoid: a List of Inaccurate, Misleading, Misused, Ambiguous, and Logically Confused Words and Phrases,” published Monday in Frontiers in Psychology by researchers from Emory University, Sacred Heart University, Georgia State University, and SUNY–Binghamton. It’s a rich and technical list: The authors marshal a thorough lexical critique, buttressed by persuasive demonstrations of clinical imprecision, plus damning statistics about the number of peer-reviewed papers that include phrases the authors find dubious or meaningless, such as “statistically reliable” (62,000 manuscripts) or “reliable and valid” (more than 190,000 manuscripts). The article is an aggressive corrective to the loose use of language in psychology and related fields. The researchers even cite an epigram from the Confucian Analects: “If names be not correct, language is not in accordance with the truth of things.”

This list is also potentially troubling for science journalists, many of whom are circulating the Frontiers piece this week. Some seem grateful for the corrective, others a little paranoid about it. (What if you misused “empiricism” in that feature last fall?!) Some of the caveats are of immediate use: The authors, for example, make a strong case that “genetically determined” is a misleading phrase that we should expunge and replace with “genetically influenced.” It’s worth emphasizing that the semantics here aren’t merely cosmetic. “A gene for x” and “the autism epidemic,” for example, are sensationalist, oversimplified, and fundamentally incorrect characterizations. It’s fun to believe there might be a gene for liking the Stooges, but single-gene disorders rarely affect that sort of phenotypic response; the “autism epidemic,” meanwhile, became a fixture in public-health discourse only because the rate of diagnosis rose sharply as doctors learned to recognize the condition. (Also: The Frontiers article doesn’t say so, but “epidemic” generally carries overtones of communicability. No one can “catch” autism.) Other useful caveats: Oxytocin is NOT the “love molecule” (though apparently you can take oxytocin intranasally); and “statistically reliable” means nothing (“the statistical significance of a result should therefore not be confused with its likelihood of replication”).

Indeed, the charge of sensationalism is a running subtext to the authors’ terminological critique; researchers (it is hinted) would be a lot more precise with their language if they weren’t trying to make headlines. Some of these points are annoyingly literal-minded, especially where metaphor is concerned. For example, while it may be imprecise to suggest that a certain part of the brain “lights up” under a given stimulus, it’s also worth remembering that figurative language can serve practical ends (and that, especially in a mongrel tongue like English, complete avoidance of metaphor is very nearly impossible). There are also moments of hair-splitting, as in this passage dismissing the notion of “operational definitions” (this is the howler that induced conniptions among some of my colleagues):

Psychological researchers and teachers should therefore almost always steer clear of the term “operational definition.” The term “operationalization” is superior, as it avoids the implication of an ironclad definition and is largely free of the problematic logical baggage associated with its sister term.

“Operationalization” is a poor, unwieldy substitute that fails to telegraph its definition—operational or otherwise. More straightforward injunctions, such as banning the term “antidepressant,” make much more sense in a clinical setting than in science journalism. A magazine feature can include selective serotonin reuptake inhibitors in a list of “antidepressants” because readers will know what is meant: SSRIs are antidepressants—in the cultural, but not the clinical, sense.

Yet both scientists and journalists can learn from the list, which—for all its occasional puritanism—offers a bulwark against cliché and sensationalism. No one should really be writing “gold standard” or “hard-wired,” whether the topic is bioethics or pop music. The section on oxymorons is a good reminder that scientific-sounding language is not the same as scientific language. (Reporters do tend to legitimize their science writing by embracing jargon.) The best way to keep psychological discourse lucid and honest is to keep interrogating language in both the lab and the newsroom. It would be nice to see a parallel list of “50 convenient terms for communicating psychological concepts to a lay audience.” This would be immensely useful in journalism, where an efficient explanation serves best, but also in the social sciences, whose practitioners could communicate better with civilians, to mutual benefit.