The Pernicious Mission Creep of Ranking Academic Journals

The use of journal rankings to rate individual papers, scientists, and even programs has upset loads of people in academia. One paper's solution: Get rid of journals.


A couple of years ago we ran a freelancer’s piece in which a non-scientist collected years of data on a natural phenomenon and then, in collaboration with a trained scientist, published the results in a scientific journal. Our article cited that fact as evidence that there was some reason to accept the validity of the non-scientist’s premise, even though it ran counter to that of more established researchers. Compounding the issue, said journal was low prestige, which matters greatly in the hierarchical world of academic science.

In this relatively short story, I added a lengthy bit of additional text about the journal’s “impact factor” in an attempt to telegraph to our scientifically literate audience that yes, we knew this journal was an imperfect vessel. That chunk stayed in until the final edit, when I pulled it out as too much of a distraction. Caveat lector!

“If you prefer the hip but shoddy science, read GlamMagz, but if you value substance over style, read the regular journals.”

Impact factors themselves are an imperfect vessel, and in a new paper published last month, a neurobiologist working in Germany suggested pulling journal rankings out of science the same way I pulled out the offending paragraph. Björn Brembs and his two British co-authors’ convincing synthesis of the growing body of work critical of rankings met with a cold shoulder from the high-impact journals where they initially submitted it—the most common criticism being that this was old news.

Grumbling about the journal system or influence-ranking schemes is not new, nor are Brembs’ own concerns about the impact factor’s shortcomings. But if his paper’s novelty is limited—“we feel we have aired many of these issues already in our pages recently,” Nature responded in its rejection letter—its value as journalism, as a compendium of data points that deserves a wider audience, is not.

Earlier critics have asked, in essence, “Do these rankings hurt my career?” Brembs is asking, “Do these rankings hurt our science?” While this is in part a defense of workaday publishers—“if you prefer the hip but shoddy science, read GlamMagz, but if you value substance over style, read the regular journals”—the authors take a big step beyond in a bid for a somewhat utopian open-access academic publishing model, such as the recently announced Episciences platform.

Deploying metrics from the existing academic literature, such as retraction rates (there are more retractions in higher-impact journals), expert ratings, methodological soundness, and the utility of what’s published, Brembs and company argue that “using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently-favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary.” That’s a jump from biopsy to decapitation.

The venue for this critique? Frontiers in Human Neuroscience, which on its homepage places a badge noting its impact factor is 2.9. That’s low. High-impact journals with names familiar to the general public—publications like Lancet, Nature, Science, Cell, The New England Journal of Medicine, and the Journal of the American Medical Association—routinely have scores in the high 30s to low 50s; the top journal overall is CA: A Cancer Journal for Clinicians, with a stunning 153. In specific disciplines like the social sciences, a top journal might score in the high teens, like Behavioral and Brain Sciences’ current 18.571.

On his own blog, Brembs explained the venue in his post headlined "Everybody Already Knows Journal Rank Is Bunk":

Now, why is all this published in a journal on human neuroscience? Well, certainly not for the psychology of confirmation bias and self-selection. We did of course submit our manuscript to the journals with the general readership. Especially, since the data in the literature were new to us and virtually every one of our colleagues that we asked.

The Impact Factor itself is a product of the Thomson Reuters Intellectual Property Division, which since 1975 has compiled the Journal Citation Reports (“The Recognized Authority for Evaluating Journals”), which in turn is part of its broader Web of Knowledge indexing project. A journal’s Impact Factor is derived by taking all the citations in a given year to the papers it carried over the previous two years and dividing that by the number of citable papers it published in those two years. (References, i.e. citations, in another journal paper matter because it’s essentially an academic saying, “Yes, there is something I value in my colleague’s work.”) In its latest edition, the Thomson Reuters citation report looked at 10,853 journals across 83 countries and 232 academic disciplines.
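For readers who like to see the arithmetic, the two-year calculation described above can be sketched in a few lines of Python. The figures below are entirely hypothetical, invented only to illustrate the ratio; they do not describe any real journal.

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year Impact Factor: citations received in year Y to papers
    the journal published in years Y-1 and Y-2, divided by the number
    of citable papers it published in those two years."""
    return citations / citable_items

# Hypothetical example: a journal whose 2011-2012 papers drew 870
# citations in 2013, out of 300 citable items published in 2011-2012.
print(round(impact_factor(870, 300), 3))  # 2.9
```

The denominator counts only “citable” items (research articles and reviews), which is one of the long-standing complaints about the metric: editorials and news pieces can attract citations to the numerator without ever appearing in the denominator.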

The measure was designed to help university libraries best spend their limited journal-purchasing budgets, with a side benefit of helping swamped academics triage their reading. But perhaps inevitably, these ready-made rankings found new utility in status-conscious academe. Now, for a journal to be rated for the first time is itself a minor-key honor, and for a journal to be suspended (for gaming the system by frivolous cross-citing among its own family of journals) is a badge of shame.

Such mission creep—structural biologist Stephen Curry terms it “a cancer that can no longer be ignored”—has a darker side. Wrote Brembs:

Thus, today, scientific careers are made and broken by the editors at high-ranking journals. As a scientist today, it is very difficult to find employment if you cannot sport publications in high-ranking journals. In the increasing competition for the coveted spots, it is starting to be difficult to find employment with only few papers in high-ranking journals: a consistent record of 'high-impact' publications is required if you want science to be able to put food on your table. Subjective impressions appear to support this intuitive notion: isn't a lot of great research published in Science and Nature while we so often find horrible work published in little-known journals? Isn't it a good thing that in times of shrinking budgets we only allow the very best scientists to continue spending taxpayer funds?

Responding to this "impact factor mania" last December, the San Francisco Declaration on Research Assessment damned the use of “journal-based metrics” as a surrogate for funding, appointment, and promotion of individuals or projects, or as shorthand for the merits of published research. The declaration’s homepage contains a slew of links to editorials—many in high-impact journals—decrying the misuse of these metrics. Some even suggest more reliable ranking systems, like altmetrics or the h-index, to better suss out the value of an individual researcher or paper. Australia’s The Conversation blog even wonders whether genuine impact (how the work improved policy and practice, or whether it had economic, social, or environmental benefits) should be part of the mix.

Those recommendations—which do have value and should be considered—are nonetheless reminiscent of Americans who want to get rid of the Internal Revenue Service; you can scrap the name, but another Hydra’s head just like the IRS will pop up. Whether it’s on the table or under it, a way will arise for all the players to know where lies prestige.

In one sense, the system may be self-correcting, albeit slowly. George Lozano, appropriately enough an evolutionary biologist, and some colleagues have been studying the falling number of top papers (based on citations) appearing in elite journals as the digital age weakens the big guys’ primacy. (This work first appeared on the open-access site arXiv.org but then migrated to a ‘real’ journal.) Lozano doesn’t predict a mass extinction of top journals, but rather a reordering of who’s on top.

Given this background, it’s understandable how a cursory reading of Brembs’ paper suggests it’s not all that new (“everybody already knows that high-ranking journals publish unreliable science,” as he puts it). But to argue that a ranking system is not merely flawed when misappropriated, but somehow fundamentally pernicious, is a little more revolutionary.

In the meantime, Brembs has made a virtue of necessity, as he wrote flippantly on his own blog: “I hope it is now clear that in order to convince readers that the conclusions we draw from the literature are reliable, we had to publish in a journal with an impact factor of 2.339—and don’t you skimp on any of those decimals!”