Now that the Food and Drug Administration has approved genetically modified salmon for human consumption, a timeworn debate over the safety and effectiveness of genetically modified food has flared up once again.
Several lazy tropes drive this discourse—GMOs are “Frankenfood,” GMOs will feed the world, GMOs cause cancer—and none of them are true, which is, in part, what makes this revolving argument so frustrating to follow. But one phrase more than any other routinely gets tossed into the conversation like a grenade: anti-science.
The phrase feels good—there’s nothing more rhetorically satisfying than ejecting a dissenting view before the game even begins. But we need to stop using it to characterize those who disagree with a scientific position we support.
For one thing, there’s nothing necessarily wrong with being anti-science, if only because science is neither a) an all-encompassing explanation of everything, nor b) an inherently virtuous phenomenon. For another, such a dismissal obscures the deeper reasons for being doubtful about GMOs (and vaccines and global warming and so on), reasons that can teach us a lot about how we incorporate science into a democratic society and monitor its applications.
To condemn a person as anti-science implies that science is the intrinsically superior explanation for phenomena we encounter. But it’s not. Science cannot explain everything, much less something as basic as what it’s like for you to be you. Despite all the research neuroscientists have done mapping the brain’s circuitry, they still cannot explain how the brain embodies consciousness. (Neither can you, for that matter, although you know it exists.)
So, given the humbling inability of science to explain something as fundamental as our sense of self, nobody would say that your affirmation of your own consciousness, which lacks a scientific explanation, makes you a stone-age ignoramus. To the contrary, you are simply comfortable acknowledging that a basic truth—your identity as a human—eludes a scientific explanation. To be exclusively pro-science, in this case, would be to deny your own consciousness.
Relatedly, to call someone anti-science also suggests that science is inherently beneficial to self and society. But, again, this is not the case. Even the application of seemingly innocuous scientific knowledge can easily lead to more darkness than light, more ulterior motive than social good. Until we can predict future outcomes, there will always be those who are temperamentally inclined to see the disastrous manifestations of what’s to come. These people can be annoying, but we need them.
Even when this form of opposition is misguided, ideologically blinded, passively dishonest, or just blatantly wrong (think vaccines), it’s still not the case that the opposition is necessarily “anti-science.” It’s simply an indirect (and, again, understandable) acknowledgment that science is a social and political process as much as a scientific and rational one and, as such, is persistently vulnerable to contingencies that some critics might think they foresee more reliably than others. This is why the precautionary principle, although often abused, retains some merit.
If you agree that these points are sound reasons for burying the term “anti-science,” then we can finally go beyond a blanket dismissal of those who are opposed to certain technologies and, in a more charitable vein, explore from whence such skepticism derives. This task offers the chance to develop a less contentious and more historically and psychologically nuanced understanding of how a democratic society negotiates the incorporation of science into private enterprise and public policy.
To understand this “anti-science” skepticism, it’s worth reiterating an obvious point: People generally don’t like being told what to think. This is especially true in the West, and even more so in the United States, where freedom-loving folk prefer to arrive at their own conclusions, scientific or otherwise, on their own terms and with their own eyes. We might be anti-intellectual, but we’re quite serious when it comes to trusting our own ideas, even when those ideas are woefully misinformed.
Until relatively recently, though, they weren't woefully misinformed. America’s founding fathers politically codified our individualist philosophy at a moment in history when 98 percent of the country spent its time laboring on a farm. As boots-on-the-ground agriculturalists, free citizens (along with their slaves and servants) experienced an ongoing interaction with an environment that provided the raw data for self-styled expertise. Every farmer’s wealth of knowledge grew from his own soil. For people who once toiled under the arbitrary authority of a mercurial king, it’s impossible to capture how unbelievably satisfying this was.
But as Americans industrialized in the 19th century, and as farms commercialized and specialized, a critical shift occurred. The yeoman, under pressure to scale up and boost yields, soon found himself bowing to another form of authority: scientific expertise. As so-called “book” farmers introduced scientific methods into traditional agrarian practices, and as those practices culminated in higher yields and larger markets, professors gradually edged out plowmen as the source of agricultural expertise.
This shift was one of the most important social developments of the time. As expertise slowly migrated from inside the farmer to outside the farmer, tensions inevitably flared. They flared not so much over agricultural science per se as over the farmers’ relationship to those who defined it. What came to matter was not scientific knowledge about farming but trust in those who possessed that knowledge. The fact that expertise was often used to belittle or mislead those who tilled the soil according to tradition (rather than adapting to the newer science) hardly helped build a bond between farmers and agricultural scientists. This tension has never been resolved.
The effects of this distrust and abuse are very much visible today in debates where the “anti-science” label gets used. Placed in historical perspective, the popular opposition to things like GMOs and vaccines, much like 19th-century opposition to fertilizers and insecticides, reflects less an overt rejection of science than a distrust of experts who peddle it. Consider the sources of information that the opponents rely on. The anti-GMO crowd relies on quacks like “the food babe” and a yogi named Jeffrey Smith; the anti-vaxxers rely on websites run by mothers of children with autism; and deniers of global warming get their information from conspiracy theorists on the AM dial. Formal expertise is shunned.
What does this tell us? That the widespread opposition to otherwise well-established scientific claims—i.e., GMOs are safe, vaccines are central to public health, humans cause global warming—is more of a social failure than a rejection of science. It also tells us that scientists—many of whom are quick to level the “anti-science” charge—have some work to do. It’s not enough to prove the benefits of a technology. They also have to communicate those benefits to the public in a way that assures us they work more for the good of humanity than for their own reputations or corporate funders. They have to win us over, rather than alienate us, with their remarkable wealth of knowledge—knowledge from which the rest of us are otherwise excluded.
One of the more terrifying aspects of living in the modern world is that we lack the most basic understanding of the technologies that structure our lives. As a result, we have no choice but to trust others—those with real expertise—to make scientific choices that design the wheel we spin on. To be skeptical over what we don’t understand is not anti-science. It’s human.
The Things We Eat is a regular Pacific Standard column from James McWilliams on food, agriculture, and the American diet.