We call them “digital natives.” Digitally naive might be more accurate.
Between January of 2015 and June of 2016, my colleagues and I at the Stanford History Education Group surveyed 7,804 students across 12 states. Our goal was to take the pulse of civic online reasoning: students’ ability to judge the information that affects them as citizens. What we found was a stunning and dismaying consistency. Young people’s ability to navigate the Internet can be summed up in one word: bleak.
In middle school, 82 percent of students couldn’t distinguish between an ad and a news story. Though fluent in social media, three-quarters of high school students missed the significance of the blue checkmark showing that an account is verified by Facebook. Over 30 percent thought a fake news post was more trustworthy than a verified one. Viewing a screenshot of “nuclear flowers” supposedly taken near the site of the Fukushima Daiichi disaster, four in 10 considered it “strong evidence” of environmental damage, even though there was nothing to indicate that it had been taken near the site, or even in Japan.
Since the 2016 election, a host of states have mandated courses in media literacy, including a new bill passed in California this August. Many of these new courses will rely on approaches that were state-of-the-art when we connected to the Internet using a dial-up modem. “Five Criteria for Web Evaluation,” a guide that’s still widely circulated, is based on an article published in 1998—the Mesolithic era of the Web. Such “criteria,” often presented as a checklist, tell students to ferret out signs of digital dubiousness: banner ads, misspellings, broken links, and the like. But today’s Web—dominated by astroturfing (front groups masquerading as grassroots citizen groups), search-engine optimization (the calculated gaming of search results), and sophisticated lobbyists posing as academic think tanks (complete with a roster of pay-for-play academics)—is a far more ferocious beast than its 1998 ancestor.
What if the answer isn’t more media literacy, but a different kind of media literacy? Consider what happened when we gave our tasks to fact checkers at the nation’s most prestigious publications. These professionals never stopped to consider whether a site had a .org or a .com at the end of its address; they knew that these domain designations, which once meant a lot, mean little today, as does a 501(c)(3) designation. The fact checkers also took each site’s “About” page with a grain of salt. They knew that spiffy-looking graphics and a fancy interface indicated nothing more than a well-spent budget. They knew that what you see is not what you get.
Instead of burrowing into a silo or vertical on a single webpage, as our Gen Z digital natives do, fact checkers tended to read laterally, a strategy that sent them zipping off a site to open new tabs across the horizontal axis of their screens. And their first stop was often the site we tell kids they should avoid: Wikipedia. But checkers used Wikipedia differently than the rest of us often do, skipping the main article to dive straight into the references, where more established sources can be found. They knew that the more controversial the topic, the more likely the entry was to be “protected,” through the various locks Wikipedia applies to prevent changes by anyone except high-ranking editors. Further, the fact checkers knew how to use a Wikipedia article’s “Talk” page, the tab hiding in plain sight right next to the article—a feature few students even know about, still less consult. It’s the “Talk” page where an article’s claims are established, disputed, and, when the evidence merits it, altered.
In short, fact checkers learned about a site by leaving it, relying on the broader Web to get a fix on a single one of its nodes. They understood that spending precious minutes examining an unfamiliar site—when one doesn’t even know whether the site can be trusted or who’s behind it—can be a colossal waste of time. They intelligently ignored reams of less important information to quickly locate a site’s essence. Compared to other adults whom we tested (professors from three universities, as well as undergraduates at Stanford University), fact checkers spent less time reading websites. They learned more by reading less.
Our technological creations have risen up against us. To cope with this mess, schools impose Internet filters that block pornography and other vile content. Wise move. But when filters limit students’ investigations only to pre-approved sites, students will never learn how to separate reliable information from sham. Instead of teaching kids to deal with reality, we protect them from it.
In the short term, we can do a few useful things. First, let's make sure that kids (and their teachers) possess some basic skills for evaluating digital claims. Some quick advice: When you land on an unfamiliar website, don't get taken in by official-looking logos or snazzy graphics. Open a new tab (better yet, several) and Google the group that's trying to persuade you. Second, don't click on the first result. Take a tip from fact checkers and practice click restraint: Scan the snippets (the brief descriptions accompanying each search result) and make a smart first choice.
These tips will take a bite out of the most common errors.
But when misinformation spews from the highest offices of the land, schools need to offer more than tips.
The subjects we teach in school are under assault. What should teaching history look like when kids can go online and find “evidence” that “thousands” of blacks suited up in Confederate grays to fight for their own enslavement? What should science teaching look like when anti-vaxxer sites maintain that there’s a “proven” link between autism and measles shots—despite the retraction of the study by the journal that published it, the loss of the medical license of the researcher who tampered with data to support the claim, and a British judge’s declaration that “no respectable body of opinion” supports the linkage? What should math teaching look like when statistics are routinely manipulated, and when research reports rarely share with readers the squishiness of the original data? Ushering the curriculum into the 21st century will demand that we, the adults, undertake the educational equivalent of the Manhattan Project.
It won’t come cheap. Then again, neither does democracy.
Understanding Gen Z, a collaboration between Pacific Standard and Stanford’s Center for Advanced Study in the Behavioral Sciences, investigates the historical context and social science research that help explain the next generation.
Understanding Gen Z was made possible by Stanford University’s Center for Advanced Study in the Behavioral Sciences (CASBS) and its director, Margaret Levi, who hosted the iGen Project. Further support came from the Knight Foundation.