Kyle Smith has been teaching at Superior High School, perched on the Wisconsin side of Lake Superior, for the past 27 years. Last year, he asked his students to examine two websites.
Partnership for a Healthier America runs campaigns like Veggies Early & Often that promote nutrition and healthy eating to school-aged children. The International Life Sciences Institute (ILSI) also offers nutrition information but has minimized the harmful effects of tobacco and cast doubt on guidelines that tell people to consume less sugar. Health advocates characterize ILSI as “little more than a front group advancing the interests of the 400 corporate members that provide its $17 million budget,” according to The New York Times.
Nearly two-thirds of Smith’s students rated ILSI as the more credible of the two sites, he later told me over email. These students have been on the internet practically since birth. So why did they get duped?
They aren’t alone. In 2019, my research team at Stanford tested 3,446 high school students by providing an internet connection and having them solve six tasks. On one task, they watched a video on social media that allegedly showed ballot stuffing in the 2016 Democratic primaries. A few keywords in their browsers would have led to articles from Snopes and the BBC showing the video to be from the 2016 Russian elections. Only three students—less than one-tenth of 1 percent—traced the clip back to Russia.
States have begun to wake up to the threat posed by digital illiteracy. New Jersey is the latest to pass legislation requiring online literacy in the curriculum. Such mandates couldn’t come soon enough.
A few years ago, you could spot sketchy content by its telltale misspellings, malapropisms, and tortured sentences—the ham-handed attempts by foreign governments to spread disinformation. Not anymore.
Today, the tools of generative AI allow bad actors to mass produce fraudulent content in crystalline prose. NewsGuard, a company that tracks misinformation and assigns credibility ratings to news outlets, has located 510 news sites created by AI tools as of Oct. 10. Place your bets on that number mushrooming in the pre-election days to come.
How should we prepare Kyle Smith’s students—actually, how should all of us prepare—to meet this onslaught? One answer, intoned like a Greek chorus, is to “teach critical thinking.” Smith’s students, however, didn’t need to do more thinking. They needed to do less.
Landing on ILSI, students were impressed because, as they put it, the organization “shows all of the science and numbers of food equity.” They recognized its dot-org domain and its tax-exempt status, both of which, they believed, added to the legitimacy of the site. They were swayed by the reports under its science and research tab and commented favorably on the organization’s international reach, with “13 entities across the globe” that “use research from all over the world,” making it “more able to synthesize information regarding nutrition.”
Each moment students spent delving into this polished site—pressing links, reading the About page, scrolling through its Ph.D.-studded advisory board with representatives from esteemed universities—gave the organization’s PR impresarios more time to work their magic.
Imagine, however, a fundamentally different approach. Before rushing headlong into a site, students could have taken a deep breath and asked themselves a preliminary question: Do I really know what I’m looking at? Is this truly the website of a credible scientific organization?
Thinking you can tell what something is by looking at it plays into the swindler’s hand. Unless you bring extensive background knowledge to a topic, it’s easy to fall victim to crafty information manipulation. When award-winning academics judge sites outside their expertise, they, too, get taken in. No matter how thoroughly you scour ILSI’s website, you’d never learn that Mars, maker of M&M’s and Skittles, cut ties with the group because they didn’t “want to be involved in advocacy-led studies” or that in 2021, Coca-Cola abandoned ship as well.
You discover these crucial pieces of context only by getting off the site and consulting the internet, which is precisely how fact checkers vet unfamiliar sources. To gain quick context, these professionals open new tabs and use the internet to check the internet, a process called lateral reading.
It is not something students spontaneously do. But they can be taught. In a treatment-control study conducted by my research team in the Lincoln, Neb., public schools, students whose regular teachers taught them to read laterally nearly doubled their ability to make wise choices compared with peers in regular classrooms. In a Canadian study, students showed a sixfold increase in use of fact-checking techniques like lateral reading and a fivefold increase in citations of appropriate context after only seven hours of instruction. Similar results have been obtained by researchers working in Sweden, Germany, and Italy.
No one is immune to the slippery wiles plied by today’s digital rogues. It’s sheer hubris to think we’re smart enough to outsmart the web, relying on knowledge from 9th grade biology to evaluate scientific reports on virology or an introductory statistics class to assess multiple-parameter data from the North Greenland Ice Core Project. Instead of assuming we possess the tools needed to suss out a cloaked site by dissecting its prose or locating flaws in its research reports, we can leave the site and leverage the power of the internet. That act wrests control from the site’s designers and puts it back where it belongs.
In our own hands.