Writing about the ongoing debates in reading instruction can sometimes feel a bit like time travel.
Take this passage, from an Education Week article:
Many other topics in education draw heated debate, but the arguments over reading instruction—the first “R”—have been perhaps the most vociferous—and the most public, often spilling over into school boards, state legislatures, and even Congress.
And in recent years, as educators have grown increasingly desperate over students’ poor performance, and frustrated over a seeming inability to change their programs, the gloves have come off. The two sides have become intractable, with advocates at both extremes accusing each other of stoking the flames with incendiary rhetoric, rather than reason.
This story was published more than 30 years ago. But I could have written these same words yesterday.
Debates over how to teach reading have been detrimental to the education field over the last century. At their core, these arguments lie along one of the field’s philosophical fault lines: Should teachers take a more traditionalist approach, focusing on explicit instruction and guided practice? Or should they follow a progressive approach that emphasizes experiences with stories?
These reading debates set off earthquakes once every few decades; in between, they lie dormant. But they’re always there, ready to shake the ground once again.
Science of reading takes center stage
Over the past few years, I’ve covered the latest iteration of these debates, the “science of reading” movement, as it has prompted state legislators to mandate changes to teacher training and instructional practice and forced much-criticized publishers to scrub references to outdated methods from their materials.
The entire time, I’ve heard from skeptics—wary veteran teachers, critical reading researchers—that this moment feels like déjà vu all over again.
They remember previous failed attempts to align classroom instruction with research evidence on how children learn to read—most notably the George W. Bush-era Reading First program, which didn’t improve students’ reading-comprehension abilities and was plagued by accusations of financial mismanagement. And they have remained convinced that, inevitably, we would return to what they saw as a less doctrinaire approach, one that gave educators the freedom to use whatever methods they thought would work best.
This time is different, in some important ways. For one, the research base on what works in reading instruction is even stronger now than it was 20 years ago. Experimental studies have proved that kids do need explicit instruction in foundational skills—and they’ve identified a host of other components essential to reading instruction, like developing kids’ spoken language abilities and world knowledge.
When I started reporting on the changes to early-reading instruction across the country, I was confused by some voices that, it seemed to me, were rejecting these underlying findings. If we have the evidence about what works best for kids, why wouldn’t we try to follow it?
In part, it’s because the partisan politics are no less messy than they were two decades ago. The debates pit conservatives and far-right groups, who favor traditionalist, “back to basics” instruction, against progressive educators, who champion teachers’ freedom to try different approaches and fear that a focus on mechanics will strip the joy from reading. Still, there are left-leaning groups that promote explicit instruction in foundational skills, too, arguing that teaching all students the building blocks of reading is an equity issue.
But beneath these partisan debates lie deeper battles: divides rooted in reading’s philosophical fault line.
A mystery of the reading wars
As I’ve written about this subject over the last four years, I’ve come to understand another underlying issue that explains why we keep having the same fights over and over again. It also sheds light on one of the central mysteries of the reading wars: why many in the education field weren’t familiar with research that directly affects their work.
The evidence base that informs the “science of reading” comes from fields that generally don’t overlap much with education research—namely, neuroscience and cognitive psychology. This research divide and its implications for practice go far beyond reading instruction. They affect everything schools do when it comes to teaching and learning. And the divide is tied up in thorny arguments about expertise, power, and who gets to lay claim to the truth.
Understanding how this bifurcation operates and the politics behind it is crucial for anyone who has a role in determining how schools move forward from here.
For the first half of the 20th century, there were two main approaches to teaching children to read. One was explicit instruction and practice in phonics: how letters represent sounds and are blended together to form words. The other was a whole-word, or “look-say,” approach, in which children memorized words as whole units. But in the 1960s and ’70s, things started to change.
Researchers in education continued to study how people learn to read, but scholars in other fields—linguistics, psychology, neuroscience—also began trying to understand how reading ability develops and how skilled reading works.
These fields use different research methods, and they have different definitions of what counts as evidence, as proof. Understandably, they have come to different conclusions.
Much of the research conducted in the education field derived its theories of how reading works from watching kids and analyzing their behavior. These kinds of observational, descriptive studies are common in education, where a lot of research takes place in classrooms.
It was during this period that cueing first emerged: the idea that children should rely on multiple sources of information, not just letters, to read words.
In 1967, reading researcher Kenneth Goodman posited that readers use three different systems of information when they try to make sense of text: syntactic cues (the structure of sentences and stories), semantic cues (the meaning of the text), and grapho-phonemic cues (letters). Attending to all these sources of information, he proposed, would help children become better readers.
Around the same time, New Zealand reading researcher Marie Clay was conducting her own observations of young readers, also finding that they use multiple clues from the text to figure out words—letters, but also context and understanding about the conventions of print.
By the 1970s, other researchers were examining reading, too, from a different vantage point. Cognitive psychologists started investigating the processes that underlie skilled reading. But instead of watching kids in classrooms, they took to the lab.
Eye-tracking studies tested whether skilled readers really did skip letters and words when reading text or whether they attended to the letters. Experimental studies, which tested different instructional strategies, often in lab settings, confirmed the effectiveness of explicit, systematic instruction in phonics and phonemic awareness. Later, brain-imaging studies would show that explicit instruction in decoding words could alter the brain functioning of struggling readers so that their neural activation matched that of skilled readers.
The great research divide
As the journalist Emily Hanford has explained in her groundbreaking reporting on the national literacy landscape, these theories about how kids learn to read, promoted by Goodman, Clay, and others, became a prescription for how to teach. With their work grounded in observational studies, these scholars became some of the field’s most-cited experts, influencing everything from preservice training to curriculum materials—even as researchers in psychology and neuroscience spent decades demonstrating that their theories were wrong.
Why hadn’t these findings made their way into classrooms? Or even into education researchers’ conversations?
In academia, divisions between departments are more than just semantic: generally, researchers in different fields don’t talk to each other much.
Education researchers and psychology researchers often publish in different journals. They attend different conferences. There is some overlap, of course, but for the most part, findings in one field don’t typically inform research in another.
In part, this is because these fields are preoccupied with different questions and use different methods to answer them.
In lab-based experimental research, the kind that tracks kids’ eye movements or monitors their brain activity, cognitive psychologists test the effect of discrete interventions in controlled conditions. They want to know: Is x better than y, even incrementally? These studies can have big implications for classroom practice, but the scope of the research usually doesn’t extend to explaining how findings can be applied in an instructional setting.
Education researchers work in less controlled conditions—namely, schools. When it comes to teaching and learning, they want to know what policies lead to better student outcomes, a question they usually answer with observational studies.
Research like this can’t make causal claims, but it examines interventions in context in a way that lab-based experiments often can’t, because it occurs within the ever-changing conditions of the classroom. As a result, its relevance to teachers’ day-to-day work is more immediate and more legible; cognitive psychologists often don’t—or can’t—offer the same concrete advice to practitioners.
This bifurcation explains some of the problem. But there’s also an ideological divide in these communities, a difference in underlying assumptions about the goals of learning and how it should be measured.
Experimental research tests the effect of changing one input on an output, or a range of outputs. Most often, in the context of reading instruction, these outputs are standardized measures of learning: words read per minute, questions answered correctly, test scores. Researchers change only one input at a time so that they can be sure which intervention causes which effects, and they use standardized outcomes so that they can compare effects across groups.
But many education researchers have long argued that reading ability can’t be meaningfully measured in this deconstructed way—that such measures aren’t an accurate reflection of what happens in the classroom.
They argue that what it means to read “well” depends on context and culture. They say there are important dimensions of the learning experience that these studies don’t measure—like whether students feel confident in their reading abilities or whether they see themselves as readers. Some question whether standardized-test scores are a reliable metric of students’ abilities at all.
The debate is also personal, implicating educators’ professional identities. And this can make people dig in even more.
Over the past few years, I’ve heard from dozens of teachers and professors of education who say that they feel like the hundreds of hours they’ve spent observing children and meticulously cataloging their insights are being devalued—that their work is being dismissed as unscientific. And they feel that their ultimate goal, to help children grow into eager and curious readers, is being supplanted by a mandate to train kids on discrete skills.
The conversations have made it clear: Different research disciplines not only have different understandings of how reading works but also fundamentally different ideas about what reading is.
Translating research to practice
That doesn’t mean there’s no empirical truth to be had here.
Decades of research clearly show that systematically teaching kids about the relationships between letters and sounds helps them become better readers.
And it’s not just experimental studies. Reporting—from Hanford, from this publication, from other outlets around the country—has documented the harm that reading instruction influenced by cueing theory has done to some children. It has also shown the effect that explicit instruction in foundational reading skills can have for kids who are struggling to learn to read. This is qualitative evidence, too.
But it’s also true that experimental findings don’t tell the whole story. They don’t even answer some of the most basic questions about classroom practice. There’s still no research-based consensus, for example, on exactly how much time teachers should devote to phonics during an English/language arts block, or on the right mix of word study, background-knowledge building, and writing that together make up a strong instructional program.
To translate research to practice, the field needs more open dialogue—between researchers from different academic traditions, and between researchers and practitioners. But we also need to address head-on the big, thorny ideological questions that underlie so many of these debates: What does “reading well” mean? What does it look like, and how do we know when it’s happening?
Opening these channels between different fields matters for getting reading right. But it matters for education as a whole, too. These divides aren’t just in reading instruction. They also run deep in math.
Over the past few years, two math education researchers—Nathan Jones at Boston University and Julie Cohen at the University of Virginia—have led a series of conversations between scholars in two different research traditions: math education, which generally favors inquiry-based instruction, and math special education, which relies more heavily on structured, explicit teaching.
What they found will feel familiar to anyone who’s followed the reading wars: a division in how the two groups thought teachers should build students’ math knowledge, and a division even in what they thought the purpose of math education should be.
Finding ways forward
There are some ideas for how the reading field—and other disciplines—can start conversations that bridge these gaps.
Before the current iteration of the reading wars, Harvard professor of education James Kim suggested a type of scholarly work called “adversarial collaboration.”
The procedure, he wrote in a 2008 article published by PDK International, “requires antagonists to collaborate on a prospective study and agree on an arbiter who imposes the rules of engagement over the entire process.” One of the goals of this process, he wrote, is to “speed up the dissemination of evidence that can potentially change the minds of skeptics.”
Another option is the “pre-mortem,” a project-management strategy used in the business world. In a 2022 paper in Reading Research Quarterly, California teacher and literacy coach Margaret Goldberg and Stanford education professor emeritus Claude Goldenberg suggest that states and districts seeking to translate reading research into the classroom apply the exercise.
A pre-mortem, they write, “is a ‘what went wrong?’ discussion of a plan that has not yet been put into action.” It gives stakeholders with different concerns and viewpoints the opportunity to predict how a plan will fail and then collaboratively design structures to avoid those failures.
It’s a way to value different types of knowledge and different ways of knowing: putting the district leaders armed with psychology studies in conversation with teachers who can say, from lived experience, where implementation pain points will be in the classroom—and what the unintended consequences of the best-laid plans could look like.
In this current chapter of the great debate, there are some who, having seen the reading wars erupt again and again, are determined to put an end to the fighting once and for all. Researchers who study multilingual learners and dyslexia advocates have collaborated on a position paper on best practices for teaching English-learners; a group of reading researchers and practitioners calling themselves the “Peaceniks” has created a primer for early-reading teachers that aims to bring clarity without “divisiveness.”
I doubt that the reading wars will ever end, not really. As long as there are philosophical debates between traditionalists and progressives, between different groups of researchers, between Republicans and Democrats, reading will always get caught in the crossfire.
But this kind of collaboration—between different groups of researchers, between researchers and educators, between the policymakers and the policy implementers—could happen more often. It’s hard, practically and emotionally. But it’s also a way forward.