In “Straight Talk with Rick and Jal,” Harvard University’s Jal Mehta and I examine some of the reforms and enthusiasms that permeate education. In a field full of buzzwords and jargon, our goal is simple: Tell the truth, in plain English, about what’s being proposed and what it might mean for students, teachers, and parents. We may be wrong and will frequently disagree, but we’ll try to be candid and ensure that you don’t need a Ph.D. in eduspeak to understand us.
Today’s topic is “evidence-based practice.”
—Rick
Rick: There’s a lot of enthusiasm for “evidence-based practice.” I get it. I’d much rather schools employ evidence-based practices than “evidence-free” or “demonstrably ineffective” ones. But the question of just what an evidence-based practice is has loomed large as districts spend vast sums of pandemic aid on “evidence-based” interventions intended to combat learning loss or promote social and emotional well-being. After all, the evidence behind evidence-based practice can be surprisingly squishy. Even medical researchers, with their deep pockets and fancy lab equipment, change their minds with distressing regularity on things like exercise, alcohol consumption, or diet. The fact that a study concludes X doesn’t mean that another study won’t conclude Y a few years later.
In 2015, researchers attempted to replicate 97 psychology studies with statistically significant results and found that more than a third couldn’t be duplicated. When independent teams reran those studies with new participants, the original results disappeared. Moreover, even when something “works,” replication can be a huge challenge. Imagine that a vaccine is shown to “work” when patients got two 100-milligram doses exactly 28 days apart. If that’s the deal, that’s what the evidence supports. If those exact instructions aren’t followed, the evidence becomes unreliable. Few educational interventions are understood or implemented that precisely; they’re usually more like public-health measures such as masking or social distancing, where the evidence tends to be more suggestive than definitive.
Now, I don’t mean to come off as “anti-evidence.” I think the research behind the science of reading is extensive, robust, and compelling, and it should be treated that way. Indeed, the thing I most appreciated about Russ Whitehurst’s tempestuous tenure as founding director of the Institute of Education Sciences was his fight to elevate rigorous evidence among researchers and practitioners. While Russ’ efforts infuriated many who like clothing their favored practices in the garb of evidence, I thought the push was overdue and important.
So, I guess I embrace the principle of evidence-based practice, but I’m not sure we always know what that even means. How about you?
Jal: I like the idea of evidence-informed practice. That framing centers both the notion that there is evidence educators should take into consideration and the reality that much about any given situation is context-specific and depends on the expertise of the local practitioner.
Research on teaching suggests that teachers make at least 1,000 decisions a day, and research on “street-level bureaucrats”—teachers, cops, social workers—similarly finds that enacting a practice means making lots of discretionary decisions, often with insufficient resources and multiple conflicting imperatives. Given that, the idea that there is “evidence” that could guide each and every one of those decisions is just wildly unrealistic. There is also the fact that there are some pretty fundamental value disagreements in education that can’t be reconciled through empirical evidence.
One touchstone I find useful here is Dewey’s view that practitioners’ knowing the science of a particular area expands rather than contracts their range of action. People think the equation is something like “science determines action,” whereas it is more like “science helps you perceive more aspects of the situation, which enables better and more sophisticated contextual judgments.” For example, if you know that kids who can’t read frequently cover that up with compensatory strategies, you might be more discerning about who needs help with reading. This view centers the idea that teaching is complex professional work that can be informed by evidence, not dictated by it.
Rick: This is all really compelling. I especially like the “1,000 decisions” framing. I’ve always been struck that seasoned teachers talk about their successful handling of a dicey moment in ways that remind me of athletes discussing a big play: “It all happened quickly; I saw the opportunity and I reacted.” In both cases, accumulated experience is being instantly, fluidly applied. It’s all informed to some degree by research and training (or practice and film study), but the actions are instinctive.
Most of the day-to-day work of teaching is like that, I think. It’s like a canvas tarp stretched over more formal scaffolding of scope, sequence, and assessment. The evidence can be built into the scaffolding but usually only indirectly shapes the tarp. The “evidence-based” discussion can be so frustrating, as you note, because there’s good evidence about parts of that underlying structure (like how to build phonemic awareness) but much less that reliably speaks to the bulk of what teachers do each day.
In fact, much of what gets touted as “evidence-based” practice just isn’t. Rigorous research can and should eventually influence practice, but only after a long, gradual accumulation of evidence compiled by many researchers employing various methods. That’s not the norm in education research. Indeed, when education advocates assert that a practice is “evidence-based,” the claims are frequently based on a handful of studies conducted by a small coterie of researchers. At other times, they cite research with seeming disregard for its validity. And the What Works Clearinghouse hasn’t alleviated these concerns; if anything, it seems to me that inconsistent standards have allowed dubious findings to be stamped as “evidence-based.”
Jal: Well, this very much gets to the policy conversation and when evidence is often used as a weapon or a cudgel rather than as a way of investigating what might work. The “science of reading” seems to have been boiled down to “phonics works” by conservatives who are angry, perhaps justifiably so, that progressives and ed. schools have been overly resistant to incorporating phonics into their instruction. But as a matter of research, there is a lot more to reading than phonics—including vocabulary, comprehension, fluency, and so forth.
And then, if we were to zoom out further, we want kids to read, yes, but we also want them to develop a love of reading, which would draw on a whole other body of literature (and practical knowledge) about intrinsic and extrinsic motivation. So, even in an area where more science might be helpful, there is still the question of which science we are centering.
Despite the obvious appeal of drawing on evidence (climate change, cough cough), I think evidentiary claims are too frequently used to close conversations rather than open them. In my experience, the best educators and leaders see lots of complexity, consider context, and artfully weave together different approaches to solve particular problems. Ironically, the people who are loudest about “evidence-based practices” tend to be ideologues with a few preferred solutions that they think can address every problem. Evidence should help guide our work, but it cannot, and should not, determine it.