When I consider the state of education research today, I can’t help but think of Samuel L. Jackson’s classic line from Pulp Fiction: “If my answers frighten you, Vincent, then you should cease asking scary questions.” Over the past decade or two, “sophisticated” education research has done just that: stopped asking scary questions. Big-dollar, widely cited research has shied away from complicated, sometimes scary questions about how programs and policies actually work, choosing instead to ask “What works?”—a query that promises safe, sure answers.
The problem? The seemingly safe answers to “What works?” are a mirage. Decades of frustrating experience have taught us that the answers provided by boutique pilots or carefully manicured ventures tend to dissipate on the broader stage. So, researchers can document that some school improvement approaches seem to “work” . . . but $8 billion in School Improvement Grants disappoints. New teacher evaluation strategies, when carefully and strategically employed, can “work” . . . but grand state directives based on them fall short. Some charter management organizations (CMOs) seem to “work” . . . but the pursuit of “proven” models leads to stagnation and sameness in the charter sector, along with evidence that the early faith in CMOs was overstated.
The reasons for these disappointments are myriad, murky, and a little frightening. They have to do with how bureaucracies work, a tendency to overestimate technical expertise, challenges relating to both culture and contracts, and much more. The thing is that figuring all this out is complicated, exhausting, and extraordinarily frustrating.
That’s why funders, policymakers, and educational leaders have sought experts who can offer simple, definitive answers that will allow them to bypass all this frightening messiness. These leaders’ amorous glances have fallen upon econometricians equipped with slick analytic tools (impressive stuff like “quasi-experimental designs” and “regression discontinuities”) and simple outcome measures (reading scores! graduation rates!) that allow for definitive answers. The allure of all that has proven hard to resist, even when seemingly scientific answers turn out to be less-than-reliable guides to policy and practice. Meanwhile, econometricians wind up being asked to offer advice that strays far beyond their expertise, which rests on devoting enormous time and energy to collecting data, managing data systems, and wielding complex statistical tools, rather than on the vagaries of how schools and systems actually work.
One remedy to all this is to place much greater weight on the frightening question of “What’s really happening?” That includes asking where things get problematic and why educators, administrators, and “the system” may be pushing back. This is why so much of the most practically valuable research on school improvement has come not from the celebrated analyses that declare “whether X worked” but from the seminal work of scholars like Ted Sizer, Valerie Lee, Richard Elmore, Tony Bryk, and Charles Payne, which has illuminated how schools actually work and change. Yet I think most fair-minded observers would conclude that this kind of research, especially given its relatively low cost, has been an afterthought when it comes to federal or philanthropic support.
This lack of attention to what’s actually happening has meant that many once-heralded ideas worked much worse than anticipated, and that we don’t have a firm grasp of why.
This state of affairs has also created a remarkable bias against more fundamental change. Indeed, our relentless focus on “What works?” has rewarded those programs, policies, and practices designed to yield short-term bumps in test scores, while distracting attention from deeper, more complex efforts. Meanwhile, ventures like New Classrooms, which are actually working to reengineer what teachers do all day and how technology complements that, are relegated to an odd niche—since the complex business of redesigning how teachers operate inside larger schools and systems creates all kinds of complications when it comes to judging whether they “work.”
As the No Child Left Behind–Race to the Top–Common Core era winds down and a new one begins to take shape, it’s a timely moment for funders, advocates, and researchers to reflect on the state of research as well as reform. There is an opportunity to protect what has been best and most useful in education’s quantitative revolution, while reclaiming the intellectual pursuits that can and must undergird it.