In Horace’s Compromise, penned nearly four decades ago, Theodore Sizer famously described classrooms shaped by a comfortable but dysfunctional agreement: The teacher would pretend to teach, and the students would pretend to learn. The result was a pageant of schooling shorn of the substance of learning.
We can’t help but wonder if something similar characterizes the relationship between education researchers and education leaders: Researchers pretend to be interested in surfacing useful findings, and those in schools pretend to be interested in using what the researchers find.
For all the cheery talk of “evidence-based practice,” “research-practice partnerships,” and the rest, the truth is that schools have long employed practices with scant empirical grounding. Indeed, the quality of evidence frequently seems a matter of little import, with findings more often used to justify the decisions of school or system leaders than as a way to seriously evaluate them.
Sizer knew there were outlier teachers, those who rejected the compromise he sketched. Similarly, there are outlier researchers and education leaders. One of us (Goldstein) recently wrote about his experience collaborating on educational evaluations with two winners of the Nobel Prize in economics. Those scholars brought skills, savvy, and a genuine curiosity about where the truth lies. Notably, they didn’t see themselves as part of the education community. This made it relatively painless for them to embrace results that contradicted their assumptions or the expert consensus.
But we’ve found that’s the exception. Most education researchers face powerful professional incentives to till the same field of study for decades. Along the way, they cultivate relationships, funding, and influence as they ascend within their intimate subfield. This yields predictable brands and tight-knit tribalism. As a result, researchers often wind up studying questions and employing methods calculated to impress their colleagues and funders, regardless of how relevant any of that may be for educators or kids.
Most education leaders and entrepreneurs seem to have made their peace with this state of affairs. After all, few of those who have created or shepherded schools, interventions, curricula, programs, trainings, or software are all that eager to see years of their life’s work negated by an evaluation that might conclude, “Nope, that doesn’t work.” Not many will willingly subject their ego to that kind of Judgment Day—especially when it poses an existential threat to reputations and future grants.
Yet some kind of evaluation is frequently a condition of sustained funding. Consequently, what’s emerged is a cottage industry of friendly evaluators who discreetly look away when practitioners spin null results or negligible gains as wondrous news. Indeed, given that the evaluators of an intervention or school model are frequently its architects or enthusiasts, it’d be surprising if write-ups flatly concluded that it failed. (Jon Baron of the Coalition for Evidence-Based Policy used to document this dynamic, and his successors at Arnold Ventures delight, as he did, in skewering press releases that announce findings the actual results don’t support.)
We think this is a story not of bad behavior but of professional incentives and human nature. We wind up with another sort of Horace’s Compromise and another dysfunctional status quo.
We don’t know of a simple fix to the perverse incentives at work, but we know there are educational leaders and researchers who’ve tired of this happy dance and are ready to partner in pursuit of sometimes uncomfortable truths. For those individuals, we have six suggestions:
- Be clear about what you really want from an evaluation: truth or compliments.
- Make a “prenup” describing what happens if the results are disappointing. The mere act of pondering a negative result can set up a better evaluation. If you have an appetite for hard truths, gauge the risk of actually pursuing them. Talk to your board members or funders. Are they prepared to see uncomfortable findings? Will they walk away if the results disappoint, or will they actively commit to supporting your efforts to learn from the evidence, rework your model, and try again?
- Get a good sense of what you want before seeking a partner. For education leaders, are you seeking hard numbers or something more descriptive? For researchers, are you intent on gauging whether a school or program “works” or are you more interested in learning how it works and might be improved? Knowing this upfront can make it easier to find the right researcher and to ask the right questions.
- Leaders need to deliberately seek out the right researcher. Google your question and words like “evaluation” or “randomized controlled trial (RCT).” Gather the names of a few scholars who’ve studied questions like yours, and try to find a couple of folks who haven’t (remember that researchers can benefit from being outside the in-group bubble). Introductions are great, but cold emails can also work surprisingly well. As you narrow your search, peruse what they’ve published and check out their Twitter accounts. Seek out researchers who seem free from agendas and are willing to question convention.
- Researchers need to find education leaders or entrepreneurs who are serious about R&D and truth-seeking. They should look off the beaten path for those leaders who can demonstrate that they’re willing to do what it takes to collect useful, reliable data and who can point to a track record of countenancing hard truths, acknowledging what’s not working, or using data to refine their programs and practices. Finding such partners is easier when researchers cultivate new networks and are open to exploring questions that may stretch beyond their comfort zone.
- Finally, leaders should interview a prospective researcher the way they would an architect, and researchers should scrutinize potential partners the way an architect would a client. Think of it as a negotiation. As with an architect and client, it’s a discussion of ideas, constraints, and practical considerations. Leaders need to pose the questions to be tackled while understanding that researchers will bring their own queries and expertise on how to find the answers.
There’s great power in evidence. It can enable us to learn how to better support and educate students. But, as every courtroom drama teaches, evidence is not the same as truth. Twenty-first-century educational research has made notable strides in methodological sophistication, but it will take much more to trade today’s research-practice pageantry for something more valuable.