To the Editor:
In the current era of accountability, efforts to synthesize and disseminate research findings that meet high standards are more useful than ever. Educators, parents, and the public are clamoring for objective information about products and approaches schools can use that have the potential to raise student achievement. The What Works Clearinghouse was designed to respond to this need by providing easily accessible and digestible information, based on transparent standards and objective principles.
In a recent Commentary, Robert E. Slavin argues that the clearinghouse should favor large studies and relax its standards for statistical methodologies (“The What Works Clearinghouse: Time for a Fresh Start,” Dec. 19, 2007). He also recommends that the clearinghouse move further into the business of deciding what constitutes an appropriate intervention by excluding some from its reviews altogether. But adopting these suggestions would take the clearinghouse in a direction that ultimately would impinge on its mission to be a trusted and reliable source of evidence about what works.
Mr. Slavin does not make a case for why his recommendations would improve clearinghouse reviews. His proposal to omit from reviews interventions that last less than 12 weeks implies that researchers should provide information only on what they believe are valuable interventions for educators. This is inconsistent with the mission of the What Works Clearinghouse. A research review process driven by the interests of researchers and developers is potentially rife with bias and does a disservice to educators, who are capable of making their own decisions about which interventions can help their students.
Mr. Slavin calls the clearinghouse “unaccountably inconsistent.” But all studies must meet the same standards within an area, as well as the overarching clearinghouse standards that are rooted in well-accepted principles of statistical inference. Principal investigators have some flexibility within their individual review areas, but allowing flexibility to accommodate the unique features of various areas is a strength of the clearinghouse’s methods. Review areas need to accommodate the wide-ranging scale and content of studies that naturally arise when the areas under review are as diverse as preschool literacy interventions and dropout-prevention efforts, to take two examples.
The U.S. Department of Education and the new managers of the clearinghouse recognize that more needs to be done. We are adding more reviews and creating new products based on input from decisionmakers and practitioners. The clearinghouse also is taking up the challenge of providing information about the diversity of settings, local contexts, and contradictions among studies of the same interventions.
This debate about how to evaluate and synthesize research underscores the need for an institution like the What Works Clearinghouse. Individual researchers will have different views about how standards should be defined, but if each applied his or her own, educators would be overwhelmed by a plethora of standards. And the changes advocated by Mr. Slavin would likely have the effect of further elevating his own Success for All program, which may lead educators to question whether standards promoted by individuals are little more than a means for advancing other objectives.
An independent entity—one whose sole purpose is to provide unbiased and objective information about what research has found regarding effectiveness, using transparent and clear standards—can provide a richer source of evidence that will help support the wide-ranging needs of educators.
Mark Dynarski
Director
What Works Clearinghouse
Senior Fellow
Mathematica Policy Research Inc.
Princeton, N.J.