After nearly two years in development, a new federally backed research service on “what works” in education began rolling its first products off the assembly line last week.
Launched with $18.5 million in funding from the U.S. Department of Education, the newly operational What Works Clearinghouse is the department’s electronic version of a Consumer Reports for research in education.
Part of the Bush administration’s push to transform education into an evidence-based field, the clearinghouse has the job of vetting research on programs and strategies and publishing the results on a Web site where practitioners and policymakers can easily find them.
“There is no trusted source of information for what research says in education, and there’s a plethora of voices out there and curricula that is being advertised as scientifically based,” said Grover J. “Russ” Whitehurst, the director of the department’s main research arm, the Institute of Education Sciences.
“This will make it far easier to use research findings, will create new demand for research, and will set a clear quality standard for the next generation of research and evaluation studies in education,” he said.
Already on the rise, demand for research-backed educational programs stepped up with the advent of the No Child Left Behind Act. The federal law puts a heavy emphasis on “scientifically based” research in education, requiring schools that receive federal money for serving needy students to use proven programs for most aspects of their instruction.
The clearinghouse products unveiled on June 30 won’t immediately answer all of educators’ questions about which interventions are scientifically based. They are limited for now to 10 “study reviews” of specific experiments on two topics: peer-assisted learning strategies and middle school mathematics programs.
But Phoebe H. Cottingham, the commissioner of the department’s National Center for Education Evaluation and Regional Assistance, said the new reviews are the first step toward building the broader, reader-friendly reports that can more directly answer educators’ “what works” questions.
Due out in the fall, those reports will include “intervention reports” that systematically analyze all the effectiveness evidence for particular programs or practices, as well as “topic reports” that summarize the entire research base in specific areas, such as character education or adult education.
Few Studies Make Cut
In its first-round study reviews, the clearinghouse rates studies with either two checks for “meeting standards” or one check for “meeting standards with reservations.”
The reviews also summarize the studies and rate them on specific strengths or weaknesses—whether, for example, classrooms were actually implementing the intervention being tested or whether the study sample was large enough to generate meaningful results.
Studies that failed to meet clearinghouse standards are also listed on the Web site, but are not formally reviewed.
The 10 studies selected were the first of 100 the clearinghouse will publish this summer. They were culled from 18,000 citations the clearinghouse had gathered for doctoral dissertations, published studies, conference proceedings, and other reports, some dating back 20 years. Analysts screened out studies that were not relevant to the topic, those that included no student-achievement data, and those that failed to meet the clearinghouse’s methodological standards.
Practitioners and researchers who read the resulting reports at the What Works Web site, www.whatworks.ed.gov, last week said they were carefully and clearly written. Those observers also praised the site for its navigability.
Most were disappointed, though, to see that so few studies made the cut.
“School leaders are very eager to comply with the research-based mandates in No Child Left Behind,” said Terri Duggan Schwartzbeck, a policy analyst for the Arlington, Va.-based American Association of School Administrators. “But when a superintendent looks at that he or she will say, ‘Oh, none of the research meets their standards—well, that’s not really going to help me.’”
Whether the meager showing reflects a lack of solid research in the field or overly strict clearinghouse methodological standards is debatable. Following research-evaluation practices in medicine, the clearinghouse puts a premium on randomized field trials, in which subjects are randomly assigned to either control or experimental groups.
But clearinghouse developers said that they also count as valid evidence comparison studies that use carefully matched groups and “regression discontinuity designs,” which are experiments that use a cutoff point to separate comparison groups and to statistically account for differences between groups.
Case studies, surveys, studies that rely on pre- and post-test data, and descriptive kinds of reports did not meet the clearinghouse’s standards.
“It’s a very narrow conceptualization of what constitutes evidence in education,” said Catherine Emihovich, the dean of the college of education at the University of Florida in Gainesville. She fears that the emphasis on such carefully controlled settings will produce research that educators won’t see as relevant in their own messy, real-life classrooms.
Politically Delicate Task
While few studies may meet clearinghouse standards now, developers say they hope to spur more high-quality research by making their standards clear.
“If we do the job right, we can elevate or get beyond the puny state of knowledge we’re in now,” said Robert F. Boruch, the principal investigator for the project, which is being led by the Campbell Collaboration, an international research group that Mr. Boruch helps head, and the American Institutes for Research, a Washington-based think tank.
Federal officials acknowledged, however, that the clearinghouse venture might also prove politically delicate for them. Federal law prohibits the department from endorsing specific curricula or programs. Under the Clinton administration, a similar but smaller-scale effort to provide lists of “promising” and “exemplary” research-backed programs ran into heated opposition from prominent mathematicians. (“Academics Urge Riley to Reconsider Math Endorsements,” Nov. 24, 1999.)
Though the Web site emphasizes that the Education Department is not recommending the programs listed, some observers worried last week that local educators would interpret the reports differently.
“The fact that they’re out there may cause people to jump to conclusions,” said Daniel A. Laitsch, a senior policy analyst for the Association for Supervision and Curriculum Development, a national group based in Alexandria, Va.
“I’m looking at it from the point of view of somebody who is just coming to the What Works Clearinghouse and sees that there is one study on, say, the Expert Mathematician program that meets the evidence standards, and then jumps to the conclusion that this is a research-backed practice.”
Other experts said, however, that it was too soon to tell how successful the venture would be.
“It is a high-quality effort aimed at an important goal—increasing the best use of evidence in education decisionmaking,” said Gerald R. Sroufe, the government-relations director for the Washington-based American Educational Research Association, “and should be given time to fully demonstrate its merits.”