Only two of the most popular school improvement models for elementary schools have “moderately strong” evidence to show that they work, according to a consumer-style guide released last week by a Washington-based research group.
The federally financed report by the American Institutes for Research rates 22 of the most widely used comprehensive-school-reform models on the strength of the research on them and other characteristics. Although the AIR produced a similar study five years ago, reviewers this time judged programs by a tougher standard, one that mirrors the federal government’s definition of what constitutes “scientifically based research” in education.
By that measure, the researchers found that none of the programs has accumulated “very strong” evidence of effectiveness. Only two programs—Direct Instruction, a model based in Eugene, Ore., and the Baltimore-based Success for All program—earned the group’s second-highest, “moderately strong” rating.
The report, “Comprehensive School Reform Quality Center Report on Elementary School Comprehensive School Reform Models,” is available online from the American Institutes for Research.
At the other end of the scale, no programs were found to negatively affect student achievement. But the researchers gave seven programs a “zero” rating, meaning that the studies attesting to their success were unreliable.
“Our purpose in providing ratings is not to pick winners and losers, but rather to clarify options for decisionmakers,” said Steve Fleischman, a managing director for the AIR who oversaw the study.
“With increasing numbers of schools not meeting adequate yearly progress under the [federal] No Child Left Behind Act and the law’s emphasis on programs grounded in scientifically based research,” he said, “we think this [report] will be an obvious place where decisionmakers turn.”
New Information
Some developers of school improvement models argued that the AIR’s rating system favors more-prescriptive, better-financed improvement models.
“We’re really an approach that takes different characteristics in different contexts,” said Lewis Cohen, the executive director of the Coalition of Essential Schools, which got a zero rating. The Oakland, Calif.-based group uses broad principles, rather than specific curricula, to guide improvement efforts at schools in its network. “It’s a lot easier to measure impact if all schools use the same kind of approach,” he added.
The idea of using packaged, schoolwide improvement models gained ground after Congress set aside money for such programs in the late 1990s. Since then, roughly 6,000 elementary, middle, and high schools have used federal funds to adopt more than 500 different reform models, according to the study.
For their analysis, which was funded by the U.S. Education Department’s office of elementary and secondary education, the AIR researchers concentrated on elementary programs used by at least 20 schools in three or more states.
Mr. Fleischman said the evaluators drew heavily on criteria developed for the department’s What Works Clearinghouse. The research organization helped set up and manage the federal clearinghouse, which also vets and publishes reviews of studies for education consumers.
But the new report goes beyond What Works’ framework to rate programs on whether they provide professional development or technical assistance, for instance, and whether they inspire more parent involvement in schools.
Meeting the Standard
Still, the report’s bread-and-butter findings are its ratings on whether programs improve student achievement.
Below the top two categories, “very strong” and “moderately strong,” the researchers placed five models in the “moderate” category. That rating means there was some notable evidence of positive effects, but the studies behind it were somewhat weaker.
Programs in the moderate category include: Accelerated Schools PLUS, of Storrs, Conn.; America’s Choice School Design, of Washington; Core Knowledge, of Charlottesville, Va.; School Renaissance, of Madison, Wis.; and the School Development Program, based in New Haven, Conn.
The reviewers said another eight programs showed “limited” evidence of effectiveness. They are: ATLAS Communities and Co-nect, both of Cambridge, Mass.; Different Ways of Knowing, of Santa Monica, Calif.; Integrated Thematic Instruction, of Covington, Wash.; the Literacy Collaborative, of Columbus, Ohio; the National Writing Project, of Berkeley, Calif.; Modern Red Schoolhouse, of Nashville, Tenn.; and the Ventures Initiative and Focus System, of New York City.
Besides the Coalition of Essential Schools, the six other programs that received a zero are: Breakthrough to Literacy, of Coralville, Iowa; Comprehensive Early Literacy Learning, of Redlands, Calif.; Community for Learning, of Philadelphia; Expeditionary Learning, of Garrison, N.Y.; First Steps, of Salem, Mass.; and Onward to Excellence II, of Portland, Ore.
Developers of programs that were rated lower on the report’s scale said that the rigorous studies that would have earned them higher ratings are rare, expensive, and difficult to carry out. Of the 800 studies the AIR reviewed, only 80 were deemed to be “very strong,” according to the report.
“But if this is going to continue to be a federal standard,” added Gregory Farrell, the president and chief executive officer of Expeditionary Learning, “we’ll have to meet it.”
Mr. Fleischman said his organization would publish a similar report on schoolwide improvement programs for middle schools and high schools in the fall of 2006.