Many studies have tested whether a promising school improvement program works in real classrooms. Far fewer have tried to figure out why.
When a program fails to increase students’ learning, for instance, was it because teachers simply didn’t implement it? Or were the instructional practices off base?
A team of researchers from the Consortium for Policy Research in Education, or CPRE, set out 13 years ago to answer such questions with a massive study that involved 115 elementary schools, 300 teachers, 800 school leaders, 7,500 students, and three brand-name models of comprehensive school reform.
Called the Study of Instructional Improvement, the project cost more than $20 million in federal and foundation money. Its aim was to get inside the “black box” of school improvement by tracking what teachers did on a daily basis, determining how those practices differed from those in a set of more typical schools, and figuring out whether the changes had an impact on academic achievement.
“It is certainly useful to know across all rigorous studies which models fared best and worst,” said Steven M. Ross, an education professor and senior research scientist at Johns Hopkins University, in Baltimore, who is not connected with the project. “But this study gets into the whys of the effects, and thus offers more insight into best practices.”
Shifting Reform Winds
While the work by CPRE eventually generated 40 published papers, the project’s capstone report came out just last month. At the time the study got under way, “comprehensive school reform”—schoolwide improvement programs created by outside developers—was the intervention du jour in schools.
More than 7,000 schools in the United States, with the help of outside contractors and an infusion of grants under the federal Comprehensive School Reform Demonstration program, put tested, off-the-shelf programs in place in the hope of improving learning.
Now, the school improvement discussion under the Obama administration is focused on common standards and assessments, turnarounds of failing schools, creation of charter-friendly environments in states, innovation, teacher pay based in part on student performance, and data-driven reform.
But Brian Rowan, the study’s director, said its lessons apply to any kind of “design-based” intervention for schools, whether that means turning around failing schools or implementing a new reading program.
“Comprehensive school reform is just one form of design-based school improvement,” said Mr. Rowan, who is a professor of education and a research professor at the University of Michigan, in Ann Arbor. “It just means you’re not thinking it up as you go along.”
His co-principal investigators on the study are University of Michigan colleagues Deborah Loewenberg Ball, the dean of the university’s education school, and David K. Cohen, a professor of education and public policy. The University of Michigan is a partner in CPRE, a research consortium of seven universities.
Mr. Rowan and his partners argue that any design-based program has two parts: the instructional design itself, and the plan for getting that design in place. The Study of Instructional Improvement tracked, from 2000 to 2004, the implementation of three well-known models that differ on both dimensions: Accelerated Schools, America’s Choice, and Success for All.
At one end of the spectrum, the Accelerated Schools model, developed at Stanford University 23 years ago, uses staff development to build a school culture organized around its vision of learning, which calls for students to “construct” their own knowledge through interactive, real-world activities. But it offers teachers no prescriptions on how to go about doing that, saying instead that teachers must devise their own strategies.
At the opposite end, the Success for All program, developed in the 1990s by Johns Hopkins University researchers Robert E. Slavin and Nancy A. Madden, uses a highly specified plan for instructional improvement and highly specified routines for teaching reading. It organizes students into cooperative-learning groups and provides teachers with a weekly lesson sequence and scripts to guide them through the 90-minute reading lessons.
America’s Choice falls somewhere in the middle. Grounded in the movement for standards-based education and focusing mostly on writing, the program gives teachers curriculum guides and instructs them in routines for teaching writing. But it also requires schools to appoint coaches and facilitators, with whom its staff works to develop core writing assignments and scales for grading them. The coaches and facilitators also work with principals and teachers in their schools to carry out the program.
Distinctive Looks
To track whether the programs were faithfully implemented, participating teachers kept daily teaching logs, an innovation crafted specifically for the project. Standardized reading and language tests were used to document learning gains for two student cohorts: one that moved from kindergarten to 2nd grade, and another that moved from 3rd to 5th grade.
Over time, the researchers found that, while teachers in the 28 schools using the Accelerated Schools model were the most likely to feel a sense of autonomy and trust in their schools, their teaching practices were not significantly different from those in the 26 comparison schools. The study’s preliminary analyses suggest that their students, likewise, did not learn any more than their control-group counterparts did.
“What was striking was the tremendous variability,” Mr. Rowan said. “It was, ‘I am innovating in some way and so is the teacher next door, so each of us is doing a different mix of the same old practices.’”
In comparison, classes in the 31 America’s Choice schools and the 29 Success for All schools developed their own distinctive looks over time. The different instructional patterns, in turn, led to different, and more successful, student-achievement patterns.
The Success for All students excelled from kindergarten to the end of 2nd grade. The learning gains at that level, in fact, were strong enough to move the average student from the 40th percentile at the start of the study to the 50th percentile 2½ years later.
The America’s Choice students outperformed all the other groups from 3rd grade to 5th grade.
“I think we know in general how to get kids to read really simple, decontextualized passages well, and that is the strong point of Success for All,” Mr. Rowan said. “This isn’t sustained as you go out. It doesn’t inoculate you or teach you to read more-complex material.”
For both programs, the study also found, the gains were greatest when teachers adhered closely to the prescribed teaching practices. “The general principles,” Mr. Rowan said, “are a high degree of specificity for what you want to do and high degrees of support for teachers to do it with fidelity.”
Officials representing the program models had mixed reactions to the findings.
“While I respect the work of the study, what it’s essentially trying to do is take three models—two that are prescribed and one that is process-oriented—and try to compare them in the same frame,” said Lisa Jaszcz, the director of the Accelerated Schools network. “We were really at a disadvantage by being the process model.”
But officials with America’s Choice and Success for All called the study a major contribution to the field’s understanding of how to scale up successful models.
“We took it to heart and said, ‘OK, in some places we need to be more scripted than we have been,’” said Judy B. Codding, the president and chief executive officer of the Washington-based America’s Choice program.
“For decades, studies have been saying that every school had to invent its own path to reform,” said Mr. Slavin, Success for All’s developer. “This is the absolute last nail in the coffin” to counter that idea, he said.