A great deal is unknown about what works in learning—but translating even what is known into classroom practice in the myriad settings of American education can be frustrating for researchers and educators alike.
That’s why federal research agencies are showing increasing interest in so-called “improvement science,” which allows educators to try out new interventions quickly in a wide variety of settings while collecting data that researchers can use to develop the long-term experimental studies needed to prove effectiveness.
The hope, according to participants at a summit on the field of improvement science held here Sept. 19-20, is to involve teachers and principals in research earlier and ensure that promising new interventions will succeed when they’re taken to scale.
“Practitioners have gut feelings about [interventions] that seem to work, but they’re not sure,” said Jane Muhich, the managing director for community college program development at the Carnegie Foundation for the Advancement of Teaching and a community college mathematics instructor in Seattle. “There’s a huge amount of expert knowledge and a huge amount of research knowledge, but none of it is coming together to create anything actionable to help students succeed.”
As Congress considers ways to improve the relevance of education research in the reauthorization of the federal Education Sciences Reform Act, improvement science may offer new ways for researchers and practitioners to work together.
“We are not organized for great scale. Random acts of improvement really don’t help,” said Jim Shipley, an education consultant based in North Redington Beach, Fla., who helps districts develop improvement research networks. “We need to get districts thinking about improving systems and not buying the next program.”
Network Approach
Under the approach, large networks of teachers and principals in one or several school districts work with researchers on a particular problem—for example, boosting student attendance. Individual teachers may test and tweak potential solutions in 90-day or semester-long cycles. Issues to explore might include: Does grouping community college students in different ways boost attendance? How about using classmates to track each other’s attendance? Does one intervention work better with middle school students than high school students?
On their own, these small-scale tests may look like part of a teacher’s normal experimentation, with few broader implications. But a group of dozens or hundreds of teachers can quickly pinpoint the most promising practices for researchers to target in randomized controlled trials.
Improvement science began in the health-care field, spurred by hospital hygiene experiments supported by the National Institutes of Health. The National Science Foundation and the Institute of Education Sciences—the Education Department’s research agency—have both started new programs to use improvement science in education settings.
Susan Moore Johnson, an education professor at the Harvard Graduate School of Education, said she sees “the potential in improvement science.”
“I’ve been disappointed with randomized controlled trials because you end up with a yes or no [on an intervention’s effectiveness] but not a how or why,” Ms. Johnson said.
Understanding Processes
The Stanford, Calif.-based Carnegie Foundation for the Advancement of Teaching, which sponsored the summit, is trying this approach in its Networked Improvement Community project, which works with remedial math teachers at community colleges and, later this year, middle schools.
“It is hard to improve outcomes without understanding the processes that generate them and the interconnections that exist among these processes,” said Anthony S. Bryk, the foundation’s president.
For example, Christopher S. Hulleman, a research associate professor at the Curry School of Education at the University of Virginia in Charlottesville, is working with a network of math teachers to develop a measure of student motivation in science, technology, engineering, and math classes in secondary school and college while teachers also test and share ways to boost their students’ motivation.
One assistant math professor, Kristin Spiegelberg of Cuyahoga Community College in Cleveland, said attendance in her introductory math course rose from 65 percent to 85 percent after she created study groups that worked together regularly and contacted each other after absences. That intervention is among those in the project slated to be scaled up for formal experiments this spring.
“It was a great moment for me to realize I could improve my class as I went along, rather than waiting until the end of the class to see who passed and who failed,” she said.
The 21,000-student Iredell-Statesville school district in North Carolina has adopted a similar approach districtwide. In several “demonstration classrooms,” teachers willing to experiment work with researchers to test new teaching practices, then train other teachers to use the effective ones.
Kim Rector, an instructional coach with the district, said the process has helped the district adapt more quickly to the introduction of the Common Core State Standards.
“Just having a continuous-improvement mindset meant [the common core] has not been a problem for our teachers,” she said.
Teachers in the trial quickly analyzed the standards and developed new lesson plans.
“We were able to connect the line: Here’s how they are better than our old state standards,” she said. “We’ve been giving students new hooks on the strategies and not just telling them to try hard.”