A new study from the University of Wisconsin-Madison is among the first to examine the effectiveness of a data-driven effort to improve schooling on a large scale, and it’s good news for data advocates: Even the first steps of data-driven instruction seem to have some positive impact on school districts’ test scores.
The report focuses on 59 districts in Alabama, Arizona, Indiana, Ohio, Mississippi, Tennessee, and Pennsylvania, examining their mathematics and reading results on state standardized tests after the first year of a three-year initiative conducted by the Center for Data-Driven Reform in Education at Johns Hopkins University, in Baltimore.
The study is drawing attention as much for its design as for its results. While most randomized studies in education assign individual students, classrooms, or schools, researchers for this study randomly assigned entire districts to either experimental or control groups, allowing a controlled comparison of districtwide changes in student performance on standardized tests.
“Given how central data is to a lot of reforms that school districts in states across the country are doing, with data emerging from No Child Left Behind and data being generated by the current [federal] department of education, it’s reasonable to ask the question, do the data actually matter?” said Michael Casserly, the executive director of the Washington-based Council of the Great City Schools. This study is one of the first to show that those efforts do work on a large scale, he said.
Districts’ Scores Improve
States began the program in three successive waves, starting in 2005. In each wave, researchers gave one group of districts benchmark assessments tailored to their state examinations and trained school administrators in interpreting and using data to identify areas for instructional improvement. The second group received those services the next year and served as a control group in the first year of the study. Districts that received the assessments had greater gains on their states’ reading and math tests than districts that had not yet begun the experiment.
On average, students from the data-driven-reform districts outperformed their control-group counterparts by approximately 8 percentile points in math and 5 percentile points in reading, according to Geoffrey D. Borman, a professor of sociology and education at the University of Wisconsin-Madison and one of the study’s authors. Researchers also noted correlations between districts’ scores and their percentage of students eligible for free- or reduced-price lunch, and found that participating in the CDDRE study was “comparable to reducing school-level free- or reduced-price lunch eligibility by approximately 35 to 60 percentage points,” Mr. Borman said.
While most states are already implementing data-driven reform, research on its impact on student learning, especially at the district level, is sparse, researchers said. The CDDRE study is “important because of its size and its breadth,” said Martin Orland, the director of evaluation and policy research at WestEd, a San Francisco-based research firm. “It’s the largest study that I’m aware of that looks at the issue of data-driven decisionmaking, and it’s enough to be intriguing that there may be supportive evidence for what scholarship has hypothesized and what people are acting on,” he said.
Diverse District Mix
The districts in the study were generally low-performing, but the 549 schools they represented were geographically and demographically diverse, Mr. Orland said, which makes the results more likely to be widely relevant.
Looking at reform on a larger scale is important, said Robert Slavin, the director of the data-driven-reform center, whose efforts are documented in the report. Mr. Slavin also writes an independent blog on education research that is hosted on Education Week’s website. “People quite legitimately say, ‘So what? You can do something in a small number of schools,’ ” Mr. Slavin said. “But this is at such a large scale.”
More Than Data
At the same time, “a study of this breadth raises more questions than it answers,” said WestEd’s Mr. Orland. “What would be ideal would be if you also had some survey information. Were teachers doing things differently? What were they doing that was different? It cries out for further investigation.”
Indeed, the researchers in the new study write that while improvement in test scores could be the result of teachers’ effective use of data, other research frequently shows that teachers do not know how to use data effectively. The improvement could instead demonstrate what Mr. Borman referred to as “the testing effect”: students tend to score better on tests after having practiced for them.
Paige Kowalski, the director of state policy initiatives at the Washington-based Data Quality Campaign, which promotes data-based instruction and reform, said her organization had found that “by and large, teachers don’t yet have the skills and capacity to take results from interim assessments [like those used in this study] and change their own instruction.”
Patte Barth, the director of the National School Boards Association’s Center for Public Education, said her organization had already been supporting data-driven reform and was encouraged by the study. “What we’re thinking is a good idea is proving to be a good idea,” she said. But, she said, “now that we have tools available, we have to think about ways to help educators, school boards, and parents make use of these tools.”
Results from subsequent years of the program, during which some of the districts adopted various interventions to address areas of need, will be published in an upcoming study, said Mr. Slavin, of CDDRE. That publication will detail which interventions were used, and it will indicate that districts’ scores improved more dramatically after they began using interventions to address the problem areas revealed by the data than when they had access only to the benchmark data.
“Benchmarks are useful, but they’re only a part of the process,” Mr. Slavin said.
Which Data Matter?
The Data Quality Campaign’s Ms. Kowalski said that advocates of data-driven reform were moving to use data on attendance, school climate, and other factors beyond test scores.
“We know that the benchmark assessments are just one kind of data,” she said.
Mr. Orland said focusing on improvements in state test scores “does run the danger of equating the test with real learning.” Some of the most effective reforms use additional data like school climate surveys and attendance, he said.
But even as many schools lag in implementing such holistic reforms, Mr. Borman, a co-author of the study, said, “so many districts have been adopting quarterly benchmark assessments that [how they impact student achievement] is an important question in itself.”
Discovering that benchmark assessments have an impact gives a first glimpse at how to “move the needle forward” for student achievement, said Mr. Casserly. “The research is starting to get pretty exciting in terms of informing us what things work and why.”