Launching the latest salvo in the research debate over the movement to tie major consequences to student-achievement results, the Manhattan Institute released a report last week taking aim at contentions that high-stakes test results may be inflated because of “teaching to the tests.”
Read “Testing High-Stakes Tests: Can We Believe the Results of Accountability Tests?” from the Manhattan Institute for Policy Research.
The study came in for sharp criticism, however, from some testing experts, who said the researchers’ conclusions did not fit their facts.
For their report, researchers from the New York City-based think tank analyzed test-score results for schools in Virginia and Florida and in seven school districts across the country. All the jurisdictions studied give students more than one kind of test; combined, they enroll 9 percent of all public school students in the United States.
The aim of the study was to see whether the test scores and gains from the high-stakes exams given in those jurisdictions would match up with those from the lower-stakes tests taken by students in the same schools.
The results were mixed. Schools whose students scored high on the state tests that were part of accountability programs tended to score high on independent tests, such as the Stanford Achievement Test-9th Edition or the Iowa Tests of Basic Skills, the researchers found.
“If the allegation of teaching to the test were a powerful one, then we should expect to see a disjoint between the results of schools on high-stakes and low-stakes tests, and we don’t,” said Jay P. Greene, a senior fellow at the institute and the study’s lead author. “We think that you can have a high degree of confidence in high-stakes tests telling you whether students passed a certain threshold of performance.”
How Meaningful?
Critics of the study said such correlations are beside the point.
“What’s at issue is whether the gains in scores that occur on high-stakes tests are meaningful,” said Daniel Koretz, a professor at Harvard University’s graduate school of education.
He and other researchers said it’s no surprise that in schools where students score high on one exam, they also score high on others. That can even be true, they added, when tests measure completely different subjects, such as mathematics and reading.
“Average scores will correlate highly because of the sorting of students into rich and poor schools,” said Gene V. Glass, an education professor at Arizona State University in Tempe. “And Greene’s analysis shows this—as if we needed proof.”
When the Manhattan Institute researchers looked at schools’ year-to-year gains on both kinds of tests, however, they found far fewer correlations. Schools’ score improvements on high-stakes tests were reflected on low-stakes tests only in Florida, they said.
Mr. Greene said the lack of correlation could raise a red flag about new requirements in the “No Child Left Behind” Act of 2001, the latest update of the Elementary and Secondary Education Act. The law requires states to determine whether schools are succeeding or failing based, in part, on year-to-year test-score gains.
The study is the latest in a spate of high-profile, sometimes-conflicting reports examining whether scores from high-stakes tests hold up on other kinds of exams.
The previous studies, though, have taken a more national look at the issue, comparing gains in state exams against those from the National Assessment of Educational Progress, a federal test taken by samples of students in most states. (“Researchers Debate Impact of Tests,” Feb. 5, 2003.)
In addition to Florida and Virginia, the new study looked at schools in Blue Valley, Kan.; Boston; Chicago; Columbia, Mo.; Fairfield, Ohio; Fountain-Fort Carson, Colo.; and Toledo, Ohio.