Scores on state tests have increased consistently and significantly in the five years since the No Child Left Behind Act became law, and there’s some evidence that gains that started in the 1990s accelerated after the law’s enactment, a new report concludes.
The authors of the report, which was set for release this week, are quick to say that the gains found in their comprehensive review of all 50 states’ test results can’t necessarily be attributed to the NCLB law. But the study is almost sure to play a role in the debate over the future of the law as Congress works to reauthorize it.
“Everything that reflects positively on the law has been used as ammunition to date” by its supporters, said Frederick M. Hess, the director of education policy studies at the American Enterprise Institute and a member of a panel that advised the report’s authors.
But, he added, “these findings should be treated very cautiously, … especially trying to link this to something as amorphous as NCLB.”
For the study, the Center on Education Policy convened a panel of five experts on testing and NCLB policies. The group collected as much test-score data as possible from the states and used several statistical methods to determine whether achievement was increasing and whether achievement gaps between minority and white students were narrowing.
The study found that test scores were on the rise in most states with three years of data. For example, 37 of the 41 states with three years of data for elementary school mathematics reported increases of at least 1 percentage point per year in the proportion of students scoring at the proficient level or above. For elementary school reading, 29 of 41 states reported such gains, which the report characterizes as “moderate to large.”
Of the 22 states with enough data for reading and mathematics in elementary, middle, and high schools, five reported both average annual gains of 1 percentage point in the proportion of students scoring at proficient or above in both subjects and higher-than-expected increases in average scores.
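The report's criterion is simple arithmetic. The sketch below, which is not taken from the report itself, shows one way such an average annual gain could be computed; the proficiency figures are hypothetical, and only the 1-percentage-point-per-year threshold reflects the report's description.

```python
# Minimal sketch of the "1 percentage point per year" criterion described above.
# The yearly figures here are hypothetical, for illustration only.
percent_proficient = {2002: 61.0, 2003: 64.5, 2004: 66.8}  # % scoring proficient or above

years = sorted(percent_proficient)
first, last = years[0], years[-1]

# Average annual gain, in percentage points, over the period with data.
avg_annual_gain = (percent_proficient[last] - percent_proficient[first]) / (last - first)

# Threshold the report characterizes as a "moderate to large" gain.
meets_threshold = avg_annual_gain >= 1.0

print(f"Average annual gain: {avg_annual_gain:.2f} percentage points")
print(f"At least 1 point per year: {meets_threshold}")
```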
[Chart: More states reported a rising share of students scoring at the proficient level or above on state tests than reported little or no progress. Note: Not all states had three years of valid data to be included in this analysis. Source: Center on Education Policy]
What’s more, of the 13 states with testing programs dating back to the late 1990s, nine saw test-score gains accelerate after the NCLB legislation became law in January 2002.
Under the law, schools test students annually in reading and math in grades 3-8 and once during high school. They must show adequate yearly progress in bringing all students—both overall and in racial and other subgroups—to the proficient level, or face a series of consequences intended to spur improvement.
Advisory Panel
The report says that the law can’t be identified as the reason for the test-score increases because researchers had no control group to isolate its impact from that of other initiatives that may have boosted achievement. Most states had already undertaken efforts to raise academic performance before the law was enacted.
“You have to be very careful. … At the same time that NCLB was taking effect, a whole slew of things were happening,” said Jack Jennings, the president of the Center on Education Policy and a former education aide to House Democrats. “We cannot draw a direct line between this increase in achievement and NCLB.”
Still, NCLB proponents are likely to use the data to bolster their argument that the law—a top domestic priority for President Bush that passed with wide, bipartisan support—has met its goal of increasing student achievement.
Mr. Bush and Secretary of Education Margaret Spellings have pointed to scores on some portions of the National Assessment of Educational Progress as signs of the law’s positive impact. But experts suggest that scores on all sections of NAEP paint a mixed picture. (“Bush Claims About NCLB Questioned,” March 14, 2007.)
The CEP report provides perhaps the most comprehensive review of student achievement in the half-decade since President Bush signed the No Child Left Behind Act, which revamped the federal Elementary and Secondary Education Act. The report may have more credibility than some other research because its advisory panel of distinguished researchers and policy analysts includes both supporters and critics of the law.
The panel members were:
• Mr. Hess, a policy analyst at the Washington-based AEI who favors innovative and nontraditional approaches to improving schools and has written favorably about many of the law’s accountability provisions.
• Eric A. Hanushek, a senior fellow at the Hoover Institution, based at Stanford University, who has advised the U.S. Department of Education on using so-called growth models in the law’s accountability system.
• Laura S. Hamilton, a senior behavioral scientist for the Rand Corp., based in Santa Monica, Calif., who has studied the impact of the law in three states.
• Robert L. Linn, a professor emeritus at the University of Colorado at Boulder, who has criticized the law’s accountability rules as unfairly identifying some schools as being “in need of improvement.”
• W. James Popham, a professor emeritus at the University of California, Los Angeles, a testing consultant, and a critic of the law’s test-based accountability.
Even though the research documented increases in test scores and a narrowing of the achievement gap, Mr. Linn said the gains weren’t steep enough for states to reach the NCLB law’s goal that all students score as proficient in reading and math by the end of the 2013-14 school year.
“Even if you look at all of the increases,” Mr. Linn said, “states would still not be hitting 100 percent by 2014.”
NAEP Debate
Critics of the federal law said that state test scores are inaccurate measures of how much students know and what they can do. Such test results are easily skewed by instructional practices that may yield higher scores without ensuring that students understand and retain the material, critics say.
“We’re not surprised to see test scores going up,” said Monty Neill, the co-director of the National Center for Fair & Open Testing, or FairTest, based in Cambridge, Mass. “Our question is: What does that mean? Is that real learning? Or is it score inflation?”
While state test scores may be on the rise, Mr. Neill said, scores on the federally sponsored NAEP are basically unchanged.
NAEP is an “independent barometer” that does “a very good job assessing higher-order thinking skills,” said Mr. Neill, whose group is a leading opponent of standardized testing.
“State tests are bad to use to measure the ability to go out in the world and apply and use knowledge,” he added. “These are the things kids actually need to succeed in college and to become skilled workers.”
Mr. Linn said that NAEP is a good alternative indicator of student achievement, but that its results should be viewed with skepticism, too.
“I think NAEP results deserve attention, but shouldn’t be the primary story,” he said.
In the report, the Center on Education Policy speculates that NAEP scores haven’t climbed the way state scores have because the national assessment’s content isn’t aligned with state curricula. What’s more, NAEP, which tests samples of students, carries none of the rewards or sanctions attached to state tests, so students have more motivation to do their best on the state exams.
“NAEP results should not be used as a ‘gold standard’ to negate or invalidate state test results,” the report says. “Instead they provide an additional point of information about achievement.”
But NAEP results should play a role in the debate over what skills students should have, said Bruce Hunter, the associate executive director for public policy at the American Association of School Administrators, in Arlington, Va.
State tests emphasize factual knowledge, while NAEP and other tests try to measure the depth of students’ knowledge and their ability to use their knowledge to solve problems and think creatively, he said.