While interest in judging school performance based on the gains individual students make over time is high, the best way to do so is not even part of the current debate, one veteran testing expert argues.
The best approach, according to education consultant Paul E. Barton, is simple: Give two forms of the same test, one at the beginning of the school year and one at the end.
“‘Failing’ or ‘Succeeding’ Schools: How Can We Tell?” is posted by the American Federation of Teachers.
Unlike other methods, “the use of before and after tests to measure improvement has been perfected for over 50 years,” he writes in a paper prepared for the American Federation of Teachers.
Such tests, he contends, have several distinct advantages: They give teachers information on each student at the beginning of the school year; they use known technologies; they can be clearly aligned with the content standards and curriculum for the year of instruction; and the results are readily understood.
In addition, Mr. Barton notes, models that compare progress from the end of one year to the next do not take into account large differences in children’s summer experiences.
One study found that the different amounts of progress that students make during the four summers between the 2nd and 6th grades account for more than 80 percent of the achievement gap between economically disadvantaged and advantaged students.
Once before-and-after tests were in place, Mr. Barton says, policymakers still would need to decide how much growth is enough.
But that could be done, for example, by looking at how much gain is typical, how much gain occurs in the classes of particularly effective teachers, and how wide the distribution of average-gain scores is among schools.
When a standard has been set for how much students should learn during a school year, it should be applied across the board and be broken down by subgroup, adds Mr. Barton, a former director of the policy-information center at the Princeton, N.J.-based Educational Testing Service.