What’s the gap in performance between two runners at the end of a marathon? The answer can be expressed in minutes or, in a close race, fractions of a second: the difference in their times at the finish line.
But how do you describe the gap between groups of runners, such as men and women? Do you look at the differences in their average finishing times, or focus on the slowest or the fastest runners in each group? That’s a more complicated problem, and one similar to determining whether states are closing achievement gaps between groups of students, such as those of different races or family-income levels.
A report prepared by the board that governs the National Assessment of Educational Progress cautions that measuring an achievement gap does not come down to a single statistic. Comparing just the average scores for two groups of students, or the percentages scoring above the “basic” level on state or national tests, could miss important changes, the report warns, and misrepresent what’s happening.
For example, while the percentage of students who meet or exceed a standard may be unchanged from one year to the next, students’ scores below the standard may have risen substantially.
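To see how that can happen, consider a small hypothetical: two years of scores in which exactly the same share of students clears the cut, yet the students below it score markedly higher the second year. The numbers and the cut score of 250 below are invented purely for illustration.

```python
# Two invented years of scores; the cut score of 250 is arbitrary.
year1 = [180, 200, 220, 240, 260, 280, 300]
year2 = [210, 230, 240, 245, 260, 280, 300]  # low scorers rose

CUT = 250  # stand-in for the "meets standard" threshold

for label, scores in [("Year 1", year1), ("Year 2", year2)]:
    pct_at_or_above = 100 * sum(s >= CUT for s in scores) / len(scores)
    below = [s for s in scores if s < CUT]
    print(f"{label}: {pct_at_or_above:.0f}% at or above {CUT}, "
          f"mean score below the cut = {sum(below) / len(below):.0f}")
```

Both years print 43 percent at or above the cut, but the mean score of the students below it rises from 210 to about 231: real improvement that the single pass rate would hide.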
“The key idea is that, in some way or another, we must compare whole groups of scores,” said Paul W. Holland, who holds a chair in measurement and statistics at the Princeton, N.J.-based Educational Testing Service. He was a member of a working group that helped prepare the board’s report on using NAEP to confirm state test results.
To help visualize changes in test scores and test-score gaps for whole groups of students over time, Mr. Holland turned to graphing methods often employed in such fields as medical research, insurance, and engineering. Known as “cumulative-distribution functions,” such charts can display test scores across the entire range of performance simultaneously, thus making progress relatively easy to spot.
In the first chart above, for example, the lavender and orange curves show the percentage of students who scored below each point on the NAEP 4th grade mathematics scale for two groups of State A 4th graders who took the test in 1996: those eligible for the federal free-lunch program and those ineligible for it, respectively. By looking at where the curves intersect the line marked “basic,” you can see that in 1996, more than half the students from low-income families, or those eligible for free lunches, scored below the basic level on the test, compared with about 23 percent of their better-off counterparts.
It’s also clear from the chart that, four years later, the performance of both groups had improved across the board: both curves shifted to the right.
In 2000, for example, about 40 percent of 4th graders from low-income families performed below basic on the test, compared with about 15 percent of their more affluent peers. If there were no space between the curves for the two groups, it would mean that 4th graders eligible and ineligible for free lunches performed equally well. As the National Assessment Governing Board report presented here notes, “This is a completely new way of representing ‘gap’ in achievement on NAEP.”
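A chart like the one described can be sketched directly from raw scores: sort each group’s scores, then plot, at every score, the cumulative percentage of students falling below it. The snippet below is a minimal illustration, not the board’s actual procedure; the normal samples and the cut score of 214 are invented stand-ins, with means picked so the percentages roughly echo the 1996 figures quoted above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented normal samples standing in for the two 1996 score
# distributions; real NAEP results would replace these.
rng = np.random.default_rng(0)
eligible = rng.normal(211, 30, 5_000)    # free-lunch-eligible students
ineligible = rng.normal(236, 30, 5_000)  # ineligible students

BASIC = 214  # stand-in for the "basic" cut point on the NAEP scale

for scores, label in [(eligible, "eligible"), (ineligible, "ineligible")]:
    xs = np.sort(scores)
    ys = 100 * np.arange(1, len(xs) + 1) / len(xs)  # cumulative percent
    plt.plot(xs, ys, label=label)
    print(f"Below basic, {label}: {100 * np.mean(scores < BASIC):.0f}%")

plt.axvline(BASIC, linestyle="--")  # the vertical "basic" line
plt.xlabel("NAEP scale score")
plt.ylabel("Percent of students scoring below")
plt.legend()
plt.show()
```

Where each curve crosses the vertical line is the share of that group scoring below the cut, which is exactly how the percentages above are read off the chart.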
‘Truth in Gaps’
The second chart zeroes in on the actual size of the achievement gap between the two groups and whether it has closed over time. It shows that a gap exists at almost every point along the continuum, but that the gap was wider in 1996 than in 2000. Fourth graders eligible for free lunches who scored at the 30th percentile in 1996, for example, had NAEP scale scores that were 26 points lower than those of their better-off peers at the same percentile. Four years later, that difference had dropped to 20 points.
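Reading a gap off the second chart amounts to comparing the two groups’ scores at the same percentile and subtracting. Here is a minimal sketch of that calculation; the score samples are again invented, with means chosen so the gaps land near the 26- and 20-point figures above.

```python
import numpy as np

# Invented samples standing in for the four score distributions;
# the means are set so the percentile gaps roughly match the text.
rng = np.random.default_rng(1)
eligible_96 = rng.normal(209, 30, 10_000)
ineligible_96 = rng.normal(235, 30, 10_000)
eligible_00 = rng.normal(219, 30, 10_000)
ineligible_00 = rng.normal(239, 30, 10_000)

def gap_at(pct, eligible, ineligible):
    """Scale-score gap between two groups at the same percentile."""
    return np.percentile(ineligible, pct) - np.percentile(eligible, pct)

print(f"1996 gap at 30th percentile: {gap_at(30, eligible_96, ineligible_96):.0f} points")
print(f"2000 gap at 30th percentile: {gap_at(30, eligible_00, ineligible_00):.0f} points")
```

Because the same function can be evaluated at any percentile, sweeping it from the 1st to the 99th traces the full gap curve shown in the chart.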
Taken together, the two charts show that while both groups improved, students eligible for the free-lunch program have “gained ground,” and the narrowing of the gap should be considered real.
“To me, it’s like truth in gaps,” Mr. Holland said. “If you just look at one place along the scale—which is what you do if you look at achievement levels—you don’t see anything else. And these distributions typically are changing in a variety of ways, including places where the achievement levels are not set.”
The governing board is considering whether all future NAEP reports should include this kind of information.
“This gives us so much more information than any chart we’ve had in the past,” said Marilyn A. Whirry, a governing-board member and a high school English teacher from Manhattan Beach, Calif., “and it’s more honest information.”