The overwhelming majority of elementary-school pupils nationwide score above the average on normed achievement tests, results from a controversial new 50-state survey indicate.
The survey, believed to be the first of its kind, provides average scores from the 32 states that test elementary students statewide, as well as selected averages for district-administered tests in the 18 states without statewide assessments.
The findings are contained in a report released last week by Friends for Education, an advocacy group for public education in West Virginia.
Most states using one of the six major standardized achievement tests offered by commercial firms are “well above the national average” in all elementary grades and subjects, according to the study.
Such findings suggest that norm-referenced tests--in which students are compared with a group tested in the past, not with other current test takers--“do not represent an accurate appraisal of educational performance,” argued John Jacob Cannell, the group’s president and author of the report.
“The main purpose of the tests is looking good,” said Dr. Cannell, a Beckley, W.Va., physician who financed the survey with $11,000 of his own money.
Such tests “allow all states to claim to be above the national average,” he wrote.
Based on the most recent results available, 90 percent of school districts and 70 percent of students tested performed above the average on nationally normed tests, the report estimates.
Those scores are inflated because testing companies set norms that are artificially low, Dr. Cannell charged. Test publishers “want to have good news to sell to superintendents,” he said in an interview.
Moreover, he contended, gains in student performance since such tests were normed can be attributed to teachers’ concentration on teaching material they know will be tested.
To improve the accuracy of the test results, he concluded, testing companies should set national norms annually, so that only half the children tested would in fact score above the 50th percentile.
Most test makers renorm tests every five to seven years, according to testing-company officials.
Conclusions Disputed
Most test publishers and education officials interviewed last week took sharp exception to the conclusions of the report, “Nationally Normed Elementary Achievement Testing in America’s Public Schools: How All 50 States Are Above the National Average.”
They denied that test scores are artificially inflated, and said that the relatively high level of performance reflects genuine improvement in student achievement.
Other indicators, such as scores on the National Assessment of Educational Progress and college-admissions tests, confirm that student achievement has improved in the past few years, according to Ramsay Selden, director of the education-assessment center of the Council of Chief State School Officers.
It appears likely, he said, that average student performance is higher than it was in the early 1980’s.
Moreover, state officials said, test results are used not to compare student performance across states, but to evaluate instructional programs and gauge improvements over time.
“I don’t think the people in the states are trying to pull the wool over anyone’s eyes,” added Anne C. Hess, coordinator of student assessment for the Alabama Department of Education.
Nationwide Survey
Friends for Education, a three-year-old organization that claims some 700 members throughout West Virginia, undertook the study last summer after finding that nearly all districts in the state scored above the national average on the Comprehensive Test of Basic Skills.
Those results were unexpected, the report notes, because the economically troubled state ranks near the bottom on most other measures of educational quality, such as adult-literacy rate and college admissions-test scores.
“At every turn, we were confronted with test results showing that desperately poor counties were above the national average,” said Dr. Cannell. “If McDowell County is above the national average, what county could possibly be below?”
Moreover, he added, the group feared that the relatively high performance of students in such districts could promote complacency and thereby jeopardize public support for school improvements.
With the help of three high-school students working part time, Dr. Cannell last August sent letters to each of the 50 state superintendents and testing coordinators, and telephoned large districts in the 18 states without state testing programs.
The respondents provided information on the tests and norms used, the statistical methods for reporting scores, the number of test takers, and their most recent reading, language, mathematics, and basic-battery scores.
Of the 32 states with statewide assessments, 26 used one of the six leading commercial tests, the survey found.
Those assessments include the Iowa Test of Basic Skills, marketed by Riverside Publishing; the Stanford Achievement Test and the Metropolitan Achievement Test, both published by Harcourt Brace Jovanovich; the Comprehensive Test of Basic Skills and the California Achievement Test, both published by CTB/McGraw-Hill; and the Science Research Associates test, marketed by IBM Corporation.
Six states reported using their own statewide tests. Such tests were nationally normed through an “equating study” with one of the commercial tests, according to the report.
In the 18 states without state assessments, all of the individual districts surveyed reported using one of the six major commercial tests.
With few exceptions, the respondents reported that their pupils had scored above average on the most recent achievement tests for which results were available. (See Databank on preceding page.)
The survey found that all 32 states with statewide testing programs scored above average, including states well below the national average in per-capita income, high-school graduation rate, and college admissions-test scores.
Likewise, in all of the 18 states with district-administered tests, “the vast majority” of the districts surveyed were above the national averages for their respective tests, according to the study.
Many large inner-city districts--such as Trenton, N.J., New York, Boston, St. Louis, and East St. Louis, Ill.--also reported scores above the national average.
However, the report adds, some large districts, including Detroit, Chicago, Cleveland, and Dade County, Fla., reported scores below the national average.
Among its other findings:
In South Carolina, which ranks 47th in the nation in per-capita income and high-school graduation rate, 62.9 percent of 4th graders tested above the national average on the CTBS total battery.
In Iowa, which has the highest college-admissions test scores in the nation, 95.8 percent of schools were above average on the 6th-grade ITBS.
In Michigan, which ranks at about the national average in per-capita income, college admissions-test scores, and poverty rate, an estimated 90 percent of the 525 districts scored above the national average on the elementary achievement tests.
‘Teaching the Test’
Dr. Cannell contended that these apparently encouraging results were largely attributable to testing companies’ setting their norms too low.
That view was seconded last week by Earl Hobbs, superintendent of the Clayton, Mo., school district, which uses what he said is a more rigorous assessment than those offered by the commercial publishers.
“If we simply wanted to look good, we could pick any one of the [commercial] achievement tests,” Mr. Hobbs said.
The Clayton schools currently use a test administered by the Educational Records Bureau, a Wellesley, Mass.-based nonprofit firm that administers achievement tests in independent schools and some affluent suburban districts.
Dr. Cannell’s report also maintains that the high scores reflect efforts by teachers and administrators to boost performance by adjusting their curricula to match the items and topics on the tests.
In states such as Wisconsin, Utah, California, and Maine, which have adopted “elaborate security measures” to bar teachers from seeing test items, the report states, scores show only slight yearly increases.
Scores in those states “are generally lower than scores from states where teaching the test is possible,” it adds.
Representatives of testing companies last week strongly denied the assertion that their tests are not accurately normed.
Representative Sample
The national averages used in the Iowa Test of Basic Skills, for example, are based on a test administered to a representative sample of students, according to H.D. Hoover, director of the testing program, based at the University of Iowa.
“I would never argue that it is a perfectly representative sample of U.S. elementary-school children,” he said. “There is no such thing as a perfectly representative sample.”
“But it is every bit as good as the [National Assessment of Educational Progress] and the Gallup Poll,” he continued. “I have lots of confidence in those norms.”
Furthermore, added Paul L. Williams, director of research and measurement for CTB/McGraw-Hill, which administers the CTBS and the CAT, the level of student performance documented by Dr. Cannell reflects real improvements since the norms were set.
But any such improvements skew the test results, argues Dr. Cannell, who recommends that testing companies recalculate their norms each year so that average scores more accurately reflect current performance.
Testing officials argued, however, that annual renorming of tests would take away many of the advantages of the current system.
Setting norms annually would cost millions of dollars and would harm the ability of administrators to measure student progress over time, argued Vana Meredith-Dabney, supervisor of the educational-assessment section in the South Carolina Department of Education.
“We would be masking any improvement at all if we re-normed every year,” she said.
“If you redefine what is a pound every time you gain weight, you are always 162 pounds,” added Chester E. Finn Jr., the U.S. Education Department’s assistant secretary for educational research and improvement.
Despite such arguments, some testing companies have begun to move toward recalculating norms more frequently.
For example, CTB/McGraw-Hill has begun to provide annual standards based on the previous year’s results. This so-called “rolling norm,” though not based on a representative sample, would “help educators get a better fix on where they are now,” Mr. Williams said.
Copies of Dr. Cannell’s report can be purchased for $5 each from Friends for Education, Box 358, Daniels, W.Va. 25832-0358.