The test-publishing industry and state assessment leaders have come together for the first time to define a set of best practices for large-scale state testing.
The result of the collaboration, released last week, is a new best-practices guide intended to serve as a road map to improving state assessment procedures. The Council of Chief State School Officers, which represents commissioners of education, and the Association of Test Publishers, a nonprofit trade group, both based in Washington, coordinated its development.
Work on the guide began in 2006, after the rise of large-scale accountability testing under the No Child Left Behind Act highlighted states’ and test publishers’ need for guidance in designing and implementing sound assessment systems, said Gene Wilhoit, the CCSSO’s executive director.
“This guide is a byproduct of an industry in its infancy that mushroomed quickly,” he said. “That helped us see the need to have much better practice.”
It tackles areas that have been nettlesome for state assessment officials as well as test publishers, including procurement, item development, test security, scoring and reporting, testing special populations, and transitioning assessment work from one provider to another.
Articulating best practices in assessment is particularly timely, Mr. Wilhoit noted, in light of two federal Race to the Top competitions. States vying for the $4 billion in stimulus money offered in the main contest are proposing new approaches to testing, among other ideas to improve education, and several groups of states are seeking chunks of a separate pot that offers $350 million for new assessment systems.
Common Language
The guide offers state assessment officials and test publishers a common language and set of expectations that can improve their work, said William G. Harris, the chief executive officer of the test-publishers association.
“Quality [work on assessment systems] requires both states and test publishers to make sure they understand each other’s needs,” he said.
One area that emerged as needing definition and improvement is how states write requests for proposals when they seek bids on new testing systems, Mr. Wilhoit said. States too often write vague proposals, leaving test publishers to make sense of what they want, he said, and that can spark problems down the line when the proposals—or the tests themselves—don’t reflect the states’ needs. The guide offers suggestions for organizing and training state staff in that area.
Navigating transitions can also prove bumpy for states and test-makers, Mr. Wilhoit said. Wrapping up a contract with one provider and beginning work with another can produce awkward gaps that need to be addressed. And frequent personnel changes in state assessment departments have shown the need for consistent training to keep the testing processes running smoothly as new employees come aboard, he said.
States are also under increasing pressure to ensure that test items properly reflect their academic standards, according to Mr. Wilhoit. But they often rely too much on test publishers for that alignment, risking gaps that undermine the validity of the tests as good gauges of standards mastery, he said.
One area the guide does not tackle in depth is the role technology plays in assessment, Mr. Harris said. The two groups plan to delve further into that area in the next version. A work group will convene to begin revisions in 2011, and the groups are seeking feedback on the guide to shape that next version, they said.
The best-practices guide is available for purchase on Amazon.com and on the websites of the test-publishers association and the CCSSO.