For most people, spring means the welcome return of longer days and warmer weather. But in public schools, springtime means testing time. Increasingly, this sends chills up the spines of teachers, parents, and students, as schools spend hours and hours preparing for, and now administering, new tests designed to measure the achievement of new standards.
In some places, we’re seeing a backlash against this high-stakes testing from frustrated educators and from more and more parents who are opting their children out of the tests. Tests are supposed to give educators, parents, students, and others a sense of how students are doing in a particular subject and how instruction might be adjusted. Rather than putting so much emphasis on one test, which is a snapshot of a single moment, it would be far more informative to look at student growth over time, like time-lapse photography.
Public outcry may get much worse once results from the full administration of the new Common Core State Standards tests are released later this year, and not for the reasons you might think. Remember how, in 2013, John B. King Jr., then New York’s commissioner of education, knew how students would score even before the state first administered its common-core exam? Months before the test was given, he predicted that 70 percent of students would fail, and his prediction came true.
Well, the same will be true for students in states using the tests designed by the two federally funded assessment consortia.
Last fall, states in the Smarter Balanced Assessment Consortium adopted four “achievement levels” for the consortium’s tests, with “cut scores” marking the threshold for each level. The consortium estimated that under this system only 32 percent of 8th graders would pass math and only 41 percent would pass English/language arts, with very similar passing rates in other grades. The other consortium, the Partnership for Assessment of Readiness for College and Careers, or PARCC, will set its cut scores this summer, based on results from tests given this spring.
Cut scores are set in a fashion that involves as much subjective judgment as science. The achievement levels themselves are broad and imprecise. They are not designed to measure whether a student is on grade level, and they’re certainly not predictors of any individual student’s success. These categories also create incentives to focus time and attention on “bubble kids,” students whose scores cluster just below the cut score for proficiency. What we should do instead is track student growth over time, based on actual scores from multiple measures, including teacher-administered tests, end-of-course demonstrations of performance, and project-based learning.
States should ditch cut scores, with their simplistic picture of performance. Yet there’s political appeal in bashing schools over the higher failure rates created by raising cut scores, which is not the same thing as declining student performance. New York Gov. Andrew Cuomo, for example, cites results on the state’s new tests as a reason to increase the percentage of teachers’ evaluations tied to test scores. And Eva S. Moskowitz, the founder and CEO of the Success Academy Charter Schools network and a former New York City Council member, was recently called out for promoting the idea that huge numbers of students can’t read or do math.
Interestingly, even Michael J. Petrilli, the president of the conservative Thomas B. Fordham Institute, has said that using the new, tougher common-core-aligned tests to brand schools as failing is “out of bounds.”
The point of setting higher standards is to help students achieve them over time, not rush to premature judgment. Realizing that it’s too soon to attach stakes, policymakers in 24 states already have hit the pause button on various consequences from these assessments for students, teachers, and schools. Let’s move toward a more thoughtful approach that puts testing in its rightful place—and returns spring to a season of growth, not failure.