Assessment

Crafting a New Generation of Assessments

By Katie Ash — February 17, 2009 5 min read

The power of technology to improve assessments, in part by providing useful data on not just what students know, but also on how they arrive at their answers, has been left largely untapped, especially in science, concludes a paper released today.

Computerized testing can be less expensive and deliver faster results—two advantages that many school districts have benefited from—but it also has the potential to provide richer data on student performance, which could help inform curriculum and instruction, according to the report by the Washington-based think tank Education Sector.

“We should be thinking about how we design the assessments to fit more with cognitive science—to really think about how students progress in their learning so we can not only get better evidence about how they’re doing but get better evidence for us to get the ‘why,’ ” Bill Tucker, the chief operating officer of Education Sector and the author of the report, said in an interview.

But Scott Marion, the associate director of the Dover, N.H.-based Center for Assessment—a nonprofit organization that works with states and districts to improve their testing and accountability systems—de-emphasizes the importance of technology to improve assessment and encourages states not to wait until the technology is put into place to start reformatting testing models.

“We could have a lot better assessments without going near computers, and we could have computer-based assessments that are the same lousy assessments we have now,” he says. “I think it’s fair to join [technology and assessment improvement], but my only fear is that people think, ‘We can’t go to the future of assessment until we can provide the [technological] infrastructure in every classroom.’ ”

Perhaps a better starting place for states without a strong technological infrastructure in place, says Tucker, is to “really build that infrastructure anticipating that as we go, we’re going to think about new ways to use that infrastructure.” By building flexibility into the system, states can leave open the potential to explore more innovative assessments later, he suggests.

Promising Models

Although little progress has been made in harnessing technology's potential to craft performance-based assessments, which aim to simulate complex real-world experiences and measure students' approaches to them, there are a handful of promising models, according to the paper.

One is the Problem-Solving in Technology-Rich Environments, or TRE, project. Created in 2003 with a test group of 2,000 students, TRE assesses scientific-inquiry skills by presenting students with a problem—such as questions about buoyancy, volume, and mass—to be answered in a performance-based model, rather than by multiple-choice questions. In this case, students gathered information by running experiments with a simulated helium balloon.

The computer simulation program then recorded data on what the students clicked on, how long they spent on each task, and what methods they tried in order to arrive at their answers.

Educators could then examine how students answered the questions and award partial points based on their approach to the problem, as well as see more detailed data on students' thought processes and methodologies than a single multiple-choice question provides.

Other examples, such as the River City project—a computerized simulation game created by Harvard researchers—as well as the Educational Testing Service's Cognitively Based Assessment, which is being piloted in schools in Portland, Maine, represent promising models. None of them currently has widespread implementation, according to the Education Sector paper.

Bridging the Gap

To encourage more states to embrace innovative assessment models, the federal government should provide incentives to states or districts that put them into place, the report recommends.

“That’s the role that states can play,” says Tucker. “The federal government can really develop the research, spur these tools, change the way we’re doing assessment procurement right now, but the states are really the place where it gets implemented or tested.”

To bridge the gap between what technology is capable of and the assessments currently given to students, the report calls for a second, smaller fund in addition to the current federal dollars for assessment that goes toward “the research and development of the next generation of assessment technology and practice.”

And those new assessment models should be open to the public for evaluation and criticism, says the report.

“That’s the way that we as a public are going to be able to check up to see if these tools are working,” Tucker says. “If we can think about it early on and make sure that we structure any sort of funding-incentive grants [to encourage transparency], it’s going to be helpful.”

Given the recent focus on science education, as well as concern about U.S. students' mixed performance on international and American assessments in that subject, that sector provides a good jumping-off point for these policy recommendations, Tucker says.

“If you look at the research and what the recommendations are for science instruction, it’s a place where you really do need both content and process skills together,” he says. “It’s absolutely necessary to understand science content, but it’s also absolutely necessary to think like a scientist.”

In addition, “things like simulation and digital tools—that’s the way we’re doing science right now,” says Tucker. “It’s not divorced from the field.”

Another reason science could be a good place to start, say both Tucker and Marion of the Center for Assessment, is that assessments in science don't count toward schools' adequate yearly progress under the No Child Left Behind Act. As a result, states and districts may have more flexibility to try different kinds of assessments.

“It’s still a place that might be a little more fertile,” says Tucker.

The report is the second in a series of Education Sector’s Next Generation of Accountability initiative, following a paper published last November called “Measuring Skills for the 21st Century.”
