
‘Nation’s Report Card’ Has a New Reading Framework, After a Drawn-Out Battle Over Equity

By Sarah Schwartz — August 13, 2021

The governing board that oversees the test known as the “nation’s report card” has adopted a new framework for designing the reading assessment, one that will provide more granular information about student performance by socioeconomic status and race, and that will test students’ ability to read across disciplinary contexts.

But even after a unanimous vote to approve the new framework last week, some members of the panel tapped to develop the document have lamented what they see as missed opportunities for a fairer test: the aftermath of a heated, yearslong debate over equity in assessment during the development process.

The National Assessment of Educational Progress, or NAEP, is given to a nationally representative sample of U.S. students to measure what they know and can do across subjects.

The National Assessment Governing Board, or NAGB, supervises the NAEP and leads the process for updating the frameworks that guide how the test is constructed. The reading framework was last revised in 2009. The new changes will go into effect for the 2026 test administration.

A key consideration in updating the framework is maintaining NAEP’s long-term trend line: the ability to compare results from upcoming years with past scores, so as to draw conclusions about whether students are improving. (The National Center for Education Statistics, which conducts and analyzes NAEP tests, has said that the newly adopted framework is likely to maintain trend.)

Understanding the trends is especially important now, said Lesley Muldoon, NAGB’s executive director, in order to evaluate the effect COVID-19 has had on student achievement “so that people can have a trusted baseline that they can use going forward.”

The framework development process has always included a diversity of perspectives, with different factions working to hammer out their differences and produce a consensus document. But tensions ran especially high this time.

The debate raised questions central to the construct of reading itself: What does “real-world reading” actually look like? And how much of it is influenced by readers’ cultural backgrounds and the social contexts in which they learn?

At the same time, these conversations were taking place in the middle of a national conversation on race that has pushed educational organizations to consider how teaching, learning, and assessment can better support students of color.

Framework offers more data on students’ reading across disciplines

There are significant changes in the consensus document—changes that advocates on both sides of the framework debate said, in interviews with Education Week, would make NAEP a richer source of data on students’ reading ability.

The new framework calls for more detailed reporting on NAEP subgroups. Scores won’t just be disaggregated by race, ethnicity, and English-language learner status, but also differentiated by socioeconomic status within race and ethnicity. So, going forward, it would be possible to see the differences in scores between Black students from high-income families and Black students from low-income families, for example.
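To make that reporting change concrete, here is a minimal, hypothetical sketch of what disaggregating average scores by income level within each racial and ethnic group could look like. The column names, data values, and use of pandas are illustrative assumptions for this example only, not NAEP’s actual data or reporting pipeline.

```python
# Hypothetical illustration of subgroup disaggregation; the columns,
# income categories, and scores below are invented for this sketch
# and do not come from NAEP's actual data or reporting tools.
import pandas as pd

scores = pd.DataFrame({
    "race_ethnicity": ["Black", "Black", "White", "White", "Black", "White"],
    "income_level":   ["high", "low", "high", "low", "low", "high"],
    "scale_score":    [268, 241, 275, 252, 239, 281],
})

# Average scale score for each (race/ethnicity, income) subgroup,
# the kind of cross-tabulated view the new framework calls for.
by_subgroup = (
    scores.groupby(["race_ethnicity", "income_level"])["scale_score"]
    .mean()
)
print(by_subgroup)
```

Running the sketch prints one average per subgroup, making it possible to compare, say, Black students from high-income families with Black students from low-income families.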

Students will also be tested on their ability to read informational text in social studies and science. This isn’t meant to evaluate students’ content knowledge—“this is not a test about whether they know the causes of the American Revolution,” Muldoon said—but rather whether students can use discipline-specific reading skills in genres they’ll encounter in the classroom and the real world.

And the framework adds a new “comprehension target,” or tested component of reading comprehension ability. Previously, the framework included three: 1) locate and recall information, 2) integrate and interpret information, and 3) analyze and evaluate information.

Now, students will also be expected to “use and apply” what they read, to solve problems or create something new. For example, after reading a series of opinion pieces on a subject, a student might be asked to write a blog post that synthesizes the different positions or offers their own argument.

“This is not just your mother’s and father’s ‘find the main idea,’” said David Steiner, a professor of education at Johns Hopkins University and the executive director of its Institute for Education Policy. (Steiner was not involved in the drafting of the framework, but has commented publicly on the process.)

Other updates to the framework formalize changes that have already been made to the NAEP, following its shift from paper to digital administration. These include incorporating more digitally native text—such as what might be read on websites—and virtual “characters” that simulate a classroom environment or group work.

One new feature added to this list: Test-takers will also see examples of student responses to questions, to better illustrate what a strong response looks like.

‘What kind of reading do we want to draw inferences about?’

At last week’s board meeting, held both in person in McLean, Va., and streamed online, members praised the consensus process that resulted in the framework adoption.

Still, some members of the development panel felt that the final version diverged too far from the initial drafts—and that commitments to equity were stripped at the 11th hour by a vocal minority of NAGB’s main board.

At the heart of this disagreement were two interconnected questions: how to define reading comprehension, and what constitutes “real-world” reading.

Early versions of the framework, written by the NAGB-appointed development panel, put forth a sociocultural model of reading comprehension. The model holds that reading is in part about what’s going on inside a student’s head—the cognitive processes—but that comprehension is also greatly influenced by social and cultural contexts like home, school, and community.

These early drafts also broadened the use of “informational universal design elements,” or UDEs: text introductions, pop-ups, and videos that give students some background knowledge about the passages they are about to read. This change was suggested because research has shown that reading comprehension is greatly influenced by readers’ background knowledge on the topic. (Students will probably have an easier time reading Animal Farm, for example, if they have some understanding of the Russian Revolution.)

Gina Cervetti, an associate professor of literacy at the University of Michigan School of Education, and a member of the framework development panel, said that beefing up these knowledge scaffolds would have made NAEP a truer test of students’ reading comprehension ability. It would test their knowledge of text structures, or their skills in analyzing information, rather than their content knowledge, she said. It would level the playing field for students who come to the test with different stores of knowledge.

When this version of the framework was put out for public comment, though, it brought forth harsh criticism from some corners of the education world. “This came to be seen as an attempt to inflate the scores of traditionally underperforming students,” Cervetti said. “And nothing could be further from the truth.”

But Steiner, who criticized the draft framework when it was released for comment, said that providing all that supporting information would have created conditions on the NAEP that don’t exist in real-world reading. Take a word like yacht, he said. “You could argue, and this is argued in many state assessments, you can’t use a word like yacht, because less-affluent students have not grown up in a world of yachts.”

But “yacht,” Steiner said, is a word that regularly shows up in works that students might be expected to read as adults: news, magazines, novels. It’s part of a broad public vocabulary that students would be expected to know, and that teachers could reasonably be expected to make sure students know, he said.

Testing whether students are prepared for reading in college and career should include testing whether they can read and make sense of texts that include that word, he argued—and not testing this could mask indicators that students might have trouble with reading later on.

The draft framework was released for public comment last summer, and the development panel incorporated changes resulting from that feedback. But in May, when the revised framework was presented to the full board, some members thought the changes didn’t go far enough.

Grover (Russ) Whitehurst, a NAGB board member and former director of the Institute of Education Sciences, conducted his own further revision of the document, striking most of the references to sociocultural frameworks and toning down the use of informational UDEs, to the alarm of many members of the original development panel.

“The goal ... is to handle background knowledge in ways that strengthen the validity of the assessment, rather than trying to define it out of existence as a factor in reading comprehension,” Whitehurst wrote at the time.

To hammer out these differences and create a consensus document, NAGB’s chair, Haley Barbour, assembled a smaller, cross-committee working group, which put forth the final framework as adopted.

Informational UDEs are still in the framework, but they play a much smaller role. This concerns Cervetti, who maintains that a more robust set of informational UDEs would make the NAEP more like “real” reading, not less.

“In the real world, outside of a standardized assessment, we rarely read completely unfamiliar texts in isolation,” she said. If students read a word they don’t know, they can look it up. “We all have phones, and computers, and people [around us], and dictionaries,” Cervetti said.

“What constitutes real reading is, I think, a real bone of contention. And it makes a huge difference,” said P. David Pearson, a professor emeritus at the University of California, Berkeley’s Graduate School of Education, and the chair of NAGB’s development panel. “But the question is, what kind of reading do we want to draw inferences about?”

Possible changes to framework development process on the horizon

Pearson said the final framework is “something to be celebrated,” but also that he would want to see more work done—in defining reading in more of a sociocultural context, which he said would bring NAEP in line with other national and international assessments, and in gathering more data about students’ school and community environments. And he questioned the framework development process, which requires that NAGB approve new frameworks through consensus.

“I think that’s a great tradition, but if things get controversial, and if there are ideological and theoretical differences, then I’m not convinced that consensus is the only way to make important decisions,” he said. “The other thing about consensus is that it’s another name for minority rule, just as the filibuster in the Senate is another name for blocking the majority.”

The majority of the framework development committee supported the version of the document put forth in earlier drafts, Pearson said, and changes were introduced by a small group of dissenters in the full NAGB board.

But Whitehurst, one of these dissenters, said that his position does not represent a minority view. He argued that many in the reading education community—researchers and school-level educators alike—would endorse a model of reading that puts more emphasis on cognitive processes than on sociocultural contexts. But, he said, this diversity of viewpoints wasn’t represented on the framework development panel.

“Those of us on the board who sort of had to take that position would not have had to if there were greater diversity in the views of those who developed the document,” he said.

After a drawn-out public battle over the reading framework, the framework development process itself is up for review this September by the NAGB board—in part, so the team can “have an easier time with framework development in the future,” said Sharyn Rosenberg, NAGB’s assistant director for assessment development, in the board meeting last week.

Ideally, Whitehurst said, the framework development process going forward would produce documents in which “the tensions are already worked out.”
