Opinion

The ‘Growth Model’ Pilot Isn’t What You Think It Is

By Michael J. Weiss — June 15, 2008 6 min read

As policymakers think about the reauthorization of the No Child Left Behind Act, one of the key considerations is the measurement of school performance. Many people find the federal law's original measures of school performance ("status" and "safe harbor") to be unfair because they do not take into account students' initial achievement levels. As a result, schools are judged in large part by something beyond their control: the amount of knowledge their students had when they entered the school. Consequently, schools are held to very different standards; some need only produce modest learning gains, while others must produce unrealistically large ones.

In response to such criticism, the federal government initiated the growth-model pilot program in 2005, allowing states to use the progress of individual students over time in determining whether schools are making adequate yearly progress, or AYP, toward academic proficiency for all students. Many thought that the growth-model pilot would help recognize—or at least exempt from negative labeling—schools where students were making large achievement gains but not reaching proficiency because of their low initial achievement levels. Such low-status, high-growth schools fail to make AYP under the law’s original measures, even though they may be relatively more effective than other schools at raising achievement.

Seven of the nine states participating in the pilot (Alaska, Arizona, Arkansas, Florida, North Carolina, Ohio, and Tennessee; Delaware and Iowa are the two remaining) are using “projection models” that give schools credit for getting students “on track” to become proficient in the future, even if they are not currently. Under NCLB’s traditional model—a status model—a school has to bring an initially low-performing 3rd grader up to proficiency by the end of the year for the school to receive credit for her performance. Under the pilot’s projection model, the school could receive credit for this student, even if she failed the 4th or 5th grade exam, if learning gains were sufficiently large such that the student appeared to be on track to become proficient by 6th grade.
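The mechanics of that "on track" test can be sketched in a few lines. The sketch below is illustrative only: the scale scores, cut score, and grade span are all invented, and real state models are more elaborate, but the core logic — project the observed annual gain forward and compare against a future proficiency cut — is the one described above.

```python
# Hypothetical sketch of a projection model's "on track" test.
# The cut score and all student scores are invented for illustration.

GRADE_6_PROFICIENCY_CUT = 1500  # hypothetical grade-6 cut score

def on_track(score_gr3: float, score_gr4: float) -> bool:
    """Project the student's observed one-year gain forward to
    grade 6 and test the result against the proficiency cut."""
    annual_gain = score_gr4 - score_gr3      # observed gain, grade 3 -> 4
    years_remaining = 2                      # grade 4 -> grade 6
    projected = score_gr4 + annual_gain * years_remaining
    return projected >= GRADE_6_PROFICIENCY_CUT

# A student who failed the grade-4 exam but gained 250 points still
# counts: 1150 + 250 * 2 = 1650, above the (hypothetical) cut of 1500.
print(on_track(score_gr3=900, score_gr4=1150))   # True

# A student with a small gain does not: 1150 + 50 * 2 = 1250.
print(on_track(score_gr3=1100, score_gr4=1150))  # False
```

Note what the test rewards: not proficiency now, but a gain large enough to reach the cut score later — which is exactly why a school can get credit for a student who failed this year's exam.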

In theory, the advantage of using projection models is that they give schools a few additional years to bring students up to proficiency. But, as reported in these pages, the pilot program’s growth models don’t appear to be making a big difference in the proportion of schools meeting annual goals under the federal law. (“Impact Is Slight for Early States Using ‘Growth,’” Dec. 19, 2007.) The reason for this has to do with the type of growth models being used.

In practice, projection models are extremely similar to NCLB’s original status measure. In schools where students enter with high initial achievement levels, the learning gains required to get students on track to become proficient are quite small, while in schools where students enter with low initial achievement levels, the required learning gains to get students on track to become proficient may be unrealistically large. Consequently, under the federal growth-model program, schools are still held to different standards—some must produce large gains while others need only to produce small gains. Both status and projection models require all students to reach a fixed proficiency target regardless of their initial achievement levels. It is because No Child Left Behind’s status model and the growth-model pilot program’s projection models are so similar that very few new schools are making AYP because of “growth” alone.


This is made worse by the fact that the projection models currently being used are often inaccurate. Florida’s is one example. Education Week reported this past December that the growth-model pilot there was having an impact: “About 14 percent of the schools that made AYP in Florida made it under the growth model but not the status model.” Unfortunately, the reason for this is not that students in Florida are making more growth than students in the other states piloting these models. Nor is it because of the difficulty of Florida’s state standards. Rather, it is because Florida’s projection model is inaccurate. It assumes that students’ test scores will increase according to a linear trend. That is, if a student gains 200 points between 3rd and 4th grade, Florida’s projection model assumes he will continue to gain 200 points between 4th and 5th grade, and then another 200 points between 5th and 6th grade. But the state’s developmental scale is curvilinear, with students typically making significantly smaller learning gains as they progress in school. As a result, Florida’s projection model systematically identifies many students as on track to become proficient when in reality they will not. At their best, projection models mimic NCLB’s status model; at their worst, they allow some additional schools to make AYP because of measurement error and/or model misspecification.
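The Florida problem can be made concrete with a toy calculation. The numbers below are invented, and the halving of gains each year is only a crude stand-in for a curvilinear developmental scale, but the comparison shows how a linear projection overshoots a student's actual trajectory when gains shrink over time.

```python
# Invented numbers illustrating why linear projection overshoots on a
# curvilinear scale where annual gains typically shrink.

def linear_projection(score: float, gain: float, years: int) -> float:
    # Florida-style assumption: the latest annual gain repeats every year.
    return score + gain * years

def curvilinear_growth(score: float, gain: float, years: int,
                       decay: float = 0.5) -> float:
    # Crude stand-in for a curvilinear scale: each year's gain is a
    # fraction of the previous year's (decay is an invented parameter).
    for _ in range(years):
        gain *= decay
        score += gain
    return score

score_gr3, score_gr4 = 1000, 1200        # observed gain: 200 points
gain = score_gr4 - score_gr3

print(linear_projection(score_gr4, gain, years=2))   # 1600
print(curvilinear_growth(score_gr4, gain, years=2))  # 1350
```

With a (hypothetical) grade-6 cut score anywhere between those two numbers, the linear model labels the student on track while the actual trajectory falls short — the systematic over-identification described above.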

While it is unlikely that projection models will have a significant impact on which schools are identified as making AYP, if federal policymakers decide to continue using such models, they should at least require states to demonstrate the accuracy of their models at the student and school levels. States with records of student-achievement data can demonstrate the accuracy of their models by using them to determine which students were on track to become proficient and then comparing these projections with the observed outcomes (that is, whether the students became proficient). In my research, I’ve found that the models are not very accurate when compared to any reasonable standard.
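Such a back-test is simple in principle: take a past cohort, record which students the model classified as on track, and compare those classifications with who actually became proficient. A minimal sketch, with an invented six-student cohort, might look like this:

```python
# Hypothetical back-test of a projection model: compare its "on track"
# classifications with observed proficiency outcomes for a past cohort.

def accuracy(projected_on_track: list, became_proficient: list) -> float:
    """Fraction of students the model classified correctly."""
    correct = sum(p == o for p, o in zip(projected_on_track, became_proficient))
    return correct / len(projected_on_track)

# Invented cohort: the model flagged four students as on track,
# but only two of them actually reached proficiency.
projected = [True, True, True, True, False, False]
observed  = [True, False, True, False, False, True]

print(accuracy(projected, observed))  # 0.5
```

A serious audit would also break the errors out by type — students wrongly counted as on track inflate a school's AYP standing, while students wrongly counted as off track penalize it — but even this crude overall rate would expose a model that projects poorly.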

This raises the question: Are people simply wrong about growth models? Are there really very few low-status, high-growth schools? The answer is no. What most people have in mind when they think of growth models is a particular type that is not allowed under the federal pilot program: value-added models. These aren’t allowed because they do not require all students to become proficient. Unlike the growth-model pilot’s projection models, however, value-added models attempt to measure schools’ relative effectiveness by accounting for students’ initial achievement levels. Although many value-added models are extremely complex, the ones used to measure school performance can be loosely thought of as comparing the average gains of students in a school to the gains those same students could have been expected to have made had they gone to the “average” school. While value-added measures of school performance have their own problems, these are the models people typically think of when they envision an accountability system that includes a growth component. If value-added models were used, we would identify quite a few low-status, high-growth schools.
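The loose comparison described above — a school's average gain versus the gain its students could have expected at the "average" school — can be sketched in miniature. Everything here is invented and real value-added models condition on prior scores and much more, but the sketch shows how a low-status school can come out ahead once growth, not level, is the yardstick.

```python
# Loose sketch (invented data) of a value-added comparison: a school's
# mean gain minus the mean gain across all schools, the latter standing
# in for what students would gain at the "average" school.

from statistics import mean

def value_added(school_gains: list, all_gains: list) -> float:
    """School's mean gain relative to the all-schools mean gain."""
    return mean(school_gains) - mean(all_gains)

# Hypothetical annual score gains for students at two schools.
gains = {
    "A": [60, 55, 58],   # low initial scores, large gains
    "B": [30, 28, 32],   # high initial scores, modest gains
}
all_gains = [g for gs in gains.values() for g in gs]

for name, gs in gains.items():
    print(name, round(value_added(gs, all_gains), 1))  # A: +13.8, B: -13.8
```

School A would likely fail AYP under both the status and projection models, yet it is the higher value-added school — the low-status, high-growth case the pilot program fails to recognize.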

Value-added models are not allowed under the growth-model pilot program because they don’t adhere to the core principle of NCLB—to bring all students up to proficiency. But they do represent the fairest (albeit imperfect) way to compare schools’ effectiveness. The dilemma over which measure of school performance to use highlights an inherent tension when designing an accountability system for schools, one between the desire to compare their relative effectiveness (value-added models) while simultaneously holding them accountable for bringing all students up to high achievement levels (status or projection models). Some people thought that the pilot program’s projection models were a happy middle ground. Unfortunately, projection models don’t address the essential tension between status and growth. They are just the same old status-model wine in a new bottle.

A version of this article appeared in the June 18, 2008 edition of Education Week as The ‘Growth Model’ Pilot Isn’t What You Think It Is
