“Value added” models that track the test-score gains of individual students over time hold great promise but should not yet be used as the main basis for rewarding or punishing teachers, according to two reports released this month.
The reports, by a study group of the National Association of State Boards of Education and by Henry I. Braun, a researcher at the Educational Testing Service, describe such models as a welcome antidote to judging teachers and schools based solely on whether their students have exceeded some absolute level of performance.
Value-added models “move the discussion about teacher quality to where it belongs: centered on increasing student learning as the primary goal of teaching,” writes Mr. Braun in “Using Student Progress to Evaluate Teachers: A Primer on Value-Added Models.”
(“Using Student Progress to Evaluate Teachers: A Primer on Value-Added Models” is posted by the Educational Testing Service; the executive summary of “Evaluating Value-Added: Findings and Recommendations From the NASBE Study Group on Value-Added Assessments” is available from the National Association of State Boards of Education.)
But he cautions that practical and technical problems remain. In particular, while studies suggest a relationship between teacher quality and gains in student learning, that evidence falls far short of proving that an individual teacher caused a given student’s progress, or lack of it.
“Such interpretations are most credible when students are randomly sorted into classes, and teachers are randomly assigned to those classes,” Mr. Braun says in the study from the Princeton, N.J.-based test-maker. “In reality, the classroom placement of students and teachers is far from random.”
Student learning also can be influenced by a variety of factors beyond a teacher’s control, he continues, such as the physical condition and resources of the school, which are hard to account for in the statistical models available.
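For readers who want the underlying intuition, here is a minimal sketch, in Python, of the kind of calculation a value-added model performs. It is not the specification used in either report, and every number in it is simulated: each student’s current score is predicted from the prior year’s score, and a teacher is credited with the average amount by which her students beat the prediction. Because these imaginary students are randomly assigned to classes, the estimates track the true effects, which illustrates Mr. Braun’s point that random placement is what makes such readings credible.

```python
import numpy as np

# Toy value-added sketch: simulated data only, not any report's actual model.
rng = np.random.default_rng(0)

n_teachers, class_size = 10, 25
true_effects = rng.normal(0, 5, size=n_teachers)        # hidden teacher effects
teacher = np.repeat(np.arange(n_teachers), class_size)  # random class rosters

prior = rng.normal(500, 50, size=teacher.size)          # last year's scores
current = (prior + 10                                   # average district gain
           + true_effects[teacher]                      # teacher contribution
           + rng.normal(0, 15, size=teacher.size))      # everything else

# One district-wide prediction line: current score as a function of prior score.
slope, intercept = np.polyfit(prior, current, 1)
residual = current - (slope * prior + intercept)

# A teacher's estimated "value added" is the mean residual of her students.
for t in range(n_teachers):
    est = residual[teacher == t].mean()
    print(f"teacher {t}: true effect {true_effects[t]:+5.1f}, estimated {est:+5.1f}")
```

If, instead, stronger students were systematically steered toward certain teachers, the residuals would absorb that sorting, and the “effect” attributed to a teacher would partly reflect her roster rather than her teaching, which is precisely the nonrandom-placement problem Mr. Braun describes.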
Caution Urged
While value-added models might play some role in teacher evaluation, NASBE’s study group on value-added assessment agrees, they should be used with caution.
“We believe that educators should recognize that value-added assessment is a ‘tool,’ ” says the report from the Alexandria, Va.-based organization, “but it is not ‘total’—and indeed that the data can only with certainty identify about the top 10 percent and bottom 10 percent of teachers.”
Both reports suggest that value-added methods hold great potential for making lower-stakes decisions about teachers, such as identifying those who need extra training or support.
Such models also offer “the promise of a much-needed quantitative component in teacher evaluation,” Mr. Braun argues, but they should always be combined with other sources of information, such as observations of classroom performance.
The NASBE study group also was enthusiastic about the use of value-added measures as a “data-driven component” of efforts to improve instruction at the classroom, school, and district levels.
“Indeed, many believe that this is the most significant advantage of value-added models,” the report says.
Many states are exploring ways to add a “growth” or value-added measure to the calculations of whether schools are making adequate yearly progress under the federal No Child Left Behind Act. The NASBE study group urges the U.S. Department of Education to allow the use of such growth indicators as a component of AYP calculations.
Adding evidence of students’ academic growth is needed “not only out of fairness,” the study group says, but also because, when combined with measures of absolute achievement, it provides “the most accurate picture of the effectiveness of schools.”
“In addition,” it says, “failure to use growth as one indicator of success could end up making it even more difficult to retain effective teachers in disadvantaged schools.”
That’s because schools whose students start far below the proficient level on state tests may make great progress over the course of a year and yet still not meet state targets for achievement.
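Imagine, for instance, a school whose students begin the year averaging 100 points below the state’s proficiency cutoff: a 60-point gain would represent remarkable growth, yet the school would still fall 40 points short of the absolute target.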