At a time when efforts to tie teacher evaluations more closely to student performance appear to be gaining momentum, one of the nation’s biggest school districts believes it has found another compelling reason to build such a link.
Researchers in the Dallas district have shown that having a less effective teacher can significantly lower a student’s performance over time, even if the student later gets more competent teachers. And while new evidence that the students of good teachers tend to perform better might not seem surprising, district officials were struck by just how much teacher quality mattered to student achievement.
“This is the first time we’ve measured teachers’ effects on the ability of kids to perform on assessments,” said Robert Mendro, the district’s executive director of institutional research. “And what surprised us the most was the size of the effect.”
The findings also were an eye-opener for some of the system’s school board members, who last week were briefed on the results as they met to discuss an accountability strategy for the 150,000-student Texas district, the nation’s 10th-largest.
Cumulative Effects
Building on the work of researcher William Sanders, who has tracked teacher-quality effects in Tennessee, Dallas researchers started by dividing about 1,500 of the district’s 8,500 teachers--those for whom complete personnel information was available--into five groups of equal size, from least to most effective. (“Research Notes: Bad News About Bad Teaching,” Feb. 5, 1997.)
Teachers’ effectiveness was based on comparisons of their students’ test results at the end of the school year with the test results of students with similar backgrounds who were in the previous grade the year before. Teachers whose students made the greatest gains on the assessments--which included the Iowa Tests of Basic Skills and state tests--were deemed most effective. The researchers also took into account student background factors, such as race and ethnicity, English proficiency, and poverty.
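The sketch below is a simplified, hypothetical illustration of that kind of gain-score grouping, not the district’s actual statistical model; the column names and data layout are assumed, and the Dallas researchers’ real analysis also adjusted for the background factors noted above.

```python
# Illustrative sketch only: group teachers into five equal-sized
# effectiveness groups based on their students' average test-score gains.
# Column names ('teacher_id', 'prior_score', 'current_score') are hypothetical.
import pandas as pd

def rank_teachers(scores: pd.DataFrame) -> pd.DataFrame:
    """Assign each teacher to one of five effectiveness groups (quintiles)."""
    # A student's gain is this year's score minus last year's score.
    scores = scores.assign(gain=scores["current_score"] - scores["prior_score"])

    # Average the gains made by each teacher's students.
    teacher_gains = scores.groupby("teacher_id")["gain"].mean().reset_index()

    # Split teachers into five equal-sized groups:
    # 1 = least effective, 5 = most effective.
    teacher_gains["effectiveness_group"] = pd.qcut(
        teacher_gains["gain"], q=5, labels=[1, 2, 3, 4, 5]
    )
    return teacher_gains
```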
They then tracked the three-year progress--beginning in the 1993-94 school year--of about 17,000 students who were in grades 4-8 by the 1995-96 school year. Those students who had more of the most effective teachers generally made far greater gains on the ITBS than those who had mostly less effective ones.
For example, the average reading scores of a group of 6th graders who had three of the most effective teachers in a row rose from just under the 60th percentile to about the 75th percentile. A similar group of students who had two of the least effective teachers, and then one of the most effective ones, dropped from just above the 60th percentile to just below the 50th percentile.
“What it does is send a message loud and clear that we’ve got to invest more in staff development, in getting teachers with more skills, and in retaining our best teachers,” Mr. Mendro said.
Now that they have the data, district officials are looking at how to respond. Some school board members said the research bolsters their arguments that the system should consider giving student performance a more prominent role in teacher evaluation.
“We want to see what happens when you really do link student performance with teacher evaluations and with accountability,” said board member Kathleen Leos.
‘Diagnostic Tool’
Currently, Dallas teachers are given “classroom effectiveness indices” based on how well their students perform on a battery of tests. But the indices aren’t available until the summer, while the teachers’ formal evaluations are in the spring.
Some principals, nonetheless, do use the indices. Judy Zimny, the principal at L.L. Hotchkiss Elementary School, said she looks at teachers’ past classroom-effectiveness indices when helping them set their goals for the coming school year. When the teachers’ formal evaluations take place in the spring, she assesses how well they’ve met those goals.
“It’s just one more piece, though it’s an especially relevant piece,” Ms. Zimny said.
District officials said last week that they want to see how more principals could use the data to identify teachers for additional professional development and training.
“This is not a tool to eliminate teachers; it is a diagnostic tool to identify where the needs are,” said James Hughey, the acting superintendent. “This is just in the talking stage.”
Leery of Rankings
Some experts warn against basing teachers’ evaluations too much on their students’ test scores.
“One year of test scores is a pretty poor indicator,” said Julia Koppich, an education consultant from San Francisco who has studied teacher evaluation systems. “You need two, three, or four years to get a pattern, and a poor teacher shouldn’t need to wait that long to get help.”
Ms. Koppich favors peer-review systems, in which teachers mentor and evaluate each other, as a way to improve teaching quality.
In Dallas, meanwhile, some teachers are concerned that such research could be used unfairly to label some educators as “bad teachers.”
“My contention is that the preponderance of teachers across this nation and here in Dallas are good teachers, and if you gave them a different working environment, they’d do better,” said Roy Kemble, the president of the Classroom Teachers Association of Dallas, an affiliate of the National Education Association.