Using some $50 million in federal dollars, Teach For America embarked on a bold expansion effort beginning in 2010, betting that it could scale up while maintaining quality. Although the organization didn’t ultimately reach its recruitment targets, it seems to have passed the quality test, according to a study released March 4.
An experimental study of a subset of new recruits who joined the organization during that expansion found that those teachers performed about as well as their non-TFA colleagues already working in the same schools. In one case, reading in prekindergarten through 2nd grade, the TFA teachers outperformed the other teachers in the same schools.
The study, released by the Princeton, N.J.-based Mathematica Policy Research, is the third random-assignment study of Teach For America conducted over the past decade. It was commissioned to examine TFA's scale-up, which grew the corps by about 25 percent by 2012-13, from 8,200 to about 10,250 recruits. (The study examined only grades K-5, a little more than a third of the teachers recruited during the scale-up, so it says nothing about the performance of middle or high school teachers.)
The study took place in 2012-13. Within each participating school and grade level, researchers randomly assigned students either to a class taught by a TFA teacher or to a class taught by another, non-TFA teacher. (Some of the TFA teachers were in their first year of teaching, others in their second.) The students' test scores were then compared at the end of the year. The random-assignment design makes it possible to attribute any effects to the teachers themselves, rather than to differences in the populations they served.
In all, the sample included 156 teachers in 36 schools across 13 school districts.
As in other studies of TFA, the teachers participating in the program were more likely than their colleagues to have graduated from a selective school, and they were also younger and more likely to be white and non-Hispanic.
Overall, the study found few significant differences in test scores between students taught by TFA teachers and those taught by non-TFA teachers.
There was one notable finding, however: In reading, students in prekindergarten through 2nd grade who were taught by TFA teachers did somewhat better than their peers, by about 0.12 of a standard deviation. (Researchers debate whether effect sizes like these can be translated into specific learning gains. But for what it's worth, the study says that's equivalent to about 1.3 additional months of learning.)
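To see roughly how such a translation works (the specifics here are illustrative, not necessarily the study's own method), divide the effect size by an estimate of how much students typically gain over a nine-month school year. If early-elementary readers typically gain on the order of 0.8 of a standard deviation per year, an assumed benchmark for this sketch, then 0.12 ÷ 0.8 × 9 ≈ 1.4 months of extra learning, close to the study's 1.3-month figure.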
This finding is interesting for a few reasons. First, it shows an effect on reading, whereas almost all of the prior literature finds that TFA's effects are limited to math scores. Second, the study used a different kind of test, the Woodcock-Johnson III, rather than a statewide reading test (those typically aren't administered until 3rd grade). It's unclear just what using a different test might have done to the results, but research has shown that effectiveness estimates do vary depending on which exams they're based on. Third, an effect size of 0.12 is somewhat larger than the effects seen in other studies of Teach For America; the last major experimental study, of secondary math, found an effect size of about 0.07. And finally, this is the first study to find effects in such early grades.
As usual, this study doesn't address many of the other important questions about Teach For America, such as whether it's the program's selectivity or other aspects of its design that produced these results. And it doesn't address the most hotly debated (and conceptual) question about TFA: Is the cost of recruiting and training these teachers worth these results?
Still, the study is another rigorous one suggesting that, at the very least, the instruction provided by the TFA program doesn’t negatively affect student learning, and could produce gains under the right circumstances.