In my efforts to increase student interest in understanding probability, I often do a magic trick: I repeatedly shuffle a deck of cards and have three different students each pick a card. Using the counting principle, students see that the odds of my correctly guessing all three selected cards, in order, are 1 in 132,600; in theory, we would have to go through this process that many times before I guessed right even once by chance. Of course, since I know how to trick my students into selecting the three cards I want them to select, I get the answers I want every time.
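(For readers who want to check the arithmetic, the counting principle makes it a one-liner; the short Python sketch below is my own illustration, not part of the classroom trick.)

```python
# Three students each draw a different card from a 52-card deck. Guessing
# all three correctly, in order, means getting a 1-in-52, then a 1-in-51,
# then a 1-in-50 guess right in succession.
ordered_ways = 52 * 51 * 50
print(ordered_ways)       # 132600
print(1 / ordered_ways)   # about 0.0000075, or once per 132,600 attempts
```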
Last month, the U.S. Chamber of Commerce, the American Enterprise Institute, and the Center for American Progress published a damning report on education titled Leaders and Laggards that I believe uses similar sleight of hand. The report is presented as a fair, research-based, and data-supported assessment of how well states are innovating in education. What I see under the surface, however, is a strong bias for Teach for America and alternative-certification programs and against experienced teachers, teacher tenure, and teacher unions.
The report’s attention-getting strategy is to assign grades, with states receiving individual letter grades for seven broad educational categories. Each category comprises several indicators that are given numerical values and tabulated to produce a numerical score for the category, which is then converted to a letter grade.
But the data used in Leaders and Laggards seems highly selective. For example, one indicator under the “Staffing: Hiring and Evaluation” category is the state’s use of “national programs to recruit nontraditional teachers,” namely Teach for America, The New Teacher Project, and Troops to Teachers. A state’s score for this indicator is influenced heavily by whether or not it uses teachers from TFA and TNTP. If it does, it gets 100 points (an A); if it doesn’t, it gets 59. (That score is then combined with a rating given for its relative number of Troops to Teachers participants.)
But the quality or effectiveness of teachers hired through these programs does not influence the grade. The authors simply note that these organizations “have demonstrated a strong record, based on independent research, of bringing nontraditional applicants into the classroom. …” From a different perspective, though, you could argue that states reliant on these programs have not adequately managed their pipeline of qualified teachers and are overly willing to place people with limited formal teacher training and short-term commitments into high-needs classrooms.
Few would argue against the proposition that every state should be committed to placing the most competent and effective teachers into the classroom. If TFA and TNTP teachers are the strongest candidates available, they should get the job and hopefully develop a passion to improve their practice and increase their commitment to many years of teaching. But grading states numerically based on a blanket commitment to TFA and TNTP, instead of the effectiveness and quality of teachers hired, seems questionable. And by the way, where’s the indicator for innovative practices in supporting and developing all teachers over the course of their careers?
Likewise, the “Hiring and Evaluation” category includes an indicator for alternative certification. The report rewards states with higher levels of alternatively certified teachers and penalizes states with low levels. A separate indicator rates the states on the “strength” of their alternative-certification systems (in terms of admissions standards and accountability), but no attempt is made to determine the actual quality or the effectiveness of the alternatively certified teachers. Are they just assumed to be better for some reason?
As with several other indicators, the ten states with the highest percentage of alternative-route teachers receive an A, 100 points; the next ten states receive a B, and so forth. The process thus guarantees an alarming conclusion, since the grading is designed to give 40 percent of the states Ds or Fs. I have won multiple awards for effective teaching, but if I graded my students on a curve like this, I would be on a plan of improvement next week.
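To make the mechanics concrete, here is a minimal sketch of rank-based decile grading in Python. The state names and scores are hypothetical, invented for illustration; only the grading rule itself comes from the report.

```python
# Decile grading: sort 50 states by an indicator, then assign letters purely
# by rank. The grade distribution is fixed before any data are collected.
letters = ["A", "B", "C", "D", "F"]

def grade_by_rank(scores):
    """scores: dict mapping state -> indicator value (hypothetical data)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return {state: letters[rank // 10] for rank, state in enumerate(ranked)}

demo = {f"State{i:02d}": 100 - i for i in range(50)}   # made-up values
grades = grade_by_rank(demo)
print(sum(g in "DF" for g in grades.values()))         # 20 states, i.e., 40%
```

However the underlying numbers fall, exactly twenty states, 40 percent, end up with a D or an F by construction.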
And there’s another problem with this indicator. The report assigns a degree of accuracy to the alternative-certification data that I don’t think can be said to exist. The percentage of alternatively certified teachers in a state was based on an estimate from the 2007-08 Schools and Staffing Survey from the U.S. Department of Education. This estimate comes from a sampling of teachers and principals from each state who completed a mail-in questionnaire. As I understand it, SASS surveyed on average only three to eight teachers at each school selected for its sample. So Leaders and Laggards is assigning letter grades on the basis of self-reported survey data. What was the margin of error for the state-level results?
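For a sense of how large that margin might be, the standard formula for a proportion estimated from a simple random sample is easy to apply. The sample size and percentage below are my assumptions for illustration; SASS’s actual sampling design and error bounds would have to be checked against its published documentation.

```python
import math

# 95% margin of error for an estimated proportion p from a sample of size n,
# assuming simple random sampling (a simplification of SASS's actual design).
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: if 10% of 400 sampled teachers in a state reported
# alternative certification, the margin is about plus or minus 2.9 points,
# plenty of uncertainty to move a state across a one-decile grade cutoff.
print(round(100 * margin_of_error(0.10, 400), 1))   # 2.9
```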
To take another example, the report also grades states in the broad category of “Staffing: Removal of Ineffective Teachers.” This assessment is based entirely on the SASS survey of school principals. States are graded on the basis of the responses that surveyed principals gave to a set of questions on “barriers” to removing ineffective teachers. If a high percentage of principals in a state reported such barriers (e.g., “teacher associations and unions,” “teacher tenure,” “personnel policies,” “dismissal is too stressful”), that state received a low grade for the category.
This is another case where the report’s methodology seems calculated to generate sensationalistic results. Again, the data is sorted to give the top ten states an A, the next ten a B, the next ten a C, and the remaining states Ds and Fs. But the actual data from the principal survey is surprisingly tightly grouped. A state score of 65 (average) received an A in the report; state scores of 63 and 64 received Bs, and a state score of 62 received a C. That is a very steep grading curve, especially for data based on a survey.
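Running the same decile grader on tightly clustered numbers shows how steep that curve becomes. The scores below are randomly generated stand-ins, not the survey’s actual values; the point is only that when 50 values sit within a few points of one another, rank-based deciles split near-identical, even tied, scores across different letter grades.

```python
import random

random.seed(1)
letters = ["A", "B", "C", "D", "F"]

# Hypothetical: 50 state scores packed into a narrow band, as the
# principal-survey results appear to be.
clustered = {f"State{i:02d}": random.randint(60, 66) for i in range(50)}
ranked = sorted(clustered, key=clustered.get, reverse=True)
grades = {s: letters[rank // 10] for rank, s in enumerate(ranked)}

# Show which letter grades each raw score received.
for score in range(66, 59, -1):
    got = sorted({grades[s] for s in grades if clustered[s] == score})
    print(score, got)   # identical scores can land on different letters
```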
More troubling is the fact that the report makes no attempt to determine how many principals actually took steps to remove ineffective teachers, or whether the surveyed principals were simply imagining or presuming what barriers they might someday encounter. Not to mention that many of the so-called “barriers” referenced actually serve legitimate, some might even say valuable, purposes within a school system. Why weren’t those measured?
The authors of this report appear to have stacked the deck in order to achieve their underlying goal. They believe that improving education requires weakening teacher tenure and union influence while supporting alternative certification and national programs, such as TFA and TNTP, that place inexperienced people with minimal training into teaching positions.
The authors’ biased use of data does not inspire me to look to their organizations for solutions to our educational challenges. Outside groups that lack teacher perspectives, distort data, and have a predetermined agenda may have something meaningful to offer. But I’d estimate that chance to be about the same as correctly guessing, in order, three randomly picked cards from the deck (1 in 132,600) without resorting to a magician’s trick.