After-School Study’s ‘Unfounded’ Review
Your front-page article "After-School Report Called Into Question" (May 21, 2003) reports on a statement from some members of the technical working group criticizing Mathematica Policy Research’s study of the federal 21st Century Community Learning Centers program. I am a member of the external panel but declined to sign the statement, for several reasons.
First, some of the technical criticisms are unfounded, and the researchers have responded to these in their letter, which is also cited in your article. As with any research study, the MPR evaluation has both strengths and weaknesses, which were discussed at the working-group meetings and some of which I noted in my review of the draft report that I sent to MPR. The study’s limitations are also documented in the report, and their presence should not lead to the dismissal of a complex research effort, but rather to an informed interpretation of the reported findings.
Second, the statement’s strong rejection of the study design and methods does not reflect the discussions that occurred at the two meetings of the technical working group that I attended. It is, therefore, unfair to reject the results when ample opportunities were provided to debate the technical details of the study.
Finally, what I believe is at the core of this statement is an unhappiness with how the report has been used, rather than with its technical merits. And this is where I agree with my colleagues. The results were used to support a drastic reduction in funding for the 21st Century Community Learning Centers before the study’s results were available for scrutiny and debate by other researchers.
The new mantra of educational research is “scientific rigor,” which is a laudable goal. But the scientific process requires more than just well-designed research studies. It also demands informed debate, and often replication, before important decisions are made using the results of a single study. This should be the lesson we all learn from this experience.
Michael Puma
President
Chesapeake Research Associates LLC
Annapolis, Md.
Gauge the Impact Of Top Teachers
To the Editor:
I read with great interest your recent article on racial gaps in the perception of educational achievement (“Racial Gaps Found to Persist in Public’s Opinion of Schools,” May 21, 2003). You highlight the progress made in North Carolina in closing the achievement gap between minority (African-American and Latino) and white students. Could the fact that North Carolina has the most national-board-certified teachers in the nation play a role here? Of the 23,936 teachers certified by the National Board for Professional Teaching Standards, 5,121 are from North Carolina.
Preliminary research from Lloyd Bond and others has indicated that the students of nationally certified teachers achieve at higher levels than the students of teachers who have not gained such certification. Could there be a link then between the number of board-certified teachers, the ratio of these teachers to student enrollment, and student achievement? It is certainly worth investigating.
My co-researcher Greg Vallone and I are completing a two-year study through the University of California, Los Angeles, in which the number of teachers in a low-performing district who participate in the national-board-certification process has been increased sevenfold. While it is too early to know the full relationship of this development to student gains, we will track the progress of the students of these national-board candidates.
Adrienne Mack-Kirschner
National-Board-Certified Teacher
Los Angeles, Calif.
Virginians Debate Standards, Exam
To the Editor:
I read with amusement and disbelief the letter "Virginia’s Test Gives No Aid to At-Risk," published in your May 21, 2003, issue, from Mickey VanDerwerker, a member of the Bedford County, Va., school board and an education consultant. The letter alleged that Virginia’s continued strong gains in its Standards of Learning program are not supported by other student-achievement indicators, such as national test results.
Ms. VanDerwerker has worked hard to gain media attention as someone with seemingly no professional ties to the public education system. She frequently criticizes Virginia’s SOL program. ("As Scores Rise, Virginia Ponders Future of Accountability," Jan. 15, 2003.) Unfortunately, the allegations in her recent letter are either gross distortions of the data or simply wrong. Here are the facts.
On the SAT I, since Virginia’s SOL program began in 1995, our students’ verbal scores are up 6 points and math scores are up 12 points, for a combined gain of 18 points. Ms. VanDerwerker’s claim in her letter that Virginia’s participation rate has declined is simply wrong. The facts show that the participation rate in the most recent year (2002) was 68 percent, more than 20 percentage points above the national average and higher—not lower—than Virginia’s participation rate a decade ago or in 1995. The state’s relatively high participation rate makes our students’ significant SAT score gains even more noteworthy.
On the National Assessment of Educational Progress, on Virginia’s most recent NAEP math test, our 4th graders made the second-highest gain in the nation, and our 8th graders made the third-highest gain.
On the Stanford Achievement Test-9th Edition, over the five years of testing since the SOL program began, Virginia students have shown gains at every grade level tested (grades 4, 6, and 9) in every academic-skills category tested (reading, writing, math).
Interestingly, Ms. VanDerwerker chooses to ignore the situation prior to the launch of Virginia’s SOL program in 1995, even though she was a professional educator during this period. She ignores the pre-SOL period because it represents such a stark contrast to the gains post-SOL.
Virginia’s students suffered one of the nation’s worst declines on the NAEP reading test in 1994. During the decade prior to the launch of the SOL effort, on Virginia’s own state Literacy Passport Test, one out of three children failed to pass in any given year, and there was no improvement whatsoever over 10 years’ time. Combined SAT scores gained little from 1991 to 1995.
Knowledgeable educators well know that test scores will fluctuate in any given year, and no doubt Virginia’s will continue to fluctuate on various tests in the future, up some years, down in others. What is important is the overall direction that we can discern from looking at results from numerous tests over numerous years. The data, when looked at fairly and without ideological bias, clearly show that since the launch of Virginia’s SOL program eight years ago, Virginia’s students are learning more and performing better academically as indicated by a variety of state and national assessments.
With the exception of Ms. VanDerwerker, many early critics of Virginia’s SOL program have changed their opinions about the program’s positive impact on student achievement. For example, your Jan. 15 article quotes James McMillan, an education professor at Virginia Commonwealth University (and also an early critic of the SOL program) as saying: “The test scores have risen dramatically and have continued to rise more than I thought they would. I’ve seen evidence that particularly in low-performing schools, there have been real turnarounds.”
Virginia’s success in this regard is due to the tireless dedication and hard work of many teachers and school administrators. It is sad to see the results of their efforts distorted and discredited as Ms. VanDerwerker does in her letter.
Kirk T. Schroder
President (1998-2002)
Virginia Board of Education
Richmond, Va.
Why Waste Funds On a ‘Spin’ Office?
To the Editor:
I was shocked and perplexed by your story on the U.S. Department of Education’s expensive new public relations office created to promote the department’s implementation of the “No Child Left Behind” Act of 2001 (“Ed. Dept. Invests $500,000 in Team to Tout Its Agenda,” May 28, 2003).
It is particularly disturbing that the department so readily dishes out more than half a million dollars for a PR staff of politically connected individuals while simultaneously proposing $50 million in cuts from the 2004 allocation for the Safe and Drug-Free Schools Program.
I strongly support the president and his administration’s efforts to protect our nation from terrorism. But it is unfortunate that the sustained funding provided elsewhere is not matched at a comparable level for protecting our schools. And it is especially alarming that the school safety coffers are being robbed of critical funds while, at the same time, the Education Department gives birth to a "spin" office whose need is questionable.
While the department’s goal may be to leave no child behind, it would seem that officials there are increasingly leaving common sense and good fiscal practices behind. The confidence of many in the education community is quickly being left behind as well.
Kenneth S. Trump
President
National School Safety and Security Services
Cleveland, Ohio
Report on Reading Was ‘Bad Science’
To the Editor:
In your article on the recent reanalysis of the National Reading Panel’s report (“Analysis Calls Phonics Findings Into Question,” May 21, 2003), Linnea Ehri praises the work by Gregory Camilli and colleagues because Mr. Camilli uses, in Professor Ehri’s words, “science and evidence rather than rhetoric to conduct his critique of the NRP phonics report, unlike other critics.”
The article goes on to say that previous "harsh criticism" of the report stemmed from "its narrow focus on quantitative research—studies that have measurable results, are replicable, and have undergone peer review."
This is false in the case of our critiques, as they were based on the NRP report’s substantive errors and omissions, and in fact avoided the qualitative-quantitative debate. Our critiques met the National Reading Panel on its own empirical turf and identified the numerous deficiencies and misrepresentations in the science to which it staked a claim. We have argued that the NRP erred in its analysis and reporting of studies, omitted studies, ignored major issues in the field, and violated basic principles in appraising experimental research. Despite its claims of being “scientific,” the National Reading Panel report was simply bad science.
Gerald Coles
Ithaca, N.Y.
Elaine Garan
Professor
California State University-Fresno
Fresno, Calif.
Stephen Krashen
Professor Emeritus
University of Southern California
Los Angeles, Calif.
Mr. Coles is the author of Reading the Naked Truth: Literacy, Legislation & Lies (Heinemann, 2003). Ms. Garan’s article “Beyond the Smoke and Mirrors: A Critique of the National Reading Panel Report on Phonics” appeared in the March 2001 issue of Phi Delta Kappan. Mr. Krashen’s paper “False Claims About Phonemic Awareness, Phonics, Skills vs. Whole Language, and Recreational Reading” is available online at www.nochildleft.com.
Noting the Pitfalls In Online Testing
To the Editor:
Regarding your special issue "Pencils Down: Technology’s Answer to Testing" (Technology Counts 2003, May 8, 2003): I had the experience this past fall of watching my students take the Edutest online assessments (from Lightspan). The design of the math and verbal tests was, in my estimation, very poor. Many problems and passages did not fit the 14-inch monitors we had in our lab, so the students had to scroll up and down to read the entire math problem or reading passage. If the multiple-choice questions required interpreting a graph, the students had to scroll up to the diagram.
It was also quite easy for a student to glance at another student’s screen to see the answer that had been selected. Therefore, cheating is a major concern in this kind of environment. The instant or immediate analysis that is available once the students finish the exam is very helpful, but again, I would use caution in making instructional decisions based on data from a poorly designed test and an environment in which cheating has been facilitated.
Susan Eastman
John Eaton Elementary School
Washington, D.C.
Teaching’s ‘Trilemma’
To the Editor:
Regarding the recent essay by Vivian Troen and Katherine C. Boles (“The ‘Trilemma’ Dysfunction,” Commentary, May 14, 2003): Readers wishing for more details on the “Millennium School” these authors prescribe as a solution to the problems they find in elementary schools should know that the Franklin School in Lexington, Mass., was organized in such a fashion in September of 1957.
Francis Keppel, then the dean of the Harvard University graduate school of education, conceived the team-teaching format for exactly the same reasons that Ms. Troen and Ms. Boles are advocating this structure: to improve the effectiveness of the teacher, to provide career opportunities for teachers, and to improve the educational conditions so that capable teachers remain in the profession.
The Lexington Team Teaching Project was implemented under the direction of Robert H. Anderson, then a Harvard professor of education; Ethel Bears was the Franklin School principal, and I became the principal in 1972 when she retired. The Estabrook School, the first facility designed to accommodate team teaching, was opened in Lexington in 1961.
These Lexington team-teaching schools flourished as long as the superintendent and the principals assigned to them cared to support and facilitate the teaching staff in its collaborative work. The Franklin School and other older schools were sold as the student population declined in the 1980s.
Vivian Troen and Katherine Boles write: “How can we reinvent an occupation that has remained essentially unchanged for the past 150 years?” Team teaching requires a fundamental change in the way teachers work, a change that can lead to great teacher satisfaction, as I know from the experiences of teachers in Lexington who moved from their single-classroom schools to the Franklin School and then became vocal advocates for teaming. The literature on education reform has amply demonstrated that time, training, and support are needed to make such changes.
I find it discouraging that our educational memory is so short. Two books that both the authors and Education Week readers should read are Team Teaching, edited by Judson T. Shaplin and Henry F. Olds Jr. (especially Chapter 6), and Team Teaching in Action, by Medill Bair and Richard G. Woodward. Both books were published in 1964, and, yes, I know that was nearly 40 years ago. However, they can both be found in the library of the Harvard graduate school of education, where Katherine Boles is a faculty member.
A more recent book by Robert H. Anderson and Barbara Nelson Pavan, Nongradedness: Helping It to Happen (published by Scarecrow Education in 1993), discusses teaming and details research studies on multiage and teaming classrooms.
The power of team teaching enlivened the Franklin School then, and the need is even greater now, as educators struggle to improve the learning environment for all students.
Barbara Nelson Pavan
Emerita Professor of Educational Leadership
Temple University
Philadelphia, Pa.
To the Editor:
I read “The ‘Trilemma’ Dysfunction” with much interest. We in California are moving into a state-mandated change in credentialing that involves teacher induction (job-embedded learning) for two years. The model is based on research that has come out of the New Teacher Center in Santa Cruz, Calif. It parallels much of what Vivian Troen and Katherine C. Boles have written. I commend them for their roles in addressing much-needed changes.
Our beginning-teacher support-and-assessment team is writing to the 20 induction standards and struggling to make the kind of change in our district’s system that will provide meaningful professional development based on teacher needs. One of those standards requires “equity training.” With that in mind, it struck me that this Commentary addresses teachers as females. My only suggestion would be to replace “she” with “s/he.”
Denise Campbell
Hacienda Heights, Calif.