States must evaluate the effectiveness of the free tutoring being provided to children under the federal No Child Left Behind Act. But a lack of resources might force them to compromise on the rigor of those evaluations, a new report says.
The policy brief outlines factors that states might want to consider in deciding whether providers of “supplemental educational services” are fulfilling their promises to raise student achievement. It acknowledges that states “may face a trade-off” between wanting rigorous evaluations and having the time, money, and staff to conduct them.
Released last week by the Supplemental Educational Services Quality Center, a federally funded project of the American Institutes for Research, based in Washington, the paper, “Evaluating Supplemental Service Providers: Suggested Strategies for States,” aims to guide states through the thorny process of designing such evaluations. It is available from tutorsforkids.org.
The advice comes as states wrestle with the issue: the 3-year-old federal law requires them to evaluate tutoring providers after two years and to remove from their lists of approved providers any that are not improving student achievement. Providers themselves also are debating the fairest ways for states to assess their track records.
Inaccurate appraisals risk allowing ineffective companies to use public money to work with struggling youngsters, or unjustly excluding providers from a state marketplace when they have fulfilled their promises.
The stakes will only rise, observers note, as more schools that have failed to make adequate academic progress three years in a row must offer tutoring to children from low-income families.
The number of students receiving such tutoring more than doubled between the 2002-03 and 2003-04 school years. Yet, on average, only 11 percent of eligible students are receiving it. (“NCLB Choice Option Going Untapped, But Tutoring Picking Up,” March 16, 2005.)
Not There Yet
By anecdotal accounts, most states are not far along in designing evaluations. A recent informal inquiry by the American Federation of Teachers found only eight states ready to roll. Even as states venture forth, however, many experts question their capacity to design and implement high-quality evaluations.
Ayeola Fortune, the director of extended learning and development opportunities for the Council of Chief State School Officers, in Washington, said most states are not yet confident that they can accurately measure the quality of tutoring services. Few states have the infrastructure or the staff to monitor providers or to gather enough data, she said.
“Some of the more rigorous types of evaluation are going to be prohibitive for a lot of states, given what the capacity level is,” Ms. Fortune said. “But states are motivated to find an answer.”
Michael D. Casserly, the executive director of the Council of the Great City Schools, a Washington advocacy group for urban districts, said districts would welcome “more aggressive” state scrutiny of tutoring providers because there is no proof now that outside providers are serving students any better than the districts themselves.
The crucial issue, to many educators, is whether evaluations will be able to isolate and measure the effect of the tutoring.
“Even with the most burdensome and useful [evaluation], it doesn’t really get to the issue of what is the value added of these providers,” said Beth Antunez, an assistant director of educational issues for the AFT.
Steven M. Ross, the director of the Center for Research in Educational Policy at the University of Memphis, who wrote the policy brief in conjunction with the Supplemental Educational Services Quality Center, acknowledged that, even using a control group, “the best we have is suggestive evidence” that the extra help explains gains in achievement.
Some tutoring providers have suggested forming an accreditation system to evaluate their performance. Jeffrey H. Cohen, the president of Baltimore-based Catapult Learning, said many providers are wary that states will measure programs with varying goals by the same yardstick, yielding an unfair result.
Providers also worry that if states adopt a common test, programs’ content will evolve to reflect that test, driving diversity from the marketplace, he said.

Data Gathering
By many accounts, Louisiana is among the states furthest along in building a data-gathering foundation for evaluations of supplemental educational services. The state has developed a data system for all of its after-school programs that can communicate with its data system for all other student information, said Donna Nola-Ganey, the assistant superintendent in the state education department’s office of school and community support.
Providers enter data on enrollment, daily attendance, and test scores along with their invoices for payment. By cross-referencing those records with other state data, including which students receive tutoring or other services, Louisiana will be able to compare the state test scores of children in tutoring programs with those of children who are not, Ms. Nola-Ganey said.
The state will also add parent and teacher comments to its evaluation, she said.
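To illustrate the kind of comparison such a system would enable, here is a minimal sketch in Python using the pandas library; the file names, column names, and matching on a shared student identifier are assumptions made for illustration, not details of Louisiana’s actual system.

```python
import pandas as pd

# Provider-entered records submitted with invoices: enrollment, daily
# attendance, and test scores. All file and column names are hypothetical.
tutoring = pd.read_csv("provider_records.csv")   # student_id, provider, hours
state = pd.read_csv("state_assessments.csv")     # student_id, score_2004, score_2005

# Cross-reference the two systems on a shared student identifier.
merged = state.merge(tutoring[["student_id"]].drop_duplicates(),
                     on="student_id", how="left", indicator=True)
merged["tutored"] = merged["_merge"] == "both"
merged["gain"] = merged["score_2005"] - merged["score_2004"]

# Compare average score gains of tutored vs. non-tutored children.
print(merged.groupby("tutored")["gain"].agg(["mean", "count"]))
```

In practice, a state would run such a comparison inside its own data warehouse, but the underlying operation, a join of provider records against statewide assessment records, is the same.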
The paper suggests that states evaluate providers in three areas: effectiveness, including measures of student achievement; customer satisfaction, including input from parents, teachers, and school staff members; and service delivery, which asks whether providers offered what they promised in such areas as instructor qualifications, number of tutoring hours, and group size.
The quality center encourages states and districts to collaborate in obtaining and sharing the information needed for evaluations. It also suggests that each state consider creating a database into which all the tutoring-related data could be entered.
But the brief acknowledges that some data might not be readily available from schools or districts, and that some kinds of information-gathering are time-consuming or expensive, such as hiring translators to interview parents who don’t speak English. It also cautions that bias could color the inquiry, such as when teachers serve as instructors in tutoring programs or when districts themselves provide the programs.
States could use state-mandated tests to measure gains if yearly results were available, but couldn’t attribute any gains to tutoring unless they examined results for a control group of children who didn’t receive tutoring, the paper notes.
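As a rough illustration of the control-group logic the paper describes, the sketch below, in Python, compares year-over-year score gains for tutored students against a comparison group; the function name and the use of a Welch t-test are illustrative choices, not methods prescribed by the brief.

```python
from typing import Sequence
from scipy import stats

def estimated_tutoring_effect(tutored_gains: Sequence[float],
                              control_gains: Sequence[float]) -> tuple[float, float]:
    """Difference in mean test-score gains between tutored and control students.

    The tutored group's raw gain alone says nothing about the program;
    only the gap against a comparable control group is even suggestive.
    """
    effect = (sum(tutored_gains) / len(tutored_gains)
              - sum(control_gains) / len(control_gains))
    # Welch's t-test: does the gap look larger than chance would explain?
    _, p_value = stats.ttest_ind(tutored_gains, control_gains, equal_var=False)
    return effect, p_value
```

Even then, as Mr. Ross notes above, such a comparison yields only suggestive evidence unless students end up in the two groups in a way that rules out other differences between them.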
Use of providers’ own before-and-after tests could save time and money, but would risk bias because such tests are designed to measure the providers’ own programs and are more likely to show gains, it adds.