A strict definition of research inserted last month into a bill for the reauthorization of the major federal law on precollegiate education has raised strong objections from education researchers.
The $15 billion federal Elementary and Secondary Education Act is scheduled to be reauthorized later this year. The research definition is one of many issues involved in the current debates in Congress over the ESEA, and the future of the renewal legislation was uncertain last week. In an amendment to the version of the bill in the House of Representatives, scientific research is defined as “randomized experiments” that use comparison and control groups to gauge the effects of the treatment being studied.
But education researchers, whose ranks include sociologists, psychologists, anthropologists, and historians, are concerned that the definition leaves out a lot of other important work they do.
“Randomization is a powerful tool, and we should be doing more of it,” said Gerald R. Sroufe, the director of government relations for the American Educational Research Association, a Washington-based group representing 23,000 education researchers. “But it’s a very narrow part of the scientific method.”
Concerns About Quality
Research that is more descriptive, such as case studies, can give more of a ground-level view of classroom workings, experts in the field say. Such studies also provide clues for formulating hypotheses that researchers can test with larger, quantitative studies. What’s more, researchers say, the costs of randomized experiments often exceed the budgets they have to work with.
In its opposition to the new language, the AERA is being joined by a host of other Washington-based social science groups, including the Consortium of Social Science Associations and the Federation of Behavioral, Psychological, and Cognitive Sciences.
For education researchers, the controversy smacks of déjà vu. The Reading Excellence Act, a $520 million grant program passed by Congress two years ago, stipulates that grants may go only to reading programs that use “scientifically based research.” That language was widely considered a slap in the face to education researchers at the time and a feather in the cap for the National Institute of Child Health and Human Development, which uses more of a medical model in its own reading research.
Since then, lawmakers have slid the “scientifically based research” wording into several education bills. But the ESEA language, which was proposed by Rep. Bob Schaffer, a Colorado Republican, ratchets up the standards for educational research even further with its call for randomized studies.
At the center of the debate are long-standing concerns about the quality of education research. (“What Is (and Isn’t) Research?,” June 23, 1999.)
A chorus of critics in recent years has suggested that much of education research is shoddy, vulnerable to political manipulation, and too small-scale to be of any consequence. And a small contingent of prominent academics, such as Harvard University researchers Paul E. Peterson, Frederick J. Mosteller, and Tom Loveless, is suggesting that one way to improve education research is to conduct rigorous, randomized studies. The model they often point to is a wide-scale experiment on the effects of smaller classes that was launched across Tennessee in the mid-1980s.
Douglas Mesecar, a legislative analyst for Mr. Schaffer, said he was responding to that group’s call when he drafted the language added to the ESEA bill.
“If it’s something the federal government is paying for, this is a great way to really get a hard look at the services or the programs that are provided,” he said. “Whatever we’re doing for our kids, we want to make sure we get the best.”