With the “No Child Left Behind” Act of 2001 emphasizing rigorous research, calls for more randomized trials in education studies have gained new momentum in Washington policy circles.
Long before “rigorous research” and “randomized trials” became buzzwords in the field, however, the federal Department of Education quietly began setting aside more and more of its research dollars for experimental studies. According to the department, at least 16 such studies are in the works, nearing completion, or about to be funded over the next two fiscal years. That’s a big increase over previous years, the department’s research chief says.
“It’s very clear in No Child Left Behind that questions of what works in education will have high priority,” said Grover J. “Russ” Whitehurst, the director of the new Institute of Education Sciences, which oversees much of the research supported by the department. “Questions of what works link naturally to randomized trials.”
Common in medical, pharmacological, and welfare-reform research, such studies entail randomly assigning subjects to either an experimental group or a comparison group. They remain rarer, and somewhat controversial, in education.
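To make the mechanics concrete, here is a minimal sketch in Python of what random assignment involves; the roster, group sizes, and seed are purely hypothetical and are not drawn from any of the studies discussed here.

```python
import random

# Hypothetical roster of study participants (illustrative labels only).
subjects = [f"student_{i}" for i in range(1, 21)]

# Randomly assign each subject to the experimental or comparison group.
random.seed(42)               # fixed seed only so the split is reproducible
random.shuffle(subjects)
midpoint = len(subjects) // 2
experimental_group = subjects[:midpoint]
comparison_group = subjects[midpoint:]

print("Experimental:", experimental_group)
print("Comparison:  ", comparison_group)
```

Because chance alone decides who lands in each group, any systematic difference in outcomes can be attributed to the intervention rather than to who chose, or was chosen, to receive it.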
For instance, of the 1,200 articles on mathematics and science education that were published between 1964 and 1998 in American Educational Research Journal, only 35 involved randomized trials, according to one study.
The apparent reluctance to use the methodology in education stems in part from concerns that such experiments can be expensive, unwieldy, and, in some cases, unethical. In addition, some experts contend, such studies often offer little help in understanding why an intervention works or doesn’t work.
“Good evidence is only as good as the theory that interprets it,” said E.D. Hirsch Jr., the University of Virginia professor who created the Core Knowledge school improvement program. Even though a well-respected, randomized study showed that reducing class sizes improved student achievement in Tennessee, he noted, efforts to do the same in California were not as successful.
“So what good did randomization do in this case?” Mr. Hirsch said.
Shrinking Pot?
The fear, other experts say, is that the Education Department’s new emphasis on randomized experimentation could shrink the pot of money available for studies using matched comparison groups, for descriptive studies, and for basic research.
“I don’t think that is likely,” said Mr. Whitehurst, who was the department’s assistant secretary for research and improvement before the creation of the new institute this fall. “By and large, these randomized trials will be funded out of evaluation and national-activities money, and that’s money that’s not previously been available to the research community.”
What’s more, he said, “we expect the pie to get larger.” He pointed out, for example, that President Bush’s Education Department budget request for fiscal 2003 called for increasing funds for core educational research by 44 percent, or roughly $46 million.
The experimental studies that the department expects to release soon, most of which were begun under President Clinton’s administration, gauge the effectiveness of the 21st Century Community Learning Centers after-school program, the Even Start family-literacy program, and Upward Bound, which prepares students to attend college.
Two of the ongoing, federally financed experiments are a study of the Success for All schoolwide improvement program, which is expected to cost $12 million over five years in public and private money, and a new, five-year, $30 million study examining the effectiveness of six different types of preschool programs.
In addition, the department is planning to spend $47 million over the next two fiscal years for randomized studies in 11 areas. The subjects are: early reading instruction, preschool literacy instruction, after-school programs, family literacy, alternative certification of teachers, professional development, educational technology, English-language learning, vocational education, charter schools, and adult literacy.
Mr. Whitehurst said department staff members analyzed what share of federally supported studies posing cause-and-effect questions were addressed through experimental, though not necessarily randomized, methods over the past two years. That share, he said, rose from 32 percent in fiscal 2001 to 100 percent in fiscal 2002.
Still, Mr. Whitehurst said, the department has no intention of ruling out other kinds of studies.
Experts say randomized experiments are important because they are the soundest way available to find out what works. A not-yet-published study by three researchers at Mathematica Policy Research Inc., a Washington-based group, suggests, in fact, that investigators may get answers in randomized experiments that are different from those they get when they test the same questions using quasi-experimental methods, such as demographically matched comparison groups.
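A small simulation suggests why the two designs can disagree. The data below are entirely invented and do not come from the Mathematica study: an unmeasured trait, labeled here as motivation, both raises test scores and makes enrollment more likely, so a nonrandomized comparison overstates the program’s effect while random assignment recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical unmeasured trait that raises scores and drives enrollment.
motivation = rng.normal(size=n)
true_effect = 5.0

# Quasi-experimental world: families opt in, partly based on motivation.
# Matching on observed demographics would not remove the resulting bias,
# because motivation itself is never measured.
enrolled = (motivation + rng.normal(size=n)) > 0
scores = 50 + 3 * motivation + true_effect * enrolled + rng.normal(size=n)
comparison_estimate = scores[enrolled].mean() - scores[~enrolled].mean()

# Randomized world: a coin flip decides assignment, independent of motivation.
assigned = rng.random(n) < 0.5
scores_rct = 50 + 3 * motivation + true_effect * assigned + rng.normal(size=n)
rct_estimate = scores_rct[assigned].mean() - scores_rct[~assigned].mean()

print(f"True effect:               {true_effect:.1f}")
print(f"Nonrandomized comparison:  {comparison_estimate:.1f}")  # biased upward
print(f"Randomized-trial estimate: {rct_estimate:.1f}")         # close to 5.0
```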
Some experts trace the movement at the federal level to conduct more randomized education studies to the 1998 passage of the Reading Excellence Act. That was the first call in federal education law for “scientifically based research.”
Clear Preference
The same term and its definition have since appeared in two other important federal education laws: the Education Sciences Reform Act, which revamped the Education Department’s research arm, and the No Child Left Behind Act. The latter, a reauthorization of the Elementary and Secondary Education Act, uses the term more than 100 times. (“Law Mandates Scientific Base for Research,” Jan. 30, 2002.)
While the definition does not exclude other types of education studies, it expresses a clear preference for randomized methodology.
“Getting the definition into the law became a catalyst for a much broader movement that was about to burst on the scene,” said Bob Sweet, who, as a staff member of the House Education and the Workforce Committee, helped draft that definition.
In some sense, the movement parallels one that took place in welfare reform in the 1970s and 1980s, according to Judith D. Gueron, the president of the Manpower Demonstration Research Corp., a New York City-based research organization. Since then, dozens of experiments have been conducted, leading to a major transformation in state and federal welfare programs.
“There were questions about the morality of doing this, questions about feasibility, and those questions exist in education,” Ms. Gueron said. “They have been overcome, but I think it is harder in schools. It’s going to be a harder approach that should be used carefully.”
The debate rages on in education. The National Research Council, in response to emerging concerns over what constitutes scientific research in education, published its own report on the subject last year.
This fall, in an effort to further that discussion among education researchers, the council convened a second group of prominent researchers. Among the topics on their agenda: randomized field trials and when it’s appropriate to use them.
“What we have now is an opportunity—and I’m hoping the education research community sees it as an opportunity—to respond to the call for better research and to think a little more rigorously about the relationship between the method and the question, and not to reject the idea of randomized trials,” said Michael J. Feuer, the executive director of the National Research Council’s division of behavioral and social sciences and education.
“In a way,” he said, “it’s a rather healthy development to see the policy community expressing a real strong interest in more and better education research.”