As he surveyed the heavens, Galileo made careful observations and challenged the prevailing hypothesis that the earth was the center of the universe. But this same scientist, so careful in his observations, also came to conclusions about the tides that were, by today’s standards, laughably wrong. That the scientific method can be both illuminating and wrong, even when practiced by a distinguished researcher, is a cautionary tale for educators, school leaders, and policymakers.
As any observer of educational policy who has not been living in a cave knows, there are now federal mandates for the use of “scientific” programs in education. I know this because I have done what few members of Congress have done: I actually read the “No Child Left Behind” Act of 2001, signed into law by President Bush in January 2002 after passing Congress by overwhelming majorities. In its formidable 1,184 pages, the law uses the term “scientific” or “scientifically” 116 times and the word “research” 246 times. The current controversy over what “scientific research” means in the context of education implies a dichotomy between certainty and sophistry that exists only in the minds of partisans who appear to revel in yet another fact-free debate on educational policy. Let us separate myth from reality:
Myth No. 1: Science grants certainty. What does scientific research really mean? Does it, as its proponents imply, provide a world in which, if we only followed the salutary models of medicine, chemistry, and physics, rational people would agree on clear and obvious solutions? Or does it give us complexity and uncertainty, with debates over the effectiveness of mammography, the sequence of elements, and the number of planets in our solar system, along with the quandaries over tides and planetary bodies confronted by Galileo? When scientific methods are applied, researchers can disagree. They can even be wrong. Even in the hard sciences, controversies abound and certainty is elusive. While educators can learn much from scientific methods, the insinuation that these methods grant certitude is, to put it charitably, a hypothesis unsupported by the evidence.
Myth No. 2: Double-blind studies, such as those used in pharmaceutical research, are the gold standard for educational research. In pharmaceutical studies, the control group receives a placebo while the experimental group receives the real drug. In an astonishing number of cases, both groups show evidence of improved health. That is, something that researchers know to be valueless demonstrates an apparent impact on patient health. As a result, researchers do not have a clean line of demarcation between success and failure, but rather some evidence that some degree of health is associated with some dosage of the experimental medicine that is less evident in the absence of that drug. Where there is a relationship between the experimental medicine and improved health, researchers note that there is an association—a statistical correlation—between the drug and the condition of the patient. They cannot draw conclusions about cause and effect until they have a detailed understanding of the physiology of the biochemical reactions caused by the medicine. Sometimes, as is the case with the origin of many cancers, correlation is all that scientists have, as the physiological evidence remains unavailable.
The most serious problem with pharmaceutical studies is that other variables are not always perfectly controlled: the condition of the patient, nutrition, attitude, exercise, diet, sleep, and a host of other personal and environmental conditions that affect the medicine's impact. Educational researchers, the presumed unscientific slugs in this debate, have not yet figured out how to control the nutrition, attitude, exercise, diet, and sleep of their research subjects, any more than they have figured out how to control the 18 hours each day spent outside of school.
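The placebo point above can be made concrete with a toy simulation: both groups improve, the drug group improves somewhat more, and the two distributions overlap heavily, so no single patient's outcome marks a clean line of demarcation. All of the numbers below are invented for illustration; they come from no real trial.

```python
# A toy simulation of the placebo effect described above.
# Improvement scores are invented: placebo patients improve somewhat
# (the placebo effect); drug patients improve somewhat more, with overlap.
import random
import statistics

random.seed(42)

placebo = [random.gauss(2.0, 1.5) for _ in range(200)]
drug = [random.gauss(3.0, 1.5) for _ in range(200)]

print(f"mean placebo improvement: {statistics.mean(placebo):.2f}")
print(f"mean drug improvement:    {statistics.mean(drug):.2f}")

# Many placebo patients out-improve the weakest drug patients, so a single
# patient's outcome cannot reveal which group that patient was in: the
# evidence is a difference of degree, an association, not a bright line.
overlap = sum(p > min(drug) for p in placebo)
print(f"placebo patients above the worst drug outcome: {overlap}")
```

Even in this idealized setting, the averages differ while individual outcomes intermingle; and, as the passage notes, the averages alone establish correlation, not the causal mechanism.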
Myth No. 3: The No Child Left Behind Act clearly defines scientific research. In fact, a reading of the plain language of the bill makes two inferences abundantly clear. First, the demand for scientifically supported programs, however pervasive, does not exist in a vacuum: throughout the bill, the same sentence links a demand for such programs with an equally strong imperative for support of a broad and academically rigorous curriculum. Second, in numerous instances, the same sentence links scientific programs with a demand for professional-development programs.
Most state standards require that 4th grade students comprehend the logic of Venn diagrams, in which students must understand that a statement can be part of one set but not necessarily represent a definition of the entire set. Participants in the debate over educational research would be well advised to rise to this standard. To put a fine point on it, the assertion that “science equals phonics” is only true with respect to the fact that some research studies support the use of phonics as part of an effective reading program. The assertion that “any program that does not include phonics is not scientific” does not meet the standard of logic we require of 4th graders.
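The subset-versus-equality logic at issue here can be sketched in a few lines of Python. The program names and sets below are hypothetical illustrations, not findings from any study:

```python
# The Venn-diagram logic discussed above: "programs some research supports"
# and "programs that include phonics" are overlapping sets, not equal ones.
research_supported = {"phonics", "guided reading", "writing workshop"}
includes_phonics = {"phonics", "basal series with phonics strand"}

# "Some research supports phonics": phonics lies in the intersection.
assert "phonics" in research_supported & includes_phonics

# But the sets are not equal, so "science equals phonics" does not follow...
assert research_supported != includes_phonics

# ...and a program outside the phonics set can still be research-supported,
# so "no phonics" does not imply "not scientific."
assert "guided reading" in research_supported - includes_phonics
```

The 4th grade standard, in other words: membership in the intersection of two sets does not make the sets identical.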
Myth No. 4: Anything bearing the label “research” is worthy of the name. One need only recall the tobacco advertisements of the 1940s in which physicians endorsed the soothing effect of cigarettes on the throat to question the relationship between authority figures and putative research conclusions. Personal opinions, distorted case studies, and flimsy observations all masquerade as “research.” Galileo’s successors in the 19th and early 20th centuries used their version of science to prove the superiority of the Scandinavian over the Italian and the rightful subordination of the African to the European. Academic journals in the early days of the 21st century allow the inconveniences of sample size and detailed disclosure of experimental methods to give way to political agendas. Rather than be defensive, educators should acknowledge these problems, just as researchers in medicine, physics, and chemistry regularly air their dirty linen and, with equal amounts of clumsiness and rigor, advance the cause of reason.
In education, the mantras of “studies show” and “research proves” are the staples of too many vacuous keynote speakers for whom a footnote is a distant memory of a high school term paper. The real researchers I know confess that their work is but a pebble on a mountain of research begun by others, list the details of their findings, welcome double-checks and criticism, and eat crow on a regular basis, firm in the conviction that transparent error is the price they pay for knowledge. Their mistakes involve more work and more risk than speculation unencumbered by evidence, and by those mistakes they simultaneously confess error and advance knowledge.
The frailties of scientific research do not render us helpless. We can formulate sound opinions and make well-reasoned decisions on the allocation of scarce resources based on the information available. Rather than asserting that we have found ultimate truth with as much conviction as Galileo had in his false conclusions about ocean tides, we can acknowledge our limitations. The best we can do is consider a variety of conflicting studies and recognize the inherent uncertainties of research. At the same time, we must challenge the “scientifically based” assertions of others, particularly when prejudgment is substituted for fact.
Congress and the president got it right, though perhaps not in the way that they intended. We do need scientifically based programs in education. But real science involves ambiguity, experimentation, and error. However distasteful that trio may be, it is far superior to political agendas, uninformed prejudice, and breathless enthusiasm for the flavor of the month.
Douglas B. Reeves is the chairman of the Center for Performance Assessment and the author of Holistic Accountability (Corwin Press, 2002) and The 20-Minute Learning Connection (Simon & Schuster, 2001). He lives near Boston.