Opinion

Education Research Could Improve Schools, But Probably Won’t

By Ronald A. Wolk — June 19, 2007

In my idealistic days 25 years ago, I believed that education research would lead us to the promised land of successful schools and high student achievement.

Many folks still believe that, including the president of the United States, who insists he is determined to make education an “evidence-based field,” “a scientifically based practice.” (This despite the fact that he has long denied global warming, opposes embryonic-stem-cell research, and wants the teaching of “intelligent design” included alongside evolution in schools.)

As much as I hate to say it—and I truly hope I am wrong—I no longer believe it, and here’s why:

Research is not readily accessible—either physically or intellectually—to the potential users. Summaries of major studies appear in periodicals like Education Week, but the detailed results (usually written for other researchers in academic-speak) are usually available only in separate reports or in relatively low-circulation journals that don’t reach those who most need to know.

Even if research findings were widely available and written in clear prose that even a dimwit like me could understand, the reports would not be widely read. Most teachers are not consumers of research, nor are most principals or superintendents.

And even if educators and policymakers did read all the studies in a timely fashion, schools and education practice would not change very much, mainly because making significant changes means altering value structures, disrupting routines, and teaching old dogs new tricks.

Moreover, researchers seem to delight in neutralizing each other. That’s easier to do in social science than the physical sciences because there are so many uncontrollable variables. And the bigger the question addressed, the more vulnerable the findings.

When one study claims small classes boost student achievement, another insists they do not. One study finds social promotion harmful; another says retention hurts children more. Money matters; no it does not. Vouchers work; no they do not. And on and on.

This makes it easy for policymakers and practitioners to get off the hook, because they can always find research results to rebut those they don’t agree with. And it makes it tougher on foundations trying to decide where their grants will make the most positive difference.

When some entrepreneurial soul proposes trying something different from what we have been doing in traditional schools for a century, naysayers immediately warn that there is not enough research to justify such an experiment and remind us that “it is immoral to use other people’s children as guinea pigs.”

By some perverted logic, we are told that we do not have enough research to justify trying something, but if we do not try it, how will we ever get any data to assess whether it works?

Research rarely leads to significant change because it is often expensive to apply or is a threat to the status quo. Good professional development may really improve teaching, but it can be terribly costly. Small classes may boost student achievement, but they increase costs.

If a major study found that public charter schools were outperforming traditional public schools by a country mile, the teachers’ unions would still fight them to the death and use all of their influence in state legislatures to help snuff them out.

In rare cases where research findings are neither too costly nor too controversial, and are therefore embraced by policymakers, they are often applied so ineptly that they are ineffective—or worse, they wind up doing more harm than good.

The textbook example in recent years is the proposal of then-Gov. Gray Davis of California to extend the limited class-size-reduction measure enacted by his predecessor, former Gov. Pete Wilson, to cover all students.

I have often tried to picture how the governor and his aides reached that decision. The only uncynical explanation I can come up with is that they must have been smoking something. Was there nobody in the room who raised crucial questions such as whether there were enough teachers or classrooms available, or whether this was the best use of limited resources?

The federal No Child Left Behind Act is a more recent and powerful example. Based to a fair degree on research and conventional wisdom, the law’s good intentions have been undermined by its heavy-handed implementation.

I find much education research suspect because it depends so heavily on the flawed measure of standardized-test scores. In most of the important studies I have seen over the years, the findings rest solely on student test scores. The limitations of the metric devalue the findings.

I have listened to the liturgy of psychometricians enough to understand why researchers rely so heavily on test results. But scores on standardized tests are not a true or reliable measure of student learning. They do not measure many of the things we hope schooling will produce in children, like good habits of mind and behavior, and they do not measure Howard Gardner’s other “intelligences,” like artistic talent, kinesthetic ability, and social skills.

Finally, efforts to apply research findings are not likely to produce the desired outcomes because the educational system, like a combustion engine, will not work efficiently if any of its critical parts are broken. Most would agree, for example, that schools will not succeed without good teachers. But you need good salaries, good working conditions, and radically improved teacher-preparation programs to attract smart students and produce good teachers. You cannot get those conditions, however, without having adequate resources, altering practices in higher education, and making basic changes in the structure and operations of schools. In short, the broken components of the system have to be addressed simultaneously.

Deborah J. Stipek, the dean of Stanford University’s graduate school of education, published an essay on education research in these pages several years ago that made some of the points I make here. (“‘Scientifically Based Practice’,” March 23, 2005.) But one statement in her essay boggled my mind. She wrote: “[B]asing decisions on research and data is a new concept. Both the desire to consult research and the skills to interpret it will need to be developed within the teaching community.”

If the dean is correct, and she probably is, one wonders what educators and teacher-preparation programs have been doing for the past century.

It is easier to criticize than to offer remedies, but Dean Stipek’s comment suggests at least one: Researchers could do more to create an audience for their work. The people who conduct education research and follow it are often the same people who prepare teachers in education schools and departments. What better context for preparing teachers than the most important and timely research on the field they are about to enter? What better opportunity to cultivate in aspiring teachers an interest in research?

Another improvement might be more emphasis on longitudinal studies. These are expensive and time-consuming, but they also can be powerful. Researchers are still feasting off data from the National Education Longitudinal Study (NELS) and the High/Scope Perry Preschool study. Wouldn’t it be helpful to have data on what has happened to the graduates of alternative schools during the past 20 years, and to follow the graduates of charter schools for the next 20 years, instead of relying on standardized-test scores that are usually incompatible with these schools’ educational philosophies and methods?

In the mid-1990s, I was a member of the National Research Council committee that produced SERP—the Strategic Education Research Partnership program. It was an attempt to deal with education’s systemic challenge. Could we identify the highest-priority questions, those whose answers would lead to better schools and improved learning, and get the education and policy community to agree? Could a carefully constructed program of strategic research priorities lead to an integrated assault on education’s systemic problems? Could government and foundations be persuaded to provide long-term funding for such an effort?

If those questions were ever to be answered affirmatively, maybe education research could improve education. Maybe, if there were more of a consensus in the research community, there would be more positive outcomes, both in legislatures and in schools.

A version of this article appeared in the June 20, 2007 edition of Education Week as Education Research Could Improve Schools, But Probably Won’t
