Polls indicate that Americans today are more ideologically and politically divided than they have been for many decades. Yet, people also express a desire for elected officials to work together to solve the nation’s problems.
Greater cooperation across political and ideological lines is badly needed in education to jump-start the long-overdue and anxiously awaited revisions to the No Child Left Behind Act and to clear the logjam of other major federal education laws that have expired or will soon expire. As they deal with this problem, legislators might take a lesson from an extraordinary example of cooperation among education experts with different viewpoints that took place in Washington over a five-year period—and on no less controversial a topic than NCLB.
Here’s what happened. In 2005, when I was the president of the Center on Education Policy, or CEP, I came to believe that policy discussions were concentrating too much on how NCLB was being carried out and not enough on whether it was meeting its goal of increasing student achievement, especially for historically low-performing groups of students. I decided that the center should do a study of student achievement since the passage of NCLB, which was signed into law in January 2002. For guidance on how to design such a study, I consulted with top assessment experts, particularly Robert L. Linn from the University of Colorado at Boulder and W. James Popham from the University of California, Los Angeles. They advised us to look at test data, as there was no other uniform source of information on learning. That meant looking at results from both the National Assessment of Educational Progress, or NAEP, and the state assessments required by No Child Left Behind.
Since this was obviously a sensitive issue that also raised technical questions, it seemed prudent to put together an advisory group of people with different opinions about NCLB. Otherwise, the study conclusions would have little credibility.
Linn and Popham agreed to serve on the panel. We then filled out the group with Eric A. Hanushek from the conservative Hoover Institution at Stanford University; Frederick M. Hess from the conservative American Enterprise Institute; and Laura Hamilton from the nonpartisan, research-based RAND Corp. I joined the panel after having served for many years as the chief education expert for the Democrats on the education committee in the U.S. House of Representatives. This mixture gave us expertise in testing, policy, and research, while also bringing in political diversity.
The group met with the CEP’s in-house experts, Naomi Chudowsky, Diane Stark Rentner, and Nancy Kober, and agreed on a three-step process to ensure the objectivity of the study’s findings.
First, the CEP would ask each state to provide detailed test data and other relevant information and to verify the accuracy of the data collected. Comparable NAEP data would also be analyzed. The Human Resources Research Organization, which has extensive experience analyzing test data and conducting other types of program and policy analysis, helped us with the work. Second, the panel and members of the CEP staff would establish consistent rules for analyzing the state data. Third, CEP staff members would write draft reports with guidance from the expert panel, and would submit the reports to the panel for review and comments.
The panel met in person multiple times a year for the first few years, and developed a sense of camaraderie. Later, members met less often, but supplemented their meetings with email consultations.
The states were wonderfully cooperative in providing data, which the CEP, in turn, made broadly available.
From 2007 to 2011, the CEP produced 17 reports using this process of data collection and analysis. President George W. Bush, important congressional leaders, and education experts cited the CEP’s achievement reports, and the media filed hundreds of stories tied to their release. In later years, the center issued several reports a year on different aspects of achievement.
The expert panel faithfully fulfilled its duties during the whole process. The members spent hundreds of hours debating the rules for analysis, guiding the staff’s work, and reviewing draft and final reports. Amazingly, although there were disagreements among the experts on the rules, analysis, and conclusions, there was always a willingness to find common ground.
What led to our success in this potentially fraught process? There were several reasons. For one, the experts were all knowledgeable and professional; they came to respect one another even when they disagreed on political or ideological matters.
In addition, it was important to establish objective rules for analysis before the data were collected. Doing so meant that everyone agreed to let the chips fall where they may once the rules were applied.
Lastly, the openness of the process was essential: each panel member read the draft reports at various stages and debated the accuracy of the conclusions.
While this type of cooperation among people of varying beliefs is not unique, it is becoming rarer because of the divisions we find in society today. I see three lessons in the CEP’s experience that can help us have more, rather than less, cooperation. These points echo my 27 years of experience in Congress seeking bipartisan cooperation in legislating.
• First, we must respect other people’s opinions. No one has a monopoly on the truth.
• Second, we must strive for cooperation and give assurances that objectivity will prevail over personal beliefs.
• Third, the process must be completely open, so that every participant sees the data, the rules for analysis, and the draft conclusions.
A democracy cannot function well without cooperation and compromise. Neither can there be agreement on the best policies for our nation’s schools unless we try to be objective, open, and respectful. I hope the CEP’s experience will offer a road map for others.