
The Letter From: “In short, I see no problem with research becoming public with little or no review.” (Between I and II)

By Marc Dean Millot — July 16, 2008

Last week I began an essay prompted by an exchange between left-of-center eduwonkette and right-of-center Jay P. Greene over the Manhattan Institute’s release of Building on the Basics: The Impact of High-Stakes Testing on Student Proficiency in Low-Stakes Subjects by Greene, Marcus Winters, and Julie Trivitt.

Eduwonkette pointed out that the study, offered to the press by Manhattan Institute Press Officer Bridget Sweeney in a July 1 media advisory email with the attention-grabbing subject line “New Report Debunks High Stakes Testing Myth: Shows Student Gains in Science,” lacked formal peer review. She argued that, when all is said and done, this was unprofessional, and has continued to do so here and here.

Greene replied by arguing that he recognized a trade-off between getting information out early and quality control, that consumers appreciate early information and discount it appropriately, and that the marketplace of ideas would perform the necessary quality review. He has continued to offer variations on this theme on his blog here and here.

I was attracted to the topic by Greene’s statement: “In short, I see no problem with research becoming public with little or no review.” Not only was it unambiguous; in the context of the exchange with eduwonkette, it had the flavor of a dismissive put-down. In my view, simple assertions like Greene’s require better arguments than he offered.

The eduwonkette–Greene exchange provided an opportunity to explore a broader issue. The lack of peer review in k-12 education studies intended to influence policy, particularly those based on quantitative research, is a barrier to school improvement. For perhaps a century, public education has been an area of government where quantitative analysis has been largely irrelevant to the policymaking process. From the school board to the Congress, from the local central office to the federal Department of Education, quantitative research has been a weapon to advance pre-existing interests and beliefs more often than a forum for rational discussion. By aiding and abetting this culture, policy advocacy organizations like Manhattan, researchers like Greene, and the philanthropies that fund them are not acting in the public interest.

Research agendas based on a point of view – right, left, center, or orthogonal – are not an issue per se. Here is the issue: when policy advocacy organizations release the work of sympathetic academic researchers employing quantitative tools that reinforce the policy preferences of both, but dispense with peer review, the study enjoys the aura of “science” even as it departs from the scientific method.

I may be more sensitive to this environment than someone born to education policy. I spent my first career working in an area of national security where quantitative analysis bounded the politics inherent in decision making – formal guidance on the development of strategic nuclear weapons systems. When I began to work in education in the early 1990’s, helping to create markets for New American Schools Design Teams and recommending lending and equity investments at the Education Entrepreneurs Fund, I drew heavily on the “adaptive management” model that channels the politics in environmental regulation into technical fora. I believe in the power of analysis, know something of its limits, and think public education needs a better balance.

Last week’s Letter From dealt with the incredible nature of Greene’s statement. Today’s was to examine why I was not surprised to read that he has “no problem with research becoming public with little or no review.” In the interim, however, Greene advanced a number of arguments defending his position. The air needs to be cleared of these smoke screens, so I will address his arguments here and return to the original plan tomorrow.

“I am the victim.”
To set up a “straw man argument” is to take what your opponent actually said, characterize it in such a way that it becomes hyperbole, and then attack the flimsy argument you’ve assigned to your adversary. One of the reasons I do my best to quote the very words of the people I write about in edbizbuzz is that I prefer to fight fair.

Greene argues that he was victimized by a straw man I set up last week. In his words, I asserted that Greene believes “research doesn’t benefit from peer review.” Readers will have to decide that for themselves by re-reading my post. I suggest that in his initial effort to swat down eduwonkette, Greene got cocky and overstated his own position. It is hard for him now to say that he does see some problems with his research for Manhattan becoming public with little or no review. Claiming to be the aggrieved party is his best chance of walking back the cat. But in making this claim, Greene is the one setting up the straw man.

I doubt any other edbizbuzz reader inferred that last week’s Letter From was intended to address the vast spectrum of education “research.” I think a reasonable person would understand that my comments were prompted by, and addressed to, a specific situation – the release of a study offered to the media, and by implication to policymakers, with all the fanfare of research subject to peer review, but without the peer review.

In that context, I addressed Greene’s statement of having “no problem with research becoming public with little or no review.” I did not say that Greene does or does not believe in the value of peer review as a general proposition. I suggested he ought to have a problem with this specific fact-pattern because 1) post-hoc review by the market is not a reliable means of quality control, and 2) there are no compelling reasons to forego the review process.

“Everyone does it.”
Related to Greene’s straw man is this assertion:

Millot seems to want to embargo information from the public until it receives peer review. If he really believes that, then he should criticize every researcher with working papers on the web. That’s almost everyone doing serious research.

Sometime in our lives all of us have heard a parent say: “I don’t care if everyone else does it. That doesn’t make it right. If your friends stood in front of a train would you follow them?” That’s a relevant argument, and I believe it applies here, but it need not be my initial reply.

Greene’s conclusion about my position is based on his straw man argument that my Letter From must address “(all) research.” The average edbizbuzz reader can distinguish between the subject of my remarks and the “work in progress” a researcher makes accessible to colleagues via the internet or, say, the daily postings of an edublogger. My argument was that some research – the kind represented by Greene’s report – should not be published until it receives peer review.

I trust that edbizbuzz readers understand the practical difference between using the internet to: 1) improve the conversation among the cognoscenti, and 2) reach those outside that circle. The author of the work in progress isn’t sending media advisories to the press. The blogger may want to affect policy, yet no reporter is going to mistake a few paragraphs for a report subject to peer review. But when a report is offered to the media with some fanfare, it is intended to influence policy and there is the implication that it’s reliable.

Judging from a decade’s worth of reading press accounts, I feel reasonably confident asserting that journalists are bound to treat reports released without peer review by the better-known policy advocacy organizations no differently than reports released by RAND, AIR, or MDRC that are subjected to the process. As a consequence, I argue that an unreviewed report based on quantitative analysis, released after deliberate efforts to interest the media, is misleading per se.

Moreover, I find this argument of Greene’s both odd and revealing.

Odd, because it places him on the horns of a dilemma: On the one hand, Greene may agree that, on its face, his work is no more significant than the blog posting or the work in progress and therefore no more deserving of formal review. That may or may not be objectively correct, but I doubt that he, the Manhattan Institute, or his funders would be terribly happy with that finding. On the other hand, if he agrees that his work deserves more respect, he concedes the need for prior review. Revealing, because it suggests that he believes his work is different (i.e., better) yet does not require such review.

“Look at All the Good I’ve Done.”
To underline his fealty to the value of peer review, Greene pointed to his c.v. listing “two-dozen peer-reviewed publications.” That reminds me a bit of the banker accused of processing false loan documents. His attorney directs the jury to consider all the loan documents that were approved according to the rules. It’s not completely irrelevant, but it doesn’t really address the matter at hand. It’s a tactic intended to divert the jury’s attention.

“My accuser’s motives are questionable.”
Both eduwonkette and I question the motives behind a decision to release a report with all the fanfare of one subjected to peer review, but without the review. I’ll get to those motives tomorrow. In his own defense, Greene has not addressed my motives, but he has questioned those of the anonymous eduwonkette.

Who knows what motivates her, whether eduwonkette is a her, or even one person? Many edubloggers wonder out loud. Who knows what motivated her to take on Greene? It’s an interesting question, but quite irrelevant, and turning the spotlight in that direction is a classic tactic of misdirection. Eduwonkette may be driven by love, money, or politics, but the validity of her arguments on Manhattan’s release of the Greene report does not depend on knowing which, if any, of those motives is at work.

As for my motives, I’ll offer them here. Failure to follow the conventions that give us some assurance of the reliability of research offered to influence policy, especially quantitative research, undermines the value of all policy research and every policy analyst. Leaving quality control to the “marketplace of ideas” reduces quantitative analysis to a slugfest between opposing experts that generates more heat than light, and leaves everyone watching content to stick with their cherished beliefs.

Over the long haul, findings that support my policy preferences but rest on research that cannot stand up to formal peer review are unhelpful. Such work tends to cast doubt on the same findings when they are generated by a study that has passed review, and so undermines efforts to see my preferences become law or policy.

I object to Manhattan’s release of the Greene report because 1) it is misleading per se; 2) it is vulnerable to attack the moment it is released; 3) protecting against that vulnerability is relatively quick, easy, and cheap; and 4) I have little tolerance for the continuation of foolish practices that are readily amenable to repair.

Tomorrow: Back on track – why I was not surprised to read Greene’s statement: “In short, I see no problem with research becoming public with little or no review.”

The opinions expressed in edbizbuzz are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
