To the Editor:
Peter DeWitt recently responded to a blog post I wrote in which I criticized the work of John Hattie (“John Hattie Isn’t Wrong. You Are Misusing His Research,” Peter DeWitt’s Finding Common Ground blog, edweek.org, June 26, 2018). DeWitt claimed that I am “misreading [Hattie’s] research.” DeWitt linked to my post, and readers can easily resolve this question for themselves.
My whole point in the post was to note that Hattie’s error lies in accepting meta-analyses at face value, without examining the nature of the underlying studies. I offered examples of the meta-analyses that Hattie included in his own meta-meta-analysis of feedback. They are full of tiny, brief lab studies, studies with no control groups, studies that fail to control for initial achievement, and studies that use measures made up by the researchers.
These examples are not cherry-picked; they are at the core of Hattie’s review, which cites only 12 meta-analyses. I looked at the individual studies making up every one of those meta-analyses that I could find with an average effect size above +0.40.
DeWitt’s critique includes a telling quote from Hattie himself, who explains that he does not need to worry about the nature or quality of the individual studies in the meta-analyses he includes in his own meta-meta-analyses, because his purpose was only to review meta-analyses, not individual studies. This makes no sense. A meta-analysis (or a meta-meta-analysis) cannot be any better than the studies it contains.
If Hattie wants to express opinions about how teachers should teach, that is his right. But if he claims that these opinions are based on evidence from meta-analyses, he has to defend these meta-analyses by showing that the individual studies that go into them meet modern standards of evidence and have bearing on actual classroom practice.
Robert E. Slavin
Director
Center for Research and Reform in Education
School of Education
Johns Hopkins University
Baltimore, Md.