
Classroom Q&A

With Larry Ferlazzo

In this EdWeek blog, an experiment in knowledge-gathering, Ferlazzo will address readers’ questions on classroom management, ELL instruction, lesson planning, and other issues facing teachers. Send your questions to lferlazzo@epe.org. Read more from this blog.


How Teachers Can Judge the Credibility of Research

By Larry Ferlazzo — March 21, 2025

We teachers are bombarded with “research-backed” this or “evidence-supported” that.

Maybe we have the time to read it and maybe we don’t.

But what are the questions we should be asking about it?

Today’s post is the second in a three-part series (see Part One here) offering a checklist that teachers can use to judge the credibility of the research behind the actions we are being told we should take in the classroom.

Don’t Waste Teachers’ Time

Cara Jackson currently serves as the president of the Association for Education Finance & Policy. She previously taught in the New York City public schools and conducted program evaluations for the Montgomery County public schools in Maryland.

Educators are often skeptical of claims that a specific program can improve student outcomes. Research evidence could help assess these claims, enabling school districts to make better spending decisions. Yet, educators’ use of research to make purchasing decisions is limited. Even when educators want to use evidence, they have limited time, and companies selling the programs are unlikely to be forthcoming about the limitations of studies that demonstrate effectiveness.

Educators should ask whether the study is designed in a way that allows us to say whether a program caused changes in student outcomes. You’ve probably heard the phrase “correlation isn’t causation,” but what does that really mean?

Correlational studies measure the relationship between two things, but the correlation could be explained by something other than the two things of interest. For example, teachers might see that students who turn in more homework assignments tend to have higher test scores. Homework could cause higher test scores, but it could also be that students who are more motivated are both more likely to do their homework and to do well on tests.
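To make that concrete, here is a minimal simulation (my own illustration with made-up numbers, not something from the studies discussed here) in which a hidden trait, call it motivation, drives both homework completion and test scores. Homework has no effect on scores at all in this toy world, yet the two end up strongly correlated:

```python
# Illustrative simulation: a hidden confounder (motivation) produces a strong
# homework-score correlation even though homework has zero causal effect here.
import random

random.seed(42)

pairs = []
for _ in range(1000):
    motivation = random.gauss(0, 1)                      # unobserved trait
    homework = 5 + 2 * motivation + random.gauss(0, 1)   # motivated students do more homework
    score = 70 + 8 * motivation + random.gauss(0, 3)     # motivation, not homework, drives scores
    pairs.append((homework, score))

# Pearson correlation between homework and score
n = len(pairs)
mean_h = sum(h for h, _ in pairs) / n
mean_s = sum(s for _, s in pairs) / n
cov = sum((h - mean_h) * (s - mean_s) for h, s in pairs) / n
sd_h = (sum((h - mean_h) ** 2 for h, _ in pairs) / n) ** 0.5
sd_s = (sum((s - mean_s) ** 2 for _, s in pairs) / n) ** 0.5
print(f"correlation: {cov / (sd_h * sd_s):.2f}")         # roughly 0.8, with no causation at all
```

Assigning extra homework in this simulation would not move a single score, but a correlational study of the data would find a strong positive relationship.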

A key question to ask is, “Can we rule out all plausible alternative explanations?” To begin ruling out rival explanations, I would look for a control or comparison group and whether the study accounts for prior performance.


Does the study have a control or comparison group?

If a study only has data from one group of students who took part in a program, changes in outcomes could be explained by many factors other than the program. Imagine that a group of 1st graders receives a reading intervention and the students’ test scores rise between fall and spring. It’s possible that test scores improved because students matured, became more familiar with the test, or read at home. We need to compare them with another group of students to rule out those possibilities.

The difference between control and comparison groups is a critical one. With a control group, researchers randomly assign who takes part in the program, or they rely on a natural experiment, such as a lottery, in which people are randomly selected to take part. With a comparison group, researchers compare students who receive the program with a group of similar students. The goal in either case is two groups that are alike (as seen in the fruit baskets below).

[Figure by Cara Jackson: fruit baskets illustrating two similar groups.]
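Random assignment is easy to picture in code. Here is a minimal sketch, using a hypothetical roster rather than data from any study mentioned here: shuffle the roster and split it in half, and the two groups can differ only by chance.

```python
# Illustrative sketch of random assignment with a hypothetical roster.
import random

random.seed(1)
roster = [f"student_{i}" for i in range(100)]
random.shuffle(roster)

program_group = roster[:50]   # randomly chosen to receive the program
control_group = roster[50:]   # randomly chosen not to; the counterfactual

# A comparison group, by contrast, is assembled after the fact from similar
# students who simply did not get the program, so hidden differences
# (such as motivation) can remain between the groups.
```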

Does the study account for group differences?

Students in the two groups may be slightly different, and those differences could explain the outcomes. Group differences are particularly important to consider when using comparison rather than control groups, since students in the group taking part in the program may have sought out a learning opportunity.

For example, students who opt into voucher programs, honors or Advanced Placement courses, or gifted programs may have higher motivation or prior performance than the students who make up the comparison group. Accounting for prior performance helps address the concern that the findings reflect preexisting group differences.

Even if the researchers randomly assigned students, groups could be different by chance. Also, it’s not unusual for some students to have missing data, and the groups of students with complete data might be less similar than the original sample. Accounting for prior performance can help address these concerns, though we might still be concerned about differences between groups that are hard to observe.
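A quick worked example, with made-up numbers, shows why accounting for prior performance matters. If the program group started the year ahead, comparing raw spring scores overstates the program’s effect; comparing fall-to-spring gains comes much closer to isolating it:

```python
# Hypothetical (fall, spring) test scores for two groups of students.
program = [(82, 90), (78, 85), (88, 95)]      # started the year ahead
comparison = [(70, 78), (65, 72), (75, 84)]

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison of spring scores mixes the program's effect with the head start.
raw_gap = mean([s for _, s in program]) - mean([s for _, s in comparison])

# Comparing fall-to-spring gains accounts for where each group started.
gain_gap = (mean([s - f for f, s in program])
            - mean([s - f for f, s in comparison]))

print(f"spring-score gap: {raw_gap:.1f}")  # 12.0 points: mostly the head start
print(f"gain gap: {gain_gap:.1f}")         # -0.7 points: the program added nothing here
```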

Example: parent involvement

To illustrate the difference between correlation and causation, let’s take a look at studies of parent involvement. A New York Times article claimed parent involvement was overrated, while a CNN article countered with examples of studies of initiatives to increase parent involvement that improved student outcomes. So, which is right?

The authors of the “parent involvement is overrated” article analyzed longitudinal surveys with nationally representative samples of over 25,000 students in elementary, middle, and high school. They examined 63 different measures of parent involvement, including communicating with teachers or administrators, observing classrooms, helping to pick classes, homework support, and volunteering at school. They concluded that most parental involvement fails to improve student outcomes.

The authors of the counterpoint article cite 10 studies in which participants were randomly assigned to an intervention to increase parent involvement. For example, in a study of 10,000 middle and high school students in West Virginia, researchers randomly assigned parents to receive, or not receive, weekly automated alerts about their child’s missed assignments, grades, and class absences. They found that the alerts reduced course failures by 27 percent, increased class attendance by 12 percent, and increased student retention (Bergman & Chan, 2021).

Why were the conclusions different?

The first study, which analyzed longitudinal survey data, is a correlational study that did not involve assigning parents or students to different groups. It’s possible that parent involvement does have a negative impact on student achievement—if, for example, parents misunderstood the subject they were trying to help with or if their involvement caused their child severe stress. But correlational studies cannot rule out alternative explanations for the relationship between parent involvement and student achievement.

One alternative explanation has to do with parents self-selecting into different levels of involvement. Parents might become more involved when they sense their child needs help and less involved when their child is doing well. While the authors control for prior student achievement, such controls might not account for all differences between students whose parents are more or less involved.

Student characteristics such as motivation, or social and behavioral issues, might have prompted greater parental involvement. If so, the negative correlation between parent involvement and student outcomes could exist only because parents get more involved when their children are struggling in ways that cause poor achievement.

In contrast, studies in the counterpoint article can rule out self-selection as an explanation for their findings. Parents did not self-select into receiving the weekly alerts; rather, the researchers randomly decided who did and did not receive alerts and then compared the two groups. As a result, we can be reasonably confident that the weekly alerts caused the reduced course failures, increased class attendance, and increased student retention. For educators interested in improving such outcomes, the study demonstrates the effectiveness of a specific practice for middle and high school students.

Implications for educational leaders

We should not waste instructional time or squander teachers’ goodwill by spending time and money on programs that don’t help anyone but the companies selling them. This requires education leaders to know the difference between correlation and causation. When examining evidence, ask:

  • Does this study include a control or comparison group?
  • Does it account for differences between groups that could explain away the findings?
  • How confident am I that the study rules out other explanations for the findings?

In the next post, I’ll move on to another question: how big of a causal effect can we reasonably expect of an educational program?


Thanks to Cara for contributing her thoughts!

Consider contributing a question to be answered in a future post. You can send one to me at lferlazzo@epe.org. When you send it in, let me know if I can use your real name if it’s selected or if you’d prefer to remain anonymous and have a pseudonym in mind.

You can also contact me on Twitter at @Larryferlazzo.

Just a reminder: you can subscribe and receive updates from this blog via email. And if you missed any of the highlights from the first 13 years of this blog, you can see a categorized list here.


The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
