Artificial Intelligence

Why Teachers Should Talk to Students Before Accusing Them of Using AI to Cheat

By Alyson Klein — February 27, 2025 3 min read

When schools first became aware that new versions of generative artificial intelligence tools could churn out surprisingly sophisticated essays or lab reports, their first and biggest fear was obvious: cheating.

Initially, some educators even responded by going back to doing things the old-fashioned way, asking students to complete assignments with pencil and paper.

But Michael Rubin, the principal of Uxbridge High School in Massachusetts, doesn’t think that approach will prepare his students to function in a world where the use of AI is expanding in nearly all sectors of the economy.


“We’ve been trying to teach students how to operate knowing that the technology is there,” Rubin said during a recent Education Week K-12 Essentials Forum about big AI questions for schools. “You might be given a car that has the capacity of going 150 miles an hour, but you don’t really drive 150 miles an hour. It’s not about the risk of getting caught, it’s about knowing how to use the technology appropriately.”

While students shouldn’t pass off writing crafted by AI tools like ChatGPT or Gemini as their own, generative AI can act as a brainstorming partner or tutor for students, particularly those who don’t have other help in completing their assignments, he said.

Rubin recalled that his daughter recently needed his assistance with a history assignment. “She has me to go to, and some kids don’t,” he said. “We do believe that the AI chatbots can sometimes be that great equalizer in terms of academic equity.”

But he added, “I did not do the work for my kid. So I want to make sure the AI chatbot isn’t doing the work for anybody else’s either.”

Rubin’s school uses a tool that helps teachers get a sense of how students composed a document they later turned in for an assignment. It allows teachers to see, for example, if a student did a lot of cutting and pasting—which could indicate that they took chunks of AI writing wholesale and passed it off as their own work.

If a teacher at Rubin’s school suspects one of their students plagiarized content from an AI tool, the teacher doesn’t launch into an accusatory diatribe, he said.

Instead, they’ll use it as a “learning opportunity” to talk about appropriate uses of AI, and perhaps allow the student to redo the assignment.

“It’s not just about giving a zero and moving on,” Rubin said.

Never assume AI-detection tools are right about plagiarism

Those conversations are important, particularly when a teacher suspects a student of cheating because an AI detection tool has flagged work as potentially plagiarized, said Amelia Vance, the president of the Public Interest Privacy Center, a nonprofit organization that aims to help educators safeguard student privacy. Vance was also speaking during the Education Week K-12 Essentials Forum on AI.

Most AI detection tools are wildly inaccurate, she noted. Studies have found that commercially available detection tools tend to erroneously identify the work of students of color and those whose first language is not English as AI-crafted.

Programs that look at whether a student copied and pasted huge swaths of text—like the one Rubin’s school uses—offer a more nuanced picture for educators seeking to detect AI-assisted cheating, Vance said. But even they shouldn’t be taken as the final word on whether a student plagiarized.

“Unfortunately, at this point, there isn’t an AI tool that sufficiently, accurately detects when writing is crafted by generative AI,” Vance said. “We know that there have been several examples of companies that say, ‘We do this!’ or even experts in education who have said, ‘This is available as an option to deal with this cheating thing.’ And it doesn’t work.”

The kind of technology that Uxbridge High School relies on gives educators “a better narrative” to work with than other types of detection tools, Vance added. “It’s not just, ‘Is this student cheating or not?’ It’s, ‘How is this student interacting with the document?’”

That’s why Uxbridge’s practice of talking to students directly when AI cheating is suspected is an important first step.

If a student admits to cheating using AI in those conversations, “you need to make it clear to the student that is not acceptable,” Vance said. But teachers should never take the word of an AI detector—or even the type of product Rubin described—as gospel.

“Avoid ever assuming the machine is right,” Vance said.
