Correction: An earlier version of this story described The Social Institute as a nonprofit organization; it is a for-profit organization.
A new word, “deepfakes,” slipped effortlessly into the lexicon of school administrators last fall in states across the country. Schools discovered that boys as young as 14 had used artificial intelligence to create fake yet lifelike pornographic images of their female classmates and shared them on social media sites like Snapchat.
Students aren’t the only targets. Early last year, the athletic director of Pikesville High School in Baltimore County, Md., used an off-the-shelf, $1,900 software program to create a fake audio clip of his principal. In it, the fake principal could be heard spouting harmful and racist stereotypes about his Black and Jewish students. The real principal was absolved of any wrongdoing, but the district reassigned him to another school after strong reaction to the fake clip from Pikesville’s students and parent community.
The spike in the misuse of AI-generated deepfakes has stunned school districts, which now—through a mix of amended school policies, technological safeguards, and training—are trying to catch up and curb this kind of behavior.
“Deepfakes are the next iteration of bullying,” said Jennifer DiMaio, the assistant superintendent for curriculum and instruction at the Valley Stream Central High School district in Valley Stream, N.Y. Since the start of this school year, DiMaio and school leaders in her district have grappled with the right approach to teaching students about the responsible use of technology. “I don’t think anyone is going to be immune to this, because everybody has a device and everybody has access to AI, and it’s generating at a rate much faster than our comprehension of it,” she added.
DiMaio’s fears aren’t unfounded. A nationally representative survey of more than 1,100 teachers, principals, and district leaders conducted by the EdWeek Research Center in September showed that 67 percent of them believed that their students had been misled by a deepfake. While only 9 percent of educators reported being misled themselves, more than half the respondents were somewhat or very concerned that students would use AI to generate deepfakes that featured their peers or educators.
The data also showed that schools haven’t adopted a uniform approach to training their staff on the dangers of deepfakes: Over half of respondents reported that they’d received no training at all or that the training they’d received was poor; just 7 percent said it was good.
Turning AI into a consumer product has led to a “Cambrian explosion” of existing problems, said Jim Siegl, a senior technologist for the Future of Privacy Forum, a think tank that works on privacy issues. Schools have always dealt with issues like sexual harassment via nonconsensual imagery, cyberbullying, and sexting, in which students are both perpetrators and victims. AI-generated deepfakes have accelerated these harms. The threats “to physical safety, to reputation, now have an audio-visual update,” Siegl added.
When faced with a deepfake incident, most districts have meted out expulsions, suspensions, and, in the case of the Baltimore County athletic director, a firing. In most cases involving pornographic deepfakes, districts turned to law enforcement to investigate. It’s vital, experts say, for schools to emphasize that students could face disciplinary, and even legal, action if they create deepfakes intended to harass and bully.
Some schools have taken it a step further than punishment and created “learning opportunities” around the disruptive incidents.
Jason Allemann, the principal of Laguna Beach High School in California, convened a panel this past March, soon after he learned that at least one male student had used AI to create “inappropriate images” of girls and shared them with other male students. The panel dealt with issues of privacy, appropriate AI use, and the legal and ethical concerns around sharing such content.
“We’re educating the students who made the bad decision, but we’re also bringing everybody else along with us in those conversations to build awareness and make sure that everybody understands the expectations, not only at school, but just, I think, morally [of] doing something right when you have all these tools at your fingertips,” Allemann said.
Students need to be coached about how to respond from the moment they receive an explicit photo or a piece of hurtful information, said Laura Tierney, the founder of The Social Institute, a for-profit organization that creates school-based programs to encourage positive use of social media.
When confronted with a deepfake, Tierney said schools can focus on what students should do and not solely on what they shouldn’t. This can mean flipping the script from “Don’t share that deepfake” to more positive options, like “If you’re scared of reporting this explicit picture to a caregiver or parent, report it to the counselor,” Tierney said.
How creators of ‘deepfakes’ are disciplined sends an important message
School districts have been quick to launch investigations into deepfake incidents as they’ve come to light. But the results of those investigations have varied.
The Beverly Hills Unified school district expelled five middle school students in March for creating and sharing explicit deepfake images of their female classmates, which first surfaced the previous month. At Laguna Beach, it’s unclear what, if any, disciplinary measures were taken against the creator or creators because individual student disciplinary information is protected by privacy laws.
In some cases, the victims have felt like too little was done—and too late.
Dorota Mani, the mother of 15-year-old Francesca Mani, found out last October that AI-generated nude images of her daughter and other girls were being circulated at the high school in Westfield, N.J. In the year since, Mani and Francesca have been vocal about how the school mishandled the incident and about the lasting, harmful impact the deepfakes will likely have on those female students.
Mani said she is worried that the images themselves could cause reputational damage among people who don’t know the context behind them.
“You know how many colleges could find out about her picture floating around, without her even knowing, and reject her? You know how many job offers will not be offered because of that picture? And many girls, unfortunately, are in that boat,” Mani added.
Westfield High School temporarily suspended one boy over the incident, according to The New York Times.
A spokesperson for the school would not confirm any disciplinary details, saying “we cannot provide specific details on the students involved or any disciplinary actions taken, as matters involving students are confidential.”
Mani was also critical of how the school handled the incident when the deepfakes first came to light: The school effectively identified the girls pictured in them by calling them down to the principal’s office over the intercom, while the identities of the boys behind the deepfakes were never made public.
Meanwhile, Francesca still goes to school with the boys who made the deepfakes of her and other girls, Mani said.
Mani said she’d urged the Westfield school district in late 2023 to include language on AI-generated deepfakes in their schools’ policies. In June, the district amended its policies to “incorporate AI technology into the definition of cyberbullying and expand the infractions in the code of conduct,” said Superintendent Raymond Gonzalez, in an email to Education Week.
Gonzalez also said in the email that the Westfield school board adopted an acceptable-use policy for AI this October, which recommends appointing a district-level AI coordinator and committees to oversee the use of AI tools at each school. The board adopted the resolution more than a year after the Westfield High deepfakes first emerged, a timeline that is unacceptable to Mani: “Our school failed us.”
Mani has worked with lawmakers on both sides of the aisle on legislation that would better protect young people from the harmful impacts of deepfakes. She’s lent her support to the Defiance Act, bipartisan legislation introduced by U.S. Rep. Alexandria Ocasio-Cortez, D-N.Y., which would give victims the right to take civil action against those who create, distribute, or receive “digital forgeries.”
Twenty states have passed bills that bring AI-generated imagery depicting the faces of real adults or children under statutes that criminalize the creation and possession of such deepfakes, according to data collected by the consumer-advocacy group Public Citizen.
Find a careful balance between accountability and empathy
Over the past year or so, more schools have been proactive about tweaking their student codes of conduct and disciplinary policies to include deepfakes. It’s not just past incidents that have spurred them; it’s also the uncertainty about how AI might develop and how accessible it will become to students.
A nationwide survey of students, teachers, and parents by the Center for Democracy & Technology showed that 40 percent of students were aware of a deepfake associated with someone they know at school, compared with 29 percent of teachers; 17 percent of parents surveyed said the same.
That gap in awareness means deepfake incidents are likely underreported, the report says.
For some school leaders, an amended code of conduct should signal consequences to anyone planning to misuse AI at the expense of a peer or an educator.
Shari Camhi, the superintendent of the Baldwin Union Free school district in Baldwin, N.Y., said that in updated school board policies, student-created deepfakes of their peers or educators will lead to suspension. But the suspension will also be followed or preceded by other “restorative” measures “to make the students understand the terrible thing that they did and to be able to make amends to the person they did it to,” Camhi said.
“It’s important for the student to have a path to come back to school. In situations where a longer suspension is necessary, we allow for a shortened suspension if they also attend counseling for a specified duration. That’s necessary when someone does something so egregious,” she added.
Policies need to focus on the needs of victims, too
As administrators create new disciplinary guardrails around deepfakes, experts caution that schools may be paying inordinate attention to dealing with the perpetrators, while leaving victims vulnerable.
“The latest amendments to Title IX frame deepfake imagery as sexual harassment. I think where schools might fail is if they treat [deepfakes] as a violation of acceptable-use policy, and not as sexual harassment,” said Siegl of the Future of Privacy Forum.
The new amendments to Title IX, though, have been challenged in 26 states and by conservative groups, which have backed at least eight lawsuits blocking the regulations from taking effect in those states. Elsewhere, the regulations are in full effect.
It’s also possible that once President-elect Donald Trump takes office in January, his administration will withdraw the regulations entirely, dropping their coverage of deepfakes along with them.
Camhi’s district started working on a media literacy curriculum in the 2020-21 school year, which is now required for students in grades 6-12 as part of English/language arts and social studies coursework. Students are pushed to think critically about the information they receive, triangulate data sources, and judge whether the information is credible enough to share. Camhi hopes her students will apply the same critical-thinking skills to information they find online.
Camhi said that both teaching and non-teaching staff also went through three hours of professional development in media literacy at the start of this school year. “We need to backtrack too to train people who didn’t get the skills when they were in school,” said Camhi.
Schools need to move forward with the assumption that students will be tempted to create or share unsuitable deepfakes of their peers or teachers, experts say. It’s schools’ responsibility to teach students how to make the right choice.
The Social Institute’s Tierney collaborated with educators and students to create a peer-to-peer learning platform that’s used in schools across the country. It focuses on posing “essential questions” to students during different class periods, like science, social studies, or English, which are then also shared with their parents. These questions might quiz students on the differences between a real and an AI-generated picture, or ask what their role is if a deepfake related to their school is being shared on social media.
“These open-ended questions spark curiosity, problem-solving, and huddling and discussion. I think those are the important conversations that can help a school community tackle this topic that is new and ever evolving,” Tierney said.
Signs that this conversational or “educational” approach is working are difficult to pin down. Tierney said that teachers who use this curriculum report higher levels of empathy in students toward each other and their teachers.
Camhi, for her part, is trying to gauge how students have responded to the district’s programming on AI.
The “biggest check mark” for her is when students report they now think more critically about the information they receive.
Data analysis for this article was provided by the EdWeek Research Center.