A growing number of teenagers know someone who has been the target of “deepfake” pornographic images or videos generated by artificial intelligence, a new survey shows.
One in 8 young people aged 13 to 20—and 1 in 10 teenagers aged 13 to 17—said they “personally know someone” who has been the target of deepfake nude imagery, and 1 in 17 have been targets themselves. Thirteen percent of teenagers said they knew someone who had used AI to create or redistribute deepfake pornography of minors.
These statistics come from a survey of 1,200 young people, conducted Sept. 7 to Oct. 7 and released by Thorn, a nonprofit group that advocates for child safety online. The report highlights the relative ease with which young people can create deepfakes: 71 percent of respondents who created deepfake imagery of others said they found the technology to do so on social media, and 53 percent said they found the tools through an online search engine.
Schools nationwide have battled the rising challenge of deepfake nudes over the last few years. Boys as young as 14 have used artificial intelligence to create fake yet lifelike pornographic images of their female classmates and have shared them on social media platforms like Snapchat.
These cases have raised new questions for schools about how to discipline students who create such images and have prompted districts to review their policies on technology use and sexual misconduct. Concern over online safety has also spurred legislative action from bipartisan groups of lawmakers. To date, 136 bills addressing nonconsensual intimate deepfakes have been introduced in 39 states, according to Public Citizen, a nonprofit consumer advocacy organization.
The number of young people who are personally familiar with deepfakes is “really shocking,” said Melissa Stroebel, the head of research at Thorn and a co-author of the study.
The number of young people—1 in 17—who have been targets of deepfakes represents “a small percentage, but when we put that in context, that’s [at least] one in every classroom,” Stroebel said, adding: “That’s a startling rate of exposure to this particular harm at this point.”
More than 80 percent of the young people surveyed said they recognize that deepfake nude imagery “causes harm” to the person depicted. The top reasons they identified as causing harm were the “emotional and psychological impact” of the image and “reputational damage.”
This finding, Stroebel said, indicates that even as adults continue to debate the “reality” of these synthetic images and the harm they cause, most young people feel strongly that creating or viewing this kind of imagery is abusive.
“That’s a good sign,” she said. “When young people recognize this type of imagery as harmful and abusive, they may be more likely to report it, provided [that] awareness also reinforces the fact that this threat is serious, rather than just a normal part of being online.”
Teens recognize the harm. But to what extent?
The report highlights a disconnect between young people’s familiarity with deepfakes—1 in 3 teens and 1 in 2 young adults have heard the term—and their perception of the harm these images cause.
Too many young people don’t automatically consider deepfake images to be harmful, Stroebel said.
Teenage boys and young men are more likely than their female counterparts to think deepfakes cause no harm, or that the harm is “context dependent.” For instance, 7 percent of boys aged 13 and 14 thought the harm depended on the context, compared with 2 percent of girls in the same age group. Among 15- to 17-year-olds, 10 percent of boys thought the harm was context dependent, compared with 7 percent of girls.
Overall, 9 percent of young people said deepfakes cause no harm, reasoning mainly that the images aren’t real and don’t inflict physical harm.
It’s crucial for educators and other adults to teach young people about the harms of deepfakes, Stroebel said, because that understanding can shape how teens navigate the deepfake risks they increasingly encounter online. It can also affect how often teens use AI tools—easily available online—to create and share deepfake images of others.
The Thorn report also captured responses from a small subset—2 percent—of young people who have created deepfake images, with a large majority of the creators—74 percent—targeting women. Over 30 percent of the creators indicated they had made nude imagery that depicted minors.
More than half of these creators reported sharing the images with friends or people at their school. Notably, 27 percent said the images they made were never shared and were meant only for personal consumption. That could mean the people depicted never learn they have been targeted and have no opportunity to seek recourse.
Schools and adults need to talk about risks with young people
To mitigate the risks, schools can start by clearly identifying deepfake nude imagery as a form of abuse and including it in their policies against bullying and harassment.
While most young people understand that deepfake nudes are a form of abuse, the survey found that 16 percent of respondents who were targeted by a deepfake did not seek support to deal with the abuse because they feared being shamed, blamed themselves, or worried they would not be believed.
Of those who did seek support, 60 percent said they either reported the image online or blocked the person who created it. More than half also sought guidance from a parent, teacher, or adult in their community. Most respondents who acted took both online and offline actions to deal with the abuse, the report noted.
Parents, guardians, and other adults in young people’s communities should be prepared to have “necessary conversations around relationship awareness, consent, and sexual education,” Stroebel said. “The digital world is just another place where that development is happening at this point.”