
Journalist Emanuel Maiberg Addresses AI and the Rise of Deepfake Pornography

By Katherine Noel
Posted Apr 22, 2024
Journalist Emanuel Maiberg and IGP faculty associate Camille François. Photo: Felix Vargas

As generative artificial intelligence tools become more sophisticated and widely accessible, the rapidly advancing technology has given rise to a growing problem that primarily affects women and girls: the proliferation of nonconsensual intimate imagery, or “deepfake pornography.” Deepfake porn is synthetic pornography created with AI deep-learning software, which can take an image of a real person and make their likeness appear in explicit images or videos.

“The victims of this technology are overwhelmingly young girls,” said the investigative journalist Emanuel Maiberg during an April 10 Institute of Global Politics event moderated by senior researcher and IGP faculty associate Camille François.

“It is almost exclusively young women who are nonconsensually being undressed and put into AI-generated porn,” said Maiberg, who is a cofounder of the online publication 404 Media (and a former editor at Vice Media’s tech publication, Motherboard). “That is the driving force of the technology; it’s where we see it used first when it gets into people’s hands.”

A 2019 study by the Amsterdam-based cybersecurity company Deeptrace found that 96 percent of online deepfake videos were pornographic and nonconsensual. A 2023 study by the US cybersecurity firm Home Security Heroes reported that deepfake porn makes up 98 percent of all deepfake videos online, and that 99 percent of those targeted are women. That study identified 95,820 deepfake videos online, a 550 percent increase from 2019.

Maiberg and his colleague Samantha Cole have been covering the evolution of deepfake videos since their inception in 2017, when a single Reddit user named “deepfakes” began using a machine-learning algorithm to swap female celebrities’ faces onto the bodies of pornographic actresses. The process was streamlined in 2019 with the arrival of “nudify” apps, which let users feed photographs of real women into software that instantly generated fake nude images of them.

Recent advances in generative AI have made the process even easier and the results more realistic; with Stable Diffusion and other text-to-image AI tools, a user can take photos of someone, create a custom AI model of that person, and then generate any image they can describe with words.

“The models are very powerful — once you have the model, it’s so easy to do,” Maiberg said. “You can make the images as fast as you can click a mouse.” 

Celebrities were the primary targets when the technology first became popular, but nonfamous individuals are now commonly targeted as well.

Columbia University Institute of Global Politics. Photo: Caroline Donovan

As the content becomes easier to make, websites and creators have monetized sexually explicit deepfake videos. Several popular sites that host thousands of deepfake porn videos make money through display ads and subscription fees, and creators can sell custom models on platforms like Discord and X. Apps for making deepfake videos also often circumvent app store and marketplace rules that prohibit apps primarily used for pornographic content, Maiberg explained. “The people who are profiting from this are pretty sneaky in how they hide the fact that people use their technology for this abuse.” Maiberg’s reporting on the issue led to the removal of one such app, DeepSwap, but dozens of others remain and advertise openly on platforms like Instagram and TikTok.

The problem of deepfake porn is increasingly prevalent among middle school and high school students. Last fall, a group of tenth-grade girls at a New Jersey high school was targeted with AI-generated nude images, created by several boys and spread on social media. At a Washington high school, a male student was caught making sexually explicit deepfakes of 14- and 15-year-old female classmates. Last month, sixteen eighth-grade girls at a California middle school were targeted with fake nude images. “If you make a deepfake image of an eighth grader, that can ruin her life,” Maiberg said, adding that victims often suffer lasting trauma. “Look at Washington, where an entire high school fell apart because a couple idiot boys made deepfake nudes of their classmates, and now the whole community is in disarray.”

François spoke about IGP’s work to understand the laws governing the production of deepfake porn, saying, “we’re finding out the hard way that there’s not a lot of legal protection for victims of this type of activity.” Where minors are involved, synthetic child sexual abuse material (CSAM) is treated in the US as equivalent to real CSAM and is therefore illegal, but some countries don’t yet have laws addressing AI-generated CSAM.

American lawmakers have grappled with how to regulate AI-generated pornographic content as it becomes increasingly common. In January, pop singer Taylor Swift was the target of sexually explicit deepfakes circulated on social media, which brought international attention to the issue. One image shared on X was viewed 47 million times before the account that posted it was suspended. The US doesn’t yet have a federal law regulating AI-generated pornographic content, but Congress is considering legislation, and at least ten states have passed laws banning the practice. Still, progress is slow on an issue that remains foreign to much of society, and regulation hasn’t kept pace with the technology.

“It’s a very new problem and people don’t understand it; they’re not educated about it,” Maiberg said of hyper-realistic deepfakes. He said that in the last decade it has become “completely taboo” and punishable by law for an individual to share real nonconsensual pornography of a partner or former partner (commonly referred to as “revenge porn”), but that society hasn’t yet had the same conversation about AI-generated images.

“We’re at a point where there’s this crumbling of trust and safety moderation on all these big platforms that we’ve always relied on — Google, Facebook, Twitter, all of those — at the same time that it’s become exponentially easier to generate trash on the internet,” Maiberg said. “As a society, and culturally, we have not really grappled with this and agreed that it’s not okay.”