The Biggest Deepfake Abuse Site Is Growing in Disturbing Ways

A referral program and partner sites have spurred the spread of invasive, AI-generated “nude” images.
Illustration: a crowd in which everyone has the same pixelated face (Elena Lacey; Getty Images)

A deepfake website that generates “nude” images of women using artificial intelligence is spreading its murky tentacles across the web—spawning look-alike services through partner agreements and recruiting new users through a referral system. The expansion efforts have allowed the service to proliferate despite bans placed on its payment infrastructure.

The website, which WIRED is not naming to limit its amplification, has existed since last year. It digitally “removes” clothing from non-nude photos to create nonconsensual pornographic deepfakes. Researchers say its output is “hyper-realistic,” and unlike similar abusive platforms, it can generate pornographic images even when the person in the original photo is fully clothed; earlier tools of this kind have only worked with partially clothed photographs.

In recent months the website has expanded its services, earning its creator potentially thousands of dollars. The website has made its algorithms available to “partners” through access to its APIs, and two spin-off websites have been created by other people. The original website has been previously reported on, but the extent of its partner programs has not.

The website’s “partner program” page says the scheme was created so that the developer behind the system can “focus more” on AI research, provide customers with alternate payment methods, and create localized-language versions of the site. It claims that having a decentralized model lets it avoid “sudden suspension of service, even termination.”

This approach does seem to have helped the website avoid being taken offline. According to data from digital intelligence platform Similarweb, the site had more than 50 million visits between January and the end of October this year, making it the biggest of its kind. “Hundreds of thousands” of images have been uploaded on a single day, the creator has claimed on the website. Its traffic peaked in August at 6.92 million visits, according to Similarweb’s data.

The site received attention from the Huffington Post and others at around that time, leading to its hosting being taken offline and cryptocurrency platform Coinbase appearing to suspend its payment account. Those restrictions roughly halved its traffic, to 3.14 million visits in October; 13.93 percent of visitors were in the United States. While the site has declined in size, its business partners have grown, helping to keep the abusive technology accessible to millions of people. In October, according to the Similarweb data, one of the spin-off websites recorded approximately 830,000 visits, while the other had almost 300,000. In the months before, both recorded only tens of thousands of visits. The original website drives much of this additional traffic.

The creator of one of these murky spin-off sites says they are paying around $500 to the original website for the ability to create 10,000 nude images. A counter on the partner website claims it has processed 204,522 images from more than 3,000 paying customers. The other partner website claims its AI training data set includes more than 1 million images, though it is not clear where those images came from; the creator of the original website has written online that the service does not store uploaded images or use them for training purposes.

Recruiting partners is not the only way the website has sustained itself. Hundreds of links for the website’s referral program—where people receive free image-generation tokens every time someone clicks—are also being shared on Twitter, YouTube, Telegram, and specialized pornographic deepfake forums.

Since the site was launched, its creator—whose identity is unknown and who did not respond to a request for comment—claims to have updated its algorithm multiple times. The website says it is currently running on version 2.0. The site’s developer claims a third version, apparently due to be released at the start of 2022, will improve “prediction” on photographs taken from the “side or back.” The creator claims that future versions will allow people to “manipulate the attribute of target such as breast size, pubic hair.”

The website’s startup-like growth tactics signal a maturity in abusive “nudifying” deepfake technologies, which overwhelmingly target and harm women. Since the first AI-generated fake porn was created by a Redditor at the end of 2017, these systems have become more sophisticated. The technology was turned into its first app, dubbed DeepNude, in 2019; although its creator took the app down, its code still circulates. Since then this kind of technology has become as easy to use as selecting a photo and clicking upload. Recent horrifying developments have also included easy-to-use video production.

With the increased ease of use, targets of harassment have moved from high-profile celebrities and influencers to members of the public. The expansion of this recent site and its partnerships commoditizes those intrusions even further. “The quality is much higher,” says Henry Ajder, an adviser on deepfakes and head of policy and partnerships at synthetic media company Metaphysic. “The people behind it have done something which hasn't really been done since the original DeepNude tool … that's trying to build a strong community around it.”

The spread of partner programs and payment services across the website and its two spin-offs indicates that this kind of technology is at a tipping point, says Sophie Maddocks, a researcher at the University of Pennsylvania’s Annenberg School for Communication who specializes in studying online gender-based violence. “This harm is going to become part of the sex industry and is going to become profitable; it's going to become normalized,” Maddocks says. Society, technology companies, and law enforcement need to have a “zero tolerance” approach to these deepfakes, she adds.

The websites are raking in money for their creators. All three charge people for processing the images, ranging from $10 for 100 photos to $260 for 2,000. They offer a limited number of free images, billed as trials of the technology, but visitors are pushed toward payment. At various points in their existence, they have accepted bank transfers, PayPal, Patreon, and multiple cryptocurrencies. Like Coinbase, many of these providers cut ties after previous media reports. All three sites still accept various cryptocurrencies for payment.

Ivan Bravo, the creator of the spinoff website that claims to have more than 3,000 paying customers, says “it is not correct” morally that he makes money from a service that harms people. But he continues to do so. “It generates good income,” he said in an email when asked why he operates the website. He declined to say how much money he has earned through sales but says “it has been more than enough to support a family in a decent house here in México.”

An independent researcher tracking the websites, who asked not to be named because of the sensitive nature of the subject, found that Bravo’s website had 630 paying customers in the three days after its launch in August. That could have earned Bravo anywhere between $7,553 and $57,323, the analysis says. When presented with those figures, Bravo confirmed that his earnings fell within that range.

Bravo, who has previously created a desktop app that can be used to “strip” people, tries to justify his website by pointing out that it and others include disclaimers prohibiting their use to harm others. He also claims the technology could be developed to work on men and could be used by the adult industry to create custom pornography. (The creator of the other spin-off site did not answer questions sent via email.) However, deepfakes have been used to humiliate and abuse women since their inception: the majority of deepfakes produced are pornographic, and almost all of them target women. Last year researchers discovered a Telegram deepfakes bot that had been used to abuse more than 100,000 women, including underage girls. And during 2020, more than 1,000 nonconsensual deepfake porn videos were uploaded to mainstream adult websites each month, with the websites doing very little to protect the victims.

“This can have real and devastating consequences,” says Seyi Akiwowo, the founder and executive director of Glitch!, a UK charity working to end the abuse of women and marginalized people online. “Perpetrators of domestic violence will go on sites like this to take innocent photos to nudify them to try and cause further harm.”

“I’m being exploited,” Hollywood actress Kristen Bell told Vox in June 2020 after discovering deepfakes were made using her image. Others targeted by deepfake abuse images have said they are shocked at the realism, would not like their children to see the images, and have struggled to get them removed from the web. “It really makes you feel powerless, like you’re being put in your place,” Helen Mort, a poet and broadcaster, told MIT Tech Review. “Punished for being a woman with a public voice of any kind.”

Stopping these harms requires multiple approaches, experts say: a combination of legal, technical, and societal measures. “We need to educate young people, adults, everyone, around what is actually the harm in using this and then spreading this,” Akiwowo says. Others say tech and payment platforms should also put more mitigations in place. More education on deepfakes is needed, says Mikiba Morehead, a consultant with risk management firm TNG who also researches cyber sexual abuse, but technology can also stop their spread. “This could include the use of algorithms to identify, tag, and report deepfake materials, the employment and training of human fact-checkers to help spot deepfakes, and specific education initiatives for those who work in the media on how to detect deepfakes, to help stop the spread of misinformation,” she says.

For instance, Meta’s Facebook has been developing ways to reverse-engineer deepfakes, but this kind of technology is still relatively immature. Microsoft-owned GitHub continues to host the source code for AI applications that generate nude images, despite saying it would ban the original DeepNude software in 2019.

And then there’s the role of the law. Despite deepfakes being used to generate pornographic images and videos since 2017, lawmakers have failed to act on the problem. While many US states and the UK have laws on revenge porn, they don’t cover deepfakes. The UK’s Law Commission has been consulting on the legal challenges of deepfakes since 2019 and has yet to propose changes. There are few ways for victims to effectively fight back.

Those who want to take action face significant legal challenges, says Honza Červenka, a solicitor at law firm McAllister Olivarius, which specializes in cases involving nonconsensual imagery and technology. Elements of copyright laws, privacy torts, and artistic license could be used to get images removed from the web, he says. “We're in a territory of bending laws to the breaking point,” he explains. “The longer this regulatory vacuum continues, the more initiatives like this are going to gather speed, are going to industrialize, and are going to become harder to regulate at a later stage.”

Even when all of those things do happen, it may be difficult, or impossible, to track down and bring to justice the people making and distributing the abusive technology, as with so many of the web’s harms. A person based in Asia may not be able to be prosecuted in the United States or the United Kingdom without extradition, for example. Disrupting deepfakes will require a full arsenal of mitigations. “The best solution we have here,” says Ajder, “is to create as much friction as possible.”

