The Biggest Deepfake Abuse Site Is Growing in Disturbing Ways

Analysis from an independent researcher tracking the websites, who asked not to be named due to the sensitive nature of the subject, says Bravo’s website had 630 paying customers in the three days after its launch in August. This could have earned Bravo anywhere between $7,553 and $57,323, the analysis says. When presented with the figures, Bravo claimed his earnings fell within this range.

Bravo, who has previously created a desktop app that can be used to “strip” people, tries to justify his website by saying that it and others include disclaimers prohibiting their use to harm others. He also claims the technology could be developed to work on men and could be used by the adult industry to create custom pornography. (The creator of the other spin-off site did not answer questions sent via email.) However, deepfakes have been used to humiliate and abuse women since their inception—the majority of deepfakes produced are pornographic, and almost all of them target women. Last year researchers discovered a Telegram deepfakes bot used to abuse more than 100,000 women, including underage girls. And during 2020, more than 1,000 nonconsensual deepfake porn videos were uploaded to mainstream adult websites each month, with the websites doing very little to protect the victims.

“This can have real and devastating consequences,” says Seyi Akiwowo, the founder and executive director of Glitch!, a UK charity working to end the abuse of women and marginalized people online. “Perpetrators of domestic violence will go on sites like this to take innocent photos to nudify them to try and cause further harm.”

“I’m being exploited,” Hollywood actress Kristen Bell told Vox in June 2020 after discovering deepfakes had been made using her image. Others targeted by deepfake abuse images have said they are shocked at the realism, would not want their children to see the images, and have struggled to get them removed from the web. “It really makes you feel powerless, like you’re being put in your place,” Helen Mort, a poet and broadcaster, told MIT Technology Review. “Punished for being a woman with a public voice of any kind.”

Stopping these harms requires a combination of legal, technical, and societal measures, experts say. “We need to educate young people, adults, everyone, around what is actually the harm in using this and then spreading this,” Akiwowo says. Others say tech and payment platforms should also put more mitigations in place. More education on deepfakes is needed, says Mikiba Morehead, a consultant with risk management firm TNG who also researches cyber sexual abuse, but technology can also stop their spread. “This could include the use of algorithms to identify, tag, and report deepfake materials, the employment and training of human fact-checkers to help spot deepfakes, and specific education initiatives for those who work in the media on how to detect deepfakes, to help stop the spread of misinformation,” she says.

For instance, Meta’s Facebook has been developing ways to reverse-engineer deepfakes, but this kind of technology is still relatively immature. Microsoft-owned GitHub continues to host the source code for AI applications that generate nude images, despite saying it would ban the original DeepNude software in 2019.
