**The Elusive Task of Taking Down Deepfake Pornography Apps**
You might have seen the headlines: ClothOff and Grok, two apps that have been used to create and share non-consensual deepfake pornography, have drawn scrutiny for years. But as a recent lawsuit filed by a clinic at Yale Law School on behalf of a 14-year-old student in New Jersey shows, these apps are notoriously hard to take down.
The case started with a shocking act of online bullying. The 14-year-old’s classmates used ClothOff to modify her Instagram photos, creating child abuse imagery that is now evidence in the lawsuit. But serving the app’s operators has been slow going, because the company is incorporated in the British Virgin Islands.
“It’s like they’re hiding in plain sight,” says Professor John Langford, co-lead counsel in the lawsuit. “They’re incorporated in the British Virgin Islands, but we believe the brother and sister behind it all are in Belarus. It’s like they’re part of a larger network that spans the globe.”
So why are ClothOff and Grok so hard to tackle? For one, their operators can present them as general-purpose software, which makes it tough to prove intent to harm. “It’s like trying to sue a company for providing a tool that can be used for both good and bad,” Langford explains. “If you’re using a knife to cut a cake or a salad, it’s the same tool, right?”
The First Amendment also comes into play: to hold a company like xAI, which makes Grok, liable, a plaintiff would likely have to show willful ignorance or recklessness. There have been reports that Elon Musk directed employees to loosen Grok’s safeguards, but even that wouldn’t necessarily be enough to prove intent to harm.
This problem isn’t unique to ClothOff and Grok. Child sexual abuse material is among the most toxic content online, yet the tools for fighting it are limited. Individual users can be prosecuted, but platforms like ClothOff and Grok are far harder to police, leaving victims like the 14-year-old student with few options for seeking justice.
The clinic’s complaint paints a disturbing picture of how these apps operate. Even though the manipulated photos are classified as child abuse imagery, local authorities declined to prosecute the case, citing the difficulty of obtaining evidence from the suspects’ devices.
The case highlights the need for clearer laws and regulations around deepfake pornography. The Take It Down Act now bans the distribution of non-consensual intimate imagery, but holding platforms accountable is still difficult. Without clear proof of intent to harm, companies like xAI can argue that they’re protected under the First Amendment.
The most straightforward path would be to show that xAI willfully ignored the problem, but even that is a risky case to bring. “Reasonable people can say, ‘We knew this was a problem years ago,’” Langford says. “How can you not have had more stringent controls in place to prevent this from happening?”
Regulatory agencies have taken steps to address the issue, but xAI has yet to face an official response from US regulators. The flood of images raises many questions for regulators to investigate, and the answers may be damning.
“If you’re posting, distributing, or disseminating child sexual abuse material, you’re violating legal prohibitions and can be held accountable,” Langford says. “The hard question is, what did X know? What did X do or not do? What are they doing now in response to it?”
