The ACLU Fights for Your Constitutional Right to Make Deepfakes


On January 29, in testimony before the Georgia Senate Judiciary Committee, Hunt-Blackwell urged lawmakers to scrap the bill’s criminal penalties and to add carve-outs for news media organizations wishing to republish deepfakes as part of their reporting. Georgia’s legislative session ended before the bill could proceed.

Federal deepfake legislation is also set to encounter resistance. In January, lawmakers in Congress introduced the No AI FRAUD Act, which would grant property rights in a person’s likeness and voice. This would enable those portrayed in any type of deepfake, as well as their heirs, to sue anyone who took part in the forgery’s creation or dissemination. Such rules are intended to protect people from both pornographic deepfakes and artistic mimicry. Weeks later, the ACLU, the Electronic Frontier Foundation, and the Center for Democracy and Technology submitted written opposition to the bill.

Along with several other groups, they argued that such laws could be used to suppress much more than just illegal speech. The mere prospect of facing a lawsuit, the letter argues, could deter people from using the technology for constitutionally protected acts such as satire, parody, or opinion.

In a statement to WIRED, the bill’s sponsor, Representative María Elvira Salazar, noted that “the No AI FRAUD Act contains explicit recognition of First Amendment protections for speech and expression in the public interest.” Representative Yvette Clarke, who has sponsored a parallel bill that requires deepfakes portraying real people to be labeled, told WIRED that it has been amended to include exceptions for satire and parody.

In interviews with WIRED, policy advocates and litigators at the ACLU noted that they do not oppose narrowly tailored regulations aimed at nonconsensual deepfake pornography. Instead, they pointed to existing anti-harassment laws as a reasonably sturdy framework for addressing the issue. “There could of course be problems that you can’t regulate with existing laws,” Jenna Leventoff, an ACLU senior policy counsel, told me. “But I think the general rule is that existing law is sufficient to target a lot of these problems.”

This is far from a consensus view among legal scholars, however. As Mary Anne Franks, a George Washington University law professor and a leading advocate for strict anti-deepfake rules, told WIRED in an email, “The obvious flaw in the ‘We already have laws to deal with this’ argument is that if this were true, we wouldn’t be witnessing an explosion of this abuse with no corresponding increase in the filing of criminal charges.” In general, Franks said, prosecutors in a harassment case must show beyond a reasonable doubt that the alleged perpetrator intended to harm a specific victim—a high bar to meet when that perpetrator may not even know the victim.

Franks added: “One of the consistent themes from victims experiencing this abuse is that there are no obvious legal remedies for them—and they’re the ones who would know.”

The ACLU has not yet sued any government over generative AI regulations. The organization’s representatives wouldn’t say whether it is preparing a case, but both the national office and several affiliates said that they are keeping a watchful eye on the legislative pipeline. Leventoff assured me, “We tend to act quickly when something comes up.”


