FEC Considers Regulation of AI Deepfakes in Political Ads

As the 2024 elections approach, the Federal Election Commission (FEC) is considering the role of AI-generated deepfakes in political campaigns. Advocates believe regulation will protect voters from disinformation.

[Video: an AI chatbot resembling Suarez]

The FEC had earlier expressed reservations about its authority on the issue. However, Public Citizen pointed to the fraudulent misrepresentation law, arguing that it gave the FEC jurisdiction. Supporting this position, 50 Democratic lawmakers, led by Rep. Adam Schiff, wrote to the FEC about the challenges rapidly evolving AI poses for voters trying to distinguish real campaign content from fake.

Despite this, Republican Commissioner Allen Dickerson expressed doubts about the FEC's authority to regulate deepfake content, citing the breadth of the fraudulent misrepresentation law and the potential for infringing on First Amendment rights. Public Citizen's Robert Weissman countered, arguing that the uniquely deceptive capabilities of deepfakes warrant a different approach.

Lisa Gilbert, of Public Citizen, proposed a compromise: candidates could disclose when AI was used to misrepresent opponents. This would ensure transparency without banning the technology outright.

Congress may also step in. Legislators, including Senate Majority Leader Chuck Schumer, are considering setting boundaries for AI-generated deceptive content. Meanwhile, states are actively debating, and in some cases passing, laws addressing deepfake technology.

Daniel Weiner, of the Brennan Center for Justice, warned that AI could amplify election misinformation. The severity of this threat remains to be seen, but concerns are undoubtedly growing.
