U.S. FCC declares AI-generated cloned voice robocalls illegal
The U.S. Federal Communications Commission (FCC) has declared that robocalls using AI-generated voices are illegal, following an incident in which a fraudulent robocall mimicking President Joe Biden sought to discourage people from voting for him in New Hampshire’s Democratic primary election.
The calls sought to discourage voting by falsely claiming that people who opted for mail-in ballots would have their personal data added to a public database. That database, the calls asserted, would be used by law enforcement agencies to pursue past warrants and by credit card companies to collect unpaid debts.
FCC Chair Jessica Rosenworcel said the declaratory ruling gives state attorneys general additional tools to pursue those behind the robocalls.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” Rosenworcel said.
The FCC noted that state attorneys general could previously target only the outcome of an unwanted AI-voice robocall, whereas the new ruling makes the act of using AI to generate the voice in such calls illegal in itself.
Earlier this week, New Hampshire Attorney General John Formella said the fraudulent Biden robocall had been traced to Life Corp, a Texas-based company run by Walter Monk. Formella said a cease-and-desist letter had been sent to the company and that a criminal investigation was underway.
“The use of generative AI has brought a fresh threat to voter suppression schemes and the campaign season with the heightened believability of fake robocalls,” Democratic FCC Commissioner Geoffrey Starks said.
The FCC emphasized that “voice cloning” can convince recipients that a familiar or trusted person, such as a family member, is urging them to take actions they would not otherwise consider.
In 2023, the FCC concluded an investigation by imposing a $5.1 million fine on conservative activists found to have made more than 1,100 unlawful robocalls in the run-up to the 2020 U.S. election.