Shortly before the New Hampshire primary in January, a malicious robocall campaign using an AI-generated voice of Democratic candidate Joe Biden was detected. The voice message urged supporters not to vote, framing abstention as a move in favor of the Democratic Party.
It didn’t take long for law enforcement to find out who was behind the estimated 5,000–25,000 robocalls, which also used spoofed caller ID information to make it appear the calls came from the treasurer of a political committee supporting the New Hampshire Democratic presidential primary write-in effort for President Biden.
This week, Steven Kramer, a 54-year-old political consultant from New Orleans, was charged with 13 felony counts of voter suppression and 13 misdemeanor counts of impersonation of a candidate, The Register reports.
Kramer’s stated motivation was to give another Democratic candidate, Dean Phillips, a better chance at challenging Biden. He says the campaign cost him only $500.
The fake voice recording was created by a magician who was paid $150. The recording was then distributed through a chain of three companies, with Lingo Telecom ultimately delivering this scripted message:
“What a bunch of malarkey. You know the value of voting Democratic when our votes count. It’s important that you save your vote for the November election. We’ll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”
Announcing the charges, New Hampshire Attorney General John Formella said they should serve as a “strong deterrent” to anyone contemplating similar schemes.
Separately, the Federal Communications Commission proposed a $6 million fine for Kramer, as well as a $2 million fine for Lingo Telecom for breaching the FCC’s caller ID authentication rules.
With the fast-improving capabilities of AI tools that can generate realistic video and audio, the malicious use of deepfakes is expected to rise significantly. Even though some companies are hesitant to publicly release their latest AI models due to the risk of misuse, cheap tools already on the market – already good enough to trick many people – will keep getting better too.