No political deepfake has alarmed the world’s disinformation experts more than the doctored audio message of US President Joe Biden that began circulating over the weekend.
In the phone message, a voice edited to sound like Biden urged voters in New Hampshire not to cast their ballots in the Democratic primary on Jan 23. “Save your vote for the November election,” the phone message went. It even made use of one of Biden’s signature phrases: “What a bunch of malarkey.” In reality, the president isn’t on the ballot in the New Hampshire race – and voting in the primary doesn’t preclude people from participating in November’s election.
Many have warned that new artificial intelligence-powered video and image generators will be used for political gain this year, when elections will decide representation for nearly half the world’s population. But it’s audio deepfakes that have experts worried now. They’re easy to edit, cheap to produce and particularly difficult to trace. Combine a convincing phone message with a voter registration database, and a bad actor has a powerful weapon that even the most advanced election systems are ill-equipped to handle, researchers say. “The political deepfake moment is here,” said Robert Weissman, president of the consumer advocacy think tank Public Citizen. He called on lawmakers to put in place protections against fake audio and video recordings to avert “electoral chaos.”
The fake Biden message comes as an increasing number of US political campaigns use AI software to reach constituents en masse – and as investors are pouring money into voice-cloning startups. On Jan 22, while the deepfake phone message was making the rounds, the AI voice-replicating startup ElevenLabs announced it had raised a new round of funding that valued the company at US$1.1bil (RM5.20bil).
The doctored political recording wasn’t the first. Last year, audio deepfakes spread on social media ahead of Slovakia’s parliamentary elections, including one clip in which party leader Michal Simecka appeared to be discussing a plan to purchase votes. Until now, though, political use of video and audio deepfakes has proven limited.
It’s unclear exactly how the Biden message was generated. New Hampshire’s attorney general was investigating the call on Monday. But tracking the fake audio to its source will prove especially difficult because it was spread by telephone as opposed to online, according to Joan Donovan, an assistant professor of journalism and emerging media studies at Boston University. Audio messages delivered by phone don’t come with the same digital trail.
“This is an indication of the next generation of dirty tricks,” Donovan said.
There’s another reason the fake Biden clip was particularly worrisome to disinformation researchers and election officials. It confirmed their biggest fear: Bad actors are using deepfakes not just to influence public opinion but to stop voters from coming to the polls altogether.
“Even if such misinformation introduces confusion that only impacts a few hundred or thousands of votes, it could be meaningful in terms of the results and outcome,” said Nick Diakopoulos, a professor at Northwestern University who has researched manipulated audio and elections.
The US Federal Election Commission has taken small steps toward regulating political deepfakes, but it has yet to clamp down on the technologies helping to generate them. Some states have proposed their own laws to curb deepfakes.
Election officials are running training exercises to prepare for an onslaught. Around 100 federal and state officials assembled in Colorado in August to brainstorm the best response to a hypothetical fake video containing bogus elections information. Deepfakes were the focus of another exercise in Arizona in December, when officials worked through a scenario in which a video of Secretary of State Adrian Fontes was falsified to spread inaccurate information.
Meanwhile, deepfake detection tools are still in their infancy, and their results often conflict.
On Monday, for example, ElevenLabs’ own detection tool indicated that the Biden call was unlikely to have been created using cloning software – even as deepfake detection startup Clarity said it was more than 80% likely to be a deepfake.