
AI Voice Generation Used in Russian Influence Campaign: TechCrunch

Generative AI is increasingly being used in malicious activities, and a recent report highlights its role in a Russian influence operation targeting European audiences. TechCrunch, citing a report from Recorded Future, a threat intelligence company, reveals that a Russian-linked campaign, dubbed "Operation Undercut," utilized AI-generated voiceovers in misleading videos aimed at undermining European support for Ukraine.

Recorded Future's report indicates that the videos, which attacked Ukrainian politicians and questioned the effectiveness of military aid, "very likely" employed AI-generated voiceovers. Researchers used ElevenLabs' AI Speech Classifier to verify this, finding a match between audio clips from the videos and the company's AI voice generation technology.

ElevenLabs has not yet responded to requests for comment. While Recorded Future suggests that multiple commercial AI voice generation tools were likely used, it only explicitly names ElevenLabs in its report.

The effectiveness of the AI-generated voiceovers is underscored by the contrast with videos featuring human voiceovers, which contained discernible Russian accents. The AI-generated voices, by contrast, spoke in multiple European languages without foreign accents, enhancing the videos' apparent legitimacy.

"The videos, which targeted European audiences, attacked Ukrainian politicians as corrupt or questioned the usefulness of military aid to Ukraine, among other themes," TechCrunch reports. "For example, one video touted that ‘even jammers can’t save American Abrams tanks,’ referring to devices used by US tanks to deflect incoming missiles – reinforcing the point that sending high-tech armor to Ukraine is pointless."

Recorded Future attributes the campaign to the Social Design Agency, a Russia-based organization sanctioned by the US government for creating a network of websites impersonating genuine news organizations and using fake social media accounts to amplify false information.

The campaign's overall impact on European public opinion was reportedly minimal. However, this is not the first time ElevenLabs' technology has been implicated in alleged misuse: it was previously linked to a robocall impersonating President Biden, an incident that prompted ElevenLabs to introduce new safety features.

ElevenLabs prohibits "unauthorized, harmful, or deceptive impersonation" and employs both automated and human moderation to enforce these policies. Even so, the potential for misuse of the company's technology highlights the broader challenge of regulating and mitigating the risks associated with generative AI.