[Interactive demo: incoming call from "Citizens Clean Elections Commission", (602) 364-3477]

How can AI disrupt elections?

In 2024, there will be 84 elections across the world, including in the US, India, the UK, Indonesia, Mexico, Ukraine and Taiwan - estimated to be the highest number of elections in any year until 2048 [1, 2, 3]. It's the first major election year since the release of ChatGPT at the end of 2022. How will new AI capabilities affect the democratic process as the world goes to the polls?

We created the demo above to show one possible use of AI to commit election fraud: posing as a human and misleading voters in order to stop them from voting. Building it took one member of our team (a software engineer) roughly 3-4 days of work, using off-the-shelf services from OpenAI, ElevenLabs and Retell AI. With more time and expertise, sophisticated actors could likely produce a higher-quality AI voice and more convincing dialogue.

To run this scam at scale, a fraudster would just need to connect it to a phone number (which you can buy e.g. via Retell AI), and source a massive list of phone numbers to feed into it. They could scrape personal details about their targets like addresses and names [4], and add these to the language model prompt to personalise the scam content [5].

A couple of years ago, running a scam with interactive audio dialogue would have required training and paying a large number of humans. Recent advances in AI capabilities - big improvements in conversational chatbots, realistic text-to-speech, and continued progress in speech-to-text - substantially lower the cost of running this kind of operation. AI agents don't tire, know the culture and language of the target population (particularly relevant for foreign influence operations), and can be scaled up rapidly.

We estimate that running this scam using AI currently costs less than paying humans to do the same job.
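As a rough illustration of that estimate, the arithmetic can be sketched as below. All of the rates are hypothetical assumptions for illustration only - the article does not publish its cost figures, and real API, telephony and labour prices vary by provider, plan and time:

```python
# Back-of-the-envelope comparison: AI-run calls vs. human-run calls.
# Every rate here is an illustrative assumption, not a real quote
# from OpenAI, ElevenLabs, Retell AI or any other provider.

AI_COST_PER_MIN = 0.15       # assumed all-in per-minute cost (STT + LLM + TTS + telephony)
HUMAN_WAGE_PER_HOUR = 15.00  # assumed hourly pay for a human caller
CALL_MINUTES = 2             # assumed average length of one call
NUM_CALLS = 100_000          # assumed size of the target phone list

ai_total = AI_COST_PER_MIN * CALL_MINUTES * NUM_CALLS
human_total = (HUMAN_WAGE_PER_HOUR / 60) * CALL_MINUTES * NUM_CALLS

print(f"AI:    ${ai_total:,.0f}")     # prints "AI:    $30,000"
print(f"Human: ${human_total:,.0f}")  # prints "Human: $50,000"
```

Under these assumed rates the AI pipeline already undercuts human callers, and the human figure excludes recruitment, training and management overhead, so the real gap would likely be wider and would grow with call volume.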

This scam relies on various AI service providers, which could in theory detect and restrict this kind of activity, but in our testing none of the providers we used reacted at all. Scammers could also replace cloud service providers with open-source models like Llama and Whisper run locally, degrading performance but possibly reducing costs and making their activity very difficult for governments to monitor and restrict [6, 7].

This is a demonstration of current capabilities as of March 2024, and we expect the trend towards more capable AI systems to continue [8]. By the time of the US elections in November, costs will likely be lower, quality will be higher, and latency will be shorter.

However, the effectiveness of a scam of this kind is uncertain: AI voices are still imperfect, language models sometimes produce unconvincing output, and careful voters could verify that the scam is false by contacting their real local elections office. We don't know what the actual rate of voter suppression would be, either from this scam or from future variants powered by more capable AI systems.

Some related scams have already surfaced in the US: a non-interactive robocall deepfake of Biden's voice discouraged New Hampshire Democrats from voting in the primary before ElevenLabs banned the originating account [9], and a super PAC-funded text chatbot imitating presidential candidate Dean Phillips briefly allowed voters to discuss Phillips' platform before it was shut down by OpenAI [10]. Interactive two-way voice capabilities have already seen legitimate use: congressional candidate Shamaine Daniels' campaign used them for phonebanking [11]. Reports of election fraud using interactive two-way AI conversations have not yet surfaced, but this demo shows that off-the-shelf products can now be used to develop them.

Published March 2024, demo updated in April