Event Highlight

Exploring the Threat of AI to Electoral Integrity

By Giulia Campos MIA ’24
Posted Jan 29, 2024
The Panel
photos: Nir Arieli


In 2024, the world faces its first large election cycle since the release of a new suite of advanced AI tools and systems. How to mitigate this technology's potential harm to the integrity of these elections was the subject of a January 24 event at SIPA's Institute of Global Politics led by IGP-affiliated faculty member Camille François.

“What we know about AI technologies is that those tools and systems can make it easier to throw a wrench in the election process and do it at scale,” said one of the panelists, Alondra Nelson, a social scientist who served as acting director of the White House Office of Science and Technology Policy under President Joe Biden.

Nelson, who is now the Harold F. Linder Professor at the Institute for Advanced Study, was joined on stage by Nevada's secretary of state, Francisco V. Aguilar, and the investigative journalist and author Julia Angwin.

The discussion was the first in a series of programs devoted to the topic as part of the AI Democracy Projects, a new initiative founded by Angwin and Nelson. The second installment was a closed workshop held the following day at Columbia Journalism School, designed to test and publicly benchmark the performance of AI chatbots and other tools that are becoming a popular source of public information.

What we know about AI technologies is that those tools and systems can make it easier to throw a wrench in the election process and do it at scale.

– Alondra Nelson

“People treat AI like a monolith,” Angwin explained, “but there are actually different products that are all competing with each other.” 

To test the leading AI systems and better understand the risks they pose, the AI Democracy Projects convened election officials, scholars, journalists, and other experts to assess “accuracy, completeness, bias, and harmfulness,” among other measures. The software tested included many of the market’s best-known names: OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, Meta’s Llama, and France’s Mistral. The results will be made public in a report slated for the end of February.

“We know that AI can be really cool and engaging,” Nelson said. “We know that working with chatbots can be fascinating and it feels like magic to people. So I think that for policymakers and legislators, it creates a kind of brain fog around the levers and tools they have to regulate the system.”

These new tools are also reshaping the media landscape and how journalists cover elections. 

“As a journalist whose entire job is to try to create factual, trustworthy information, AI is really the thing that I feel like, at a meta level, I am most worried about,” said Angwin, who is also the founder of the journalism startup Proof News and a contributor to the New York Times’ Opinion section. She added that the existence of AI gives citizens license to say “‘Oh, that’s probably fake’ to anything they don’t want to believe.”

These deepening information asymmetries are emerging in a landmark election year, both in the United States and abroad in a quarter of the world’s countries.

The Panelists

“Misinformation as a whole is a threat to democracy,” said Aguilar, who will be overseeing the presidential election in one of America’s battleground states. Adding to the uncertainty, according to François, is that “all of this [AI development] is unfolding at a moment where in Silicon Valley, major tech companies are rolling back safety safeguards … and dismantling election security teams.” As a cautionary tale, she pointed to recent elections in Slovakia, where deepfakes played a role in disrupting the electoral process.

As part of the conversation, panelists emphasized the need for continued testing of AI tools to understand the information being given to the public, especially during election cycles.

“In the United States there are a lot of efforts to suppress the votes of certain communities and a lot of that has to do with providing information that makes it seem difficult to vote or dangerous or like it does not matter,” Angwin said. “We do know that those strategies are used to suppress the vote, so the question is, can you make the bot do that?” 

IGP and SIPA will further examine the impact of AI on global elections in partnership with the Aspen Digital unit of the Aspen Institute at an all-day event scheduled for March 28. Officials, scholars, and other experts from the United States and abroad are slated to discuss issues including the use of AI in selected international elections; the global regulatory and legal landscape; the readiness of election officials; efforts by leading companies in the tech and AI sector; and more. Watch SIPA and IGP event calendars and other communications for updates.

Watch the entire discussion: