IGP and Aspen Digital Summit Addresses Threat of AI on Global Elections

Posted Apr 08 2024
Panelists at the IGP Aspen Digital 3.28 AI Summit
Photo: Shahar Azran

The Institute of Global Politics and Aspen Digital convened political affairs experts, tech executives, and leading academics on March 28 to discuss the growing threat artificial intelligence poses to the integrity of elections in the United States and worldwide. This year, roughly 2 billion voters in some 80 countries will head to the polls in national elections. This historic election year comes amid global democratic backsliding and cratering public trust in government.

The summit featured an afternoon of conversations on AI’s risks to the upcoming US elections, the technology’s impact on recent global elections, and tech companies’ and governments’ role in addressing AI’s use to generate and spread misinformation. 

(L-R): Gillian Tett, Maria Ressa, Věra Jourová, Hillary Rodham Clinton (Photo: Shahar Azran)

Secretary Hillary Rodham Clinton, who chairs IGP’s Faculty Advisory Board, joined Věra Jourová, Vice-President for Values and Transparency at the European Commission, and Maria Ressa, the Nobel laureate, cofounder, CEO, and president of Rappler, and IGP Carnegie Distinguished Fellow, on the opening panel on the threat posed to the integrity of global elections. Gillian Tett of the Financial Times moderated the panel. 

“Anybody who’s not worried is not paying attention,” said Clinton. “As we're here today, doing this panel and having these [other] experts and practitioners speak to us, there are literally people planning how to interrupt, interfere with, [and] distort elections – not just in the United States, but around the world.”

Clinton was famously the target of a disinformation campaign led by Russia in the run-up to the 2016 US presidential election. Russian officials created thousands of social media accounts to sow societal divisions in the United States and disseminate fabricated articles and false information about Clinton across social media.

“I don’t think any of us understood it,” Clinton said. “I did not understand it. I can tell you my campaign did not understand it. The so-called dark web was filled with these kinds of memes and stories and videos of all sorts portraying me in all kinds of less-than-flattering ways.”

In the first panel, “Risks to the 2024 Global Elections,” Jourová spoke about Slovakia’s recent parliamentary election, in which AI was successfully used to manipulate the vote. Deepfake audio purported to capture a pro-NATO candidate conspiring to rig the election; after it went viral on social media, the candidate was defeated by a pro-Russia opponent.

Maria Ressa and Věra Jourová (Photo: Shahar Azran)

Jourová said there is “good data” showing that most elections in EU member states are affected by Russian propaganda. The European Commission has taken a more aggressive approach than the US in standing up to tech companies: it has instituted measures requiring social media platforms to remove deepfakes that spread electoral disinformation, and last year the EU passed the Digital Services Act, which regulates online platforms to curb the spread of false information.

Ressa emphasized that women are often the targets of digital violence – particularly AI-generated pornographic images.

“If you’re a woman, gender disinformation is using free speech to pound you into silence if you’re in a position of power,” she said. 

Ressa stressed the need to abolish Section 230 of the Communications Decency Act, which she said gives tech companies “absolute impunity” with regard to content posted on their platforms.

Former Secretary of Homeland Security, Michael Chertoff (Photo: Shahar Azran)

The panel was followed by a discussion on AI’s implications for the upcoming US elections, moderated by Clinton. The technology has created a “much more effective weapon for information warfare,” said former Secretary of Homeland Security Michael Chertoff, discussing its ability to produce targeted misinformation that can reach millions. He spoke about the new danger of “an effort to discredit the entire system of elections and democracy,” and said circulating persuasive videos about rigged elections is “like pouring gasoline on a fire.”

Chertoff highlighted the need to teach critical thinking and evaluation skills so that people can learn to distinguish between deepfakes and real media, and added that watermarking would also help in validating images and videos.

Michigan Secretary of State Jocelyn Benson spoke about working to pass a bill that would make it illegal to use AI to intentionally disseminate deceptive information about elections. The state is also planning to set up voter confidence councils, Benson said, which will be run by community leaders who can provide voters with accurate information. Late last year, Michigan passed a law requiring a disclaimer on all political advertisements made with AI.

Benson said it’s critical to “equip citizens with the tools they need to be critical consumers of information” so that they can stand up against adversaries “creating chaos and confusion and fear” through mis- and disinformation. 

Yasmin Green, CEO of Google's Jigsaw

The third panel of the afternoon focused on the role and responsibilities of tech companies in addressing AI’s threats on their platforms. Yasmin Green, CEO of Google’s Jigsaw, discussed the role of trust in online information and news, especially among younger generations. “It’s not that trust has evaporated,” said Green, citing Jigsaw’s recent research on Gen Z users. “It’s migrated. Trust is much less institutional and much more social, and I think that’s really important as we think about the risk posed by generative AI.”

Clint Watts, head of the Microsoft Threat Analysis Center, said the majority of the threats his team monitors are in video format. Even so, he explained, audio is the medium to worry most about when it comes to AI-generated misinformation: it is easier than video to fabricate, but far harder for listeners to spot the clues that it is fake.

“When you watch a deepfake video, you go, ‘I know how that person walks, I've seen how they talk, that’s not quite how it is,’” Watts said. “Audio, you’ll give it a discount – you'll say, ‘yeah, on the phone, maybe they do sound like that or it's kind of garbled, but maybe.’”

Watts and his team track threats from Russia, Iran, and China in 15 languages. When they first began tracking Russian accounts ten years ago, they were following two networks of individuals working on disinformation campaigns. Today, they’re tracking 70 such networks tied to Russia.    

At Meta, Director of Global Threat Disruption David Agranovich and his team have taken down 250 influence operations around the globe, including operations in Russia, China, and Taiwan. Agranovich suggested social media platforms could use watermarking to verify whether content is authentic, and he emphasized that companies producing AI content should form a coalition to ensure that everything their models produce is identifiable as AI-generated.

“The more we raise the bar across the industry to require companies to be building in these capabilities early, before we get to the point where the bad things have already happened, the more we can actually build meaningful defenses,” Agranovich said. 

“We have an opportunity, now, to start building in these safeguards as the technology's taking off.”

In the fourth panel, participants shared global perspectives on AI’s use in recent elections in Taiwan, Slovakia, and Argentina. As founder of Taiwan AI Labs, Ethan Tu uses generative AI to identify information-manipulation trends on social media and to flag fake accounts that spread false information. Tu said his company observed a spike in troll accounts in the lead-up to Taiwan’s presidential election in January, with coordinated cross-platform activity on Facebook, X, and TikTok.

“We use artificial intelligence to identify these people, who are not actually real people,” Tu said. The disinformation is heavily concentrated in video format, he explained, rather than in text, with “lots of short videos with the same narratives, but different backgrounds, different voices” popping up on YouTube and TikTok. Tu’s team also noted an increase in troll activity following politically significant events, such as when US President Joe Biden said the US would defend Taiwan in the event of a Chinese invasion. When asked if China was the source of the disinformation he tracks, Tu responded that the fake accounts push the same narratives as China’s state-affiliated media, and will “try to emphasize the military strength of China.”

Clinton wrapped up the event with a Spotlight Interview with Eric Schmidt, former chairman and CEO of Google and an IGP Carnegie Distinguished Fellow. Schmidt expressed his concerns about social media platforms’ amplification of misinformation going into the 2024 US election, speaking about how the algorithms reward and spread outrageous and harmful content.  

“Emotion and powerful videos drive voting behavior,” he said, “and the current social media companies are weaponizing that because [the companies] respond not to the content, but rather to the emotion.”

Schmidt called for reform of Section 230 of the Communications Decency Act, which would allow digital platforms to be held liable for the dissemination of harmful false information. Schmidt warned that the issue of electoral misinformation will worsen over the next several years as AI now has the ability to write programs, which means it can generate entire interest groups of people who don’t exist to support a shared cause. 

Schmidt closed the session by telling the audience, “These problems are not unsolvable. This is not quantum physics. The most important thing that we have to understand is this is our generation's problem, it’s under human control.”
