
AI & Democracy Innovation Lab

Part of the “États Généraux de l’information,” an initiative launched by French President Emmanuel Macron to address how artificial intelligence poses new challenges to the world’s democracies, and cutting across three of SIPA’s Global Challenges (Geopolitical Stability, Democratic Resilience, and Technology and Innovation), the AI & Democracy Innovation Lab connects researchers from around the world to formulate, design, and test interventions that will strengthen democratic societies and protect individuals’ rights.

Leadership

The Innovation Lab is led by SIPA Lecturer and IGP Affiliated Faculty member Camille François, an expert in Trust and Safety and responsible technology, and Maria Ressa, an inaugural IGP Distinguished Fellow and 2021 Nobel Peace Prize laureate recognized for her efforts to promote freedom of the press.

Scope and Objectives

The Lab’s purpose is to address the multifaceted challenges and opportunities that AI technologies pose to democratic institutions and norms. It aims to provide a robust platform for researchers, policymakers, and technologists to collaborate on solving these urgent issues.

 


Workstream 4

Generative AI, Disinformation, & Electoral Integrity


The 2016 US presidential election served as a sobering wake-up call that showed how social media can harm electoral integrity. Since then, technology companies have worked to build systems, partnerships, policies, and tools to mitigate the spread of disinformation and deceptive behavior during elections. Unfortunately, economic and political trends now imperil that progress even as the growth of AI creates new risks, all at a time when important elections loom for some of the world’s largest democracies.

Work in this space will be a lasting focus of the Lab. We aim to bridge divides among industry experts; share best practices and potential strategies for moving forward; and provide technologists and policymakers with frameworks they can use to make transparent and ethical decisions. All told, this multidisciplinary approach at the intersection of AI, policy, and ethics is designed to accelerate the development of robust, actionable frameworks for handling the challenges posed by generative AI in electoral contexts.

 

  • In 2016, the US presidential election served as a sobering wake-up call for Silicon Valley about the role social media can play in undermining electoral integrity. Since then, technology companies have built systems, partnerships, policies, and tools to mitigate the spread of disinformation and deceptive behavior during elections. However, two alarming trends have emerged. First, the current climate of cost-cutting and political debate around election integrity risks undoing some of the genuine progress made in this space in recent years; in particular, the teams, procedures, research, and tools needed to tackle demonstrable attempts to undermine electoral processes are under pressure.

    Second, we are in the early stages of a seismic shift in the threat landscape due to the advent of generative AI technologies. These two intertwined issues come at a time of rising global instability and a year teeming with critical elections around the world.

    Finally, global trends show that 2024 is the tipping point for democracy. Last year, V-Dem assessed that 60% of the world’s population lived under autocratic rule; this year, that figure rose to 72%. There are 65 elections in 54 countries in 2024, including in the world’s three largest democracies (India, Indonesia, and the United States) and in critical regions such as Taiwan and the EU. The question we must ask is: how can we prepare for what lies ahead?

Workstream 1

Leveraging open source AI for greater transparency and accountability



A pivotal, urgent battle is unfolding that pits proponents of open and accessible AI models against those who caution against wide dissemination of such technologies. In this complex landscape, the former camp offers little consensus on what open source actually means, while the latter camp argues over stakes seen as alternately minor and existential.

The broader question is whether the burgeoning powers of AI will be concentrated among a few corporate giants or shared in a democratic, transparent, and responsible fashion. Because defining openness in AI is a complex task (the concept reflects a dense spectrum of options, not a binary choice), we will actively involve scholars and technologists who are already working on open source and safety in AI, and explore possible mitigations that address a wide range of conditions.

Ultimately, we posit that democratic societies need not make a hard choice between fostering a competitive technological landscape and ensuring the safety and ethics of these technologies.

 

  • Currently, a pivotal and urgent battle is unfolding that pits proponents of open and accessible AI models against those who caution against wide dissemination of such technologies. The landscape is complex: within the open source proponent camp, there is no agreed-upon nomenclature for what open really means, what an open source stack looks like, or whether anyone is really achieving it. And within the critics’ camp, some point to the risk of misuse while others cite existential doom.

    What’s at stake is far-reaching: the question is whether the burgeoning powers of AI will be concentrated within a few corporate giants or whether these capabilities will be democratized. The question is not just academic; it directly influences whether open source AI can provide researchers and advocates with tools that enable transparency and greater understanding of these technologies, responsible innovation, and solutions to problems not necessarily on the radar of large Silicon Valley firms. The urgency is amplified by employee movements within Silicon Valley raising red flags about the potential security threats of open source AI models. Here, the policy debate is critical but lacks solid grounding: while the technology industry has established definitions and a shared understanding of what open source means in the realm of software, that lexicon and shared understanding have yet to be developed for open source AI.

Workstream 2

Generative AI and its Impact on Content Moderation



For the past decade, policymakers have grappled with the challenge of understanding and regulating the ever more complicated systems used for content moderation. Existing frameworks like the EU’s Digital Services Act serve as the primary bulwarks against the dissemination of harmful content such as hate speech, disinformation, and exploitative material. But those frameworks are premised on the current state of moderation technologies, even as an unanticipated transformation is brewing within the tech hubs of Silicon Valley.

The advent of Generative AI is poised to introduce a fundamentally new paradigm for content moderation, bringing with it sweeping implications not only for transparency and accountability in moderation mechanisms but also for our understanding of the very meaning of content moderation. In response, we aim to provide a lucid, comprehensive view of both sides of this pending change.

 

  • For the past decade, policymakers from Europe to the United States have grappled with the challenge of understanding and regulating the ever more complicated systems platforms use for content moderation. Existing frameworks, like the Digital Services Act (DSA) in the European Union, are largely premised on the current state of moderation technologies. These are the primary bulwarks against the dissemination of harmful content, such as hate speech, disinformation, and exploitative material. Yet an unanticipated transformation is brewing within the tech hubs of Silicon Valley. The advent of Generative AI is poised to introduce a fundamentally new paradigm for content moderation, one that brings both challenges and opportunities for the Trust and Safety field. This shift carries sweeping implications for transparency and accountability in moderation mechanisms, as well as for our understanding of the very meaning of content moderation. The need to comprehend and adapt to this change is immediate, and given the pace of innovation, almost nothing is yet available on this new content moderation paradigm to help guide the policy conversation.

Workstream 3

Decoding Red Teaming and Escaping the Quagmire of Safety Theater


The consensus that Generative AI models ought to be rigorously vetted for safety before their public release is growing wide and deep. The challenge arises in defining the nature and scope of this vetting. Concepts including “safety by design” and “red teaming” have gained currency in recent years, but these imprecisely defined notions and buzzwords are too often presented to both regulators and the general public as the key to understanding and addressing the complex sociotechnical risks associated with AI.

As policymakers from the EU to the United States consider mandating such practices, we aim to convene industry leaders and scholars to develop a nuanced taxonomy of these AI-related assessments and to position them alongside specific design levers for risk mitigation. The ultimate goal is to synthesize our insights into a set of best practices that can contribute to roadmaps for robust and meaningful safety assessments of AI models.

 

  • The consensus that Generative AI models ought to be rigorously vetted for safety before their public release is growing wide and deep. The challenge arises in defining the nature and scope of this vetting. In recent years, the concept of “safety by design” has gained traction among regulators and industry leaders, and the cybersecurity-inspired approach of “red teaming” has gained currency as a go-to method for stress-testing AI models to fathom their various safety implications. However, the term “red teaming” has been applied to a plethora of diverse methods, contributing to semantic dilution and leading to what might be termed “safety theater”: a precarious situation wherein a buzzword is presented to regulators and the general public as a cure-all for understanding the complex sociotechnical risks associated with AI. This is particularly critical now, as policymakers from the European Union to the United States are inching closer to mandating such assessments. A granular, methodologically sound understanding of what it truly takes to examine these models for their societal and democratic implications is imperative.


For additional details, please get in touch with us.