Technology Initiative

Led by Maria Ressa and Camille François, IGP's Technology Initiative advances impactful research and resources for policymakers, the tech community, and civil society on some of today's most pressing technology issues, from artificial intelligence and democracy, to online harms, to foreign information operations targeting elections.

Featured News and Events

On the official "road to the French Government's AI Action Summit," Mozilla and Columbia University's Institute of Global Politics are bringing together AI experts and practitioners to advance AI safety approaches that embody the values of open source.

After nine months of intensive work and participatory consultations, the final report of the États Généraux de l’Information (EGI) was officially unveiled in Paris in early October. Camille François, who was appointed by French President Macron to steer the initiative, and Maria Ressa, who played a critical role in shaping the results, traveled to Paris to engage the public and the government around these findings.

In September 2024, IGP and Vital Voices released a report that draws on a case study of the online harassment and abuse of Australian eSafety Commissioner Julie Inman Grant, assesses the state of research on technology-facilitated gender-based violence (TFGBV) as well as recent global legislative and regulatory progress on the issue, and offers practical policy solutions to make women and girls safer online.

Our Team and Contributors

Faculty
Maria Ressa
Nobel Peace Prize-Winning Journalist; Cofounder, CEO, and President of Rappler; Professor of Practice of International and Public Affairs

Faculty
Camille François
Associate Professor of Practice of International and Public Affairs

Contributor
Juliet Shen
Product Lead, IGP Trust and Safety Tooling Consortium

Contributor
Jen Weedon
Researcher and SIPA Adjunct

Contributor
Ludovic Péran
Researcher

Contributor
Nina Jankowicz
Researcher

Contributor
Margot Hardy
Researcher

What We’re Doing

Trust and Safety Tooling Consortium

This ambitious effort maps the state of the art in Trust & Safety (T&S) tooling and identifies concrete pathways toward open-source, modular, interoperable, and scalable tools that bridge AI safety and T&S at this critical moment. The Consortium has assembled a project team of experts from industry, civil society, and academia to conduct a comprehensive assessment of existing tooling capabilities and to better understand the practical needs of users, tech teams, platforms, and other key stakeholders.

Press Release

États Généraux de l'Information

In October 2023, President Emmanuel Macron of France launched the États Généraux de l'Information and appointed an independent steering committee, which included Camille François, to work in coordination with Maria Ressa on addressing the new challenges that artificial intelligence poses to democracies worldwide.

As part of their work on the États Généraux de l'Information, Ressa and François established an AI and Democracy Innovation Lab at IGP to connect researchers from around the world to formulate, design, and test interventions that will aid democratic societies and individuals’ rights.

In September 2024, the steering committee released a final report with 15 core recommendations on topics including tech platform accountability, algorithmic pluralism, and disinformation and foreign interference. Within the report, the Innovation Lab outlined five paths for action at the intersection of AI and democracy: empowering media organizations with technological sovereignty, defending media intellectual property against AI abuses, strengthening platform responsibility for online hate and disinformation, making safety, security, and moderation tools available as open source, and supporting open-source AI and AI in the public interest.

Report | Blog Post

Columbia Convening on Openness and AI: Defining Openness

In February 2024, IGP and Mozilla convened over 40 experts and stakeholders in AI to explore the concept of "openness" in the AI era. This diverse group included representatives from leading AI startups, companies, non-profit AI labs, and civil society organizations. The convening aimed to develop a better framework for what "open" means in the AI era, drawing inspiration from the pivotal role that open source software has played in technology, cybersecurity, and economic growth over the years. It resulted in a published report presenting a framework for grappling with openness across the AI stack, along with a policy brief for policymakers confronting this question.

Blog Post | Report | Policy Brief

Columbia Convening on Openness and AI: Safety

On November 19, IGP and Mozilla brought together developers and builders for an in-person workshop on safety in Open Source AI in San Francisco. The convening aimed to inform a repository of safety tools and approaches to be launched in time for the French AI Action Summit in February 2025.

Blog Post | Convening Readout

Deepfake Image-Based Sexual Abuse

As generative AI tools become more sophisticated and widely accessible, the rapidly advancing technology has given rise to a growing issue: the proliferation of nonconsensual intimate imagery, or “deepfake pornography.” IGP has released cutting-edge research and factsheets on deepfake image-based sexual abuse and convened experts for our community.

Report | Factsheet | Roundtable with Journalist Emmanuel Maiberg

Online Foreign Information Operations Targeting Elections Database

In 2016, a large-scale information operation targeted the US presidential election, catalyzing new fields of study and practice around modern foreign interference. Yet researchers still lack a unified view and analytical framework for making sense of these operations. To put them in perspective and to encourage further research on online information operations, IGP released a comprehensive dataset that expands on existing databases of foreign information manipulation and interference (FIMI), spanning 2014 to 2024 and focusing specifically on elections.

The database will help researchers, journalists, and policymakers contextualize the risks of online information operations ahead of elections worldwide in 2024 and beyond, as well as around global events such as the Olympic Games, and track recently exposed persistent campaigns as they move to new platforms.

Blog Post | Working Paper | Codebook | Background Dataset

Red Teaming

A consensus that AI models ought to be rigorously vetted for safety before their public release is growing both wide and deep. The challenge lies in defining the nature and scope of that vetting. Concepts such as "safety by design" and "red teaming" have gained currency in recent years, but these imprecisely defined notions and buzzwords are too often presented to regulators and the general public as the key to understanding and addressing the complex sociotechnical risks associated with AI.

As policymakers from the European Union to the United States consider mandating such practices, IGP is convening industry leaders and scholars to examine the critical practice of red teaming in AI development and governance, focusing on standardizing methodologies and bridging the gap between trust and safety practices and responsible AI. We seek to develop a common framework and lexicon to ensure rigor, replicability, and accessibility in red teaming exercises across different contexts. This initiative aims to document a "state of the field" in order to inform standardized methods and make them applicable to a wider range of players.
