Event Highlight

Former Google CEO Eric Schmidt Discusses AI and Its Impact on National Security

By Katherine Noel
Posted Apr 26, 2024
Eric Schmidt discussed AI and its impact on national security at a March 2024 workshop at SIPA.
Photos by Jay Stout

Eric Schmidt, the former chairman and CEO of Google, discussed artificial intelligence and its implications for geopolitics and national security during a roundtable hosted by SIPA’s Institute of Global Politics (IGP) on March 25. The discussion was moderated by Erica Lonergan, a SIPA assistant professor who was a lead writer of the 2023 U.S. Department of Defense Cyber Strategy.

Schmidt, who is one of IGP’s Carnegie Distinguished Fellows, spoke about the rapid growth of generative AI and the powerful benefits and potential risks of the technology’s development. 

“This notion of having every human have a programmer attached to them is a very big deal,” Schmidt told the group of students, explaining that AI models now have the ability to learn how to program, not just generate text. 


He gave examples of the technology’s ability to process and analyze massive amounts of information, describing how a student working on a research paper could simply ask a system to read 50 books and summarize them all. “Another example is in chemistry; this has a lot of implications,” Schmidt said, explaining that “you can say ‘read all the chemistry papers’ – and the last estimate was there’s 10 million of them – train against it, and it starts generating chemistry…. If you can predict words, you can predict chemical bonds.”

Professor Erica Lonergan moderated a conversation with Eric Schmidt.

The speed at which large language models (LLMs) are advancing raises critical questions about regulation and safety, as policymakers weigh the threats AI poses, from cyberwarfare and biological attacks to the spread of misinformation. While the EU has opted for legislation, passing the AI Act last year, the US has taken a more decentralized approach. “It is highly unlikely that the regulatory structures will keep up with this, it’s happening so fast,” Schmidt said of the pace of innovation and AI governance.

Critics of open-source AI systems – whose code is free and public for anyone to use or build upon, unlike closed-source models, where both the code and the data the models are trained on are private – have raised concerns about the safety of making such powerful tools so easily accessible. Several students asked Schmidt about these national security concerns, citing the use of AI by authoritarian regimes, the building of biological weapons, and the proliferation of misinformation as potential dangers.

Schmidt acknowledged that the risks are significant, but added that some had been “magnified” by critics. “There was a thought that open-source in its current form could extinguish the world, and that’s clearly false,” he said. “The consensus is that, at some future point, open-source will be much more dangerous — but we’re not there now.”

Schmidt also addressed the performance gap between open-source and closed-source, or frontier, AI models, predicting it would continue to widen and concentrate power among a small set of closed-source players.

Students attended a workshop with Eric Schmidt.

He turned to another national security issue: deepfake videos and their use in misinformation campaigns around the globe. “If you want to build a fake video, it’s relatively easy, and you can do it anywhere in the world,” he said, speaking on the power of video to influence people. “When I ran YouTube, I learned a really important lesson, which was nobody cared what you write, but a video will cause people to kill each other.” (Schmidt cited the example of an anti-Islamic short film from 2012.)

The industry has done very little to address deepfakes, Schmidt said, and greater regulatory action needs to be taken. 

Finally, Schmidt discussed the implications of AI for warfare. As AI and drones alter the landscape of combat, he emphasized the necessity of holding people – not machines or technology – responsible in war zones.

“The legal system does not know how to jail a computer, except unplug it,” Schmidt said.

“The legal system can punish the computer by unplugging it, destroying it, and deleting its software, but it doesn’t have a moral compass, so it’s not a deterrent. It would just do it again.”