OpenAI CEO Sam Altman said he misjudged public skepticism toward artificial intelligence and government collaboration following his company’s controversial deal with the Pentagon. In a podcast interview released Thursday, Altman addressed the backlash, acknowledging a "loud" segment of online critics who doubt the government’s adherence to the law. He argued that collaboration on national security, including cyber defense and biodefense, remains essential.
Key Takeaways:
- OpenAI CEO Sam Altman publicly acknowledged misreading public distrust regarding AI and government partnerships.
- The admission follows significant public and internal protest over OpenAI’s February deal with the Pentagon.
- Altman maintains that working with the U.S. government on national security is a necessary obligation.
- He asserts that democratically elected governments, not AI company CEOs, must hold ultimate power over the technology’s future.
Altman Confronts Backlash Over Military AI
In his first public comments on lessons learned from the Pentagon agreement, Altman told interviewer Laurie Segall he had "miscalibrated" the prevailing mood. "There’s at least a group of loud people online who really don’t trust the government to follow the law," Altman said. "And that feels like a very bad sign for our democracy." The deal, which involves deploying OpenAI models on classified military networks, sparked protests from employees and members of the public concerned about AI’s role in warfare and surveillance.
CEO Argues Government Collaboration Is Imperative
Despite the criticism, Altman defended the decision to engage with the Pentagon. "If we don’t help them with, you know, defending the cyber infrastructure of the US, if we don’t help them with the biodefense… I think it’s really bad," he stated. "I think we have to work with the government." He emphasized that the partnership includes safeguards, with adjustments made to ensure the technology is not used to develop autonomous weapons or enable domestic surveillance.
Altman Calls for Government Dominance Over AI Labs
The CEO framed the issue as a fundamental question of power. "One of the most important questions the world will have to answer in the next year is: Are AI companies or are governments more powerful? And I think it’s very important that the governments are more powerful," Altman told Segall. He cited historical government-led projects like the Manhattan Project and the Apollo Program as models, arguing that decisions about national security must be made through democratic processes, not by corporate leaders.
Expert Analysis: "Altman’s comments reveal a central tension in the AI era: the conflict between rapid technological innovation led by private companies and the slower, deliberative oversight required by democratic institutions," explained Dr. Anya Sharma, a technology ethics fellow at the Center for Digital Governance. "His admission of miscalibrating public trust highlights the growing accountability demands placed on tech leaders whose products have profound societal impact. The push for government primacy he advocates represents a significant, if contested, shift in Silicon Valley’s traditional stance toward regulation."
Sources
https://theonion.com/sam-altman-if-i-dont-end-the-world-someone-far-more-dangerous-will/