Pentagon, Anthropic
The Pentagon previously asked Anthropic, OpenAI, Google, and xAI to allow the use of their AI models for “all lawful purposes.” Anthropic put up the most resistance, fearing its models could be used for autonomous weapons systems and mass domestic surveillance.
The Pentagon had kept trying to leave itself small escape hatches in the agreements it proposed to Anthropic: it would pledge not to use Anthropic’s AI for mass domestic surveillance or for fully autonomous killing machines, but only with caveats that preserved those escape hatches.
Shortly after the president banned AI company Anthropic, rival OpenAI announced it had struck a deal with the Defense Department to provide its technology for classified networks.
What began as a policy discussion about the risks of AI in warfare has escalated into a high-stakes standoff involving the Trump administration, the Pentagon and AI firm Anthropic.
Defense Secretary Pete Hegseth has addressed widening concerns that the U.S.-Israeli strikes in Iran could spiral into a protracted regional conflict by declaring: “This is not Iraq.”
Start-up Anthropic and the U.S. military are careening toward a clash over government use of artificial intelligence — and whether it should be allowed to kill.
What a weekend! Anthropic is now poised to sue the Pentagon after being labeled a “supply chain risk,” while OpenAI has its own agreement allowing the agency to use OpenAI’s models in “classified environments.”
Sam Altman posted an internal memo saying that OpenAI is working with the Pentagon to “make some additions” to their agreement.