Altman said no to military AI abuses – then signed Pentagon deal anyway
Opinion A week ago today, OpenAI CEO Sam Altman said he'd draw the same lines as Anthropic. By that night, he'd signed a Department of Defense deal that included no such AI protections. What's going on here?
We live in interesting times in AI land. First, the Trump administration's self-proclaimed "Department of War" (DoW) Secretary Pete Hegseth demanded AI giant Anthropic include contract language that would give the military the right to use Anthropic's LLMs for "any lawful use." Anthropic had already dropped the clause in its Responsible Scaling Policy that committed it not to train AI models it couldn't guarantee were safe, but that wasn't enough. Hegseth wanted to be able to use Anthropic's Claude for domestic mass surveillance and AI-controlled weapons. When Anthropic CEO Dario Amodei wouldn't agree, Hegseth and Trump stripped the company of all its government contracts.
In the meantime, OpenAI CEO Sam Altman came out in defense of Anthropic's position, saying in an internal memo: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." That was the morning of February 27. That night, Altman signed a deal with the DoW that included no such contractual protections.
It's amazing what a company will do for a $200 million contract despite its endless billion-dollar funding announcements and a burn rate of over $9 billion in 2025.
In a temper tantrum, President Trump called Anthropic "Leftwing nut jobs," but the company is hardly a liberal hotbed, far from it. Indeed, when Trump started the Iran war, the US armed forces were relying on Anthropic AI for their targeting. The problem wasn't that Anthropic was anti-military. It's that Amodei essentially refused to kiss Trump's rump.
Tell me, though, how did OpenAI win this contract in no time flat unless, despite what Altman said, they gave Hegseth and Trump everything they wanted? As reported in The Verge: "The Pentagon wouldn't back down on its desire to collect and analyze bulk data on Americans. If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it's technically legal, then the US military can use OpenAI's technology to carry it out." And you may have noticed that under the Trump regime, pretty much anything – killing citizens, awarding $220 million contracts to friends, and blasting boats suspected of drug running out of the water – is declared to be legal.
Yeah, these are exactly the people I want to trust with an AI-powered domestic surveillance system and killer robots.
It's interesting as well that a few days later, OpenAI announced that its latest model, GPT 5.3, wouldn't be overly defensive or moralize before answering questions. So, I guess if I'm in the DoW and I ask it to target ACLU members in a given zip code for AI-driven drone attacks, it won't blink.
Not everyone is happy about this. In fact, a lot of people are ticked off. Some 34,000 people on the OpenAI subreddit upvoted a post demanding people show that they've cancelled their OpenAI contracts because OpenAI is training a war machine. The most popular post claims: "Greg Brockman, co-founder of OpenAI and the current President, made the largest ever donation to Trump's MAGA super PAC, at $25 million. And Jared Kushner has most of his wealth in OpenAI. In other words, the Trump administration was bribed by a company, OpenAI, into destroying its main competition, Anthropic. This is blatantly corrupt."
Welcome to the United States of Kleptocracy. Nonetheless, this is a deal that OpenAI may yet regret. In the aftermath of this contract and Anthropic's fight with the DoW, Anthropic's revenue has shot up to about a $20 billion annualized run rate. Meanwhile, even before the OpenAI deal happened, Menlo Ventures reported that, for the enterprise market anyway, Anthropic accounted for 40 percent of enterprise LLM spend in 2025, while OpenAI lost almost half of its enterprise share, dipping to 27 percent from 50 percent.
Will ChatGPT continue to dominate the consumer space? Maybe, but let's face it, no one's ever going to profit from being the top chatbot to Joe and Jane User. There's simply not enough money there. That said, on iPhones anyway, Anthropic's chatbots are now more popular than ChatGPT.
So now Altman is publicly claiming that he's improving the just-signed DoW deal. That's not what he's telling his employees. He's telling them that OpenAI has no say in what the DoW does with its AI: "So maybe you think the Iran strike was good, and the Venezuela invasion was bad... You don't get to weigh in on that."
Given that 59 percent of Americans disapproved of the attack in the early days of the war, tying OpenAI's future to a government that seems certain to lose even more support as American casualties mount looks destined to hurt OpenAI's popularity.
Sure, OpenAI will make an additional $200 million in revenue in the short run. In the long run, however, Altman handed Amodei a hammer he'll use to hit OpenAI again and again and again. ®