Trump directs all federal agencies to stop using AI company Anthropic's technology

U.S. President Donald Trump said Friday that he was ordering all federal agencies to stop using Anthropic's technology, following the company's unusually public dispute with the Pentagon over artificial intelligence use in surveillance and autonomous weapons.

Trump's comments came just over an hour before the Pentagon's deadline for Anthropic to allow unrestricted military use of its AI technology or face consequences, and nearly 24 hours after CEO Dario Amodei said his company "cannot in good conscience accede" to the Defence Department's demands.

The president said most agencies must immediately stop using Anthropic technology, but gave the Pentagon six months to phase out technology already embedded in military platforms.

"We don't need it, we don't want it, and will not do business with them again!" Trump wrote.

Anthropic didn't immediately reply to a request for comment on Trump's remarks.

At issue in the defence contract was a clash over AI's role in national security, and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.

The move is likely to benefit Elon Musk's competing chatbot, Grok, which the Pentagon plans to give access to classified military networks. It could also serve as a warning to two other competitors, Google and OpenAI, which also have contracts to supply their AI tools to the military.

If Amodei did not budge, military officials said, they would not just pull Anthropic's contract but also "deem them a supply chain risk," a designation typically stamped on foreign adversaries that could derail the company's critical partnerships with other businesses.
Trump didn't make such a designation in his announcement Friday, but said Anthropic could face "major civil and criminal consequences" if it's not helpful during the phase-out period.

Had Amodei caved, he risked losing trust within the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building AI that, without safeguards, could pose catastrophic dangers.

Anthropic had said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons, uses that would violate the usage policy the company applies to all users of its technology.

Anthropic CEO Dario Amodei at the Code with Claude developer conference on May 22, 2025, in San Francisco. Amodei rejected the Pentagon's ultimatum in a Thursday letter. (Don Feria/Associated Press Content Services for Anthropic)

But after months of private talks exploded into public debate, the company said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."

That came after Sean Parnell, the Pentagon's top spokesman, posted on social media that the military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."

He emphasized that the Pentagon wants to "use Anthropic's model for all lawful purposes," but he and other officials haven't detailed how they want to use the technology.

Parnell also asserted that opening up use of the technology would prevent the company from "jeopardizing critical military operations."

"We will not let ANY company dictate the terms regarding how we make operational decisions," Parnell wrote.

Emil Michael, the defence undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk."

That message hasn't resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic's top rivals, OpenAI and Google, voiced support for Amodei's stand in an open letter late Thursday.

OpenAI and Google, along with Elon Musk's xAI, also have contracts to supply their AI models to the military.

OpenAI CEO Sam Altman speaks during the Microsoft Build conference at the Seattle Convention Center Summit Building in Seattle on May 21, 2024. (Jason Redmond/AFP)

Musk sided with Trump's Republican administration on Friday, saying on his social media platform X that "Anthropic hates Western Civilization" after Michael drew attention to a previous version of Claude's guiding principles that encouraged "consideration of non-Western perspectives."

In a surprise move from one of Amodei's fiercest rivals, OpenAI CEO Sam Altman on Friday sided with Anthropic and questioned the Pentagon's "threatening" move in a CNBC interview, suggesting that OpenAI and most of the AI field share the same red lines.

Amodei worked for OpenAI before he and other OpenAI leaders quit to form Anthropic in 2021.

"For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety," Altman told CNBC. "I've been happy that they've been supporting our warfighters. I'm not sure where this is going to go."