Anthropic CEO Dario Amodei may follow Mark Zuckerberg and Sam Altman into custom AI chips
AI giant Anthropic is reportedly exploring the design of its own custom chips, a move driven by its rapidly escalating revenue. While still in its early stages, the potential shift mirrors moves by industry leaders like Meta and OpenAI. Anthropic, which has already secured massive compute deals, may be eyeing greater control and cost-efficiency in its AI infrastructure.

Anthropic wants to build its own AI chips. At least, it's thinking about it. Reuters reported Thursday, citing three sources, that the Claude-maker is exploring custom chip design, though the plans are early enough that the company hasn't locked in a specific design or put together a dedicated team. It could still walk away and keep buying chips from Nvidia, Google, and Amazon as it does today. Anthropic declined to comment.

What's pushing this? Revenue, mostly. Anthropic said earlier this week that its annualised run rate has crossed $30 billion, up from $9 billion at the end of 2025. That's a lot of AI inference to run, and at that scale, owning your silicon starts to look attractive.

This comes days after Anthropic signed its biggest compute deal yet

Just before the Reuters report dropped, Anthropic announced a long-term agreement with Google and Broadcom for around 3.5 gigawatts of TPU capacity, coming online from 2027. That's three times what it was using earlier this year. AWS's CEO, meanwhile, confirmed to CNBC that Anthropic's newest models, including Mythos, its most capable to date, are training on Amazon's Trainium chips.

So Anthropic is not exactly starved for compute. It runs Claude across Google TPUs, Amazon Trainium, and Nvidia GPUs, and says it deliberately picks whichever chip fits the workload best. Custom chips would be an addition to that mix, not a replacement.

Meta and OpenAI are already deep into custom silicon; Anthropic would be the latest

The broader trend here is hard to ignore. Meta has been quietly building out its MTIA chip line for training and inference.
OpenAI struck a 10-gigawatt custom accelerator deal with Broadcom in October 2025. Google has its TPUs, Amazon has Trainium, and both have used those chips to power their own AI workloads for years. Broadcom is now a design partner for multiple players, including a fifth customer it hasn't named publicly.

Every major name in AI is moving toward owning at least some of its silicon stack, and that's a problem Nvidia will eventually have to reckon with.

Reuters puts the cost of designing an advanced AI chip at around $500 million. Anthropic is still unprofitable, but tripling revenue in four months changes the conversation.