The silent coup
On 20 April 2025, an official in the British government emailed their colleagues a story from that day’s Financial Times. The headline read: “UAE set to use AI to write laws in world first”. The officials, all of whom are involved in implementing AI in the running of the British state, read the article with amusement. “We were tempted to say: ‘We got there first,’” one of them told me. But they felt that the UK was “not fighting for the crown of the first AI-written line of legislation”, so they decided not to make public a fact very few people know: text composed by a large language model has made its way into an act of parliament. British laws are already being written by AI.
This is a matter of sovereignty. The software products we refer to as “AI” are all built on advanced “foundational models” from the US and China. This is a technology we do not control, but which plays an increasingly active role at every level of the British power structure. It is part of every conversation, drafting emails between officials, summarising ministers’ briefings and composing speeches delivered in the House of Commons. The Bank of England is using machine learning to inform its decisions on interest rates. The BBC uses AI to redraft articles. Every student at Oxford – where 31 of our previous prime ministers were educated – is now being educated with the help of OpenAI. There is little public understanding of how quickly this technology is moving through the institutions of power, or how enthusiastically it’s being pursued by a government that believes AI software could solve all its problems.
In dozens of interviews with current and former government officials and advisers, technologists and MPs – most of whom asked not to be named, in order to speak freely – I have been told about a quiet handing over of control in the frameworks of advice, intelligence and decision-making that underlie every government decision. This is not just a simple software upgrade. Large language models (LLMs), the software behind AI programs such as ChatGPT, are built to produce answers that will be accepted by users – not to calculate, but to convince. This highly persuasive software, built primarily overseas, is being handed an unknown amount of political power.
In almost every interview conducted for this piece, I asked whether it was paranoid to suggest that the wholesale adoption of AI by our government, public services and wider economy is handing power to models built in the US and China. Even the most optimistic AI advocates agreed it was a reasonable argument. At a technology conference last year, I spoke to a person who had been involved at the highest level in the government’s use of AI. I asked if it worried them that foundational models could reflect the politics of the people who control them – people who have very different political ideas to our elected leaders. My concerns were not brushed off. This person told me about a power struggle between the engineers building AI models, the plutocrats who own them and the politicians who seek to control them. Far from the noise of the public debate, a battle is being fought that could have lasting implications for our politics. “Make no mistake,” this person told me. “This is a war.”
This is not a story about how AI works. It’s not about whether it is going to become sentient, make us rich, or redundant. It is a story about power. It is about how politicians became distracted by a shiny new thing, and failed to understand – or chose not to ask – what it might cost. It is not about whether AI will help itself to your job. It is about whether the people who make AI are helping themselves to your country.
The government’s commitment to AI was agreed, long before it was announced to the public, at a cabinet meeting in November 2024. Keir Starmer’s ministers had spent two months discovering just how difficult their jobs were going to be. The ambitions of opposition were dissolving in the acid reality of spending reviews and budget cuts. Into a pensive silence the science minister, Patrick Vallance, introduced his guest.
Demis Hassabis has been talked about as a genius since he was a child. He began beating adults at chess aged four, wrote his first AI program at 12, was offered a place at Cambridge at 16. He sold his AI company, DeepMind, to Google in 2014 for £400m. A month after that cabinet meeting, he would accept the Nobel Prize in Chemistry. Hassabis had advised No 10 before, when he briefed Rishi Sunak in 2023 about whether AI posed an existential threat. But to Starmer’s cabinet he described it not as a problem for government, but a solution.
Ministers listened to Hassabis describe a vision: as AI transformed the world the state would be transformed with it. LLMs would provide both economic growth and a faster, more productive government, by taking on the administrative duties of civil servants. The fiscal implications sounded incredible. In 2024, the Tony Blair Institute said AI could save the government £37bn a year. The following year, Peter Kyle (then the secretary of state for science, innovation and technology) said AI represented “a £45bn jackpot” for the public sector. Clearly, £45bn is a lot of money. It is enough to run the UK’s entire public transport and justice systems. It would cut government borrowing by a third.
Starmer had already been speaking with Hassabis in private, as well as with the British entrepreneur Matt Clifford (who also advised Sunak on AI), and Tony Blair, whose institute is funded primarily by the foundation of US tech entrepreneur Larry Ellison. In the cabinet meeting, Starmer told his ministers that he believed they should be optimistic about AI’s potential. Simon Case, who was then the cabinet secretary, remembers the shift that took place in the room: “They’d all been in for a couple of months and realised quite how difficult being a minister of the Crown is. So it was real enthusiasm,” he told me. “It was those presentations that made them think, ‘Actually, this is the thing that could deliver. This is the way out of this problem.’”
The meeting did not discuss the details of what these savings implied. But Clifford had already stated, months earlier, that there were no “holes” in government that could conveniently be plugged by AI. The only way to find out if these savings existed would be by “ripping up” and “rebuilding” parts of the state to accommodate it, in a “hard and painful” manner.
In her Mais Lecture this March the Chancellor, Rachel Reeves, outlined the three “big choices” that defined her plan for economic growth. The first two were obvious: a better trading relationship with the EU and a more balanced economy. The third was experimental: the UK should adopt AI – which she called “the defining technology of our era” – faster than any other major economy. This is a project that is already proceeding very quickly. Sitting in the room where Reeves spoke were representatives from AI companies that already have contracts with the government; one told me how his firm’s products are used to analyse new policies for education and defence.
Reeves herself had already presented policy formed with the help of AI to parliament in her June 2025 Spending Review. In it, the first full spending review since 2021 and the foundation of Labour’s plan for public finances, AI was used to analyse departments’ bids for money. A spending review is a contentious political process, a battle between cabinet ministers. Now, software made in other countries helps decide how much our government spends on housing, schools, hospitals and border control.
The Spending Review mentioned AI in 38 different places. It told government departments to make significant spending cuts – “at least” 16 per cent in real terms by 2029-30 – but it also allocated £2bn in new funding to AI, through the AI Opportunities Action Plan. A government adviser who had advocated strongly for AI told me: “I felt reasonably confident that we would get a good outcome.” That “we” refers not only to the department in which that person was employed, but a broader group of AI advocates in the government, backed by lobbyists and think tanks that argue for the state to be rewired by LLMs. One adviser told me the average civil servant was not as “deeply motivated” as AI advocates, whom they consider “a higher calibre of official”, propelled not just by competence but by a belief in the tech revolution.
This is also a story of a struggle for power in Whitehall. As US and Chinese technology becomes more influential in our political system, those who support it have the opportunity to become more influential, too. From the beginning, the AI revolution has been about more than upgrading systems. It has been about redistributing power.
For Dominic Cummings, redistributing power was a career, a life’s work, a guiding principle. The 2016 Brexit vote had been one step in a project he had begun at the turn of the century, when he started to campaign for civil service reform, and which had continued in his battle against what he called “the Blob” at the Department for Education. He wanted to overturn the obstinate lump of government itself, to make it faster, more capable, and AI would become part of this. In July 2019, on his first day in No 10, Cummings wore a grey T-shirt bearing the logo of OpenAI, then a largely unknown company. It would become a global news story three years later when it launched ChatGPT.
On the first working day of 2020, Cummings advertised for “weirdos and misfits” (scientists and technologists) to join him in Downing Street, and to change the way government was run. One of the first to come was the neuroscientist James Phillips. “I always had this image that the government must have all the answers,” Phillips told me. He believed that behind the circus of politics lay a “deep competence”. That impression soon shattered. The experts were “outnumbered”, and “deep technical expertise, or even some sort of familiarity with science and technology, was very often completely absent”.
Cummings hoped to change this. A job advert was posted for a “head of No 10 analytical unit” to build a new data science team called 10DS within Downing Street. The job went to Laura Gilbert, a technologist who had already built systems to understand particle physics, to detect heart disease through a patient’s thumbs, and to predict the behaviour of soldiers. Gilbert’s team began to use data science to inform policymaking. This sounds prosaic, but within Whitehall it was a revolutionary act.
A former senior civil servant explained why it was so controversial. When they entered the civil service in the 1990s, ministers were “prisoners of their officials”. Every piece of paperwork, every phone call, was mediated by the civil service, which had a “monopoly on advice”. Cummings wanted 10DS – who, being nerds, sometimes referred to themselves as the “10DS ninjas” – to inform ministers directly. This altered the balance of power. In front of the Prime Minister, a permanent secretary’s claims might be disproved by data.
The drive for a smarter, faster state did not end with Cummings’s departure in November 2020. In 2022 Henry de Zoete, who had worked with Cummings at the Department for Education and on the Vote Leave campaign, arrived in Downing Street. De Zoete had also worked in Silicon Valley with Sam Altman, the CEO of OpenAI. He was optimistic about the technology, thrilled by its potential, but also concerned by its power. On arrival in No 10 he went to Case to tell him executives from OpenAI and Anthropic had recently visited the White House with a warning: they did not fully understand what their models were doing, or what risk they might represent. De Zoete arranged for the same briefing to be given to the British government. In early 2023 Altman sat down with Sunak and played the prime minister a convincing deepfake of his own voice.
Sunak had spent time in Silicon Valley. He understood this new tech and its uncanny imitation of human language better than most in Westminster. He understood how people felt about it: it was weird and frightening. Sunak oversaw a new AI Security Institute – still regarded as a global leader in the field – and convened a summit at Bletchley Park. Not everyone was on board. One former official remembers Nick Clegg, then the chief lobbyist for Meta (which owns Facebook, Instagram and WhatsApp, and has invested heavily in AI), complaining about the focus on safety. “You’re putting people off AI,” they recall Clegg protesting. “This is really, really bad!”
What was truly bad was that neither Clegg, then deputy prime minister, nor his Liberal Democrat colleague Vince Cable, the business secretary, had done anything about the appropriation of the UK’s one globally significant AI company, DeepMind, when they were in the coalition; Google acquired the company in 2014. As a result, Sunak had to manage a government that had been almost totally unprepared for the arrival of generative AI, in a country that had no underlying “foundational models” of its own. He had no choice but to seek a role for Britain as a convener of discussions, rather than a place that had any say in how the technology was developed. The disruption that Cummings and others had long planned might be achieved, but it would be done with technology that did not belong entirely to us.
A pattern was being repeated. Britain was the first country to begin developing an atomic bomb (an idea conceived at the University of Birmingham) but allowed its programme to be taken over by the US. Today, we don’t own our Trident missiles but lease them from the US Navy. France, on the other hand, developed an independent nuclear deterrent. Today, France has built its own foundational model, Mistral. The UK produced the leading intellect in the AI field – Hassabis – and then stood by as his company was sold to Americans, on whose models we now depend.
The pace of change in Britain was nothing compared to the shift in the US, however. Donald Trump, whose 2024 victory was sponsored by the oligarchs of Silicon Valley, abandoned all pretence of caution around AI development. On his first day in the White House, Trump revoked Biden’s executive order on “AI safety and security”. The new policy was “winning the race” (against China). Sam Altman became “one of Trump’s favoured tycoons”, according to a recent investigation by the New Yorker. A new executive order, “Preventing Woke AI in the Federal Government”, made it clear that the Trump administration intended to intervene in the technology itself, to imbue it with their principles, which would sit behind every one of the answers given to the hundreds of millions of people who use chatbots every day.
This is politically important because AI products are built and tested to be convincing. A recent study tested how persuasive LLMs were in conversations with nearly 80,000 British people. Kobi Hackenburg, who led the study, told me that in persuasion science, conversations are known to be much more effective than static messages. This is why canvassers appear at your door during elections, and why charities pay people to get you chatting in the street. Chatbots offer a huge political opportunity: to have persuasive conversations with millions of people, all at the same time.
Hackenburg and his colleagues established that chatbots are indeed very persuasive, and becoming more so. They use techniques familiar to barristers and debating experts. They also make things up. There are two possible explanations for this: either the model “learns” that “untrue facts are more persuasive” and then uses “facts which are less true” (it starts deliberately lying), or it runs out of accurate information and compensates with less accurate information (it starts bullshitting). To be clear, the chatbot does not have any thoughts or opinions about this; it is designed to find the response most likely to be accepted by a human user. It is a persuasion machine. It just so happens that persuasion is the root of political power.
This issue was brought to Downing Street by the British-Canadian technologist Geoffrey Hinton, shortly after the launch of ChatGPT in November 2022. Hinton, who won the 2024 Nobel Prize in Physics for his work with AI, told the then Cabinet Office minister, Alex Burghart, that “superintelligence” carried huge risks. The government had initially been worried about what one former official described to me as a “Terminator 2 scenario, where chatbots run amok and launch nuclear weapons”. Hinton also saw this as a risk, but he had a more pressing concern. One person present at the meeting said Hinton warned Burghart that Donald Trump, “just from tweeting”, had sent a mob of thousands of people to the Capitol on 6 January 2021. “And no one thinks he’s superintelligent,” Hinton is said to have added. “What if you have a superintelligence on Twitter? What can it get people to do?”
A former UK government official agreed in blunt terms: “I genuinely worry,” they told me, that, “we could end up in a world where AI can persuade people to do anything. And then we’re all fucked.”
For the companies selling AI products to the government, this persuasive power is part of what makes them so valuable. Their ability to imbue models with political tendencies is already being implemented. See for yourself: google “Biden dementia”, and you’ll get an AI summary of the former president’s cognitive issues. Now try googling “Trump dementia”; the AI has nothing to say. (Results of AI queries can vary.)
On 3 January, ChatGPT offered its own example of how chatbots could shape politics. After the US sent forces into Venezuela, ChatGPT denied the attack had happened. “The United States has not invaded Venezuela,” the chatbot told a reporter from Wired. “Nicolás Maduro has not been captured.” It blamed reports of an attack on “sensational headlines” and “social media misinformation”.
Such tendencies are also visible in the “system prompts” that are given to LLMs to determine how they respond. These prompts can contain instructions such as “avoid giving any answers that are woke” (a real system prompt given to Grok, the model owned by Elon Musk’s xAI). Sometimes the rules being given to AI are visible in national laws. China’s DeepSeek – which is used by large numbers of British businesses, including high-street banks – is governed by China’s law on Interim Measures for the Management of Generative Artificial Intelligence Services, which requires products to “respect social mores, ethics and morality” as defined by the Chinese government, and to uphold “core socialist values”.
Last year, I sat in on a demonstration of an AI learning tool used by thousands of people around the world. We were shown the version of the software that is sold to schools in China. An AI-generated avatar discussed topics with a student, and the conversation revolved around how well the city government was doing, and the wisdom of its environmental policies. Here was a teacher with whom no Red Guard would find fault, whose politics would never deviate from those of the state.
But it is not only the people who control the model who can influence its output. In October 2025, researchers at the Institute for Strategic Dialogue tested four of the most widely used chatbots by asking them hundreds of questions, in five languages, that related to Russia’s invasion of Ukraine. Nearly a fifth of all responses cited Russian state media or sources attributed to Russian intelligence. The Russians would not have needed to hack ChatGPT to achieve this. Chatbots are “trained” on vast amounts of text gathered from the open internet – far more words and images than any human could possibly check. This data can be “poisoned” by creating thousands of websites that hold the opinions you want the chatbot to express – websites no human will ever see, but which will influence the chatbot’s responses. It is cheaply done, and because AI companies don’t disclose their training data, it is effectively impossible to determine if it has happened.
The British government’s ability to address these questions of influence has been hampered by a lack of understanding of the technology, and an internal fight over whose responsibility it is. When James Phillips met with a group of MPs in 2021 to warn them that AI models could be imbued with someone else’s politics, he was asked: “What’s AI?”
After ChatGPT launched the following year, the British government preoccupied itself with a new question: who gets to be in charge of this new policy area? One person who was in No 10 at the time told me that a “tense” competition began for funding, staff and power. A farce developed as different groups struggled to appear capable of developing policy for something that was obviously far beyond their control.
Among those competing for influence were the new Department for Science, Innovation and Technology (Dsit), and the new Office for AI within it, and the new minister for AI (a post that was given, in a classic piece of Tory chummery, to a friendly viscount); and the Government Digital Service (GDS); and the Central Digital and Data Office; and the AI Council; and the AI Safety Institute; and the Incubator for Artificial Intelligence. None of these bodies had a clear authority over AI, and none of them was able to tell another government department what to do about it. The GDS staff failed to see why the government needed a minister for AI. It was like having a minister for email, they thought. Dsit officials thought GDS had been “drifting”, and “asleep at the wheel”. Another source told me of their ambivalence towards the Office for AI: “They published nothing in 2023. Nothing.”
For months, many of Dsit’s civil servants had “no office, no email addresses, no kit”, one source said. Rather than understand technology themselves, a source said, senior officials “outsourced technical understanding to universities and institutions”. The advisory system was stocked with professors and vice-chancellors, another told me, rather than people who actually worked in tech companies.
“We wanted the department to feel like a start-up,” one former Dsit adviser complained, but they were told they couldn’t even use the new AI tools that they were supposed to be creating policy for. The reason for this was mutual suspicion: Dsit’s senior officials knew that they shared a server with the Cabinet Office, and they worried that any embarrassing questions asked to the government’s internal chatbot might somehow show up on a screen in Downing Street. (A Cabinet Office source said they “couldn’t see” Dsit’s data, and “wouldn’t have been interested anyway”.)
The farce went unnoticed by parliamentarians, few of whom come from technical backgrounds. Many developed enthusiastic opinions about a technology they hadn’t taken the time to understand. Some began using AI to write emails to constituents, and speeches in the House of Commons. MPs began playing “ChatGPT bingo”, listening for the familiar words and cadence of chatbot text. The former security minister Tom Tugendhat told me he spent three hours listening to “speech after speech” that contained the same telltale phrases until, infuriated, he rose and accused his fellow MPs of reading “ChatGPT-generated press releases” that began, revealingly, with “I rise to speak” – a phrase the chatbot includes if you ask it to write a political speech, because it is used in Congress. Unwittingly these MPs were demonstrating where the power in the AI revolution lies.
The confusion that has developed in Westminster over AI is a lobbyist’s dream. A government that does not understand a technology and is too busy fighting itself for control of it will never effectively regulate it. Last summer, in the courtyard of a Westminster café, a lobbyist for one of the world’s biggest tech companies told me they thought any meaningful AI regulation is a distant prospect. It took Westminster two decades to even begin regulating social media, and a small minority of parliamentarians – the lobbyist guessed about 30 people – have any real understanding of AI.
Meanwhile, there are plenty of vested interests willing to help out: my research found more than £476m in government contracts awarded for consultancy services relating to AI, mostly since 2022, and 60 members of the House of Lords who have a declarable interest in an AI company. The revolving door between the AI industry and the institutions of public power is well oiled. The Competition and Markets Authority, which regulates tech companies, is run by Doug Gurr, the former head of Amazon UK. The BBC, which informs the public about technology, is now run by Matt Brittin, who spent 18 years as a Google executive. Sunak has just taken up advisory roles at Microsoft and Anthropic; George Osborne has taken a job at OpenAI.
What these interests tell our government is that the UK is a small market in which the giants of Silicon Valley would like to invest, but one they can also afford to avoid – a retreat that would cost the UK the data centres planned by US companies. AI promises the two things Starmer’s government wants most of all: economic growth and the Cummings dream – shared by his successor, Morgan McSweeney – of a disruptive, fast-moving state that can get things done. Last year, a person who was then one of Downing Street’s most senior political appointees told me that realising these promises would inevitably involve trade-offs against “other priorities” such as “energy usage… planning permission… skills development, and obviously some of the more controversial areas like copyright”.
The Downing Street spokesperson conceded that while “you’ve got to develop as much of your own capability as possible”, resistance was futile. “You can’t seal your borders off,” they said. “It’s not without risk but it’s not really in our hands whether it’s developed or not, is it?” Britain is set on a specific path: “You have to have stronger partnership [on AI] with the US.” If not, we will be left behind in a great race for transformation.
When I asked what form this change would take, one senior AI adviser to the British government used the term situational awareness – a phrase they said I would find not in the mainstream media, but in the writing of Leopold Aschenbrenner.
Aschenbrenner was fired by OpenAI in April 2024. He had joined the company shortly after leaving university, and worked there for about a year. His time there convinced him that by the end of this decade, “billions of vastly superhuman AI agents” would entirely remake the global economy and geopolitics. In an essay that has been read by almost everyone in Silicon Valley, Aschenbrenner wrote that an LLM is a “primordial force”, a “demon”, a deus ex machina in the most literal sense. By the 2030s, he wrote, the successors to ChatGPT would make the US military “obsolete”. The world would be run by superintelligence, and the only humans with any control over it would be “a few hundred researchers” working on what he calls “the Project” in a “secure location” in the US. These philosopher-kings would be the unassailable architects of a new world.
A lot of very influential people really believe this is going to happen, and soon. The venture capitalist Marc Andreessen published a similarly divinatory “manifesto” in October 2023: “I am here to bring the good news,” Andreessen wrote: “Our descendents [sic] will live in the stars.” Andreessen casts AI scepticism as evil: questioning whether an AI company should help itself to NHS data, for example, is committing “a form of murder”, because it might delay AI-powered medicine. Saving energy, too, is morally wrong – “energy should be in an upward spiral”, he writes – because producing the new god is energy intensive.
Techno-libertarians have long been impatient for the future to arrive. “We wanted flying cars, instead we got 140 characters,” as Peter Thiel has put it. Now, for the first time, the establishment is broadly on their side. Ryan Wain, senior director of policy and politics at the Tony Blair Institute, told me the AI revolution is no “tech fantasy” but “a transformation”, to be embraced without hesitation.
But the truly bold decisions are being taken elsewhere. The US is aggressively pursuing political control of AI: Anthropic, the AI company that resisted the Trump administration’s demands, has been declared a “supply chain risk”, a designation normally applied to Chinese companies. The US is also investing far more aggressively. The head of one British AI company told me that against the trillions being gambled on the technology by Wall Street, the investments the UK government is making appear “laughable” and “pointless”.
When I asked Simon Case if he saw a problem with AI writing UK law, he asked me “why on Earth would you resist a technology” that could “save vast amounts of time” by writing legislation for us? “Why wouldn’t you do that?” The answer is to consider what an LLM is for. Its purpose, inarguably, is to take away at least some of the work of reading and writing. This can be very useful, but it comes at a price.
Emily Bender, professor of computational linguistics at the University of Washington, explained it to me like this: “Writing is thinking. Reading is thinking.” Politics is the business of reading and writing, speaking and thinking. In the making of laws, it is important that those in power take the opportunity to think. Legislation, Bender said, is written “to have an impact on the world now, and into the future”; a court case decades hence might hinge on the meaning of a single word. “You want those words to be chosen with utmost care.”
Writing a law is not something for which there is a technological solution. It is not a perfectible process, it is a moral act that requires belief and responsibility. It is a process of debate. As MPs, advisers and lobbyists know, the real business of our constitution happens in the background – in emails, notes, agendas. If everyone involved is asking the same software to condense emails and write replies, if they are reading research and updates and memos composed by the same software, that software increasingly assumes the power of the people who previously did the thinking. Reading is thinking, and writing is thinking, and thinking is power. And when the inefficiencies of human thought, deliberation and opinion are cleared aside, we are left asking: who is in charge?