How Elon Musk and the Tech Billionaires Hijacked the State and Our Minds

Polls consistently show that a majority of the British public hold a negative view of Elon Musk. Beyond his businesses and immense personal wealth, the US billionaire is best known for declaring that “civil war is inevitable” in the UK, and for amplifying far-right nativist movements across Europe. But as historian Quinn Slobodian and technologist Ben Tarnoff argue in their new book Muskism: A Guide for the Perplexed, Musk is better understood not just as an erratic culture warrior or an influential billionaire, but as an exaggerated caricature of our broader economy and social settlement.

Muskism, nurtured in the tech fortress of apartheid South Africa and hardened in the gold rush of early Silicon Valley, is a philosophy both cynical and utopian. It amasses vast private fortunes by selling sovereignty – and dependency – to people and states alike. It woos investors with science fiction dreamworlds. And at its extremes, it treats people like bugs in the code, and human life as little more than a system to be engineered by an elite minority.

Slobodian spoke to Byline Times about the state-backed foundations of Musk’s success, the power of technological fairy tales, and how a politics that begins on networks now risks reducing politics – and human connection itself – to a programme to be optimised.

“Elon Musk has been quieter recently – at least he’s not dominating the headlines as much – but it feels like we’re still living in his world. How would you best distil this broader idea of ‘Muskism’?”

Since the late 1990s, a small number of tech firms have come to dominate economic life, and with that, power has concentrated in a very small number of individuals. Even when these companies go public, founders retain extraordinary control through mechanisms like dual-class shares. So the old idea of shareholder democracy has been hollowed out.
That’s what makes a figure like Musk – a self-described “techno-king” – possible. What we’re pointing to, then, is first the way the state has felt compelled to prop up and accelerate the growth of Silicon Valley firms, regardless of the phase – whether it’s social media, generative AI, or now hard tech and military applications. US governments, really since Clinton and Gore, have subordinated the future of the American political economy to the direction and desires of Silicon Valley.

Muskism is the outcome of that. It’s a system in which states, individuals, and households become dependent on tech infrastructure and services to exercise basic forms of sovereignty, or even just to carry out everyday tasks. We’re all, quite literally, inside the “walled gardens” of tech firms from the moment we wake up to the moment we go to sleep – and, in many cases, even while we’re sleeping.

“How did Musk benefit from state intervention? And how does his empire compare to companies like Peter Thiel’s Palantir, embedded in our militaries and health institutions?”

In the book we call it “state symbiosis”. Musk sought to leverage state capacity and state backstops for his enterprises. You see that from the very beginning: the internet itself is state-funded infrastructure. PayPal, which he co-founded with Peter Thiel, depended on the banking system and federal deposit insurance. But we see it most clearly when Musk founds SpaceX in 2002; the company exists largely because of the “network-centric” demands of the Global War on Terror. Musk steps into that space and, in doing so, helps displace legacy military contractors. He does it in part through a particular kind of legal argument about procurement – arguing for open tenders that allow new entrants, rather than defaulting to established defence firms.
What’s striking is that Palantir later uses almost exactly the same argument to secure its own early state contracts.

Then in 2008 Obama advances the idea that one way to achieve national security goals – specifically, disengagement from the Middle East – is to reduce fossil fuel dependency in the US economy. Musk was able to make a persuasive case that Tesla could be the carrier of Obama’s new green automotive future. That helped secure a nearly $500 million government loan at a critical moment – one that effectively saved the company from bankruptcy and set it up for what followed. As these essential elements of national strategy and sovereignty are contracted out, you can see the dependency begin to build. In the book, we call it sovereignty-as-a-service.

“I’ll confess that, as a teenager in the mid-2010s, I read Elon Musk’s biography and fully bought what he was selling. Can you speak to the power – and the profitability – of the science fiction stories he tells?”

Silicon Valley was always a high-risk, high-reward environment. What you’re expected to propose isn’t an incremental improvement, but something that creates an entirely new market. You have to come in and say: this product is going to change everything. So exaggeration – what could be called fabulation – is almost a necessary condition of success. You have to be outlandish.

And Musk is interesting in that context, because he’s not an especially polished or charismatic speaker. Like Peter Thiel, he often seems to fumble, to speak in half-formed thoughts. But that itself becomes part of the mystique for investors backing his ventures.

His promises about Mars, we argue, are often really about Earth. The grand visions tend to underwrite much more immediate, terrestrial projects. So, for example, you raise capital for something like Starship by talking about colonising Mars – building a vehicle large enough to carry people and infrastructure there.
But the actual application, reportedly, is likely to be suborbital delivery for the military. So you have to take these far-reaching speculations seriously – but also ask what more immediate, material outcomes they’re serving. It’s in that space, between the grand narrative and the practical application, that the engine of Muskism really operates.

Around 2015–16, Musk is arguing that the best way to guard against a runaway, malevolent superintelligence is for humans to merge more fully with machines – to become a kind of human–machine symbiote capable of keeping pace with AI development. So the mandate, as he sees it, is to deepen that integration. Neuralink, which follows quite soon after OpenAI – which he also co-founded – fits directly into that vision. And if you listen to how he talks about it at launch, it’s clear this isn’t just about medical applications – helping people with paralysis, for example. It’s conceived as a consumer technology, a way of expanding the interface between humans and networks. He talks about creating a kind of “giant cyborg” – a tightly integrated human–machine system that can operate at a much faster rate than our current, relatively slow interfaces of eyes, fingers, and thumbs allow.

Our argument is that he’s operating with what you might call a cyborg vision of politics. In that view, political opinions don’t originate in the offline world – through material interests and social interaction – and then get expressed online. Instead, politics begins on networked platforms and then moves offline. So if you want to influence politics, you start with the machine. And from that perspective, the acquisition of Twitter – however it was framed at the time as a commercial misstep – starts to look more intelligible.
It has a kind of rationality within this cyborg model of politics, even if it doesn’t always appear that way when taken on its own terms.

“That was the beginning of what he called the ‘woke mind virus’. Where did that idea come from, and how did it play out on ‘X’ and later when Donald Trump invited him into the White House?”

The “woke mind virus” idea sounds absurd, and it’s often treated that way by critics, for good reason. But it actually draws on something more established. The notion of a “mind virus” comes from Richard Dawkins, who, whatever his politics, is a very prominent evolutionary biologist – and the person who coined the term “meme”. Both concepts are built on the same premise: that human interaction operates in a way analogous to network technologies. Ideas aren’t necessarily processed rationally; they replicate through systems in a more mechanical way. That allows bad or delusional ideas to spread, often uncontrollably, and to become very difficult to contain once they’re in circulation. When Dawkins introduced the term in the early 1990s, he was explicitly drawing an analogy with computer viruses, which were then becoming a major concern in an increasingly networked world.

Musk takes up that framework, and it fits quite neatly with his broader, machine-first understanding of politics. The idea is that if you can control the network and eliminate the “virus”, then the system will function properly – people will fall into alignment with your goals. Projects like DOGE were very explicitly framed in those terms. Musk described them, repeatedly on X, as efforts to enter the state and eliminate the woke mind virus from within the federal government. One part of that involved targeting what Althusser would call the “ideological state apparatuses” – higher education, research funding, and institutions tied to soft power, like USAID.
You saw this quite crudely in the way DOGE operatives used tools like ChatGPT to scan for anything associated with DEI, and then automatically flag or remove it. That’s the logic of deletion applied to politics – trying to excise the “virus” at its source.

Because he starts from this idea that politics originates in networks and then moves offline, he ends up seeing people themselves in quasi-digital terms – as contaminants within a system. The “illegal immigrant” becomes, in effect, an embodied computer virus – something that disrupts the optimisation of the system in the same way malicious code disrupts a network. And just as you would clean a codebase, the implication is that you cleanse the territory of these elements to restore proper functioning.

You can see that logic in how he operated at Twitter – becoming fixated on bots, on the idea of ghost employees, and then cutting the workforce dramatically. And then again in government, where a key part of the DOGE agenda was to make databases across different agencies interoperable – so they could be queried and cross-referenced more easily. The goal there was legibility and control: to identify and remove what he would see as “human bots”, which in practice means undocumented migrants.

“Our latest edition looks at the AI industry today and its impact on society. Can you map Muskism on to what we’re seeing there – from notions of a ‘bubble’ to people communing with chatbots?”

You absolutely can. What has motivated Musk and Altman is this idea that you can’t think about your moral obligations as an entrepreneur in terms of a single human lifespan. If you’re asking how your product improves people’s lives tomorrow or next year, you’re asking the wrong question. The real question, in their view, is how what you’re building now might shape human life thousands, even millions, of years into the future. This idea of “longtermism” has fallen out of fashion lately, but I don’t think it’s gone away.
The problem is that this isn’t a timescale that maps onto democratic politics. It’s not how political accountability works. So translating that kind of vision into real political legitimacy has proven extremely difficult. The conditions for it are now breaking down. Energy is getting more expensive, capital is tightening – especially in places like the Gulf – and people are starting to see AI’s effects directly in their lives and workplaces. They’re also seeing the physical infrastructure – eyesore data centres being built in their communities – which makes the whole thing more tangible, and often more contentious.

Musk was previously able to maintain a certain legitimacy through consumer-facing products. He made desirable cars. He ran a social media platform where, at least for some users, there was a visible payoff – monetising engagement, even getting paid for posting. So there was something like a “fan contract”, if not a social contract. People could see what they were getting out of it.

With AI, it’s not clear there’s a contract at all. If you ask people what worries them most, it’s not necessarily job loss – it’s that the technology lies to them. They feel they’re being misled by generated text, images, and video, that they can’t trust what they’re seeing. That’s a very difficult foundation on which to build something as large as a transformation in the mode of production.

“What emerging forces pose the greatest challenge to Muskism?”

We started framing the response in terms of three “Rs”: regulation, replacement, and reimagination. Regulation is the most familiar. That includes a return to something like the Biden-era antitrust and oversight policy. You also see it in the EU, especially around content moderation and data privacy. In many ways, Europe is ahead of the US here, simply because it has regulatory frameworks – like data protection laws – that the US still lacks. But the more ambitious move now is replacement.
There’s a growing push to build alternatives to Silicon Valley products. You can see that in things like the French government moving away from Microsoft systems toward Linux, or in the use of Eutelsat, the European low Earth orbit satellite network, for connectivity on major shipping fleets.

But the one we’re most interested in is reimagination. That goes beyond simply replicating existing systems. It asks more fundamental questions: why are we using these technologies in the first place? How are they shaping human interaction? And in the case of generative AI, how are they shaping creativity in ways that might not be beneficial – and are likely to be destructive?

Our hope is that Musk, as a kind of extreme case, produces a sort of pedagogical shock. That it becomes harder to passively accept dependence on Silicon Valley infrastructures simply because they’re convenient. At a certain point, once someone like Musk openly asserts that technology is political, it forces a broader recognition that it always was. And that opens up space to think differently about what we actually want these systems to do.
