Why Britain’s health tech AI rules may leave it watching from the sidelines
Orlando Agrippa comments on why tightening regulation must not come at the expense of progress for patients
In recent years, a quiet but consequential transformation has been unfolding far above the Earth’s surface.
Thousands of satellites have been placed into low-Earth orbit, forming a dense and expanding digital mesh around the planet.
With further large-scale deployment planned before the end of the decade, this network is not about spectacle or novelty.
It is the physical infrastructure required to support real-time data transmission, automation and artificial intelligence at global scale.
For Orlando Agrippa, founder and chief executive of Sanius Health, the significance is not technological bravado but strategic intent.
“When you see that level of infrastructure being built in such a compressed timeframe, with plans to expand it again within a few years, it tells you how seriously AI is being taken elsewhere,” he says.
“This isn’t a series of pilots. It’s long-term, irreversible commitment.”
That context matters as the UK recalibrates its approach to AI regulation, particularly within healthcare.
After years characterised by pilots, proofs of concept and experimentation, regulators are now focused on removing short-lived, weakly evidenced tools and tightening approval processes.
The rationale is easy to understand. Patients should not bear the risks of immature technology, and the NHS cannot absorb systems that introduce uncertainty into already fragile workflows.
Yet there is a growing concern that this shift is becoming one-directional.
In the effort to protect the system from low-quality AI, Britain risks delaying the adoption of more durable, enabling technologies that are already being embedded elsewhere.
History offers a cautionary parallel. In the late 1990s and early 2000s, the UK was among the global leaders in plant science and agricultural biotechnology.
But as regulatory and political opposition to genetically modified crops hardened across Europe, deployment stalled.
While the US, Brazil and parts of Asia integrated GM technology into food systems at scale, Europe prioritised precaution. UK food prices rose by roughly 25 to 35 per cent in nominal terms over the period, even with consumers partially shielded by imports of GM crops grown overseas.
The economic benefits of productivity gains, investment and export leadership accrued elsewhere, leaving Britain importing technologies its own research base had helped pioneer.
The resonance with AI is striking.
Globally, investment is accelerating. The United States, operating as a single market of more than 330 million people, invested over $65 billion in AI in 2024.
Europe invested less than a quarter of that across all member states combined.
In healthcare, American regulators have authorised hundreds of AI-enabled medical devices, while European approvals remain slower and more fragmented.
“The US deploys technology at population scale,” Agrippa observes.
“You can challenge the outcomes, but not the velocity. Europe is still debating frameworks while others are already iterating.”
From a strategic perspective, he argues, the comparison is no longer transatlantic.
“The real competition is between the US and China. In parts of China, some of the technologies we are still discussing are already considered mature.
“The pace of iteration there is something Europe hasn’t fully adjusted to.”
None of this diminishes the need for regulation.
Healthcare AI has earned scrutiny. Systems trained on biased data, opaque decision-making and insufficient clinical validation pose genuine risks.
Removing tools that fail to meet evidential standards is not anti-innovation; it is responsible governance.
The problem arises when regulation becomes purely subtractive. Innovation does not pause. It reroutes.
Patients already feed clinic letters into general-purpose chatbots to interpret diagnoses or treatment plans.
Clinicians test ambient voice tools to manage documentation burden, often outside formal frameworks.
“What worries me,” Agrippa says, “is that these behaviours are already happening.
“If regulation only removes options without guiding safe adoption, we lose oversight and learning at the same time.”
The UK retains powerful advantages.
The NHS serves more than 65 million people within a single-payer system and holds deep longitudinal data, while its workforce labours under immense pressure that the right technologies could help relieve.
Technologies such as ambient AI could release thousands of clinical hours each year. Decision-support tools could reduce unwarranted variation and improve safety at scale.
“This isn’t a choice between innovation and safety,” Agrippa concludes.
“It’s about recognising that inaction carries its own risk.
“If we regulate with purpose rather than fear, the NHS could recover years of lost ground.”
In a world that is accelerating rather than stabilising, regulation cannot simply organise the present. It must also create space for the future.