UK consumers use AI widely but trust lags, EY finds
EY research shows 74% of UK consumers have used artificial intelligence in the past six months, even as trust and governance concerns rise.
The findings are based on a survey of 15,000 people across 15 countries, including 1,000 in the UK, and suggest AI is becoming part of everyday consumer and workplace activity. In Britain, respondents reported using AI for customer support, route planning, health-related information, research, content generation and decision support.
Use in everyday services appears to be expanding faster than comfort with more autonomous systems. While nearly three-quarters of UK respondents said they had used AI recently, only 14% said they would be comfortable relying on fully autonomous, agent-led AI.
The gap suggests consumers are more comfortable with AI that assists than with AI that acts independently. People want greater control, accountability and transparency when systems make decisions or take actions on their behalf.
Trust in the institutions handling AI data remains limited. Some 43% of UK respondents said they trust companies to manage AI-related data effectively, while 41% said the same of governments.
Frequent users were not necessarily more reassured by the technology. Scepticism was often highest among educated white-collar workers who use AI regularly, suggesting familiarity with the tools may sharpen concerns rather than reduce them.
Trust gap
Cyber security emerged as one of the clearest pressure points. About 73% of UK respondents said they were concerned about AI systems being hacked or breached, underlining how closely confidence in AI is tied to security and oversight.
At the same time, respondents showed a willingness to use AI where the benefits were clear. UK consumers were most comfortable adopting AI in practical settings, especially where it could improve response times, reliability or value for money.
Some 59% cited improved response times as a reason for being comfortable with AI use, while 52% pointed to better value and 35% to reliability. The results suggest many users judge AI less by novelty than by visible outcomes.
Use in more sensitive sectors was also evident, though often with an emphasis on safeguards. Half of UK respondents said they had knowingly used AI as part of a health or wellness experience in the past six months, while 35% had used it in financial activities, where privacy and consistency are likely to carry greater weight.
Matthew Ringelheim, EY UK and Ireland AI Leader, said: "AI adoption in the UK is rapidly advancing, but trust is not keeping pace with technological capability. Whilst consumers are engaging with AI every day, many still want greater clarity about who is accountable when decisions are made on their behalf.
"This is a critical moment for organisations. As AI systems become more autonomous, trust must be embedded through strong data foundations, clear accountability and visible human oversight. Our research shows UK users want greater control and transparency, reinforcing the need to move beyond AI adoption for its own sake. Organisations that can clearly demonstrate how autonomy is governed, and how people retain meaningful control, will be best positioned to scale AI responsibly and unlock long-term value."
Skills question
The survey also highlighted a shortage of training. Only 23% of respondents said they had received significant training or education in AI, suggesting many people are engaging with the technology with little formal guidance.
That matters because training appears linked to confidence as well as practical use. A better understanding of how systems work, where they can go wrong and when human judgement is needed could help address some of the unease captured in the survey.
The findings point to a more cautious stage in AI adoption in the UK. Consumers continue to use AI across a widening range of tasks, but they are drawing firmer lines around where automation should stop and where human oversight should remain visible.
For businesses, that leaves a mixed picture. Demand for AI tools and services is clearly established, yet broader acceptance of more autonomous forms of AI may depend on whether organisations can show that systems are secure, decisions are accountable and users can intervene when needed.
Ringelheim said: "Alongside trust, skills development plays a critical role in successful AI adoption. As AI tools become more capable, people will need greater confidence in how they're used at work and clearer, practical guidance on how to use them responsibly. Training also better equips users to spot errors, challenge outputs and make more informed decisions about when to rely on AI and when to apply human judgement. Workforce confidence - built through the right skills - will be decisive in turning AI momentum into long-term growth for the UK."