CII sets out groundwork for responsible AI adoption

A new roundtable report from the Chartered Insurance Institute (CII) has set out the foundations firms need in place before deploying artificial intelligence (AI) to manage customer vulnerability. The report argues that effective use of AI in this area depends on robust data infrastructure, clear governance frameworks and a supportive organisational culture. Without these prerequisites, it warns, AI risks exacerbating harm rather than improving outcomes for customers in vulnerable circumstances.

Based on discussions held at a roundtable hosted by the CII in September, the report examines the potential benefits and risks of using AI to identify and support vulnerable customers across insurance and financial services. It stresses that responsible implementation is essential if AI tools are to enhance, rather than undermine, consumer outcomes.

At the event, the Financial Conduct Authority (FCA) reaffirmed its principles-based and “tech-positive” regulatory approach. The regulator said existing frameworks, including the Consumer Duty and its guidance on vulnerability, are sufficient to manage AI-related risks. The FCA does not plan to introduce prescriptive AI rules and continues to encourage innovation that aligns with the UK government’s five cross-economy responsible AI principles.

Participants agreed that AI should augment, not replace, human judgement. Firms were urged to prioritise consumer outcomes over efficiency gains, carry out rigorous due diligence on vendors, pilot and test solutions thoroughly, maintain transparent decision logging and monitor outcomes to demonstrate positive impact for vulnerable customers.

CII chief executive Matthew Hill said AI had the potential to reduce the impact of vulnerability for both customers and firms. However, he warned that poor implementation “could harm those most in need of additional support”.
The report also calls for greater sector collaboration, including the development of shared resources and the possible use of independent certification to build trust in AI-enabled services.