Insurers need stronger data infrastructure, governance and culture before deploying artificial intelligence to manage customer vulnerability, according to new findings from the Chartered Insurance Institute.
The organisation says that while AI has the potential to improve the identification of vulnerable customers and to tailor support, weak foundations risk worsening outcomes for those most in need. In its view, effective use of AI in vulnerability management depends on reliable data, clear accountability and a focus on customer outcomes rather than operational efficiency alone.
The CII is urging firms to take practical steps to ensure that AI is used responsibly. These include maintaining human oversight of automated decisions, carrying out robust checks on technology providers, testing AI tools before wider deployment, and monitoring outcomes to demonstrate that AI improves support for vulnerable customers.
Matthew Hill, chief executive of the CII, said: “AI can help both businesses and customers reduce the impact of vulnerability, but if it isn’t used properly, it could harm those most in need of additional support. The CII is working across the sector to help businesses make sense of these tensions, developing resources to ensure good customer outcomes can be achieved for all.”
Regulators have signalled that existing rules are sufficient to manage AI risks. The Financial Conduct Authority said current frameworks, including the Consumer Duty and vulnerability guidance, can address AI-related harms when applied correctly. It reiterated its principles-based and tech-positive approach and confirmed it does not plan to introduce prescriptive AI-specific regulation, instead encouraging innovation aligned with the government’s responsible AI principles.