Transparency key to successful AI adoption for financial services

The adoption of AI in financial services decision-making systems without sufficient transparency could undermine public trust, regulatory compliance, and risk management, warns a new report from CFA Institute.

The report, Explainable AI in finance: addressing the needs of diverse stakeholders, examines the growing complexity of AI systems such as those used in credit scoring, investment management, insurance underwriting, and fraud detection. It makes the case for “explainable AI”, a class of techniques designed to make AI decision-making transparent, auditable, and aligned with human understanding.

Dr. Cheryll-Ann Wilson, the report’s author and a senior affiliate researcher at CFA Institute, said: “AI systems are no longer working quietly in the background; they are influencing high-stakes financial decisions that affect consumers, markets, and institutions.
“If we can’t explain how these systems work – or worse, if we misunderstand them – we risk creating a crisis of confidence in the very technologies meant to improve financial decision-making.”

The report emphasises that different stakeholders – regulators, risk managers, investment professionals, developers, and clients – require different kinds of explanations. By mapping specific explainability needs to distinct user roles, the study introduces a framework to embed transparency into AI deployment across the financial value chain.

Among the report’s key recommendations are the development of global standards and benchmarks for measuring the quality of AI explanations, and the promotion of real-time explainability in AI systems used for fast-paced financial decisions.

As regulatory momentum builds, with frameworks like the EU AI Act and the UK’s own regulatory initiatives in development, CFA Institute calls on financial institutions to move proactively. Rhodri Preece, senior head of research at CFA Institute, added: “This is not about slowing down innovation; it’s about implementing it responsibly. We must ensure that AI systems not only perform well but also earn the trust of those who rely on them.”


