DEPLOYING AI IN RISK MANAGEMENT

Understanding the risks of AI as it becomes embedded in a multitude of everyday business functions is one thing, but how can it best be brought to bear on risk management? Martin Allen-Smith examines the potential for managing risk at warp speed


True to Luddite traditions, the onset of any new technology is often accompanied by fear of potential job losses amid the prospect of people being replaced by machines. AI has stoked similar concerns and, while in certain sectors this could happen to some extent, the likely outlook for many professions looks to be far more nuanced.

The balance of risk and opportunity is the crucial calculation, and a Future of the Profession study published last year by Airmic and WTW suggests that Millennial and Generation Z members of the association are ready to embrace the promise of AI. Its findings indicate that AI can help equip risk professionals to bring greater value to their organisations as strategic enablers and that, far from the technology replacing the risk manager role, wider fears about the ethical and other risks of AI could reinforce the need for human decision-making and oversight, a role that risk professionals are well positioned to take up.

Julia Graham, CEO of Airmic, says: “We need to recognise the opportunities the future will present as well as the threats it will create. This means identifying the right skills that risk professionals need to navigate their organisations through an AI-enabled future.”

Recent research from SkyQuest Global offers some insight into the extent to which AI is being used ‘hands on’ in risk functions. Some 39 per cent of the organisations it polled consider themselves to be using AI in a risk capacity, with another 24 per cent planning to follow within the next two years. Typical tasks include the traditionally manual functions of auditing reports, due diligence, vendor risk assessments and security questionnaires.

Arguably, some of this functionality is only really scratching the surface of AI’s capabilities.

Wes Cadby, head of risk management at Sellafield, believes that true artificial intelligence in risk management terms is a thing for the future, but suggests augmented intelligence is the here and now. “Augmented intelligence has the potential to revolutionise the traditional role of risk management in organisations. I do believe the days of large brainstorming workshops will soon be defunct as systems and algorithms will identify, assess and advise mitigations on our behalf.

“Across the nuclear sector we are already exploring the use of augmented intelligence through algorithmic assessment of large volumes of historic risk information to better create registers of potential threats and opportunities aligned to specific activities. There is also algorithmic assessment of historical actual financial spend to enable us to better forecast spend patterns in the future.”
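To make that concrete, the sort of spend-pattern analysis Cadby describes might, in its simplest form, look something like the sketch below. It is purely illustrative: the monthly figures, the straight-line trend model and the three-month horizon are all assumptions for the example, not anything Sellafield actually runs.

```python
# Illustrative sketch only: a simple trend forecast over historical monthly
# spend, standing in for the kind of algorithmic spend assessment described
# above. The data, column names and model choice are assumptions.
import numpy as np
import pandas as pd

# Hypothetical monthly actual spend (in GBP thousands) for one activity
spend = pd.Series(
    [410, 395, 430, 450, 445, 470, 490, 485, 510, 530, 525, 550],
    index=pd.period_range("2023-01", periods=12, freq="M"),
)

# Fit a straight-line trend to the historical actuals
months = np.arange(len(spend))
slope, intercept = np.polyfit(months, spend.values, deg=1)

# Project the trend forward three months as a rough forecast of future spend
future_months = np.arange(len(spend), len(spend) + 3)
forecast = slope * future_months + intercept
for m, value in zip(pd.period_range("2024-01", periods=3, freq="M"), forecast):
    print(f"{m}: forecast spend of roughly £{value:,.0f}k")
```

In practice a risk function would reach for richer models and far more history, but even a sketch like this shows how historical actuals can be turned into a forward view of spend without a manual trawl.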

Real-time information gleaned from various regulated and non-regulated media will enable threats and opportunities to be identified and assessed at the right time, with the relevant risk owners notified that they may need to act. “This will begin to reduce the burden on humans to trawl through significant amounts of data, but enable us to provide insights to decision-makers earlier in the process,” Cadby adds.

A report published last year by the Institute of Risk Management – Friend or Foe: The Rise of AI and the Impact on Risk Managers – explored the myriad ways in which the technology could impact on the role of the risk manager as we currently know it.

Central to this is an emphasis on using its data analysis capabilities. By making full use of AI’s machine learning algorithms, the report suggests, risk managers can analyse substantial amounts of data and identify patterns that may indicate a potential risk, and do so much faster and more accurately than with traditional methods such as manual data entry and analysis. Risk identification and assessment will also become faster, drawing on all the locally, geographically, industrially and situationally specific data available, and making inferences where there are gaps.

The report highlights the petrochemical industry as an example, where multivariate factors and scenarios could be used to identify live risks while referencing all past and current data relating to the situation. This will enable risk managers to respond more quickly and effectively to emerging risks.
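In practice, the kind of pattern detection the report describes could start with something as simple as flagging statistical outliers in operational data. The sketch below is purely illustrative: the synthetic readings, feature names and choice of scikit-learn's IsolationForest are assumptions, not a method prescribed by the IRM.

```python
# Minimal illustration of AI-assisted risk identification: flag unusual
# operating records that may warrant a closer look. The synthetic data,
# feature names and use of scikit-learn are assumptions, not the IRM's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical sensor readings for a petrochemical process:
# [pressure (bar), temperature (degrees C)] under normal operation...
normal = rng.normal(loc=[50.0, 180.0], scale=[2.0, 5.0], size=(500, 2))
# ...plus a handful of abnormal records mixed in
abnormal = rng.normal(loc=[65.0, 230.0], scale=[3.0, 8.0], size=(5, 2))
readings = np.vstack([normal, abnormal])

# Train an unsupervised anomaly detector and score every record
model = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = model.predict(readings)  # -1 = anomalous, 1 = normal

print(f"{(flags == -1).sum()} records flagged for review out of {len(readings)}")
```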

The IRM report suggests that by analysing data in real time, risk managers can identify potential risks as they are developing and take action to mitigate them before they become a major problem. As this technology develops, AI may even go further to suggest mitigation strategies or actions and, in some cases, implement them after running multiple simulations to anticipate the effect of the mitigation.
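The "multiple simulations" idea can also be sketched in a few lines: a Monte Carlo comparison of expected loss with and without a proposed control. The probabilities, impact distribution and reduction factors below are invented for illustration, not drawn from the report.

```python
# Hedged sketch: Monte Carlo comparison of annual loss with and without a
# mitigation. All parameters here are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000  # number of simulated years

def simulate(prob_event: float, impact_mean: float, impact_sigma: float) -> np.ndarray:
    """Annual loss: an event occurs with the given probability; if it does,
    its impact is drawn from a lognormal distribution (in £m)."""
    occurs = rng.random(N) < prob_event
    impact = rng.lognormal(mean=impact_mean, sigma=impact_sigma, size=N)
    return np.where(occurs, impact, 0.0)

# Baseline vs. a control assumed to halve likelihood and soften impact
baseline = simulate(prob_event=0.10, impact_mean=1.5, impact_sigma=0.6)
mitigated = simulate(prob_event=0.05, impact_mean=1.2, impact_sigma=0.6)

for label, losses in [("baseline", baseline), ("mitigated", mitigated)]:
    print(f"{label}: expected annual loss £{losses.mean():.2f}m, "
          f"95th percentile £{np.percentile(losses, 95):.2f}m")
```

Comparing the two distributions is what lets a tool anticipate whether a mitigation is likely to be worth its cost before anyone commits to it.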

“We...are looking at some possible tools to help us in delivering more efficient services,” explains Hazel Arthur, risk director and head of risk function at Mott MacDonald. “The key with project risk management is that although we welcome tools that may give us an indication of how much money and time we need to put aside for that rainy day, what really matters is to understand how we can prevent them being needed at all.

“The data out there is mainly on forecast vs. outcome with regards to time and cost, whilst the information on the effectiveness of mitigation is lacking,” she adds, explaining that this current limitation should be weighed against future opportunities, with time and effort invested now to create the data needed to capitalise on AI.

Arthur adds: “We’re using AI internally to support meeting facilitation, allowing risk managers to focus on the discussion rather than the meeting capture. In other areas of the business, our digital teams are building AI bots to support technical excellence, through drawing upon trusted sources of information on process and methodology for example.”

As with all emerging technologies, there are a host of advantages and potential pitfalls to consider when integrating AI more widely within an organisation’s approach to risk management. Ascertaining how these pros and cons weigh up is the key task facing risk managers and their organisations.

Cadby explains: “A significant advantage of AI being used appropriately is the reduction in wasted time and increase in productivity across a risk management function. Better, more timely information provided to the right people will help decision-makers make the most appropriate decision at the time. This can only be a good thing for an organisation wishing to deliver its objectives.”

The potential drawback of the increased use of AI, suggests Cadby, is the human tendency to assume the quality of the output. “We may start to believe that if AI provided the information, it must be correct, whereas we know some large language models contain incorrect information in their outputs. The key limitation is that the quality of the information used to feed the AI will determine the quality of the output received. This could lead to poor decision-making.”

Per Broden, a risk director at Mott MacDonald, adds: “The advantages need to be counteracted by the limitations, a key one being knowledge management; if the data hasn’t been captured, stored, processed well in previous projects, there is limited value in drawing upon it to future forecast. Some of the other pitfalls include biases in the AI that might be difficult to identify and correct, cost efficiency for bespoke projects, and finding people with the relevant skills.”

The profession of the future

Looking to the future, how much further can AI develop as a tool to help businesses to manage and control their exposure to risk? Is it ‘just another tool’ for risk managers or does its potential go much further? Cadby says: “Recognising the current limitation of using AI in risk management regarding quality of input and output, the major benefits of AI will come when AI tells us something we didn’t know, upon which we need to act. This will enable us to identify the real risks we need to manage that don’t sit on any traditional risk register.

“When this becomes reality, an organisation will have the opportunity to become truly resilient to its threats and agile enough to grasp its opportunities. Until then, the role of the risk management function needs to quickly embrace AI as one of the major tools it has to support an organisation in managing its risks.”

Arthur adds: “AI isn’t just another tool but instead a companion, or virtual assistant, for organisations, allowing better analysis of large datasets and supporting decision-making if used well. By freeing up some of the data capture and processing space, risk managers can better focus on culture and communication of risk.”



This article was published in the Q1 2024 issue of CIR Magazine.
