Earlier this year, Mark Read, the head of WPP, the world’s biggest advertising group, was the latest target of an elaborate deepfake scam that involved an artificial intelligence voice clone.
Criminals created a fake WhatsApp account using a publicly available image of Read and used it to set up an online video meeting that appeared to involve him and another senior WPP executive. With the help of a voice clone of Read, the fraudsters attempted to extract money and personal details.
While ultimately unsuccessful, this is one of a growing number of fraud attempts using deepfakes. Not long before the WPP incident, a member of the Hong Kong team of UK engineering firm Arup was tricked into transferring HK$200m (around £20m) to fraudsters’ accounts. The individual was targeted by fraudsters posing as the company’s chief financial officer, and was even invited to a video conference in which all the other participants were generated by deepfake technology.
These deepfakes used library footage of the CFO and other colleagues of the staff member, combined with AI-generated audio, to create an impression convincing enough to remove any doubt around the validity of the request to transfer the money.
In the case of vishing (voice phishing), as little as 30 seconds of audio is enough to capture a digital fingerprint of a voice. The clone can then be used to hold a live conversation with the target, with the fraudster simply typing responses on a keyboard to be spoken aloud in the stolen voice.
The Hong Kong incident demonstrates how far the technology, which first surfaced only in 2017, has come. The sophisticated approach included laying the groundwork for the scam through WhatsApp messages, emails and other one-to-one conference calls with various staff members. This set the scene and lent legitimacy to the eventual deepfake video conference. It has been reported that the employee was initially sceptical of the pre-staged communications, but was sufficiently reassured by the presence of recognisable colleagues on the call to release the funds.
As businesses’ defence mechanisms become more sophisticated and file security more routine, the effectiveness of other types of cyber attack, such as ransomware, diminishes. Deepfake-enabled fraud is therefore likely to become ever more sophisticated and more common, particularly against cash-rich firms.
Medium-sized financial services firms are a particularly attractive target for these financially motivated attacks: they are perceived as wealthy businesses, but lack the robust cyber defences of much larger organisations.
There is little tooling available to help protect against these types of attack, because of the humanised nature of the fraud. Traditional phishing emails and messages are easier to detect and leave digital signatures that cyber detection tools can identify. Voice and video deepfakes, by contrast, exploit our human tendency to help and assist others, and to trust and be persuaded by what we see and hear, especially when it appears to come from a known or trusted source.
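To illustrate that asymmetry, consider what an email filter can actually check. A phishing email carries machine-readable artefacts such as SPF, DKIM and DMARC verdicts in its headers; a cloned voice on a live call leaves no equivalent artefact to inspect. A minimal sketch using Python’s standard library (the message, domains and header values below are invented for illustration, not taken from the incidents above):

```python
from email import message_from_string

# Invented example message; the domains and verdicts are illustrative only.
RAW_EMAIL = """\
From: "Chief Financial Officer" <cfo@examp1e-corp.com>
To: finance-team@example.com
Subject: Urgent wire transfer
Authentication-Results: mx.example.com;
 spf=fail smtp.mailfrom=examp1e-corp.com;
 dkim=none; dmarc=fail header.from=examp1e-corp.com

Please process the attached payment before close of business.
"""

def failed_auth_checks(raw: str) -> list[str]:
    """Return SPF/DKIM/DMARC verdicts that indicate spoofing risk."""
    results = message_from_string(raw).get("Authentication-Results", "")
    return [
        f"{check}={verdict}"
        for check in ("spf", "dkim", "dmarc")
        for verdict in ("fail", "softfail", "none")
        if f"{check}={verdict}" in results
    ]

print(failed_auth_checks(RAW_EMAIL))
# -> ['spf=fail', 'dkim=none', 'dmarc=fail']
# A deepfake video call offers no comparable header to parse: the only
# "signal" is a human judging a familiar face and voice in real time.
```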
As this problem becomes increasingly mainstream, we expect to see insurers respond with policies that include specific coverage for losses related to deepfake attacks, including direct financial losses, extortion and reputational damage. We also expect to see definitions of covered cyber incidents expanded to explicitly mention deepfake technologies and their potential impacts.
Given the rising sophistication and potential damage from deepfake attacks, premiums for cyber insurance policies are likely to increase. Meanwhile, policies might also come with higher deductibles to account for the increased risk associated with these attacks.
The claims process may become more rigorous, with insurers requiring detailed evidence and analysis to substantiate claims related to deepfakes. And we may also see the introduction of specific sub-limits for deepfake-related claims to manage the potentially high costs.
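A simple worked example shows how such a sub-limit and a deductible might interact; all figures here are hypothetical and not drawn from any real policy:

```python
def recoverable(loss: float, deductible: float, sub_limit: float) -> float:
    """Insurer payout for a deepfake-related claim: the loss less the
    deductible, capped at the deepfake sub-limit (never below zero)."""
    return max(0.0, min(loss - deductible, sub_limit))

# Hypothetical policy: GBP 100k deductible and a GBP 1m deepfake
# sub-limit sitting inside a GBP 5m overall cyber limit.
loss = 2_000_000.0
payout = recoverable(loss, deductible=100_000.0, sub_limit=1_000_000.0)
print(payout)  # -> 1000000.0
# The insured bears the GBP 100k deductible plus a further GBP 900k of
# uncovered loss, even though the overall GBP 5m limit is untouched.
```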
A further significant implication could be the impact of deepfakes on war clauses in cyber insurance policies, which traditionally exclude coverage for damages arising from acts of war, including cyber warfare. Because deepfakes can blur the line between state-sponsored cyber attacks and other forms of cyber crime, obscuring the true origin of an attack, it becomes harder to determine whether an incident falls under an act of war.
As deepfakes become more widespread and sophisticated, a joined-up approach will be needed to stop the threat in its tracks, encompassing technology, insurance, risk management and, importantly, a culture that is wise to the threat.
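On the risk-management and culture side, one widely used control is out-of-band verification: no payment instruction received over email, chat or a video call is released until it has been confirmed through an independent, pre-registered channel. A minimal sketch of such a rule (the threshold, channel names and the `confirm_via_callback` stub are all hypothetical):

```python
from dataclasses import dataclass

CALLBACK_THRESHOLD = 10_000  # hypothetical limit: GBP 10k

@dataclass
class PaymentRequest:
    amount: float
    requested_by: str   # identity claimed on the call or in the message
    channel: str        # e.g. "email", "video_call", "whatsapp"

def confirm_via_callback(request: PaymentRequest) -> bool:
    """Hypothetical stub: in practice, call the requester back on the
    number held in the staff directory (never a number supplied in the
    request itself) and confirm the details verbally."""
    print(f"Callback needed: GBP {request.amount:,.0f} "
          f"requested by {request.requested_by} via {request.channel}")
    return False  # safe default: release nothing until a human confirms

def may_release(request: PaymentRequest) -> bool:
    """Small payments pass; anything larger needs independent
    confirmation, however convincing the original channel appeared."""
    if request.amount <= CALLBACK_THRESHOLD:
        return True
    return confirm_via_callback(request)

req = PaymentRequest(amount=200_000, requested_by="CFO", channel="video_call")
print(may_release(req))  # -> False until the callback succeeds
```

The design point is that the control makes no attempt to detect the deepfake itself; it simply removes the spoofable channel from the approval path, so that even a flawless clone cannot release funds on its own.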