EDITOR'S COMMENT

Artificial intelligence is everywhere. From IT departments to boardrooms, and from conferences to the corridors of power, the recent excitement around generative AI, in particular, has precipitated a surge in efforts to predict and understand how the upsides of AI can be harnessed.

This excitement has been paralleled by fear of the ‘march of the robots’, with both regulators and technology firms themselves warning of the risks of getting it wrong. Widespread discussion and debate about ethics and standards are understandably tempering enthusiasm for AI’s transformative potential.

But never mind autonomous robots – the ramifications of AI’s power falling into the wrong human hands are surely cause for even greater concern. Just as AI has the power to reshape some aspects of the way we live, work, travel, bank, farm and manufacture, it also carries the potential for misuse by terrorists and violent extremists.

The terrorism threat landscape is constantly evolving, with capabilities, intent and methodologies shifting over time from bombs to contemporary methods including chemical and biological weapons, drones, cyber attacks, vehicles and knives. In a joint report published in May by Pool Re and the Royal United Services Institute, terrorism experts are now warning of the threat posed by terrorists and other violent extremists deploying AI to further their own ends.

Against a backdrop of sharpening inequalities and polarisation, terrorist exploitation of AI creates another ‘perfect storm’, drawing together two of the greatest fears of modern times.

Whether for propaganda production and distribution, supercharged radicalisation or enhanced operational capabilities, there is clear evidence that terrorists and violent extremists are interested in AI and are actively experimenting with it.

According to the report’s author, Dr Simon Copeland, a research fellow in the Terrorism and Conflict research group at RUSI, AI has the potential to provide terrorists and violent extremists with “significant efficiencies” across a broad range of activities and operations, including the planning, facilitation and execution of violent attacks.

Terrorists’ adoption of AI technologies to create and distribute propaganda is among the key concerns of the new report. For now, however, the technology’s ability to mass-produce such content without human oversight is likely to remain limited: despite recent advances, human review is generally still required to weed out material that lacks credibility and to correct mistakes.

Whilst some terrorist and extremist uses of AI, such as AI-facilitated cyber crime, are likely to overlap with those of other nefarious actors, the report suggests that others will be unique to terrorism, inherently linked to the goal of advancing a political, religious, racial or ideological cause.

Despite these risks, Pool Re and RUSI have found little evidence to suggest a “widespread or transformative adoption” of these tools by terrorists in the immediate future, suggesting instead that the process will be incremental.

In the meantime, AI holds promising applications for counterterrorism itself. As the UK’s 2023 Strategy for Countering Terrorism states, AI has the potential to “radically speed up the process of threat detection”, and the security services are already deploying it to sift through the online habits of those consuming terrorist propaganda, helping to assess the risk such individuals present. When it comes to propaganda, however, terrorists have long honed strategies to stay one step ahead of social media companies and of counterterrorism efforts to take down their content.

AI is everywhere – and much like the cyber security ‘cat-and-mouse game’ that emerged in the early days of computing, the AI arms race appears to have begun.



This article was published in the Q2 2024 issue of CIR Magazine.
