Businesses have been warned to be vigilant of the potential risks from integrating AI-driven chatbots into their business, with research suggesting that they can be tricked into performing harmful tasks.
The National Cyber Security Centre warns that understanding is still limited about the potential security threats posed by algorithms that can generate human-sounding interactions – known as large language models (LLMs).
Specifically, it points to a potential security weakness for LLMs in their vulnerability to ‘prompt injection’ attacks, which is when a user creates an input designed to make the model behave in an unintended way. This could mean causing it to generate offensive content, reveal confidential information, or trigger unintended consequences in a system that accepts unchecked input from the LLM.
In a blog post on its website, NCSC said: “As LLMs are increasingly used to pass data to third-party applications and services, the risks from malicious prompt injection will grow. At present, there are no failsafe security measures that will remove this risk. Consider your system architecture carefully and take care before introducing an LLM into a high-risk system.”
Jake Moore, global cyber security advisor at ESET, added: “The potential weakness of chatbots and the simplicity with which prompts can be exploited might lead to incidents like scams or data breaches. However, when developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it is possible to reduce the impact of cyber-attacks stemming from AI and machine learning.
“Unfortunately, speed to launch or cost savings can typically overwrite standard and future proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware of what they input into chatbots is not always protected.”
Printed Copy:
Would you also like to receive CIR Magazine in print?
Data Use:
We will also send you our free daily email newsletters and other relevant communications, which you can opt out of at any time. Thank you.
YOU MIGHT ALSO LIKE