The National Cyber Security Centre has shared advice cautioning cyber security professionals against comparing prompt injection and more classical application vulnerabilities classed as SQL injection.
The NCSC advises that, contrary to first impressions, prompt injection attacks against generative artificial intelligence applications may never be totally mitigated in the way SQL injection attacks can be. Unlike SQL mitigation techniques, which hinge on enforcing a clear separation between data and instructions, prompt injection exploits the inability of large language models to distinguish between the two.
Without action addressing this misconception, the NCSC warns, websites risk falling victim to data breaches exceeding those seen from SQL injection attacks in the 2010s, impacting UK businesses and citizens into the next decade.
Backing proactive adoption of cyber risk management standards, the NCSC challenges claims that prompt injections can be ‘stopped’. Instead, it suggests efforts should turn to reducing the risk and impact of prompt injection and driving up resilience across AI supply chains.
As AI technologies become embedded in more UK business operations, the NCSC calls on AI system designers, builders and operators to take control of manageable variables, acknowledging that LLM systems are “inherently confusable” and their risks managed in different ways.
Printed Copy:
Would you also like to receive CIR Magazine in print?
Data Use:
We will also send you our free daily email newsletters and other relevant communications, which you can opt out of at any time. Thank you.







YOU MIGHT ALSO LIKE