The UK’s National Cyber Security Centre has partnered with the US Cybersecurity and Infrastructure Security Agency to create a set of global guidelines aimed at ensuring the secure development of AI technology.
Created by the NCSC and CISA in cooperation with 21 other international agencies, the guidelines are the first of their kind to be agreed globally, with 17 countries agreeing to endorse them. They aim to help developers of any systems that use AI make informed cyber security decisions at every stage of the development process – whether those systems have been created from scratch or built on top of tools and services provided by others.
The guidelines help developers ensure that cyber security is both an essential pre-condition of AI system safety and integral to the development process from the outset and throughout – an approach known as ‘secure by design’.
Lindy Cameron, CEO of the NCSC, said: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up. These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.
“I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyber space will help us all to safely and confidently realise this technology’s wonderful opportunities.”
The guidelines are broken down into four key areas – secure design, secure development, secure deployment, and secure operation and maintenance – and include suggested behaviours to help improve security.