
UK and US develop new global guidelines for AI security

New guidelines for secure AI system development will help developers of any systems that use AI make informed cyber security decisions at every stage of the development process.

Agencies from 18 countries, including the US, have endorsed the new UK-developed guidelines on AI cyber security. The Guidelines for Secure AI System Development, led by GCHQ’s National Cyber Security Centre (NCSC) and developed with the US’s Cybersecurity and Infrastructure Security Agency (CISA), build on the legacy of the AI Safety Summit to establish global collaboration on AI.

In a testament to the UK’s leadership in AI safety, agencies from 17 other countries have confirmed they will endorse and co-seal the new guidelines. The guidelines aim to raise the cyber security level of artificial intelligence and help ensure that it is designed, developed, and deployed securely.

The new UK-led guidelines are the first of their kind to be agreed globally. They will help developers of any systems that use AI make informed cyber security decisions at every stage of the development process, whether those systems have been created from scratch or built on top of tools and services provided by others.

The guidelines help developers ensure that cyber security is both an essential precondition of AI system safety and integral to the development process from the outset and throughout, an approach known as ‘secure by design’. The guidelines are broken down into four key areas: secure design, secure development, secure deployment, and secure operation and maintenance, each complete with suggested behaviours to help improve security.

The guidelines will be officially launched this afternoon at an event hosted by the NCSC, at which 100 key industry, government and international partners will gather for a panel discussion on the shared challenge of securing AI. Panellists include representatives from Microsoft, the Alan Turing Institute, and UK, US, Canadian, and German cyber security agencies.

These guidelines are intended as a global, multi-stakeholder effort to address that shared challenge, building on the legacy of the UK Government’s AI Safety Summit: sustained international cooperation on AI risks.