
The UK and US Collaborate on Establishing New International Standards for AI Security

The UK has introduced the world’s first global guidelines for ensuring the secure development of AI technology, with endorsements from agencies in 17 other countries.

Developed by the UK’s National Cyber Security Centre (NCSC) and the US’s Cybersecurity and Infrastructure Security Agency (CISA), these Guidelines for Secure AI System Development aim to elevate the cybersecurity standards of artificial intelligence, ensuring its secure design, development, deployment, and operation.

The guidelines, created in collaboration with industry experts and 21 international agencies and ministries, including those from G7 nations and the Global South, emphasize a “secure by design” approach. They offer guidance for developers of any AI system, whether built from scratch or on top of tools and services provided by others.

The guidelines are categorized into four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. The multi-stakeholder effort addresses the growing importance of cybersecurity in AI systems, emphasizing the need to integrate security from the outset rather than retrofitting it later.

The guidelines received endorsements from cybersecurity agencies in various countries, including the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, France’s Cybersecurity Agency (ANSSI), and the United States’ Cybersecurity and Infrastructure Security Agency (CISA).

The effort underscores the UK’s leadership in AI safety and builds on the legacy of international collaboration established at the AI Safety Summit. The guidelines were officially launched at an event hosted by the NCSC, featuring discussions on shared challenges in securing AI with industry, government, and international partners.
