UK, US unveil global AI guidelines

The UK and the US have teamed up to establish a set of global safety guidelines aimed at strengthening the cyber security of artificial intelligence (AI) to help ensure that it is designed, developed, and deployed securely.

The Guidelines for Secure AI System Development have been developed by the UK’s National Cyber Security Centre (NCSC), a part of GCHQ, and the US’s Cybersecurity and Infrastructure Security Agency (CISA) in cooperation with industry experts and 21 other international agencies and ministries, including those from all members of the G7 group of nations and from the Global South.

Endorsed by 18 countries and more than a dozen international agencies, the new UK-led guidelines are the first of their kind to be agreed upon globally. They will help developers of any systems that use AI make informed cyber security decisions at every stage of the development process, whether those systems have been created from scratch or built on top of tools and services provided by others.

NCSC CEO Lindy Cameron said: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up.

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.

“I’m proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyber space will help us all to safely and confidently realise this technology’s wonderful opportunities.”

According to a statement by the UK government, the guidelines will aid developers in “ensuring that cyber security is both an essential pre-condition of AI system safety and integral to the development process from the outset and throughout.”

The guidelines are split into four key areas covering the stages of the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance. The UK’s cyber agency said that prioritising transparency and accountability would make AI infrastructure more secure and, in turn, make the tools safer for customers.

“When the pace of development is high, as is the case with AI, security can often be a secondary consideration. Security must be a core requirement, not just in the development phase but throughout the life cycle of the system,” the statement read.
