Artificial Intelligence & Machine Learning , Next-Generation Technologies & Secure Development

US, UK Cyber Agencies Spearhead Global AI Security Guidance

Global Cybersecurity Agencies Say 'Secure by Design' Is Key to AI Threat Mitigation
US, UK Cyber Agencies Spearhead Global AI Security Guidance
New guidance seeks to set universal directions for AI development security. (Image: Shutterstock)

Nearly two dozen national cybersecurity organizations on Sunday urged providers of artificial intelligence systems to embrace "secure by design" and other preventive measures aimed at keeping hackers out from the mushrooming world of AI systems.

The U.K. National Cyber Security Center, along with 22 domestic and global cyber partners, released joint guidance Sunday warning of security risks to AI and machine learning applications that stem from supply chain and models. The UK NCSC and U.S. Cybersecurity and Infrastructure Security Agency spearheaded development of the guidance, which focuses on four key areas: secure design, secure development, secure deployment and secure operation and maintenance.

See Also: GDPR & Generative AI: A Guide for Customers

The guidance, the agencies say, is good for all AI developers but is particularly aimed at AI providers who use models hosted by a third party or who use APIs to connect to a model. Risks include adversarial machine learning threats stemming from vulnerabilities in third-party software and hardware applications. Hackers can exploit those flaws to alter model classification and regression performance and to corrupt training data and carry out prompt injection or data poisoning attacks that influence the AI model's decisions.

Hackers can also target vulnerable systems to allow unauthorized actions as well as extract sensitive information. The guidance describes cybersecurity as a "necessary precondition" for AI safety and resilience.

CISA, in particular, has been on a protracted campaign to evangelize the benefits of secure by design while also warning tech companies that the era of releasing products to the public containing security flaws must come to an end (see: US CISA Urges Security by Design for AI).

The guidelines represent a "strong step" in providing universal standards and best practices for international AI security operations and maintenance, according to Tom Guarente, vice president of external and government affairs for the security firm Armis. "But the devil is in the details."

Guarente told Information Security Media Group that it will likely be easier for countries that co-signed the recommendations to implement the guidance, compared to others that were not included in the development process.

”What about countries who weren’t part of it and not accountable for those guidelines?” Guarente said, noting that foreign adversaries such as China, Russia and Iran have not agreed to any global AI regulations. "That’s the real challenge … how to enforce the guidelines and get buy-in from other countries."

The proposed recommendations include auditing external APIs for flaws, preventing an AI system from loading untrusted models, limiting the transfer of data to external sources and ensuring that training data is sanitized to prevent the AI systems from performing only defined "system behavior."

The guidance is nonbinding and features a wide array of general recommendations, though it "represents a collective acknowledgment of the critical role of cybersecurity in the rapidly evolving AI landscape," said Daniel Morgan, senior director of EMEA & APAC government affairs for the information security platform SecurityScorecard.

"This agreement marks a significant step toward harmonizing global efforts to safeguard AI technology from potential misuse and cyberthreats," Morgan told ISMG, adding that the focus on integrating security in the design phase of AI systems "is particularly noteworthy."

The guidance stresses the need for protecting AI models from potential tampering by using cryptographic hashes or signatures for system validation of AI models and training data. The agencies recommend auditing hardware and critical software components such as models, data, software libraries, modules, middleware and framework.

"We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up," NCSC chief Lindy Cameron said in a statement. She said these measures will ensure that "security is not a postscript to development but a core requirement throughout."

CISA Director Jen Easterly said in a statement that the international collaboration "underscores the global dedication to fostering transparency, accountability, and secure practices" in developing AI systems.

The latest measure comes weeks after the United Kingdom hosted a first-ever global summit on artificial intelligence safety aimed at addressing risks posed by the technology. At the event, 28 participating nations also signed the Bletchley Declaration calling for an urgent global consensus on managing various AI risks (see: UK AI Summit: Aspirations, Benefits and a Lack of 'Doom').

In October, U.S. President Joe Biden released an executive order that set governmentwide standards for agencies already using the emerging technology (see: White House Teases New AI Executive Order).


About the Author

Akshaya Asokan

Akshaya Asokan

Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.

Chris Riotta

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.




Around the Network

Our website uses cookies. Cookies enable us to provide the best experience possible and help us understand how visitors use our website. By browsing inforisktoday.com, you agree to our use of cookies.