
The world of technology has taken a major step forward in securing artificial intelligence (AI). On Sunday 26 November, the United States, the United Kingdom, France and more than a dozen other countries signed a historic agreement aimed at strengthening cybersecurity in the field of AI.

This agreement, the first of its kind on an international scale, focuses on protecting AI against potential threats by promoting the concept of “security by design”, meaning that AI systems are built with security measures from the outset rather than added afterwards.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK’s National Cyber Security Centre (NCSC) played a key role in publishing new guidelines for the development of secure AI systems. These guidelines, adopted by 18 countries, aim to raise the level of cybersecurity in AI by ensuring that the technology is designed, developed and deployed securely.

In France, ANSSI, the national cybersecurity agency, is one of the signatories, although it has not yet issued an official communication.

This initiative comes shortly after the first global summit on AI safety at Bletchley Park, where the “Bletchley Declaration” was signed by 28 countries, highlighting the potential risks of AI to humanity.

For businesses, this international agreement represents a crucial step forward. It guides not only how AI is developed, but also how it is used ethically and safely. By adopting the agreement’s principles, companies can strengthen their own systems and ensure that their use of AI remains secure.

Although it mainly contains general recommendations, the agreement establishes a framework for increased international cooperation in the fight against AI abuse, data falsification and cyber threats.

This major step towards safer AI raises important questions about the appropriate use of AI and the collection of the underlying data, while also seeking to prevent the misuse of this technology by cybercriminals.
