In a survey by Heidrick & Struggles, respondents most often identified Artificial Intelligence (AI) as a significant threat to organizations in the next five years. With this finding in mind, along with the release of initiatives like the NIST AI Risk Management Framework and the HITRUST AI Assurance Program, it’s no surprise that new security and compliance frameworks are emerging to enhance the trustworthiness of AI.
In 2024, ISO will also join the AI sphere by releasing ISO 42001, a new standard designed to help implement safeguards for the security, safety, privacy, fairness, transparency, and data quality of AI systems. ISO 42001 will include best practices for an AI management system—otherwise known as an AIMS—and is intended to help organizations responsibly perform their roles in using, developing, monitoring, or providing products or services that utilize AI.
So, what will the new standard mean for organizations that want to adhere to the new ISO AIMS? Let’s break down the upcoming framework’s risk management features, unique safeguards, and structure.
As a new ISO management system standard (MSS), ISO 42001 will take a risk-based approach to applying the requirements for AI use. One of its most notable features is that it has been drafted to integrate with other existing MSS, such as:
It’s important to note that ISO 42001 does not require organizations to implement or certify against other MSS as a prerequisite, nor is it intended to replace them. Instead, integrating ISO 42001 will help organizations that must meet the requirements of two or more of these standards. If your organization opts to adhere to ISO 42001, you’ll be expected to focus your application of the requirements on the features unique to AI and the issues and risks that arise from its use.
Because the management of AI-related issues and risks should be treated as a comprehensive strategy, adopting an AIMS can enhance the effectiveness of an organization’s existing management systems in the areas of information security, privacy, and quality, as well as its overall compliance posture.
As AI continues to evolve, the ISO 42001 framework can help organizations implement safeguards for certain AI features that could create additional risks within a particular process or system.
Examples of features that may require specific safeguards are:
The structure of the upcoming ISO 42001 won’t look much different from that of the popular ISO 27001 framework. In fact, ISO 42001 will include similar features, such as Clauses 4–10 and an Annex A listing of controls that can help organizations meet objectives related to the use of AI and address the concerns identified during the risk assessment of the design and operation of AI systems.
Within the current draft of ISO 42001, the 39 Annex A controls touch on the following areas:
ISO 42001 will also contain Annexes B, C, and D. The descriptions below provide more information on these new annexes.
The potential objectives and risk sources addressed in Annex C will include the following:
Objectives:
Risk Sources:
ISO 42001 will undoubtedly play a key role in the development of AI security. While the exact release date has yet to be announced, we should know more about when ISO 42001 will be published by the end of 2023.
Contact BARR Advisory today to learn more about our ISO services and how we can help your organization adapt to new security and compliance AI standards and resources.