
ISO 42001: The AI Management System Standard Your Organisation Needs to Understand

ISO 42001 is showing up in procurement requirements. Enterprise customers are asking for it. NHS supply chain assessments are beginning to reference it. If you are deploying AI in any regulated sector, the question of whether you are working towards ISO 42001 is coming — if it has not already.

Published in December 2023, ISO 42001 is the first international standard for AI management systems. It provides a framework for organisations to establish, implement, maintain, and continually improve their approach to responsible AI development and use. By 2026, it has become the reference standard for organisations seeking to demonstrate that their AI governance is systematic, documented, and independently verifiable.

This post explains what the standard actually requires, how it maps to AI agent deployments specifically, and where most organisations are currently falling short.

What ISO 42001 Is (and Is Not)

ISO 42001 follows the same high-level structure (Harmonized Structure, formerly Annex SL) as ISO 27001 (information security) and ISO 9001 (quality management). This is deliberate. Organisations already certified to those standards will find the structure familiar, and ISO 42001 is designed to integrate into an existing management system rather than be built from scratch.

The standard applies to any organisation that develops or uses AI systems. It is not limited to AI vendors. A financial services firm using AI agents for customer service, a healthcare organisation using AI for clinical decision support, or an NHS trust deploying AI for patient triage are all within scope as users of AI systems.

ISO 42001 is a management system standard, not a technical standard. It does not specify which AI algorithms to use or how to build AI systems. It specifies the processes, documentation, and oversight mechanisms that an organisation needs to manage AI responsibly.

This distinction matters. Many organisations confuse ISO 42001 with technical AI safety standards. It is not about making AI technically safe. It is about demonstrating that your organisation has a systematic, documented approach to managing AI risk.

The Core Requirements

The standard's requirements are organised around the Plan-Do-Check-Act cycle used by all ISO management system standards.

Context and Leadership (Clauses 4–5)

The organisation must understand its context — the internal and external factors that affect its AI activities — and demonstrate leadership commitment to responsible AI. This includes: establishing an AI policy, defining roles and responsibilities, and ensuring AI governance is integrated into the organisation's overall management structure.

In practice: this means senior leadership signing off on an AI policy that covers what AI systems you have, what they are used for, and what principles govern their use. It means naming someone responsible for AI governance — not just IT security.

Planning (Clause 6)

The organisation must identify the risks and opportunities associated with its AI activities and plan actions to address them. For AI agents specifically, this means documenting the risks associated with each agent deployment: what could go wrong, how likely it is, and what the impact would be.

This is not a theoretical exercise. The planning clause requires that identified risks be mapped to actual controls — not just described.

Support (Clause 7)

The organisation must provide the resources, competence, awareness, and documentation to operate its AI management system effectively. This includes ensuring that people working with AI systems understand the organisation's AI policy and their responsibilities.

A gap we see frequently: organisations have an AI acceptable use policy that employees have never read. Clause 7 requires evidence that awareness is maintained — not just that a policy exists.

Operation (Clause 8)

This is the most technically substantive clause. It requires that the organisation implement controls to manage AI risks, including controls for data quality, model performance, human oversight, and incident management.

For AI agents, Clause 8 requires documented controls for:

  • The agent's intended use and its limitations
  • The data the agent processes and the access controls governing that data
  • The actions the agent can take and the technical constraints on those actions
  • The human oversight mechanisms in place
  • The incident response procedures specific to the AI system

The critical point: the standard requires that controls be implemented and effective — not just described in a policy document. An organisation with a written AI policy but no technical controls enforcing it does not satisfy Clause 8.
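To make the distinction between a described control and an implemented one concrete, here is a minimal sketch of a runtime policy gate that both enforces an action allow-list and logs every evaluation as audit evidence. All names here (`AgentAction`, `ALLOWED_ACTIONS`, the log fields) are illustrative assumptions, not anything mandated by ISO 42001.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("policy_gate")

# The agent's permitted actions, defined in configuration rather than prose policy.
ALLOWED_ACTIONS = {"read_record", "draft_reply"}

@dataclass
class AgentAction:
    agent_id: str
    action: str
    target: str

def evaluate(action: AgentAction) -> bool:
    """Evaluate an action against policy and log the decision as a structured record."""
    allowed = action.action in ALLOWED_ACTIONS
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": "allow" if allowed else "block",
        **asdict(action),
    }))
    return allowed

# An in-policy action passes; an out-of-policy action is blocked, and both are logged.
assert evaluate(AgentAction("triage-bot", "read_record", "pt-1001")) is True
assert evaluate(AgentAction("triage-bot", "delete_record", "pt-1001")) is False
```

The point of the sketch is the pairing: the same function that enforces the control emits the evidence that the control ran, which is exactly what a written policy alone cannot do.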

Performance Evaluation (Clause 9)

The organisation must monitor, measure, analyse, and evaluate its AI management system. This includes internal audits and management reviews. For AI agents, this means maintaining metrics on agent performance, policy violations, incidents, and the effectiveness of governance controls.

This is where the logging and monitoring infrastructure becomes essential. If you cannot produce metrics on agent behaviour, you cannot satisfy Clause 9.
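As an illustration of how policy-evaluation logs feed Clause 9, the sketch below derives a simple violation-rate metric from logged decisions. The record format is an assumption carried over from a hypothetical policy gate; ISO 42001 does not prescribe a log schema or specific metrics.

```python
from collections import Counter

# Hypothetical policy-evaluation records, as emitted by a runtime policy gate.
records = [
    {"agent_id": "triage-bot", "decision": "allow"},
    {"agent_id": "triage-bot", "decision": "block"},
    {"agent_id": "support-bot", "decision": "allow"},
]

decisions = Counter(r["decision"] for r in records)
total = sum(decisions.values())
violation_rate = decisions["block"] / total  # one candidate input to a management review

print(f"evaluations={total} blocked={decisions['block']} violation_rate={violation_rate:.1%}")
# prints: evaluations=3 blocked=1 violation_rate=33.3%
```

Metrics like this are only producible if the underlying evaluations are logged in the first place, which is why the monitoring infrastructure and Clause 9 stand or fall together.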

Improvement (Clause 10)

The organisation must continually improve its AI management system, addressing nonconformities and implementing corrective actions. In practice: when something goes wrong, you fix it and document the fix. When your monitoring identifies a pattern of violations, you update your controls.

The most important observation: ISO 42001's requirements are not satisfied by documentation alone. Controls must be implemented and effective. An organisation with a written AI policy but no technical enforcement is not compliant with the standard's operational requirements — and a Stage 2 certification audit will expose this.

The Certification Path

ISO 42001 certification follows the same path as ISO 27001. An accredited certification body audits the management system against the standard's requirements and issues a certificate.

The audit has two stages. Stage 1 is a documentation review: the auditor reviews policies, procedures, and records to assess whether the management system is adequately designed. Stage 2 is an implementation audit: the auditor reviews evidence that the management system is actually operating as designed.

For AI agent governance, the Stage 2 audit will look for:

  • Logs of policy evaluations showing controls are active and running
  • Records of violations that were blocked — demonstrating the controls are effective, not just present
  • Evidence of incident investigations — showing the incident management procedures work
  • Records of management reviews — demonstrating leadership engagement with AI governance

An organisation that cannot produce this evidence because it has no runtime monitoring infrastructure will not pass Stage 2. The certificate requires operational proof, not policy documents.

The Relationship with Other Standards and Regulations

ISO 42001 is complementary to other frameworks, not a replacement for them.

EU AI Act. The Act's risk management requirements (Article 9) are substantially aligned with ISO 42001's planning and operational requirements. An organisation that has implemented ISO 42001 will have addressed most of the EU AI Act's technical requirements as a byproduct. The Act does not require ISO 42001 certification, but certification provides strong evidence of compliance — particularly useful when demonstrating compliance to regulators or enterprise customers.

ISO 27001. AI agents are information systems, and their security is within ISO 27001's scope. ISO 42001 extends ISO 27001's controls to cover AI-specific risks that the information security standard does not address: model performance, bias, explainability, and AI-specific attack vectors such as prompt injection and goal hijacking.

For Periculo clients who are ISO 27001 certified: the gap to ISO 42001 is narrower than it looks. The management system infrastructure already exists. The AI-specific controls layer is the incremental work.

GDPR. AI agents processing personal data are subject to GDPR. ISO 42001's data quality and governance requirements complement GDPR's data protection requirements. An organisation implementing ISO 42001 needs to ensure its AI governance controls are integrated with its GDPR compliance programme — particularly around data minimisation, purpose limitation, and DPIA requirements for AI systems.

NHS DSPT and DTAC. For digital health organisations, ISO 42001 certification provides strong evidence that can support DSPT assessments and DTAC submissions. The clinical safety and data governance controls required by DTAC are substantially aligned with ISO 42001's operational requirements.

The Four Gaps Most Organisations Have

Based on working with organisations across digital health and AI, the most common ISO 42001 gaps are:

1. No formal AI system inventory. You cannot manage AI risks you have not identified. Most organisations do not have a complete, up-to-date register of the AI systems they are using — including AI embedded in third-party software and AI deployed by individual teams without central IT involvement.
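Even a lightweight register closes this gap if it is structured and kept current. The sketch below shows one possible shape as a data structure; the fields and entries are assumptions about what an auditor would expect to see, since the standard does not mandate a schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    name: str
    supplier: str                  # vendor, SaaS product, or internal team
    purpose: str                   # intended use, per Clause 8
    data_categories: list[str] = field(default_factory=list)
    owner: str = ""                # named accountable person, per Clause 5

# Hypothetical entries, including AI embedded in third-party software.
register = [
    AISystemEntry("triage-bot", "internal", "patient triage support",
                  ["health data"], owner="Clinical Safety Officer"),
    AISystemEntry("crm-copilot", "third-party SaaS", "customer service drafting",
                  ["contact data"], owner="Head of Operations"),
]

# A structured register supports risk queries that a spreadsheet of prose cannot.
high_risk = [e.name for e in register if "health data" in e.data_categories]
print(high_risk)  # prints: ['triage-bot']
```

The design choice worth noting is that each entry names an owner: an inventory without accountability is a list, not a control.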

2. Policy exists only as a document. Many organisations have AI acceptable use policies. Very few have technical controls that enforce those policies. ISO 42001 requires both.

3. No comprehensive logging. Without logs of agent actions and policy evaluations, the organisation cannot demonstrate that controls are working. This is the most common Stage 2 audit failure for organisations seeking ISO 42001 certification.

4. No AI-specific incident response procedures. General IT incident response procedures are a starting point, but they do not address the AI-specific dimensions of agent incidents: the agent may have taken many actions before detection, the root cause may be in the agent's instructions rather than a technical vulnerability, and remediation may require policy changes rather than patches.

Closing these gaps does not require a large programme. For organisations deploying agents through standard platforms, adding runtime monitoring and policy enforcement addresses the technical control and logging gaps simultaneously. The inventory, policy documentation, and incident response procedures can be developed in parallel.

Starting With a Gap Analysis

The practical starting point for most organisations is not to begin with the certification process. It is to use ISO 42001 as a gap analysis tool — review the standard's requirements against current AI governance practices and identify what is missing.

This gap analysis typically takes two to four weeks for a mid-sized organisation. It produces a prioritised list of controls to implement, with the highest-priority items being those that address the most significant risks and those required for the Stage 1 audit.

Organisations that take this approach typically complete the Stage 1 audit within six months and achieve certification at Stage 2 within twelve. The timeline is realistic precisely because the management system infrastructure — the policies, procedures, and documentation — can be built in parallel with the technical controls.

The organisations that take longest are those that build technical controls first and documentation second, then discover that documentation gaps are what prevent Stage 1 from completing.

Thinking about ISO 42001 for your organisation? We help digital health and AI companies build the management system infrastructure and technical controls required for certification. Contact us.