AI Governance · ISO 42001 · Compliance · EU AI Act

Implementing ISO/IEC 42001: An AI Governance Roadmap That Doesn't Stall

Pelican Tech · 5 min read

ISO/IEC 42001 was published at the end of 2023 as the first international standard for an AI management system. Two years on, it has become the document that tells you whether an organisation actually governs its AI or only claims to. Unlike voluntary frameworks (NIST AI RMF, the OECD AI Principles), it is built for certification audits, which means the gap between "we have an AI policy" and "we can produce evidence the policy was operating" determines whether the certification holds up.

This piece is the implementation playbook we use with clients pursuing 42001, almost always in parallel with EU AI Act compliance work. It is opinionated about what to defer and what to invest in early, because the typical 42001 programme stalls in the same place: too much policy work, not enough operational evidence.

What 42001 actually requires that you probably aren't doing

The standard's Annex A defines 38 controls organised across nine control areas. Reading it cover-to-cover gives the impression of a sprawling effort. In practice, five of those areas account for the vast majority of where programmes succeed or fail. We focus on those first; the rest follows naturally.

A.2 — Policies related to AI. The standard requires an AI policy that specifically addresses AI risks, not a generic IT policy with "AI" pasted in. The fail mode here is reusing the existing security policy template; the artefact passes the document review and fails the practitioner interview.

A.3 — Internal organisation. Specifically the AI accountability framework: who decides which AI systems get deployed, who reviews high-risk ones, who has authority to halt deployment. This is where the standard starts to expose organisational gaps that have been papered over with informal arrangements.

A.4 — Resources for AI systems. Including the data resources — provenance, quality, intended-use boundaries. The compliance work here ends up doing the operational data hygiene work that the AI engineering team should have done anyway.

A.5 — Assessing impacts of AI systems. The required impact assessment process needs to be operational, not theatre. We see too many programmes where the impact assessment is a one-page form filled in at deployment and never revisited.

A.6 — AI system life cycle. Concretely: development, deployment, operation, retirement, with required artefacts at each transition. This is the area most programmes underestimate.

If those five are operational, 42001 is in reach. If any of them is theatre, the standard will surface that.

The 90-day shape of a real implementation

We work to a 90-day shape for the foundation and then a 6-month operational ramp. The foundation work is what most consultancies extend into a 9-month engagement; we have not seen a case where that is genuinely required.

Days 0–30: AI inventory and scope

The single most important artefact in a 42001 programme is the AI system inventory. This is not a list of ML models; it is a list of decision points where AI is meaningfully involved in producing an outcome. A spam filter is in scope. A retention prediction model that informs an HR decision is in scope. A code-completion assistant for engineers is in scope. The inventory must tell you, for each system: purpose, data sources, decision authority, intended-use boundaries, owner, life-cycle stage, and risk classification.
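
To make that concrete, here is a minimal sketch of one inventory record as a Python dataclass. The field names follow the list above; the type choices, the enums, and the `system_id` key are our illustration, not anything 42001 prescribes.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """EU AI Act risk tiers, reused as the 42001 risk classification."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"


class LifecycleStage(Enum):
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    OPERATION = "operation"
    RETIREMENT = "retirement"


@dataclass
class AISystemRecord:
    """One decision point where AI is meaningfully involved in an outcome."""
    system_id: str
    purpose: str
    data_sources: list[str]
    decision_authority: str        # who acts on the system's output
    intended_use_boundaries: str   # what the system must not be used for
    owner: str
    lifecycle_stage: LifecycleStage
    risk_classification: RiskTier
    last_reviewed: date | None = None
```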

The risk classification is where 42001 connects to EU AI Act compliance. The Act's risk tiers (prohibited / high-risk / limited-risk / minimal-risk) map cleanly onto the impact-assessment fields the standard requires. Doing the work once produces evidence usable for both regimes.
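
A hedged sketch of that reuse, building on the `RiskTier` enum from the inventory record above: map each Act tier to the evidence artefacts it triggers, and both regimes read from the same table. The artefact names and tier assignments here are illustrative assumptions; actual obligations depend on the system and on whether you act as provider or deployer under the Act.

```python
# Illustrative tier-to-artefact mapping; reuses RiskTier from the sketch above.
# The assignments are assumptions for illustration, not a reading of the Act.
REQUIRED_ARTEFACTS: dict[RiskTier, set[str]] = {
    RiskTier.PROHIBITED: {"decommissioning plan"},
    RiskTier.HIGH: {"impact assessment", "monitoring plan",
                    "conformity assessment", "Annex IV documentation"},
    RiskTier.LIMITED: {"impact assessment", "transparency notice"},
    RiskTier.MINIMAL: {"impact assessment"},
}
```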

A common mistake is to wait for the inventory to be perfect before moving on. We recommend the same discipline as for security asset inventories: a 70%-correct inventory that is used for governance decisions is worth more than a 100%-correct one that gets one update a year.

Days 30–60: Operational governance setup

Three artefacts get built in this window:

  1. The AI governance committee charter. Specific roster, decision authority, escalation criteria, meeting cadence. Documented decisions and dissents. The committee should meet at least monthly during ramp; quarterly is too slow for an organisation with active AI deployments.

  2. The impact-assessment process. Not a one-page form. A structured assessment that includes intended use, foreseeable misuse, affected populations, data provenance, performance metrics under stress, monitoring plan, and decision authority for retirement. Built once, then templated for each system.

  3. The change-control gate. No new AI system goes to production without an impact assessment in the inventory. No material change to an existing system without an assessment update. This is the single piece of governance scaffolding that most distinguishes mature 42001 programmes from theatrical ones; a sketch of the assessment record and the gate follows this list.
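
As a sketch of how artefacts 2 and 3 fit together, the fragment below models the assessment as a record and the gate as a pipeline check that refuses to pass without a current assessment. The field names mirror the list above, while the `material_change_since` flag and the exception-based gate are our design assumptions, not anything the standard mandates.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ImpactAssessment:
    """Structured assessment from artefact 2, one per AI system."""
    system_id: str
    intended_use: str
    foreseeable_misuse: str
    affected_populations: list[str]
    data_provenance: str
    stress_performance_metrics: dict[str, float]
    monitoring_plan: str
    retirement_authority: str          # who may order retirement
    assessed_on: date
    material_change_since: bool = False


def deployment_gate(system_id: str,
                    assessments: dict[str, ImpactAssessment]) -> None:
    """Refuse to deploy when the governance evidence is missing (artefact 3)."""
    assessment = assessments.get(system_id)
    if assessment is None:
        raise RuntimeError(
            f"{system_id}: no impact assessment in the inventory; "
            "deployment blocked."
        )
    if assessment.material_change_since:
        raise RuntimeError(
            f"{system_id}: assessment predates a material change; "
            "update required before deployment."
        )
```

Calling `deployment_gate` from the release pipeline before any AI system ships is what turns the policy sentence into an enforced control.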

Days 60–90: First annual cycle, in compressed form

Run the full annual cycle compressed into 30 days for one or two AI systems already in production. Impact assessment, monitoring evidence, governance review, decision recorded. The artefacts produced are the templates the rest of the programme uses going forward, and the exercise exposes whatever the policy doesn't actually contemplate.
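
One way to keep the compressed run honest is to represent the evidence bundle explicitly and let a check name what is missing. A minimal sketch, assuming the `_ref` fields stand in for links into whatever document system you use:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class CycleEvidence:
    """Evidence bundle for one system's compressed annual cycle."""
    system_id: str
    impact_assessment_ref: str | None    # link into the inventory
    monitoring_evidence_refs: list[str]  # eval runs, dashboards, alert logs
    governance_review_date: date | None
    decision_record_ref: str | None      # minuted committee decision


def evidence_gaps(bundle: CycleEvidence) -> list[str]:
    """Name what the compressed run failed to produce."""
    gaps = []
    if bundle.impact_assessment_ref is None:
        gaps.append("impact assessment")
    if not bundle.monitoring_evidence_refs:
        gaps.append("monitoring evidence")
    if bundle.governance_review_date is None:
        gaps.append("governance review")
    if bundle.decision_record_ref is None:
        gaps.append("decision record")
    return gaps
```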

This compressed run is also the artefact your auditor will look at most carefully. It is the proof that the management system is real, not aspirational.

Where EU AI Act compliance overlaps and where it doesn't

Organisations subject to the EU AI Act and pursuing 42001 simultaneously can save substantial work, but the savings depend on doing the work in a specific order. Both regimes require:

  • An AI inventory with risk classification
  • A documented governance structure
  • An impact-assessment process
  • A monitoring plan for deployed systems
  • Records of decisions and incidents

The Act adds requirements that 42001 does not (notably, conformity assessment for high-risk systems, registration in the EU AI database, specific documentation per Annex IV). 42001 adds requirements the Act does not (notably, the management-system structure of objectives, internal audit, and continual improvement).

The order that works: build the 42001 management system first, then layer the Act-specific artefacts on top of the inventory and impact assessments. Reverse-engineering 42001 from Act-driven documentation produces a brittle compliance posture that fails the management-system audit.

Where this connects to our practice

Pelican Tech's AI Solutions practice builds 42001 programmes alongside the actual AI engineering work — model evaluation, monitoring infrastructure, the impact-assessment process — so the compliance artefacts are real evidence rather than retrospective documentation. We work with our risk management team when the AI governance overlaps with cyber risk (which it usually does for high-risk systems), and with our MedTech regulatory practice when the AI sits inside a medical device under MDR or FDA scope.

If you are eight months into a 42001 implementation that has produced extensive policy documentation but minimal operational evidence, that is the conversation to have with us before the first surveillance audit.