ISO 27001 for AI Companies: Guide to Certifying Your AI Systems

Written by Deepika

The Complete Guide to Certifying Your AI Systems Under the 2022 Standard

If your company builds, deploys, or operates AI systems, ISO 27001 certification is no longer a nice-to-have. Enterprise buyers, healthcare organizations, financial institutions, and government agencies are now routinely requiring ISO 27001 as a baseline before signing contracts with AI vendors. What most AI companies discover too late is that certifying AI systems under the standard requires a different approach than certifying a traditional SaaS product. The threat model is different. The data flows are different. The controls that matter are different.

This guide breaks down exactly how to scope, implement, and maintain ISO 27001 certification for products powered by machine learning models, large language models, agentic workflows, or AI-driven automation.

Why AI Companies Face a Higher ISO 27001 Bar

ISO 27001 is a framework for building and operating an Information Security Management System (ISMS). At its core, it requires you to identify your information assets, assess the risks to those assets, and implement controls proportionate to those risks. For a conventional SaaS company, the asset inventory is relatively predictable: source code, customer databases, cloud infrastructure, and employee endpoints.

AI systems introduce information assets that traditional ISO 27001 implementations were never designed to protect. Training datasets containing sensitive personal or commercial data. Trained model weights that represent significant intellectual property. Inference pipelines susceptible to adversarial inputs. Embedding stores and vector databases that may contain sensitive document representations. Agentic workflows that take autonomous actions on behalf of users.

Auditors increasingly understand these distinctions. Scoping an ISO 27001 ISMS to include AI workloads without accounting for these asset classes will create gaps that surface during Stage 2 audits and generate nonconformities.

What ISO 27001:2022 Changed That Matters for AI

The 2022 revision of ISO 27001 introduced 11 new controls in Annex A and restructured the control set from 114 to 93 controls across four themes: organizational, people, physical, and technological. Several of the new controls map directly to AI-specific risks.

Threat Intelligence (A.5.7) requires organizations to gather, analyze, and act on information about threats relevant to their operations. For AI companies, this includes emerging adversarial attack techniques, published prompt injection and model extraction research, and vulnerabilities in the open-source ML frameworks your pipelines depend on. A static threat register updated annually will not satisfy this control for an AI workload.

Monitoring Activities (A.8.16) requires detecting anomalous behavior across networks, systems, and applications. Applied to AI systems, this means monitoring inference endpoints for unusual query patterns, tracking anomalies in token consumption that may indicate prompt injection attempts, and logging agentic actions for post-incident reconstruction. Most AI companies have application-level logging but lack the behavioral baseline required to detect model-targeting attacks.
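
A minimal sketch of what that behavioral baseline can look like, in Python. The class, window size, and three-sigma threshold are illustrative choices, not anything the standard prescribes, and a production monitor would route alerts to your SIEM rather than print them:

```python
from collections import deque
import statistics

class TokenAnomalyMonitor:
    """Rolling baseline over recent requests; flags sharp deviations."""

    def __init__(self, window: int = 500, threshold_sigmas: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold_sigmas = threshold_sigmas

    def observe(self, tokens_used: int) -> bool:
        """Record one request's token count; True means 'worth an alert'."""
        anomalous = False
        if len(self.history) >= 30:  # require a minimal baseline first
            mean = statistics.fmean(self.history)
            spread = statistics.pstdev(self.history) or 1.0
            anomalous = tokens_used > mean + self.threshold_sigmas * spread
        self.history.append(tokens_used)
        return anomalous

monitor = TokenAnomalyMonitor()
for tokens in [120, 135, 110, 128] * 10:      # ordinary traffic
    monitor.observe(tokens)
print(monitor.observe(4_000))                 # True: candidate injection probe
```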

Secure Coding (A.8.28) addresses software security in development. For AI teams, this extends to prompt engineering practices, the security of system prompts in deployed LLM applications, input validation before data enters model contexts, and output filtering before responses reach end users.
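
As a rough sketch of where those checks sit in the request path, the snippet below applies a deny-list screen to input and a redaction pass to output. The patterns and function names are illustrative, and heuristics like these catch only crude attempts — real deployments layer classifier-based detection on top — but the structure shows input validation and output filtering as distinct, testable steps:

```python
import re

# Illustrative deny-list only: heuristics like these catch crude attempts,
# not determined attackers, and belong in front of classifier-based checks.
SUSPECT_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"reveal (your |the )?system prompt",
    r"you are now",
]

def screen_user_input(text: str) -> str:
    """Validate untrusted input before it enters the model context."""
    lowered = text.lower()
    for pattern in SUSPECT_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("input rejected by injection heuristics")
    return text

def filter_model_output(text: str, known_secrets: list) -> str:
    """Redact known sensitive strings before a response reaches the user."""
    for secret in known_secrets:
        text = text.replace(secret, "[REDACTED]")
    return text
```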

Configuration Management (A.8.9) requires documented, controlled configuration of technology assets. Applied to AI, this includes version control for model weights, reproducible training pipelines, and documented hyperparameter configurations — the kind of operational rigor that many AI teams lack outside of core product releases.
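
One lightweight way to evidence this control is a release manifest that pins the exact weights and hyperparameters behind each deployed model. The sketch below is illustrative (the file layout and function names are assumptions); the point is that any unrecorded change to the weights becomes detectable at audit time:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash of the weights file, so any silent change is detectable."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_release_manifest(weights: Path, hyperparams: dict, out: Path) -> None:
    """Pin the exact weights and configuration behind one model release."""
    manifest = {
        "weights_file": weights.name,
        "weights_sha256": sha256_of(weights),
        "hyperparameters": hyperparams,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    out.write_text(json.dumps(manifest, indent=2))
```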

Data Masking (A.8.11) and Data Leakage Prevention (A.8.12) are directly relevant to training data governance and inference output controls, particularly for companies working with personal data, protected health information, or financial records in their model training or fine-tuning pipelines.
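
To make the pipeline placement concrete, the snippet below masks a few common PII shapes before a record enters a training corpus. The patterns are deliberately simplistic — production masking should rely on a vetted PII-detection library and locale-aware rules — but they show where A.8.11 bites in an ML workflow:

```python
import re

# Deliberately simplistic patterns; production masking should use a vetted
# PII-detection library and locale-aware rules on top of anything like this.
MASKS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def mask_record(text: str) -> str:
    """Apply masking before a record enters a training or fine-tuning corpus."""
    for pattern, replacement in MASKS:
        text = pattern.sub(replacement, text)
    return text

print(mask_record("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```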

Scoping Your ISO 27001 ISMS for AI Systems

Scope definition is where most AI companies' ISO 27001 programs either succeed or run into trouble. A scope that is too narrow excludes the assets that carry the most risk. A scope that is too broad creates an audit surface that is practically impossible to manage.

For AI companies, a defensible scope typically includes the infrastructure where models are trained and fine-tuned, the inference infrastructure where models serve requests, the data pipelines that move training data from source to model, the vector databases and embedding stores used by retrieval-augmented systems, the APIs through which external systems interact with your models, and the internal tooling used by engineers and data scientists to develop and evaluate models.

What is often incorrectly excluded from scope: third-party foundation models accessed via API. If your product's security properties depend on a model provided by a third party, the supplier relationship falls under Annex A controls A.5.19 (Information Security in Supplier Relationships) and A.5.20 (Addressing Information Security Within Supplier Agreements). You cannot simply assume that OpenAI, Anthropic, Google, or Mistral has handled all relevant security controls on your behalf without documented due diligence.

Risk Assessment for AI-Specific Threats

ISO 27001 requires a formal risk assessment methodology that identifies threats, assesses their likelihood and impact, and produces documented risk treatment decisions. AI systems introduce a threat taxonomy that most standard risk assessment templates do not include.

Prompt injection is the AI-native equivalent of SQL injection. An attacker provides input designed to override system-level instructions or extract information from the model's context window. For agentic systems with tool access — systems that can browse the web, execute code, or query databases — the impact of a successful prompt injection can extend far beyond information disclosure.

Model inversion and membership inference attacks enable adversaries to infer information about the training data from model outputs. If your models were trained on customer data, proprietary documents, or sensitive records, model inversion represents a confidentiality risk that must be assessed.
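
The intuition behind membership inference is simple enough to demonstrate: models tend to assign lower loss to examples they were trained on, so an adversary with access to per-example confidence scores can guess membership. A toy sketch on synthetic data (scikit-learn assumed available) makes the gap visible:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: train on half of a synthetic dataset, hold out the rest.
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X[:200], y[:200])

def per_example_loss(model, X, y):
    """Cross-entropy per example; training members tend to score lower."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

# A visible gap between these two numbers is the signal the attack exploits.
print("member mean loss:    ", per_example_loss(model, X[:200], y[:200]).mean())
print("non-member mean loss:", per_example_loss(model, X[200:], y[200:]).mean())
```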

Model stealing enables competitors or adversaries to reconstruct a functional equivalent of your model by repeatedly querying its API. For companies where the model itself constitutes core intellectual property, this is a direct confidentiality risk that must be assessed and treated.

Data poisoning attacks target the training pipeline rather than the deployed model. By introducing malicious samples into training data, an attacker can cause a model to behave incorrectly in specific, attacker-controlled circumstances. This is particularly relevant for companies that train on user-generated data or that use continuous learning pipelines.
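
Admission control on the training pipeline is one corresponding mitigation. The sketch below — source names are hypothetical, and checks this simple will not stop a targeted attack on their own — illustrates the control point: provenance and duplication checks before a sample is accepted:

```python
ALLOWED_SOURCES = {"internal-crm", "support-tickets"}   # hypothetical names

def admit_sample(sample: dict, seen_hashes: set) -> bool:
    """Gate a candidate record before it enters the training corpus."""
    if sample.get("source") not in ALLOWED_SOURCES:
        return False                     # provenance check: unknown origin
    fingerprint = hash(sample.get("text", ""))
    if fingerprint in seen_hashes:
        return False                     # crude guard against duplicate floods
    seen_hashes.add(fingerprint)
    return True
```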

Supply chain attacks targeting ML frameworks, pre-trained model weights downloaded from public repositories, and Python package dependencies used in training and inference pipelines are increasingly common. A risk assessment that does not account for the ML supply chain is incomplete.
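
A basic mitigation worth sketching: pin the cryptographic hash of every third-party model artifact and refuse to load anything that does not match. The filename and truncated digest below are placeholders; the point is that pinned values live in reviewed, version-controlled code:

```python
import hashlib
import sys
from pathlib import Path

# Digests reviewed and committed alongside the code; the filename and the
# truncated hash below are placeholders, not real artifacts.
PINNED = {
    "encoder-v3.safetensors": "9f2c...<full sha256 recorded at review time>",
}

def verify_artifact(path: Path) -> None:
    """Refuse to load third-party weights whose hash is unknown or changed."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if PINNED.get(path.name) != digest:
        sys.exit(f"refusing to load {path.name}: hash not pinned or mismatched")
```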

Each of these threats requires a corresponding treatment decision: accept the risk, mitigate it, transfer it, or avoid it. The treatment selection must be documented and reviewed on a defined schedule.
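
ISO 27001 does not mandate a particular register schema, but a treatment record needs at minimum the threat, a scored assessment, the chosen treatment, an owner, and a review date. A minimal sketch of one such entry, with illustrative field names and scales:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class RiskEntry:
    """One row of a risk register, including the mandatory review schedule."""
    threat: str
    likelihood: int          # 1 (rare) to 5 (frequent)
    impact: int              # 1 (minor) to 5 (severe)
    treatment: Treatment
    owner: str
    next_review: date

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

entry = RiskEntry(
    threat="Prompt injection against an agent with code-execution tools",
    likelihood=4, impact=5,
    treatment=Treatment.MITIGATE,
    owner="Head of Security",
    next_review=date(2025, 6, 1),
)
```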

Training Data Governance Under ISO 27001

One of the most common ISO 27001 nonconformities for AI companies concerns the classification and handling of training data. ISO 27001 requires that information assets be classified and that handling procedures be applied consistently based on classification.

Training datasets frequently contain mixed-sensitivity data: a single dataset may contain publicly available text alongside personally identifiable information, confidential customer records, or proprietary business documents. A compliant ISMS requires that you know what is in your training data, that it is classified at an appropriate level, that access to it is controlled based on that classification, and that retention and deletion policies are applied consistently.

For companies that fine-tune models on customer data, data-handling agreements and customer-consent records become part of the information asset inventory. An auditor reviewing your ISMS will expect to see documented data lineage — the ability to trace a training example from its source through to its use in a model and, if required, to its deletion.
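
That lineage requirement is easier to reason about as a data structure. The sketch below is an assumed schema, not a standard one, but it captures the minimum an auditor will ask to see: where an example came from, under what consent, which models consumed it, and whether it has been deleted:

```python
from dataclasses import dataclass, field

@dataclass
class LineageRecord:
    """Traces one training example from source, to model use, to deletion."""
    example_id: str
    source: str                   # originating system or dataset
    consent_ref: str | None      # pointer to the consent or contract record
    classification: str           # e.g. "confidential-customer"
    used_in_models: list = field(default_factory=list)
    deleted: bool = False

record = LineageRecord(
    example_id="ex-48121",
    source="customer-support-tickets",
    consent_ref="dpa-2024-017",
    classification="confidential-customer",
)
record.used_in_models.append("support-summarizer-v2")
```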

Building Controls for Agentic AI Workflows

Agentic AI systems — systems that plan, take action, and interact with external services without requiring human approval at each step — introduce a category of information security risk with no direct analog in traditional IT systems. An agent that has access to email, calendar, file storage, code execution, or external APIs is an information asset that can cause significant harm if it behaves incorrectly, is manipulated, or is compromised.

ISO 27001 does not have controls written specifically for agentic systems, but several existing controls apply directly. Access control (A.8.2, A.8.3, A.8.4) requires that systems be granted only the privileges necessary for their function. Agents should operate under the principle of least privilege: if an agent does not need write access, it should not have write access. If an agent does not need access to production systems, it should not have access to production systems.
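
In code, least privilege for an agent reduces to a default-deny tool registry: every tool the agent may call is registered with an explicit permission, and anything unregistered is refused. A minimal sketch, with hypothetical tool names:

```python
# Default-deny tool registry: an agent may only call tools registered here,
# each with the least permission its function requires (names hypothetical).
READ, WRITE = "read", "write"

TOOL_PERMISSIONS = {
    "search_docs": READ,
    "fetch_ticket": READ,
    # no write-capable or production-facing tools are registered at all
}

def invoke_tool(name: str, granted: set, **kwargs):
    """Gate every agent tool call through an explicit permission check."""
    permission = TOOL_PERMISSIONS.get(name)
    if permission is None:
        raise PermissionError(f"tool {name!r} is not registered for this agent")
    if permission not in granted:
        raise PermissionError(f"agent lacks {permission!r} access for {name!r}")
    # ...dispatch to the real tool implementation here...

agent_grants = {READ}                                    # read-only by design
invoke_tool("search_docs", agent_grants, query="refund policy")   # allowed
# invoke_tool("delete_record", agent_grants, id=42)   # -> PermissionError
```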

Logging and monitoring (A.8.15, A.8.16) require that actions taken by systems be recorded in a way that supports incident investigation. Agentic systems must produce audit logs that record which actions were taken, which inputs triggered them, and the resulting outcomes. Logs must be protected against tampering.
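
Tamper evidence can be approximated by hash-chaining log entries so that any after-the-fact edit breaks the chain. The sketch below is illustrative; a real deployment would also anchor the latest hash in a separate system so the log's tail cannot be silently truncated:

```python
import hashlib
import json
from datetime import datetime, timezone

class AgentAuditLog:
    """Append-only log in which each entry hashes its predecessor, so any
    after-the-fact edit breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def record(self, action: str, triggering_input: str, outcome: str) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "input": triggering_input,
            "outcome": outcome,
            "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**entry, "hash": self._last_hash})

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AgentAuditLog()
log.record("web_search", "user asked for pricing", "3 results returned")
print(log.verify())   # True; editing any stored entry flips this to False
```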

Change management (A.8.32) requires that changes to systems be assessed, approved, and implemented in a controlled manner. Changes to the system prompts, tool configurations, or model versions that underpin an agentic system fall squarely within the scope of this control.

Common ISO 27001 Gaps in AI Company Audits

Based on patterns in how AI companies approach ISO 27001, several categories of findings appear consistently in Stage 2 audits.

Incomplete asset inventories that list cloud infrastructure and source code repositories but omit model weights, training datasets, embedding stores, and experiment tracking systems. These are information assets. They must be inventoried, classified, and assigned an owner.

Supplier management gaps where foundation model providers, ML platform vendors, and data annotation services are not assessed under the supplier security management process. Every third party that touches an information asset within the scope of the ISMS must undergo documented due diligence.

Inadequate logging for AI systems. Application logs exist, but they do not capture model inputs and outputs in a way that supports incident investigation. Logs are not centralized or protected against modification.

Missing business continuity considerations for model availability. If a fine-tuned model is lost and cannot be reproduced, that is a business continuity event. ISO 27001 requires that business continuity be planned for, but AI companies rarely include model reconstruction in their continuity plans.

Informal change management for model updates. Engineers push model updates to production through informal processes because these updates are perceived as distinct from software deployments. From an ISMS perspective, they are not.

Organizations pursuing multiple certifications can reduce duplicate effort by mapping ISO 27001 controls onto adjacent frameworks and regulations such as GDPR.

Maintaining ISO 27001 Compliance Continuously for AI Systems

ISO 27001 certification is not a point-in-time achievement. Surveillance audits occur annually, and recertification occurs every three years. Between audits, you are expected to maintain the ISMS, conduct internal audits, perform management reviews, and respond to nonconformities.

For AI systems, continuous compliance requires automated evidence collection from training and inference infrastructure, regular threat intelligence reviews that include AI-specific research, periodic red team exercises that include prompt injection and adversarial testing, and documented review cycles for risk assessments that account for changes to models, data, and deployment configurations.

Automation matters here in proportion to the scale of your AI operations. Manually collecting evidence of access control reviews, logging configurations, and supplier assessments across a complex AI infrastructure will not scale. Compliance platforms that integrate with your cloud infrastructure, model serving layer, and data pipeline tooling dramatically reduce the overhead of maintaining a compliant ISMS.

Getting to Certification

ISO 27001 certification for an AI company follows the same structural path as certification for any technology organization: establish the ISMS, complete a risk assessment, implement and document controls, conduct an internal audit, perform a management review, and engage an accredited certification body for Stage 1 (documentation review) and Stage 2 (operational audit) assessments.

The difference for AI companies is that the risk assessment must account for AI-specific threats, the asset inventory must include AI-specific assets, the control implementations must address AI-specific vectors, and the audit evidence must demonstrate that controls are operating effectively across the AI system lifecycle.

Organizations that have already invested in structured documentation, access controls, and infrastructure-level logging often find that extending the ISMS to cover AI workloads requires incremental rather than foundational work. The gap is typically in the risk assessment (AI threats are underrepresented), the asset inventory (AI assets are missing), and the supplier management process (foundation model providers are not assessed).

For AI companies pursuing enterprise sales, healthcare contracts, or government relationships, ISO 27001 certification increasingly determines whether you are in the conversation at all. Building a compliant ISMS that genuinely covers your AI systems — not one that treats the AI layer as out of scope — is the difference between a certification that holds up under scrutiny and one that creates liability when customers ask the hard questions.

DSALTA helps AI companies build and maintain ISO 27001-compliant information security management systems tailored to the operational realities of modern AI infrastructure.

Stop losing deals to compliance.

Get compliant. Keep building.

Join 100s of startups who got audit-ready in days, not months.