Agentic AI Identity: The Gap SOC 2 and ISO 27001 Miss

Written by Jon Ozdoruk

Your enterprise has 45 AI identities for every human one.
Almost none of them are governed.

AI agents are now the fastest-growing identity category in enterprise technology. They have access to your systems, your data, and your production workflows, and 78% of organizations have no formal identity policies for them. SOC 2 and ISO 27001 don't close this gap. Here's what does.

Your quarterly access review covers every employee. You have MFA enforced on every human login. Joiners, movers, and leavers all go through a defined IAM process. Your SOC 2 auditor reviewed your access controls and issued a clean attestation.

And then there is the AI agent your engineering team deployed nine months ago as part of a proof of concept that was never decommissioned. It is running on a static API key committed to a repository. It has read access to the CRM, write access to the ticketing system, and an admin token for the knowledge base that was handed to it during a debug session and has never been revoked. It operates 24 hours a day, takes autonomous actions across four systems, and appears nowhere in your access review process because nobody built one for it.

This is not a hypothetical. According to the Identity Defined Security Alliance's 2026 report, non-human identities now outnumber human identities at a ratio of 45 to 1 in most enterprise environments. AI agents are the fastest-growing category within that ratio. And 78% of organizations have no formal identity policies covering them.

The compliance gap is not in your controls. It is in what your frameworks were built to see and to ignore.

The 45:1 problem and why it is getting worse

Five years ago, the ratio of non-human identities to human identities in a typical enterprise was roughly 10 to 1. That already represented a governance challenge: service accounts, API tokens, machine certificates, and automation scripts that accumulated access over time without the same oversight applied to human employees.

The ratio is now 45 to 1, and AI agents are the reason it is accelerating. Unlike a static service account that runs the same script every day, an AI agent makes novel decisions every session. It chains actions across systems. It can spawn sub-agents and delegate credentials. It accumulates permissions as it is given new tasks. And 92% of security leaders surveyed in 2026 say they are not confident their existing IAM infrastructure can handle this class of identity.

45:1 - Non-human identities to human identities in the average enterprise environment, 2026

78% - Organizations with no formal identity policies for AI agents, despite agents having admin-level access

70% - Enterprises already running AI agents in production, with 23% more planning deployments in 2026

$670K - Additional breach cost for organizations with no AI governance policies, per IBM's Cost of a Data Breach Report

The core problem is not that enterprises are being reckless. It is that AI agents do not fit any existing governance category. They are not employees, so HR does not manage them. They are not traditional software, so IT governance frameworks do not quite cover them. They are not third-party vendors, so procurement does not vet them. They fall through every framework and accumulate access in the gaps.

Why AI agents break every assumption your IAM was built on

Identity and access management, including the controls that SOC 2 and ISO 27001 evaluate, rests on a set of assumptions about how identities behave. AI agents violate every one of them.

Traditional IAM assumes

  • Identities are human or deterministic scripts

  • Actions are predictable and auditable per session

  • Access is requested and approved through defined workflows

  • Credentials are assigned once and reviewed quarterly

  • Trust boundaries are crossed by explicit, logged requests

  • One identity = one action type = one permission set

  • Identities are deprovisioned when no longer needed

AI agents actually do

  • Make non-deterministic decisions based on context and recent interactions

  • Chain actions across four or five trust boundaries in a single "task."

  • Spawn sub-agents and delegate credentials without human approval

  • Accumulate permissions silently as teams give them new tasks

  • Cross trust boundaries at machine speed, thousands of times per hour

  • Take on different permission scopes depending on what they are asked

  • Persist indefinitely — rarely deprovisioned, even when use cases end

The result is what security researchers are now calling identity dark matter — AI agent identities that are real, powerful, and carrying significant access, but entirely invisible to the governance infrastructure your compliance frameworks evaluate. The Hacker News reported in March 2026 that agents optimized for efficiency will naturally gravitate toward whatever "just works" — orphaned accounts, stale service identities, long-lived tokens, bypass auth paths — because those represent the path of least friction to task completion. The agent does not understand your governance intent. It understands what works.

The privilege drift problem: how agents accumulate access nobody intended

Of all the identity risks AI agents create, privilege drift is the one most likely to be invisible to your current compliance controls — and the most likely to become a material breach vector.

Privilege drift occurs when an agent's permission scope expands incrementally over time, with no single expansion large enough to trigger a review. Each individual access grant looks reasonable. The cumulative picture, six months later, is an agent with admin access to systems it was never designed to touch.

Month 1

Deployment — scoped correctly

Agent deployed with read access to the customer CRM. Credentials are fresh. The scope is appropriate to the stated use case.

Month 3

Task expansion — permission added informally

The team asks the agent to start updating the ticket status. Write access to the ticketing system is added informally during a sprint. No access review triggered — it is below the threshold requiring one.

Month 5

Debug session — admin token cached and never revoked

An engineer hands the agent an admin token during troubleshooting. The session ends, but the token is cached. Nobody revokes it because the immediate issue was resolved, and the token is not visible in any access review system.

Month 7

Sub-agent spawned — credentials inherited without review

The original agent is assigned a new workflow that requires a sub-agent. The sub-agent inherits the parent's full permission set, including the admin token from month five. The sub-agent was never provisioned through any IAM process.

Month 9

Breach surface — attacker inherits the drift

Nine months after deployment, the agent holds admin access to four systems, a cached credential that bypasses MFA, and a sub-agent that was never inventoried. Your quarterly access review did not catch any of it — because it was never designed to detect non-human identities.

The audit problem: Your SOC 2 auditor asks, "Who has access to what, and was it approved?" The answer for every human identity is documented and reviewable. For the AI agent, the answer is "we are not entirely sure" — because the accumulation happened incrementally, informally, and outside every access review process your compliance framework evaluates.
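The drift timeline above suggests a simple compensating control: keep a machine-readable record of each agent's approved permission baseline and diff it against what the agent actually holds. A minimal sketch, with illustrative data shapes rather than any product's API:

```python
def drift(approved: dict[str, set[str]],
          actual: dict[str, set[str]]) -> dict[str, set[str]]:
    """Report permissions each agent holds beyond its approved baseline.

    approved: agent id -> permissions granted through a reviewed workflow
    actual:   agent id -> permissions the agent currently holds
    Agents absent from `approved` are treated as having an empty baseline,
    so everything they hold is flagged.
    """
    report = {}
    for agent, held in actual.items():
        extra = held - approved.get(agent, set())
        if extra:
            report[agent] = extra
    return report
```

Run against the month-nine agent from the timeline, the informally added write access, the cached admin token, and the sub-agent's inherited grants would all surface as drift, which is exactly the answer the auditor's "who has access to what, and was it approved?" question requires.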

What SOC 2 and ISO 27001 actually cover — and where they stop

Both frameworks have robust access management controls. The problem is not the quality of those controls — it is the scope. They were designed for a world where identities are human or static, where permissions are assigned through defined workflows, and where access reviews occur on a schedule that aligns with human employment cycles. AI agents break every one of those assumptions.

AI agent identity risk — coverage by existing compliance framework

Agent identity lifecycle

  • SOC 2 Type II: Not covered. No concept of provisioning or deprovisioning for non-human identities

  • ISO 27001: Partial. A.9.2 covers user registration; it does not address the autonomous agent lifecycle

  • What's actually needed: Agent-specific identity registry, automated deprovisioning, and human sponsor assignment per agent

Ephemeral credentials

  • SOC 2 Type II: Not covered. Credential management assumes long-lived, human-assigned tokens

  • ISO 27001: Not covered. A.9.4 covers system access; it does not address session-scoped agent credentials

  • What's actually needed: Just-in-time credential issuance, automatic expiry after task completion, no static API keys for agents

Privilege drift

  • SOC 2 Type II: Not covered. Access reviews target human identities; no mechanism for incremental agent permission accumulation

  • ISO 27001: Not covered. A.9.5 covers access rights reviews for users; no equivalent for autonomous agents

  • What's actually needed: Continuous permission scope monitoring, automated alerts on new access grants, and agent-specific quarterly reviews

Sub-agent delegation

  • SOC 2 Type II: Not covered. No framework concept for one AI system provisioning and delegating to another

  • ISO 27001: Not covered

  • What's actually needed: Anti-escalation policies, sub-agent provisioning through defined workflows, and delegation chain logging

Shadow agent inventory

  • SOC 2 Type II: Partial. Asset inventory requirements exist; AI agents deployed outside IT are invisible to them

  • ISO 27001: Partial. A.8.1 covers asset management; shadow AI agents are not discoverable through standard methods

  • What's actually needed: Active agent discovery across all environments, API key scanning, OAuth token mapping, LLM call detection

Non-deterministic access

  • SOC 2 Type II: Not covered. All access controls assume predictable, deterministic system behavior

  • ISO 27001: Not covered

  • What's actually needed: Behavioral baselining, runtime anomaly detection, purpose-aware monitoring beyond permission-aware monitoring

Cross-agent trust chains

  • SOC 2 Type II: Not covered. No framework concept for inter-agent delegation and trust chain validation

  • ISO 27001: Not covered

  • What's actually needed: On-behalf-of token exchange at trust boundaries, trust chain logging, and confused deputy attack prevention

Human-in-the-loop gates

  • SOC 2 Type II: Partial. Separation of duties controls exist; no equivalent for approving high-risk autonomous agent actions

  • ISO 27001: Partial. Change management controls cover some scenarios; autonomous agent actions are a new category

  • What's actually needed: Mandatory human approval gates for irreversible agent actions — financial, data export, production deployment

The bottom line: Of the eight agentic identity risk categories, five have zero coverage in either SOC 2 or ISO 27001. The three with partial coverage are addressed only at the surface — the underlying frameworks were not designed for the non-deterministic, self-propagating, trust-boundary-crossing behavior of autonomous AI agents. Having a clean SOC 2 Type II and ISO 27001 certification does not mean your AI agent identity posture is governed. It means your human identity posture is.

The five governance layers your AI agent identity posture needs

The 2026 vendor landscape — Aembit, CyberArk, Saviynt, Strata, and others — is converging on five core capabilities for non-human identity governance. These are the layers that need to exist before your first AI-specific enterprise security review.

Layer 1

Discovery & Inventory

You cannot govern what you cannot see. Every AI agent — including shadow agents deployed outside IT governance — must be cataloged with its access scope, credential type, and human sponsor.

SOC 2 / ISO 27001: Partial coverage only

Layer 2

Authentication & Ephemeral Credentials

Replace static API keys with short-lived, task-scoped tokens generated at invocation and destroyed after completion. Cryptographic agent identity — not shared service accounts.

SOC 2 / ISO 27001: Not covered

Layer 3

Dynamic, Task-Scoped Authorization

Static RBAC was built for humans with predictable job functions. Agents need permissions evaluated per-request, scoped to the exact action being taken, with automatic expiry and anti-escalation enforcement.

SOC 2 / ISO 27001: Not covered

Layer 4

Behavioral Monitoring

Authentication says the agent is who it claims to be. Authorization allows this action. Behavioral monitoring answers: Is this agent doing what it should? Baseline normal patterns and alert on drift.

SOC 2 / ISO 27001: Not covered
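As a rough illustration of the baselining idea, a naive z-score check on a single activity metric (say, API calls per hour) is enough to show the shape of the control; a production system would baseline many signals per agent, and the threshold here is an assumption, not a recommendation:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float,
                 threshold: float = 3.0) -> bool:
    """Alert when the current observation deviates more than `threshold`
    standard deviations from the agent's historical baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # Perfectly flat baseline: any change at all is a deviation.
        return current != mu
    return abs(current - mu) / sigma > threshold
```

An agent that normally makes around a hundred calls per hour and suddenly makes thousands trips the alert; normal variation does not.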

Layer 5

Human Sponsor Accountability

Every agent must be tied to an accountable human owner. If the owner changes roles or leaves, the agent's access changes with them. Agent actions must trace back to the human who authorized the agent's existence.

SOC 2 / ISO 27001: No concept exists

What this means for your compliance posture and your deals

Enterprise buyers evaluating AI SaaS vendors in 2026 are beginning to ask questions that sit squarely in this gap. Not "do you have SOC 2?" That is table stakes. The new questions are:

  • How do you manage the identity and access scope of the AI agents in your product?

  • How do you prevent those agents from accumulating permissions beyond their intended function?

  • What happens if one of your agents is compromised — how quickly can you isolate it and reconstruct what it accessed?

  • Can you demonstrate that every autonomous action your agent takes is traceable back to an authorized user request?

  • Do you have a process for deprovisioning agents that are no longer needed?

Your SOC 2 report does not answer these questions. Your ISO 27001 certificate does not answer them. They are not failures of your compliance program; they are failures of the framework's scope. But the enterprise security team on the other side of your procurement call does not care about the distinction.


The competitive dynamic: The AI SaaS companies that close the most enterprise deals in the next two years will not just have SOC 2. They will have SOC 2 plus documented AI agent identity controls, an agent inventory, a credential policy, a privilege-drift audit, and a human-sponsor framework. That is the compliance posture that answers the questions enterprise buyers are already asking.

Where to start: a practical roadmap for AI agent identity governance

Identity governance for AI agents does not require replacing your existing IAM infrastructure. It requires extending it with a new layer — purpose-built for the non-deterministic, autonomous, trust-boundary-crossing behavior of AI agents.

Days 1–15: Discover what you have. Run a full agent discovery scan across cloud, on-premises, and SaaS environments. Catalog every AI agent — who deployed it, what systems it accesses, what credentials it holds, and whether it has a named human sponsor. Most organizations find agents they did not know existed. That inventory is the foundation of every control that follows.

Days 16–45: Eliminate static credentials. Every agent running on a long-lived API key is a breach waiting to happen. Move to ephemeral, task-scoped credentials issued at invocation time and destroyed after the session ends. This single change closes the most common vector for AI agent compromise: stolen static credentials that give persistent access to everything the agent was ever granted.
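To make "ephemeral, task-scoped" concrete, here is a minimal issuance-and-verification sketch using only Python's standard library. It is an HMAC-signed token, not a production credential broker; the secret handling, claim names, and five-minute TTL are all illustrative assumptions:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"rotate-me"  # in practice, fetched from a secrets manager, never hardcoded

def issue_token(agent_id: str, scope: list[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived token scoped to the exact permissions of one task."""
    payload = base64.urlsafe_b64encode(json.dumps(
        {"sub": agent_id, "scope": scope, "exp": time.time() + ttl_seconds}
    ).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    return payload.decode() + "." + sig.decode()

def verify_token(token: str, required_scope: str) -> bool:
    """Accept only unexpired tokens whose signature and scope both check out."""
    payload_b64, sig_b64 = token.split(".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(expected, sig_b64):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] > time.time() and required_scope in claims["scope"]
```

The key property is that a stolen token is useless within minutes and never grants more than the single task's scope, which is what breaks the "persistent access to everything" failure mode of static keys.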

Days 46–60: Implement task-scoped permissions. Audit the current permission scope of every agent in your inventory. Replace broad standing access with dynamic, per-request authorization evaluated at the moment of each action. Any permission that an agent holds but has not used in the past 30 days should be revoked.

Days 61–90: Build the oversight layer. Assign a named human sponsor to every agent. Set up behavioral monitoring to establish a baseline of normal activity and alert to deviations. Implement human approval gates for any agent action that involves irreversible financial transactions, data exports, production deployments, or customer communications. Create a quarterly agent access review cadence that mirrors your human access review process.

The goal is not perfection at day 90. It is evidence: documented inventory, credential policy, permission audit, and oversight framework. That documentation is what enterprise security teams need to see, and what no current compliance certification produces for them.

The result: identity governance as an enterprise deal accelerator

Non-human identities are the fastest-growing attack surface in enterprise security. AI agents, being autonomous, non-deterministic, and self-propagating, are the most ungoverned category within that surface. And enterprise buyers are beginning to ask specifically about them.

SOC 2 and ISO 27001 will remain essential. They establish the baseline trust that enterprise procurement depends on. But they were built for a world where identities are human, and actions are deterministic. That world no longer describes what your product does — and the companies that acknowledge that gap, document it, and close it will be the ones that win the enterprise AI deals of the next three years.

Identity dark matter is real. The governance infrastructure to bring it into the light exists. The question is which AI SaaS companies build it before their enterprise buyers start requiring it.
