
The Visibility Gap in Autonomous AI Agents

Published 02/24/2026


AI agents are quickly becoming autonomous digital actors embedded in enterprise workflows. Unfortunately, as organizations scale from dozens to hundreds of agents across clouds, platforms, and business units, the identity foundations inherited from human IAM are beginning to strain under new demands.

If you’re already experimenting with autonomous AI agents (or your business units are doing it for you), this topic from CSA’s Securing Autonomous AI Agents survey report (commissioned by Strata) should snap to the top of your priority list: discovery and traceability are clear blind spots.

 

The first question you should be able to answer

That question is: “What agents do we even have?”

Even as organizations expand their use of AI agents, most lack the visibility needed to manage them safely. Tooling is immature, and only 21% of organizations maintain a real-time registry or inventory of their agents. Another 32% rely on non-real-time records, 32% plan to build one within the next year, 8% have no registry at all, and 9% are unsure.

That’s a major identity governance gap.

A real-time inventory is the starting point for basic security questions:

  • Which agents exist today?
  • Where are they running (public cloud, private cloud, on-prem)?
  • What systems can they touch?
  • What credentials do they hold?
  • Who approved them? Who owns them now?

If you can’t answer these reliably, you’re flying blind in a threat environment where agents can operate continuously and at scale.

 

The registry problem

Even when organizations do track agents, they tend to have a patchwork approach that delivers partial visibility. Organizations can see some agents some of the time, but rarely in one place or in real time.

Registries are being maintained by:

  • A custom or standalone database (60%)
  • An identity provider (42%)
  • An internal service registry or fabric (34%)
  • A third-party agent platform or orchestration tool (17%)

This should sound familiar. It’s the same story we’ve lived through with cloud asset inventories, shadow SaaS, and machine identity sprawl, except now the “thing” you’re tracking can take actions, make decisions, and trigger workflows.

The survey report also highlights that agents are already highly distributed across environments. They run in public clouds (66%), on-prem (37%), in private clouds (36%), and in hybrid configurations (38%). So if your registry is “somewhere in a spreadsheet,” or “kind of in the IdP,” or “in a service catalog that only covers one platform,” the result is predictable: drift, gaps, and governance by guesswork.

 

“Who did what, and on whose behalf?” is still a hard question

Discovery is step one. Traceability, the ability to map what agents do and on whose behalf they act, is step two: the point where governance turns into accountability.

Only 28% of respondents can reliably trace agent actions to a human or system across all environments. 46% can do so only in some environments, 9% cannot at all, and 16% are unsure. This means that most organizations cannot consistently say who is responsible for an agent's actions.

Monitoring practices are uneven as well:

  • 45% have end-to-end session tracing
  • 43% use context-aware audit logging
  • 19% have none of these controls
  • 17% are unsure

Without unified tracing, enterprises struggle to answer basic questions of accountability. That undermines regulatory expectations around auditability and forensics.

 

The “human-in-the-loop” surge is a symptom, not a strategy

When visibility is weak, organizations compensate with manual controls. 68% of respondents rate human-in-the-loop (HITL) oversight as ‘essential’ or ‘very important’ (20% and 48%, respectively).

And what do they require humans to approve?

  • Accessing sensitive data (69%)
  • Making system changes (68%)
  • Approving financial transactions (62%)
  • Granting permissions (51%)
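The approval categories above amount to a policy gate in front of high-impact actions. A minimal sketch of what that gate might look like, assuming illustrative action names and policy set (none of these identifiers come from the report):

```python
from typing import Optional

# Action types that require human-in-the-loop approval, mirroring the
# survey's four categories. Names are hypothetical, not a standard.
HITL_REQUIRED = {
    "access_sensitive_data",   # required by 69% of respondents
    "make_system_change",      # 68%
    "approve_financial_txn",   # 62%
    "grant_permission",        # 51%
}

def requires_human_approval(action: str) -> bool:
    """Return True if this action type must be approved by a human."""
    return action in HITL_REQUIRED

def execute(action: str, approved_by: Optional[str] = None) -> str:
    """Run an agent action, blocking gated actions without an approver."""
    if requires_human_approval(action) and approved_by is None:
        return "blocked: human approval required"
    return "executed"
```

The point of the sketch is the bottleneck it makes visible: every gated action serializes through a person, which is exactly why HITL does not scale cleanly.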

This is a rational response: if you can’t consistently see or trace what an agent is doing, you add a person to reduce risk.

But HITL doesn’t scale cleanly. It becomes a bottleneck, a short-term safeguard that slows adoption. Agent governance has not yet reached continuous, auditable maturity.

 

Welcome to the “Time-to-Trust” phase

Many organizations are in a ‘Time-to-Trust’ phase. Full autonomy is still the goal, but most organizations are still building the visibility, auditability, and control mechanisms necessary to reach it.

This is a useful framing because it de-escalates the hype without dismissing the value. It acknowledges that agent systems are moving from concept to operational reality, but governance has to catch up.

“Catching up” means:

  • Continuous discovery (not annual inventories)
  • Traceable identity orchestration (not scattered logs)
  • Auditability built into workflows (not bolted on during incident response)

To safely unlock the potential of agents, organizations must invest in unified identity orchestration spanning discovery, authentication, authorization, and continuous traceability.

 

What this means for agent identity governance

A mature agent identity governance program should be able to:

 

1) Discover agents continuously

Agents aren’t all the same. Some are internally built, some are third-party vendor agents, and some are public SaaS agents introduced by users.

Discovery should help you separate:

  • Sanctioned vs. unsanctioned
  • Production vs. pilot
  • Agents with privileged access vs. limited access
  • Agents that can call tools and execute actions vs. agents that only generate outputs
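Those four axes of separation can be captured as classification tags applied at discovery time. A sketch, with illustrative enum values and a hypothetical triage rule:

```python
from enum import Enum
from dataclasses import dataclass

class Sanction(Enum):
    SANCTIONED = "sanctioned"
    UNSANCTIONED = "unsanctioned"

class Stage(Enum):
    PRODUCTION = "production"
    PILOT = "pilot"

class Access(Enum):
    PRIVILEGED = "privileged"
    LIMITED = "limited"

class Capability(Enum):
    EXECUTES_ACTIONS = "executes_actions"   # can call tools and trigger workflows
    GENERATES_OUTPUT = "generates_output"   # only produces outputs

@dataclass
class DiscoveredAgent:
    name: str
    sanction: Sanction
    stage: Stage
    access: Access
    capability: Capability

    def is_high_risk(self) -> bool:
        # Illustrative triage rule (an assumption, not from the report):
        # unsanctioned agents that hold privileged access and can execute
        # actions are flagged first.
        return (self.sanction is Sanction.UNSANCTIONED
                and self.access is Access.PRIVILEGED
                and self.capability is Capability.EXECUTES_ACTIONS)
```

Tagging on these axes lets discovery feed triage directly, instead of producing a flat list of names.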

 

2) Establish a real-time registry that’s actually authoritative

A registry needs to be more than a list. It should answer who owns the agent, what environment it runs in, what it’s allowed to access, and how it authenticates.

Without an authoritative registry, your governance becomes reactive and fragmented.
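One way to picture "more than a list" is a record that answers each of those questions plus a queryable store around it. A minimal sketch, assuming hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                 # accountable human or team
    environment: str           # e.g. "public-cloud", "on-prem"
    allowed_scopes: list       # systems and data the agent may access
    auth_method: str           # e.g. "oidc-client-credentials", "mtls"
    last_seen: datetime        # refreshed by continuous discovery

class AgentRegistry:
    """An authoritative registry is a queryable source of truth, not a list."""

    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, rec: AgentRecord) -> None:
        self._records[rec.agent_id] = rec

    def owner_of(self, agent_id: str) -> Optional[str]:
        rec = self._records.get(agent_id)
        return rec.owner if rec else None
```

The `last_seen` field is what separates a real-time registry from a static inventory: if discovery stops refreshing it, the record is stale and governance is back to guesswork.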

 

3) Map agent actions back to accountability

Many organizations cannot reliably map an agent’s actions back to a human sponsor, leading to accountability gaps.

Every meaningful action should be attributable to the:

  • Agent identity
  • Initiating human/system
  • Authorization context at the time of action
  • Environment (and policy set) where it ran
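The four attribution fields above can be made concrete as a structured audit event. A sketch, with illustrative field names rather than any standard schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentAuditEvent:
    agent_identity: str          # which agent acted
    on_behalf_of: str            # initiating human or system
    authorization_context: str   # e.g. token scope or policy decision at action time
    environment: str             # where it ran, and under which policy set
    action: str

    def to_log_line(self) -> str:
        """Serialize as structured JSON so traces stay machine-queryable."""
        return json.dumps(asdict(self), sort_keys=True)
```

Emitting every meaningful action in this shape is what makes "who did what, and on whose behalf?" answerable with a query instead of a forensic investigation.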

 

4) Make observability audit-friendly

“End-to-end session tracing” and “context-aware audit logging” show up as key capabilities.

But "audit-friendly" means more than voluminous: logs must be structured, retained, and tied to identity decisions and approvals.

 

If you can’t trace agents, you can’t trust agents

AI agents act on behalf of humans, accessing data and making autonomous decisions that carry real business impact. That’s why discovery and traceability are prerequisites for:

  • Enforcing policy consistently across hybrid environments
  • Reducing credential misuse and over-permissioning
  • Supporting compliance, forensics, and incident response
  • Scaling beyond a HITL bottleneck without crossing your risk threshold

Without continuous discovery and traceable identity orchestration, agent ecosystems will remain opaque: difficult to govern and impossible to fully trust.


 

Want the full data, charts, and findings?

The full survey report connects discovery and traceability to broader agent identity issues like static credentials, fragmented controls, and why IAM systems designed for human workflows are ill-suited to govern autonomous agents.

If you’re building (or inheriting) an agent program in 2026, Securing Autonomous AI Agents is worth a download and internal share.
