
Securing the Agentic Control Plane: A New Foundation for Trust in AI

Published 03/20/2026

Written by Jim Reavis, Co-founder and Chief Executive Officer, CSA.

Over the past decade, we’ve watched cloud computing reshape infrastructure, Zero Trust redefine security architecture, and artificial intelligence begin to influence nearly every aspect of business and society. Each of these shifts introduced new risks, but also new control mechanisms that allowed us to move forward with confidence.

What we are seeing now with agentic AI is different.

This is not simply another layer of technology. It is the emergence of autonomous systems that can take action, make decisions, and interact with other systems—often without direct human involvement. As organizations begin to deploy these capabilities at scale, the traditional boundaries of security start to break down. The question is no longer just whether a model is safe or accurate. It is whether an entire ecosystem of agents can be trusted to operate within defined boundaries over time.

That shift is what led us to define and focus on what we now call the Agentic Control Plane.

At its core, the Agentic Control Plane is about governing how autonomous agents exist and operate within digital environments. It encompasses identity, authorization, orchestration, runtime behavior, and ultimately, trust. While these concepts are familiar in traditional IT systems, applying them to non-human actors introduces a level of complexity that the industry is only beginning to understand.

The reality is that we are moving from a world where software executes instructions to one where systems initiate actions. That distinction is subtle, but it has profound implications. When agents can initiate workflows, access resources, and interact with other agents, we are effectively creating a new class of digital participants. These participants need identities, permissions, oversight, and accountability, just like human users—only at a scale and speed that far exceeds what we have dealt with before.
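To make the idea of agents as "digital participants" concrete, here is a minimal sketch of what an identity record for a non-human actor might look like. The field names (`agent_id`, `owner`, `scopes`) and the scope-check helper are illustrative assumptions, not part of any CSA specification:

```python
from dataclasses import dataclass

# Hypothetical sketch: a minimal identity record for a non-human actor.
# Every agent is bound to an accountable human or organizational owner
# and to an explicit set of granted permissions, mirroring human IAM.
@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str            # stable, unique identifier for the agent
    owner: str               # accountable principal behind the agent
    scopes: frozenset        # permissions explicitly granted to it

def can_act(identity: AgentIdentity, scope: str) -> bool:
    """An agent may act only within its explicitly granted scopes."""
    return scope in identity.scopes

billing_bot = AgentIdentity(
    agent_id="agent-042",
    owner="finance-team",
    scopes=frozenset({"invoices:read"}),
)
```

The point of the sketch is the binding: an action can always be traced back through the agent's identity to an accountable owner, and anything outside the granted scopes is denied.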

This is why we launched the CSAI Foundation.

The Cloud Security Alliance has spent years developing guidance, frameworks, and certifications to help the industry adopt new technologies securely. Through initiatives like the AI Controls Matrix, STAR for AI, and the TAISE certificate program, we have built a strong foundation for AI security. What became clear over the past year, however, is that guidance alone is not enough for the agentic era. We need to move from defining best practices to actively operating the systems that enable trust.

That is the role CSAI is intended to play.

Rather than approaching this as a single program or framework, we organized CSAI around a set of capabilities that together form the foundation of the Agentic Control Plane. Each of these areas addresses a different dimension of the problem, but they are tightly interconnected.

One of the most immediate challenges is visibility. Organizations deploying agents often have limited insight into how those agents behave once they are in operation, particularly when they interact with external systems or other agents. Without that visibility, risk becomes difficult to quantify and even harder to manage. This is the motivation behind the AI Risk Observatory, which is focused on creating real-time insight into agentic activity and translating that into actionable security intelligence.
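The raw material for that kind of visibility is structured telemetry about what agents actually do at runtime. The sketch below is purely illustrative, with an invented event shape rather than any real AI Risk Observatory schema, but it shows the principle: record every agent action as a structured event, then aggregate per agent:

```python
from collections import Counter
from datetime import datetime, timezone

# Illustrative only: an in-memory event log standing in for the kind of
# telemetry pipeline an observatory-style service would aggregate.
events = []

def record_event(agent_id: str, action: str, target: str) -> dict:
    """Capture one agent action as a structured, timestamped event."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
    }
    events.append(event)
    return event

def actions_by_agent(agent_id: str) -> Counter:
    """Summarize what a given agent has actually done so far."""
    return Counter(e["action"] for e in events if e["agent_id"] == agent_id)

record_event("agent-7", "http_get", "api.example.com")
record_event("agent-7", "file_read", "/etc/config")
record_event("agent-9", "http_get", "api.example.com")
```

Once activity is captured in this form, questions like "which agents touch external systems, and how often?" become queries over data rather than guesswork.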

At the same time, we need to establish a shared understanding of how agents should be designed and governed. This is where best practices come into play, but not in the abstract sense. We are looking at practical guidance around identity-first design, runtime authorization, and the classification of agent capabilities. If organizations are going to rely on agents to perform meaningful tasks, they need a clear model for defining what those agents are allowed to do and how those permissions are enforced.
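One way to picture capability classification plus runtime authorization is a deny-by-default policy check. The capability classes and policy table below are invented for illustration; the structure, not the specific names, is the point:

```python
# Hedged sketch of deny-by-default runtime authorization.
# Actions are grouped into capability classes, and each agent is
# granted classes rather than raw actions.
CAPABILITY_CLASSES = {
    "read_only": {"search", "summarize"},
    "write": {"send_email", "update_record"},
    "privileged": {"delete_record", "spend_funds"},
}

POLICY = {
    "research-agent": {"read_only"},
    "ops-agent": {"read_only", "write"},
}

def authorize(agent: str, action: str) -> bool:
    """Allow an action only if a class granted to the agent contains it."""
    granted = POLICY.get(agent, set())  # unknown agents get nothing
    return any(action in CAPABILITY_CLASSES[c] for c in granted)
```

Because the default is an empty grant, an unregistered agent, or one attempting an action outside its classes, is refused without any special-case logic.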

Education is another critical piece of the puzzle. One of the consistent lessons from previous technology shifts is that security often lags adoption, not because solutions don’t exist, but because the workforce is not prepared to implement them effectively. With agentic AI, the gap is even more pronounced. We are not just training security professionals; we are also helping executives, developers, and even students understand how to think about autonomous systems in a responsible way.

There is also a governance dimension that cannot be ignored. Many of the decisions around agentic AI will ultimately be made at the executive and board level, yet the language and frameworks for discussing these risks are still emerging. Through our CxOtrust initiative, we are working to bridge that gap by translating technical risk into business context and helping leaders make informed decisions about adoption.

Perhaps the most important element, however, is assurance. Trust does not scale unless it can be verified. This is an area where CSA has a long history through the STAR program, and we are extending that model into the AI domain. By combining established standards with AI-driven analysis through initiatives like Valid-AI-ted, we are beginning to move toward continuous, rather than point-in-time, assurance of agent behavior.
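The difference between point-in-time and continuous assurance can be sketched in a few lines: instead of a one-off audit verdict, each observed action is re-checked against policy and folded into a rolling score. The scoring rule below is a deliberately simple assumption for illustration, not a Valid-AI-ted algorithm:

```python
def trust_score(observations, allowed_actions, window=10):
    """Fraction of the most recent observed actions that were policy-compliant.

    A rolling window means the score reflects current behavior, not a
    certification issued once and never revisited.
    """
    recent = observations[-window:]
    if not recent:
        return 1.0  # no evidence yet; the caller decides how to treat this
    compliant = sum(1 for action in recent if action in allowed_actions)
    return compliant / len(recent)

history = ["read", "read", "write", "read", "delete"]
score = trust_score(history, allowed_actions={"read", "write"})
```

Here four of the five recent actions are compliant, so the score is 0.8; a fresh deviation would lower it immediately rather than waiting for the next audit cycle.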

Finally, we are investing in what comes next. The pace of change in AI is such that any static framework will quickly become outdated. Through initiatives like CSA Pod and our work on agent certification and catastrophic risk research, we are creating environments where new ideas can be tested, observed, and refined in real time. This allows us to stay ahead of the curve rather than reacting to it.

Taken together, these efforts are not just a collection of programs. They represent an attempt to define and build a new layer of infrastructure for the digital economy—one that is specifically designed to support autonomous systems.

If I were to simplify what we are trying to achieve, it comes down to three fundamental questions:

  • How do we establish identity and accountability for non-human actors?
  • How do we enforce boundaries and permissions in dynamic, autonomous environments?
  • How do we continuously measure and validate trust at scale?

These are not easy questions, and there are no complete answers yet. But they are the right questions, and addressing them is essential if we want to realize the benefits of agentic AI without introducing unacceptable levels of risk.

What makes this moment particularly interesting is that we are still early. The agentic ecosystem is forming, standards are still evolving, and there is an opportunity to shape how this space develops in a way that prioritizes security and trust from the outset. That is a rare opportunity in technology, and one that we should take seriously.

CSAI is designed to be a collaborative foundation for that effort. We are bringing together cloud providers, enterprises, AI developers, auditors, and regulators to work toward a common goal. This is not about controlling the ecosystem, but about enabling it to grow in a way that is sustainable and trustworthy.

As we look ahead, I believe the concept of the Agentic Control Plane will become as fundamental as identity or network security is today. It will not be something most users think about, but it will underpin how autonomous systems operate safely and effectively.

We are at the beginning of that journey.

And if we get this right, we will not just secure AI—we will create the conditions for it to be trusted at scale.
