Agentic AI and Zero Trust
Published 08/07/2025
Agentic AI is a different kind of AI. It’s not like the generative AI everyone’s talking about—the one that stitches together an answer based on what it knows, or guesses when it doesn’t. That’s great for content creation, for generating reports, for summarizing data, or for writing code. But that’s not what Agentic AI is here to do. Agentic AI isn’t about crafting answers. It’s about taking action. It’s about getting things done. Think of it as execution-first AI. It doesn’t just sit back and respond—it goes out, retrieves data, connects to APIs, spins up workflows, calls other Agentic AIs, and gets to work.
This is where things start to click, especially when you start tying this back to non-human identities. We’ve been talking about non-human identities for a while now—bots, service accounts, machine identities, whatever you want to call them. But what happens when those identities aren’t just holding credentials, they’re actually doing something with them? What happens when they become autonomous actors in your ecosystem?
This is where Agentic AI becomes real. And if we’re going to deploy Agentic systems, we’ve got to step back and ask: what are they doing? What are the tasks they’re supposed to perform? What systems do they talk to? Let’s say you’ve got an Agentic AI managing travel for a corporate executive. Does it need to pull data from TripAdvisor? Check flight inventory on Delta.com? Does it talk to an internal travel AI that handles preferred pricing or calendar integration? Is it reaching out to Marriott to book rooms, or maybe even tapping into another Agentic system to request payment authorization?
From an identity lens, an Agent is just another principal. Human or non-human, it’s still making requests. And yes—PARC still applies: Principal, Action, Resource, Condition. Just because it’s an Agentic AI doesn’t mean we throw out the model. In fact, we double down on it.
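To make the PARC framing concrete, here’s a minimal sketch of what such a request might look like as a data structure. The field names and values are illustrative assumptions for this post, not any specific product’s schema:

```python
from dataclasses import dataclass, field

# A minimal, illustrative PARC request: Principal, Action, Resource, Condition.
# All names here are assumptions for this sketch, not a real policy schema.
@dataclass(frozen=True)
class AccessRequest:
    principal: str   # who is asking, e.g. "agent:travel-assistant"
    action: str      # what it wants to do, e.g. "book_flight"
    resource: str    # what it acts on, e.g. "api:delta/flights"
    condition: dict = field(default_factory=dict)  # context: on-behalf-of, budget, time

request = AccessRequest(
    principal="agent:travel-assistant",
    action="book_flight",
    resource="api:delta/flights",
    condition={"on_behalf_of": "exec:X", "budget_usd": 2500},
)
```

Whether the principal is a human or an Agentic AI, the request shape is the same—which is exactly why the model still holds.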
Let’s break it down. You’ve got Executive X, a Delta Medallion flyer who only stays at Marriott properties. That preference isn’t just convenience—it becomes policy. So now, the Agentic AI acting on their behalf is scoped accordingly. It can search Delta flights, book Marriott rooms—but not random alternatives. Conditions get baked in. Where’s the destination? What’s the date range? Do they need a rental car? What’s the budget limit? Does this require approval? Does the Agentic AI need to reach out to a payment gateway or HR system to validate travel authorization?
This is where Zero Trust meets automation. Agentic AI becomes the principal. Delta APIs, Marriott booking engines, internal travel systems—those are the resources. The conditions? Those are the rules: when, where, how, and if the Agent can do what it’s trying to do.
And here’s the key: those conditions aren’t static. They’re dynamic, contextual, and they can be tied directly to the executive as part of the principal definition itself. That means the policy doesn’t just evaluate what the Agentic AI is doing, but also evaluates who it’s doing it for. If the executive only travels during business hours, or only flies Delta and stays at Marriott, those preferences are encoded as conditional constraints. The Agent’s access to systems, actions, and data is shaped by the executive’s identity profile, risk posture, and preferences.
It’s not just “can this Agent do X?” It’s “can this Agent do X, for this executive, under these defined conditions, right now, with this risk level, and inside this operational boundary?”
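One way to picture that full contextual check is a condition evaluator that consults the executive’s identity profile alongside the request itself. This is a hedged sketch—the profile fields, preference lists, and budget threshold are invented for illustration:

```python
# Illustrative only: evaluate an agent's request against the executive's
# encoded preferences and operational boundaries. Field names and
# thresholds are assumptions for this sketch.
def allowed(request: dict, executive_profile: dict) -> bool:
    # The agent may only act for the executive it is scoped to.
    if request["on_behalf_of"] != executive_profile["id"]:
        return False
    # Preferences become policy: only the preferred airline and hotel chain.
    if request["action"] == "book_flight" and \
            request["carrier"] not in executive_profile["preferred_carriers"]:
        return False
    if request["action"] == "book_hotel" and \
            request["chain"] not in executive_profile["preferred_chains"]:
        return False
    # Conditions: stay inside the budget boundary.
    if request.get("amount_usd", 0) > executive_profile["budget_usd"]:
        return False
    return True

exec_x = {
    "id": "exec:X",
    "preferred_carriers": ["Delta"],
    "preferred_chains": ["Marriott"],
    "budget_usd": 2500,
}

# A Delta booking inside budget passes; a different carrier does not.
ok = allowed({"on_behalf_of": "exec:X", "action": "book_flight",
              "carrier": "Delta", "amount_usd": 900}, exec_x)
denied = allowed({"on_behalf_of": "exec:X", "action": "book_flight",
                  "carrier": "United", "amount_usd": 900}, exec_x)
```

Note that the executive’s preferences aren’t hard-coded into the agent—they live in the principal definition, so the same agent behaves differently for a different executive.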
That’s where trust shifts from a one-time decision to a real-time policy check. This is enforcement at the speed of automation. It’s Zero Trust meeting orchestration—where identities (human or not) are continuously evaluated, and the boundaries are enforced not just by role or access, but by the full context of the principal.
Now layer in a Policy Decision Point (PDP), which sits at the center of this architecture. It’s watching everything in real time. It’s evaluating context, permissions, and behavior. It’s deciding whether that Agentic AI can execute this action on that system, under these conditions. It’s not about static rules—it’s dynamic, adaptive, and tightly governed.
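As a sketch of the PDP’s role—again, an illustration under assumed names and thresholds, not a reference implementation—the decision point combines static grants with dynamic context (risk score, time of day) and re-evaluates on every single request:

```python
from datetime import datetime, timezone

# Hypothetical PDP sketch: static policy plus dynamic context, evaluated
# per request. Risk threshold, hours, and field names are assumptions.
def decide(request: dict, context: dict) -> str:
    if context["risk_score"] > 0.7:            # behavior looks anomalous right now
        return "Deny"
    hour = context["now"].hour
    if not (8 <= hour < 18):                   # business-hours operational boundary
        return "Deny"
    # Static scoping: is this action on this resource in the agent's grants?
    if (request["action"], request["resource"]) not in context["granted"]:
        return "Deny"
    return "Permit"

context = {
    "risk_score": 0.2,
    "now": datetime(2025, 8, 7, 10, 30, tzinfo=timezone.utc),
    "granted": {("book_flight", "api:delta/flights")},
}
decision = decide({"action": "book_flight",
                   "resource": "api:delta/flights"}, context)
```

The same request that gets a Permit at 10:30 with a low risk score gets a Deny the moment the risk score spikes—trust is re-earned on every call, not granted once.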
This is the heart of how we manage automation in a Zero Trust world. Agentic AI doesn’t get a free pass just because it’s “smart.” It operates within clearly defined boundaries. And those boundaries are constantly evaluated because trust isn’t permanent. It’s earned. Over and over again.
Agentic AI isn’t some mystical new identity; it’s an old identity wearing a new hat—and it must be governed and secured!