Agentic AI, MCP, and the Identity Explosion You Can’t Ignore
Published 07/10/2025
Written by Itzik Alvas, Entro.
In late 2024, Anthropic introduced the Model Context Protocol (MCP), an open standard that lets AI agents interface with external systems like GitHub, Slack, Postgres, and more. It’s like USB-C for AI: plug in once, connect to anything.
It’s elegant, extensible, and open source.
But it’s also powering a new class of intelligent, autonomous software that brings its own set of security challenges, especially when it comes to identity.
What Is Agentic AI?
Agentic AI refers to systems—often powered by large language models (LLMs)—that can make autonomous decisions, interact with external tools, and carry out tasks with minimal human oversight. These aren’t just passive chatbots; they’re active participants in your infrastructure.
With the right permissions, an agent can:
- File GitHub issues or merge PRs
- Query databases or analyze logs
- Send Slack messages or triage tickets
- Modify cloud resources
The value is real. So is the risk.
Because behind every one of these actions is a credential: a key, a token, a service account. In short, a non-human identity (NHI).
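To make that concrete, here is a minimal sketch of what “file a GitHub issue” looks like once an agent actually executes it. The repository is hypothetical and the token source is an assumption, but the shape is the same for any tool action: an API call authenticated by a secret the agent holds.

```python
import os

import requests

# Behind "file a GitHub issue" sits a credential. The repo is hypothetical,
# and a real deployment would pull the token from a secrets manager rather
# than an environment variable, let alone source code.
token = os.environ["GITHUB_TOKEN"]

resp = requests.post(
    "https://api.github.com/repos/acme/payments/issues",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    json={"title": "Flaky test in CI", "body": "Opened by an automated agent."},
    timeout=10,
)
resp.raise_for_status()
print("Created:", resp.json()["html_url"])
```

As far as GitHub is concerned, that token is the agent’s identity: whoever created it, whatever it can reach.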
MCP Makes It Easy to Connect, But Not to Secure
MCP uses a client-server model: the agent (client) connects to a tool server via JSON-RPC 2.0, typically over stdio or HTTP. The tool server defines what the agent can access.
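For a sense of what that looks like on the wire, here is a sketch of the JSON-RPC 2.0 request an MCP client sends to invoke a tool. The tools/call method name comes from the MCP specification; the tool name and arguments are made up for illustration.

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 request. "tools/call" is the
# method MCP defines for invoking a server-side tool; the tool name and
# arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_issue",  # hypothetical tool exposed by a server
        "arguments": {"repo": "acme/payments", "title": "Flaky test in CI"},
    },
}

print(json.dumps(request, indent=2))
```

Notice that nothing in the message itself says which identity is acting. Authentication lives a layer down, in the transport or in whatever tokens the server was configured with.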
What it doesn’t do is define identity ownership, rotate credentials, or track privilege usage over time. Instead, it leans on existing mechanisms like OAuth 2.1, access tokens, and role-based scopes, all designed with human users in mind, not autonomous agents.
That leaves security teams asking hard questions:
- Who owns the agent and the credentials it uses?
- Are those credentials scoped to least privilege?
- Are they short-lived and rotated regularly?
- Is the agent’s behavior visible in IAM or audit logs?
In most environments, the answer is no.
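Answering even the rotation question usually means going to each platform directly. As a sketch, assuming some of your agents run on long-lived AWS IAM access keys, a few lines of boto3 will flag keys that have outlived a rotation policy:

```python
from datetime import datetime, timedelta, timezone

import boto3  # assumes AWS credentials are configured in the environment

MAX_KEY_AGE = timedelta(days=90)  # rotation threshold; pick your own policy

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

# Walk every IAM user (agent service accounts often live here) and flag
# active access keys that have outlived the rotation window.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = now - key["CreateDate"]
            if key["Status"] == "Active" and age > MAX_KEY_AGE:
                print(f"{user['UserName']}: key {key['AccessKeyId']} "
                      f"is {age.days} days old, past rotation policy")
```

Tooling can generalize this across clouds and SaaS, but the point stands: nothing in MCP will do it for you.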
AI Agents Are NHIs. Treat Them That Way
An agent might live in a terminal window or run as a cloud-native app, but it acts just like any other identity. It holds credentials, accesses systems, and makes decisions. And often, it does this continuously, across dozens of environments, without human-in-the-loop review.
The problem? Most security stacks don’t recognize an agent as an identity at all.
There’s no agent profile, no ownership metadata, and no lifecycle management. Developers spin up powerful assistants with real system access—and then move on. Credentials get hardcoded. Audit logs go dark. And the blast radius grows.
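The “credentials get hardcoded” failure mode is the easiest one to start checking for. This is only a sketch (the token patterns are rough approximations, and purpose-built secret scanners do far more), but it shows how little it takes to begin looking:

```python
import re
from pathlib import Path

# Rough patterns for a few well-known token formats; a sketch only,
# not a substitute for a real secrets scanner.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "Slack bot token": re.compile(r"xoxb-[A-Za-z0-9-]+"),
}

def scan(root: str) -> None:
    """Walk a source tree and report lines that look like hardcoded secrets."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")

if __name__ == "__main__":
    scan(".")
```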
This Isn’t Just an Integration Framework: It’s an Identity Shift
The adoption of protocols like MCP is accelerating. OpenAI, Microsoft, AWS, Stripe, and others are embracing standardized agent-to-system communication. That’s a good thing for productivity.
But it also means we’re creating AI-powered NHIs at scale, without fully grappling with what that means for security.
If every agent is effectively a privileged identity, then every integration is a potential liability unless managed like one.
So before your next LLM-powered assistant starts querying production databases or syncing your cloud environments, ask:
- Can I see this identity in my IAM system?
- Are its credentials stored securely and rotated?
- Is there a human accountable for its actions? (One way to encode these answers is sketched below.)
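As a sketch of what “yes” could look like on AWS: mint the agent a short-lived STS session instead of a static key, and tag it with an accountable owner so its activity is attributable in CloudTrail. The role ARN, session name, and tag values are hypothetical, and session tagging requires the role’s trust policy to allow sts:TagSession.

```python
import boto3

sts = boto3.client("sts")

# Mint a short-lived, attributable credential for an agent instead of a
# long-lived access key. Role ARN, session name, and tag values are
# hypothetical; session tags require the role's trust policy to allow
# sts:TagSession.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/agent-readonly",  # hypothetical
    RoleSessionName="mcp-agent-tickets",  # shows up in CloudTrail
    DurationSeconds=900,                  # 15 minutes, the STS minimum
    Tags=[
        {"Key": "owner", "Value": "alice@example.com"},  # accountable human
        {"Key": "workload", "Value": "slack-triage-agent"},
    ],
)["Credentials"]

# The agent uses these instead of a static key; they expire on their own.
print("expires:", creds["Expiration"])
```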
Because AI isn’t just talking anymore; it’s acting. And every action is tied to an identity.
The future of security isn’t just about protecting people. It’s about managing the machines that think and act like them.