
Publication Peer Review

Agentic AI Red Teaming Guide
Open Until: 04/27/2025

Red teaming for Agentic AI requires a specialized approach due to several critical factors. Agentic AI systems demand more comprehensive evaluation because their planning, reasoning, tool utilization, and autonomous capabilities create attack surfaces and failure modes that extend far beyond those present in standard LLM or generative AI models. Additionally, the non-deterministic behavior of agentic systems, combined with the intricate communication patterns emerging in multi-agent environments, introduces complexity that traditional red teaming methodologies aren't equipped to address. These unique challenges underscore the urgent need for industry-specific guidance on effectively red teaming agentic AI applications.


This project began as an internal research effort by DistributedApps.ai, with the objective of providing a practical guide with actionable steps for testing Agentic AI systems. It builds on the Cross Industry Effort on Agentic AI Top Threats, which was initially created by Ken Huang, leveraging research initiated by Vishwas Manral of Precize Inc. along with many contributors from the AI and cybersecurity community. This document revamps that work with a focus on testing the risk and vulnerability items documented in the Cross Industry Effort on Agentic AI Top Threats framework.


The repository for this framework is located here.


This red teaming guide expands upon the top threats documented in the above repository to include additional threats documented at https://github.com/precize/OWASP-Agentic-AI. Further threats will be analyzed and added as realistic risks associated with Agentic AI systems are identified.


As a continued community effort, this project has been adopted as a joint effort between the Cloud Security Alliance's AI Organizational Responsibilities Working Group and the OWASP AI Exchange. Additional contributors and reviewers from both CSA and the OWASP AI Exchange have joined the effort to publish this document.

Contribute to Peer Review