
Vulnerability Management Needs Agentic AI for Scale and Humans for Sense

Published 08/22/2025

Written by Paul Mote.

If we’re in AI’s Wild West, this much is clear: When it comes to vulnerability management, agentic AI technologies need human wranglers. (Though the humans need not ride horseback.) 

AI agents are upending vulnerability management by scaling up the identification of suspected software flaws. They can cover far more of an organization's attack surface, faster.

But humans still have the edge when it comes to validating business-critical vulnerabilities and discovering complex zero-days in customer environments. 


Agentic AI + human in the loop: The best of both worlds

Together, humans and AI agents can turn a new page on effective, secure vulnerability discovery and remediation. Defensive AI agents operate at machine speed and, when fed the right intelligence, have a fair shot at identifying and flagging a vulnerability before it even gets assigned a CVE. The core challenge here is the speed at which defenders can ingest intel and act on it. 

Left to their own devices, agents can also chase red herrings or turn up “CVSS 10” findings that prove to be noise (or are unexploitable in practice for reasons that elude AI training data). Bringing humans into the loop is essential to curb the untenable pile of so-called vulnerabilities that would accumulate if AI agents were given free rein to poke and prod, sans oversight, in enterprise environments.

While AI can undoubtedly help automate the “boring stuff,” like quickly identifying exposed SSH ports running old versions tied to known CVEs, complex vulnerabilities, those requiring multiple logical leaps or an understanding of unreferenced functions, are still largely beyond the current capabilities of AI agents. We mere humans, with our intuition and ability to make “aha!” connections, remain essential for discovering more intricate flaws.
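To make that “boring stuff” concrete, here is a minimal sketch of the kind of check an agent can automate reliably: grab an SSH server’s version banner and compare it against a table of releases tied to known CVEs. The host address and the version-to-CVE table are illustrative placeholders; a real tool would draw on a maintained vulnerability feed.

```python
import socket

# Illustrative mapping of OpenSSH release strings to known CVEs.
# A real tool would pull this from a maintained vulnerability feed.
KNOWN_VULNERABLE = {
    "OpenSSH_7.2": ["CVE-2016-6210"],   # username enumeration via timing (fixed in 7.3)
    "OpenSSH_7.4": ["CVE-2018-15473"],  # username enumeration (affects through 7.7)
}

def grab_ssh_banner(host: str, port: int = 22, timeout: float = 3.0) -> str | None:
    """Read the version banner an SSH server sends on connect."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            return sock.recv(256).decode(errors="replace").strip()
    except OSError:
        return None  # closed, filtered, or unreachable

def flag_known_cves(host: str) -> list[str]:
    """Return CVEs whose vulnerable version string appears in the banner."""
    banner = grab_ssh_banner(host)
    if banner is None:
        return []
    return [cve
            for version, cves in KNOWN_VULNERABLE.items()
            if version in banner
            for cve in cves]

if __name__ == "__main__":
    for host in ["203.0.113.10"]:  # placeholder address (TEST-NET-3)
        cves = flag_known_cves(host)
        if cves:
            print(f"{host}: banner matches known CVEs: {', '.join(cves)}")
```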

More alerts ≠ more security. The real goal is precision: identifying what’s exploitable, actionable, and truly risky.

Another significant risk with AI-powered vulnerability discovery is the potential for “slop”: a deluge of low-quality, unactionable reports. We’ve seen this in bug bounty programs, where AI outputs can hallucinate findings or pull in irrelevant code snippets. This amplifies an existing problem: vulnerability management teams are already underwater, struggling to prioritize and work through an ever-growing stack of known CVEs. More noise doesn’t help; it buries the critical signal that teams need to act on to protect their organizations.
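One way teams cut through that noise is to rank findings on exploitability evidence rather than raw severity alone. The sketch below is a simplified, hypothetical triage score that weights EPSS probability and CISA KEV membership alongside CVSS; the weights, field names, and CVE IDs are assumptions for illustration, not a standard formula.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str           # placeholder IDs below, not real CVEs
    cvss: float           # base severity, 0-10
    epss: float           # estimated exploitation probability, 0-1
    in_kev: bool          # listed in CISA's Known Exploited Vulnerabilities catalog
    asset_critical: bool  # touches a business-critical asset

def triage_score(f: Finding) -> float:
    """Hypothetical score: evidence of exploitation outweighs raw severity."""
    score = 0.3 * (f.cvss / 10) + 0.5 * f.epss
    if f.in_kev:
        score += 0.4   # confirmed in-the-wild exploitation
    if f.asset_critical:
        score *= 1.5   # business-context multiplier
    return round(score, 3)

findings = [
    Finding("CVE-0000-0001", cvss=9.8, epss=0.02, in_kev=False, asset_critical=False),
    Finding("CVE-0000-0002", cvss=7.5, epss=0.91, in_kev=True, asset_critical=True),
]

# The paper "CVSS 10" sinks; the actively exploited bug on a critical asset rises.
for f in sorted(findings, key=triage_score, reverse=True):
    print(f.cve_id, triage_score(f))
```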


AI expands discovery so security teams cover more ground

If AI can’t replace humans, the reverse is also true. Adversaries are already leveraging AI technologies to augment their offensive operations: they can research organizations, analyze attack surfaces, and match the right attacks to them exponentially faster. As a result, the time to exploit vulnerabilities in the wild is shrinking dramatically. Reports highlight that a significant percentage of vulnerabilities are automated and weaponized within 24 hours of public disclosure. This puts immense pressure on defensive teams already struggling to prioritize and remediate known vulnerabilities.

In short, that means we need AI just as much as it needs us. And we’ll need to make peace with our AI future: CISOs and their security teams will need to embrace the agentic culture shift. We can’t fully trust computers to make autonomous decisions on critical functions. But we’ll need to trust AI agents to scan enterprise environments and act quickly, even autonomously, to flag certain vulnerabilities before attackers can exploit them with their own AI tools. For defenders, that means building trust in AI outputs and integrating them into existing workflows while maintaining human oversight for verification. The goal isn’t full automation, but rather intelligent augmentation, allowing humans to focus on the high-value, complex tasks and giving AI the rest. 
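As a rough illustration of what that intelligent augmentation can look like in practice, the sketch below gates agent findings: high-confidence matches on known CVEs are ticketed automatically, while anything novel or uncertain lands in a human review queue. The thresholds and routing rules are assumptions, not a prescribed design.

```python
from enum import Enum

class Route(Enum):
    AUTO_TICKET = "auto_ticket"    # act at machine speed
    HUMAN_REVIEW = "human_review"  # an analyst validates first
    DISCARD = "discard"            # likely noise

def route_finding(confidence: float, matches_known_cve: bool, has_working_poc: bool) -> Route:
    """Hypothetical human-in-the-loop gate for agent-generated findings."""
    if matches_known_cve and confidence >= 0.9:
        return Route.AUTO_TICKET   # the "boring stuff": trust the agent
    if has_working_poc or confidence >= 0.5:
        return Route.HUMAN_REVIEW  # plausible, but needs human sense
    return Route.DISCARD           # below the noise floor

# A high-confidence version match is ticketed automatically;
# a novel, medium-confidence finding waits for a human.
assert route_finding(0.95, True, False) is Route.AUTO_TICKET
assert route_finding(0.60, False, False) is Route.HUMAN_REVIEW
assert route_finding(0.20, False, False) is Route.DISCARD
```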


About the Author

Paul Mote is vice president, solutions architects at Synack.

