AI agents are running hospital records and factory inspections. Enterprise IAM was never built for them.
A doctor in a hospital exam room watches as a medical transcription agent updates electronic health records, prompts prescription options, and surfaces patient history in real time. A computer vision agent on a manufacturing line is running quality control at speeds no human inspector can match. Both generate non-human identities that most enterprises cannot inventory, scope, or revoke at machine speed. That is the structural problem keeping agentic AI stuck in pilots. Not model capability. Not compute. Identity governance.

Cisco President Jeetu Patel told VentureBeat at RSAC 2026 that 85% of enterprises are running agent pilots while only 5% have reached production. That 80-point gap is a trust problem. The first questions any CISO will ask: which agents have production access to sensitive systems, and who is accountable when one acts outside its scope?

IANS Research found that most businesses still lack role-based access control mature enough for today's human identities, and agents will make it significantly harder. The 2026 IBM X-Force Threat Intelligence Index reported a 44% increase in attacks exploiting public-facing applications, driven by missing authentication controls and AI-enabled vulnerability discovery.

Michael Dickman, SVP and GM of Cisco's Campus Networking business, laid out a trust framework in an exclusive interview with VentureBeat that security and networking leaders rarely hear stated this plainly. Before Cisco, Dickman served as Chief Product Officer at Gigamon and SVP of Product Management at Aruba Networks.

Dickman said that the network sees what other telemetry sources miss: actual system-to-system communications rather than inferred activity. "It's that difference of knowing versus guessing," he said. "What the network can see are actual data communications … not, I think this system needs to talk to that system, but which systems are actually talking together."
That raw behavioral data, he added, becomes the foundation for cross-domain correlation, and without it, organizations have no reliable way to enforce agent policy at what he called "machine speed."

Dickman argues that agentic AI breaks a pattern he says defined every prior technology transition: deploy for productivity first, bolt on security later. "I don't think trust is one of those things where the business productivity comes first, and the security is an afterthought," Dickman told VentureBeat. "Trust actually is one of the key requirements. Just table stakes from the beginning."

When agents merely observe data and recommend decisions, the consequences stay contained. Execution changes everything. When agents autonomously update patient records, adjust network configurations, or process financial transactions, the blast radius of a compromised identity expands dramatically. "Now more than ever, it's that question of who has the right to do what," Dickman said. "The who is now much more complicated because you have the potential in our reality of these autonomous agents."

Dickman breaks the trust problem into four conditions. The first is secure delegation, which starts by defining what an agent is permitted to do and maintaining a clear chain of human accountability.

The second is cultural readiness; he pointed to alert fatigue as a case study. The traditional fix, Dickman noted, was to aggregate alerts so analysts see fewer items. With agents capable of evaluating every alert, that logic changes entirely. "It is now possible for an agent to go through all alerts," Dickman said. "You can actually start to think about different workflows in a different way. And then how does that affect the culture of the work, which is amazing."

The third is token economics: every agent's action carries a real computational cost. Dickman sees hybrid architectures as the answer, where agentic AI handles reasoning while traditional deterministic tools execute actions.
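The hybrid architecture Dickman describes can be sketched in a few lines. This is an illustrative pattern, not Cisco's implementation: the agent side reasons over an alert and proposes a remediation, while a deterministic, deny-by-default policy layer decides whether anything actually executes. All names (`ALLOWED`, `agent_propose`, `deterministic_execute`) are hypothetical.

```python
# Hypothetical allow-list: the only actions the deterministic layer may run.
ALLOWED = {"restart_service", "rotate_credentials"}

def agent_propose(alert: dict) -> str:
    """Stand-in for agentic reasoning: map an alert to a proposed remediation."""
    if alert.get("type") == "leaked_token":
        return "rotate_credentials"
    return "page_human"  # anything the agent can't classify escalates

def deterministic_execute(action: str) -> str:
    """Deterministic layer: cheap, auditable, and deny-by-default."""
    if action not in ALLOWED:
        return f"blocked: {action}"
    return f"executed: {action}"

print(deterministic_execute(agent_propose({"type": "leaked_token"})))   # executed: rotate_credentials
print(deterministic_execute(agent_propose({"type": "odd_traffic"})))    # blocked: page_human
```

The design choice is the point: the expensive, probabilistic component only ever proposes; the cheap, deterministic component is the sole path to execution, which also caps token cost per action.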
The fourth is human judgment. For example, his team used an AI tool to draft a product requirements document. The agent produced 60 pages of repetitive filler: technically responsive to the prompt, yet in need of extensive human revision before the output was relevant. "There's no substitute for the human judgment and the talent that's needed to be dextrous with AI," he said.

Most enterprise data today is proprietary, internal, and fragmented across observability tools, application platforms, and security stacks. Each domain team builds its own view. None sees the full picture. Network telemetry, Dickman argued, is what closes that gap.

That telemetry grows more valuable as IoT and physical AI proliferate. Computer vision agents analyzing shopper behavior and running factory-floor quality control generate highly sensitive data that demands precise access controls. "All of those things require that trust that we started with, because this is highly sensitive data around like who's doing what in the shop or what's happening on the factory floor," Dickman said.

"It's not only aggregation, but actually the creation of knowledge from the network," Dickman said. "There are these new insights you can get when you see the real data communications. And so now it becomes what do we do first versus second versus third?" That last question reveals where Dickman's focus lands: the strategic challenge is sequencing, not capability.

"The real power comes from the cross-domain views. The real power comes from correlation," Dickman said. "Versus just aggregation and deduplication of alerts, which is good, but it's a little bit basic."

This is where he sees the most common pitfall. Team A builds Agent A on top of Data A. Team B builds Agent B on top of Data B.
Each silo produces incrementally useful automation. The cross-domain insight never materializes.

Independent practitioners validate the pattern. Kayne McGladrey, an IEEE senior member, told VentureBeat that organizations are defaulting to cloning human user profiles for agents, and permission sprawl starts on day one. Carter Rees, VP of AI at Reputation, identified the structural reason. "A significant vulnerability in enterprise AI is broken access control, where the flat authorization plane of an LLM fails to respect user permissions," Rees told VentureBeat. Etay Maor, VP of Threat Intelligence at Cato Networks, reached the same conclusion from the adversarial side. "We need an HR view of agents," Maor told VentureBeat at RSAC 2026. "Onboarding, monitoring, offboarding."

Use this matrix to evaluate any platform or combination of platforms against the five trust gaps Dickman identified. Note that the enforcement approaches in the right column reflect Cisco's framework.

Dickman grounded his framework in a scenario from his own life. A family member recently broke an ankle, which put him in a hospital exam room watching a medical transcription agent update the EHR, prompt prescription options, and surface patient history in real time. The doctor approved each decision, but the agent handled tasks that previously required manual entry across multiple systems. The security implications hit differently when it is a loved one's records on the screen.

"I would call it do governance slowly. But do the enforcement and implementation rapidly," he said. "It must be done in machine speed."

It starts with agentic IAM, where each agent is registered with defined permitted actions and a human accountable for its behavior. "Here's my set of agents that I've built. Here are the agents. By the way, here's a human who's accountable for those agents," Dickman said. "So if something goes wrong, there's a person to talk to."
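A minimal sketch of what that agentic-IAM registration could look like in practice, assuming a simple in-memory registry (all identifiers, scopes, and the `AgentRegistry` API here are illustrative, not any vendor's actual schema): each agent carries an explicit allow-list of actions and a named accountable human, and authorization is deny-by-default.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """One registry entry: what the agent may do, and who answers for it."""
    agent_id: str
    owner: str                              # the accountable human
    allowed_actions: frozenset = field(default_factory=frozenset)

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, agent: AgentIdentity) -> None:
        self._agents[agent.agent_id] = agent

    def authorize(self, agent_id: str, action: str) -> bool:
        """Deny by default: unknown agents and out-of-scope actions both fail."""
        agent = self._agents.get(agent_id)
        return agent is not None and action in agent.allowed_actions

    def accountable_human(self, agent_id: str):
        """The 'person to talk to' when something goes wrong."""
        agent = self._agents.get(agent_id)
        return agent.owner if agent else None

registry = AgentRegistry()
registry.register(AgentIdentity(
    agent_id="transcribe-01",
    owner="dr.smith@example.org",            # hypothetical accountable human
    allowed_actions=frozenset({"ehr:read", "ehr:append-note"}),
))

print(registry.authorize("transcribe-01", "ehr:append-note"))  # True
print(registry.authorize("transcribe-01", "rx:prescribe"))     # False: out of scope
print(registry.accountable_human("transcribe-01"))             # dr.smith@example.org
```

The point of the sketch is the shape of the record, not the storage: an agent identity is incomplete without both a scoped action list and a human owner, which is exactly the pairing Dickman describes.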
That identity layer feeds microsegmentation — a network-enforced boundary Dickman says enforces least-privileged access and limits blast radius. "Microsegmentation guara…