Agentic AI, trust, and identity governance: from hospital records to factory floors
Two vivid scenes illustrate a growing structural problem in enterprise AI. In a hospital exam room, a medical transcription agent updates electronic health records in real time, surfaces patient history, and even suggests prescription options. On a factory floor, a computer vision agent runs quality control at speeds no human inspector can match. Both generate non-human identities that enterprises struggle to inventory, scope, or revoke at machine speed. This is not a problem of model capability or compute; it is a problem of identity governance and the trust architecture that makes production possible.
Cisco president Jeetu Patel told VentureBeat at RSAC 2026 that about 85% of enterprises are running agent pilots while only 5% have reached production. That 80-point gap is a trust problem. The first questions any CISO will ask are which agents have production access to sensitive systems, and who is accountable when one acts outside its scope. IANS Research found that most businesses still lack mature role-based access control for today's human identities, and agents will make the problem significantly harder. The 2026 IBM X-Force Threat Intelligence Index reported a 44% increase in attacks exploiting public-facing applications, driven by missing authentication controls and AI-enabled vulnerability discovery.
Security and networking leadership increasingly argue that the trust prerequisite is architectural, not just a tooling problem. Michael Dickman, SVP and GM of Cisco’s Campus Networking business, described a framework in which the network sees actual system to system communications rather than inferred activity. That raw behavioral data becomes the foundation for cross domain correlation, and without it organizations have no reliable way to enforce agent policy at machine speed.
Four trust conditions for agentic AI
- Secure delegation: define exactly what an agent is permitted to do and maintain a clear human owner who can be held accountable
- Cultural readiness: alert fatigue shifts when agents can review and act on alerts at machine speed, changing workflows and culture
- Token economics: every action by an agent has a real computational cost, suggesting a hybrid approach where reasoning is handled by agentic AI and execution by deterministic tools
- Human judgment: no substitute for human oversight, as shown when an AI drafting tool produced pages of repetitive filler while missing key context
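The secure-delegation condition above can be made concrete. The following is a minimal sketch, not a production design: it assumes a simple allow-list model, and names like `AgentIdentity` and the owner email are illustrative, not from any vendor API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A non-human identity with an explicit scope and an accountable human owner."""
    agent_id: str
    owner_email: str            # the human who answers for this agent's actions
    allowed_actions: frozenset  # the exact operations delegated to the agent

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Deny by default: an action is permitted only if explicitly delegated."""
    return action in agent.allowed_actions

# A hypothetical transcription agent scoped to two operations:
scribe = AgentIdentity(
    agent_id="transcription-agent-01",
    owner_email="clinical-it@example.org",
    allowed_actions=frozenset({"read_chart", "append_note"}),
)

assert authorize(scribe, "append_note")             # within delegated scope
assert not authorize(scribe, "write_prescription")  # outside scope: denied
```

The design choice worth noting is deny-by-default: revoking an agent means shrinking one allow-list tied to one accountable owner, rather than hunting down scattered credentials.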
The network view matters because most enterprise data are private, internal, and scattered across observability tools, application platforms, and security stacks. Each domain tends to build its own view, and none sees the full picture. The practical implication is a risk of overtrusting automated results when signals come from fragmented telemetry. The next frontier is cross domain visibility: stitching together network, security, and application telemetry into a shared data fabric so agents can be governed across domains, not just within silos.
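The cross-domain idea can be sketched in a few lines. This is an illustrative toy, not any vendor's data fabric: the event streams and field names (`agent`, `flow`, `alert`) are assumed for the example.

```python
# Join fragmented telemetry streams on a shared agent identity so that one
# governed view exists per agent, rather than one partial view per silo.
network_events = [{"agent": "qc-vision-01", "flow": "camera->inference"}]
security_events = [{"agent": "qc-vision-01", "alert": "anomalous egress"}]

def correlate(*streams):
    """Group events from every telemetry domain under the agent that produced them."""
    fabric = {}
    for stream in streams:
        for event in stream:
            fabric.setdefault(event["agent"], []).append(event)
    return fabric

view = correlate(network_events, security_events)
assert len(view["qc-vision-01"]) == 2  # both domains visible for one identity
```

The point of the sketch: policy decisions about `qc-vision-01` can now weigh network behavior and security alerts together, which no single silo could do alone.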
A real-world anecdote from Dickman offers a microsegmentation lesson. A family member breaks an ankle and lands in a hospital where a transcription agent handles data entry across multiple systems. The doctor reviews every decision, but the system now handles many steps automatically. Governance can move deliberately, but enforcement must move at machine speed. The identity layer, paired with microsegmentation, creates least-privilege access and can dramatically limit the blast radius when a compromised agent operates across systems.
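The blast-radius claim can be illustrated with a toy segmentation policy. This is a sketch under the assumption of a simple flow allow-list; the system names are hypothetical.

```python
# Hypothetical microsegmentation policy: each (source, destination) flow must
# be explicitly allowed; everything else is dropped at machine speed.
ALLOWED_FLOWS = {
    ("transcription-agent", "ehr-api"),
    ("transcription-agent", "audit-log"),
}

def flow_permitted(source: str, destination: str) -> bool:
    """Enforce least privilege at the network layer, per agent identity."""
    return (source, destination) in ALLOWED_FLOWS

# Normal operation passes; a compromised agent probing an unrelated
# system is blocked, containing the blast radius to its approved segments.
assert flow_permitted("transcription-agent", "ehr-api")
assert not flow_permitted("transcription-agent", "billing-db")
```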
Five priorities before agents reach production
- Force cross functional alignment now by defining what agentic AI should deliver across business units, IT, and security leadership
- Make IAM and PAM governance production ready for agents, with clear ownership and restrictions before scaling
- Adopt a platform approach to networking infrastructure to enable cross domain data sharing that fragmented point solutions cannot offer
- Design hybrid architectures where agentic reasoning handles decisions and deterministic tools execute actions, balancing trust and cost
- Choose two to three high value use cases and bake in RBAC, PAM, and microsegmentation from day one to prove trust and accelerate broader adoption
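The hybrid architecture in the fourth priority can be sketched as a simple action-proposal protocol. This is an assumed pattern for illustration, not a specific product's design: the agent spends tokens reasoning and *proposes* an action; a deterministic executor validates it against a registry of known tools and performs the side effect cheaply and auditably.

```python
# Registry of deterministic tools; only registered actions can ever execute.
EXECUTORS = {
    "close_alert": lambda alert_id: f"alert {alert_id} closed",
    "quarantine_host": lambda host: f"host {host} quarantined",
}

def execute_proposal(action: str, argument: str) -> str:
    """Run an agent-proposed action through a deterministic, auditable tool."""
    if action not in EXECUTORS:
        raise PermissionError(f"unregistered action: {action}")
    return EXECUTORS[action](argument)

# Reasoning is paid for once per decision; execution is a cheap lookup:
print(execute_proposal("close_alert", "A-1042"))  # → alert A-1042 closed
```

Splitting reasoning from execution addresses both trust (the agent cannot invent new side effects) and token economics (no model call is needed to carry out a decision already made).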
Ultimately, the argument is pragmatic: 85% of enterprises remain in pilots because they lack the identity governance, cross-domain visibility, and policy enforcement infrastructure needed for production. Whether organizations build on Cisco's platform or assemble their own stack, the throughline remains the same: identify all agents, assign accountable owners, secure the data flows with microsegmentation, and create a governance-to-enforcement pipeline that translates business intent into machine-level policy.