Shadow AI: Agentic AI

The AI agent in your workflow might be a backdoor you built yourself

The first two pieces in this series dealt with problems you could see if you looked hard enough: ungoverned tools showing up in browser tabs, and invoices that didn’t match the budget.
Those risks are real, but they’re also relatively containable once you know where to look.

This post tackles a different risk: the one that’s already inside your systems, operating with your credentials, and making decisions on your behalf, often without anyone realizing it was ever set up in the first place.

Welcome to the agentic AI era. And it’s moving considerably faster than most enterprise security models are prepared for.

[Image: holographic human figures managing server racks in a futuristic data center. From tools to actors – the risks of agentic AI]

From tools to actors

Shadow AI started as an app problem. An employee using ChatGPT to draft emails. A developer running GitHub Copilot on a personal subscription. A marketer using an unapproved content tool. These tools were passive: they required a human to initiate, review, and decide what to do with the output.

Agentic AI is categorically different. These are systems that plan, execute, and act: reading your email, querying your CRM, writing and deploying code, and triggering workflows, autonomously and at machine speed. They don’t wait for a human to hit send. That’s the point of them.

Gartner predicts that as many as 40% of enterprise applications will incorporate task-specific AI agents by the end of 2026. Deloitte anticipates that at least 75% of companies will use agentic AI to some extent by 2028. The adoption curve isn’t gradual — it’s already well underway, driven largely by the same bottom-up, unsanctioned pattern we’ve discussed in previous weeks.

The access problem nobody planned for

Here’s what makes agentic AI structurally different from every previous wave of shadow IT: agents require permissions. Real ones. To do useful work, an AI agent needs to read your files, access your databases, call your APIs, and authenticate against your internal systems. That means every agent introduced into an organization creates what security teams call a non-human identity. An entity with credentials, access scope, and the ability to act on that access autonomously.
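To make that concrete, here’s a minimal sketch of what a non-human identity amounts to in code. The field names are illustrative, not any particular IAM product’s schema; the point is that a credentialed agent is an entity your identity systems need to represent, and today often don’t.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class NonHumanIdentity:
    """What an agent looks like to an identity system, reduced to basics."""
    agent_name: str
    credentials: dict[str, str]          # tokens and keys the agent holds
    access_scope: list[str]              # what those credentials can reach
    created_at: datetime
    owner: str | None = None             # in practice, often nobody
    expires_at: datetime | None = None   # in practice, often never

ticket_bot = NonHumanIdentity(
    agent_name="support-ticket-router",
    credentials={"crm": "sk-live-example", "email": "oauth-refresh-example"},
    access_scope=["crm:read", "email:read", "workflow:trigger"],
    created_at=datetime(2025, 3, 1),
    # No owner, no expiry: the default state of most shadow agents.
)
```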

Legacy identity and access management systems were never designed for this. They were built around humans logging in, doing a thing, and logging out. An AI agent is always on, always credentialed, and can traverse systems at a speed and scale no human ever could.

The risk compounds when you factor in how these agents typically get stood up. A developer builds an internal workflow automation — connecting Slack, Jira, a customer database, and a code repository. It works well. It gets shared with the team. Nobody files a security review because it’s “just internal tooling.” The agent now has continuous access to production data, customer records, and source code, with no audit trail and under credentials that may never expire.
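In code, the “just internal tooling” pattern tends to look something like the sketch below. Every name here is hypothetical; what matters is the shape: one script, several long-lived credentials, direct reads against production data, and no audit trail anywhere.

```python
import os
import sqlite3

# Long-lived tokens, pulled from the environment and never rotated.
SLACK_TOKEN = os.getenv("SLACK_BOT_TOKEN", "")
JIRA_TOKEN = os.getenv("JIRA_API_TOKEN", "")
REPO_DEPLOY_KEY = os.getenv("REPO_DEPLOY_KEY", "")  # write access to the repo

def route_escalations() -> None:
    # Direct query against production customer data: no scoping layer,
    # no record of what was read or why.
    db = sqlite3.connect("customers.db")
    rows = db.execute(
        "SELECT id, email, plan FROM customers WHERE at_risk = 1"
    ).fetchall()
    for customer_id, email, plan in rows:
        ...  # file a Jira ticket, ping Slack, maybe push a fix branch
```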

“Compromised credentials, SSO platforms, or agent identities could enable large-scale service disruption or data exfiltration.”
— Recorded Future, Emerging Enterprise Security Risks of AI, 2026

The vibe coding accelerant

There’s a specific behavior pattern that makes this worse and deserves its own name: vibe coding. It’s the practice of asking an AI to generate functional code from loosely described intentions (“Build me something that automatically routes support tickets based on sentiment”) without fully reviewing what gets produced or how it connects to existing systems.

The output often works. That’s the problem. It works well enough to be deployed to production without a proper security review, defined access controls, or documentation that anyone else can audit later. Enterprises now have production AI code running in live environments that no human has fully read. And leadership across the board is pushing teams to do more and more of it, right up until something fails.

Shadow AI breaches already cost an average of $670,000 more than standard security incidents. Agentic AI, with its autonomous reach across systems, has the potential to amplify that damage by an order of magnitude: not necessarily through a single breach, but through compounding access failures that no one connects until it’s too late.

What good looks like

The honest answer is that most enterprises are not yet operating at the maturity level required by this threat. But a few principles are emerging from the organizations that are getting ahead of it.

Treat every agent as an identity, not a tool. The moment an AI system requires credentials and acts autonomously, it needs to be enrolled in your identity governance framework — with scoped permissions, expiry policies, and audit logging. The same rigor you’d apply to a human employee with system access.
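What enrollment can look like in practice: the sketch below assumes a simple in-house policy layer (nothing vendor-specific) where every grant is scoped, every grant expires, and every authorization decision is logged.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentGrant:
    """A scoped, expiring grant for one agent, with an audit trail."""
    agent: str
    scopes: frozenset[str]
    expires_at: datetime
    audit_log: list[str] = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        now = datetime.now(timezone.utc)
        allowed = now < self.expires_at and action in self.scopes
        # Every decision is logged, allowed or not.
        self.audit_log.append(f"{now.isoformat()} {action} allowed={allowed}")
        return allowed

grant = AgentGrant(
    agent="support-ticket-router",
    scopes=frozenset({"crm:read", "tickets:write"}),
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)
assert grant.authorize("crm:read")
assert not grant.authorize("crm:delete")  # out of scope, and logged anyway
```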

Define blast radius before you deploy. Before any agent goes into production, document what it can access, what it can trigger, and what the worst-case failure looks like. If you can’t answer those questions in five minutes, the agent isn’t ready to deploy.
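One lightweight way to enforce that: a blast-radius manifest checked before deployment. The field names below are illustrative, but the test is the same one described above; if the fields can’t be filled in honestly, the deploy doesn’t happen.

```python
# A hypothetical pre-deployment manifest: what the agent can access,
# what it can trigger, and what the worst-case failure looks like.
BLAST_RADIUS = {
    "agent": "support-ticket-router",
    "can_read": ["crm.contacts", "email.inbox"],
    "can_trigger": ["jira.create_ticket", "slack.post_message"],
    "worst_case": (
        "Mass-files bogus tickets and posts customer emails to a public "
        "Slack channel; cannot touch billing or source code."
    ),
}

def ready_to_deploy(manifest: dict) -> bool:
    # Deployment is blocked unless every question has a real answer.
    return all(manifest.get(k) for k in ("can_read", "can_trigger", "worst_case"))

assert ready_to_deploy(BLAST_RADIUS)
```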

Review the code, every time. Vibe coding is a productivity accelerant, no doubt. It’s also a security liability when AI-generated code is deployed straight to production without human review. This isn’t a knock on AI-assisted development. It’s a workflow discipline that the best engineering teams are already building into their processes.
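Here’s one way teams wire that discipline into the merge path: a gate that rejects AI-generated commits lacking a human sign-off. The “AI-Generated:” and “Reviewed-by:” trailers are a team convention assumed for this sketch, not a git standard.

```python
import subprocess

def unreviewed_ai_commits(rev_range: str = "origin/main..HEAD") -> list[str]:
    """Return commits marked AI-generated that carry no reviewer trailer."""
    # %x1f/%x1e are field and record separators; %B is the raw commit body.
    out = subprocess.run(
        ["git", "log", "--format=%H%x1f%B%x1e", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    offenders = []
    for record in out.split("\x1e"):
        if not record.strip():
            continue
        sha, _, body = record.partition("\x1f")
        if "AI-Generated:" in body and "Reviewed-by:" not in body:
            offenders.append(sha.strip())
    return offenders

if __name__ == "__main__":
    bad = unreviewed_ai_commits()
    if bad:
        raise SystemExit(f"Unreviewed AI-generated commits: {bad}")
```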

Build an agent registry. Just as last week’s post called for a single view of AI spend, you need a running inventory of every agent operating in your environment: what it does, what it accesses, who owns it, and when it was last reviewed. Most organizations don’t have this. Building it is the first step to governing what’s already running.
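A registry doesn’t need to start as a product; it can start as a table. The sketch below shows one minimal schema (fields are illustrative) plus the one query that matters most: which agents nobody has reviewed recently.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AgentRecord:
    """One row per agent: purpose, access, owner, and review date."""
    name: str
    purpose: str
    accesses: list[str]
    owner: str
    last_reviewed: date

REGISTRY = [
    AgentRecord("support-ticket-router", "routes tickets by sentiment",
                ["crm:read", "tickets:write"], "j.doe", date(2025, 11, 2)),
    AgentRecord("deploy-helper", "opens release PRs",
                ["repo:write"], "a.lee", date(2025, 4, 18)),
]

def overdue(registry: list[AgentRecord], max_age_days: int = 90) -> list[str]:
    # Flag any agent whose last review is older than the allowed window.
    cutoff = date.today() - timedelta(days=max_age_days)
    return [a.name for a in registry if a.last_reviewed < cutoff]

print(overdue(REGISTRY))  # the agents nobody has looked at lately
```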


The risks of ungoverned AI aren’t coming. They’re already here, operating inside organizations that haven’t yet built the frameworks to see them clearly. Agentic AI doesn’t change that story — it just raises the stakes considerably.

A passive AI tool that leaks data is a problem. An autonomous agent with system credentials and no oversight is a problem of a different category entirely. The organizations that recognize that distinction now will be significantly better positioned than the ones that learn it the hard way.

Part 1: From Shadow IT to Shadow AI
Part 2: Your AI Spend is Out of Control
