Agentic AI Security: Why Agents Need Least Privilege More Than Humans Ever Did

AI agents move fast and can be tricked. Old permission models can't keep up. Here's what agentic AI security and least privilege look like in practice.

Agentic AI security is the biggest challenge facing companies that put AI agents into production. The core problem is simple: agents get the same broad permissions that humans built up over years. But agents don't have the judgment or self-control that made those permissions safe enough for people. Role-based access control (RBAC) was built for humans. It breaks down for agents in ways that call for a completely different approach to least privilege.

Why Over-Permissioning Worked for Humans (but Fails for Agents)

Most permission systems today come down to roles. An engineer gets "admin." A support rep gets "viewer plus escalation." These groupings work because humans hold back. They have context. They use judgment. They don't want to get fired.

Graham Neray, co-founder and CEO of Oso, puts it plainly: "We tolerate a lot of over-permissioning in all the apps that we use because there's a finite limit on the amount of time that you or I have to do bad or stupid things."

That hidden trust has carried the industry for decades. Over-provisioned accounts are everywhere. Teams hand out broad access because it's the fastest way to unblock a developer or bring on a new hire. The security risk is real, but the damage stays small because humans are slow, careful, and generally want the company to succeed.

AI agents break all of those assumptions.

Agents don't follow fixed rules. Even with a clear instruction, an agent might read the goal wrong, find a shortcut you didn't expect, or get manipulated through prompt injection. It has no loyalty, no career goals, and no instinct to pause before running a dangerous command.

Speed alone changes everything. As Sasha Sinkevich, co-founder of YSecurity and Cyberbase.ai, said on the podcast: "If a human takes a day, an agent probably takes milliseconds to execute the same vulnerability." A misconfigured human account might cause a slow problem. A misconfigured agent can blow through multiple services before any alert even fires.

What Happens When an Agent Has Too Much Access

These risks are not just theory. In July 2025, SaaStr founder Jason Lemkin ran a "vibe coding" experiment on Replit. On day nine, the AI coding agent deleted his production database, which held records on over 1,200 executives and nearly 1,200 companies. The agent then tried to cover it up, claiming a rollback wouldn't work. Lemkin restored the data himself, proving the agent's rollback claim was fabricated.

Neray brought up this story directly: "Day nine, Replit agent goes rogue, deletes the production database, and lies about it. There's a general problem here."

The pattern goes beyond coding agents. Gravitee's 2026 State of AI Agent Security report found that 88% of organizations had confirmed or suspected security incidents tied to AI agents. Meanwhile, only 14.4% of teams had full security approval for their agent deployments.

Three Reasons RBAC Breaks Down for Agents

Traditional RBAC fails for agents in three specific ways:

1. Agents don't hold back. A human with admin access might only use 10% of their permissions in a given month. An agent will use whatever permissions help it finish its goal, without understanding the risks.

2. Roles are fixed, but agent tasks keep changing. A task that starts as read-only can grow into code generation that needs write access. Fixed roles either block the workflow or open up too much risk. Building narrow roles for every possible task leads to an unmanageable mess.

3. Agents move at machine speed. A human mistake causes limited damage before someone notices. An agent can trigger harmful actions across services before any alert fires. Speed makes every misconfiguration worse.

What Least Privilege Looks Like for AI Agents

For humans, least privilege usually means watching what permissions someone uses over a rolling window and trimming the rest. Neray describes Oso's approach: "Imagine a world where we can look at the permissions that you've assigned to a user, the permissions that they've actually exercised over a rolling 30-day period, and automatically scope their permissions down."

That method doesn't carry over to agents. An agent's task changes every time it runs. Watching a 30-day usage pattern for something that does a different job every hour gives you nothing useful.
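The usage-based approach Neray describes for humans can be sketched in a few lines. This is a minimal illustration, not Oso's implementation; the permission strings and function names are hypothetical:

```python
from datetime import datetime, timedelta

def scope_down(granted: set[str],
               usage_log: list[tuple[str, datetime]],
               window_days: int = 30) -> set[str]:
    """Keep only the permissions actually exercised in the rolling window.

    granted:   permissions currently assigned to the user
    usage_log: (permission, timestamp) pairs of exercised permissions
    """
    cutoff = datetime.now() - timedelta(days=window_days)
    exercised = {perm for perm, ts in usage_log if ts >= cutoff}
    # The new grant is the intersection: assigned AND recently used.
    return granted & exercised
```

For a human, the output converges on a stable set because their job is stable. Run the same function on an agent whose task changes hourly and the 30-day intersection is essentially noise, which is the failure described above.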

Instead, the new model for AI agent access control focuses on permissions scoped to each task:

  • Delegated access: The agent inherits the permissions of the user who called it, and never exceeds them. If a junior analyst asks an agent to pull data, the agent gets the analyst's access level, not a service account's.

  • Just-in-time provisioning: Permissions are granted when a task starts and taken away when it ends. No leftover access builds up between tasks.

  • Real-time checks: Each action the agent tries is checked against the minimum permissions needed for that specific task, not a stored role.

  • Human approval for sensitive actions: Things like data deletion, money transfers, or infrastructure changes need a human to sign off before the agent can proceed.

Neray frames this as the goal Oso is working toward: "The world that we're headed towards is one in which you can dynamically scope down the privileges of an agent for any given task based on the fewest privileges required to achieve that task. That's the holy grail."
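The four ideas above compose into a single check at the point where an agent attempts an action. The sketch below is an assumption-laden illustration of the pattern, not any vendor's API; the permission strings and class names are invented for the example:

```python
# Actions that always require a human sign-off, per the list above.
SENSITIVE = {"db:delete", "payments:transfer", "infra:modify"}

class TaskScope:
    """Permissions granted for one task and revoked when it ends."""

    def __init__(self, caller_perms: set[str], task_perms: set[str]):
        # Delegated access: the agent never exceeds the calling user,
        # and JIT provisioning: it only holds what this task needs.
        self.active = caller_perms & task_perms

    def check(self, action: str, human_approved: bool = False) -> bool:
        """Real-time check of one attempted action."""
        if action not in self.active:
            return False  # outside the task's minimal scope
        if action in SENSITIVE and not human_approved:
            return False  # sensitive action without human approval
        return True

    def revoke(self) -> None:
        # Task ended: no leftover access builds up between tasks.
        self.active = set()
```

A junior analyst with `{"db:read"}` who kicks off a task declared as `{"db:read", "db:delete"}` yields an agent that can read but never delete, and once `revoke()` runs, it can do nothing at all.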

Why Most Companies Aren't Ready

Despite heavy spending on AI, most companies haven't solved the agentic AI security problem. Neray shared what he learned from six months of meetings with over 100 CTOs and CISOs:

  • AI-native companies (fewer than 50) are pushing agents hard because it's core to what they do.

  • Growth-stage companies shipped something with AI because the board told them to. But it's usually a toy: purely generative, no customer data access, not customer-facing.

  • Enterprises are wrapping up proofs of concept and buying ChatGPT Enterprise licenses. Almost none have agents in production.

That gap is creating demand for platforms that can handle AI compliance and security posture earlier in the rollout. Tools like Cyberbase.ai are trying to make that work more manageable from day one.

Gartner forecasts that 40% of enterprise applications will have task-specific AI agents by the end of 2026. But the gap between building agents and securing them remains wide.

The bottleneck isn't the AI itself. It's authorization. As Neray put it: "If you want to add agents to your product, you basically have to figure this out."

Hear the Full Conversation

Graham Neray joined hosts Jon McLachlan and Sasha Sinkevich (co-founders of YSecurity and Cyberbase.ai) on Episode 89 of The Security Podcast of Silicon Valley to talk about why authorization is the missing piece in AI agent adoption. They discuss how Oso approaches dynamic least privilege and why the companies most interested in security are often the most cautious about trying new security tools.

The conversation also covers the Slack-to-Webflow story of fine-grained permissions, why selling infrastructure products to engineers takes a different playbook, and what Neray would tell his younger self about starting a company.


Meet the hosts

Jon McLachlan

Co-Founder, YSecurity & Cyberbase

Questions founders and engineers actually ask, with decisions not theater.

Sasha Sinkevich

Co-Founder, YSecurity & Cyberbase

Pushes past surface answers into architecture, tradeoffs, and what scales.

The Security Podcast of Silicon Valley

jon@thesecuritypodcastofsiliconvalley.com