81. How to put AI agents safely into production (with Eric Olden)

Hello, everyone, and welcome to another episode of the security podcast of Silicon Valley. I'm John McLaughlin, one of the hosts. I'm joined with our other host, Sasha. Hello, everyone.
And today we have an absolutely amazing guest, Eric Olden. Welcome to the show. Thanks for having me. I'm looking forward to it.
You're the co-founder and CEO of your latest startup, Strata. You're tackling identity, but from an agentic perspective. Did I capture that right? Yeah, you did.
We started the company on the premise that identity is the new perimeter and identity security is the most important aspect of that. And five years ago, this was really before the AI kind of third wave revolution kicked in. And we were focused on how to help humans access and do things securely using identity orchestration, which is a way to kind of decouple your applications from your identity infrastructure. And then fast forward to the last couple of years, agents show up and turns out they have the same kind of problems that people did.
And they need to authenticate. They need to do authorization. They need to do access and audit. But the difference is with AI is that you have to do all of the identity things in a different way at machine scale.
And so that's been a really interesting evolution into what we're doing. So now we're front and center in providing the guardrails for AI agents and the observability and the identity governance. It's been a lot of fun. What is the biggest problem or let's call it area of opportunity in the agentic workflows and the management of agents?
Yeah, there's two that are tied for first. I'll share them with you and you'll see why I say they're tied. You know, one of the first things we've been doing is building this product with input from design partners. We work with a lot of big financial institutions, insurance companies and so forth.
And about a year ago, a challenge was brought to us. They said, well, you know, we were looking for a way to see what would happen if an agent could spend money. We put it in a test environment and said, hey, let's see what happens. In less than 15 minutes, all the money was spent. And they weren't expecting that. And they said, well, what would you do, Strata?
Well, Eric, what would you do to control that? And I said, well, what did you think was going to happen? And they said, well, we thought we could treat the AI like a service account, put a, for lack of a better term, straitjacket on a piece of software, and control it.
And I said, oh, I can tell you that's not going to work because the only one who could do that would be the model builder themselves. So if OpenAI wanted to do that or, you know, Anthropic wanted to do that, then they could change how that AI works. What we could do is to take away the ability to press the API button to spend the money. And so we can control the AI, but you don't do it in the way that may seem like natural.
You do it indirectly through a control plane. And the way that we have evolved that into working software is to create these guardrails that give organizations the ability to set, using policy, how the AIs and the agents and the humans are going to authenticate, how they're going to do access, how they're going to do just-in-time registration and provisioning. All of the things we do in identity, but done in a different way, at machine scale. So those guardrails are the things that I think these kinds of organizations need before they're going to go and actually use these agents in a meaningful way.
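The indirect control Eric describes, taking away the agent's ability to "press the API button" rather than trying to constrain the model itself, could be sketched very roughly as a guardrail the agent's calls must pass through. Everything here (the class names, the spend-limit policy, the action names) is a hypothetical illustration, not Strata's implementation:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_actions: set   # actions the agent may invoke at all
    spend_limit: float     # cumulative spend before the guardrail trips

class Guardrail:
    """Control-plane check that sits between the agent and the API."""
    def __init__(self, policy: Policy):
        self.policy = policy
        self.spent = 0.0

    def authorize(self, action: str, amount: float = 0.0) -> bool:
        # Deny anything not explicitly granted to the agent.
        if action not in self.policy.allowed_actions:
            return False
        # Deny spends that would exceed the cumulative limit.
        if self.spent + amount > self.policy.spend_limit:
            return False
        self.spent += amount
        return True

guard = Guardrail(Policy(allowed_actions={"purchase"}, spend_limit=100.0))
print(guard.authorize("purchase", 60.0))   # True: within limit
print(guard.authorize("purchase", 60.0))   # False: would exceed limit
print(guard.authorize("delete_account"))   # False: action never granted
```

The point of the pattern is that the model never changes; only the enforcement point in front of the API does.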
Which leads to the second problem. Once you start talking to people, and I love to ask this because we've had hundreds of people sign up for our wait list on this new product, the sandbox that we're delivering. When you ask them, what percent of your agents are in production? Spoiler: it's in the low single digits. And so that's, I think, the real problem that people are struggling with: we've got to get into production.
Otherwise, we're spending all this money on an investment that won't give us any return. But the reason we can't put them into production is that we don't have the guardrails. And so that's why I say they're tied for first, because you're really trying to go into production, but you can't do that until you solve the guardrail problem. Is it just the guardrails or also observability?
Like, this is what the agent did. This is the full stack, so to say, the full stack trace of the agent's activity. And this is what agents did. Yeah, you're absolutely right, Sasha.
And I think the guardrails do all of the enforcement, right, of authentication, access and so on. And you deliver that into an audit log. What's different with agents and audit is something we didn't have to think about with humans, who are deterministic, more or less: we never really had to ask for and record intent.
And so that's what's changed in observability. Our audit model now has three components. Intent, which is generally the prompt. The second thing is context, which is typically the attributes that were used. The environmental situation, like, you know, the details, the facts, if you will.
So that context. And then the last thing is the outcome. Like, what happened, right? Did something get purchased?
Did something get promoted to production? Whatever the case may be. So you can't do the observability if you don't have a system doing the guardrails in the first place. These are all sides of one coin, but it's actually more like one of the nerd alert D&D dice, right?
A six-sided die. So it's not just the one. It's how the one affects the other, which affects the other. And the last thing I would say, if we're trying to be complete: you have your guardrails, you have your observability, and then you've got your identity governance.
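The three-part audit model Eric describes, intent, context, and outcome, could be captured in a record shaped roughly like this. The field names and values are illustrative only, not Strata's actual audit schema:

```python
from dataclasses import dataclass, field
import datetime
import json

@dataclass
class AuditRecord:
    intent: str    # generally the prompt that drove the action
    context: dict  # attributes and environmental facts used in the decision
    outcome: str   # what actually happened
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.now(
            datetime.timezone.utc).isoformat())

record = AuditRecord(
    intent="Buy 3 units of SKU-1042 under $50 total",
    context={"agent_id": "agent-7", "budget_remaining": 120.0, "env": "sandbox"},
    outcome="purchase_completed",
)
# Serialize for the audit log.
print(json.dumps({"intent": record.intent,
                  "context": record.context,
                  "outcome": record.outcome}))
```

The notable difference from a classic human audit trail is the first field: with agents, the prompt that expressed intent becomes part of the evidence.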
So how do you manage the life cycle of all of these things? And questions around governance in agents. A lot of this is like, you think about the movie, the show, anyways, Stranger Things. Remember the upside down?
Yeah, yeah. Kind of the analog. Sort of a different, different area. That's right.
And it's like, not quite like the polar opposite, but it's different and it rhymes. But that's kind of what is going on in the agentic world is all of these things that we are used to doing with humans, we've got to flip it around and make it work with agents. And so I think of that as agents are people too. And that's your decoder ring about, well, how do you make all this stuff work?
Guess what? You don't have to reinvent it. It's not a non-human identity problem. It's the same thing as treating a human.
And then if you think about humans and agents as kind of peers, then the whole kind of methodology and structures that we are really good at, you can bring to the table. So it's been a really interesting way, a lot of mapping and translation. But yeah, it makes it difficult to say what the one thing is because there's more than one thing. In a way, there is a unit of work that needs to be done.
And we used to rely on human labor to execute on that unit of work. And now a lot of that could be executed with agentic flows. And you mentioned something very interesting, that the percentage of agents that have been deployed in production is in the low single digits. Which business unit or organizational unit blocks the deployment of agents?
Is it the compliance team that is blocking deployment of agents? Is it the security team that is blocking the deployment of agents? Or, just overall, holistically, the organization's belief that we're just not ready yet before we have this full observability of this new workforce? Because just like with humans, you have to do the background checks.
You have to ensure that proper access controls are in place in order for this workforce not to venture out in places where it shouldn't. You mentioned the controls should be very similar to the controls that we apply to a human. Yeah. So that is a great question.
Like who is saying no? I think the thing that's consistent, because I've talked to hundreds of organizations. And what I would say is, going back to the wait list I was mentioning, we've got over 350 people and organizations in there.
And so I can kind of give you some trends about like who's in there. We look at their job title, for instance, and it's pretty interesting how it's split rather evenly between engineering titles like hands on keyboard. I do this for day in, day out. And then also less technical people.
So more program managers and business analysts. And, you know, you think about, well, who's saying no? The thing that's, I think, holding people back is a lack of confidence. And if you think about how, in the enterprise anyways, it's a big deal to push something into production.
You want to have a checklist. You want to make sure you've done all of your things to make sure that as part of moving to production, that someone signs off on it. And, you know, you do the various things like blue-green testing. You do smoke tests.
You do various things that are like you would see with any kind of software that goes out. And I think what really gets people shook is when you run the same prompt and the AI does something different every time you do it. And that really brings that fear of like, I don't want to be a headline. And you hear about the horror stories.
There's so many of them. It's kind of pick the one that happened this week. But the fear of like, shoot, I don't know what it's going to do. We need to move past that fear.
And that can happen to technical people or non-technical people. I think we need to move past that by getting the data and validating. So the classic statement of trust, but verify, has never been more true than right now. And so I think, as part of what we're seeing, it really doesn't matter where in the organization you are.
What you need to do is get confident. And the way I believe anybody should get confident is like, see it, do it, and then do it five more times. And if you can see predictable responses and outcomes, then you're ready to go to production. I'd sign off on that.
But I can tell you, you talk to any of the people involved with prematurely putting a chatbot out there. I think one of the airlines, forget which one, but the chatbot said, oh, you get a free ticket. And they're like, no, no, no, we didn't mean for you to get a free, whatever, $1,000 ticket. And the court says, hey, that's on you, airline, give them the ticket.
Because you want to use the agent to do the work of a person. If it was a person who said that, we would hold you to that as well. So I don't think anybody wants to be having to give out free tickets because that's not anyone's business model. But at the same time, you got to find a way to get that confidence.
So does that make sense? What do you think? No, that makes sense. I think it's basic principle of trust or the lack of at the moment.
It's a wild, wild west. You are 100% spot on: without proper guardrails, you can give the same input, but the output will differ. And if the context changes, the output will change as well.
And how do you protect from that? And then how do you test for that? Because these are not simple unit tests. These are integration tests.
And you've got a lot of moving parts. And that's kind of the inspiration for us with the sandbox is we do a lot of work in the aerospace and defense world. And I fly a lot. So I think about this when I get on airplanes.
Like, how good is the pilot? And how good does the airplane look? But if you were to say, hey, the three of us, we want to learn to fly, what we would not do is go, hey, go to the airport and get into the cockpit of a plane loaded with passengers going to fly across the Pacific to Hawaii.
It would be a very bad way to learn. I think we would all agree on that. But people do fly across the Pacific. So how do you get from not knowing it to being confident?
You do a lot of simulation. And you get in a flight simulator. And in the beginning, you learn how to take off and land. Pretty straightforward.
Now you get that under your belt. Hey, I'm pretty good at that. And then you go more advanced, where you say, okay, now I'm flying. Let's simulate losing an engine.
And then the simulator says, okay, I no longer have the right engine working. Deal with that pilot. Okay, well, if I crash that plane, no big deal. Just restart it and put another quarter in the machine and Bob's your uncle.
And that's the whole notion that we're talking about with these AI pieces is get them working in the first place. Get that day one experience where you say, hey, good, I can do a transaction and I know how this works. And then day two, all right, let's switch things up. Let's change the identity system.
Let's change the AI model. Let's change the MCP application or the API behind the MCP. And now you can start to break the agent, give it too much permission. What can it do?
What does it do if you are trying to peel back to minimum privileges, minimum standing privileges? Can I do the transaction with a super tightly scoped token? Those are the kind of experiments that you don't want to do in production, because you don't want to end up paying that $1,000 ticket, or worse, right? So that's kind of the notion that we've been seeing.
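The minimum-privilege experiment Eric describes, peeling scopes back until the transaction stops working, can be sketched with a toy scope check. The scope names here are made up for illustration:

```python
def can_transact(granted_scopes: set, required_scopes: set) -> bool:
    """A transaction succeeds only if every required scope was granted."""
    return required_scopes <= granted_scopes

required = {"payments:write"}

# Start broad, then peel scopes back to find the minimum standing privilege.
for granted in [
    {"payments:write", "payments:read", "accounts:read"},  # over-privileged
    {"payments:write"},                                    # tightly scoped: still works
    {"payments:read"},                                     # too narrow: fails
]:
    print(sorted(granted), "->", can_transact(granted, required))
```

Running exactly this kind of sweep in a sandbox, rather than production, is the point: you learn where the transaction breaks without paying for the breakage.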
And it's been pretty exciting because there's no substitute for experience. And once you know how this goes, I mean, I think we've all been there with different systems that we work with for a while. The first version was usually pretty rough. And then it turns into something better.
And anyway, so that's been really exciting for us is to kind of bring this safe to fail mindset and say, hey, here you go. Spin it up. And it's free, too. That always helps.
So we're trying to help people learn by making it free to start. But, yeah, pretty excited about it. I love that you mentioned experience, too. It's not the first time you've tackled a difficult open security problem, is it?
You've been doing this for almost 30-something years. And identity has been in your track record, in past lives, helping solve tricky identity problems just on the human side of the fence, right? Yeah. Yeah.
Yeah. I've been doing this since the mid-90s. Going back into the early days of the web, I was fortunate to be in the right place at the right time: Berkeley, in the San Francisco Bay Area, when I was graduating.
And I saw right in front of us, like, hey, this web thing. And at the time, people said, oh, you can't use it for anything commercial. And I was like, yeah, right, that's going to last. No one's going to stop if they can find a way to do it.
And the web server in the day was the early one, the Mosaic one from the University of Illinois, and then the Netscape one. They were great, but all they made easy was serving content. And if you remember what people were doing in '94, '95, the web was a bunch of brochures, HTML and images. And it made sense, because everyone's like, hey, I've got to get on the internet and have a billboard. Okay, but what people really wanted to do was e-commerce. And what was holding that up?
Another missing guardrail. And it took three things that happened in the web and identity. The first, give credit to Netscape and Taher Elgamal and his team for inventing SSL. Now people would have a different answer about putting their credit card on the internet.
Well, it's safe because that little key on my browser says it can't get stolen. Cool. And so people started to like find a way to build interesting things. The second thing was scale.
The web server, you had basic access control lists, but it wasn't anything that could really be scaled up beyond a couple dozen people. But when you have e-commerce sites that had millions of people, then the problem was how do we scale the management of all these humans? And that became the whole realm of web access management. And I was the founder and CTO of a company, Securant, that solved that problem.
And we went from two guys in a garage in Berkeley to, you know, a 300-person company in a very short amount of time, because the right timing matters. And then the third thing was, once you get these systems working, how do you make them work when people don't know the other entity? Which in the late 90s we called federation. And that created a need for a new standard, which became SAML, and I was one of the four co-authors of that.
And we were solving the problem, in the early days, of how do you link different data centers and different geographies with different technologies on either end? And so that was, okay, a real interesting conceptual problem. And the part that I was most interested in was how do you make that secure from a trust standpoint and make distributed computing work? And then here we are in 2025, and I think SAML's very much here to stay.
And I'm excited about being a small part of that in the beginning. But it's always been, you know, what can you do? Because the world will tell you what you can't do. And I love the mindset of don't tell me that.
Tell me what you can do. And finding a way where it's really hard and there's no examples. Like that's the fun part for me is solving those problems. So, yeah, you're right, John.
It has been a lifetime of solving really thorny problems. But you never know where it's going to happen. These are definitely thorny problems. And I'm super grateful that we've got some really smart people who have spent years and years tackling problems like this and in different flavors and in different companies and in different entrepreneurial ventures tackling this one for agents here in the modern day.
It was definitely a lot of fun and it's moving so fast. And I got to say, see how fast it moves? It's unbelievable. It's a flywheel that's just going to continue like increasing at an increasing rate.
It is. And when people asked me early on, they said, well, how would you solve this problem? I didn't tell them the technical implementation. The first word was standards.
Use standards, because for anything that you want to do quickly, you've got to do it in a standardized way. Otherwise, everything becomes bespoke and things just don't move fast enough. And there's all sorts of great standards out now, especially for distributed computing, which is what this is: a distributed computing problem.
You can use standards like OAuth. You can use OIDC. You can use SAML. You can use PKCE.
You can use SPIFFE. I think that's what you need to do in order to move and meet the moment, because people have the problem today. And I've heard some vendors say, oh, we're going to solve OAuth X.0 three years in the future, or this app-to-app integration three years in the future.
Okay, well, last I checked, people are trying to do it today. And you give them a solution that's three years in the future, even a year in the future? People have got problems today. So I love this quote.
I think it was Voltaire who said it, but to paraphrase it, don't let perfect be the enemy of good. And I'm a huge advocate of that. It's like, hey, what can you do? Sure, if it's not perfect, that's OK.
What can we do with what we've got? And that's this kind of idea to take OAuth and bootstrap it, push it to the edge. Find real good implementations of things like DPoP, and be able to provide ways that you don't just recycle and replay tokens. And then other things, like especially getting rid of, and this would be my dream.
And if there's a small part that I get to play in all of this, I'm doing everything I can to get rid of passwords. So it's pretty cool that you brought up Taher, because he said exactly the same thing on the show when he was here last time. We've integrated that so many times, in so many different companies, with so many different products and services. And there's so many happy customers out there logging into systems.
But it's just wild, because these things, the inspiration, they show up in all sorts of different places. And I think we as an industry, we need to be open-minded. You mentioned something very important, which is the adoption of technology by the community. And where do we as a community currently work together on solving the security opportunities in the agentic space?
Yeah, I'm a big fan of two big standards bodies right now. The CNCF, the Cloud Native Computing Foundation. I think they're doing great work. And then the FIDO Alliance and what they've done with passkeys.
We'll just keep going with passkeys. I hate passwords. I know they're cheap-ish, but they're way more expensive when things go wrong. But I love that, again, some big players got together, Apple and Google and Microsoft amongst others.
And they said, hey, let's make this investment. We all agree on phishing-resistant passkeys, and look at the world now. And I don't have to remember a password. I just put my face in front of my phone and I log in.
So I love that. I love what they're doing. We implement a lot of stuff there at Strata. We also contribute there as well.
Within the FIDO Alliance, there's an authorization group. And so we've helped with AuthZEN, authorization ZEN.
And it's for more distributed computing, distributed authorization, so you can have different enforcement points by different vendors, and we can all execute the same policy. That's a simple way to think about it. So big, big fan of that for those reasons.
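AuthZEN standardizes a JSON evaluation exchange between an enforcement point and a policy decision point (PDP), carrying a subject, an action, and a resource. The sketch below shows that general shape with a toy in-process PDP; the request layout is a simplification for illustration, not the exact specification:

```python
def evaluate(request: dict, policies: list) -> dict:
    """Toy PDP: grant only if some policy matches subject, action, and resource."""
    for p in policies:
        if (p["subject"] == request["subject"]["id"]
                and p["action"] == request["action"]["name"]
                and p["resource"] == request["resource"]["id"]):
            return {"decision": True}
    return {"decision": False}   # default deny

# One policy: agent-7 may read the ledger. Names are made up.
policies = [{"subject": "agent-7", "action": "can_read", "resource": "ledger"}]

request = {
    "subject": {"type": "agent", "id": "agent-7"},
    "action": {"name": "can_read"},
    "resource": {"type": "dataset", "id": "ledger"},
}
print(evaluate(request, policies))   # granted

request["action"]["name"] = "can_write"
print(evaluate(request, policies))   # denied by default
```

The value of the standard is that different vendors' enforcement points can all ask the same question in the same format and execute the same policy.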
And then the CNCF, part of the Linux Foundation, known most for Kubernetes, I think is probably their most popular project. And we use it all the time. But I think even beyond Kubernetes, you've got other things in identity like OPA, Open Policy Agent. I love OPA.
Yeah. Best kept little secret in the AuthZ space ever. Yeah. Well, I love it too.
But we also have contributed there. We put a tool out called Hexa, and we wrote a standard for policy interoperability, for distributed policy. If you're using multiple clouds like Amazon and Azure, but you want to have one policy that works in both places, then you use Identity Query Language, or IDQL. It's something we built over the last couple of years with the community.
And what that allows you to do is move from one cloud stack to another and not worry about how you're going to rebuild all those policies. With the Hexa open source tool, which is on CNCF, you basically point it at system A, it'll extract all your policies out of A and convert them into IDQL format, and then you point it at system B and it pushes them into the syntax of system B.
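The Hexa workflow Eric describes, extract from system A, normalize to a neutral format, re-emit for system B, is a translate-through-an-intermediate pattern. Here is a rough sketch with entirely made-up provider formats; the real IDQL schema and Hexa tooling differ:

```python
def extract_from_a(raw: list) -> list:
    # Pretend cloud A expresses policy as (principal, verb, target) tuples.
    return [{"subject": p, "action": v, "object": t} for p, v, t in raw]

def to_idql(policies: list) -> list:
    # Normalize into a neutral, provider-agnostic format (IDQL-like in spirit only).
    return [{"meta": {"version": "0.1"},
             "subject": {"members": [p["subject"]]},
             "actions": [p["action"]],
             "object": {"resource_id": p["object"]}} for p in policies]

def push_to_b(idql: list) -> list:
    # Pretend cloud B wants "subject:action@resource" strings.
    return [f'{p["subject"]["members"][0]}:{p["actions"][0]}@{p["object"]["resource_id"]}'
            for p in idql]

raw_a = [("alice", "read", "payroll"), ("agent-7", "write", "ledger")]
print(push_to_b(to_idql(extract_from_a(raw_a))))
# -> ['alice:read@payroll', 'agent-7:write@ledger']
```

Because every provider only needs a mapping to and from the neutral format, adding an nth cloud takes one new adapter instead of n new pairwise translations.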
So that's how you get that kind of policy interoperability. Those are my kind of current favorites. So how long have you been focused and just laser focused on all of these identity problems at Strata? Well, I kind of was born into identity in my software career.
It's in your blood then. Yeah, I can't get out of it. But yeah, I think going to 1995 was the first time I got exposed to it. Early on, we didn't call it identity.
We called it web security. But security is one of those kind of amorphous terms, so overloaded. And so it became identity probably in the early 2000s. But I've been in it since before it was called that.
I think more recently with Strata, before I started this company, I was running Oracle's security and identity division and seeing what the world looks like from the hyperscaler's point of view. And when you're dealing with millions and millions of customers and downtime is in the tens of millions of dollars per hour, right? You have to think about things in a whole nother level. And I think that trained me well because now, fast forward, what we're doing here at Strata is certainly focused on the big enterprise.
But also we have that experience of running stuff at hyperscale, exascale. And that matters for a lot of the work that we do in the Department of Defense and, you know, things like that, keeping everything up and running, because an outage can mean more than just lost money and downtime. In certain circumstances, it truly is life or death. So I'm here for that.
I trained my whole career to be the person who can say, I know this will work. I think I already shared how paranoid I am about making sure it works before you make that statement. And hopefully, said everyone who thinks this, it's not arrogance but evidence that makes me confident. We need people taking this seriously and really saying, hey, good enough isn't good enough.
We've got to be great, and maybe never perfect, but okay isn't going to cut it. When things are moving as fast as they are, you've got to be good. If not good, great, but anything less than good, I don't see how that's going to end up in a good way. Yeah, I mean, you need that experience from the big companies to be able to understand, like, where is that bar?
You have to be able to see it firsthand, experience it day in, day out, beyond the on-call rotations, see what happens during an incident. And then you sort of start to understand, it's like, oh, okay, this is, that's where the bar is for software. And the word that I like to use is bulletproof. And now with all of these AI startups and all of this great innovation that's coming out of our AI space, it's a new territory.
But I like to think of innovation only being those pieces of it that are easy to adopt. And if it's not easy to adopt, it's not going to get used and it's not really innovation. It's just, it's maybe a nice side project. Maybe it's a cool demo.
But if you can't adopt it, like, is it going to change the world? No, it's a great point. I think there's a whole other side of like the vibe coding thing. And, you know, seeing some of the stories about like pointing out the obvious.
Certainly, you're not going to vibe code something that's going to take the place of like millions of dollars of technical innovation and hard work and all of that. And I'm really encouraged that like the AI tools are making it possible for more kind of citizens, so to say, to build things and to solve problems. Because that's, to me, what software is all about. So I love that we're able to democratize that and get more and more people into the game, if you will.
But at the same time, sometimes that expression, I know enough to be dangerous, is really showing itself. I think the opportunity is that more people can get involved. But we also need to keep the bar high in terms of QA and not let things go zero-shot into production. What would you say has been your proudest moment as an entrepreneur?
That's a good question. I will say this and try and distill a couple of years at Oracle down to one thing. But when I joined Oracle, I was a startup guy coming into a really big organization where I was like, hey, if I was brought in to make this group work like a startup, let's build software like a startup does. So concepts like agile, concepts like rapid iteration, all that, they were kind of known, but not practiced.
And if they were practiced, it was kind of in a weird way. So I set out on a mission to get at least my division to basically start to do things differently. And there were a lot of people, it was a bell curve distribution. Some of the people were really willing to make a difference.
And then there was the big bulk in the middle that were very skeptical. And then there were people at the end that were absolutely skeptical and saying, this is never going to work. And it was my kind of last month or so at Oracle. And we had just finished that last class, the ones who had been like, hey, I don't want to do this.
And my proudest moment was at the kind of graduation dinner that we had. And this person walked up to me and he said, Eric, I owe you an apology. And I said, oh, I don't understand, but tell me more. And he says, well, a year and a half ago, got in front of the whole division.
And you said, we're going to do this crazy hard thing. I said at the time to my friend, I said, this guy is not going to last. He does not know what he's trying to do. He's never been at a company like Oracle.
It is going to end so poorly. And I'll just wait for the next guy to show up. I was like, OK, that's pretty cynical. But where's the apology?
And he goes, well, that's what I said. And I was the one in this last class because I thought this is not even going to happen. And I owe you an apology because you were right. And we can do things different.
We can do these hard things. And I appreciate the fact that you went out on a limb. It was a very risky thing, which I guess in a company like Oracle is kind of risky. I'm a startup guy.
In a startup, it's like, the company dies, and it's like, hey, we woke up and went to work that day. Yeah. But when he said that he never thought it would happen, and then he is a complete believer, that was a proud moment.
It wasn't me teaching everything. I had a great team. They went and did it. We did it around the world.
But that was a proud moment, because I think what it showed was that if you give people an opportunity, like truly do, and don't prejudge them, just give them a shot. They may not be the ones to take it the first go-around. But, you know, give them a shot. And even the most cynical, skeptical people, through their own doing, can show that this can work.
Inspiring people to their full potential, even in a new space, in a new territory, with new processes and a new approach to writing code or to doing whatever. I think a lot of that is at the core of that entrepreneurial journey to never really be satisfied with where we are today. And I really feel that energy. And that's awesome.
I'm super curious, though. It's not all peaches and roses, as I'm sure you're well aware of. I'm wondering if you might be open to sharing with us what's been the most difficult day along your journey. That's a great question.
So the most difficult thing, I think, is not freezing when the inevitable thing happens. And it doesn't matter where. It could be customer. It could be partner.
It could be financing. It could be God knows what it's going to be. But it's going to happen. And so your mindset of resiliency and just kind of mental toughness.
I learned to meditate a lot as a founder because in my head, you can just kind of slay dragons that if you tried to actually get out and do it, you're going to lose. So that's a roundabout way of saying I think it's just be ready for change because it happens, flow through it, and hope for the best. Get back on that horse. It's not a failure until you stop trying.
That's right. Well said. What legacy would you like to leave? Just a small, you know, minor question.
A small legacy? You know, I guess it really comes down to my two kids and the people in my company that I've been working with. A lot of us have been working together for, you know, 10, 20, 30 years. And so I think of the legacy as, well, when I inevitably pass, what will people think of me and say about me?
And I hope that, you know, coming out of it is that I've never been one to shy away from the really hard things. I've never said it's going to be easy. And a friend of mine called me pathologically optimistic. And I thought that was actually beautiful.
I was like, yeah. That's the biggest compliment you could ever receive. So if that's my legacy is a pathological optimist, then so be it. Right.
What else are you going to do? I find myself saying this as a founder and CEO: you go to war with the army that you've got. Mike Tyson said everyone's got a plan until they get punched in the mouth.
And all of these are metaphors for what I was saying is like, hey, just keep going and find a way through. Don't tell me what you can't do. Find a way or make one. So maybe that's my legacy.
Love it. Well, thank you so much for joining us on this episode of the Security Podcast of Silicon Valley. Would you like to leave our listeners with any words of wisdom, Eric? Well, appreciate you having me on the show.
I think, you know, in terms of words of wisdom, go standards. Yay. Go standards. Yes, I love it.
No passwords. Yay. Please don't reinvent the wheel. And then don't test in production.
Don't test in production. Don't test on my airplane. That's right. Great pleasure having you on the show, Eric.
Thank you, Sasha. Yeah. Huge thank you. Huge thank you again.
As a friendly reminder to all of our listeners, I'm John McLaughlin, one of the hosts, and I was joined by Sasha Sinkovich, the other host. And we had the pleasure of hosting Eric Olden, the co-founder and CEO of strata.io. And a huge thank you to all of our listeners for tuning in to another episode of the Security Podcast of Silicon Valley.
Thank you. Thank you.