92. The Real Problem Isn't Deepfakes. It's Identity (with Jasson Casey)

Welcome everyone to another episode of the Security Podcast of Silicon Valley. Today's guest is Jasson Casey. Jasson, welcome to the show. Thanks for having me.
Where are you calling in from, Jasson? I'm in beautiful upstate New York. Awesome. Have you always been on the East Coast?
Have you started your career on the East Coast? No, I started my career in Texas, believe it or not. And I'd say I worked the first half of my career in Texas, and then I migrated east. You know, growing up, I always thought I'd end up in California.
And I remember, back when my wife and I were dating, I said, you've got to be able to work from California. And I guess I'm the one eating crow, because A, we moved east, and B, we actually moved for her job. I was largely flying to California on a plane anyway. But yeah, I moved to D.C., lived there for a while, then moved to New York, and left the city to experience the countryside during the pandemic and kind of fell in love.
Awesome. I know you started your career in the telecom space. That's very interesting, because telecommunications are essential for any business, or anyone, to be successful in today's world. You have to be able to connect with people.
You have to be able to connect with organizations. And now we're in this ever-evolving space where secure communications keep growing in importance. Where do you see secure communications today, and what's the biggest pain point in making sure that you're speaking with a trusted party?
It's honestly like a line from Star Wars, right? It surrounds us. It binds us. It touches everything.
There's nothing that you do that shouldn't involve secure communications in some way, shape, or form. And the reality is, you're not actually getting it. For instance, most websites, when you log in, still use a username and password. And the developers, whether they rolled something themselves or used common open source tooling, are largely passing your password as cleartext inside what they believe gives them privacy, integrity, and authenticity, i.e., TLS or HTTPS. And the reality is, we've kind of forgotten that modern development practice is to use a third-party load balancer and a third-party content distribution network, and that most enterprises run local proxies that open and re-terminate TLS connections. Maybe we have a Kubernetes cluster, right? With a service mesh.
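That hop-by-hop exposure is easy to picture. Here is a minimal sketch (hypothetical host, form fields, and credentials) of the plaintext that every TLS-terminating middlebox in that chain gets to read:

```python
from urllib.parse import urlencode

def build_login_request(username: str, password: str) -> str:
    """Build the HTTP request a typical login form produces."""
    body = urlencode({"username": username, "password": password})
    return (
        "POST /login HTTP/1.1\r\n"
        "Host: app.example.com\r\n"  # hypothetical host
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n" + body
    )

# On the wire this is ciphertext, but every hop that re-terminates TLS
# (CDN edge, load balancer, forward proxy, service mesh sidecar) holds
# exactly this plaintext in memory.
request = build_login_request("alice", "hunter2")
print("hunter2" in request)  # → True: the password is readable at every hop
```

The point is not that TLS is broken; it is that "encrypted in transit" says nothing about the growing list of third parties who legitimately sit inside the transit.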
So there are all of these scenarios where TLS is certainly not end-to-end, and it is generally being opened and re-terminated by third parties. So your surface area, knowing where confidential information exists, whether it was securely deleted, how it is tracked, is kind of exponential; and since it's not really trackable, you can't really defend it. You know, there was a famous paper maybe 10 or 15 years ago in the academic community where they tested the security of various online providers by creating an account and never logging in again. Their premise was: could they judge who had decent back-end security by watching those passwords get reused on third-party services?
I forget the exact numbers, but anything larger than zero or 1% would be surprising, and it was definitely double digits. You can explain that two ways. Number one, these sites are constantly getting rolled.
But number two, the way we communicate with each other isn't actually as secure as we think. There are a lot of examples of that which came about because of mistakes. If you remember Cloudbleed from five or six years ago, without getting into all the details, because it's old news: if you sent the right kind of crafted packet to Cloudflare, you got a 4,096-byte dump of somebody else's TLS connection. That was a mistake, but if someone as sophisticated as Cloudflare can make these sorts of mistakes, certainly others can.
And then right now, I can't remember if it was Unit 42 or another researcher reporting on this, but you've got a threat actor, Secret Blizzard, a Russian state-linked group, targeting diplomatic corps operating within Russia or Russian-influenced partners, basically capturing and man-in-the-middling TLS connections. The way they do it is their malware drops a CA certificate they control onto the target device and then deletes itself, leaving just the CA (a browser plug-in in some other versions), and then opens the connection and copies everything out. So, a bit of a long response, but the security of communications matters in almost everything we do because of how much information and data we share, and we're almost never actually getting it.
It's almost like an iceberg. When you communicate with a platform or a system, things may seem fine on the surface, but you don't really know what happens under the hood. And you touched on this a little bit: there's the entire attack vector of the supply chain. You don't really know what packages are being used under the hood, and as the end user of that system, you don't get to control it.
Passwords are still around. I recently spoke with Taher Elgamal, the father of SSL, and his goal was to eliminate passwords entirely: we have a secure browser experience because of the TLS that exists today, but the passwords are still around. What are some of the solutions that you see for that pain point? It's funny you called out Taher.
I was talking to him just two hours ago. You probably know this: he's on our board, and he was an early advisor to the company. So, my first point would be that it goes deeper than passwords.
Access tokens, API keys, session cookies. The idea that you're going to device-bind something based on an IP address is a brittle and silly concept, a security heuristic at best. When you look at the classes of attacks that ultimately result in session hijacking, what are they doing but copying and pasting some sort of symmetric secret, a shared secret, from one location into another? It's long-lived.
You don't rotate it. Well, I would argue it's even more fundamental than that. Imagine it wasn't long-lived. Imagine it was short-term use.
The fact that it moves is where the original surface area comes from. Let me make a terrible analogy. In the opening of Tron: Legacy, I imagined I was a series of bits traveling through the microprocessor. If we think like that, imagine I'm a piece of data traveling from the browser you're typing in, or the browser storing a session cookie or an access token, all the way back to the database on the back end.
You're getting written to and read from memory on the local system. If you're in an enterprise, you're probably passing through a Palo Alto or Zscaler forward or reverse proxy, then a content distribution network (probably Cloudflare or CloudFront or one of the other CDNs), probably the ALB if you're in Amazon, whatever the regional load balancer is for your architecture, maybe another load balancer in your zone, probably a Kubernetes service mesh. So you've now traveled through all of these things. Think back to fundamentals: load-store architecture.
When I store and then load a piece of memory, the data is still in that memory unless I go back and write over it. So our take on all of this when we first got started was that long life and lack of rotation aren't really the problem; those are heuristics around the problem. The problem is that the secret moves in the first place.
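The load-store point can be made concrete. A small illustrative sketch (nothing product-specific): bytes written into ordinary memory stay readable until something overwrites them, which is why scrubbable mutable buffers are preferred over immutable strings for holding secrets:

```python
# A secret held in a mutable buffer can be explicitly overwritten after use.
# An immutable str cannot be scrubbed; its bytes linger until the allocator
# happens to reuse that memory.
secret = bytearray(b"session-token-123")

# ... the secret gets used (copied, compared, sent) somewhere here ...
in_use = bytes(secret)

# Scrub the backing memory once we are done with it:
for i in range(len(secret)):
    secret[i] = 0

print(secret == bytearray(len(in_use)))  # → True: only zeros remain in this buffer
```

Of course, every copy made along the way (here, `in_use`) has its own lifetime, which is exactly the guest's point: each hop the secret visits is another buffer someone has to remember to scrub.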
And so we asked ourselves: is there a world where it didn't have to move? And of course, the answer is trivial. You could adopt some sort of asymmetric cryptography, use a private key for signing, and then the only thing that has to move is the public key, which gives away nothing. But then one of our guys came up with a stronger idea: what if we could guarantee it didn't move?
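That "trivial" asymmetric approach can be sketched end to end. The following is a toy Schnorr-style proof of possession over deliberately tiny parameters (p = 2039; not remotely secure, and not any vendor's actual protocol; real systems use Ed25519/WebAuthn-class keys, ideally held in hardware). It shows that only the public key and signatures ever cross the wire:

```python
import hashlib
import secrets

# Toy group: safe prime p = 2039 = 2*1019 + 1; g = 2 generates the
# prime-order-1019 subgroup. Illustrative only; real crypto uses curves.
p, q, g = 2039, 1019, 2

def keygen():
    sk = secrets.randbelow(q - 1) + 1   # private key: never leaves the device
    pk = pow(g, sk, p)                  # public key: the only thing that moves
    return sk, pk

def challenge_hash(r: int, challenge: bytes) -> int:
    digest = hashlib.sha256(r.to_bytes(2, "big") + challenge).digest()
    return int.from_bytes(digest, "big") % q

def sign(sk: int, challenge: bytes):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    e = challenge_hash(r, challenge)
    return r, (k + e * sk) % q

def verify(pk: int, challenge: bytes, signature) -> bool:
    r, s = signature
    e = challenge_hash(r, challenge)
    return pow(g, s, p) == (r * pow(pk, e, p)) % p

# Server sends a fresh nonce; the device answers without revealing sk.
sk, pk = keygen()
nonce = b"server-nonce-42"
sig = sign(sk, nonce)
print(verify(pk, nonce, sig))               # → True
r, s = sig
print(verify(pk, nonce, (r, (s + 1) % q)))  # → False: a tampered proof fails
```

The server stores only pk and verifies fresh challenges, so even a complete capture of everything that crossed the wire never yields the signing key.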
And the observation he made was: look, HSMs give you a way to create key pairs in servers with an attestation that the signing key can't move. It's a really good model for establishing a certificate. Why can't we do that for clients? He had also been following the world of mobile banking and mobile payments.
They're called something slightly different, but something very much like an HSM exists in your pocket and on your desktop. In fact, microprocessor vendors don't like to make a lot of varied products; they like to make similar products, because variation adds cost. So they now ship almost all of their processors with these secure crypto coprocessors.
So you can not only create key material that doesn't move; in the construction of that material, you can provide a proof that it doesn't move. And if it never moves, it can never be copied. It can never be stolen. A whole class of problems gets eliminated.
And of course, I'm speaking of things like TPMs, Trusted Platform Modules. That's one class of this type of coprocessor; there are other classes. ARM calls theirs TrustZone.
There's Intel's SGX instruction set. It's some type of secure enclave, essentially. You hold the secret inside that secure enclave, and you have a guarantee that nothing moves out. It can do operations.
It can do some types of proof operations, but that's as far as it goes. And the observation of one of our early engineers was: hey, this exists, and nobody takes advantage of it for anything that's not a mobile payment. What's the game that just got released?
Battlefield 6. Is it Battlefield 6? I didn't realize Battlefield 6 was using such an important piece of technology. So
thousand employees, that's a lot of self-inflicted wounds. A big lesson learned for us was that the standard operating procedure in the identity world is to involve the entire workforce in rollouts. That's horrible. It's also the reason identity products don't turn over regularly.
So we had to come up with a way around that: how do we not involve the end users? When you depend on a human factor, someone might not be available, or someone is on vacation, and when they're back, installing a new package is not the top priority on their to-do list, because they have to catch up on a bunch of emails or whatever that function is.
A simpler way of thinking about it: just asking a user to change their password is something we don't even register as strange, because that's just how computers work. But when you think about it, asking a user to change their password is asking that user to be part of a deployment update for a technical control. You're basically asking everyone in your workforce, all of your people, to go help you, the person in charge of IT or security, change or update a security control in the business. And the answer for why you do it is obvious: it says to do it on the sheet of paper. But I put the failure more on system designers like us.
Why did we come up with a system where the end user has to be involved, critically so, in what is fundamentally a technical operation? And the answer is: well, wait a minute, a password lives in someone's brain; if they're not involved, what are you even talking about? I'm being a little coy, but with device-bound credentials and proof of device possession, you actually can get away from that.
Yeah, that was a key learning: what are the absolutely necessary things I need the end users to do, and nothing else? Being laser-focused on that was pretty key. So essentially, it's always important in any product to understand who your ICP is, what specific pain point that ICP has, and to work together with them to solve that pain point.
Have you done some type of forward-deployed program, where you embed your engineers and your close partners in a tight circle so those discussions can flow more easily and naturally? In our most successful deployments, that's exactly what we do. On the ICP thing, though, I would give it a little more nuance. A lot of startup folks trained in the general consumer SaaS business don't really understand enterprise architecture.
They come at this problem thinking of the ICP as a bit of a monoculture, and they don't really think about the ICP in terms of the selling process. The champion, the budget holder, the administrator, and the user are all very different people, and they can each shut you down for different reasons. The ways you make your product appeal to these different personas within your ICP can actually be at odds with each other. If you're going into enterprise SaaS and enterprise infrastructure, your UX, your product management, and your product marketing have to be in lockstep on those problems and really embrace the nuance.
And when it comes to deployment, it almost doesn't matter if you're the new magical hot stuff. Enterprise administrators have grown up expecting that if you are critical infrastructure for their business, you're going to be available and in the trenches with them. So yeah, we've found our best deployments go that way. We try to make all of our deployments go that way, but you have to work within the culture of your customers, and not all customer cultures are embracing.
Some of them are very much: you deliver the equipment on this day at this parking lot, and I'll take it from there. What's been the proudest day of your career? The proudest day of my career? I wish I'd thought about this more.
The days that I felt the best: I like building things from scratch. I like building teams from scratch. A lot of the core folks I work with here, I've worked with off and on, if not my whole career, certainly for the last half of it. We did a thing back when I was in grad school called Flowgrammable that I was very proud of.
The gist of it: software-defined networking was a big deal at the time. I had worked in the service provider industry for the first part of my career on both sides, building equipment and then operating it. We took a team of PhD, master's, and even undergraduate students and built external drivers and user-space applications that competed in reference design competitions against industry-supplied open source software, and beat all of them but one. It was a lot of fun.
It was a big hairy problem. It required all of your knowledge of how the computer works. How does a CPU actually work? How does a cache actually work?
What are the things the compiler is not going to figure out for you? How do you make all this craziness across six different versions of a protocol not be the developer's problem? Only so many developers are going to read a thousand-page spec. That was fun because it was technically challenging, and I think every person who went through that with me either works with me now at Beyond Identity, or did at previous companies, SecurityScorecard or IronNet, or I helped get them into positions at some big tech companies out on the West Coast.
So that was a lot of fun, because, you know, startups aren't families, but they kind of are. I know there's a lot of counterculture of "work's not family" and work-life balance and whatnot. I've fallen into the bucket where I love this, this is who I am, I'm totally okay with it, and there's a group of people who I think agree with that statement. You spend so much time with people working that there is a relationship there; you have to like them. There has to be something you enjoy about spending time together: shared values of creating something from nothing, building something that solves a hard problem, solving a problem with purpose, protecting the livelihoods of people's organizations and companies.
These are really unifying and bonding things, and they certainly make me proud of that team. At the end of the day, a company is just a collection of people. Yeah, 100%. That's all a company is, just people. And the cool part about startups is you get to iterate over solutions to a specific problem super fast, and you get to solve pain points that protect not only the livelihoods of people, but the livelihoods of organizations and companies that serve millions of consumers.
Oh yeah. And a little further into that story: we've had a couple of run-ins with state actors, and our product and our team performed exceptionally. We kept the goods away from the bad guys, so to speak. We learned a lot in the process, and we built relationships we did not have before that incident, or series of incidents. I think it also helped us understand the importance of what we do as a group, as a team, as a product, and as a company.
That was a pretty proud day. It's one thing to do something and know intellectually why you're doing it; it's a whole other thing to actually see it do its job on contact with a very sophisticated actor and, essentially, get credited by not one but two incident response teams and red teams after the fact, basically saying: yeah, this is one of the reasons they couldn't get past X. Yeah, at the beginning of the show we talked about trying to identify deepfakes, which is itself a way of chasing the problem. It's a reactive function, versus the systems we all aspire to build, which are proactive.
Instead of doing a root cause analysis of the event, we'd much rather prevent the event from happening in the first place. 100%. And it sounds like you just had exactly that experience with one of your customers. So if you think about the 2010s, the 2010s were all about making a strategic choice, at least in enterprise security.
And it was: hey, we're all compromised; we now live in a post-compromise world; how do we find the bad guy and evict them? And so EDR was built around that rallying cry.
Defense in depth was popularized as a phrase, even though the concept already existed. And I think this decade is a unique time where we can maybe shift left a little on the security spectrum. I always like going back to NIST, right? The NIST framework is identify, protect, detect, respond, recover.
Recover usually means call the insurer. And respond is basically what we do with EDRs and MDRs and XDRs and whatnot. I think this decade is interesting because we can shift a lot of our attention from that segment a little to the left, toward identify and protect, which is really more about prevention.
If I can guarantee credentials don't move, then credential theft goes away; stuffing, spraying, all of that goes away. If my authenticator is running on the device the work is happening from, whether it's human or non-human, then I can actually detect a man in the middle and prevent session hijacking. I can prevent