84. What gets missed when nobody reviews the code (with Jack Cable, Corridor)

Hello, everyone, and welcome to another episode of the Security Podcast of Silicon Valley. I'm your host, John McLaughlin, and I'm joined today by a very special guest, Jack Cable, the co-founder and CEO of Corridor, which is an AI-powered secure-by-design development platform. Welcome to the show, Jack. Thanks for having me, John.
Glad to be here. So an AI-powered secure-by-design development platform. Tell us about that. What is that?
That sounds really cool. So maybe I'll start by kind of taking a step back and talking about what I was doing right before this. I took a bit of an unconventional path into becoming a founder, and my background is in security research, computer science. I got into security through bug bounty programs.
But in short, I kind of cut my teeth through coding software development and finding vulnerabilities in production software out there. That eventually led me to an interesting place. I got into government work originally through the Hack the Pentagon bug bounties. So when I was in high school, I won the Hack the Air Force competition and then went to work for the Pentagon after high school and did a bunch of government work.
And that eventually led to me being at CISA for a couple years, leading on the Secure by Design initiative. And really through the course of that, set out to work with companies to get their software to be more secure by design, to be more resilient to common classes of vulnerabilities. But I really saw that the tools today that companies have aren't sufficient. As we continue to see these, again, preventable classes of vulnerabilities being exploited by our adversaries, it got me thinking there has to be a better way to actually secure a company's code from the start.
That's what we set out to do with Corridor. Wow, that's incredible. You've been pretty focused on this stuff for a while. Initially, like doing research in this space, hacking the Pentagon.
So you're a hacker. Yeah, it's funny. I sort of accidentally found my way into the world of security. So I started coding in middle school.
And then when I was 15, I was building this integration with an API. It was a cryptocurrency website of all things. And, you know, as a curious kid, I entered zero into the transaction amount and noticed that the payment went through. And I thought, oh, that's a little weird.
And then I entered negative one dollar. And to my surprise, that also went through and actually took a dollar from that account. I realized at that point, right, that I had to make a decision. And for me, it was an easy decision.
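The flaw Jack describes, accepting a zero or negative transaction amount, is a classic missing server-side validation bug. A minimal, hypothetical sketch of the check that would have stopped it (the function name and types are invented for illustration):

```python
from decimal import Decimal

def validate_transfer_amount(amount: Decimal, balance: Decimal) -> Decimal:
    """Reject the zero and negative amounts that slipped through in the story."""
    if amount <= 0:
        raise ValueError("transfer amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return amount

# A negative amount is rejected before any money moves.
try:
    validate_transfer_amount(Decimal("-1"), Decimal("100"))
except ValueError as e:
    print(e)  # transfer amount must be positive
```

The key design point is that the check runs on the server, against the authoritative balance, so no client-side tampering with the amount field can bypass it.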
I reported to the company. They had a bug bounty. So I said, hey, I can steal money from people's accounts. They gave me a bounty.
And then I really was encouraged from there to start teaching myself more about security, participating in more bug bounties. So that's really what kicked off my journey. I guess the decision that you were grappling with in that moment when you realized, like, you could transact on negative amounts was like, how many zeros do I add to the next one? Or should I tell them?
Yeah, exactly. I didn't even really consider the alternative. I think part of what was fortunate for me, too, was I got into security at a time when vulnerability disclosure and bug bounties were much more of the norm. So I was fortunate that this company already was welcoming hackers with open arms.
They paid me, I think, $1,000 for that first vulnerability, which at the time as a 15-year-old was a lot, and still is a lot of money. And that really sparked the interest to say, OK, what else can I do? How else can I help secure companies and really test my skills at doing so? And I love that you had that natural inclination just to try things, just see what happens here.
I guess it's a great, great way to start a journey down security. Exactly. And that's the fun part about security. And I found, too, like as I was doing the more technical hands-on work, but even through policy work, I think a lot of the interesting part is that in many ways, it's the same.
It comes down to trying lots of different approaches, seeing what works. Many don't. But often there is a way to have an outsized impact, whether that's with a vulnerability or through the secure by design work. For instance, we started a pledge and got commitments from now over 300 companies.
And there's been some good tangible results coming out of that. I think kind of throughout my career, I've really just been focused on saying, OK, where can I have the greatest impact? And it's been exciting to explore that from the founder side. Yeah, that's super.
That's super interesting. Super exciting. So now as a founder, you get to embark on a journey and you're doing the zero to one thing. And, you know, unless you've been under a rock, like everyone has probably noticed that AI is changing things.
It's accelerating things. Maybe it's unclear, you know, in the big picture how how it's all going to land. But you have stepped forward with a vision and you can see how AI can help make a more secure future for all of us. And you founded Corridor.
Share with us your vision. What did you see there? What sparked that? Definitely a lot of this came out of the secure by design work.
I was doing at CISA, as well as the bug bounty days. And I'd say one observation was that we would talk about things like rooting out entire classes of vulnerabilities from your products, with the recognition that at the end of the day, many of the vulnerabilities that we see exploited out there aren't all that complicated. They are preventable. Things like SQL injection, cross-site scripting, but even vulnerabilities like authorization flaws, where we really ought to be doing more to prevent these at the source.
And just from taking a look at the existing static analysis tools out there, as well as talking to, at this point, hundreds of companies, it became very clear that the existing approach just isn't cutting it. And in particular, I think really what this comes down to is adapting to a code base.
And if I think about, for instance, when I do a bug bounty or a pen test, and I'm kind of researching for vulnerabilities, if I have access to the code, searching through for vulnerabilities, that looks very different than the existing approach that these tools are taking, which essentially is just kind of searching through with kind of static pre-canned templates, because it doesn't actually understand how the code base functions. Our viewpoint is with AI, with the advances in foundation models, it's now much more possible to have tailored scans that actually take into account that functionality. That's, hey, okay, this is a vulnerability for these reasons.
We can see the path by which that's exploitable or vice versa. This one, we really don't need to care about for other reasons.
So I think what's really exciting about this: just as tools like Cursor have brought significant room for innovation to software development, I think there's a ton of opportunity to help application security teams, not just do their own work, not just better identify vulnerabilities, go through and parse existing security reports, validate them, fix them, but ultimately to tie this into the development process. For most companies, it's not possible to have a security person go and review every pull request, every piece of code that's coming in, but we can get closer, we can do better now with AI, and really augment and increase the work that these security professionals are able to do.
Love it. I wish I could review every pull request, just can't. And you get to a certain size, and it's like, you don't even know all of the features that are coming in. And you have to kind of depend on other folks to bubble up like, hey, this maybe touches authentication, maybe we should do a security review.
Anything that can help with that scale problem, especially in startups that have to move fast, have to try things, have to fail quickly, have to move on to the next experiment, and to do that in a way where there's care given to the types of data that the system is handling. Exactly, exactly. Yeah, that sounds like a challenging problem. Part of it, too, is thinking about, okay, the rate at which software development is changing.
Right now, developers can ship code many times faster than they were able to before. The amount of code that's getting written or generated is increasing significantly. I just don't think that the existing security tools are going to be able to keep up with that. And again, really, the thing it comes back to is not getting in the way.
If a security tool is going to slow down development, if it's going to reduce the pace at which a developer can work, that cuts into the limited time companies have to get out there and beat their competitors. So we're working towards this future where security can go hand in hand with development, where it enables development and doesn't slow things down. Yeah, you know, I always think of it this way: adoption is part of the success of whether or not a security tool makes a real difference in your organization.
So I love that I'm with you on all of those points. Corridor, you have a co-founder with you. I do have one. Happy to dive into that.
So my co-founder, super great, his name is Ashwin. We met together when we were both at Stanford. So we did our undergrad in computer science there. And really, so this must have been about five years ago, we did some work at Stanford, at the Stanford Internet Observatory.
And then we went over to CISA, the Cybersecurity and Infrastructure Security Agency, in 2020. This was just as many of the attack surface management tools were starting to come out. We built our own open source attack surface management tool called Crossfeed, and we used that to secure election infrastructure. So we rolled that out to scan all 50 states, every county in the U.S., to look for vulnerabilities in their internet-facing sites. And through the course of that, we reported hundreds of hidden critical vulnerabilities to states. So what was really cool about this, right, was that we had essentially almost built a startup within government.
As you can imagine, there's both the technical angles, but then there's also other bureaucratic elements that you need to work through. So when we started talking again this year, when we started thinking about, okay, how can we really drive forward security by design? He was the first person I thought of because we had done it before. We knew we worked well together.
We knew that we could really build something, scale it out. So ultimately, that's really the biggest thing in a co-founder is someone who you're going to work well with. And I think in more cases than not, it's really beneficial to have someone who you already have that trust with. Because ultimately, it's a many years-long commitment.
It's probably one of the biggest decisions of your life. So you want someone who can be there side by side with you. Yeah, definitely. You're marrying the person, basically, in a professional sense, right?
You're going to go through a lot together. There's going to be very high highs, and there's going to be very low lows. And that's kind of like the founder's journey. So make sure you trust the person.
That's important. Yeah, that's a really nice story of how you met. And so you had already worked together, sort of feel each other out. Are you similar to each other?
Do you complement each other? How did you decide who's going to be CEO? Yeah, it was a fun conversation we had. And it's interesting, too, right?
Because we're both from technical backgrounds. I've done a ton of engineering. And in the early days, we both were building it out. As it turned out, I became the CEO.
So nowadays, you know, I still try to code when I can. But a lot more of my time is focused on all the other parts of running a business. I think part of it, too, is it's been very helpful, especially given the work that I was doing most recently at CISA, where a lot of what my job is now is in making the case for what we're working on. And we're fortunate to have a ton of supporters along the way.
But particularly as we've been bringing companies onto the platform, you know, getting that really valuable feedback. It's thinking, OK, how can we continually be improving our product, but also telling that story? Because I think that's so much of what a company comes down to is the brand. It's how people perceive you.
And you need to generate that virality in some sense to make sure that traction is always increasing. So it's been a great journey so far. Yeah, so far. Sounds incredible.
Did you guys raise money? We did. So the approach we took: we did an angel round. And for context, we started this in January of this year.
We did an angel round in March from folks like Alex Stamos, Christina Cacioppo, the CEO of Vanta, a number of other security founders, Feross from Socket, Dylan from Truffle Security (Trufflehog), and a bunch of CISOs as well. And then we followed that up in April with a seed round that was led by Conviction, Sarah Guo's fund. So we're fortunate to be at a point where now we really can focus on the product. We can focus on getting those early customers, making sure people who are using our product are happy.
And again, that we can really demonstrate the value we've already shown and continue to build that out. So really continuing to push forward on that. Awesome. Congratulations.
It's tough. It's tough to do. It's tough to raise money. Your plan is to shift a little bit and raise from super happy, amazing customers that are faced with this huge problem of scaling out their secure by design and using AI to accelerate all of that.
Exactly. Exactly. And I think it's been super instructive as we've brought folks onto the platform. Because again, really how we've approached this, I think how you need to approach entrepreneurship is to kind of throw all of your assumptions aside and see what actually works in practice.
Because companies are going to have different problems than you originally envisioned. Of course, it's important to keep that kind of long-term vision and still work towards that. But also, it's really about making those micro adjustments along the way to say, okay, this didn't quite pan out as we thought it would, or they're having actually a different type of problem. Originally, when we set out to start the company, we were very focused just on the problem of fixing vulnerabilities.
We thought, okay, if we can fix all the vulnerabilities, then no one will have to worry about anything. But through the course of talking to folks, I'd say a lot of what we realized is that, okay, that the problem isn't just fixing the flaws, it's knowing which ones to fix. If you have thousands and thousands of vulnerabilities, and by the way, if 90 plus percent of those are false positives, then your biggest problem isn't once you've identified this is a valid vulnerability, going and fixing it, it's knowing which ones are valid, it's discovering the unknowns. So that's one example, but really, along the way, we are hearing different use cases from the companies we're working with.
And really, I think it's about prioritizing, figuring out, okay, which of these requests we're getting in, which common problems are going to be things that we can go and scale out to the next 50 or 500 companies. Yeah, and you mentioned some pretty impressive early investors. Dylan from Trufflehog, he's been on the show before, and you mentioned the CEO of Vanta. Are these some of your early adopters as well?
Or are they a little bit more partners? So they've been super helpful in terms of kind of guiding us, connecting us to early adopters. And we've been fortunate to kick off a number of pilots. And really, I'd say part of that is, yeah, this is my first time as a founder.
So it's been a super fun journey just to kind of understand everything that goes into a company and being able to learn from folks who've done this already. It's essential. Christina, for instance, from Vanta: I was the first intern at Vanta in 2019, back when there were six people. So I got to see...
So you got to work with her like one-on-one, basically. Exactly.
In the very early days. And that was a fantastic experience. So kind of being able to take that and now, of course, given where they are, learn from her, learn from Dylan, from others around the journey and really how to do something that works, how to do something that scales. Very privileged to have folks like that alongside us.
Love that. No, it's incredible that you had all of those great opportunities and you hit the ground running, meeting just incredible folks and surrounding yourself with just...
It's sort of that energy that you find here in Silicon Valley, right? It's the drive or the fire, the passion that pushes everything for it. It's contagious too. Yeah, that's good.
It's really good. So along your journey, what would you say the proudest moment has been so far? So I'd say it's been super validating to just kind of see how people interact with the product. We're discovering vulnerabilities that they previously hadn't known about, that, like we've heard, are kind of on par with what they'd expect to see, for instance, from a human hacker in their bug bounty. So I think that's really the most, in some ways, contagious aspect: just seeing the joy that people get from using your product. And ultimately, again, it's about how can we actually be creating value out there? Let's face it, there's a lot of AI products.
There's a lot of things that say they do something. And, you know, they're kind of promises that are made, but not kept. And I think there are many aspects of AI, of LLMs that can be transformative. We've seen this way with software development and even ourselves where we're able to move so much faster, kind of leveraging some of these AI tools.
But part of that is kind of recognizing the strengths and weaknesses, not using it as a panacea, but rather kind of taking the benefits and where necessary, for instance, combining them with more deterministic approaches in order to really build the best products. So that's how we've been approaching it. No, I appreciate that. There's a balance that's required in all of this movement and all of this movement towards change.
And even though the change is happening more quickly, it's important to find that balance too. And along those lines of finding appropriate balances, maybe there's a moment that you would describe as your most challenging moment so far. I think especially early on, the whole angle with entrepreneurship is proving yourself, and not just yourself, but proving your product, proving that you have something that actually works and solves a real problem. And especially early on, you know, often we would have conversations and people would say, okay, but how is this different from the next tool, or what is the really unique value that you're adding?
And going back to some of what I was saying earlier, right? I think that the biggest thing is just having some humility there. It's knowing that our company is not our current product today; that will evolve. It's really the people.
It's our mindset that makes us unique, makes us different. And the fact that we can adapt, and especially given this age of AI where every month or two, we get a new model that's going to go and outperform all the previous models, you can't tie yourself to any one approach. You have to always be iterating. So I'd say that's really the approach we're taking is kind of keeping the end goal, the North Star of delivering value, of making something that can discover better vulnerabilities, that can prevent them from happening in the first place, and being kind of open-minded about how we get there, as well as what the unique pain points we're solving.
And ultimately, you can really only get that from talking to people and understanding what their challenges are. Yeah, no, I appreciate that. If we have this tool, we bring this thing into this world, and it's able to identify fresh vulnerabilities, new things that actually matter, bring our attention into those bits and those pieces that are, at the moment, maybe hidden, maybe a little bit less seen. And we have this extremely powerful tool.
And you're a hacker by trade. Have you ever thought about the implications of, OK, this is a tool, like a tool inherently is neither good nor bad. It's just a tool. You can use it for both sides, maybe.
And then on the good side, the positive side, we can improve our software development life cycles. But on the other side, we could use this maybe to hack into systems that aren't, at the moment, being fixed or patched as diligently as we would like. I could see, like, lots of things just accelerating in that space. I'm sure that's not how it's marketed or being approached to market or anything like that.
And it's not maybe specific even to corridors. It's just something that's happening, right? And we have, in some sense, we have to acknowledge, like, this is happening. Yeah, yeah.
I mean, I'd say my perspective there is that, one, it's looking at the asymmetry that exists today, the fact that attackers already have no trouble finding vulnerabilities in code. There's no shortage of attacks or vulnerabilities that get exploited by ransomware actors or nation states to point to there. So that's really what it comes down to. And to me, this reminds me of some debates we've had in the InfoSec community previously around vulnerability disclosure and around dual-use cybersecurity tools, even things like Metasploit, for instance, that can both be used to defend systems, but also, yes, can be used for attacks.
And really, my perspective there is that it's always better to give defenders more tools. The attackers already have no trouble going and finding and exploiting vulnerabilities. And for every one vulnerability attackers are exploiting, you need to make sure that 100 others are prevented to avoid them getting in. So I think the more we can do to give defenders the tools to better discover and prevent vulnerabilities in their code, that's how we ultimately win here.
And again, tying that back to the whole secure by design idea, it's, okay, if defenders can not just find vulnerabilities, but learn from their mistakes, look at the common root causes and see how to prevent those from happening in the first place, then we can start to move towards fundamentally more secure software. Yeah, I couldn't agree more. And because the, quote unquote, the blue teams or the defenders are always going to follow the rules much more strictly than the black hats out there. They're breaking all the rules anyway.
So you have to, like, level the playing field a little bit by building all of these great, amazing tools just like Corridor. So when you look into that future, and I'll let you decide how far into the future we'd like to travel here. But when you put on your vision and you can see that future, what do you think will be the most challenging problem that the security community faces in that future? Maybe it's helpful to kind of think about where software will be in the future, where I think we will move to a point where the vast, vast majority of software isn't written by humans.
It's generated by AI, likely with the supervision help of humans. But we're moving to a future where, again, the velocity is increasing significantly. The amount of code that's out there is also increasing significantly. But let's also ground it because there's a lot of people who talk about kind of all the unknown unknowns that are supposedly associated with AI.
But I think that if history shows anything, it's that the same types of vulnerabilities that we've known about for decades are going to continue to be the vulnerabilities that we deal with unless we dramatically change our approach. Take SQL injections, for instance, where we've had parameterized queries in MySQL since 2004, so 21 years now. And yet, we still see these SQL injections coming up despite their having been fully preventable for decades. So it is possible to have a future where code is more secure by design.
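The parameterized queries mentioned here keep user input out of the SQL structure entirely. A minimal sketch using Python's stdlib sqlite3 as a stand-in (the table and attacker input are invented for illustration; MySQL's prepared statements work the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# turning the WHERE clause into one that matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE role = '" + attacker_input + "'"
).fetchall()

# Safe: the ? placeholder binds the input strictly as data, so the
# attacker's string is just compared literally against the role column.
safe = conn.execute(
    "SELECT name FROM users WHERE role = ?", (attacker_input,)
).fetchall()

print(len(vulnerable), len(safe))  # 2 0
```

The vulnerable branch returns both rows because the injected `OR '1'='1'` is always true; the parameterized branch returns none, since no role literally equals the attacker's string.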
I don't think that will be solved at the model level because so often security is contextual. It's not about just like a piece of code in isolation, but seeing how it interacts with a broader code base, especially when it comes to business logic or authorization issues, which we've found that AI can do quite well in understanding how applications actually work. So it's about bringing that contextual security in and preventing vulnerabilities at the earliest stages. And we're going to have a lot more ability to do this sort of large-scale compute to actually analyze every single piece of code that comes in.
So I think really that the challenge will be in making sure that this is a kind of proactive approach rather than, you know, contributing and building on the decades of technical debt that we already have. Because if we're only going to, I don't know, 10x, 100x the amount of code out there, I don't think we want a future where that code is riddled with the same exact types of vulnerabilities as we've been struggling with today. So we can get there, but it's going to take real work to do so. So let's get there.
Let's use these models and these new tools to help us get there more quickly. But let's not train on the mistakes of the past only. Like we can actually use our security know-how and technology like what you're building there at Corridor to help build a better future, a more secure future. I love it.
I guess that's what success looks like for Corridor. I think so. And really, what's cool for me too, right, is that this ties in so closely to the work I was doing at CISA. The agency has been vocal about the risks that we face, not just from ransomware groups, but from nation states like China and Russia. And the fact is, if you actually sit down and look at the vulnerabilities they're exploiting, often it is SQL injections, command injections, memory safety vulnerabilities, things that if we were talking, I don't know, 10 years ago, we would think wouldn't be a problem in the future.
Likewise, I hope we're not sitting down a decade or two from now and still kind of rehashing these same old types of vulnerabilities. So I think part of it too is like, I think there often can be a mindset in the security community in some ways of defeatism and saying, okay, well, no system can ever be 100% secure. So as a result, it's hopeless and where we're not going to be able to build more and more secure software. But I think it's really about kind of the raising the cost on adversaries.
And yes, we're not going to ever be able to root out every single vulnerability from a piece of software. But I don't think you can rationally say that we're always going to face memory safety vulnerabilities or always have to deal with SQL injections, because we know how to prevent those, because companies have already been able to do so. Google, for instance, has talked about how they haven't had SQL injections or cross-site scripting vulnerabilities in their products. So I think really it comes down to being optimistic and leveraging what works to root out these known vulnerabilities.
I guess with a certain level of maturity and a certain level of like rigor, you can eliminate those legacy vulnerability types like the SQL injections that you're talking about. But I guess not everyone is a security expert. And when you don't have a security team, you don't have those security processes and you don't have the policies and you don't have the technologies in place, it's a little bit harder.
But I think what you're suggesting, correct me if I'm wrong here, is that with the advent of AI and most code now being generated or will soon be generated, you know, we can nudge it in a direction that eliminates a lot of that stuff just by using the proper libraries and the proper query structures for SQL or for whatever it is that you happen to be building. That is a more secure pattern. And I know that you're a first time founder and you're also early in your career. So congratulations on all of the success so far.
This is incredible. The pace that you're going, have you thought about the legacy that you would like to leave? Definitely, yeah, it's something I think about. And I'd say the biggest thing that has steered me throughout my career, whether through bug bounties or going to work in government... You know, there's not too many young computer science graduates who decide to go into government.
But I found that I was really driven by the ability to have an impact, to be able to, for instance through the work at CISA, affect an entire industry and hopefully leave us better off from that. So I'd say really that is what I'm driven by, a recognition that it's possible to, you know, scale impact beyond just one person. As much as I liked doing the bug bounties, for instance, at a certain point I also started to feel that, hey, there's probably a better way than doing these one-off tests, kind of hands-on with companies. How can I start to, for instance through policy, scale and reach thousands, tens of thousands of companies?
So I'd say really that that's what I'm hoping to accomplish is to look back and say, okay, I actually helped secure the critical software that we all rely on. I'm kind of seeing measurable reduction in the attacks we're facing, knowing that this is leading to a safer future, especially as we continue to become more and more dependent on technology systems, on AI. So that's ultimately what I'm after. No, that's incredible.
I can really see that. And I feel that too, almost in like a, I don't want to call it a civic duty, but an undertaking to help make everyone's life a little bit better. Yeah. So thank you.
I appreciate that. It's because of the hard work and the dedication and the focus and the drive and the vision of incredibly talented, smart folks focused on these difficult problems that the world does actually get better. So all the gratitude in the world. If we went the other direction, instead of going into the future, if we could go into the past just for a moment, and I'll let you decide like how far into the past, but if you had an opportunity to meet your younger self, would you take that opportunity?
And if you would, would you have any advice for yourself? So yeah, I think certainly I would take the opportunity. What I would tell myself is really to always just follow the direction of your heart. And just to give one example: when I was 17 or 18, I was deciding, okay, what to do the summer after high school, and I had the standard tech internship at a company out in SF and was weighing that against working at the Pentagon.
And of course, right, I ultimately chose the Pentagon, but there was kind of a lot of time spent deciding, trying to figure out, okay, which of these routes there is, you know, a lot of prestige and kind of value working in Silicon Valley. That's sort of the more conventional route. But I think a lot of it comes down to thinking about orthogonal directions. What is going to be the most unique experience?
What sets you apart from everyone else? And not what's going to lead to the most money necessarily or the most prestige, but really the highest potential for impact. And that was what started my interest in policy and working in government. And my life would, I imagine, be very, very different if I'd gone to work for that tech company.
So I think it ultimately is around taking the more interesting path, and you never know what's going to come out of that. I love that. Just reinforcing this notion to optimize for the journey and what you'll get out of it. It's always going to be a journey.
The question is like, which part is going to be today? And if I'm listening to the show and I'm curious and I would love to learn more, is there, should I just look up Jack Cable on LinkedIn? Should I look up Corridor? Yes.
Corridor.dev. So yeah, feel free to check that out. Reach out to me on LinkedIn or any other place you can find me.
And yes, always happy to chat. We're in San Francisco, if folks wanted to meet up in person as well. And I'm sure you'd be very happy to have like a security coffee with maybe a couple of security folks who might be interested to learn more.
Exactly. To hear more about your story. You know, we have a lot of entrepreneurs that listen to the show. So this is a little bit of a leading question.
Just a pinch. But is there anything out there that just bothers you deeply? And if someone could just like own this problem and solve this problem with a new service, you would be very happy to pay some money to have that problem solved that you just don't have time yourself to like sit down and tackle properly. But you wish that someone would do this.
So I think there are a bunch of areas. And it's interesting seeing that there are some problems that you can only really understand by going through them and encountering them yourself. I think that's one of the things I value most: having that kind of hands-on experience. There are, you know, folks who try to get into spaces where they haven't directly operated.
And while that can work, I think often the biggest thing to build is just your own intuition, your own sense of direction, from having been in the seat of your customer yourself. So that's really what I would encourage people to do: go explore spaces. Really just sit down and immerse yourself.
And there are always going to be more problems to solve. And it's especially exciting in this age of AI, where the barriers to building a product are a lot lower and the range of what you can do with a product, the possibilities, are more unbounded. I think there's no better time to be starting something.
Fall in love with a problem. Go experience the pain firsthand. Okay. No, that's great advice.
You know, when you sit down as an entrepreneur and you build a thing, it's the first version of the thing. That's always been my experience: that first version is close, but it's not quite there. You've got to revisit the problem, revisit the pain, and listen to the market. Okay, Jack.
Well, this has been incredible. Thank you so much for sharing your stories, your journey with us. I'm deeply grateful that there are smart people in this world actively thinking about how to make it better, how to make it more secure. And sometimes in the security community, we have to stick together.
And this has been a great conversation. Thanks so much, John, for having me. Really enjoyed the chat. Definitely, anyone out there, please feel free to reach out on any of the angles we talked about, and I'll be happy to chat further. But this was a super enlightening conversation. Jack Cable, everyone. The co-founder and CEO of Corridor, one of the best up-and-coming AI-based security companies that you could go and check out.
Go learn a little bit more. Let's be part of the change. Let's make this world a better place. I'm your host, John McLaughlin.
And I would like to thank all of our listeners for tuning in to another episode of the Security Podcast of Silicon Valley. This is a YSecurity production. Thank you again, Jack.