39. Steve Orrin, Federal CTO at Intel, on Tech, Trust, and Transformation

Hello, everyone, and welcome to another episode of Security Podcast in Silicon Valley. I'm here today with a very special guest from Intel, Steve Orrin, who is a senior principal engineer and CTO at Intel. Welcome to the show, Steve. Thanks for having me today, John.

Looking forward to a fun conversation. I'm looking forward to it too. Just looking at your LinkedIn, you've been at Intel for almost 20 years. It looks like you started your career there as a director of security architecture, then as a senior security architect and principal engineer, and now you are CTO.

That is correct. Yeah, I got acquired by Intel back in 2005 and got the opportunity to lead what we called at the time security pathfinding for Intel. So basically all the innovations around cybersecurity above the chip: into the OS, into virtualization, into the web. It was a fascinating time, when we were looking at doing more with what was available today to augment what the research teams were doing as far as developing new chip technologies.

And I did that for about nine years, driving trusted cloud, trusted virtualization, anti-malware technologies, and web security before becoming the federal CTO back in 2013 and leading our federal technology practice. That's incredible. And so when you say you're the federal CTO and you're leading the federal efforts there at Intel, what is that? That's a really good question.

As the federal CTO, I really have sort of three roles inside the company. Part of my role is to work with federal customers and the federal ecosystem to help them understand how to leverage Intel technologies, Intel's ecosystem, our architecture, hardware, and software to advance the enterprise and mission goals that the government has, so being able to come in and translate Intel capabilities into the federal mission areas. The second part of my role is to then turn around and speak Intel-speak, translating the government requirements back into the business units.

So they're building technologies and capabilities to meet those government needs down the line. And then the third area of my job is doing some of the advanced innovation: taking the best of our technology and the big hairy problems and working together with government agencies, the DIB, and the primes, as well as Intel technologists and my own team, to solve some of those problems, to innovate next-generation capabilities, to show what is possible, whether that be in AI and data science, security, or high-performance computing. So really, those are the three main aspects of what keeps me busy on a daily basis.

I know, that sounds like so much fun and really important, and it sounds like Intel is really fortunate to have you on the team. Before you were at Intel, you had a little bit of that entrepreneurial spirit: you started something and you got acquired. What's that story?

So I did a series of startups throughout the nineties and two thousands in the security domain, starting back in '95 with a desktop file encryption and security product. Then I moved to the other end of the spectrum and did mainframe security in my second startup, which we founded back in '98, and then moved into the web security space, where I really helped create the web security market as the CTO of Sanctum. After they got acquired, I moved on to do the same for the XML security market, and it's from there that I was acquired into Intel. So I had a life as a serial entrepreneur in the cybersecurity arena and had a lot of fun.

I had some great mentors and did some amazing things, really helped change the market. And then when I got acquired by Intel, what was interesting is that they presented me with the opportunity to continue to do that kind of innovation with Intel's budget. No more VC costs. Wow.

That's incredible. I've heard this referred to as intrapreneurship. Exactly. So as a CTO, you have a very strong externally facing role, and it sounds like you focus on the government side of the house. You digest the requirements, and you almost have to have your fingers right on the pulse of the industry, or maybe I should say the government, to anticipate what it is they're going to need so that you can get it in the pipeline, get it built, and keep time to market as tight as possible.

What's that like, being so externally facing in the security and government space? So it's a combination of things, John. Part of it is having some really good people at your side so that you can keep a pulse on all the things that are happening. And one of the challenges is that we are in a rapidly transforming environment, whether you take AI, which fundamentally changes every six months, practically, or faster now, or the cybersecurity domain and all the threats that are emerging there.

So it helps to have really smart people surrounding you, listening to them and taking advantage of their talent. But on the experiential side, it's really about getting in and talking to the customers. One of the key things I find fascinating, and that also helps drive this, is when you sit down with the folks that own the problem, that are trying to build that next-generation capability, or for whom it's taking too long to make a decision or too long to get the data in the right format: understanding their problems, seeing the environment they live in, and then having all those technology capabilities and seeing what other industries have done.

And what's nice about Intel is we're inside quite literally everything. And so while I focus heavily on the federal government, I am not that far from the financial services market, from healthcare, from industrial and manufacturing. And so you can bring the best from all those worlds. I like to joke that I federalize commercial solutions often, taking what worked really well in oil and gas, and I can apply that to a Navy ship.

And so you find a lot of those vertical markets inside the federal market. And so there's a lot of interplay between them. Yeah, that sounds really powerful. I think we both came from engineering backgrounds.

It really speaks to that notion that you should never reinvent the wheel. If you've got a solution to a really complex problem and another vertical could reap that benefit, you really just have to see and fall in love with the problem itself, not any particular solution. Exactly. I like to call it the 80-20 rule.

Like 80% of a solution in one space with 20% customization can solve multiple different things. And it goes both ways, taking a commercial solution and then modifying it for the government use case, or taking something that worked for the government and then scaling it out to multiple verticals. It goes both directions that way. Definitely.

So one of the really hot things right now is AI. And you've mentioned it a couple of times. What do you think is going to happen in terms of that intersection of artificial intelligence, hardware acceleration, security, privacy? What's your vision there?

And how is Intel positioning itself, with your help, to really come out as a leader? So those are some really good questions. Let's unpack a couple of those. Starting with AI: there's a hype cycle, but things actually are really changing.

We're seeing the impacts much faster than we did with other technologies. Usually there's this long technology adoption cycle, but with AI, not only the application to real-world problems but the evolution of the technology itself is happening much quicker than a lot of people can keep up with. And when you look at the regulatory and policy side, they're trying to keep up and still trailing.

We even see on the business side that there's a lot of uncertainty about what this is going to mean for my organization. At the same time, their data scientists are off adopting it. And so we're seeing AI transform business. One of the areas where we're going to see it be absolutely transformative is the cybersecurity domain: automating processes that are very manual or time-consuming today, and getting a deeper, better understanding of what's going on.

There are AI tools that, once we can really take advantage of them, will help do some of that prediction, that similarity analysis, the things that today we rely on reams of data and gut instinct for, to say, okay, that's an APT campaign. Wouldn't it be great if we had AI that concluded, hey, these indicators actually are an APT campaign? I don't need 40 hours of my top-tier security people's time to tell me that; I can do it with the AI. Why this is important isn't that I want to replace my cybersecurity team.

My cybersecurity team, or your cybersecurity team, is absolutely crucial. But the problem is we're burning them out by having them look at every firewall hit, every SIEM alert. AI is going to be able to tackle the 70 to 80% of what I call the mundane or the stupid stuff. Let it solve those problems: identify, categorize, and mitigate, patch, what have you.
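The triage split described here, automation clearing the routine 70 to 80% so analysts keep only the hard hunting, can be sketched as a simple routing function. This is an illustrative toy, not any real Intel or SIEM product; the alert fields, signature names, and the 0.9 threshold are all invented for the example.

```python
# Toy alert triage: auto-handle high-confidence routine detections,
# escalate anything novel or ambiguous to human analysts.
ROUTINE_SIGNATURES = {"port_scan", "known_malware_hash", "failed_login_burst"}

def triage(alert: dict) -> str:
    """Route an alert either to automation or to a human analyst."""
    routine = alert["signature"] in ROUTINE_SIGNATURES
    confident = alert["confidence"] >= 0.9
    return "auto_mitigate" if routine and confident else "human_review"

alerts = [
    {"signature": "port_scan", "confidence": 0.97},
    {"signature": "known_malware_hash", "confidence": 0.95},
    {"signature": "unusual_lateral_movement", "confidence": 0.65},
]
routed = {a["signature"]: triage(a) for a in alerts}
```

In this sketch the first two alerts are dispatched automatically, and only the novel lateral-movement alert ever reaches the team.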

Then take your top-tier talent, the ones you don't have the budget to hire more of, your most skilled individuals, and let them do the 10%, the hunting. Let them go after the really hard things that you can't yet train an AI to identify. That way you get them working on the really hard, interesting stuff, and you get faster time to remediation for everything else.

And that's one of the ways AI is going to transform the cybersecurity defense side. We're already seeing applications of AI transforming the way threat actors work. We've seen examples of phishing campaigns generated by the likes of a GPT-style generator, and they are much better when you can ask the model to write a fluent, native-sounding English version of a phishing email than when someone from Russia is trying to figure out what that looks like.

And so we're seeing much better developed phishing campaigns; we're seeing them already use these tools. And I think one of the challenges CISOs are dealing with is that their adversaries are using these tools while they themselves are not taking advantage of them. Part of the message we need to get across is that these are not things that are going to stress your organization.

They're things that can actually help. So the first question was, how is it going to transform cybersecurity? I think both sides are already seeing how that's going to happen. The other side of AI: everyone focuses on, oh, the cool model of the week, the transformer, the GPT, the large language model.

That's all the sexy stuff. And it is. It's awesome, the things you can do with it.

What people don't recognize is the amount of work and effort it took to get to that thing. When you start looking at enterprise applications of AI, the heavy lift is below the surface; we use the iceberg as the metaphor. The tip of the iceberg is what everyone sees and gets excited about. Underneath the water is all the work of data curation, data ingestion, normalization, weighting, and labeling that has to happen in order for you to get quality out the other end.

That requires work. And that's where we're seeing the infrastructure really come into play. How do you get faster time to training? How do you do tuning?

How do you get faster inferencing so you can make decisions quicker? All of that, the infrastructure matters. What do I mean by infrastructure? It's the hardware and software upon which you're running that AI.

A lot of people would have you think that if you just throw it in the cloud, you're good to go. One thing is, cloud is great at scale-out, but it's not that great at scale-up, and even the scale-out can be very costly and time-consuming. What people are starting to see, especially with these really massive language models, is that there are more efficient ways to go at it, with dedicated hardware accelerators or heterogeneous compute architectures for different stages of the lifecycle.

It's not one-size-fits-all. Am I doing my data ingestion? Am I doing my curation? Am I doing the initial training, or the feedback and tuning?

Or am I doing the inferencing? Each one of them could have a different hardware and software architecture that gets you the most efficiency and the best return on investment. And it's not always, let me just throw the biggest GPU I can find at it and scale that up. That's not always the right answer.

It's the right answer for some use cases. Sometimes, yeah. And that's where the innovation at Intel and across the ecosystem is focused: how do we raise the bar across the board on the AI lifecycle so that you can support an innovation curve that's off the charts right now? So in terms of applying that, taking advantage of this new opportunity in the market because of all the advances in AI, its tight intersection with security, and the implications we're going to start seeing in the security world, there must be special projects going on at Intel.

Does your day-to-day involve you in those projects? So it's interesting you mention that. Intel has a variety of hardware architectures that we're providing for different aspects of the AI use case, whether it be low-SWaP (size, weight, and power) capabilities to do inferencing and iterative learning at the pointy end of the spear, next to the camera or the sensor, or things like the Gaudi chip architecture that's available in the cloud to do high-speed training at scale.

So we have hardware architectures that are enabling, but often it's about how you leverage the hardware you have. And this is where our investment in software is important: giving you the capability to build your models across heterogeneous architectures, because you don't want to have to build a custom implementation every time. So we built software abstractions, like oneAPI, that allow you to build once and deploy everywhere, and to have it compile for the different hardware architectures without you having to be an expert in FPGAs or GPUs or a custom ASIC AI accelerator. The software layer will help you there.
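The build-once, deploy-everywhere idea can be illustrated with a minimal dispatch pattern. To be clear, this is not the real oneAPI interface (which is C++/SYCL based); it is a Python sketch of the general pattern, with stand-in backend names, where a single kernel definition is handed to a per-device compile step.

```python
# One kernel, written once against an abstract interface.
def vector_add(x, y):
    return [a + b for a, b in zip(x, y)]

def compile_for(device, kernel):
    """Stand-in for a hardware abstraction layer's per-target compile.

    A real layer would JIT for a GPU, synthesize for an FPGA, etc.
    Here the 'compile' step just validates the target and returns
    the same kernel unchanged.
    """
    supported = {"cpu", "gpu", "fpga"}
    if device not in supported:
        raise ValueError(f"no backend for {device}")
    return kernel  # placeholder for device-specific code generation

gpu_kernel = compile_for("gpu", vector_add)
result = gpu_kernel([1, 2, 3], [10, 20, 30])
```

The point of the pattern is that the application author writes `vector_add` once and never touches device-specific code; only the abstraction layer knows about targets.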

What we're seeing in the government space is that you have really all sides of the problem. You have massive data sets that need to be trained upon, normalized, curated. And then you have what we call the spot-training side: you've seen this thing once, and how do I infer what it is? So you get both sides of the camp, and it really is advancing the science, if you will.

How do I solve for both sides, and do it at speed? And when you talk about security, one of the things I didn't mention earlier that I think is interesting is that everyone focuses a lot on, how can AI help me? How can AI better help me secure my enterprise?

Or how is my adversary using it? One thing we forget is, how do we secure the AI itself? Yes, all these privacy implications. Privacy, bias, poisoning.

So many. Yeah, that's another one. I'm not sure we realize how much data is ending up in the cloud. And then when it ends up in the cloud, yeah, we can do interesting things with that data.

Training models, being predictive, adding a lot of value to people's lives. But there's a cost, maybe not even seen yet: all of this data is now up there, and it's been up there for a long time. And I think a lot of people just willingly provide it in exchange for these services.

Sometimes. But, oh, go ahead. I was going to say, that is one of the challenges we're seeing with these large data sets: not only are they trained on a lot of data whose individual pieces may not seem so interesting, but once you start pulling it together (and this is science that's been around for a while) you can re-identify people. It used to be that six points of data could uniquely identify an individual. I think we're at about three now with the power of these AIs, somewhere around three or four.
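The re-identification point is easy to demonstrate: Latanya Sweeney's well-known result showed that ZIP code, birth date, and sex alone uniquely identify most of the US population. A toy version, with made-up records, just counts how many rows share each quasi-identifier triple:

```python
from collections import Counter

# Made-up "anonymized" records: (zip_code, birth_year, sex).
# None of these fields is "private" on its own.
records = [
    ("97035", 1980, "F"), ("97035", 1980, "M"),
    ("97035", 1975, "F"), ("10001", 1980, "F"),
    ("10001", 1990, "M"), ("10001", 1990, "M"),
]

counts = Counter(records)
# Any attribute combination that occurs exactly once points at one individual.
re_identifiable = [r for r, n in counts.items() if n == 1]
```

Here four of the six records are unique on just three "non-private" attributes; only the two identical (10001, 1990, M) rows offer any crowd to hide in.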

And what's happening is that what we thought was not private data, with a powerful enough AI or the sheer size of the data set, can get to private information fairly quickly. But then there's the other side, and I think the large language models like ChatGPT and others are illustrating this problem: whatever you query it with, it learns. There was the case with a large technology manufacturer where ChatGPT trained on questions their engineers were asking that contained confidential IP.

Now that confidential IP was part of the model, part of the data set, and if I know what to ask, I can extract that confidential information. To give an example: if I say, what is the API between this piece of software and this piece of software, and it was trained on the specifics of that API, it will answer me, which means I can find out what the APIs are.

I can learn about the software modules by querying it very specifically. That's one of the challenges a lot of organizations face, and we've seen a lot of policies come out about the proper use of ChatGPT. Some companies knee-jerked and just blocked it. That didn't do them much good.

We're seeing a lot of policies come out saying, only use it for non-sensitive, non-government work. Don't post or query anything confidential. Use it on your personal time. They're trying to create policies for proper use, which is needed, because it was the Wild West.

Absolutely. And maybe it still is, to some extent. It almost seems like this situation lends itself to a suggestion: if I'm a large corporation, I care deeply about my IP and about surprising and delighting my customers with confidentiality intact before I release my next big thing, and I still want to take advantage of all this great technology, I should go on-prem.

That's one approach. It's one approach. There's another that's gaining popularity, and that's confidential computing. Oh, tell us about that.

Yeah. Confidential computing is a relatively new concept, only around for the past couple of years, for how you do just that. Yes. Confidential computing in the cloud.

Yes. How do I protect my data, whether it be personally identifiable information, corporate IP, trade secrets, or even government-level information, in a cloud environment, where I'm protecting it not only from other tenants but even from the cloud provider themselves? Right. And that is where hardware and software technologies from the ecosystem, so Intel and the other hardware players, as well as the cloud providers, work together to provide an environment where the memory, the actual code, and the data being processed by the CPU in the cloud are encrypted.

So if you think about the lifecycle of data we've talked about for years: data-at-rest protection, that's full disk encryption; data in transit, that's TLS, that's IPsec, your transport protocol encryption. Yes. What was missing was data in use. Yes.

Protecting data while it's in memory and the CPU is processing it and doing transactions: that's where confidential compute comes in. Yes. So now, if I can run my model or my training inside what we call a secure enclave, or that confidential computing container, it's protected from both physical and virtual attacks, as well as from access by the admin or anyone else. Even a co-tenant that's co-resident on that system can't get access to that memory container, because it's encrypted and it's protected.
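The at-rest/in-transit/in-use gap can be made concrete with a toy example. The cipher below is a deliberately simplified hash-based stream cipher for illustration only (not real cryptography, and not how enclave memory encryption actually works): the point is that classic encryption must decrypt the data into ordinary memory before processing it, and that window is exactly what confidential computing closes.

```python
import hashlib

def toy_stream_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (illustration only)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

secret = b"patient record #1234"
key = b"storage-key"

at_rest = toy_stream_cipher(secret, key)   # protected on disk
in_use = toy_stream_cipher(at_rest, key)   # decrypted to process: plaintext in RAM
```

`in_use` is the original plaintext again, sitting in ordinary memory; without an enclave, anything with access to that memory (an admin, a co-tenant exploit, a physical probe) can read it.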

That's enabling new use cases, and there have been a lot of really great papers out there showing how you can now train on sensitive information and protect the data from being exposed broadly. Now, you still have to have controls on who can query. But when you get beyond the broad ChatGPT kind of things and look at where it's actually going to make a difference in business, it's when you start doing domain-specific LLMs.

And so if I'm going to train a large language model on patient records to do better predictions of interactions, for instance of drugs and things like that, confidential computing allows me to leverage cloud scale but protect that sensitive information, based both on regulation and on confidentiality. And that's where we see confidential computing really starting to shine: how do I protect the AI from data leak and exposure as well as from compromise? And not to mention, you mentioned HIPAA. If you have healthcare patient data, we're basically required to protect it.

And there are other situations too. There's PCI DSS. There's other very confidential information, sometimes for a business requirement, or even classified information if you're talking government contracts. There's all sorts of data out there, and I think those verticals would benefit greatly from the great advancements we're having in the AI space. But we have to move forward into that future responsibly.

Exactly. I love this idea of confidential computing, of using trusted compute. What it does is use cryptography and, I'm sure, several hardware mechanisms to logically segregate a section of the CPU and memory, essentially, so that only authorized components of the system gain access to that compute enclave. It's a secure enclave, right?

Exactly. Once you're inside that enclave, help us understand: it's not operating on top of the encrypted data; it's within an encrypted space that you're operating. Exactly.

So think of it this way. At the end of the day, there are new hardware components in the platform that encrypt the memory from the CPU out. So think of it as an overlay. Your transaction, your application, once it's in the enclave, whether it's a database, an AI training set, web servers, whatever you're running in there.

It just happens to be running in an environment that is both CPU-controlled from an access perspective and encrypted. Other than having to call a special instruction to launch it, the application is unaware that it's inside; you don't have to change your code. And there are actually some really good tools, open source and commercial, for taking existing code and putting it into those secure containers without modifying it.

And so that's the benefit. There's Gramine (formerly Graphene) among the open source options, and there are products like Fortanix and others that will take your code and move it into the enclave without you having to change your legacy code or recode what you've done already. Your stuff just operates as it did before, but now it's in this protected space. No, that's incredible.

And I think it's also very important. The startup that I had the great privilege to lead was Peacemaker, and we did almost exactly the same thing, but entirely in software, without any of the hardware support. And needless to say, it didn't catch on in the market.

And I'm happy that Intel has gone the extra length and taken that security all the way down to the very low levels of the CPU and the architecture of the system. That's much stronger than just doing it in software. When I was at Apple, I was on the DRM team, and it was a very similar threat model. You had to protect content that was purchased, movies, books, music, get it delivered into an environment on your iPhone, your desktop, your iPod, for those that remember those things, and get that content decrypted so you could play it in that environment.

Essentially, you remember the Napster-era content theft. You can imagine what those discussions sounded like when Apple went to all of the music labels saying, hey, we want to open a store and sell your music online. In that context, they probably were like, excuse me? And this type of security opened up that business, so that the iPod and Apple could really refresh their business and take on ubiquitous computing with all of the mobile devices we have nowadays.

So yeah, it's a really important problem. I've bumped into it several times, so I have a great deal of respect for what you've helped to build, and I look forward to seeing the world change for the better as people take more and more advantage of this type of security.

There's one other piece of this whole situation we find ourselves in, a path forward, which is homomorphic encryption. And, yeah, exactly, it's not in competition; maybe there's a path forward where it's actually symbiotic with trusted compute, because I always like to think of security in terms of layers, not just one giant door that blocks all attacks or mitigates all of your risks. There's no such thing as mitigating all of your risks. Or, go ahead. I was going to say, homomorphic is an exciting field. We're seeing a bunch of startups and some large investments happening.

I want to lay it out from two perspectives. Number one, today, one of the challenges with homomorphic encryption is that it is an absolute dog when it comes to performance and the resources required to run it. Part of that is that the math behind it is, from a computing perspective, an unnatural act. And so it requires a heavy lift.

So there's got to be a real need for it, and you want to be able to do it on a very discrete data set. The other thing to keep in mind is that a lot of the homomorphic encryption applications today are, I'd say, transactionally driven: a database with a secure-query-access kind of model, or a transaction-processing kind of model. That's an important part of the story, but you need to transform your entire database into a homomorphic database, and you have to transform your query engine into a homomorphic query engine.
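To see why the math is "an unnatural act," here is a toy version of Paillier, one of the simplest partially homomorphic schemes (it supports additions only; fully homomorphic schemes are far heavier still). Even here, a single addition of two plaintexts costs several modular exponentiations over n squared. The primes 17 and 19 are illustration-sized; real deployments use primes of 1024+ bits, and this sketch is not production cryptography.

```python
import math
from random import randrange

# Toy Paillier keypair (additively homomorphic encryption)
p, q = 17, 19                     # illustration-sized primes
n, n2 = p * q, (p * q) ** 2       # public modulus and its square
g = n + 1
lam = math.lcm(p - 1, q - 1)      # private key
mu = pow(lam, -1, n)              # with g = n + 1, mu = lam^-1 mod n

def encrypt(m: int) -> int:
    r = randrange(2, n)
    while math.gcd(r, n) != 1:    # r must be invertible mod n
        r = randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# The homomorphic trick: multiplying ciphertexts adds the plaintexts.
a, b = 42, 57
c_sum = (encrypt(a) * encrypt(b)) % n2
```

`decrypt(c_sum)` recovers 99 without either operand ever being decrypted, which is the property that makes encrypted query processing possible; the stack of modular exponentiations per operation is also why it is so expensive.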

So there's a lot of work to get into that environment. The good news is, that doesn't mean it's not going to happen. We're seeing hardware accelerators come along to drive and accelerate it, and we're seeing software optimization.

So Intel has put out software optimizations for how to make homomorphic encryption scale better on commercial hardware, and we're going to see continued innovation here. This is not something that's going to die. It's going to keep innovating as the hardware, the software, and the use cases drive it.

Where I see confidential computing and other models running alongside is that there are going to be different kinds of transactions that want to live in different places based on who owns the data. If you own the data and you own the query, you can enforce a homomorphic approach once the technology catches up on speed and performance. In the multi-party domain, where it takes a lot of coordination, you may go the confidential computing route. And what I hope to see as these worlds converge is homomorphic confidential computing, where your homomorphic workload is running in a hardware-protected enclave.

So you protect the underlying encryption within that enclave as you drive the homomorphic use cases. We're seeing a lot of exciting stuff happening in the homomorphic space, but I think we're still a couple of years away from something I can sign up for in the cloud, a give-me-my-fully-homomorphic-VM, if you will. That's coming, but it's still a couple of years away. Yeah, it's definitely coming.

I have to admit, I don't follow very closely the homomorphic bleeding edge, but you're right that there are a lot of startups and there's a lot of deep tech investment in that space. And so it will be very interesting to see what happens. And you're right, it's a pig when it comes to how much CPU processing power it needs. And those use cases have to be super specific.

And it is a heavy lift to get your ecosystem into that space. But one of the guys I worked with back at Apple was a cryptographer. I was working on the compiler obfuscation component of DRM inside Apple.

And he was a cryptographer inside Apple. He would deliver these white-box AES mechanisms and I would integrate them into the ecosystem. But he went off and joined a startup called Zama, one of these little startups; they're based in Paris, which is where he lives, so that works out really well for him.

And they're working through it. Every once in a while I hear some really nice update, and I'm like, oh, that's a huge improvement over what was happening before. I'm excited that there are a lot of really smart people thinking through this problem, who have fallen in love with this problem, and that we're coming up with different ways to approach and think about the security in that space, because there's no one right answer.

That's good. That's right. And it's going to require a lot of innovative thinkers to get this to scale and really help transform things.

The nice thing is, once we get there, we start to change the dynamic of how we protect information. Because whether it be confidential computing or fully homomorphic encryption, you get built-in protections from a whole slew of threats that just come off the table, and then you can focus your budget for mitigating threats and attacks on some of the other low-hanging fruit, when your data is secure by default. And I think that's one of the things that will be interesting: if the data never gets decrypted in a fully homomorphic environment, then exfiltration attacks become a non-issue in some respects. Exactly.

Yeah. And we're definitely getting there. All of the foundational pieces are being built and worked through right now, as we speak. It's very exciting to be part of that journey, even in a very small sense.

So at Intel, with your unlimited budget, let me ask a quick question, one that I'm sure all of the entrepreneurs listening are dying to know. How do you think about acquisitions, that internal build-versus-buy decision? It's an interesting question, and a lot of it is going to be domain-specific.

I think when we look at any large company, whether it be an Intel or an Oracle or a Microsoft, a lot of the decision of organic versus inorganic comes down to two things. Number one, is it strategic to the business we're either in or want to get into? How do we accelerate something we're doing, or how do we grow into a new market that leverages it? If you look at the companies, and again this is not just Intel but all the big players, the successful acquisitions advanced the strategic goals that were already in place for the company.

Where you see companies struggle a lot of the time is where they acquire something they think is interesting, but it doesn't actually line up with the core business. Eventually, as you get down the road, you realize, this isn't what we do; it's weird. And I think, especially in the 2000s, there were a lot of companies on acquisition sprees that never fully integrated the things they acquired because of that.

So if you're a small company, think about who you're going to get bought by. Think about whose business is enhanced by your widget, your thing, whether it be faster time to market, because at the end of the day, that's what big companies want: faster time to closing the deal, expanding their addressable market beyond where they are today.

So, new markets, or new opportunities in existing markets, or how do they leapfrog their competition in the day-to-day. And you'll see that when we bought some AI companies, it really was to help accelerate our capabilities toward that core strategic mission of heterogeneous compute. One of the things that Intel fundamentally took into its DNA several years ago is that x86 alone isn't going to solve every workload. I know that's heresy by early-Intel standards.

I know. Are you going to get fired for saying that? Not anymore. Not anymore.

Okay. The idea when we bought Altera was this: at our core, x86 is what we were known for, but our real core is building hardware at scale, being able to funnel things through the manufacturing process and get that architecture through the ecosystem into the hands of the end user. So adding FPGAs to that mix was a natural thing to do.

Adding AI accelerators allows us to go after those markets with very specific capabilities that leverage our same channel, our manufacturing capacity, our engineering, and the cross-pollination between those. And you're seeing examples of how that's transforming things today, where we get heterogeneous architectures, whether it be different kinds of cores, or cores and FPGAs working together, to solve some of these big hairy problems. So again: is it strategic to the business? And it doesn't always have to be the core business.

In lots of acquisitions out there, it's really about looking at your product set and asking: where are my gaps? Where am I missing something that I can plug in? And the question everyone asks internally is, do I spend two years and $800 million, or whatever it is, to go build that capability? Or do I buy it, and then tweak it as needed to get it to what I need it to be?

And those are the questions your M&A folks would typically ask. I think that's something to consider when you're looking at who your acquirers are, and I'll give you an example from early in my career of how it's not always the obvious one. Intel bought my company, Sarvega, where I was the chief security officer. And they weren't the logical buyer.

You wouldn't think they were the logical one to buy a software company doing security and acceleration for XML, because we were thinking of it as an XML services capability. But Intel was asking: what if you took that capability and put it into hardware? What if you could accelerate and secure XML web services from the hardware up? That wasn't the question we were asking until we started talking to them, but that was the aha moment. They saw a gap, they saw a need to accelerate this new market, and they asked whether they could use our technology in hardware to jump into it.

And so that was one example of a non-obvious acquirer. At the time, it didn't make sense. It was like, really? Intel wants to talk to us?

But then when you get the message, you're like, ah, it makes a lot of sense. This is how you add that capability in and really make it a level playing field for everyone. And so that's something I think entrepreneurs need to think about. Everyone assumes: my key acquirer must be someone I'm already tightly partnered with, because they see the value.

It's not always the case. Sometimes it's: who do you give an inorganic opportunity to that they don't have today? Who's got the gap that you fill, who may not be in the obvious channel you've already dealt with? That's really insightful.

Yeah. Thank you for sharing. So when you went through that process with Intel, you guys were surprised at first. We were.

I think when they first reached out to us, myself, our CEO, and the others were like, that's an interesting conversation. Let's go hear what they have to say. Of course.

Once we got down the road, it made a lot of sense, but they weren't the ones we thought would be the acquirers, if you think about the XML firewall and XML acceleration gateway market at the time. So they went after you guys mostly for the talent, the people, the product, the way you brought something to the table that, from Intel's perspective, they just didn't have in-house at the moment. Did they acquire you for the market share?

So they acquired us, honestly, for the technology and for the people. We did have good market share and some key customers, but again, Intel has everyone as a customer. So they weren't looking to us for the market.

Let's face it, we were a relatively small startup compared to Intel. But think about what Intel had at the time, in 2005: really smart chip engineers, designers, and architects, and a software enabling team. They did not have XML expertise.

They did not have a lot of XML security or web or application security expertise. So what they were buying was our expertise in those domains and the core technology of how we had accelerated, in software, the processing of XML transactions. That was the kernel of what they wanted to put into hardware, to extend that capability and scale it. Congratulations.

It sounds like everything has worked out, and they must be doing something right to keep you around for 18 and a half years. I bet you've had an incredible journey, even just as an entrepreneur there at Intel, and have helped with a lot of great projects and interesting things you've gotten your hands into. But what's a typical day for you? I know you mentioned a little bit of internal, a little bit of external, and some big hairy problems.

Is it mostly spent in meetings, or do you like to get your hands dirty still? I would say I wish I could spend as much time as I want getting my hands dirty with code, but I still get to play with the fun technologies. I have my own lab at the office where we get to play with the cool technologies, deploying them and trying things out. But the vast majority of my time is spent with end customers and our internal architects.

So I'm working with the government to understand their needs, getting deep into their challenges, and then working with our teams to try to innovate on those. And that's one of the things that makes Intel interesting, and the government interesting. At Intel, we have technology for everything, from the pointy end of the spear, sensor platforms, to high performance computing and networking and everything in between. So one day I could be talking IoT; the next day I could be talking high performance computing or cloud.

So it keeps it interesting in that perspective. On the government side, they have literally every possible use case. I could be talking about how you secure patient records at the VA, or how you do better object recognition for manufacturing at a Navy base. So you get the full gamut of use cases, which always makes life interesting.

And then, on the innovation side, it's being able to get your hands on the latest technology and seeing how it works, or how to apply it. One of the things that still excites me on the security side is when new platforms and new security capabilities come along, and I get to test them out: plugging in the probes, pulling up Kali Linux, and seeing what happens. Those are some of the fun things you get to do. Not nearly as often as I'd like in my role, but it's definitely something you get to do once in a while that still makes it fun.

Oh, for sure. Or maybe you even just get to participate on demo day and see all the great things that the team you've surrounded yourself with brings to the table. Absolutely. One of the mantras I've had for most of my career is: surround yourself with really smart people, listen to them, and give them enough rope to go cause trouble and break glass.

That's incredible. I love that. Give them autonomy and a safe space to take those risks. Exactly.

Speaking of teams and surrounding ourselves with smart people, do you have direct reports as a principal engineer? Yeah, I do. In the federal CTO role, I have a team of solution architects, technology specialists, and subject matter experts across multiple domains.

So AI, cybersecurity, scale computing, as well as sensing and sensor fusion topics. And when you build your team, and I know this is probably very specific to the role you're hiring for, but let's talk interviews for a second: is there a favorite interview question that you have? The one I like to throw out there is one where I can see how an individual processes information to solve a problem.

I want to understand that, because much of my job, and much of what excites me, is: give me a big hairy problem, and what do I do? How do I go about it? So I present them a scenario, and it's really about how they would go about tackling it, seeing their process. Because ultimately, I don't like to micromanage.

When my team comes on board, I say, okay, here's the problem, go after it. So understanding how they go after it does two things. One, I want to make sure they actually have the tools to go tackle these interesting problems and get after them. Yeah.

But it also helps me understand, once they're on the team, what I need to feed them to make them successful. Do I need to give them detailed requirements documents? Do I need to point them to the right SMEs so they can gather the information? So I want to understand their process.

And it's a hard question, because I'm not asking, what was your best day, your worst day, how did you solve this or that? It's: here's a problem, how would you go about tackling it?

And I think that's what I find the most interesting, because you're never going to get the same answer, but what you do get is how they process information, how they come to a conclusion. Yeah, exactly. Is there a green flag that you look for, where if you hear it, or see it, or maybe even smell it, you will fight to the death to bring that person on board? There are two things I look for that check those boxes.

One is where they go and download the thing and play with it. Getting hands-on to understand how something works, or how something breaks, whichever side of the camp you're on. Roll your sleeves up a little bit. Exactly.

So when they say, the first thing I'd do is download the framework and try this and try that, that's a check. The other is where they say something like, the first thing I'm going to do is go talk to somebody, go learn. That's one of the things I value for myself and for my team: no one can know everything, so be humble enough to know there are things we don't understand, and know that the first step is to reach out and find the answer. Leverage your peers, leverage experts.

That's the other checkbox: knowing that we have a team of people, but we're also Intel. There's a fellow who did something. Think about it: USB was invented here, PCI, these technologies are out there because of Intel, and you can go ping that person and ask, hey, I have a question. Having that kind of mentality is really important for my teams.

And I think for successful architects, it's being able to reach out, the dial-a-friend. Dial a friend. Yeah. The humility to say, look, the more that I know, the more I know that I don't know, and then to fill in those gaps, have fun, have little creative moments, and connect the dots with what we do know.

Yeah. That's incredible. Maybe it's almost like a mentality. It's like a value.

It's a lens that you can see the world through, and it feels like you have this. I think we share this as well, and you look for it in interviews. And you've got to be a little bit crazy to be in security. Maybe there's a story from childhood? What's the story about how you got into security?

It's an interesting question, because if you go back to my degree and what I was doing in college, it wasn't security. It was research biology. That's a big shift. It is.

Back in the 80s, I had two loves: biology, and computers and hacking around. I was a hacker; I liked to play around and see how things fell apart and how systems worked. But when I was looking at going to college in the late 80s, there wasn't really a career to be had in cybersecurity yet.

And computer science was basically COBOL programming, which wasn't exciting either. So I went with the other love, the bio route, and was going to do research biology, something in the microbiology, biochemistry space. That's where I got my degree, and I did some graduate research.

And it just so happened, in the mid 90s, after one of my grants ran out and I had a year off before med school, that a friend of a friend was going to start a company. He had some money, he came from one of the legacy technology players, and he said, I'm going to do a startup in this new thing people were talking about, this internet. And a friend ventured, hey, you're a hacker.

You know the security stuff. Maybe you should talk to him. And so we did a startup in '95. I figured it was the way to help pay for med school.

It's going to be expensive. It's going to be expensive. Yeah. Three months later, I was all in.

I fell in love with what we were doing. This was right before Netscape's IPO, so it was the early days of the internet, and we were out there talking about security before really anyone major was.

And I had some really good mentors. I got the opportunity to be mentored by Bruce Schneier, of Applied Cryptography and many other wonderful books, and all the cryptography work he's done. So I got to learn from him and some of his peers and other industry players, and really got the opportunity to get in on the ground floor of what became the security market we know today, and did it across multiple fields.

And I think that was exciting. Like I said, three months after joining the company as their CTO and chief architect, I never looked back. It took my parents a little bit longer to agree that I wasn't throwing away a career in medicine. But I'm sure they have a lot of pride in where you ended up.

I believe they do. Absolutely. Yes. Yes.

That's really heartfelt. No, thank you for sharing. Did you actually work with Bruce Schneier? Bruce Schneier?

Yes. Bruce Schneier was an advisor at my first company, and he was a partner at my second company. Oh, I think that's incredible. He's a legend.

He is a legend, and an interesting person to work with, just in terms of the depth of knowledge he has. And to give you an example of what I talked about, going out and asking: when we were first starting out, we were doing desktop encryption, and I was coming up with some algorithms, because there wasn't a set standard in those days. So I'd come up with an algorithm and a way of applying it to data, and I thought, I need to get somebody to look at this.

So, out of the blue, I pinged Bruce Schneier and said, hey, can I hire your consultancy to take a look at my algorithm? And we hired him, and he tore it to shreds, literally tore it to shreds, and smacked me upside the head like, you're an idiot. But that's the best type of love there is. Absolutely.

And some of the advice he gave me, which, being young, I didn't really understand when he gave it to me, was that you can't really build a good algorithm, and this extends to anything secure: you shouldn't be building security defenses unless you understand the offense. Right.

His model was: you shouldn't be building a new algorithm unless you know how to take one apart. Learn the cryptanalysis side before you start to build a cryptographic algorithm. That mentality has stuck with me ever since. It's something I've always had from the hacker side of the camp, applied to the cyber defense side: you really can't do a good job of defending unless you understand how the adversary is going to get in, or how things fall apart.
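As a toy illustration of that cryptanalysis-first mindset (a hypothetical sketch, not anything from the products discussed here): a homegrown "cipher" that XORs every byte with a single secret key byte looks opaque at first glance, yet an attacker who thinks offense-first breaks it by brute-forcing all 256 possible keys and scoring each result for English-like text.

```python
# Toy example: why you must know the attack before trusting the cipher.
# Single-byte XOR "encryption" and the trivial cryptanalysis that breaks it.

def xor_cipher(data: bytes, key: int) -> bytes:
    """Naive 'cipher': XOR every byte with one secret key byte (0-255).
    XOR is its own inverse, so the same function encrypts and decrypts."""
    return bytes(b ^ key for b in data)

def break_single_byte_xor(ciphertext: bytes) -> tuple[int, bytes]:
    """Attack: try all 256 keys and keep the one whose decryption
    looks most like English (counts of common letters and spaces)."""
    def english_score(candidate: bytes) -> int:
        common = b"etaoin shrdlu"  # frequent English bytes
        return sum(candidate.lower().count(bytes([c])) for c in common)
    best_key = max(range(256), key=lambda k: english_score(xor_cipher(ciphertext, k)))
    return best_key, xor_cipher(ciphertext, best_key)

# The "secret" key offers no real protection: 256 guesses recover it.
secret_key = 0x5A
ciphertext = xor_cipher(b"attack at dawn", secret_key)
recovered_key, recovered_plaintext = break_single_byte_xor(ciphertext)
print(recovered_key == secret_key, recovered_plaintext)
```

The defender who has never run this attack might believe the scrambled bytes are safe; the attacker knows the key space is 256 and the plaintext has statistical structure. That gap is exactly the point of the advice.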

And that advice he gave me on the cryptographic side was absolutely critical. The nice thing is, after he tore it apart, he worked with us to build a better box. That's perfect.

Yeah, that sounds like exactly the right amount of tough love. Exactly. Thank you for sharing.

I think it's really important to have great mentors in our lives. I know I've been blessed with amazing people who have cared deeply and helped me take the big steps in my career, too, and it sounds like you've had very similar experiences. If you could go back in time and meet your younger self, almost mentor your younger self, would you have any advice for him?

And what would that be? That's a really good question. There would definitely be some investments I'd advise myself to make. Oh, sure.

Go buy like this stock or that stock. Definitely a couple of those. Buy some bitcoins. Bitcoins.

Believe it or not, Walmart stock back in the day would have been a great investment. Amazon. But on a serious note, I think a couple of things I would give advice for. And again, these are things we learn.

I always say that I never regret the decisions, even the bad ones, because we learn from our failures. So one of the pieces of advice I probably would have given myself is to think of the failures not as, oh my gosh, the world's crashing around you, but as something to learn from. Because usually it's about a year after a major failure that you figure out, oh, that was a really good learning experience, but in the meantime there's the pain of that year. A couple of my early startups weren't successful, and those are your babies; it crushes you when it doesn't happen.

I think the advice I would give myself is that every one of those failures was actually a launch pad. It just sometimes takes a little bit longer than you think to get up off your feet and do the next one. That piece of advice would have helped a little bit. And the other thing, and this was the epitome of the nineties, is that with every startup back then, you went all in, a hundred percent all in on your startup.

One of the pieces of advice I probably would have given myself, something I've been doing more recently in my career, is: diversify, spread the wealth a little bit. Diversify your passion, diversify the things you're involved with, because it helps in the early days. I'll pick an example. When I was doing my mainframe security product, we were the mainframe security product.

It was only in the course of that journey that we expanded into other areas, understanding that mainframe security was important, but so was the webification of the mainframe. Expanding your horizons earlier, even if you're not going to pursue those areas, just knowing they're out there, makes the leap easier. And I think that's another piece of advice I learned on the fly, but maybe I would give myself the heads up: think about the non-obvious things, think outside the box up front, as opposed to having to learn it on the fly when what you built didn't work because you were thinking very narrowly, right?

Think more broadly, or at least have it in the back of your head, so that you're better prepared for the curveballs that are absolutely going to come your way. Yeah. Change is inevitable, and we've got to build to move with it, not fight it. We are at the top of the hour.

In fact, you've shared with me one extra minute. I have all of the gratitude in the world. What an amazing conversation. You're always welcome back on the podcast anytime.

I really enjoyed our conversation. We have to go for a coffee; I'll buy you a beer or a beverage of choice, and we'll pick up right where we left off. Sounds good.

Thank you very much for having me today, John. Thanks so much for joining and thanks to all of our listeners for tuning in to another episode of the Security Podcast in Silicon Valley. Thank you.