65. Yaron Singer, Cisco: The hard truth about deploying AI today
Hello, everyone, and welcome to another episode of the Security Podcast of Silicon Valley. I'm John McLaughlin, one of the hosts. I'm joined by the other host, Sasha Sinkovich. And we have an amazing guest.
It's a true honor to have you on the show, Yaron. This is Yaron Singer. Thanks, John. Thanks, Sasha.
So happy to be here. Welcome to the show, Yaron. Yaron is the co-founder and CEO of Robust Intelligence, recently acquired by Cisco. Just looking at your LinkedIn, you have an amazing background.
Right off the bat, you're a founder at Bitwave. You were CEO and co-founder there. You ended up at Google as a postdoctoral research scientist. Oh, not to mention you did your PhD at Berkeley in computer science.
After your Google stint, you were a research scientist at Microsoft AI. From there, you transitioned into Harvard, of all places, as assistant professor of computer science, then associate professor of computer science, and then Gordon McKay Professor of Computer Science and Applied Mathematics, tenured, I might add. That led you to your adventures with Robust Intelligence, which you chugged through for about five years. And it's a great honor to have you on the show.
Thank you so much. Yeah. Yeah. No, super happy to be here.
I'll just, you know, want to be a little bit more precise. My time at Microsoft was actually during my time at Harvard. As faculty, you get to do various things, and one of the things that's nice about faculty positions is you actually get to interact with industry. So during my time at Harvard, I actually spent a year and change in the Microsoft Research lab, kind of working with friends and stuff.
And, you know, we actually did some fun stuff on, you know, robust machine learning. So, yeah. So it's all good stuff. But I just kind of, you know, just want to be like a little bit more accurate there.
No, I appreciate that. And in your last gig with Robust Intelligence, acquired by Cisco, would you like to share with everyone what you were up to and how that journey unfolded a little bit? Yeah, yeah, for sure. You know, where does one start?
I think I got interested in vulnerabilities of machine learning and their implications for algorithms probably during the end of my time at Google. You mentioned the start of Bitwave; that's actually where I first started seeing this kind of behavior, where you realize that machine learning is actually a very, very sensitive technology, so very small changes can have detrimental effects. And that means a lot when you're designing algorithms on top of machine learning models.
So that's something I got really interested in. And I saw that at a completely different scale during my time at Google. And that's something that I decided to put myself into during my time at Harvard. So my research at Harvard was really motivated by trying to understand the vulnerabilities of AI and its implications for algorithm design.
And basically, you know, what would robust machine learning look like? What are the techniques? What are some of the things that we need to do? So I did research on it and what have you.
And then there came a time when it just felt like it would be more fulfilling to do this as a company than as research. So together with Kojin Oshiba, who's a student at Harvard that I worked with and wrote papers with, we created Robust Intelligence, moved to the West Coast, and had some fun times and some miserable times, which we can unpack here in this podcast. And now we're all the way through to, I guess, the acquisition by Cisco, where we continue working on this and other things. You identified a very critical problem, which is AI security.
What was the pivotal moment that made you realize, yes, there is business in this, we can deliver a lot of value to organizations across the globe? Yeah, that's interesting. In terms of business use case, that was something that we spent a lot of time trying to identify, right? Like, how do we take something that is pretty conceptual or abstract, and how do you actually make a product out of it?
Like, what should be the product, right? So when we started the company, we didn't know, to be very honest and frank. We started with these very generic statements and hypotheses. And basically, what we knew is we said, look, with everything that we know about AI, it's just going to continue growing at an exponential rate for the coming, like, 20, 30 years, whatever, right?
And on the other hand, with everything that we know about the technology, and we know the vulnerabilities of AI pretty well, there is no way that AI can continue growing at any rate without having security solutions around it. So those two axioms together just told us, one way or another, there's going to be this massive market for AI security. So that was kind of the starting point.
It was like a very, very blue ocean. The timing is very interesting. As we rely more and more on technology to automate a lot of functions in our daily lives, we do rely on a lot of data. So the volume of data has increased, but there's also a lot of noise.
And now we have these new capabilities in the form of AI to propel the productivity of our civilization and society as a whole. We've always had questions about data security and data privacy, but with AI, it's just a Wild West. Yeah, yeah, I agree. You know, I think that there's been awareness, right, about privacy.
It basically started, you know, from the time of kind of the modern web. And then now kind of, what is it, like 15 years later, when we're looking at AI, AI is, you know, that technology is just like it's completely fueled by data. And that really kind of governs the quality and what you can do with it. And, you know, and that has very strong kind of like privacy tradeoffs.
So, yeah, I think that does present a lot of important questions, both for regulators and for companies developing AI, as well as companies that are protecting against potential breaches in privacy and security. So, yes, I completely agree. This is top of mind for a lot of organizations, a lot of people. But then on the other hand, you have the company leaders.
As a CEO, you want to make sure that you're using all of the available technology to continue building a successful company, which means using AI. But then there is a question at that point, how do you do it safely? And this is where the robust intelligence, now AI defense comes in. Yeah, I think that's right.
It feels very much like an arms race that is going at an unprecedented pace when we're thinking about the development of the technology and the capabilities, right? Just pause for a second and think about November 2022, when everybody got exposed to generative AI capabilities and started getting excited about it. If we go back and look at those models, I think it was GPT-3.5, right? When you compare the capabilities that we have with AI today versus the capabilities that we had just two and a half years ago, it's really difficult to conceptualize that leap forward in capability.
So that puts a lot of pressure not only on the companies that are building models, but I think it puts a lot of pressure on different organizations for using the technology. On this podcast, I can't speak on behalf of Cisco, so I'll speak in my capacity at Robust Intelligence. One of the things we realized is that we don't really think of ourselves as selling security.
We think of ourselves as basically enabling organizations to move faster. So the model that we had was securing the AI transformation. What we identified is that for all these organizations, the biggest bottleneck in deploying AI was not an engineering problem. What we always saw is that AI, and then especially generative AI, is just a great technology.
And from an engineering perspective, it's very, very easy to deploy, very easy to use in ways that I never imagined would be possible, honestly, even in my lifetime. It's just so powerful. So it never felt like it was an engineering problem. But when you ask organizations why it is that they're still in POC stages with the technology?
I think surveys show that between 70 and 80% of the time, what organizations say is that the biggest problem is security and privacy, right? They're worried about privacy of the data of their customers. They're worried about security implications, right?
With having such a large attack surface. So that was really the thing that they felt was limiting moving forward with the technology. So our emphasis in building the product was: how do we build a product in a way where people feel like security and safety are just automated, and it just allows them to move really, really quickly with deploying the technology without having to think about it?
And honestly, I think that's the goal for every security company out there, especially when it comes to AI: to be almost invisible, right? To enable that deployment in a really seamless way. A good analogy that comes to my mind is HTTPS. Essentially, we are using TLS under the hood, but you don't have to be involved in the key exchange.
It's there to do the right thing for you as the consumer of the interwebs. Yeah, I think that's right. And you've been obsessed with the problem space. You really have fallen in love with it deeply.
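The HTTPS analogy can be made concrete with a few lines of Python. The snippet below (a minimal sketch using only the standard library's `ssl` module, not anything from the products discussed here) shows that the application never orchestrates the key exchange itself: the default TLS context already has certificate checking and hostname verification switched on.

```python
import ssl

# Build the library's default client-side TLS configuration. The
# protocol version, cipher negotiation, and trust store are all chosen
# for us; the caller never performs the key exchange by hand.
ctx = ssl.create_default_context()

# Certificate checking and hostname verification are on by default,
# which is exactly the "invisible security" the analogy describes.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

Contrast this with opening a raw socket, where the application would have to implement the handshake itself; the point of the analogy is that it never does.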
I can see that just like in the history and, you know, now that AI has kind of exploded, this is such an important topic for everyone everywhere. But you've been involved in machine learning and AI long before it was like the cool thing around the block, right? So, you know, look, I mean, like about loving the problem, I think you kind of have to, I guess, right? Like if you want to do something for so long, like, gosh, you better like it.
If you're going to be doing this, you should be thinking about doing this for at least 10 years, right? It better be something that you love, and it honestly just better be something that is big and deep, right? So that you can see not only what you can do for the next six-month roadmap, but have a vision that really is a 10-year vision, right? And yeah, sure, it's going to change a gazillion times over, but at least it has that capacity.
So, yeah, that's something I recommend to everybody. You just sort of like love what it is that you do and then like make sure that it's like it has that depth. And I think, yeah, and I think AI security like happens to have that depth. So it's a great, great area.
I feel it. It has quite the depth to it. And I'm super curious about your journey, the whole thing that has led all the way up to this moment. What's been your proudest moment?
Wow. Proudest moment. Hard to say. I'm sure there's been so many.
Yeah. I don't know if it's the proudest, but it was one of the best, so I'll describe it. So it was, I think it was like 2023 or 2022.
No, it was 2022. We did a POC with a very large customer and it went well. I was kind of like in close conversations with, you know, with like the champions and everything. And we were looking to close the deal.
It was getting kind of close to the end of the year. It was a seven-figure deal. And it was like early days in the company. So it mattered, right?
It matters a lot. You know, we were in a board meeting kind of like to close the year. And then kind of like one of the board members was just like giving me hell for like, you know, oh, this is like never going to close. And this and that.
You don't have a deal. This, that. And, you know, you sort of take this abuse for 30 minutes in the boardroom, and you feel like an incompetent CEO. Oh, you're such a fool to be led to believe that this customer is going to sign a seven-figure deal with, you know, a fledgling startup, and this and that, on this sort of weird topic of AI security. And then we had a holiday party.
And at the holiday party, I get the email with the signed contract. So that was just a beautiful moment, to have that at the company holiday party, I think it was December 16th or something like that. And then, of course, I immediately forwarded it to the board. Just by the way, FYI, you know.
Exactly. That thing that we were talking about. Just wanted to let you know, you know, hey, just a minor update. Not a big deal, but.
Not a big deal. So it's a great sense of accomplishment to sign a deal with a major customer. But, you know, proving someone wrong at the same time kind of sweetens the whole deal. So that was fun.
Definitely, definitely. Well, you know, there's no such thing as a success without first a little bit of struggle. Yeah, yeah, yeah. Yaron, you mentioned something very important.
In security, how important is the user experience? How important is it that someone with little to no experience in data security and data privacy can use a solution that will protect the organization from accidental data leakage, et cetera? Yeah, I mean, I think that the user experience is really key. And that's something that we felt a lot, you know.
A lot of the questions that we would always have is who is the buyer? Who is the customer? Who is the user, right? And in our case, there is sort of, you know, that question gets asked a lot because, you know, there's sort of, there's a lot of overlap.
Because in our product, there are the AI personas who are using the product. They deploy the product, but oftentimes the check comes from the security teams. So there's a lot of interplay there. Right.
So, you know, and I guess kind of like what that means is it means that you have a lot of like stakeholders for like a single product. Right. So, you know, what that means is it means you somehow have to find the GCD. Right.
Between all these, you know, different personas. So you have to make it as, you know, as simple as you can. Right. Potentially there's a complex experience, right.
Because there's AI, there are endpoints, there's this, you know, like there's data. Like kind of models can like do these different things. They have different formats. Right.
So somehow distilling that into a really simple experience is something that we worked on quite a bit. You know, John, you mentioned that oftentimes there's no success without struggle. I will forever remember the time that my leadership almost quit on me for driving that simplicity in the product. It was headed towards becoming a potentially complex product.
So we made a decision to actually make it a very simple product without compromising any of its functionality. Maybe the way that it was done was not the gentlest. But fortunately everybody took a deep breath, aligned on the idea, and then saw the value. So everybody was on board.
So hopefully, for anyone who's going to see our validation product, they'll appreciate the simplicity of it. And if they can't appreciate the simplicity of it, then they can maybe appreciate the fact that it could have been a lot more complex. So. Yeah.
Yeah. You know, I detect strong remnants of Steve Jobs energy. The dedication to simplicity is just... I'll take it.
No, you have it. And who was it? I think it was Pascal, an old mathematician. He was writing a letter to someone.
It was a really important topic, just so important. And in writing this letter, he said something along the lines of: I am so sorry, I didn't have enough time to write you a short letter.
So I wrote you a long one. And simple is so difficult. So all the appreciation in the world for helping make the world a simpler place.
Yeah. Yeah. Hopefully, hopefully. Time will tell.
I mean, speaking of time telling things: is there a legacy that you'd like to think about? Clearly you've been at this for quite some time, and you look into the future and think about how all of this stuff adds up to the bigger picture. Yeah.
No, that's a great question. Well, just, you know, to be remembered as the revolutionary genius, you know. No, I'm kidding. Yeah.
Yeah. I don't know. Hopefully, honestly, whenever someone thinks of you, they'll think of someone that has done things with substance and is a good person. If you can get those two things, then I think you're in a good place.
So beautiful. Beautiful. I love it. Has there been anyone that you've looked up to and really admired, maybe in earlier years a role model or mentor?
I don't know. I never actually... it's a great question. I should think about this and have a better answer. But throughout my career, there are quite a few academics that I've always looked up to.
My advisor from grad school is someone that I definitely look up to; he's basically the father of computational complexity and algorithms. And then during my time at Harvard, there are several faculty members that I admired, one of which is Leslie Valiant, who won the Turing Award for basically the theory of machine learning. And other folks at Harvard, like Cynthia Dwork, who invented differential privacy.
My colleague at Harvard, Boaz Barak, who's been one of the heroes of cryptography and is now spending some time at OpenAI. And Michael Mitzenmacher, who has done foundational work bridging information theory and algorithms. So I have a lot of academic heroes.
During my time at Google, there was this legendary VP of engineering, Bill Coughran. And I ended up being fortunate enough to have Bill as my seed investor from Sequoia. So he's just been a true inspiration and someone that I learned a lot from. And now that I'm at Cisco, I also get a chance to interact with just unbelievable engineering leaders and people in the cybersecurity space.
So the person who ended up being my boss was on our board when we were Robust Intelligence: Shailaja Shankar. She's had this unbelievable career as a leader in cybersecurity, and now she's leading the entire security business group. Raj Chopra, who's also in the security business group.
And Jeetu Patel, who's now chief product officer at Cisco. So all these people, you observe them and look at their careers, and it's unbelievable, the things that they've done. And you get to look at them up close, and you see how they operate.
It's just an amazing, amazing learning experience. No, thank you for sharing. We've asked that question to a lot of folks. And usually there's like one, like maybe two.
So, no, that's that's really beautiful. That's really beautiful. We're all we're all connected. Yeah.
Yeah. How do you see the cybersecurity domain evolving in 2025? There is clearly a lot of chatter about agents, agentic cybersecurity, data privacy. Um, that's a great question.
Um, I personally think that cybersecurity as an industry is on the verge of massive disruption. And I personally think that however we think of cybersecurity products today, a lot of them are just not going to exist in the way that they exist right now in a matter of, I would say, three or four years. And the reason is, of course, AI. In very broad strokes, cybersecurity is a discipline that is unbelievably data rich and very much relies on data.
It's a discipline that actually requires a lot of actionability on that data and a lot of decision making on that data. And I believe that the existing technology that we have today, or something close to it, is appropriate for leveraging that data into actionable decisions. There are a lot of cybersecurity products that are in some ways kind of static. So they do a very good job at carrying the data.
They do a very good job at giving access to data and presenting that data. But then ultimately the decision is thrown on a team, whether it's a team inside the organization or you're outsourcing it. Ultimately there's a lot of manual process, a lot of humans, a lot of decision making that needs to happen. And I think that the capabilities that we have with AI today can basically automate a lot of this.
So what that means is that what we're going to see, in my opinion, is products that are really, really good at automation and decision making. And that means the existing products that we have today are basically going to change dramatically. It's going to have implications: some organizations and companies will move fast enough and lean into that change. So I think that the dynamics in this field are going to change a little bit as well.
On the adoption curve, I know obviously we're early, right? But just for AI in general, do you think we're still in that tinkering phase? Did we cross the chasm yet? You're kind of shaking your head.
No. I don't think that we truly crossed the chasm, like, you know, as a society. Right. Right.
As a society, that's just my interpretation. You know, I meditate on that question a lot when it comes to AI security, right? I think that the market is big enough.
I think that we are on the tipping point of it. Right now, we are, let's say, two, two and a half years into a post-ChatGPT world, right? And in this post-ChatGPT world, I think organizations are now comfortable with the technology and the engineering around it.
I think that they have use cases and they know how to use the technology; I think they're rather capable in using it. And I think that the biggest hurdles for adoption, as we said at the beginning of the conversation, are around safety and security. Those hurdles are now going away. And in my opinion, what we're going to be seeing in the coming two years is that basically all software will be infused with AI.
So the term AI, in my opinion: in two years, we're going to be looking at this podcast and, first of all, we'll be reflecting on how handsome we used to look two years ago. But other than that, I think what we're also going to be snickering about is the fact that we actually used the term AI, right? Because I feel like in maybe two years, AI is going to be synonymous with software. You have been at Cisco for a few months now.
What is the next big evolution in terms of the product? Yeah, gosh, there are so many things on the roadmap, honestly, and Cisco is investing so much in all of it. I think that what we're seeing right now, and what we're going to continue seeing, is Cisco leaning in very, very strongly into AI: obviously AI security, but also building super impressive AI technology for security applications in general.
So you're going to see more stuff coming out of our team; our group is doing really, really interesting things. And then also I think we're going to be seeing Cisco's amazing capability as a platform, and basically securing AI from a platform perspective. This is, I think, one of the most compelling aspects of Robust Intelligence at Cisco: when you're thinking about AI applications, they are everywhere and at different levels.
And Cisco has this just unbelievable platform that allows you to actually secure AI in a seamless way, because Cisco is essentially the network, right? So I think we'll see how the ubiquity of Cisco as a platform is being leveraged to secure AI. And we'll also see the strong commitment that Cisco has to AI in general, and the implications of that on the world of security. And I think those are exciting times ahead.
I'm sure a lot of people would love to meet you in person. Are there any conferences this year that you will attend? Yeah, sure. So, yeah.
So first of all, anybody who wants to see me in person is welcome to come to the Cisco office and ask me for a run, so that I don't have an excuse. You know, we have wellness. Careful, you're talking with two runners here.
Oh, yeah. So wellness hour, 4 p.m. every day, you'll see us running outside of the Cisco Meraki office.
So if you want to, you know, you want to meet for a run, that's always something that's good to do. But otherwise, I'll be at RSA, you know, like, you know, the various Cisco live events, AI conferences, NeurIPS, and, you know, ICML when it's in a nice location, and ICLR. So, so all of those, you know, you can definitely find me there. Beautiful.
Thank you so much for joining us for this episode of the Security Podcast of Silicon Valley, Yaron. It's a great honor. Thanks. Thanks, guys.
I'm super happy to be here. Thanks for having me. It's a great pleasure having you on. Thank you.
Thank you.