66. AI Expert Michal Pechoucek: How AI Is Targeting Your Mind Now

Hello, everyone, and welcome to another episode of the Security Podcast of Silicon Valley. I'm one of the hosts, John McLaughlin. I'm joined by Sasha Sienkiewicz, the other host, and we have an amazing guest today, Michal Pechoucek. Thank you.
Thank you for having me and thank you for pronouncing my name with such precision. You know, it's the least that we can do. For all of our listeners, Michal is a venture partner at Evolution Equity Partners. You're also involved very heavily with the Czech Technical University in Prague.
You're the founding director of the Artificial Intelligence Center there, as well as an AI professor in the Department of Computer Science. And on your LinkedIn, you have an amazing list of angel investments and advisor roles that you've taken with several startups. Welcome to the show, Michal. Thank you for having me.
I'm excited. So let's chat about security and AI. This is a new space. There are a lot of developments underway.
You know, AI is breaking ground left and right. It's difficult to keep up with the rate of change. And when we think about security, what are some of the things that come to mind? This is an exciting question because, in my opinion, and when it comes to my experience, AI security, and security in AI, is not as new a topic as it may sound.
In fact, attackers have been using AI for a long time. I would even say a decade plus. And when it comes to defenders, you know, there's a tradition of defenders using AI to its fullest to be able to deliver best-in-class safety to our users. One of my first startups, which I started with my PhD students, was Cognitive Security.
We started that in 2008, okay, more than a decade ago. The idea was precisely how we could use machine learning from computer vision to detect advanced persistent threats in NetFlow, in network data. It was very successful. So we were able to exit to Cisco and serve millions of customers with this technology.
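For readers curious what that looks like in practice, here is a minimal sketch of the general idea, not Cognitive Security's actual system: treat per-flow network statistics as feature vectors, learn what normal traffic looks like, and flag statistical outliers for analyst review. The features, distributions, and thresholds below are illustrative assumptions.

```python
# Minimal sketch of anomaly detection on NetFlow-style features.
# All feature choices and values here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-flow features: duration (s), bytes out, bytes in,
# packet count, and distinct destination ports seen in the window.
normal_flows = rng.normal(
    loc=[2.0, 5e4, 8e4, 60.0, 2.0],
    scale=[1.0, 2e4, 3e4, 20.0, 1.0],
    size=(5000, 5),
)

# Learn a model of "normal" traffic; outliers become candidates.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# A long, low-volume flow fanning out to many ports looks beacon-like.
suspect = np.array([[3600.0, 2e3, 1e3, 40.0, 120.0]])
print(detector.predict(suspect))  # -1 means outlier: worth an analyst's look
```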
So even 15 years back, people were delivering on the promise of AI in cybersecurity. Fast forward to today, you know, the world is different and people really think of different things when it comes to AI security and AI safety. And, you know, I see AI security as four different things. The most important one is AI-powered attacks, and us as defenders being able to sustain AI-powered attacks.
So we see attackers using different types of advanced AI, game theory, automated reasoning, large language models, to design malware and malicious payloads in a way that is very lethal and hard to detect. Okay. So to me, this is about us defenders being able to respond. This is an AI security problem.
The second big problem is attackers attacking our AI systems. AI systems are becoming a very, very important part of the software supply chain. In the future, most of our software stack will be in one way or another AI-enabled. And protecting those AI systems is a net new class of cybersecurity problems.
And in fact, MITRE has responded: besides the ATT&CK taxonomy, they have also designed the ATLAS taxonomy, which is specific to AI systems. So, big, big stuff and very important. The third one is, you know, the fact that we are using AI in building software systems is changing software systems as such. And from my perspective, this is a massive software change.
A software change similar to what we saw when computers got connected through the internet. It was a massive change that created new vulnerabilities. When we built web apps, when we deployed software on the cloud, you know, when we built mobile, those paradigm shifts in software created new software vulnerabilities against which we as defenders needed to respond.
These are massive technical shifts in the software industry that create vulnerabilities. Again, an AI security piece. And the last thing, which I think is closest to my expertise and to what I've been doing, is the fact that cyber warfare is moving in a big way from computers, desktops, mobile devices, network, cloud, to people's eyes, to people's cognition, to what people think, what people do online, how people behave, what actions people execute. And those human-centered attacks are now AI-powered.
The precision and the scale and the availability and the cost of these human-centered attacks is just unprecedented. I regard this as one of the most difficult problems, where we as defenders need to protect people against negative consequences of AI, or against attackers using AI for high-precision, well-crafted human-centered attacks. Often people ask me, would you agree with AI regulation? I know it's a controversial topic in Silicon Valley, because in California a recent AI bill didn't make it past Gavin Newsom.
But I'm from Europe and we have a different perspective on AI regulation. And I often say, imagine if in 2000 there had been a regulation where somebody said: you know, in the future there's going to be this social network. And you know what? We're banning political campaigning on those networks.
Would that be a good piece of regulation? I think it'd be a great piece of regulation. But at the time, we didn't know, so we didn't regulate. And actually, I think this applies to the present day.
There's this new piece of tech that we want to use and develop. And we still don't understand the full implications of adoption. So I think it's only fair for people to try to regulate and protect. You mentioned the EU AI Act.
You guys went a little bit further and a little bit faster, making sure that AI systems are not in a position to make decisions that will influence people's lives. Whereas that's not the case everywhere else in the world. This is correct. And actually, I think the disadvantage of the European approach is that we would likely regulate ourselves out of leading-edge innovation, which goes against the advancement of the European AI sector and innovation as such.
AI security is a very special kind of discipline in cyber security. Because in cyber security, we as defenders are very responsive. We are responding and we are building kind of tools and technologies to be as defensive, as responsive as we can be. Right?
While in AI security, we need to think ahead. We need to understand what the implications of the technology are, how the technology can be misused, how the technology can evolve, and what guardrails and defensive mechanisms we ought to build for the future. And this is very difficult because people have different views of the future. People can argue that the future will evolve in different ways.
Somebody argues that the future is going to be very safe and effective. Somebody presents a doomsday perspective on the future. And so it's also getting political. Like, Europe has a different perspective on worries for the future than the US, and then China.
So I think this is the key difference from cybersecurity as we know it. I would say one of the future examples is going to be the data piece. Because, you know, AI as it's trained today is running out of data. So there will be a big push for new classes of data.
And believe me, AI will be asking for behavioral data. Data about how things happen, what is going on, trying to understand and predict behavior. Also people's behavior, right? So there will be a big hunt for good-quality behavioral data.
China is ahead. You know, they've been collecting behavioral data for ages. But we haven't. And I actually think that there will be lots of AI security problems around protecting privacy.
Not necessarily against people, but against machines that would be eager to understand our behavior. And also, you know, I wanted to say that I'm a techno-optimist, right? You know, I really want AI to grow as quickly as it can, because we have hard societal problems that I believe can be resolved by future AI. However, at the same time, we need to continue investing in the safety of AI.
The same way that, in the past, investment in cyber attacks ran parallel to investment in cybersecurity. Hence, we've got this big cybersecurity sector that is well-funded and is protecting the world from cyber attacks. This symmetry doesn't exist in AI development, or AI progress, which in a way represents an opportunity for AI misuse or AI danger. And the investment in AI safety is minimalistic in comparison, right?
And actually, I think this is one of the problems that we need to fix: making the investment in AI safety and AI security proportional to AI progress. Perhaps it's reward-driven. At the moment, there is a lot more reward attached to pushing the technology forward and not really paying attention to what potentially should be considered as equally important. I'm more focused on the horizon.
I think that the horizon is very important. We might think in a one-year horizon, or we might think in a 10-year horizon, actually. And I think there needs to be quite some focus on the long-term horizon. We can take massive advantage and massive yields out of AI in the near future, but we may be driving ourselves into huge problems in the midterm.
Yeah, I think some might call it sustainability. And this pattern just shows over and over and over in things that we do as humans in general. Yeah, I agree. I couldn't agree more.
It's interesting. And the four problems that you laid out very methodically in the beginning, just to run through them one more time. First, AI-powered attacks. Then attackers attacking our AI systems.
That's number two. Number three is using AI to build new things, which is a very interesting space, right? And then the last one is cyber warfare moving off of machines and more into the human space, which is just kind of scary. I'm totally with you in being an optimist.
And I love thinking through ways to give the good guys an unfair advantage. And I'm sure that's what you look for as a partner at Evolution Equity as well. If you ask me what out of those four is a problem that kind of worries me the most, where I can see that I don't have an answer. I think in the first three, I can imagine the kind of technologists kind of working shoulder to shoulder against attackers as we've been doing for the last 30 years.
In the last one, in the space of deepfakes, I'm actually really scared, because I'm seeing this from a scientific perspective. Well, give me a deepfake and I can build you a detector. But give me a detector that is unbeatable by the next deepfake? That's impossible.
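A toy sketch of why that asymmetry holds, not a real deepfake pipeline: once any detector is fixed, a generator can be optimized directly against it, gradient by gradient. The architectures and dimensions below are made-up placeholders.

```python
# Toy illustration of the detector/generator arms race. Everything
# here (layer sizes, feature dimensions) is a hypothetical stand-in.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))

# Pretend the detector was already trained; freeze its weights.
for p in detector.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for _ in range(200):
    z = torch.randn(64, 8)        # random seeds for candidate fakes
    fake = generator(z)           # generated "deepfake" features
    logits = detector(fake)       # the frozen detector's judgment
    # Push fakes toward being classified as real (label 1).
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    opt.zero_grad()
    loss.backward()
    opt.step()
# After this loop, the frozen detector scores these fakes as "real";
# the defender must retrain, and the cycle repeats indefinitely.
```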
And it's only a matter of how much investment and creativity and infrastructure building and trust creation on the internet we would be willing to commit to minimize the impact of deepfakes and manipulated reality. I think this is very hard. It's going to be very hard. I guess in some sense, we've had this problem for a long time.
Like, the answer to the question "who are you?" is supposed to be auth, authentication. And we have this idea in our head that, oh, if we jump on a Zoom, I can see you. I can hear you. You're talking like a human being.
But now a machine can do all of those things too. This is, for me, a new perspective. I actually didn't think this way.
Even three years ago, I was thinking that the biggest problem of society was different. I thought that the biggest problem of society was AI recommendation systems. That AI is recommending what we do, how we live, what we think, what we read, who we interact with. That AI is driving what we are.
I thought this was the biggest problem of AI on the internet, that we should really focus on it and try to fight it. Truth is, that perspective changed. You know, there are things that are even more scary than AI-driven life on the internet. And this is AI-created reality. Right.
You've heard that saying maybe that you're the sum of the five people that you keep closest to you. And hopefully, like, those are all human beings. But do you think we'll move into a world where like one or two of them will become an AI LLM that is just fine-tuned for you and it helps you lead like a better life? It's one of the perspectives.
It's kind of coming back to recommendation, but not recommendation of existing stuff, rather recommendation of created stuff. Yeah, this is quite likely. I've been hearing, I don't know whether it's true or a rumor, that TikTok works differently in China than in the West. That in China, TikTok's recommenders recommend more educational videos, unlike in Europe and in the U.S., where the recommendation is focused on short content, disruptive content, kind of erratic content. So what if in the future the educational content is going to be constructed? And sometimes it can be misleading.
Sometimes it can be teaching you exactly the things you want to be trained on. Giving you skills that you wish to have. Yeah. First of all, we're optimistic, right?
We're optimistic about this, right? And, you know, maybe those technologies, it's just a tool, right? And so I imagine like those tools can be used and deployed for good. If there's a conscious choice of like someone is like, oh, I would love a life coach to keep track of like how much exercise I have or what I'm eating or the way that I spend my time.
I would like to be more productive. I would like to build closer relationships with humans. And if a thing like that nudges me towards these goals, why not have such a tool, you know? However, it's a much more complex tool than a hammer, or something that I can understand by looking at it.
LLMs are essentially black boxes. And you don't really know what went into the composition of that black box. We put a lot of trust into the output, which is the result of a function that happens inside the black box. The question is, who controls the data input that goes into the training of these models?
And who in general is in control of the quality of these LLMs? I need two things. One is transparency. And the second one is choice.
Okay? I need both. The transparency is really important. I kind of need to know what is going on.
How the software is constructed. What kind of data it is being trained on. And why it's suggesting what it's suggesting. Why it's kind of generating what it's generating.
This is why, for me, it's very important. I acknowledge that it's not for everybody, right? Not everybody strives for freedom on the internet. The second one is choice.
If I know, I need to have an opportunity to choose. What good does explainability give me if I cannot choose, right? And choices have been removed from us lately. And if we go back to cybersecurity for a minute, you know, in the past, there was plenty of choice: I can use this solution over that solution because I trust this brand better than the other brand.
But now, when we are seeing that cyber attacks are moving toward attacking humans' minds, the most danger is on closed platforms. Why? There is no independent cybersecurity firm that is focused only on scanning social media. And as a customer, I cannot choose who is protecting my life on a social network.
These days, it's impossible. Platforms are closed. It's very difficult to transfer my security expectations from product to product. So I don't have the choice.
Okay. So I think that we need both things. Transparency, to understand what it is that we are doing, and then a plethora, plenty of choices for how we live online. But they tell us to trust them, and that sounds good enough.
Yeah. True. Well, one of the interesting things about LLMs and data is not just the transparency and giving us the choice, but the value of data from a company's perspective. It's just gone through the roof.
Because you can do all of these amazing things now with data from their perspective that you couldn't do before. Right? This is opening new doors. And it goes back to this idea that if you're not paying for the product, maybe you are the product.
It goes back to my previous point about TikTok in China and in Europe. Actually, it goes back to education. That we should be trained to be inquisitive, questioning, asking for reasons, trained so that we dig deep through this. That way we gain more freedom.
I'm really happy, you know, with the conversations that I've had with younger folks. They're asking those questions. To me, the most exciting part of being a university professor is that, you know, I'm in touch with the coming generation.
And I can see firsthand how they think. And trust me, these days we are underestimating some of the problems that this generation is not underestimating. And, you know, they will remind us in the future. Yes, they will.
I hear an "I told you so" coming pretty soon. In general, when we talk within a business with leadership teams about security and compliance subjects, they are often seen as overhead. However, it's very important that we structure the dialogue such that security is seen not as overhead, but rather as a core component of the system and of the product that we are delivering into the market. But having said that, what are your thoughts?
How do you usually approach the discussion that research and development is important, but we need to think about the implications if we don't do it the right way? So, one remark I want to make is that, you know, as investors, we are investing for the purpose of generating returns, right? But at the same time, there are a number of ways returns can be generated, and people use different ways to generate returns. And you can generate returns while solving important societal problems.
And I chose to invest in AI, and mainly in such a way that I'm focused on problems that would reduce the risks of AI explosion. And this is partly because I've been an AI scientist for the last 30 years. And, you know, even though I haven't invented anything related to LLMs, I still feel I've contributed to AI progress. And it's my duty to think about the negative consequences of that progress.
So this is why I'm investing precisely in the space of AI safety and security. I love that answer. We believe that it all boils down to the UX, the user experience. All of these problems are solvable.
Cyber security in itself is already very complex. There is a lot of noise. There is a lot of data. But once you roll up your sleeves and come up with a solution that is elegant and solves a very specific pain point, everyone wants it.
Everyone knows there is a problem. Everyone wants to solve it. You just have to come up with a solution that is elegant. Actually, I think that we are entering a phase where you have to worry about composition.
How am I composing the great capabilities that other people came up with, to deliver a layer that will make those composed parts of the system available in the most seamless, safest, most exciting, and natural way to the user to solve their problems? I totally buy this. I totally buy this. And improving the world of interfaces can increase the adoption of AI.
Unfortunately, there is also a flip side, which is that some of the interfaces may not be such a great thing. Like when you use a chatbot for search, it's easy. It's nice. It's elegant.
You know, you just type a question, and the chatbot tells you. But this type of interface has precisely removed some freedom. Because in the past, when you did your search yourself and did your research, it was more laborious. But you got better results, because you were able to moderate the results.
You were able to remove hallucination, and you contributed. This new level of interface took away this freedom, and it's kind of pulling your leg with things that are not always 100% secure and safe. And precisely, this is why I'm excited about AI safety and security. Because once you are building an AI system, like an AI-enabled or LLM-enabled chatbot, there are some negative consequences of the wider adoption.
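One minimal sketch of what a security layer around such a chatbot can look like, assuming a hypothetical call_llm() placeholder and simple pattern checks. Real guardrail stacks (prompt classifiers, output moderation models, citation checking) are far richer; this only shows where such a layer sits in the flow.

```python
# Hypothetical guardrail wrapper around a chatbot. call_llm() and the
# pattern lists are illustrative assumptions, not a real library API.
BLOCKED_PATTERNS = ("ignore previous instructions", "system prompt")

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (remote API or local inference).
    return f"(model answer to: {prompt})"

def guarded_chat(user_input: str) -> str:
    # Input-side check: crude screen for prompt-injection attempts.
    lowered = user_input.lower()
    if any(p in lowered for p in BLOCKED_PATTERNS):
        return "Request refused: possible prompt-injection attempt."
    answer = call_llm(user_input)
    # Output-side check: withhold anything that looks like leaked secrets.
    if "api_key" in answer.lower():
        return "Answer withheld: response contained sensitive-looking data."
    return answer

print(guarded_chat("What is NetFlow?"))
```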
I think it's fun to build tech that is safe for people. Yeah, I agree 100% on all of those points. When we build things, if they're not easy to understand, if they're not easy to use, if they're not solving problems that real people actually face, like, what are we doing?
Technology is great, but you're going to see the adoption when it solves real problems. And depending on how we deliver that technology, it will be more seamless, less seamless. It will fit into our lives in new ways. And just like when Google came out and all of the search engines came out, it almost became a verb to know how to use them effectively.
And now "I'm Googling something" is a very common phrase, and it takes a little bit of skill to know how to use that tool effectively. I think maybe we'll see very similar new skills develop around the way that we interact with this new type of LLM interface. Which is really just an interface, right?
And it's interesting and a little bit scary because you can put as much data in there as possible. And it can come from anywhere. And we, you know, at the moment, it's not very clear where it's all coming from. It seems to be reading the entire internet like six or seven times over.
And then we put some security layers in front of it, behind it, on top of it, on the side. Actually, I guess we've touched on this. I think that we are entering the era of new families of data. I've spoken about one, which is the behavioral models, data about what is happening and what people are doing.
And even the laws of physics and stuff, you know, what are the causalities in the events that we see around us. I actually see there is another class of data that's coming. And this is what-if data. Data about things that didn't happen, or things that may happen.
So actually, I think that there is this huge new area of AI simulation. Different types of jobs and work, intellectual problems, where AI will be building all these non-trivial, non-explainable, actually, simulators that will tell us what happens if certain things happen. And actually, I think that, you know, maybe we will even be discovering new things using AI. Also, in things like software generation, maybe we will see that some things can be solved in ways that, you know, we didn't think of.
That there are maybe new programming patterns that, you know, we as people didn't use. That software generation will not be only an aggregation of human knowledge. But there will be a new kind of knowledge that is generated through AI running and executing programs, doing good-quality analysis of the behavior of different types of programs, and then suggesting new ways to program. So I'm actually thinking of this: the data that comes from execution is a new vector of data that I think will be feeding a new generation of our LLMs.
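A toy sketch of what such execution data could look like, under the assumption that traces of real program runs become training signal: run a function on sampled inputs and record (input, output, behavior) tuples. program_under_study is a hypothetical stand-in for any code being traced.

```python
# Illustrative sketch of collecting execution traces as data. The
# traced function and the feature set are hypothetical examples.
import json
import random
import time

def program_under_study(xs: list[int]) -> int:
    # Stand-in for any program whose runtime behavior we want to record.
    return sum(x * x for x in xs)

records = []
for _ in range(100):
    xs = [random.randint(-50, 50) for _ in range(random.randint(1, 20))]
    start = time.perf_counter()
    result = program_under_study(xs)
    elapsed = time.perf_counter() - start
    # Each record pairs an input with the observed output and timing.
    records.append({"input": xs, "output": result, "seconds": elapsed})

# Serialized traces like these, at scale and across many programs,
# are one concrete form of the "execution data" described above.
print(json.dumps(records[0]))
```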
I love it. And with your 30 years of experience in the AI field, overlapping with security, I really feel the deep care. And I'm just filled with gratitude to know that folks like yourself are thinking through some of these problems.
And at the forefront, really at the tip of the spear, of these world-changing technologies that are evolving so quickly now. A quick question for you. If you could go back in time and meet a younger version of yourself, would you? And if you would, what message would you have for your younger self?
So I often think that maybe I would give myself a recommendation to do fewer things and focus more. But when I think about this recommendation deeply, then I come to the conclusion that I wouldn't. You know, unlike many other people, I was doing many different things. At the same time, I didn't win a Turing Award, right?
Which I regret very much. But the truth is, if I were choosing a career as a hardcore computer scientist in one very narrow discipline, not being distracted by startups and other things, I wouldn't do it. Because I did enjoy collecting the dissimilar experiences that life was there to offer me. And I actually think that, you know, at the age of 52, when I'm making choices, I'm making better choices.
And when people are asking me for advice, I think I'm giving better advice compared to the other trajectory that, you know, I could have chosen. So I'm actually advising people to focus more and to go deeper, while respecting that, you know, I am where I am precisely because I took a trajectory of life that gave me a diverse experience. That's an incredibly compassionate answer for yourself, for your younger self. And I believe wholeheartedly that the things that happen to us, and the choices that we make around those, they shape us.
And yeah, and you're an incredible human being. And so, you know, all the gratitude in the world for sharing openly and honestly with us. Well, thank you. Thank you.
It was a great discussion. And I really appreciate your empathetic questions and your deep interest in my thoughts, and helping me evolve my thinking while debating those exciting topics. Thank you for this. One super leading question.
Is there anything that doesn't exist that you wish someone would just sit down and build, maybe with that sense of focus that you're talking about? So if I were sitting on a billion-dollar fund that only I would be investing, you know, I'd be really helping the scientific discovery that would help with the climate. Because even though it's not a trendy topic these days, I still think that we owe it to the generation of young leaders that is coming in 10 years from now. And if we do this now, it will be easier than working on this 10 years from now.
Oh, yeah. That's spectacular. All of the gratitude in the world for coming on the Security Podcast of Silicon Valley. I'm one of the hosts, John McLaughlin.
And I was joined today by Sasha Sienkiewicz, the other host and our esteemed guest. And thank you to all of our listeners for tuning in for another episode of the Security Podcast.