59. Neil Serebryany, Founder & CEO of CalypsoAI: Securing AI's Future and Tackling Tomorrow's Risks

Hello, everyone, and welcome to another episode of the Security Podcast of Silicon Valley. I am your co-host, John McLaughlin, and I'm joined by co-host Sasha Sienkiewicz. Today, we have an amazing guest, Neil Serebryany, the founder. Did I say that correctly?
You got that perfectly. See, there's an example of something that we'll cut right out. He is the founder and CEO of CalypsoAI, an amazing startup that helps protect against and tackle a lot of the new, interesting problems around AI and AI systems. Welcome to the show, Neil.
Yeah, thanks so much for having me, John and Sasha. Look forward to the conversation. Look forward to any areas of agreement, but perhaps more interestingly, any areas of disagreement that we end up finding and having the ability to discuss. Well, this will be a great show, then.
I can tell right away. That's a perfect way to open it up. Yep. Them's fighting words.
I'm not sure I'd go that far. I think these are words that are indicative of the moment in time that we're in. The moment in time being very unclear in terms of what the future holds. It's very murky out there.
It's very murky. You have to be a little bit crazy to get into security. I was a little bit. You know, I see all of these difficult problems.
I just jump right in and build a career around the whole thing. But what's your story? How did you get into security?
Yeah, that's a great question. So for me, my security journey actually started to some degree incidentally. And when I say incidentally, what I really mean is that I was fascinated more so by a specific problem that I wanted to solve than I was by the kind of industry at large. I had always been interested in AI and I had been interested in the impact that effectively mimicking the human brain would be able to have on society for the very, very simple reason that if we are mimicking the human brain and we have the ability to do so using GPUs, we then have the ability to increase the capacity of GPUs over and over and over again.
Meaning that our ability to create artificial versions of the human brain, or technologies that are better than the human brain, is potentially infinite. Whereas the potential of a human brain to some degree is fixed. And through that kind of journey of being fascinated by AI, I ended up spending some time at the National Geospatial-Intelligence Agency. And they had this mission to see and understand the world.
And as part of seeing and understanding the world, they had way too much data. As you can probably guess, the entire world produces an enormous amount of data, and they had no way to analyze it at scale without leveraging AI. And in that kind of context, if you're using a technology, you have to assume that someone else is trying to figure out how to disrupt your use of that technology. And that's where you get into kind of the security side of the house.
How do I leverage this technology in a secure way where I understand what my risks are? Yeah. So what does secure AI mean? This is such a new space.
This is a new technology. Security is fundamental to building great businesses and great systems and amazing user experiences. Everyone brings a different idea, I think, maybe to the table. How do you see it?
I think, realistically, we should start with the fact that it's impossible to ever have, you know, 100% secure systems. Just like everything else in security, it's really about risk reduction, much more so than it is about ever being able to reach that ideal. Yeah.
Oh, perfect. So the question becomes, what does security for AI or secure AI mean? And really at the kind of most simplistic or basic level, before we get into the specific subsectors, before we get into the data training or deployment side of the house, it's really just having a system of controls and potentially technology in place to enable you to better monitor and control for AI-specific risk. So in a sense, security in AI is not different from the core concepts of security as we know them in existing systems.
You still have to protect the infrastructure which hosts the services that provide AI. We still have to protect the data. What do you see as the new pillar in the context of artificial intelligence and security? Is there some new function that we need to protect more closely than we have in the past?
Yeah, it's a really good question. So on the new function side of the house, the protection really needs to come in one of two places. One is on the underlying model side of the house, where you need a different set of techniques in order to be able to protect against people poisoning your data, or data poisoning related attacks. And you need to be able to protect against folks effectively messing with the training or retraining side of the house for your models.
And then on the deployment side of the house, I think it's really bringing together a lot of technologies and systems that already exist into this new kind of context. And this new context is one where you're likely going to have multiple models, multiple applications, multiple kind of agents that you're working with in the context of your kind of global AI footprint. And you need some way to be able to have visibility across that entire kind of potential attack surface. Some way to control the kind of data ingress and egress.
And some way to be able to ultimately enable the compliance side of the house that's going to become an increasingly kind of large part of the AI surface. So it's very interesting that we came to a very natural next question, which is AI is essentially based on processing large volumes of data. And there are certain regulations that are evolving based on the developments in the AI space. Are there any specific regulations that we should all watch out for in the AI space?
What do you see? Yeah. I mean, you already have your kind of existing regulatory footprint. When I say existing regulatory footprint, I'm really referring to your data privacy regulations.
GDPR inside of Europe, the California Consumer Privacy Act inside of California, the various sets of regulations you have in Canada and Japan around data privacy and what you're doing with sensitive information. And then you have all of this new legislation that is coming to the forefront. In the US, it's likely that we're going to be legislating in the same way as we have for privacy on a state by state level before we do anything on a federal level. And so there is a new act in California that is going to potentially affect model developers, but also those that end up using models as part of the work that they do.
In Europe, you have the new EU AI Act, and you have different sets of restrictions based on high-risk, medium-risk, and low-risk systems. And we're also seeing legislation as well as this broader concept of AI nationalism in Japan, Canada, the UK, India, and certainly China as well. Yeah, as far as I know, one of the requirements of the EU AI Act is to disclose the fact that the user is communicating with a system that is powered by AI. This is just one of the examples to make sure that users are not misled about what is providing the service behind the scenes, so to say.
You mentioned a new act in California. Can you give us a little bit more information about what it is and how it will impact companies that operate and do business in California? Yeah, 100%. And so I think it's helpful to separate out those that are building, let's call them foundation models, because that's the term of art that everyone's been using, whether for Mistral or OpenAI's ChatGPT or Anthropic's Claude, versus those that are fine-tuning or leveraging models.
For those folks that are actually building these foundation models, they have to effectively do kind of safety testing of these models prior to deploying them into production. So effectively, AI red team, if you want to think of it like that. And then they have to put in place various compliance measurements. They have to have the ability to shut down their AI system.
And they have to effectively have whistleblower protections in place for these foundation model companies. When you're on the use side of the house, you sort of get into a similar structure as the EU AI Act in terms of the risk associated with the AI application. And you have a different set of regulatory requirements for higher-risk versus lower-risk applications of AI. So help us and our listeners understand, how does Calypso fit into this picture?
What is the key insight? What do you guys do that's better than everyone else in the world? Yeah, really good question. So what we do is really simple.
We allow you to prepare for this AI future. You don't necessarily know what model you're going to be using moving forward, what set of agents you're going to be using moving forward, or just how much sprawl you're going to have across your AI infrastructure, particularly as you operate in multiple jurisdictions. And you need one place to be able to have visibility across all of that AI usage, as well as the ability to change out the underlying parts of your AI stack that you're using.
One place to be able to control specific sets of data flowing into your models or out of your models, as well as be able to protect against things like prompt injection attacks or other AI-specific attacks. And so we give you that command plane, or that command footprint, to be able to understand what you're doing across your AI usage and to be able to govern and secure it. That sounds like a really challenging problem. And I'm very grateful that there are smart people working on this new space.
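To make that idea of a single ingress-and-egress control point concrete, here is a minimal sketch, assuming a simple Python gateway in front of any model; the pattern list, injection markers, and function names are illustrative assumptions, not CalypsoAI's implementation.

```python
import re
from dataclasses import dataclass

# Hypothetical detection rules; a real deployment would use far richer detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
INJECTION_MARKERS = ("ignore previous instructions", "disregard the system prompt")


@dataclass
class Verdict:
    allowed: bool
    reasons: list


def screen_text(text: str) -> Verdict:
    """Flag PII or obvious prompt-injection phrasing in a single string."""
    reasons = [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
    reasons += [marker for marker in INJECTION_MARKERS if marker in text.lower()]
    return Verdict(allowed=not reasons, reasons=reasons)


def call_model_via_gateway(prompt: str, model_call) -> str:
    """Apply ingress and egress checks around an arbitrary model call."""
    ingress = screen_text(prompt)
    if not ingress.allowed:
        return f"Blocked on ingress: {ingress.reasons}"
    response = model_call(prompt)  # any model or provider can sit behind the gateway
    egress = screen_text(response)
    if not egress.allowed:
        return f"Blocked on egress: {egress.reasons}"
    return response
```

Swapping `model_call` for a different provider leaves the controls untouched, which is one way to read the point above about being able to change out the underlying parts of the AI stack without losing visibility.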
And when you fast forward into the future and you look at Calypso AI in that future, I'll let you decide how far into the future we'd like to go. But what does that future look like in terms of success for Calypso AI? Yeah. So fundamentally, I think that AI is really important to the world.
And the reason why is a couple-fold. One, if you actually look at labor productivity, it has really stagnated, at least in the US, over the last couple of decades. It's actually been declining in the EU in the kind of post-COVID era. And if you want to think about global wealth or global GDP, in the post-Malthusian era it's really been based on labor productivity, much more so than determinants like land.
And AI is a technology that has the potential to be able to increase productivity, which over a long enough time horizon should increase global wealth or global GDP. And as part of that, we really want to enable that future. We want to enable folks to be able to leverage any AI vendor, any AI model, any AI system or systems across everything that they're doing and to be able to have control over that. And so success for us is really enabling as much AI adoption as possible for organizations and being able to give them that confidence that while they're leveraging this new kind of technology, they're doing so in a way that is secure.
What do you see as the biggest pain point for a modern organization that is looking to boost productivity with help of AI? And I know my question is very generic because it implies that we take an average company, which doesn't really exist. Different companies provide different functions. But based on your experience, what do you see as the biggest pain point for a company that is trying to adopt AI technology?
Yeah, super good question. I'll give you the non-technical answer and the technical answer. So the non-technical answer is really process change inside of this organization. If you really want to see the full benefit of AI, and most organizations are not there today, you need to be able to think about what do jobs look like inside of your organization?
What do processes look like inside of your organization? And you need to be able to adapt for a future where AI systems are only able to do a specific or certain part of a broader job, or a broader jobs-to-be-done kind of context. On the technical side of the house, I think it's being really clear in terms of what are those use cases that you want to actually start adopting AI for. And then, as we're seeing significant amounts of change within AI, having the ability to quickly respond and react to these changes, and potentially be able to change or modernize the infrastructure that you are building upon.
And that's one of the key value propositions that we offer for organizations that we work with. Double click on your go-to-market strategy a little bit and help our listeners understand what this ecosystem is shaping up as. You mentioned those core models in the beginning. So these are where our large language models are coming from.
There's a lot of companies working on them. They all tend to be big companies because it takes a lot of resources to build them. And then I'd see, I'd be interested to hear your opinion on this. I'd see like a middle layer where there's services being built on top of those large language models.
Things like code completion services, little helper coaches. But these tend to be more task-specific companies. And entire companies are being spun up in sort of this middle vertical. And then we have the B2B space.
We have companies, traditional companies, Toyota, NAS, you know, food companies, agricultural companies, whatever. This is the rest of the economy, like not in the AI space. And there's little connective tissue between those three layers, right? That middle layer of service-providing companies that are starting to pop up are using some large core model or a service like OpenAI or Anthropic or whatever.
And then they're providing services out to, you know, all of our traditional economies. All of the traditional companies that are consuming the benefits and augmenting their staff. Calypso AI, does it fit in both of those parts of connective tissue? Or do you primarily see traction coming in from non-AI companies trying to protect their specialized use of AI?
What does that discussion look like? Yeah, we're really focused on organizations that are leveraging and adopting AI. So we do help protect quite a few applications that use or leverage AI. But we think that over time, the space of protecting foundation models is not really going to exist at scale.
And it's not a space that necessarily needs companies like ours. We think that most of the innovation as well as most of the actual need for security is really going to come from organizations that are effectively, you can call it fine-tuning or leveraging these models that already exist. So in your parlance, the latter two sides of the house. And I also think about this within the context of the innovation-invention dilemma.
Now we're blurring the lines a little bit between innovation and invention. But as a general rule of thumb, most of the economic progress, most of the economic reward of any new technology, really comes from innovation associated with that technology rather than the general invention of the technology in the first place. Do you want to explain to our listeners? I understand and agree 100%, but what is the difference between innovation and invention?
Really valuable. Say I am Thomas Edison, because we're going to go historical here for a second. And I have helped man see at night beyond candles. I've helped invent light.
The invention of light in and of itself is incredibly valuable, of course. But the innovations related to that, from actually being able to produce light at scale, to the energy that's necessary in order to be able to power that, to even all of the industries and changes in the way that people live their lives, generated a lot more of the economic benefit associated with light than the actual invention of, you know, artificial light in and of itself. That's a perfect example of innovation versus invention.
We touched on how the most important component of most organizations is data. The data is what powers modern businesses. Calypso AI is built to ensure that the data is well protected in the new space of artificial intelligence functionalities. Is there an interesting use case for data tagging that Calypso is helping with?
And I'll explain what I'm actually trying to ask. One of the biggest pain points that we see in the industry is an abundance of data. However, we don't always know which data sets are important for specific functions that could be augmented by artificial intelligence. How do you see this problem, and what do you see as the solution to this problem?
Yeah, super good question. I don't necessarily think that, as you said, more data is likely to lead to any sort of super positive results or outcomes for the average enterprise. It's much more of a question of what is the right type of data relative to the example or the use case that you have. And how do you augment that data within the context of a specific industry or within the context of a specific use case?
And so I think that's where you create that kind of natural space for data aggregators to exist, who have that ability to take a broader subset of data within a specific use case or industry context. And that's also the opportunity space for a lot of consulting firms who are really thinking about what is their play within the kind of AI future or AI space that we are now in. And the value of traditional consulting to some degree declines in the context of newer AI models progressing forward. When you talk to potential prospects and when you talk to existing customers, I'm sure you have ongoing conversations about what is the next big feature that Calypso will build.
And I'm very cautious, and I acknowledge the fact that you will not be able to disclose the future roadmap of Calypso. But based on your experience, where do you see the industry moving in the context of data and data protection, ensuring that models are not leaking data, and ensuring that the models themselves and the infrastructure have all of the proper security controls for the safety and protection of the data? Yeah, a hundred percent.
So I'll just start with where we were as an industry and where we're going. From a where-we-were perspective, we were really in what I would describe as the scanner wars for the last period of time. And what the scanner wars were was everyone effectively building machine learning models to detect PII, or to detect PCI, or to be able to detect injection attacks, and basically building more and more scanners. Curating data sets, building a scanner, curating data sets, building a scanner, and then competing over who had more scanners.
The problem there was twofold. One, it was a never-ending problem because you had more data types constantly. You had more geographies that people wanted to be able to account for. And you constantly had more and more attacks that needed to be protected against.
And then problem number two associated with that was simply the language problem. Most organizations operate in multiple languages. Most AI-specific attacks, particularly those that target language, are possible not just in English, but in other languages as well. And everyone's scanners really were English-centric.
And so what we came out with was the first LLM-powered scanner, the scanner to rule them all. That would enable you to be able to operate multilingually. That would allow you to protect against specific types of data or specific threats, no matter the jurisdiction. So as a simple example, covering Japanese driver's license data types with a single one-line request to your model.
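As a rough illustration of that shift away from the scanner wars, the sketch below contrasts one hand-built detector per data type with a single scanner driven by a natural-language policy. It is an assumption-laden sketch: `llm_classify` is a hypothetical stand-in for whatever model endpoint is actually used, not a real CalypsoAI API.

```python
def llm_classify(policy: str, text: str) -> bool:
    """Hypothetical call to an LLM asked to answer yes/no against a policy."""
    raise NotImplementedError("wire this to whichever model provider you use")


# Old style: curate a data set and ship one detector per data type, per language.
def scan_us_ssn(text: str) -> bool: ...
def scan_japanese_drivers_license(text: str) -> bool: ...
# ...and so on, forever, as new data types and jurisdictions appear.


# New style: one scanner, with the policy expressed as a single line of text.
def scan(policy: str, text: str) -> bool:
    return llm_classify(policy, text)


# Example: cover a new data type without writing or training a new detector.
# scan("Flag any Japanese driver's license numbers.", user_prompt)
```

The point is that covering a new data type or jurisdiction becomes a one-line policy change rather than a new curated data set and a new model.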
And so that's the kind of most recent advance that we had. As for where everyone is going, you know, this is not really going to be a surprise for folks, but it's really within the agentic AI space. It's sort of the next big thing for the AI industry within the context of maximizing productivity or productive use cases. Fantastic.
And thank you for that. A very well-structured and deep response to my question. So yeah, being a founder involves lots of interesting moments. And I wonder, as you look back on your days with Calypso, if you might be so bold as to pick out a day, or maybe an event that happened, that you could really point at and say, yeah, this is absolutely one of the best days that we've had at Calypso.
You think back on your entrepreneurial journey. Yeah, I think it's probably when we had our first large scale production customer and we finally felt like we crossed that chasm between, you know, here's something that we're envisioning. Here's something that we're building. And here's something that someone is actually using at scale and benefiting from.
And I think that being the biggest day in this context is really something that's true for companies that are focused on enterprise. I think within the kind of context of enterprise selling, by the time you actually make it into a large scale production release at a major kind of Fortune 500 style company, it's a day that by implication has had many previous days that have needed to go right. No, that's incredible. And you mentioned Crossing the Chasm.
Would you recommend that book to the listeners? You know, I feel like Geoffrey Moore is one of those authors that pretty much everyone in startup land is probably familiar with. So I feel like it's less of a recommendation and more of just one of those kind of required readings for everyone in the AI space. If you're asking for a book that I would recommend, probably something like Technological Revolutions and Financial Capital, written by Carlota Perez, would be a recommendation.
And she effectively writes about how technology has shaped our modern society and the impact of individual technology advances on a macro basis within the context of global wealth, global determinism and kind of company building to some degree. Very nice. Very nice. I have a question to follow up on the previous statement.
Given what we know about AI and where AI might be in the near future, how do you see it changing the culture, the society, and the way that we as humans approach day-to-day tasks and how we operate day-to-day? I think it's going to redesign how jobs are optimized in the short to medium term. So I think that what we're finding is that AI systems are really good today at data synthesis and fairly good at what I'm going to describe as product. And I think that what you're going to find is that we are going to have increased AI-system-and-human teaming, and the need for folks to be able to design around the capabilities of AI systems.
For example, if you're a business analyst, you might be leveraging multiple AI agents as part of your work, and you might be supervising or otherwise putting that kind of information or that presentation together in the context of your job. Yeah, and you did mention earlier that productivity in the EU has dropped, which is consistent with my knowledge. And I'm not claiming that I know much about levels of productivity, but based on what I know, productivity levels in the US are the lowest since the Second World War, which is pretty low. And it sounds, based on what you believe the AI future will look like, that it will help to boost functions that exist today, and it will help us to get through the functions we execute on a daily basis much quicker, with a higher level of certainty and accuracy, which is definitely something that all of us, as part of a larger system and as a group of people, really strive for. Resistance is futile, like the Borg, if that's what you mean, Sasha. Yeah.
And I think that on a more macro basis, we're not necessarily going to be able to get into every technology and how it affects the macro of our society and other societies. But everyone kind of talks about this post-AGI future where you're not necessarily going to have any need. And I don't actually think that's the case. I think that, just limiting it to two technologies, it's likelier and likelier that we're going to get to being a species that has the ability to travel to Mars at a fairly similar timeframe to when we're likely to see the advent of AGI, depending on how you define AGI.
And I think that the increased resources we have as a society, thanks to our AI future, are likely to be one of the core propelling capabilities or technologies that will allow us to become a species that exists on multiple planets, a multi-planetary species. That's a really incredible vision, the long-term vision, but also that shorter-term vision: it sounded almost like everyone with a white-collar job could be promoted to manager. And then what you're managing is your AI minions. Yeah, I think that's a really well-articulated way of putting it.
Well, I mean, I'm just rephrasing your insights. But going back to the founder journey that you've had so far, who was that first customer that you landed? I don't know if you're allowed to say. It was actually the Department of Defense. Oh, wow.
That's an incredible first customer. That must have been like quite the uphill battle. Yes, I think that ultimately one of the biggest advantages of working with customers like that is how they put you through your paces and just the level of kind of rigor that's required in order to be able to work with them. The converse question of your best day at Calypso is what was the most challenging day or moment?
Yeah, of course. I think the most challenging moment for us as a company is probably when we were in between two changes. This was prior to AI becoming a widely talked about subject. And we had been expecting this really large contract that was really going to make the company.
And it didn't end up coming in. And I think the fact that we had been pinning a lot of hope on this specific contract that was supposed to be in place, where everything had been agreed on, and it didn't end up coming in, that was probably one of the more painful days inside of the company. Did they actually have a point in time where they said no, or did you just get dragged through the mud and eventually have to realize for yourself what was happening?
Correct. Even though there were challenging moments, this is one of those things that gives a testament to the character of a founder of the company. Regardless of what happens, there are hard days and there are difficult days. But as long as you're persistent, and as long as you believe in your product and you have great value-add to the market, things just tend to work themselves out.
And that's a great testament to you, even though you had that challenging day. And we are where we are today. Yeah. It's not a failure unless you give up.
As long as you keep on persisting, it's just not a failure. When you had those discussions with large institutions that care about data and care about the systems that govern the data, what was the most challenging component to close the deal? Did you have to change the product and offer an on-prem deployment, or a standard form of on-prem deployment? How were you able to navigate that space?
Yeah. I think one of the things you have to be cognizant of in the enterprise is to not orient yourself towards an individual specific customer, no matter how shiny that customer is, particularly in the context of a customer that wants things that are very unique to them. And that's one of the harder lessons you have to learn because that kind of incremental ARR boost during your perhaps early days looks really appealing and really shiny. But if it takes you off course of the broader goal or the broader kind of mission set, it isn't worth doing in the first place.
Yeah, that's a great way to answer the question. Essentially, we always have to weigh the benefit versus the cost of doing certain things, especially early on. Yeah. And I've also seen that same sentiment ring true with the investors that you may bring on board, because they may bring their own expectations.
And if they're not aligned with your vision, there can be some tension. I've seen that happen in a few companies in past lives, you know. What is an average profile of a customer?
Who is that customer? What are they trying to solve immediately when they reach out? Yeah. So the average customer for us is a really large institution, with a propensity right now towards being in the technology, financial services, or insurance verticals.
But we have customers across many different verticals at this point; like most security companies, we are agnostic to the vertical that customers come from. They're often someone that heads or comes from the security part of the organization. And the problem that they're trying to solve is, we are deploying AI, whether in a task bot or an agent or an application, and we need a system to be able to help secure that deployment.
So do they reach out to secure the deployment of the entire logic that serves or services AI functions? Or do they mainly reach out to govern and protect and ensure the integrity of the model itself? What is the main initial traction point? Yeah, a good question there.
It's really on the side of: we are exposing this model to a set of users, and we're potentially doing so via an integration into an application, an integration into an agent, or with users inside or outside of the company. And we need to be able to have visibility and guardrails over the usage of that application or that agent or that access point into the model. Essentially, you take full ownership of the new understanding of what security means in the context of AI systems. And you take away that headache from the company and from the customer that reached out.
And you have the full ownership and promise that the new service that is integral to the success of a company is fully protected and secure. Do you ever work with cybersecurity insurance companies that are trying to insure the products being released by your customers? Where you give them some sense that you've reduced or eliminated a huge section of the risk, and they get a discount on their insurance rate? Yeah, it's definitely an area that we've thought about and potentially something that we'll think about further in the future. I think one of the fascinating things about the insurance world is just how much of the world's risk is really reinsured by four companies, Munich Re, Swiss Re, SCOR, and Hannover Re, and the impact that one of these companies can have in terms of their policies being applied downstream to your primary insurance carriers.
Yeah, definitely. Well, this is a huge success with Calypso AI. This is no small feat, to build a product, have a vision, build a company, attract the talent, find the market. So congratulations on all of your success so far.
I'm really excited to see that success continue and really hockey stick. I'm curious, with all of your lessons learned through your journey so far, would you accept an opportunity to meet a younger version of yourself? And if you would accept that opportunity, would you have any advice for your younger self? Yeah, I think the single biggest thing is to continuously be confident in your own abilities and steadfast in your own convictions.
Be resolved. Nice. Exactly. Neil, if you had an abundance of time on your hand and based on what you see in the industry, based on what you know about the pain points, based on the pain points that you witness yourself, is there a service or a tool you wish you had time to build or you wish someone else would just go off and build it?
Yeah, I think that particularly as you start to look at the AI safety debate moving forward, it's increasingly likely that you are not actually going to get there in the context of the transformer-architecture models that we have available to us today. That effectively, rather than just doing word prediction, you're going to need models that have the ability to actually interpret the world around them and to actually understand, beyond just doing token prediction.
And in a world where you have models that are actually understanding the world around us, I think that you have a number of very interesting opportunities and tools that need to be built, from the kind of cross-cultural understanding side of the house, where within a lot of the psychology and emotion literature, a lot of technologies around conformist versus individualist societies lead to very different results. Yeah, communication is very different. Not just the kinds of communications that we've had, but also enhanced cross-border communications and cross-cultural communications.
100%. Beautiful. It's so funny. When you're describing that example, what's really missing to get our autonomous agents, AGI, out the door is the piece that actually goes out, looks at the real world, and sees the real world for what it actually is, and not just doing word prediction, as you put it. The thing that came to my mind was a backseat-driver AI version that could see the real world, but then articulate, like, hey, turn left here. Go over here. Oh, no.
Watch out for the pedestrian. But this is where we, as users of the system, have to trust the system to do the right thing. And this is where a company such as Calypso AI comes in, to build that trust. So thank you so much for taking on a very forward-looking and critical aspect of the integration of artificial intelligence into daily life.
And that function is to make sure that safety controls are in place. Yeah. Thanks so much for having me, John and Sasha. It's been really great, and I look forward to it.
And thank you to all of our listeners for tuning in to another episode of the Security Podcast of Silicon Valley. I'm your co-host, John McLaughlin, joined by Sasha Sienkiewicz. And Neil, it's been an absolute pleasure. Thank you again.
Neil, it's such a pleasure. Thanks, guys. Thank you, everyone.