21. Sergej Dechand, Co-Founder and CEO of Code Intelligence, on Fuzzing the Future

Hello, everyone, and welcome to another episode of the Security Podcast in Silicon Valley. Sergej has years of experience in vulnerability scanning, security consulting, and usable security research. He has worked as a freelancer in application security consulting and advised enterprises. He has also worked as a researcher at Fraunhofer and the University of Bonn in Germany and at the National Institute of Informatics in Tokyo.

In the last few years, he co-founded Code Intelligence with his research partner Khaled Yakdan and their professor Matthew Smith. As CEO, Sergej leads Code Intelligence to enable companies to simplify their application security testing, with the goal of making code intelligence usable for developers. Welcome to the show, Sergej. Thank you, John.

So most of my research actually came from the University of Bonn. That is where, straight from the PhD program, we founded Code Intelligence. And so you were doing security research, research in fuzz testing, and this is where all of the intellectual meat for Code Intelligence bubbled up, where it was inspired. Yes, exactly.

So we were doing research in usable security, and in usable security you try to involve the human factors in security. At our research group, our humans, our users, were not the typical end users; we were looking at how to help IT administrators and developers, who are also human, how to support them to make things more secure. And yeah, the research group was led by Matthew Smith, who was our supervisor, and Khaled and I, being the co-founders, were PhD students. At the end of our graduate studies, we were thinking about what to do next.

And at that time, we were collaborating with some industry partners, and this is where we tried fuzz testing and static analysis. And based on the research ideas, we kind of accidentally founded a company. Initially, we wanted to build a product and somehow ended up with, oh, now we've founded a company, the three of us, and want to build the product. Amazing.

So it's the passion and the drive and the focus on solving that difficult problem of usable security. If we unwind a little bit, I always love to ask folks about those formative years back in the day. If you think back to your childhood, do you have any stories that really helped shape who you are today and where you got that passion for usable security? Yeah, maybe I'll save the usability part for later, but let's say I started pretty early with computers.

So my father had a computer, and I used it mostly for gaming. And then around the age of 11 or 12, I was reading some novels. One is The Cuckoo's Egg, a book by Clifford Stoll, who saw that someone was stealing resources on the network, started to dig deeper thinking there was a software bug, but in the end, someone else was using the resources.

And this was my first introduction to hacking and things like that. There is also another book, less known in the US; it's called 23. There was a famous hacker in Germany called Karl Koch, who was a little bit into the conspiracies but also in the hacker scene, and it was a biography about this guy.

And based on those two books, I found it really fascinating as a child. Then at some point, I had some computer games that I needed to give back, and I wanted to keep playing them even though I no longer had the CD-ROM. That was my first experience doing it myself, basically bypassing the copy protection. And then I realized that, as a child, the bypassing is more fun than the games themselves.

And yeah, this is how I accidentally got a little bit into programming. So I started with books, something like Delphi for Kids. Then I started to learn Perl and PHP. We must be part of the same generation, because I have had very similar experiences. Good times.

Those were good times. As you were going through those years and you started to develop a strong sense of self, was there anything that jumped out at you in the world that you regarded as true, but that most people totally miss or regard as false? An inspiration? That came years later. Seriously, it wasn't in childhood.

It was later, when I was studying at the university and doing some consulting projects. I slipped into the IT security industry, and this is where I found the inspiration. I was in my early twenties, maybe a little later, something like 25. And I will exaggerate a little bit. What I saw in the IT sec and app sec business, because those were the years when IT security got really important, as more and more things moved to the internet.

The iPhone came out, everyone was connected already, and the business started to thrive, but with it came the criminality, and based on that, the IT sec business. And if I exaggerate a little bit: the IT sec and app sec business accidentally went in the direction of selling certifications, certified policies, audits, assessments, compliance automation, and, I don't know, certified enterprise pen tests and stuff like that. Basically focusing on the regulations. So it was really focused on the managers, and those have a lot of check marks.

This is basically also the right thing to do from their perspective, because to be on the legally safe side, they have to comply with all the rules. But at the same time, working as an engineer, I saw that it creates a lot of friction, with the managers trying to focus on compliance and filling in all those check marks. Then you have the security people, and the security people were, hey, let's find more security issues. They had the fun of attacking the software and telling the developers how to improve it.

And then, yeah, the developers, who got defensive, because in the end everyone was criticizing their work. From the management side, they had to release faster but at the same time fulfill all the check marks. And there is one truth we all know: all those assessments happen, but in the end the software and the networks are still insecure, because large parts of those certifications and audits work from a very high-level perspective and don't dig into the technical details. So in the end, you've got certified software, but it wasn't really secure.

And from the perspective of the end user: if we read any headline that ends up in the newspapers, you read something like, company X got hit by ransomware and was hacked. And then a reason is given: someone opened a PDF, and based on that PDF, everything was infected. All of a sudden it looks like the person clicking on that email did something wrong. But in the end, it's because there was a security issue.

There was a misconfiguration, and we blame the end users. But if I receive a CD from someone, I want to open the PDF on it without thinking. And of course, yeah, this is basically the stage we are at.

What really inspires me is if we dig deeper into the companies and stop doing this certification and security process top-down, but instead start with the engineers who are writing the software, start with the people who know best what the software does, because they are engineering it. And somehow, based on their processes and their tooling, try to do it the other way around. You get similar results to the certification: you can still achieve everything needed to be legally safe, but you also get products that are actually secure from an end-user perspective.

I want my phone to be secure. I want every piece of software I use to be secure. So in the end, based on this inspiration, on this insight, we founded Code Intelligence and said, we have to focus on the developers and the engineers, because they are the ones who can fix the software. I love it.

Now, the story resonates with everything I've seen in my career here in Silicon Valley as a security professional, where there is often a tendency, let me say, to blame the user for security mishaps. And I have yet to meet a user who says, yep, I don't care about security, I'm just doing this thing, or who has actually, you know, done something unreasonable.

To step back and accept that part of our job as security professionals is to make this stuff usable, to be the sort of security professional who steps into a problem space and presents a real solution that doesn't introduce new friction, that's easy to use and to understand. You know, I think a lot of times there's a tendency to focus only on the technical grit and to regard the human experience as the stuff you just have to deal with. And the security you were alluding to, the certifications and everything, I call that checkbox security. And I make a distinction between checkbox security and actual security.

There's some overlap there, but there are definitely two ways to think about security. And those are the two general ways I've noticed companies tend to think about security. Checkbox security exists, of course, for legal and compliance reasons. But at the end of the day, what really matters, what keeps you out of the news for the wrong reasons, is definitely the real security.

So I love that you noticed this and were inspired. But why didn't anyone else solve this? I wouldn't say no one else solved it yet. There is an entire movement already, started with developer-first security.

So now, if you look at the different tools, there is static analysis, dynamic analysis, and software composition analysis. You see that a lot of developer-first companies come from static analysis and software composition analysis. But if you take more of a look at the dynamic part, where you actually execute the software the way an attacker would, it's still not there yet that developers use it. If you look at the most common pen-tester tools, those are called dynamic analysis tools, but they are used by testers, not by developers. And this is where we come in and say that we plan to create dynamic analysis for the developer.

Amazing. And so may I ask, was it the experiences you had, perhaps in your childhood, that led you to bridge that gap between static and dynamic analysis, or what experiences led you to see that gap? Yeah, that's a good question. Basically, I can think of two experiences leading to that, one inside the business and one inside the research.

In the business, I was a consultant, basically working as an engineer on a product. And this product was going through a certification and pen testing, as you said. I was the engineer who filled out a lot of check marks and talked to the security people during this entire assessment. We were using static analysis, and the consultants were paid and had this label, we do enterprise pen tests, where they basically gave you a report, and then the manager had the checkboxes for ISO 27001 and some other certifications that were required.

And I was digging in, because I was already interested in IT security and was already looking out for a lot of patterns. So I knew what was going on and had already found a few little issues myself. During the static code analysis part, we got a lot of false positives. We basically ruled them out.

It was a little bit annoying to filter out those issues, but not too much. And then at some point, the actual pen test happened. So we rolled everything out, and then they attacked the application from the outside.

At the end, they gave us a report with 20 findings, and you would expect that if someone is a pen tester, you wouldn't have false positives, because there is a human involved. And 18 out of the 20, if I remember correctly, off the top of my head, only two issues were actual issues, and 18 were: hey, we ran this specific query on your REST API and it returned a 500 error, and because we put a semicolon, a question mark, and stuff like that in the username, there is a SQL injection. But in the end, some other exception was thrown internally, so there was no SQL injection happening. That was basically the experience.
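To make that false positive concrete, here is a minimal, hypothetical Java sketch (the class, endpoint logic, and query are invented for illustration) of why a 500 response on strange input is not proof of SQL injection: the error can come from an internal exception while the query itself is parameterized.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// A scanner sends a username like "x'; --" and sees an HTTP 500, but the 500
// comes from input validation, not from SQL reaching the database.
public class UserLookup {

    public String findUser(Connection db, String username) throws Exception {
        // This validation throws on unexpected characters, which surfaces
        // upstream as a generic 500 error.
        if (!username.matches("[A-Za-z0-9_]+")) {
            throw new IllegalArgumentException("invalid username");
        }
        // Parameterized query: even without the check above, the input is
        // bound as data, so no SQL injection is possible on this path.
        try (PreparedStatement stmt =
                db.prepareStatement("SELECT email FROM users WHERE name = ?")) {
            stmt.setString(1, username);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}
```

A black-box tool only sees the status code; with source access, the false positive is obvious.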

But at the same time, I was doing code reviews and seeing what went well and what didn't. And what I found is that there was an SSL certificate issue. So instead of logging in via username and password, you could authenticate using a smart card, and the client software didn't check the server's certificate correctly: you could present any valid certificate, because they didn't check the domain name. So that was one issue.

So you're just presenting an arbitrary certificate, and it's, yeah, hello, welcome, welcome, sir. Exactly. But this only happened if you do the client authentication. What they didn't cover in the pen test was authenticating via smart card: they did the test authenticating via username and password, but they never authenticated via the smart card.
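A minimal sketch of that class of bug, assuming a Java client; the names are hypothetical and this is not the actual code under discussion. The point is that the certificate chain can validate while the client never checks who the certificate was issued to:

```java
import javax.net.ssl.HttpsURLConnection;
import java.net.URL;

public class NaiveSmartCardClient {

    public static void connect(String endpoint) throws Exception {
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL(endpoint).openConnection();

        // BUG: the trust chain may be checked, but hostname verification is
        // disabled, so a valid certificate for *any* domain is accepted.
        // This is exactly the gap the pen test never exercised.
        conn.setHostnameVerifier((hostname, session) -> true);

        conn.connect();
        System.out.println("Connected: " + conn.getResponseCode());
    }
}
```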

And this was the reason why they did not find it. And there was a second issue, which I don't remember exactly. I think it was, oh, it was a command line injection: if you requested too many results, a background job started with an exec, and the exec got something the attacker was controlling. Basically, you could exec anything you wanted. So this was also a very serious issue, and it was found by neither the static analysis nor the pen test.
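As a hedged illustration of that second bug (invented names, not the real code): a background job that builds a shell command from attacker-controlled input, next to a safer variant.

```java
public class ExportJob {

    // BUG: userSuppliedName flows into a shell command line. A value like
    // "x; rm -rf /" makes the shell run arbitrary commands.
    public static void archiveResultsUnsafe(String userSuppliedName) throws Exception {
        Runtime.getRuntime().exec(new String[] {
            "sh", "-c",
            "tar czf /tmp/" + userSuppliedName + ".tar.gz /var/results"
        });
    }

    // Safer: no shell involved; the value is passed as a single argv element.
    // It still needs validation, but shell metacharacters lose their power.
    public static void archiveResultsSafer(String userSuppliedName) throws Exception {
        new ProcessBuilder("tar", "czf",
                "/tmp/" + userSuppliedName + ".tar.gz", "/var/results").start();
    }
}
```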

And from the pen test's point of view: they wrapped up the pen test, they had a week, they attacked the application from all directions, but no one measured code coverage. So you didn't know what was actually exercised. If you had measured the code coverage, you would see, oh, there is an exec going on, and it only happens if you request more than whatever thousands of files.

And there is authentication going on with SSL, but it only happens if you authenticate via smart card. That wasn't covered. And this is where I started to believe in white-box testing: you have to measure what you did at the end, otherwise the pen test is worthless.

Because if the attacker does just one little thing better than you, the attacker wins. And this is basically the advantage a lot of companies have. They know what's going on. They have the source code.

So why not play out that advantage? No, I love it. So your real-world experience gave you the insight to identify these two gaps. Let me try to summarize the two huge gaps you noticed.

One is the false positives: things being flagged in both static analysis and dynamic analysis that were not actually issues. And the other thing you're pointing out, which sounds like a much graver gap, if I can use that word, has much less to do with static analysis and much more with dynamic analysis: when you have pen-testing teams, who knows what exactly they're testing. Everyone who runs a pen test gives the pen-test contractors a time box, a time window, and they go off and use their intuition and creativity to poke and prod at your product.

At the end of the day, they're looking for ways to break in, but who knows what sort of coverage you're getting. Exactly. If they're not testing the smart card login component of your client, they will miss that bug you just mentioned, which sounds like a very serious bug: you just present any certificate and you're in. So this is perhaps a great segue into the product and the research that led you to build Code Intelligence.

So maybe you would like to share with our listeners the philosophy behind Code Intelligence and how the product helps fill that gap? Yeah, so as I said in the beginning, it started off with the research at the University of Bonn, and we were working with some enterprises in Germany. They were doing software development and collaborating with the university, investigating new ways to make software testing more effective and more efficient. We tried to introduce static code analysis, which they already used, and it wasn't bad.

So in the end, it's a really cool thing. You type the first parts into your IDE, and static analysis already gives you feedback inside the IDE. Which is great, but in the end it's kind of missing what the pen test finds. I started to look there, and back then, around 2016 or 2017, grey-box fuzz testing, or coverage-guided fuzz testing, feedback-based fuzz testing, those are the names, was a big thing inside the research community, and in the open source community it was led by Google.

And the idea there was: hey, how about you instrument the source code, and based on the instrumentation you get feedback on what paths were taken inside the application and what values were compared, basically feeding everything back to the testing engine. This way you don't just do random testing, the way fuzz testing started out, just producing random inputs, but you get smarter algorithms running on top, genetic algorithms and stuff like that, basically to predict what might happen, and based on the code coverage measurement, you discover more and more.
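As a rough sketch of that loop, with invented names: real engines such as libFuzzer or Jazzer get edge coverage from compiler or bytecode instrumentation, which runInstrumented stands in for here.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class CoverageGuidedFuzzer {

    // Stand-in for an instrumented target: runs the input and reports
    // which coverage edges were hit.
    interface Target {
        Set<Integer> runInstrumented(byte[] input);
    }

    public static void fuzz(Target target, byte[] seed) {
        Random rng = new Random();
        Set<Integer> globalCoverage = new HashSet<>();
        List<byte[]> corpus = new ArrayList<>();
        corpus.add(seed);

        while (true) {
            // Mutate a corpus entry (real fuzzers use many strategies:
            // bit flips, splices, dictionaries, genetic recombination).
            byte[] input = corpus.get(rng.nextInt(corpus.size())).clone();
            if (input.length > 0) {
                input[rng.nextInt(input.length)] ^= (byte) (1 << rng.nextInt(8));
            }

            // Feedback step: keep the input only if it reached new code.
            if (globalCoverage.addAll(target.runInstrumented(input))) {
                corpus.add(input); // new path found, evolve from this input
            }
        }
    }
}
```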

And it was really interesting, because there were a lot of open source projects using this. In the first phase, before even getting to the company source code, we started with the open source parts, and we had some parsers, JSON parsers and whatnot. It always took two hours to set everything up so it ran with the instrumentation, and then 10 minutes to say, hey, just fuzz the JSON parse method, and then we got our first CVEs. So we were doing exactly what every researcher does: celebrating, reporting issues, filing issues, and stuff like that.
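A fuzz target for a JSON parser in that style can be tiny. This sketch uses Jazzer's fuzzerTestOneInput entry point with Gson as an example parser; it is illustrative, not the exact harness from the research.

```java
import com.code_intelligence.jazzer.api.FuzzedDataProvider;
import com.google.gson.Gson;
import com.google.gson.JsonSyntaxException;

public class JsonParserFuzzer {

    private static final Gson GSON = new Gson();

    public static void fuzzerTestOneInput(FuzzedDataProvider data) {
        String json = data.consumeRemainingAsString();
        try {
            GSON.fromJson(json, Object.class); // the parse method under test
        } catch (JsonSyntaxException expected) {
            // Rejecting invalid JSON is fine; the fuzzer is hunting for
            // crashes, hangs, and unexpected exceptions, which get reported.
        }
    }
}
```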

And then we started to talk to the development teams. And what we experienced is that they got really defensive. They didn't want to use the tools. They found excuses not to use them.

And it was cultural, because we were in that mood, like celebrating that they make mistakes. Like, you're breaking their babies. Yeah, exactly. This is where we started to talk to the management as well.

So we started with interviews. Usable security research is a lot of interviews, user studies, and stuff like that. And then we started with the interviews, asking: why don't you use it? What is important to you?

From what we have seen from the developers' point of view, this is what's important to them: they are using a specific set of tools, specific IDEs. They are using GitHub and maybe some CI/CD pipelines like Jenkins and stuff like that. And for them, it's really important that our tools are compatible with their entire development process and development stack, on the one hand.

But on the other hand, when we started to find issues, it took some time for the issues to get fixed on their side. So we started just making pull requests, fixing the issues silently, basically making a pull request that then got accepted. And all of a sudden, there was less friction with the developers, and at some later point we presented: by the way, we found 10 issues, but they are already fixed. You don't have to take care of anything.

This is how you can find them later. And all of a sudden, we got less resistance. Now you're heroes. You're helping make their baby even more beautiful, right?

Yeah, something like that. But still, there were two types of developers. The ones saying, you're introducing something new, I want to focus on my problem.

And the other ones whose curiosity it raised. Things got more interesting, though, when we started to talk to the management, to the managers of those developers. What they had in their heads was: look, we are moving from pen testing every six months to releasing that specific product 20 times a day. So for us, we cannot do 20 pen tests a day.

And within those six months, a lot of things can happen. And what they focus on is compliance. So basically, they were already fulfilling the legal obligations. But what we also saw is that the managers, because they were former techies who had moved up into management, had a bad feeling about it. They said, hey, this needs to be part of the process.

We need to involve the developers. They already have friction. And yeah, it came down to those two experiences: the managers have to deal with the regulation no matter what you do, and the developers need assistance, help to fix a bug.

We need all the process support, the tooling support. So we said, okay, in the end it's not so much the technique you use to find issues; it's much more about how you get the people involved and get the adoption. That's a cultural problem you're facing now, not a technical problem. It's a cultural and product problem.

This is what motivated us to found Code Intelligence. And you put a strong emphasis on making this a seamless, easy-to-use service that really provides strong value to the customer, which perhaps, you know, is just the engineer building software. And there's always a desire to build the best software possible, right? Given the amount of time and resources that we have.

And it sounds like what's happening with Code Intelligence and your product is that instead of having these really long feedback cycles from a dynamic pen test, where, you know, it's great feedback to have someone poke and prod at your software, but you can't do that 20 times a day if you're going to release 20 times a day. And so anything on the market that can help shorten that feedback loop is going to reduce the overhead and reduce the cost, essentially.

The cost being engineers' time, the cost being the impact on the customer, the cost being the risk exposure the business has if a bug is released and then caught later. There's a feedback loop between when a bug is introduced and when a bug is noticed. And the greater that distance in time, the more expensive it is to fix, because the more places in the software it ends up. It's much easier to fix something in a pull request sitting on a separate branch.

It's not even in the main branch yet, versus fixing something after it's been released to a hundred thousand users. Yeah, exactly. It's just math. So it sounds like the Code Intelligence product shortens that feedback loop, bringing the opportunity back into the engineer's view.

Because it always goes back to an engineer. If you want to change a piece of software, you need an engineer's help. They're the ones who control what the software does at the end of the day. Exactly.

Exactly. And we are also collaborating with the University of Bonn. There are some researchers and some studies basically suggesting that if you address the developers, they know best what kind of interfaces they have built, and they know best whether an issue that comes up is a real issue or not.

Oh, definitely. Oh, they know instantly. Yes. Not always, though, but they still get defensive.

And I can understand that, because let's say something is in the gray area. Obviously, every developer admits it when it's a clear hack: hey, there is a buffer overflow or a command line injection and you can exploit it right away. But most of the discussions and friction we get with our software concern something in between.

So what if you get a denial of service? Is a denial of service important enough or not? Those are smaller discussions, but we try to give the developer the opportunity to triage a bug themselves. And if they say, I don't want to see it again, for whatever reason, it will show up on a dashboard for the managers, so they know it was dismissed by the developers.

And then they will talk about it. But it's not, hey, our software sets the policy, you fix that right away. Although there are limitations.

Some denial-of-service issues do create problems: if you don't fix that bug, we cannot continue security testing, because the fuzzer keeps crashing the entire software and you cannot continue feeding inputs. Those kinds of issues have to be fixed. We haven't found a good way around that yet, but this is what we are working on right now.

Nice. No, I mean, this sounds like the right approach: empowering teams to make business decisions around what needs to be fixed first and what can be fixed later. And of course, you always have to tie that back into a business need, because at the end of the day our time is limited, our resources are limited, and businesses have to be profitable. Otherwise, if you're not doing that, I'm not sure what you're building, but maybe not a business.

So I sympathize with the need to tie it into a business direction, a business push. But you've noticed this gap in the market. You've noticed the opportunity.

You poured your heart and your soul, your blood, sweat, and tears into solving this really difficult technical problem. And now you have a company, almost by accident, it sounds. And that's a different beast entirely. So maybe you could share with our listeners the best day you've had on your journey so far at Code Intelligence.

I can think of one situation, and this is the engineer inside of me being very proud. In the early days of Code Intelligence, there was an RFP from an automotive enterprise that wanted to introduce tools for security testing. And we applied for that RFP. What they did in the RFP was invite the tool vendors to a hackathon.

So basically, for one day, come on-site, work on their computers, work with the software, and basically attack the automotive controller.

And this is what came out later, but basically the thing is: a lot of vendors were there, and in the end, we won that specific RFP. So we were the ones who got used, and we know that most fuzz-testing tools focus on black-box testing and whatnot. And yeah, it was a really interesting experience, because it even got to, hey, the other tools didn't find anything yet. What about you?

And then we started with the hackathon. We kicked off in the morning. Our product was very early stage, and we didn't have an air-gapped installer back then. So basically, we had some dependencies, Docker images, which needed to be pulled.

And in that corporate environment, you know, the proxies. Oh, yeah. They will stop that traffic so hard and so fast. Yeah, and it didn't work with that proxy.

So we were hacking a lot, and it took four hours to get around that corporate proxy. And then we really started to sweat: hey, four hours left and we haven't even started. But we got everything running, and my co-founder Rafal was really relaxed, sitting at the keyboard working on their system. And then we started to attack.

So there was the IP network, there was the CAN bus, and whatnot. We started and found a couple of bugs here and there, and then it was just bugs popping up one after another. And this was a really good experience, because in the end, they had put some things in accidentally and some things on purpose, and from what we found, it was a great performance. This is how we got into that company.

And from the engineer's perspective, it was really cool, because there were so many vendors there, it was our first RFP, you have no idea how to do the business side, and we won it just by producing good results. On the merit of the results and the product that produced those results. Yeah, it was a proud moment, but I don't know what the other side thinks.

Maybe we were cheaper than the others, but it doesn't matter. Okay, so how about the flip side of that? Is there a worst day you've had at Code Intelligence? Is there something that scared you, perhaps?

It's always a journey, right? Yes, there was one point at Code Intelligence when we really got scared. Imagine: fresh out of the university, having the contacts in the enterprises and basically helping the enterprises deal with software testing. And back then it was like, hey, we were purely on-premise because of the enterprises, and we still wanted to validate our product.

And then COVID hit. In the beginning, it was like, okay, let's see how it goes. But three weeks after the lockdown, all our purchasing processes were delayed. So there was no cash flow coming in anymore.

Oh no. The prospect companies had to adjust, and we were in the middle of a financing round at the same time. So there was a lot of uncertainty. And I remember that moment: we sat together thinking, how do we adjust?

So how did you adjust in the end? Yes, we made the financing round despite everything that was happening. And from what we saw a few weeks later, things turned out better than expected, because the companies switched to working remotely and started to introduce more cloud services, which made SaaS possible. Even though SaaS had been a deal breaker back then, it started to work out: hey, if we deploy it in our Azure, our AWS, we can work with that.

This is where things started flowing again. Nice. And for our listeners who do not get to see our video feed, there is a poster behind you. I just want to mention it.

It says: if things are not failing, you are not innovating enough. And I just love that quote. So, you know, you showed resilience in a challenging moment. It's always easy to lead when things are going peachy, everyone is happy, and the money is flowing. It's when things get tough that you have to dig deep and find that grit to get through those moments.

That's where leadership really shines. I think the poster behind you also suggests and resonates with the idea that challenging moments are the best moments. Yes, I mean, looking back in hindsight, it's always, hey, why did I worry that much? But when you are in that situation, and you already have a little team and some kind of responsibility for that team, it's, what do we do next?

And everything I do inside this company, it's the first company I founded, and everything I do, I'm doing for the very first time. Each level takes it to the next level, which is super exciting, but it's one big roller coaster ride. Yeah, I mean, if it wasn't, I would be a little bit worried. So how big was your team when that happened?

You mentioned your team and you felt it. Yeah, we were already 12 people. 12 people. Yeah.

And to build a business means being able to provide for those folks, you know, so I understand completely and admire the resilience. If you look forward into the future and see a very successful Code Intelligence, could you describe for us what that looks like? Maybe we will become the standard security testing platform. What we are focusing on is exactly this coverage-guided testing, where during runtime you get everything from the source code and from instrumentation.

And this is not implemented in most cases. A lot of open source projects, especially those written in C and C++, are already using it. But if you look at the businesses, it's still missing. And we see that we will drive that: hey, there is a new technology, like about 20 years back when static analysis started to become the standard for sign-off. The same will happen for, let's say, white-box testing on the dynamic analysis side.

Gartner also has a term for it: interactive application security testing. So basically the two-in-one solution that will become the standard tool. Amazing. So you've mentioned open source a couple of times, and the open source community, open source projects, have pretty good traction here.

And you open source a lot of your software as well, such as your fuzzing solution for JVM-based languages, right? So can you share with us the strategy behind sharing so much of your technology and open sourcing it? Yes, it's basically about giving easier access to the new testing solutions that are out there. While most of the academic world focused on C and C++ or on binary fuzzing, which basically means still achieving all those results without having the source code.

No one actually focused on Java or, let's say, the memory-safe languages. So they were neglected. And if you look at how much software is written right now, languages like Java and JavaScript are used far more than C and C++. Obviously, there you don't deal with buffer overflows and stuff like that; rather, most Java and JavaScript services are something like web backends, so REST APIs, GraphQL APIs, gRPC, and whatnot. And this is where we started: okay, this is working quite well in C++.

How about we apply the same techniques to what the business needs? And then you also have Java, where even though it's a memory-safe language, you had remote code execution like in Log4j and one for...

Yeah, you found the X11 stuff that was there in '19... Yeah, lots of it.

Yeah, exactly. And we found CVEs in exactly similar things. We even onboarded Log4j and showed that the Log4j issue would have been found with fuzzing, this way and that way. And now we are collaborating with Google to onboard more and more open source projects for fuzzing.

But basically, in the first phase, you focus on the developers, the techies, the open source community using all those techniques. We have found a lot of security issues already, and there will be some announcements soon, where we publish some numbers and also how to participate. And just recently, we published a JavaScript fuzzer, basically the same thing, but for JavaScript, focusing mainly on Node.js. So it's not for the front-end. Some front-end stuff can be tested with it, but we are focusing on the back-end side, basically Node.js applications, because what we see is that in JavaScript you often have more new developers with less experience.

And this is where we see JavaScript with Node.js as one of the fastest-growing languages out there. So basically, this is what we open source. And it can find typical issues like remote code execution.

Something similar to Log4j, or XML external entities, or the command line injections I talked about in the beginning, which were happening in the enterprise. That way we help secure the open source landscape. I imagine you get a lot of feedback from that open source community. And with a zero-barrier-to-entry open source project, you're inviting folks to realize the value you're offering instantly. And in exchange for that, it sounds like the strategy is product-market fit.

You get feedback loops from real engineers trying to solve real problems, exactly what a company needs to bootstrap into a space. So even though all of this great feedback is super valuable, at some point you do have to earn money. So what's the plan to earn money at Code Intelligence, even with all of this great open source community, the feedback loops, and the projects out there? Yeah, that's a good question.

We are basically focusing on enterprise features. A lot of what you have in open source: hey, they have a Jenkins running or a GitHub pipeline, you basically know how to build the stuff, and everything works out well. If you look at the enterprise, though, you have large teams collaborating.

So one of the main requested features from enterprises is, hey, can you give us a Jira integration, stuff like that. With the enterprises, it's all about, hey, we want to get a specific report. No open source project will ask you for compliance reports or something like that. So this isn't happening there.

So basically, in the enterprise it's: hey, can you do it with our Azure cloud? Can you do it with our CI integration, with our customized Jenkins? Can you export your findings to some security management tool, to Jira, to whatever. So this is the one part.

And the second difference between open source and enterprise is that in open source, you usually provide libraries. You have a library to parse JSON. You have a library, a web framework, where you don't really have the entire application, but basically a library that developers build on via the API the library provides. But the enterprise apps, like from SaaS vendors, they have a web application running.

So they have a web API based on REST, they have, let's say, all those kinds of interfaces. And what we don't open source, what we keep closed source, but built in a similar white-box testing manner, is basically those use cases which are less common in open source but common for the enterprises. And if you combine the two, you have a setup where open source projects use everything for free, and at the same time we focus on the enterprise issues, for enterprises whose software uses the same open source components we are testing for free.

Nice. So tell me a little bit about how the product is structured if I'm an enterprise customer. Do you have a SaaS service? Is it multi-tenant SaaS, or single-tenant SaaS, or do you have an on-prem deployment where I have to stand up and run the entire technology stack in my own house and maintain it? How does that typically work if I'm somewhat security-conscious but not totally paranoid, like a large financial institution?

How would you sell that to me? Yeah, so SaaS is obviously our focus. Ideally, our customer is using GitHub online.

They want additional deployments, so they already have a running pipeline. It really only makes sense for companies who already have a good CI/CD process, where there is a testing process before something is deployed to production. We are just another step inside that. Obviously, with the large enterprises, we have an on-premise solution, but on-premise sounds like, hey, you are shipping out software and everything is very old.

But in the end, what we call on-premise is that we deploy it inside the company's Azure or AWS or Google cloud. So it's not like we have to support several operating systems. It's not like that; it's basically focusing on, let's say, on-premise cloud or private cloud. So those are the two main parts, with SaaS obviously being the focus.

Yeah, no, I appreciate the share. I'm just curious. It's always an interesting question, a security question too: whether to run things in the cloud or on-prem, but then you have operational overhead if you go on-prem.

But it sounds like you're taking care to make sure that the operational overhead doesn't introduce too much friction, either on your side, the Code Intelligence side, on the support side, or on the customer side. So it goes back to the usability of software. And you've shared your passion for usable software, for easy solutions to these tough problems. So thank you.

Thinking about Code Intelligence: as you grow, you have to bring on more and more team members, folks who are experts in areas that complement the existing team. Let's talk about interviews for a second. Do you have a favorite interview question, or what do you look for in an interview?

So what I like to do, it's not an interview question per se, where I ask one specific question. What I often do is take some potential case that could happen in the future, I tell the scenario, and then I ask the candidates: what are you afraid of, what would you do, what could go wrong, and stuff like that. What I've experienced is that people answer from their past instead of the future. Basically, they take the future scenario and transfer onto it what they have experienced so far. And sometimes, if you asked directly, you would never get that honest, unfiltered feedback.

They start opening up there, for whatever reason, more than if you asked them to look at the past, and they are more detached from the situation and try to analyze it. And it's interesting to see the thought process of a lot of candidates. You really see that some people made mistakes in the past, analyzed them, and know exactly what they would do if things went differently. And for me, it's really interesting to see that thought process.

Amazing. And so what do you look for when you see the thought process? Do you look for risk takers? Do you look for people that are more cautious or more analytical?

Or what's the sweet spot for you? The sweet spot for me, well, it depends on the position. If you're hiring someone for the legal team, it probably shouldn't be a risk taker. A risk taker running your legal, right?

Yeah, yeah. So it always depends on which position you're hiring for. It probably shouldn't be the most risk-taking person there, but in other areas, it's a good trait. So it really depends.

But what I'm looking for is the analytical process. Hey, I made a mistake, how do I deal with that? Did I adjust my processes, the way I interact with people?

And basically, how do I evaluate what happened? Do I blame others, or do I blame myself and basically take ownership of the problem? Yeah. Exactly.

And this is something, I would say, that is not only difficult for startups; it is always challenging regardless of the size of your company or the maturity of your operations. It's a human problem. Anywhere we go, there are going to be people, and you're going to have to interact with them and work well together at the end of the day.

Exactly. So this is like a little assessment, and it's not too hard, but it's fascinating to see how people reflect on it. And yeah, those kinds of questions focus on the future, but what you really hear is the candidates' thoughts, which is cool to see. That's awesome.

I've never heard of that trick before: to ask about the future, but then read it in the light of how we use our experiences to filter and color how we see the future. The candidates even say, hey, I had this experience in the past, so they bring it up themselves. That's what you mean by opening up, yeah. Yeah, it's not like I'm tricking them into something.

It's more that you see them opening up themselves. And that's exactly, I think, what everyone wants. I think we want to feel like the door is open to be ourselves, to be our authentic selves, to share that, and to find a good fit where those experiences can be valuable. So if we fast-forward into the future again: is there any service or tool or challenge that you wish someone would just go off and solve already, that maybe has nothing to do with Code Intelligence?

I haven't found a good tool for streamlining a lot of different channels. A startup grows organically, you iterate, and then you start using something like Slack, which some customers use too. You have email, you have Confluence, you have Jira or Notion, whatever software your company uses. Or you have meeting notes, and at some point you want to know, oh, we had this issue, or we decided to solve it this way, and then you don't remember: was it in a Google Doc?

Was it Slack or Teams or whatnot? And then to be able to search through all that information. This is something where it would be great to see some tools. I have tried a few.

I wasn't happy with them yet, but this would be something where I think, yeah, there's opportunity. So you heard it, all of you founders listening for that next big idea. If you're listening and you want to go off and build something that has a real market, you could build this, and Sergej will go off and use it.

Yeah, most likely. Or maybe there is something out there and I just haven't discovered it yet. I'm grateful for any tips. Maybe, John, you know something.

I have no particular solution for this problem, and I've bumped into exactly the same problem myself. Oftentimes this is where a rigid culture in a company pays off. As much as I don't like rigidity, or dogmatic views about how to operate or how to communicate, if you are in a company where everyone uses one tool to communicate, it actually becomes pretty straightforward where to go look for things.

But then that assumes you're only communicating internally, and you bump into the same problem if you're a salesperson, or if you have any external-facing role. Oh, how did I communicate with that person? You know, was it that email? Was it, you know, an external Slack thing?

Or was it something else? So, you know, I sympathize. I'm also part of this market. So if someone goes off and builds something great, or knows of something, and would like to ping me or Sergej or both of us, we would both be most grateful for any solutions here. But I love looking into the future.

And one of the great things about being a founder is that there's no limit to the imagination or the creativity. And so I always love to ask founders and co-founders of companies: what is your vision for the future? What about that vision gets you the most excited? Yeah, so with the vision, I think there have been huge advancements in IT security.

Just looking at the past 20 years, how much more protected everything became and how much better everything is. Not everything can be hacked. So things have already improved a lot, but we are still not at the point where, hey, I can just open a file and be kind of secure from an end-user perspective. Obviously, if you are an expert Linux user on specific systems, I think that part is already worked out.

For laypeople, it would be more difficult. So if I'm looking at the future vision of Code Intelligence, I see us supporting the part where you can trust your products, rely on the products you use every day, because they received a lot of testing: not just checkbox security but, as you call it, real security. There you go. Right now we are focusing just on applications, but in the vision where I see the future, where other companies might come in, there is also operations, how you configure the access rights, and maybe application security and operations security will get tied together at some point.

But for now, I see it this way: we focus on applications, basically, hey, we will improve the products that are shipped, so we'll help the developers build secure products, and others can contribute to the other aspects of security in our lives. Thank you so much, Sergej. This has been an absolute pleasure. I think I speak for all of the listeners when I express gratitude for your sharing a small piece of the very long journey that has led you to Code Intelligence, building a company from the ground up with all of your great insights.

Would you like to leave our listeners with any words of wisdom? Yeah, as final thoughts: I often get asked what it is we want to achieve with our customers. And I always say, hey, you shouldn't release any software with a bug that could have been found by fuzz testing. So this is the short-term goal, something we will definitely achieve.

Can we go to the Code Intelligence GitHub page, just Google it, and try out your product? Oh, sure. If you search for Code Intelligence testing, you'll find three open source projects. There is Jazzer.

It's for the Java Virtual Machine, so basically every language compiled to the JVM, and the libraries there, can be fuzzed. Then you can download the CLI, which is basically, hey, I want to streamline the process and just run it.

That one is called cifuzz. And the third project is Jazzer.js. It's the JavaScript fuzzer.

And from there, if you are interested in the more enterprise use cases, just take a product tour on code-intelligence.com. About two weeks ago, we open sourced Jazzer.js, the first effective JavaScript fuzzing engine, allowing companies to use the recent advancements in fuzz testing we just talked about for their Node.js backends. And JavaScript sits in more and more performance-sensitive places in backends nowadays. And next week, we are going to present cifuzz.

It's an open source interface for command lines and IDEs which makes fuzz testing as easy as unit testing. It basically integrates all the languages we support into one solution across the IDEs, so that developers can keep using it the way they know from unit testing. They just click play in their IDE, and it will run all the regressions and show the security issues, if any come up.
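To illustrate that unit-test feel, here is a minimal sketch using Jazzer's JUnit 5 integration, which cifuzz builds on; parsePort is a hypothetical function under test, inlined to keep the example self-contained.

```java
import com.code_intelligence.jazzer.api.FuzzedDataProvider;
import com.code_intelligence.jazzer.junit.FuzzTest;

class PortParserFuzzTest {

    // Hypothetical code under test.
    static int parsePort(String s) {
        int port = Integer.parseInt(s.trim());
        if (port < 0 || port > 65535) {
            throw new IllegalArgumentException("port out of range");
        }
        return port;
    }

    // Runs from the IDE like a unit test; the engine feeds generated inputs.
    @FuzzTest
    void portParserHandlesArbitraryInput(FuzzedDataProvider data) {
        String input = data.consumeRemainingAsString();
        try {
            parsePort(input);
        } catch (IllegalArgumentException expected) {
            // Rejecting malformed input is fine; crashes, hangs, and other
            // unexpected exceptions are the findings.
        }
    }
}
```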

I'll definitely be using it. Sergej, thank you again. Yeah, thank you. Thank you to all of our listeners for tuning in to the Security Podcast of Silicon Valley, and stay tuned for another episode.

Thanks, everyone. Thank you, John, for the invitation and thank you for listening.