20. Benoit Chevallier-Mames, Team Lead of Machine Learning at Zama, On Open Source Homomorphic Encryption

Welcome, everyone, to the Security Podcast of Silicon Valley. I am here today with a very special guest, Benoit Chevallier-Mames. Benoit, thank you so much. It is great to have you on the show and to see you again.

Yes, it's really my pleasure. It has been some time since we've been together, so it's really a pleasure to be here. To share with our listeners a little bit of background on Benoit: he actually has a PhD in cryptography.

He has built his entire career around being a cryptographer. He's worked for amazing companies throughout his career, including Gemplus and Apple. You were at Apple for 12 years. We were actually both at Apple.

And now you're at Zama, sort of leading the charge down the road towards fully homomorphic encryption, privacy-preserving AI and machine learning. Welcome to the show, Benoit. A pleasure. So usually when people hear Apple, they get excited.

So would you like to share with us, our listeners, a little bit about what you were up to in your time at Apple as a cryptographer? Yes, sure. So, yes, we start with, you know, a tricky question, but it's my pleasure to answer with what I can. You know, you've been at Apple, so you know that it's really secretive, let's say.

So I'm not going to give your listeners any secret details, but more or less, we worked together in the same team, the DRM team, the Digital Rights Management team, for like five years. And there we were working on protecting content, like, you know, audio, video, things like that, with various techniques like white-box cryptography or obfuscation. Yeah, those were the good old days, weren't they, Benoit? I was there from, what, 2009 to 2014, and we overlapped for my entire stint there, but you continued and were there for a total of like 12 years.

Yes, 12 years, and most of them were amazing years. You know, I really love Apple as a company. They make very nice products, and the company is also very, very well organized. And the people, I mean, the people they hire, and they keep those people, most of them are really amazing, both in terms of, you know, personal qualities and technical skills. So I really learned a lot of things there, about organization and about technical stuff.

And yeah, it was really a pleasant moment in my career. I couldn't agree more. It was an amazing place. For me, that was my first real, you know, career job after graduate school.

And I couldn't agree more on every single point. So you mentioned protecting content, audio, video; just to loop in our listeners, all of that stuff was for iTunes. The software that Benoit worked on, that I worked on, that we worked on together, ended up in the iPhone, the iPad, the iPod, the Apple TV, the entire ecosystem, right?

Yes, and if we want to, you know, brag a bit about what we've done, we can say that we actually have billions of users of what we built there. And most of them use what we've done without even noticing that we made it or that they are using our stuff. So it's really amazing. Absolutely.

So, it sounds like it was great at Apple. Do tell: why did you leave? So, yes, it was really great.

But, you know, in my mind, I always had the idea of maybe starting a startup at some point. And I knew that at some moment I would have to jump and go on the adventure. And then, at that moment, there were people in Paris starting a company, people that I know closely. The CTO, I mean, I have known him for like 20 years.

And it was also in the field that I know quite well, I mean cryptography, and in particular fully homomorphic encryption, which we are certainly going to go into in detail. And so it was the occasion, now or never. So I had to go. So I wouldn't say I left Apple.

I didn't leave Apple; I joined Zama. You couldn't say no. You were pulled into it.

It was that moment or never. Yes, and, okay, we are going to explain it, but I don't regret it. I mean, it's really a nice adventure. Of course, there were great things at Apple too, and, maybe you said the same thing, I don't say that I will never go back there, because it's a very nice company, but I had to try this once.

What I always tell people about Apple is they make it incredibly nice to work there. It was, like you said, the people were spectacular. It's engineering driven. You get to work on things that have billions of users out there.

Yeah, yeah, yes. And it's so well done. I mean, well done. And when I left Apple, it was a very similar pull, you know.

And I, in particular, was looking for something to really push the envelope, to explore an unsolved problem. That's how I ended up at Symphony Communications, but it sounds like you were also pulled into an open problem with this fully homomorphic encryption. So can I say a few words about what homomorphic encryption is? Please.

So what is homomorphic encryption? What problem does it solve? So, just to say a few words, I'm going to use rough concepts, not going into the deep details. Roughly, homomorphic encryption is a kind of grand primitive that we have dreamed of in the cryptography research field for, I don't know, 40 years maybe.

So it is a primitive that we imagined, but it was so crazy that people thought it could not be possible, until someone did it. Normally, an encryption scheme is non-malleable: you can't modify the ciphertext in any meaningful way. So when you encrypt something, you can't change it without destroying the encryption.

Yes, if you change a bit of the ciphertext, you modify the plaintext in a way that you have no idea of. And telling all of our listeners: please, please, please use some sort of authentication on top of encryption, so you can detect when someone tampers with your ciphertext. Yes, because depending on the kind of encryption that you use, you can see that the ciphertext was modified. In FHE, it's completely different.

By design, you make a scheme so that it's possible to work with ciphertexts, which means it's possible to do operations on ciphertexts in a meaningful way. And it's really a great primitive, because more or less, what you can do is: you can be on your device, a device that you trust, and encrypt things. Then you can send them to a server that you don't need to trust. And there, this server can do whatever computation it wants.

We are going to explain what I mean, maybe with machine learning, but they can do any computation they want using just public material, so nothing related to the secret key, and without being able to know what they are manipulating. At the end, the server returns the encrypted result to you, and only you can get the final result. So really, for 40 years people tried to find a solution. And just recently, in 2009, an American researcher called Craig Gentry was able to find one.
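
To give a feel for what "computing on ciphertexts" means, here is a toy sketch in Python. To be clear, this is not FHE (FHE schemes like Gentry's support arbitrary computations and are far more involved): textbook RSA just happens to be homomorphic for one operation, multiplication, which is enough to illustrate a server operating on data it cannot read. The parameters are deliberately tiny and insecure; this is illustration only.

```python
# Toy demonstration of a homomorphic property: textbook RSA is
# multiplicatively homomorphic. This is NOT FHE and NOT secure
# (tiny primes, no padding); it only illustrates "computing on
# ciphertexts" using public material alone.

# Key generation (client side): small primes for readability.
p, q = 61, 53
n = p * q                      # public modulus
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

def encrypt(m):
    """Encrypt with only the public key (n, e)."""
    return pow(m, e, n)

def decrypt(c):
    """Decrypt with the private key d."""
    return pow(c, d, n)

# Client encrypts two values and sends only the ciphertexts.
c1, c2 = encrypt(6), encrypt(7)

# Server multiplies the ciphertexts without ever seeing 6 or 7.
c_prod = (c1 * c2) % n

# Client decrypts: Enc(6) * Enc(7) decrypts to 6 * 7 = 42.
assert decrypt(c_prod) == 42
print(decrypt(c_prod))  # 42
```

The server touches only public material and ciphertexts, yet the client decrypts a meaningful result; FHE generalizes this idea from a single multiplication to arbitrary computations.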

And then several new generations of FHE schemes were proposed, which are better and better. Amazing. I think that was really good. But at the end of the day, most people do not understand this stuff.

So thank you for sharing, Benoit, and for dumbing it down a little bit for those of us who are not cryptographers; the context is really all that matters. From a security perspective, with this new tool called fully homomorphic encryption, we're giving entrepreneurs and companies and people who build software a new tool, and they can use it to protect customer data, to protect confidentiality: for example, your PII, your personally identifiable information, or sensitive data, or perhaps a person's diary. All of the data that normally the company wants to own, tries to monetize, and that can be sold.

But if it's encrypted and you're the only one with the private key, we now enable people to build third-party SaaS services that can perform meaningful, valuable operations for me as a user, or for you as a user of these third-party SaaS services, without having to share the actual data. And I think that is it. I couldn't agree more that this is the holy grail. Yeah.

You remember the advertisement that Apple had, I think it was in Las Vegas at the time, or maybe it's still a current ad, which was: what happens on your iPhone stays on your iPhone. So by choice, because they care about privacy, Apple sometimes decided to keep things internal to the iPhone, because the data is so private that they don't want the computation to happen even on their own servers. With FHE, they can do it differently. They can encrypt things on the iPhone.

Then they can send it to any untrusted server, and there are no worries about it. The computations are going to be done in a way that the server doesn't see, right? Then at the end, the result comes back to the iPhone.

So it avoids having limits just because of privacy. You can have privacy now, but still do the big things that you wanted to do on servers. Yeah, so tell me, share with us: what's an example of some of the big things you could do on servers that today require you to share very sensitive information with some third party? Yeah, so, for example, imagine you want to test whether or not you have some disease, but for this test, you need to send your DNA.

So you are going to take your DNA and send it to a server somewhere. You don't know this company. You don't know if they are going to keep your records. You don't know if they are going to sell them to other companies for statistics or whatever.

So that's for the input. And then they are going to do the test and send you the results. But what are they going to do with the result itself? Are they going to share it with your employer, with your bank, with your insurance, with whoever?

You don't know. And, you know, there are tons of people who want to know if they have a disease, so yes, I understand. But at the same time, they agree to share their very private information.

And once your DNA is on the web, there is no way to erase it. You can't, I mean, update your DNA the way you update your password. So it's really a compelling case. And what would FHE do here?

You would just encrypt your DNA on, you know, a trusted device. And then the server, the third party that runs this health check, would do the same test, but in FHE. So they would not have access to your DNA. They can't share it.

They can't do anything with it. Then at the end, they have the encrypted result, which means they don't even know if you have the disease. Right. And they can't share that status.

So it's really, it's really powerful. It's super powerful. Yeah. So could they do that with Zama today?

Could 23andMe and Ancestry.com? Zama was really founded with this perspective. So it was founded by, you know, people I know.

So Pascal Paillier, who is the CTO, whom I have known for like 20 years because we were working together back then. This is your good friend. This is, yeah. Yeah, I would love to hear how you bumped into Pascal and how you became friends.

Yeah, I mean, we started together at Gemplus, 22 years ago. And he is, I mean, okay, he's a very good friend. He's also a very, very well-known cryptographer. I mean, he's one of the main specialists in homomorphic encryption.

So really, in terms of science, he is one of the big names. And then the second founder is our CEO, Rand Hindi. Rand is an AI scientist and a serial entrepreneur.

And notably, he built a previous company just before Zama, which was Snips, which was acquired. Which was also one of the reasons it was less risky for me to jump to Zama, because I knew that he knew what he was doing. Yeah, that's actually an amazing setup, because you had the trust, you trusted the team. Yes.

You knew them for 20 years. You've worked with them before. They're like...

I don't know if it was the best configuration, but it was now or never. You just walked right into this amazing opportunity, Benoit. Yes.

I'm happy about it. Yes, thanks. And so, yes, Zama: we are mainly in Paris, because the company was founded by Parisians, but we have people from everywhere. Just to give you an example, we have someone in Brazil, so we're starting to be a bit everywhere on the planet.

We are like 50 people, half of whom are, you know, PhDs. So research is in the DNA of the company. Our CTO and CEO have PhDs. Yeah, half of the people are PhDs.

Most of them are PhDs in crypto. Yep. And yes, to finish with Zama: we raised some money, 50 million, from investors. So, I don't know if you need that, but if you go to, you know, Crunchbase, you will see the...

We'll see. We'll check it out.

Yep. Is that the, which series is that? So yes, we have, so we have done a Series A. A Series A, 50 million.

In total. The seed plus the Series A is 50. That's incredible.

You've got some strong backing. You've got some runway. This is definitely a deep tech endeavor. Yes.

And as you know, they know that. So it's a big investment, because FHE is great, but everything is still to be built, you know. Right. And it's a big thing. It doesn't come for free. But I think the investors are great, because they know that it takes some time and, you know, some trial and error sometimes to find the right fit for the product.

Product-market fit. That's going to be the challenging piece, but you also face the extra challenge of exploring this new technology, which, as far as I know, is not widely adopted anywhere yet. Not widely adopted, and it's not even widely, let's say, designed.

It's still a field of research. We are still looking for ways to make things faster, or for the best ways to use them. That's one of the reasons we have so many PhDs in the company. Because there are a lot of open problems.

So would you like to share with our listeners some of the open challenges that you're still grappling with? Yes, with pleasure. Maybe I could start by telling you what my team is doing.

Yeah. Do you run the team, are you the manager of the team? Yes. So now I am running a team, the machine learning team of the company, because we have seen that in the company there are, for FHE, let's say, two kinds of applications.

There are, let's say, cloud applications: things that you want to run on an untrusted server, like the example of the DNA test. The third party, yep. Machine learning is one application of that kind.

And then you have blockchain applications. So there is another division starting at Zama that wants to do things on the blockchain, but in a private way. You will not see the clear values that are used on the blockchain. So yeah.

So in terms of teams, that's what I'm doing: I'm running the machine learning team, where I'm trying to make a product called Concrete ML, which is a tool meant to be used by data scientists without, obviously, needing to know the cryptography. So, you know, back to Apple once again: we try to make things that just work. You don't need to know how it works.

Right. It just works. There are not a lot of things to set up or to understand. So you don't need to know the cryptography, you don't need to be a PhD in crypto, it will just work.

And it's really one of the goals of the company to make things that are very simple. Then, in terms of challenges: today we are running machine learning algorithms that are limited to 8 bits. Actually, it's a bit funny: it's somehow where we were 40 years ago with, you know, regular computers.

40 years ago, computers were very limited in terms of precision, and now with FHE, more or less, the story repeats itself. You are back to having computation in 8 bits, but here it's 8 bits because you have, you know, the privacy. So the challenge is to run machine learning algorithms with such small values. Then another constraint is the relatively slow speed of FHE, for now.

For now, many things are done on, you know, CPUs, which means your computations are quite slow compared to their cleartext equivalents. But we are working, internally and with third-party companies, on, let's say, hardware accelerators. If you go on the Zama blog, for example, you will find a post about another company called Optalysys, and that's crazy. I'm not a specialist, but they are doing fast computations thanks to optics.

Optics, like fiber and... Yes, so don't ask me how it works, but thanks to that kind of thing, and with several companies trying to make accelerators for FHE, we are going to have FHE get faster and faster. By 2025, at Zama, we expect that FHE will be sufficiently fast, I mean, compared to clear computations.

Yes, yes. And this may be the one big challenge: you know, nothing exists yet, so we are making things from scratch. Rand, for example, likes to compare our status with where machine learning was 15 or 20 years ago.

Machine learning was just starting, and people had to do everything by themselves somehow. And now, you know, you are a student, you download Keras, and 15 minutes later, you have done your training. So we hope to build the same kind of tools, so that it becomes easier and easier for people. Easier and easier, and more and more secure.

I love it. And security, we never trade it away. Anything we do at Zama is secure: it is at least 128-bit security, which is, you know, the security level of AES.

So it's really, I mean, very good security. The only trade-off we make is sometimes about speed, because for now it's slow, or about, you know, using a lot of RAM, things like that, but security is never traded. So, you mentioned the 8 bits. I am absolutely no ML expert. I have dabbled and played with things, but I am not.

I have never deployed production ML. So help us understand, like, 8 bits. I know that 2 raised to the power of 8 is 256, just like probably all engineers in this world. You know, and they're kind of like probably nodding their head right now.

Does that mean that the ML results are limited to 256 different classification categories, or can you tease us a little bit about what it means? Yes, so, as you said, there are many details here. I know a bit more than you, but there are people in my team who are even more specialist than I am. What we are doing in the team is using what is called quantization, which is, let's say, a subfield of machine learning, normally used to compress networks.

When neural networks become bigger and bigger, and you have to send them over the air, they're too large. So companies try to make networks that use just integers, which are like 8 bits, so eight times smaller. Normally, quantization is used for this.

In our case, we use quantization to go from 64-bit floats to 8-bit integers. And we use either what is called post-training quantization, where you do your training and then you quantize, or something even more fascinating, quantization-aware training, where you tell your training that it is going to be quantized, so it selects the best weights such that the result after quantization is even better. In terms of results, it depends on the kind of model; I'm not going to tell you that it works 100%. For tree-based models, which are a very widely used kind of machine learning model, quantization works very well.
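
The post-training variant described above can be sketched in a few lines of plain Python. This is a simplified uniform quantizer for illustration, not Concrete ML's actual implementation (real pipelines also handle calibration, per-channel scales, zero-points, and so on):

```python
# Minimal sketch of post-training uniform quantization to 8-bit
# integers: map a range of floats onto the integers 0..255, then
# recover approximate floats from the integers plus (scale, offset).

def quantize(values, n_bits=8):
    """Map floats onto [0, 2**n_bits - 1] integers; return (ints, scale, offset)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2**n_bits - 1)
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Approximate the original floats from the quantized integers."""
    return [x * scale + lo for x in q]

weights = [-1.5, -0.25, 0.1, 0.7, 1.5]
q, scale, lo = quantize(weights)

# All quantized values fit in 8 bits...
assert all(0 <= x <= 255 for x in q)

# ...and dequantizing recovers each weight to within half a step.
recovered = dequantize(q, scale, lo)
assert all(abs(a - b) <= scale / 2 for a, b in zip(weights, recovered))
```

Quantization-aware training goes one step further: the rounding above is simulated during training itself, so the optimizer picks weights that survive it well.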

It's very smooth for the user, and if you compare the FHE and the clear results, the accuracies are very, very close. Then, on the other side of the spectrum, you have neural networks, where for now the 8-bit limit is challenging. It's going to get better with further releases, thanks to the fact that we will use quantization-aware training. And also, part of the work at Zama is to increase this 8 bits.

The 8-bit limit is not the end. We are working on it; it's a first step. It's the beginning.

I think it will be, you know, like the story with computers. Yes, yes.

Now it's 8 bits; at 16 bits, it will be larger.

Very nice. And so we're taking all of this fully homomorphic encryption: we can encrypt our private data, and we can hand it off to an 8-bit computer vision or AI model that's already been trained, and it can now work on our encrypted data instead of on the plaintext.

And I don't have to coordinate my key that I use with anyone else that's also using that same service. So it's completely independent. Absolutely. Your key remains on the device.

You just have to send the equivalent of the public key to the server. Ah, so it needs my public key. Okay. Yeah.

The compiled model can be used for more than one user.

They just have to change the public part depending on who the user is. No, that's incredible. And I love the focus. One of the greatest tricks that entrepreneurs use is focus, right?

So what do I mean by that? In this case, it really feels like you're taking an open problem, fully homomorphic encryption, and giving it focus in this very specific application of AI. Yes. Machine learning is exactly a kind of focus within what we call cloud computation.

So later, as the company succeeds, we are going to extend what we do in terms of cloud computation. And also, as I told you the other day, the other division is working on blockchain. On the blockchain stuff. Yeah, yeah.

So we have this... So you're already starting your second product.

Yeah, that's great. And you're going to use the same core technology for it. Well, the core is actually our first product. We see that in the company we are going to have three lines of products.

The Concrete library, which is the core, can be used directly in Rust, because, you know, we have chosen Rust. Oh, wait a second. You wrote all of this in Rust? So the Concrete library is completely written in Rust.

And by the way, the Rust product is open source, if you want to check it out. Then on top of this, we have two lines of products: Concrete NumPy and Concrete ML. Concrete NumPy, if you want to turn your NumPy program, let's say, into an FHE equivalent; or Concrete ML, if you want to be at an even higher level, where, I don't know, you just take your decision tree and replace it by the equivalent in FHE.

Okay, so you just dropped a bomb on us. You said all of this is now open source. Yes. So all of this: if you go to Zama.ai or to GitHub, you will have access to Concrete Core and the Concrete libraries, the Rust stuff, which is open source and which was created, I think, more than a year ago. So it was our first product. It's mainly for people who want to work in Rust, or for cryptographers. And on top of this, you have Concrete NumPy and Concrete ML.

Those are for people who are not cryptographers and who just want things to work and be easy. For now, the blockchain division hasn't open-sourced its product, because it's still in beta. It's still baking. Yeah, it's still in the oven.

Yes, but in the end, everything in the company is open source. And by choice of the company, it's free to use for researchers, for students, or for companies who just want to give it a try. Then you need another license if you want to use it in a commercial product. I see.

I see. Okay, so that's the business model, for some of it. Yes. And if you are interested in reading a bit more about this: recently, our CEO, Rand, wrote a blog post about how we monetize our stuff.

He explains it, because that's a recurring question we get. I imagine that all of the investors are very interested. They ask: oh, you are open source, but how can you live off it? And he explained the ways other companies, especially in the US, have been able to follow an open source model but be profitable in the end.

And so just to read into this a little bit, I'm gathering that your go-to-market strategy is bottom-up. Yes. And this is the zero barrier to entry sort of entry point. Exactly.

So for now, we are trying to get as many users as possible. And our users are developers. What we try to do is to have products which are so easy to use and so attractive that developers will just want to have a look, test them, maybe make a prototype, and then show it to their boss. And then, if they see that they can make a nice proof of concept, they come back to us to change the license and include it in the next big shipment of their SaaS release.

So do you target specifically ML engineers that are also excited about and passionate about privacy? Is that sort of like your niche? So in my team, yes. But it's a niche of millions of developers.

So it's... Oh, no, I say that with all of the respect in the world.

I think that in terms of the business, your aim is to be the biggest fish in the smallest pond, where the pond is your target market, and you want to be the absolute best thing available to all of the people in that very specific market. And really, ML in particular is also, for us, a good example of what cloud applications will be. So once we have tackled this, once we've done it for ML, we should be able to extend it to other kinds of applications. But you know, ML is really nice because, as I told you, for example, things are slow.

In ML, often, there are things that you don't have to run in real time. You remember the health test: you can wait an hour, and it's fine if you have the privacy. So ML is typically the use case you can do today, even if things are slow. And then later, when things are faster with the hardware accelerators, we are going to enlarge the possible applications.

So would this library be applicable to self-driving cars that are constantly uploading data, GPS and LiDAR and all of this very sensitive information about what you do, for example, in your Tesla or your Rivian or your Kodiak truck? I would say yes. You mentioned Tesla: if someone at Tesla is listening to us, and I'm pretty sure that's the case, they can just download Concrete ML and try the equivalent of what they are doing in the clear with our stuff.

You will see in the readme and the documentation that... I mean, I suppose that this is a really interesting business angle, a really interesting approach, because never before in the history of companies here in Silicon Valley have you been able to mix the business model that Google has with the business model that Apple has. Apple's business model is: protect privacy at all costs. Sell devices, make money on devices, but protect every end user's privacy. Do not own the data.

Google, on the other hand, has the business model of giving away everything it possibly can for free, but owning the data, and then using that data to sell stuff. And in order to do that effectively, they do have to use ML: they have to figure out what you like, where you live, when you go to work, what the coffee shops on the way to work are, so that when you use Google Maps, a pop-up says: oh, Starbucks, sponsored advertisement, on your way to work right there. Like, that's it. With our stuff, it should be possible to show advertisements.

So as a user, you see the advertisements, so you can be paid, but still Apple doesn't know that you are reading this advertisement and doesn't know where you are going. I don't say it's easy, but it's definitely possible to do that kind of thing. Yes, as you said, it's the best of both worlds, right? And also one thing I said before, but I liked it: Apple, instead of avoiding some machine learning algorithms because they care about privacy, could stop limiting themselves.

They could just run it on servers using FHE. Or they could avoid spending, I don't know, billions of dollars to design and build, you know, neural engines. They could replace this piece of hardware, which takes a long time to design, and, I mean, there's one in every iPhone they sell, with software, which is more or less free, and a server, which is, okay, not free, but not the same price. So yes, that's also the kind of thing Apple, all those kinds of companies, could be interested in.

Yeah, so I see fully homomorphic encryption as opening up a lot of possibilities, but I also see some other approaches here. One is: just do everything on the device. Now, I've heard this before, and when I hear that, I think to myself: oh my goodness, but there's so much computation needed to get your ML trained and to actually execute against your neural nets.

And maybe that's a direction Apple has been thinking about, because you can control the experience and the performance, and it's related to the chips, and they're good at making chips and making things fast. Are you not worried at all about anything like that, like doing everything on the device? I mean, of course they can do whatever they want, but, you know, in machine learning they often like to update their models so that, you know, they become more accurate, because the data are changing from day to day.

So if you have to update a model on, you know, billions of devices, as Apple has, it can be a nightmare. It's so much easier, if you are Apple, to update something on a single server, or the few servers they have, and still have things be private. Yeah. Well, remember when we were at Apple, one of the absolute critical, show-stopping metrics we looked at was always battery life.

What are we doing to the battery life? Like, oh, we do all of this encryption and decryption, we're protecting the data. And we would stop a release if we didn't meet our goals, right? Or if it took longer to decrypt the movie than it should.

You know, we were not in the sales team, but I'm pretty sure that the number of hardware pieces they have in the iPhone is important too. If they have to add a new neural engine per iPhone, it's, I don't know, $1, $5. It's expensive. And then you multiply that by the billions of iPhones that they have.

It's a lot of money. Well, I mean, not only is that a very real, you know, upfront cost that they have to pay, but there's also a cost in terms of the user experience, if you put everything on the device and then it kills your battery in an hour or two, or even three.

Like, guess what? I'm going to turn off that feature. I'm not going to enjoy it. Because I need my phone to work at least on one charge for the entire day.

And I would like my phone to last at least three years. Maybe this is just me; I don't know how other people feel about it. But if we start doing everything on device, and devices become more and more mobile, this is not going to fly.

This is one of the big benefits of having a SaaS service. You beam a small piece of data up to the cloud, the cloud does all the expensive computation, and it just gives you a quick answer when it's done. You centralize all of that. So not only is your stuff always up to date, because it's running on a server instead of a bunch of little itty-bitty devices scattered across your entire ecosystem of users, it's also not going to chew away at your battery life.

Interesting. These are all very good considerations, and I'm sure each application is going to have to sit down and think through its pros and cons. So we are in the early days, and things become easier and easier, but still, we are working with early adopters.
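To make the cloud-computes-on-private-data idea concrete, here is a toy Python sketch of a server operating on values it never sees in the clear. This is only an illustration of the additive-homomorphism intuition, not real FHE (actual schemes like Zama's TFHE are vastly more involved), and every name and parameter in it is invented for this example:

```python
import secrets

MOD = 2**32  # toy modulus; real FHE parameters are chosen very differently


def encrypt(m: int, key: int) -> int:
    # One-time-pad style: the ciphertext reveals nothing about m without the key.
    return (m + key) % MOD


def decrypt(c: int, key: int) -> int:
    return (c - key) % MOD


# Client side: encrypt two private values with fresh random keys.
k1, k2 = secrets.randbelow(MOD), secrets.randbelow(MOD)
c1, c2 = encrypt(20, k1), encrypt(22, k2)

# "Server" side: adds the ciphertexts without ever seeing 20 or 22.
c_sum = (c1 + c2) % MOD

# Client side: decrypting with the combined key recovers the true sum.
assert decrypt(c_sum, (k1 + k2) % MOD) == 42
```

Note that this toy scheme supports only addition and a single use per key; real FHE supports arbitrary computation on reusable ciphertexts, which is exactly why it is so much harder to build and why it has historically been so expensive to run.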

So you know, people who want to, okay, we are going to use big words, make history. You know, they want to do, for the first time, some, I don't know, private DNA test. And it's even good for the branding of these companies. I mean, if you can say, I can do your test very accurately, plus I will not see your data...

I mean, it's really nice. We see more and more users, you know, escaping some of the bigger messaging apps, for example, because they care about their privacy. If companies like that use our stuff, or the stuff of other companies, they can explain that privacy is at the center of their minds. Amazing.

And I do believe that privacy should be the center of our focus as security professionals. There's a whole philosophical discussion there of, well, if you don't have anything to hide, why do you care? And I don't buy into that type of rhetoric at all, because I think in order to do great things, everyone does need some sense of privacy. And you need that private space, always.

And as soon as you put something out there, maybe you do trust that first party, you know, to handle your data within their own systems. But if they turn around and sell it, and it trickles down and gets passed around, you have absolutely no control over who has your data. Yeah, and we have, you know, examples of companies who you trusted, but who were bought later by other companies. At the time you trusted them, but then you don't know what is happening next.

And then things change, and companies are just entities that have to be profitable, and they will do whatever is legal to become profitable. And if it's legal to sell your data to shady other companies, they're going to do that. That's just what a company does, right? It becomes profitable at the end of the day.

And as consumers, we don't have a lot of options. So that's the philosophical angle. But there are also very strong legal and compliance requirements for specific industries to protect some of this data. Oh, yeah.

Oh, yes. That's a good point, because in Europe, I don't know how it is exactly in the U.S., but there are the GDPR laws to protect the privacy of users.

I'm not a specialist, but more or less, it means that companies are prevented from doing certain things, because the data could be badly handled on the servers. With FHE in particular, or through other privacy-preserving techniques in general, they could do those things now, even with the restrictions. And certainly in the U.S., you are going to have, or already have, something equivalent.

Yep, in the U.S. we have the CCPA here in California, the California Consumer Privacy Act, which is a little bit harsher than GDPR, I would say.

There are regulatory agencies. There's HIPAA for the healthcare space. There's the SEC, you know, the Securities and Exchange Commission. And New York DFS regulates financial transactions if you're registered in the state of New York.

So, essentially for banks and the financial verticals. And so there are lots of markets for these things, for privacy in general. And I don't think the right thing to do is the bare minimum, which is just complying with the law. I think as security professionals we should always strive to secure data, because there are a lot of things in this world that you just don't know whether they're going to happen or not.

And a huge piece of security is protecting against that and reducing your risk exposure, right? So I think this is all great work, and it all reinforces a future that is much better than what we've had so far in terms of data breaches, where we can take advantage of new services but still preserve our privacy, right? So, in that spirit, people can try Concrete, Concrete Numpy, and Concrete ML if they go to github.com/zama-ai, right?

So yes, it would be my pleasure if they try it. Don't take my word as truth; just try it, discover the documentation.

And normally it will just work. If I took one thing from Apple, you know, it's to make things which just work, which are easy to use, which are tested; you know, if it is not tested, it doesn't work. It was really my pleasure, my pride, to try to do that in the team. So yes, try it. And then if you ever have issues with our stuff, you have several ways to communicate with us.

We'll put his LinkedIn in the description of the podcast. So you can ping me on LinkedIn, for example, or you can go to community.zama.ai, which is, you know, a Discourse forum.

There are several categories, and there you discuss directly with the developers. We have a policy of answering questions very fast, and we try to, you know, give extensive answers, such that we really help the users. There is also FHE.org.

FHE.org is actually, let's say, an association, a community, which is about FHE; so not only Zama's FHE, but all the FHEs. There is a Discord, and you can discuss with other users of FHE about what you are doing, if you want to collaborate with other people, and so on.

And finally, FHE.org organizes a meetup. It's very regular, I think once a month, where we invite someone to speak about FHE; sometimes it's people from Zama, sometimes it's people from outside. So if you are interested in FHE in general, just sign up for the meetup and you will see that we have amazing speakers.

We also have recordings of previous sessions, if you want to have a look at what happened. So yes, it would be my pleasure to have a lot of users, and so to have a lot of bugs; if I have bugs, it means I have users trying to make it work. And if I have bugs, I will make sure that we fix them.

One thing that we said at Apple is that if you don't have a Radar, and Radar is, you know, the equivalent of issues, the bug doesn't exist. No one knows that it exists. So if you have something that doesn't work, or something which is missing, just come to one of those channels and tell us what we can do for you, and we will do it.

Amazing. It's like having an entire startup ready to go for free, just by joining the community. And as we said, you get value from it too, because you're getting product-market fit.

So that's amazing. And if I can mention, we are open source. So if I can ask for something, a small thing, something which is, you know, not expensive, just a bit of help from the community: if you can star us on GitHub, it can help us get some exposure and show, more or less, that people are using it and are amazed by what it does. So yeah, go star those repositories, all right.

Yes, I'm waiting for your GitHub star. Oh, yes, okay. I'm so sorry. I have to get on that.

I will add it. Yep, I definitely will. So share with us, you were at Apple for almost 12 years. You have been with Zama for almost three years.

Which one is better? So, it's hard to compare, but really, I don't regret having made this choice. I had made my way at Apple, let's say, and now I am on a very nice mission at Zama. I mean, the privacy of the users; we are doing something which is really good for users.

It's really a very cutting-edge mission, and I really like the founders' vision for it. It's really great. Also, one thing which is really nice for me at Zama is that I've built my own team, and I try, I think I conveyed it during this talk, I try to recreate a bit of the culture we had at Apple in my team, or in the company.

And really, it's my pleasure, my pride, when things are working smoothly. I want the people in my team to be happy, and so on. I want them to work on, you know, cutting-edge technologies. But I also want the users to be happy with what we do.

You know, currently I'm doing some interviews with users, and I'm very proud when they tell me, and it's quite often, that the documentation is very nice, it's very clear, and they can use it without really knowing how it works under the hood. That's the kind of thing I like. And yes, maybe to finish about what is nice at Zama: you know, at Apple, we were in a very nice place, but it was a closed place. So secretive; we had no idea what we were working on, and these boards would just show up and we were like, what is this?

And we just had to take them and work on them. You know, it's the first time I've done a podcast, and it's really my pleasure, after 12 years at Apple, to be able to speak a bit outside about what we are doing.

At Apple, that's more restricted, you know, to Tim Cook or the big VPs of the company. Here, I can try to exchange directly with the developers. So it's also one thing which is really nice in the company. That's amazing.

I love that they are all about development and sharing, and being part of the community, and really looking after the people who work for the company, giving them opportunities to advance their careers, do talks, and engage with those open source communities. Even though we used a ton of open source stuff at Apple, we were never allowed to go to conferences or talk about things. It's a very secretive company. And we were on a secretive team in a secretive company.

And we were not even allowed to say really what we worked on, ever. So we just didn't. And you had to be okay with secrecy in order to get those experiences, which, you know, were amazing experiences; I wouldn't trade them for anything in the world.

But I'm happy that you get the opportunity now at Zama to participate a little bit more in the bigger picture, and to feel that sense of connectedness with the community, and derive some meaning from that. Thank you so much, Benoit, for joining the Security Podcast of Silicon Valley. It was an absolute pleasure to discuss the future of cryptography and fully homomorphic encryption, and for you to share, you know, what Zama's been up to and what you've been up to. I'm excited about the future, and it's shows like this that really give me optimism for what we can end up building, and how we can get out of the security pickle that we seem to have put ourselves in.

Would you like to leave any of our listeners with some final words of wisdom? I would like to thank you for this amazing moment. You know, it was my first podcast, and it was really a great experience. So if I had two things to recommend to your listeners, the first one would be to try Concrete ML, and to star us on GitHub, and so on.

And the second one would be... to ask you if they can have, you know, an equivalent interview with you, because it's really been an amazing moment.

I mean, it has been very smooth, I was very comfortable, and... Oh, good.

I'm super happy to hear that, Benoit. Like, that's important to me. I'm glad that you...

No, I mean, you make people very comfortable and at ease by asking very interesting questions, very smoothly, so thanks a lot for that. No, thank you, Benoit, for jumping on, and I know it was your first podcast, and I've always respected and enjoyed working with you, 100 percent, and I've learned a ton about cryptography and obfuscation and that entire space. And I still consider myself a novice, even though I've had a great experience and a great teacher.

Everyone is learning in security, so that's all. So hey, share with our listeners, is Zama hiring at the moment? Yes, yes, yes. Okay.

So, with what we raised from investors, we have a goal of hiring 100 people. And as I told you, it's everywhere, so it can be in Paris. Mm-hmm.

But it's remote-first, so it could be San Francisco people, or China. For now, we don't have American people; it's not because we don't want to, maybe that will come. You know, there are also the, you know, salary levels in the Valley, but...

I don't understand what's going on here in the Valley either, but...

But I mean, it would be possible. So we are hiring 100 people by, I think, the end of 2023. You can go to jobs.zama.ai and you will see the open positions there. And if you don't see something which is, you know, for you, you can still apply as a spontaneous candidate. And you can drop me an email, or there is a way on zama.ai to just speak to the company. So you can say, hey, I'm very excited about what you are doing, I want to apply. Yeah.

Thank you so much for all of that information. We will keep our eyes out and all the candidates will be flooding in. You know, I would be very happy to have you in my team. I'm flattered and I'm very honored, Benoit.

I'm not sure I'm qualified to be on your team, but thank you. There are probably some cryptographers out there that are getting very excited right now about hearing all of this stuff, but I'll let them have first dibs on your team, of course. Benoit, it's been an absolute pleasure. Thank you so much again.

And thank you to our listeners, too, for tuning in to another episode of the Security Podcast in Silicon Valley. Stay tuned for the next episode. Thank you, everyone.