52. Suha Can, CISO at Grammarly: Safeguarding User Data in Enterprise AI Systems

Hello, everyone, and welcome to another episode of the Security Podcast of Silicon Valley. I'm your host, John McLaughlin, and I'm joined by my co-host, Sasha Sienkiewicz, and we have a very special guest for everyone today: Suha Can, the CISO at Grammarly, the Chief Information Security Officer. Suha, you bring a lot of great experience to the table. You're leading security, privacy, compliance, and identity at that global company, Grammarly.

He's dedicated to securing the data for Grammarly, which includes over 30 million users and 70,000 teams at enterprises and organizations worldwide. Previously, Suha, you were the Director of Security at Amazon, overseeing security engineering globally for Amazon Payments and Alexa. While at Amazon, Suha, you built Lumos, the secure payment processing service handling all customer payments for all Amazon businesses and relying on formal verification methods for security assurance. You also led global security research at Microsoft.

You were responsible for responding to all of those zero-day exploits, building mitigations to kill entire classes of vulnerabilities, and increasing the cost of exploits and unknown vulnerabilities. Welcome to the show, Suha. Thanks, John. Thanks for having me.

It's great to have you. It's awesome to be here. For all of our listeners, how did you get into security?

Yeah, sure thing. So I grew up in Turkey.

And growing up, I was really into maths and computers. This was, let's say, the mid-90s. I'm in my 40s now. The thing that I didn't have at home that I really wanted was internet access.

This was still the early days of the internet in Turkey. I was really looking for a way to get connected. I was in high school at the time, and we didn't have internet at school either. So I found this coffee shop in a nicer part of town.

And that was pretty far from where I lived. So I would go on a pretty long walk after school to this coffee shop, down to the bottom floor of the coffee shop. And there was one computer. And this one computer was connected to the internet.

This was a nice part of town. Apparently, internet access was something that was starting to come in, so they put in one computer to attract people. And through that computer, I fell in love with IRC and started getting to know other people who were into computers.

Remember, this was Turkey. The IRC community of Turkey at that time was pretty much students from the few technical universities, and managers and admins and the like. I started learning from them. And really, when you look at the time, it was amazing, because I was seeking knowledge.

And here were other people, all of whom were more knowledgeable than me. And there were two communities at the time. One was the Linux, let's say, admin and security community, people managing Unix servers in various companies and universities. And the second was the demoscene community, people building fascinating graphics with very small amounts of code.

And all of those communities were accessible through this medium. And I got the chance to join a Unix security group. And I learned a lot from those people. There was a sense of curiosity, of seeking to understand how things work.

It was all about having fun, gaining knowledge, a little bit of friendly competition, for sure. And I had a lot of fun learning from those people and being part of that. And most of those people I'm still connected with today. Many of them actually are in the US.

Several went on to become CISOs and security founders here. That's how I got into security. That's awesome. That's an incredible story.

How old were you when you discovered that coffee shop? 18, I think. Something like that. Maybe 16.

A very impressionable age. Very impressionable age. I was certainly one of the most annoying people on IRC at the time. And most people were older than me.

And yeah, it truly was an extraordinary time. I love it. I love the sense of curiosity and just exploring things. So now today, as the Chief Information Security Officer at Grammarly, an amazing product, by the way. I'm a huge fan.

I've been a paid member for, man, I don't even know how many years. As soon as I discovered you guys, I bought it. But what is it? What do you do at Grammarly as the CISO?

Great. Yeah, I wear a few hats at Grammarly. Job number one is the CISO job. So I am responsible for all aspects of trust in the company.

So I lead security, both product security and corporate security, privacy, compliance, corporate engineering, which is the IT infrastructure of Grammarly itself, and responsible AI, our research team that's focused on making AI safe for our customers. So that job, number one, is my main role. I also lead the engineering teams at Grammarly that build the enterprise version of Grammarly, because typically the enterprise world has a lot of interesting security and IT management problems to solve.

And my team has also built those capabilities of Grammarly for our customers. Sounds like you're also chief product security officer. Yeah, absolutely. Kind of.

I would say that. Absolutely. Chief product security officer, chief compliance officer, chief privacy officer, all of those things. Yes.

Security has a lot of different subcategories. And the security umbrella is vast and extends into different aspects of the product and the organization. How do you make sure that security as an organization runs smoothly and does not impede research and development? Great question.

Sure. So I think one of the important things is really building solutions that scale. When we look at the security program, you really want to make sure that you understand the risk tolerance of the company. At a company like Grammarly, trust clearly needs to be paramount, so we have a pretty high bar in security.

You ensure that the investment is also strategic and on par with that importance. Obviously, if you work in a company where security is very important, but you don't have the right investment in the area and you don't have the right strategic voice to influence the company, then no matter who you hire, you're always going to be behind, and the business moves at a very fast speed. You definitely cannot be in the business of slowing things down. So first it starts with strategic alignment.

You need to have a very clear vision. You need to bring the board along. You need to bring the corporate leadership around. And then you hire great people.

Obviously, you build teams. But frankly speaking, there's always this balance between velocity, enabling the business, and security. Today it starts with a lot of automation, a lot of scaling our programs. I think that's always a great place to start, so you can get the basics in check.

And once you've got the basics in check, and I actually believe that scalable solutions are necessary but not sufficient, for really high-risk areas, for your crown jewels, you need to go a lot deeper. You need to build layers of defense. And typically that requires really skilled security experts who can go really deep, do a lot of manual introspection, and really raise the bar in a major way.

And the cool thing is, this is kind of what you do. You invest in things at scale, and then you invest deep in certain areas that are really important and high impact to the business. But as you do these things, how you do them is also important.

And I think security has to have a builder mindset. So my teams don't operate in an advisory-only way, where we advise teams what to do and then move away. We build solutions.

Most of our security engineers are also software engineers, and they are builders. So no matter what your strategy is, the how of it should be being a builder and bringing a builder mindset to the organization. And that's super important.

And that's also how you gain credibility with developer teams. One of my fond memories from my time at Amazon, and it has also repeated at Grammarly: we would do a security deep dive and present our work to the team that owns the services we were reviewing.

And the principal engineer of the team told me, hey, I didn't know security could work like this. And that's one of the nicest compliments one can hear, because people are used to security being: hey, answer these questions, check, make sure you do these things, and then walking away. And I think that's just how, unfortunately, a lot of people are conditioned.

But typically, that's not how you're going to be able to land impact. That's not how you're going to gain the respect of the engineers, who are ultimately our closest partners in security. Yeah, absolutely.

Having the right people in all of the right areas of security is important to make sure that the conversation and communication with engineering teams is open and ongoing. Security in isolation does not work. Security needs to be driven by the business, and it needs to closely collaborate with the engineering teams that implement certain controls. I agree with you 100%.

I'd even go farther than that, in the sense that we actually go and build those controls. And I think that's where I truly believe security is a builder profession. We have to have a good mix of security skills and software engineering skills.

And if we identify a particular threat and we want to build a control, we probably want to build it ourselves, because it is very likely that that control will also be useful to other teams in the company. So it's really thinking about it at scale and being a builder. That's what's going to make your strategy work for you. Yeah.

The engineering mindset of security teams is extremely welcome and very well received by the other parts of the organization. We've seen it work really well so many times. And that's an amazing approach to take. Having said that, what does Grammarly do better than everyone else in the world?

I would say that Grammarly is the only AI company really focused on transforming how people communicate, no matter whether it's at work, at school, or in personal communications. We've been doing this for about 15 years. And I think this focus has led to a product that people really love.

That they really find very useful. But it's really about AI communications. And I think this is what Grammarly does better than everyone else in the world. And that's probably why we are used by something like 95% of the Fortune 500.

So I think it's important to highlight that Grammarly has been doing AI before AI was as popular as it is in 2024. And we're super grateful to see you create a security organization that is dedicated to protecting the data that all of us provide to Grammarly in order to get the service out of that data. Was that a question? No, it wasn't.

It was just a thank you for doing amazing work protecting that data. Okay, so let's talk about how security intersects the AI space a little bit. I know that Grammarly started off mostly aiming for consumers. The consumer space is very important, but it is a very different game than when you go after enterprises.

Now, I see Grammarly going more for corporations, more for enterprises, more for corporate communication within your organization: how do you trim things up, make them more succinct, increase information density. I love the feature that actually does the tone analysis. So when you write an email, it will show you little emojis of what you might sound like when someone reads it.

It's a super good way to just reflect on your word choice and tone. But in terms of security, if you sell that product to the enterprise space, usually there are certification questions that come up. Do you have SOC 2? ISO 27001, maybe? I don't know.

Does Grammarly go after those certifications to demonstrate its security posture? Or what is your approach? Yes, and I think it is a great call-out. Just look at what Grammarly does.

Clearly, security is going to be extremely important. And when you cater to consumers, privacy is definitely extremely important. There are a lot of questions about how the data is handled.

From the private space, when you go B2B and start talking to enterprises, security becomes extremely important, and a deep topic for conversation. And compliance, of course, is one of the ways we communicate to each other as companies that we meet certain standards, that a third party also agrees with us that we meet certain standards.

So that's the value of compliance. Yes, we do. We have SOC 2 Type 2, HIPAA, ISO, all the common industry certifications around security. We are also quite aggressive in pursuing new certifications, especially in the realm of AI and privacy.

Because, especially with AI, there is a lot of nervousness in the industry. When you talk about Silicon Valley and companies like startups, everybody is in embracing mode, in building-things-on-top-of-it mode. But at the global scale, and Grammarly is a global company, so I talk to a lot of global CISOs.

There's still a lot of nervousness. There's a lot of timidity around adopting AI due to the risks and unknowns. So in the compliance space, for example, we are really pursuing, and I think we are one of the first few to do so, the new ISO 42001, the AI risk management compliance certification, and we'll add it to our roster pretty soon, hopefully.

That's how we think about compliance. For a security person, compliance is definitely valuable, but it's not everything. There is a lot that we do that enables Grammarly to be safe for customers.

Maybe I can talk a little bit about Responsible AI. It's an area where we have a dedicated team, a robust set of researchers who work on Responsible AI. They ensure that all of our models go through a safety assurance procedure. As we build models, we use analytical linguists and machine learning models to actually make sure that these models have safety built in.

That they will not leak data. That they will not mislead our users by hallucinating. That they will not say things that would be awful for Grammarly to say. And I think this is just one example of how we operate: a dedicated team of world-class researchers, applied scientists, and analytical linguists focused on Responsible AI.

It's kind of how we are able to really continue bringing the latest technology to our customers while maintaining a high bar on safety with AI. So typically when I talk to other CISOs, they of course ask about compliance, but they're also curious. So I end up explaining to them what we do with Responsible AI. We also rely a lot on controls in the product.

Because you can tell someone that you care about privacy and that you have a high security bar, but it's not easy for them to really see what you're doing under the hood. So we provide controls to users so that they can, for example, see what data we have stored against their user ID at any point in time, in product, at just the click of a button.

This is kind of our transparency and control. By giving them the ability to determine whether they would like to use a particular feature or not, and explaining to them what data powers that particular feature, we believe we earn the trust of our users.

And this user transparency and control is also part of our overall security and privacy. That's typically how I think about our security investments and how we describe and explain them in order to earn the trust of customers. I'm not surprised at all. Grammarly has always been a trailblazing type of company.

You've been on the forefront of all of this stuff for a very long time. And I'm super happy to hear how security is part of that core exploration. Because really, I'm not sure that the security community knows what it means to have secure AI quite yet. I think the jury is still out a little bit on that one.

But you mentioned ISO 42001. You mentioned the responsible AI team. Researchers dedicated just to thinking about this new space. That's incredible.

And so it's not just leadership within Grammarly for the benefit of all of Grammarly's customers. It's also, I think, that you're participating in a bigger discussion here that extends way outside of Grammarly. A huge thanks for gathering together a world-class team and putting them to work thinking about and trailblazing these super important questions in AI. Yeah, I don't know any company yet that's 42001 certified.

But I saw that come out a couple of months ago. And I was like, this is great. This is interesting. There is a lot of development in AI.

And at the intersection between AI and security, 42001 is a fine example of a new standard. I concur that we don't see many companies having gone through that certification yet. And we're super excited to hear that Grammarly is on the path to getting that certification soon. And it's important that companies consider that certifications, standards, and frameworks are not there for checkmark purposes.

Often they have very good guidelines and guardrails on what to do and what not to do. So this is awesome. I have a related question on AI security and compliance. The European regulators decided to release this new AI Act.

And I imagine that Grammarly has a lot of customers in Europe. Grammarly is a global company; a lot of customers are in Europe. Have the EU AI regulations impacted the velocity or the roadmap of certain controls that are being implemented?

Great question. So, first of all, I welcome it, and I think it really is very positive that there is a lot more clarity on how companies should handle user data generally in this space. As a security practitioner, as a privacy practitioner, what I fear most is unknowns and ambiguity. And there is still a lot of that in this space.

We don't know yet where this lands. It's an extremely fast-moving space. But having clarity about the expectations from a regulator's perspective is actually very helpful. So we looked into this, of course, the specifics of it.

Did it slow us down? It didn't. The main reason is really that Grammarly has a deep focus on building ethical AI and user trust. And, you know, look at how we were handling EU users' data, for example, before the Act and after the Act.

It didn't change. And why? Because we were handling it in an ethical way from the get-go. For example, take the type of data that we use.

Some companies use data to improve their models, improve their product. Some companies hold on to data indefinitely. That's not how Grammarly operates. For users in the EU, and we are compliant with GDPR, we already would not be saving their data or using it to improve our models or our products.

And in fact, after the Act, we've continued to do exactly the same thing, and we remain compliant. And like I said, I think it's ultimately the alignment between the company's principles and the spirit of what the EU regulators are aiming for. And I think that helped us in this case. Fantastic.

So in other words, having a strong foundation in data privacy and data governance, having those principles in place, means that new regulations constantly coming out do not really change the roadmap for Grammarly, as the new regulations do not require sudden changes in how the product is built and how the product operates. Fantastic. Absolutely.

Absolutely. And internally we call this the sunlight test. Regardless of what a particular regulation might require companies to do, our internal compass is to make sure that whatever practices we have will pass the sunlight test. And I think that principle is serving us well, in the sense that this type of news does not bounce back on our immediate roadmap.

Because we are operating under the same underlying principles, which are really customer-first principles, right? Use customer data only to provide a service to the customer. Amazing. Amazing.

So I'm actually super curious: in terms of Grammarly being an AI company, a lot of the concerns focus around data. Sasha and I have, in our own professional experiences, heard a lot of requests come in for AI services to go on-prem, or at least for the data ownership piece to live on-prem while all of the services stay up in a fully managed cloud. Have you had a lot of those requests come in, or requests for a cloud that has very strong security guarantees? How do you handle that?

You guys must navigate that with great poise and class. Great. So, absolutely. I think when you think about it, the underlying concern is real.

And it's even more true in the age of generative AI: companies are nervous about how their data is going to be handled. When a company uses an AI tool internally, their data goes somewhere else. And that's the SaaS model, and it's totally fine, but there do need to be some controls and some standards that you provide to your customers. So in this space, it really boils down to concerns around retention.

Do you store this data? How long do you store this data? Who can access this data? There are security concerns, and there are privacy-related concerns about regional data storage and things like that.

And the second piece is, do you use my data to train models? And if you do, is it to train models just for me or is it to train models for all of your customers? Yeah. Yeah.

Those are the good questions. Those are legitimate worries. So for us, we have a tiered model, and we are constantly iterating on it based on customer feedback. But where we currently landed, obviously, there are a couple of trade-offs involved.

If you provide an on-prem solution, the customer has to manage it. And some customers just don't want to do that. It requires investment from the customer, and they still want their data kept private and secure. So how do you do that?

First, for our enterprise customers, we are launching this month a feature called Bring Your Own Key, which ultimately encrypts data in the Grammarly cloud with a key that's controlled by the customer. So the customer hosts the key and gives us access to the key so that we can use it to encrypt and decrypt data.

And that's the model that we are currently offering our enterprise customers. They know exactly what's happening with their data. They can revoke our access to their data immediately by revoking our access to the key. They can rotate their key so that they have control over the characteristics of the encryption.

So they stay in control of their data. And I think the Bring Your Own Key solution for enterprise customers is where we can currently strike the balance between the cost to the customer of operating their own environment and something that's completely owned by Grammarly, where they have no visibility or control over the data. So that's kind of how we strike the balance on data security for enterprise customers. On training, we absolutely do not train on the data of our enterprise customers.

And it's just not something we do. And obviously, that's how it should be. So that's kind of our stance on training with customer data. Yeah, that's awesome.
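The Bring Your Own Key arrangement described here is typically implemented as envelope encryption: each record is encrypted with its own data key, and that data key is wrapped by a key the customer controls, so revoking access to the customer's key instantly revokes the vendor's ability to decrypt. Below is a minimal sketch of that control flow. The class names, the in-memory "KMS", and the toy SHA-256 keystream are all illustrative stand-ins, not Grammarly's implementation; a real system would use AES-GCM and a managed KMS such as the customer's cloud key service.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream (SHA-256 in counter mode) -- for illustration only,
    # NOT a real cipher. Production code would use AES-GCM.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

class CustomerKMS:
    """Stands in for the customer-hosted key service: the vendor never
    sees the key-encryption key (KEK), only wrap/unwrap operations."""
    def __init__(self):
        self._kek = secrets.token_bytes(32)
        self.access_granted = True

    def wrap(self, dek: bytes) -> tuple:
        nonce = secrets.token_bytes(16)
        return nonce, xor(dek, keystream(self._kek, nonce, len(dek)))

    def unwrap(self, nonce: bytes, wrapped: bytes) -> bytes:
        if not self.access_granted:
            raise PermissionError("customer revoked key access")
        return xor(wrapped, keystream(self._kek, nonce, len(wrapped)))

class VendorStore:
    """The vendor's cloud: stores only ciphertext plus a wrapped data key.
    Every decryption requires a live unwrap call to the customer's KMS."""
    def __init__(self, kms: CustomerKMS):
        self.kms = kms
        self.records = {}

    def put(self, record_id: str, plaintext: bytes) -> None:
        dek = secrets.token_bytes(32)          # per-record data key
        nonce = secrets.token_bytes(16)
        ciphertext = xor(plaintext, keystream(dek, nonce, len(plaintext)))
        self.records[record_id] = (nonce, ciphertext, self.kms.wrap(dek))

    def get(self, record_id: str) -> bytes:
        nonce, ciphertext, (knonce, wrapped) = self.records[record_id]
        dek = self.kms.unwrap(knonce, wrapped)  # fails if access revoked
        return xor(ciphertext, keystream(dek, nonce, len(ciphertext)))

kms = CustomerKMS()
store = VendorStore(kms)
store.put("doc-1", b"draft email text")
print(store.get("doc-1"))       # decrypts while access is granted
kms.access_granted = False      # the customer revokes the key
try:
    store.get("doc-1")
except PermissionError as e:
    print("blocked:", e)
```

One reason this pattern pairs well with customer-controlled keys: rotating the customer key only requires re-wrapping the small per-record data keys, not re-encrypting all the stored data.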

I've seen application-layer, bring-your-own-key-style encryption used in a past life to really open up the market for very similar types of products. That's great. If you were to fast forward into the future, and you can put on goggles that let you see ahead, I'll let you decide how far into the future we want to go here. But what does success look like for Grammarly?

And how do you see security enabling that success, you and your team enabling that type of success at Grammarly? Great question, John. Grammarly is at the forefront of AI. And we build models of varying sizes and shapes for several distinct applications.

And this space is full of hard and unsolved security problems. In the future, if we do a great job, our team will be at the forefront of solving these security problems. We'll share the way we bring safety into AI models, generative AI models, large language models, traditional AI models. I think demonstrating and sharing what we have done is definitely a marker I have in my head for saying, yes, my security team has been successful.

We really made this technology safer for everybody, both for our users and for the public at large. So that's the first measure, on the technological side. The second aspect of success: we have a very user-centric, ethical approach to AI, and we invest in tackling specific problems for our customers instead of ignoring them, bringing these capabilities into the user's experience with our product.

Earlier, I spoke about transparency and control. It's actually keeping the user in the driver's seat in making the trade-offs between sharing some data and the value they get from that data, and how transparent we are about it and how much control we give them. So that's the second piece. And whatever future we get to, we need to get there by being this user-centric and ethical AI company.

And that's the second measure for success here. So first, solving hard AI safety and security problems, pushing the industry forward on that. Second, remaining an ethical and user-centric AI provider, setting the standard on that. And of course, the third is that the market has to reward our approach.

That also has to be true for us to be successful. If all those three things happen, then my security team has been successful. Awesome. You've had an amazing journey leading up to Grammarly.

You've had an amazing time at Grammarly. You've built out an amazing team that is capable of making sure that the data controls and security controls are implemented. If you look back on your experience at Grammarly, what is the best day that you've had so far? I'm sure there are a lot more awesome days in the future.

But up to this point, what is the best day you've had at Grammarly? Awesome question. So I've had a lot of great days. I can't pinpoint a specific day, but there are definitely several days that come to mind when I think about this.

But I really love the days where the work is extremely hard, extremely long, and for a really good cause. And indeed, Grammarly has given me many such days so far. A really hard, long day with my team and colleagues, that's just what gets me into my flow state.

Amazing. I always like a really good flow state. And sometimes it happens early in the morning, and sometimes it happens late at night, and sometimes it's both in the same day. Those are always awesome days.

To contrast that a little bit: you've been a security guy your whole life, and you've seen a lot of interesting stuff, I have absolutely no doubt. But if you look back on your entire career, would you say that there is a single most challenging day? And we're curious to hear how you overcame those challenges. This is a great question.

So I would say that by far the most interesting and challenging days of my security career were probably during my time at Microsoft. For context, I led security incident response at Microsoft for the better part of a decade. And security incident response at Microsoft is not for the faint of heart; there were certainly difficult days. You have the most amazing team that you can imagine, a very global, very, very skilled team.

And also, you know, the adversaries that Microsoft deals with are just truly outstanding, technically. Absolutely the best of the best. And we kind of viewed ourselves there in the MSRC as the blue team of the internet. That's how we viewed ourselves.

Definitely a lot of difficult days. There are some that I can't talk about in any context. Yeah, no, no violating NDAs. Absolutely, I wouldn't do that.

No. There is one in particular that came to mind just as I was thinking about it. Around 2017, there was this incident, the Shadow Brokers. I don't know if you remember that.

Some unknown parties came out on the internet, with remarkably bad grammar, and they were teasing that they had a bunch of really, really good zero days from the NSA that they would drop. At first, we thought it was a tease. This went on for a few months.

I remember they did multiple posts. And obviously, whenever something like this surfaces, we look into it. At some point, they came out and actually did their releases. And exploits such as EternalBlue were part of those releases.

So it was real. The Shadow Brokers group was absolutely authentic in their claims. And now, on the Microsoft side, the events leading up to that release were such that we didn't know exactly which vulnerabilities and which exploits they were going to drop.

We had high confidence, but we didn't have certainty, that we had figured out which vulnerabilities they were going to release, and that we had fixed all of them correctly before they actually released. And there's a time element here. We had to work really hard to make sure that we fixed all of these correctly.

But we didn't know exactly which vulnerabilities these folks were going to release. There was a lot of unknown and uncertainty there. And the vulnerabilities we are talking about are remote code execution zero days in an operating system like Windows.

That's very serious. So many people run Windows, and a zero-day code execution puts all of them at risk. And it's not just individuals.

Most of the world back then was running on Windows systems. It still is; most of the world is still running on Windows machines. Yeah, absolutely.

It is absolutely true. So in the days leading up to that release, you have to think about what this was like, right? We were a team of people searching high and low in millions of lines of code in Windows, trying, based on a very limited amount of information, to find which ones they were talking about and fix them in the correct way.

And then we did the Patch Tuesday release. We fixed as much as we could and released it. About a month from then, the Shadow Brokers actually dropped their release. That day was a very stressful day for us.

Yeah. We got the release. We farmed out the exploits in it to my team. And we started reverse engineering them: okay, this one, yeah, we knew about this, we knew about this, we knew about this.

And thankfully we got it right. We had patched all of those vulnerabilities before the release happened. And obviously the opposite would have been a disaster. But then, of course, a couple of months from then, WannaCry happened.

And why? Because WannaCry used one of those very vulnerabilities and exploits, and many people were not up to date with the patches. Yeah. It's one of those situations. People are slow to apply patches.

One of those situations where there's a lot of stress, and a lot of things depend on how well you prioritize a specific set of criticals and highs. And thank you for doing that. It's not just a matter of identifying; I imagine it's also a matter of connecting with engineering teams to make sure that prioritized vulnerabilities are patched and rolled into production as soon as possible. No, absolutely.

Having a communication channel with researchers. Absolutely. But in this case, just remember, and you asked how we overcame this challenge, right? This was not a prioritization exercise.

Because we definitely knew the importance of these vulnerabilities. And this is also not a case where we can communicate with the researcher. There is no researcher. No one is really telling us anything about this.

It's not like the Shadow Brokers and us had a communication channel, right? So it's really about something else. I think the way we were able to overcome this, the principle from this that's valuable to me, is what I call the no-mystery principle. You really have to deal with this level of ambiguity.

And when you have to do that, the mindset has to be that there must be no mystery, no unanswered questions. You try to prove yourself wrong. You say, no, it can't be it. What if it's this?

What if it's that? You basically keep challenging yourself that you must be wrong in your assumption that this is the vulnerability. And when you approach the problem in that way, such that through answering all of the questions only the truth remains, then obviously this must be the vulnerability. This is something that I use with my teams a lot.

We call it the no-mystery principle. Really not being satisfied with a hole in the reasoning; you really get down to the bottom of it. Really understand it. You have to prove yourself wrong.

And if you can't, then I think you must be right. So that's what I would say is the key ingredient for success in these situations: apply the no-mystery principle. It's a great approach in security, and in life in general. We have to understand the problem statement clearly before we start fixing something.

In other words, we have to know what we're trying to fix before we fix it. And it's important to spend as much time as possible understanding the problem before we dive into solutions for that problem. Okay. Super quick.

Last question here for you, Suha Can. If you could meet your younger self, would you? And what would you say? Yeah.

I would totally meet him, just to freak the guy out. Just to freak him out. Absolutely. And I never did meet my older self, so that's also true.

But yeah, I think the only thing I can say is, it took me a very long time to get to a place where I feel like I actually know what I'm doing. Having that maybe unjustified confidence earlier on would probably have let me take more risk. So maybe what I would tell him is: hey, believe in yourself more, take more risk.

That's probably what I would tell my younger self. Stop having imposter syndrome. Stop it. Absolutely.

Yeah. This has been absolutely wonderful. Thank you so much for joining us on another episode of the Security Podcast of Silicon Valley. Actually, before I close out, I do have a question.

Please, Sasha. Please. Yeah, go for it. Okay.

Quick question, Suha. What are your predictions for the security space? And by security space, let's be more specific.

Product security. And what are your predictions for AI in the context of security? What developments do you see in the next three to five years? Okay.

I'll answer the second half of it first, I think, which is really about the horizon for AI in security. I think there are two pieces. First, we will see a lot more misuse of advanced AI technology by adversaries, by malicious actors, to accelerate whatever cyber capabilities they may have. It could be vulnerability research, it could be malware development, it could be phishing and social engineering, among other things.

I will be shocked if that's not happening already, and I think we will see a lot more of it. I also expect defense teams to leverage these tools at their core. I know it's already happening a little bit in the SOC and the IR space, but there's just so much more that could happen.

So I expect those things to be part of our life a couple of years from now. On the risk side, when we think about the types of incidents we will see, we will probably run into cases where hallucinations actually have real-world impact. Something will happen in the physical world because of a hallucination in a piece of AI software that humans over-rely on. And I think we will see it in the news.

And I think it's going to be pretty terrifying. We'll start thinking a lot more about resilience in the context of AI, critical infrastructure, and so forth. And if you look a little bit longer term, I do believe that with artificial general intelligence, the misalignment issues will come to bear. We'll see cases where an AI has objectives that are not aligned with what the humans are trying to achieve.

And I think that's also a very fascinating problem space. So that's about AI. On the security side, I think the security role, and the role of the CISO in particular, is becoming a lot broader than it originally was. It was originally quite focused on corporate IT security and things of that nature.

Then it moved into product security, and now into all aspects of AI. A lot of companies are actually looking to the CISO to lead the company in the adoption of AI, in bringing AI into the workplace. And I do believe the role of the CISO is becoming one of a chief trust officer in all aspects. And I like it.

I think it doesn't change the fundamental nature of the role, which is a highly technical role that's also very strategic. But it enables us to have a bigger seat at the table in company conversations. Yeah, and we see it a lot more.

Most of the functions that companies produce as value-add to the market are built around data. And it's all about how you protect the data. Does the data go into the training of the model? Who has access to the data, whether it's internal teams or internal services?

And I agree with you 100% that the CISO and CSO roles are all about the data, and about how you build trust between the company and customers in terms of how you, as a company, protect the data that is given to you by the customer. And AI is a very fascinating space that is being added into that mix. That was the first time I heard it posed as a chief trust officer.

I like that a lot. I'm going to start using that, if that's okay. I know a couple of such people.

The title exists already. Yeah. Amazing. And I think it makes sense.

Amazing. Well, thank you so much for joining us on an episode of the Security Podcast of Silicon Valley. Your insights have been very enlightening. My pleasure, John and Sasha.

It's great being here, thinking a little bit about the past. That was definitely a good experience for me. Awesome to have you on the episode. We talked a lot about the prior experiences that led you to this point with Grammarly.

Thank you so much for sharing your point of view on the development of security and trust. At the very end of the episode, we touched on a lot of forward-looking topics, and maybe, if you are interested, curious, and available, we can have a follow-up episode where we discuss what will happen in the future based on what we see in the present.

Yeah. I also like talking about the future. And a huge thank you to all of our listeners for tuning in to another episode of the Security Podcast of Silicon Valley. This is a YSecure production.

I am your host, John McLaughlin, joined by co-host Sasha Sienkiewicz. It's been an absolute pleasure. Thank you so much for having me. The CISO of Grammarly.

Huge thank you. All of the appreciation in the world for sharing some vulnerable moments with us, and deep insights. Take care, man.

Thank you.