41. Michael Moore, Chief Privacy Officer at Lacework, Securing Tomorrow: Navigating the Cyber Frontier

Hello, everyone, and welcome to another episode of the security podcast of Silicon Valley. This is a Y Security production, and I'm your host, John McLaughlin. I'm here today with a very special guest, Michael Moore, who is the chief privacy officer at Lacework. Welcome to the show, Michael.
Thank you, John. Great to be here. So you bring a lot of awesome experience to the table. You came up through Analog Devices back in the day, and then you were at Cypress Semiconductor as a patent agent and applications engineer.
So you bring some of that engineering grit to the table. You were an associate at Morgan Lewis. You were the corporate counsel for IP at Symantec. You spent a good chunk of time after that at Rambus as vice president of intellectual property and deputy general counsel.
And then where our paths crossed at Pure Storage, you were the vice president and deputy general counsel for privacy, sales compliance, product, and IP. A lot to be responsible for. And now today you're at Lacework, and you've been there for three years or so now, and you're the chief privacy officer. Thank you so much for joining.
So do you want to share with our audience how you originally got into law? Yeah, absolutely. First of all, let me describe where I am right now and what we do, and let's give some context as to how I got here. Lacework is a cybersecurity cloud company.
And we protect customers against risks, threats, identity misuse, and code supply chain attacks. We use anomaly detection to stop both known and unknown threats, so both known threats and also zero days that may exist out there. And it's a very interesting place to be in terms of both the technology and where the market is going.
Particularly, it ties into many things I've done in earlier parts of my career, both on the technical and legal side, and lets me see how they all blend together. So what I do is a blend of technology, law, privacy, sales engagement, customer engagement, and a whole bunch of other stuff as well. And it's fun. It's intellectually a very interesting blend of things.
Yeah, and it's spectacular. As a security practitioner myself, I've followed Lacework, and I've used the products many times before, and I've always just been very impressed. And you guys set the standard for the industry there. Thank you.
So I'm curious, what relationship do technology, cyber, privacy, and legal have with each other? It's an interesting blend. So I'll talk about how I got through my career and why those things all blend together. So I started out as an engineer.
My undergrad is double-E, electrical engineering, circuit design, and so on. I came to the US in '99 as an engineer working for Cypress Semiconductor, doing product engineering, application engineering, and some code and system design. And as an engineer, I got into inventing. I was a successful inventor at Cypress.
I think I have 10 patents at this point from there. And that really got me interested in the legal side of technology. So as an engineer and inventor, I became a patent agent, which is somebody who's not quite an attorney, but is qualified before the USPTO to practice patent law, and then made my way through business school and law school and became an attorney. And my work was very tied to chip and software technology.
And really what interested me was the creative, the inventor aspect of people developing new cool stuff. The tech itself is important, but it's really the creative and innovative process that drives people to come up with these ideas, develop them, and put them into practice that interested me. So that's how I became an attorney. I practiced initially in a law firm as a patent attorney.
Then I went to Symantec and started getting more into security. And at Symantec, while still practicing as a patent attorney, I became much more involved in the security side of technology, which is really interesting because my area of practice is intellectual property.
So intellectual property: patents, including open source, trademarks, trade secrets, and so on. That's really the lifeblood of an innovation economy. But there are lots of people who want to steal that. And that's where security comes in.
And there are also lots of people who want to misuse the data, the personal data that is gathered. And that's where the privacy aspect comes in. These are all tied together through various threads around data security, data privacy, trade secrets, copyright, patents, trademarks, and other intellectual property, mask works for chips. Really, it comes down to this: you create something special, but how do you protect that something special?
How do you make sure, after you've spent many millions or hundreds of millions of dollars on R&D and put a lot of time and effort into it, that you avoid somebody just basically cloning it or knocking it off or putting counterfeits for cheap on Amazon or other marketplaces? And that's a real problem. Many companies who put a lot of time and effort into developing great products, and many individual inventors, small inventors working in the garage who invent cool stuff, have their product knocked off overseas, typically by very cheap manufacturing shops. They just clone the product and sell it for a fraction of the price on online marketplaces.
And that is actually a disincentive to innovation. It's a disincentive to progress. It's a disincentive to investment. So that's where the legal aspect comes in.
How do you protect this stuff? There are various ways you can protect it, including legal mechanisms: you can file patents on your inventions. A patent is a right to prevent others from using your invention within certain jurisdictions. People are generally familiar with the concept of patents.
You can obviously use copyright to protect written or artistic expression such as code, written articles, works of authorship, and so on. You can use trademarks to protect your brand, which is very important these days. And there are trade secrets to protect things that are secret: you want to keep them secret, you've taken adequate steps to protect them, and there's value in maintaining the secrecy.
Most companies will have some combination of these assets. And these assets can be worth an awful lot of money.
So if you're a knowledge company, basically an information company that doesn't have machinery or a physical plant or lathes and mills and typical manufacturing-type hardware, then really your value is largely in the intellectual property of your company. And in many cases, that can fit on a single USB drive or a small SD card, or just be thrown over the internet in a compressed file, and it's gone. Your entire investment in your IP and your R&D is jeopardized. So that's where the combination of technology and IP and legal all come together.
The other piece that I think is very interesting is that we process an awful lot of data, just in general, in the developed world. And when I say we, I'm talking about the US economy, the European economy, the developed economy in general, which has vast amounts of data. And much of that data is personal data about people and their lives and their habits and their preferences and their finances and everything else. Things that are sensitive and personal to them.
Yeah. When that data gets stolen, there is significant potential for misuse. If the data includes financial data, Social Security numbers, things like that, people's lives can be ruined. Their credit can be destroyed.
Their finances can be wrecked. They can have a very difficult time staying in a house or just renting a house. Or maintaining a job sometimes.
Yeah. So all of these are really negative human effects that can come from people stealing and misusing data, including personal information. And from the company's perspective, there are companies that get put out of business because they're unable to compete once their data is stolen and somebody is cloning their product for a very small fraction of the price. So it puts people personally in jeopardy or in a difficult situation.
It puts companies and businesses in jeopardy. It's bad for the economy. These are all areas where having good security, both technical security, physical security, legal protections, good privacy protections, all help protect the company's assets, the company's employees, the company's customers, and people in general. That's where this all blends together.
And that, I find, is a very interesting area to practice in, and one that actually has meaningful impact on companies and their activities and their ability to give people jobs so they can feed their families and have a roof over their heads. I love how the focus revolved around the human element and people's lives. And yeah, sometimes it's easy to get lost in all of this data and think of it as just ones and zeros, but that stuff is connected to our lives. So I appreciate that.
And your story about how you got into the law space, that really resonated as well. It sounded like you really enjoy that creative, getting those creative bubbly juices going. And that instigated a closer look at law and then IP and then, oh, what are these things? Patents.
And it just launched your career. And all of that has led up to exactly where you are today at Lacework as the chief privacy officer. And I just couldn't imagine a better role for someone like you with those passions, with all of that insight. I am just filled with the gratitude that there are smart people thinking about all of these difficult problems and that we're building the future that we all deserve.
And technology that's coming out of a place like Lacework has a very important role in that future. So huge thanks, Michael. Thank you. And it's an interesting place to be.
And look, we're all fighting the good fight in terms of trying to protect companies and people and their information and data. I think it's an important fight to have because there are a lot of bad actors out there who, for various motivations, are trying to get the data. I spoke at a conference this week, the Association of Corporate Counsel Cybersecurity Summit on Monday, which was actually very interesting because it touched on many of these topics. It was an excellent conference, and I thank the ACC for putting these together.
And there was a very broad range of attendees, including folks from US law enforcement, Department of Justice, military, CISA, and many other very senior, seasoned and experienced people who do this day in, day out. And they had some really excellent perspectives to share. And in terms of those perspectives, a lot of them revolved around actors out there that are seeking to steal data and seeking to misuse it for any number of reasons. And I won't name any countries.
There are places on the far side of the Pacific and places on the far side of Europe that are actively engaged in these activities. Some of it is physical or kinetic, some of it's in the cyber arena, some of it's in the gray space in between. And they have different motivations. Nation states tend to be very sophisticated, very well equipped, very motivated, and they're not after it for financial gain, at least not directly.
Right? So they're the folks who, frankly, are trying to break into your systems and steal your personal data, steal your code, steal your product designs, commercial data, anything that would be of strategic value to their nation in a hostile scenario against our friendly nations. That's the first type: the strategic actor who's not in it for the money; they're in it for the information.
The power, the information. Then you have other actors who are clearly financially motivated, often the ransomware gangs. And the ransomware gangs don't exist in a vacuum. They are sheltered by various nation states that tend to be less friendly towards the Western world.
They operate either with official or tacit protection and cooperation from the nation states in which they are present. And the ransomware gangs are largely about causing disruption, stealing data, and using that data for monetary gain. For example, I published an article recently in the ACC Docket on this, "Don't Let Your Company's Reputation Be Held for Ransom," which ties into this topic. This will be available for our listeners in the show description.
Thank you. Yeah. And it's just, it's an interesting overview of the area, but the ransomware folks are very much motivated to get into your system, steal your data. They want to get in quickly.
They want to exfiltrate your data very quickly. And then, for monetary gain, they'll often encrypt or lock your files and cause business disruption. They'll put a ransom note up saying, if you don't pay us within whatever period of time, we'll delete the key, and then we'll share your data publicly too and make you look bad. So it's a double extortion.
It's a double whammy in that, one, they disrupt your business by locking systems, which is an immediate tactical problem that you've got to solve because you can't ship product. You can't function. So many companies have been hit by ransomware. But the problem is it's not just a monetary issue.
There are companies that have absolutely gone out of business because they got ransomed and they didn't know what to do and couldn't recover. The data was deleted. The companies shut down. And that impacts people's jobs, their ability to have a paycheck, feed their families, and put a roof over their heads.
So there's definitely a human aspect. This is not just a technical problem or a technical attack. It's very much an attack on people's livelihoods. But beyond that, there are ransomware actors who are indiscriminate and who are going after hospitals.
UnitedHealthcare recently got ransomed; a large pharmacy provider was ransomed. I'm Irish. We had a situation a couple of years ago in Ireland where the Health Service Executive, the Irish national health service, was seriously ransomed.
And an awful lot of people were unable to get medical services. People got very ill or missed surgeries, or all kinds of bad things happened. Their personal data, some very personal health data, got stolen. And those are the human impacts.
That's very detrimental to individual humans. People who have nothing to do with the tech space, nothing to do with the security space. These could be people who don't even own a computer. Some grandma couldn't get her doctor's appointment or couldn't get her medication and may suffer serious ill effects as a result.
Someone's daughter and someone's mother. Exactly. This is not just some abstract technical thing. Not just a bunch of hackers banging on code.
This impacts real people and real lives. And in many cases, it has actually caused serious health issues, leading up to the deaths of individuals who couldn't get treatment, or an ambulance couldn't bring them into hospital. That is the negative impact here. So we've talked about the nation state attackers who want the data.
We've talked about ransomware attackers who want the money. And frankly, they don't care what they do to get there. And there are also other actors out there whose primary purpose is to get money by different means, which is to steal crypto you hold, or to use your resources to mine crypto and stiff you with the bill. And in many cases, countries are using those funds either for criminal activity or potentially to fund their own weapons development programs, and so on.
So there are just a lot of bad things that can happen due to ransomware and related criminal enterprises. And we've seen so many examples of that. The NotPetya attack, which was directed at Ukraine, shut down Maersk shipping. There's so much stuff that impacts people's lives; it impacts jobs.
People can't work in the factory because the parts don't arrive, and therefore people are furloughed from work and, again, miss a paycheck or two while they're waiting for this stuff to get resolved. Those are the human aspects caused by some of the bad actors here. And I think it's up to all of us in the security industry to work to prevent and block these attacks, call out the attackers who are doing it, and raise people's level of awareness about how to protect their data and also their privacy, which is often greatly impacted by some of these too. Right.
Privacy is part of it and no pressure for all of the security community that's listening. Well, it's part of all of our shared mission. It is. Absolutely.
It's part of all of our shared mission to be good actors, to keep people safe, to keep people's lives and personal data and dignity safe. Yes. Some of the data that has been stolen and shared is very personal, sensitive data, things where people have an expectation of privacy around their personal life or health or whatever. And it's getting stolen and dumped on the dark web for anyone to peruse.
That's not okay. Or bought and sold; this information has been bought and sold. A litmus test that I use every once in a while is I ask myself, is what I'm doing having a positive impact on someone's life? And it doesn't even matter if it's my fellow engineer, a business leader, an investor, a customer, a sales team person, or just the security community in general.
If I'm not having a positive impact on people's lives around me with what I'm doing, I step back and I ask myself, why? And like, why am I doing that? What am I doing exactly? So I love that the focus is on people.
That is what matters at the end of the day. I really resonate. It resonates with my values as well. And I think probably most of the security community.
Yes. Yeah. And I think, look, the majority of people here are out to do good and to do right. And we can all do this by helping secure all the systems, all the networks, all the activities, all the devices that we operate on or with.
Because any one of those victims could be our spouse, a grandparent, a parent, a child, a relative. Who knows? It's someone's somebody. It is.
Yeah, absolutely. Absolutely. It is. So at Lacework, with your role as the chief privacy officer, what might a typical day look like for you?
That's a great question. Every day is different. Every, I wear many hats. Privacy is one of the hats I wear, but I wear many.
And often it's time dependent and situation dependent. Typically, I'm an early riser. I'll typically get up at 4:30 or 5.
All right. So early. I thought I was an early riser, but. I'm generally up early.
I support Europe. So I get on, get a cup of coffee or two, check in on Europe, see what's going on and whether anything's needed there. Then.
Is that GDPR that you're checking? All kinds of stuff, yeah. That is absolutely one of them.
But just general security things, transactions, deals, requests from customers, requests from our sales folks, and so on. That's part of it. On a given day, I'll cover anything from intellectual property, patents, trademarks, open source, security questions, and sales questions to transactions, both on the sell side and the buy side.
Typically a fair amount of privacy requests, customer requests, speaking engagements. I tend to write and speak quite a bit as well. And that's, for example, podcasts like this often takes up some of my time. I deal with quite a number of compliance aspects too.
Sometimes employment related things, sometimes export related things. A whole wide variety. My base strength, I guess, would be in the area of intellectual property and privacy.
That's where I grew up as an attorney. But I cover a much broader range of things; I'm generally much more of a generalist these days, covering whatever comes in. And that's part of what keeps it interesting. There's a wide variety of questions.
Ultimately, we're there to take care of our customers. We're there to make sure that they are successful and they are safe. And I'm spending increasingly more time in a customer-facing or partner-facing role: educating, working with them, in some cases training, helping share knowledge and thoughts, and trying to raise their level of awareness of some of these topics. Particularly if I hear something interesting, like at the conference I attended recently, I'll often share that with customers or internal teams and help educate them on those areas.
Well, that sounds amazing and spectacular. I know you mentioned a little bit about how Lacework contributes to the security community and all of these important dialogues. But is there one thing that you would say Lacework is really famous for? What's the flagship product?
One thing we're traditionally known for is being really good at threat detection. That's where Lacework grew up. What's interesting is we have developed technology. Now, of course, everything we have is well-protected and patented.
We have over 200 patents and pending applications, which for the size of the company we are is pretty big. But a lot of it is around very careful and thoughtful early detection of intruders in your system. That's something we're really good at. And what's interesting is that's becoming increasingly necessary in the market that we're in today.
So if you look at some recent changes that have occurred in the U.S. around SEC cyber incident materiality reporting, there is now a tight timeline after an incident occurs: a fairly short window to analyze the incident and make a determination of materiality, which is a particular legal term defined by the SEC.
And then, within four business days, report to the SEC regarding what has happened. So the interesting aspect is, if you're in the middle of an attack, if you're in the middle of a ransomware attack and the stuff has hit the fan, your systems are down, your email is not working, your Slack's not working, your internal communication systems are not working, you can't get into your records, you can't get into things going on in your company. That's a very stressful, very difficult place to be when you're trying to make a difficult determination around whether something is material, and then have to file a report on that within a pretty tight window.
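The four-business-day clock described here can be sketched as a simple calculation. This is an illustrative toy only (it skips weekends but not federal holidays, and the function name and example dates are invented; in the actual rule, the clock runs from the materiality determination, not from the incident itself):

```python
from datetime import date, timedelta

def sec_filing_deadline(determination_date: date, business_days: int = 4) -> date:
    """Count forward N business days from the materiality determination.
    Naive sketch: skips weekends only, not federal holidays."""
    d = determination_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# A determination made on Thursday 2024-03-07 crosses a weekend,
# so the hypothetical filing is due the following Wednesday.
print(sec_filing_deadline(date(2024, 3, 7)))  # 2024-03-13
```

The point the toy makes is the one Michael raises: weekends don't buy much time, so a company mid-incident has very few working days to investigate, decide materiality, and file.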
And the SEC cyber incident reporting rule came into effect for most U.S. public companies on December 18th of 2023. So we're now about three months into it.
And we've noticed a sort of a pattern of companies who had an incident filing with the SEC saying, hey, we had an incident. We're not yet sure if it's material. We'll continue to investigate and update. And that's okay.
That's at least they're flagging something occurred. But the intent of the rule, the SEC's goal is to protect investors. So do investors have the information necessary to make decisions around, do I buy, do I hold, do I sell, pick your company stock? So the SEC wants to make sure investors are protected against fraud or against insider trading and many other things.
So it's not super helpful to an investor if a company files something saying, yes, we had an incident, but we don't know if it's material and we're still investigating. That just means something happened. We don't know if it's good, bad, or ugly yet. Probably not.
We don't know if it's bad and we don't know if it's ugly. So to a certain extent, being able to provide more detailed information sooner is good. But even better is the best scenario is when the bad actor gets into your systems, you detect them really early. That's key.
And if you can't, if you're unable to detect them early, at least limit what they can do to limit the last trade. And if you can do those two things well, you're more likely to not get to a point where you have to make a determination materiality and if you determine material of the SEC. So that's one of the things where I think this work is very helpful. And one of our strengths is that we're really good at catching early intrusions into your system.
And if you're unable to detect them early, at least limit what they can do, limit the blast radius. If you can do those two things well, you're more likely not to reach the point where you have to make a materiality determination and file with the SEC. So that's one of the things where I think Lacework is very helpful. One of our strengths is that we're really good at catching early intrusions into your system.
We have a feature called Composite Alerts, which basically takes lots of little breadcrumbs of data that typically would fall below the threshold of detection for traditional rules-based security. Most traditional security uses rules. You write rules: if this happens, do that; if that happens, flag it.
And rules have been helpful in the past. But the problem is, it's generally very difficult to write enough rules to catch every scenario. And if you set the triggering threshold too low, it constantly triggers and buries you under false alerts that aren't very meaningful. You end up with a haystack and you're looking for a needle.
So what we've done is build Composite Alerts that take very fine little breadcrumbs of data that we observe across our customers' systems. And as we observe many customer systems, you learn patterns: say, hey, we noticed these three little things happening, and when that happens, usually the next two things happen and then you get ransomed. So you're able to flag the pattern very early and flag it to the customer, such that they realize somebody has entered their systems.
And the nation state actors and ransomware gangs are often very smart and very stealthy about how they'll do it. They'll get into the system and lay low and move slow, spying across the system really slowly, just learning and observing. They'll learn what your data is and where it sits. They'll pursue privilege escalation bit by bit to step up and move across roles.
They'll use a lot of your existing tooling. So CISA put out some guidance recently, I think it was February 6th or 7th, on living off the land, which is basically using tooling that already exists within your system to conduct their activities in such a way that it looks just like normal traffic. So it's a very sophisticated approach. Yeah, exactly.
It's like ninjas creeping through your system very quietly. And in your shoes. They do the creeping in your shoes. And so it looks like familiar traffic.
That's a good analogy. That's exactly it: they're using your shoes to creep through your system. We can pick that up. We can pick it up very early.
So if you look at the recent CISA guidance on this topic, they call out specifically that you should be using anomaly detection. You should be using event logging. You should be able to detect these things really early when they occur in your system. And we can do that.
And I'm sure you've heard the old expression, an ounce of prevention is worth a pound of cure. That's exactly what this is about. You want to detect the bad actor really early. You want to store your logs and make sure they're protected in a safe, out-of-band location where the bad actors can't delete them.
In many cases, the first thing they'll do is go in and delete your backups so you can't restore from them. Or they'll corrupt them, so the backups will exist, but the data will be corrupt.
It's only when you try to restore them that you realize, oh crap, this thing's not usable. And they'll delete event logs. If they delete your event logs and you store them locally, it's very hard to figure out what happened. And that's where we can store all this stuff.
We can store it securely, separately, and basically ensure that the bad actors can't get in and wipe it. CISA also recommends you maintain baselines of normal activity and reduce privileges to the lowest level necessary for the job. And we do that as well with CIEM, cloud infrastructure entitlement management, which basically allows you to look at identities, make sure that you only grant the entitlements necessary for the task, and remove or scope down identities that are either unnecessary or overly broad. And that applies to both human identities and machine identities.
So think of it: many companies will have accounts for former employees or contractors or temps or something. And there isn't always good discipline around shutting those down when the person leaves. Or sometimes the person just goes away for two or three months and comes back, and the account stays active and unused during that time. But those can absolutely be vectors for attack.
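The stale-account problem described here can be sketched as a periodic audit. This is a hypothetical example (the account inventory, field names, and 90-day idle window are all invented; a real CIEM tool works from live cloud identity data, not a hand-built list):

```python
from datetime import datetime, timedelta

# Hypothetical inventory of accounts with last-login timestamps.
accounts = [
    {"user": "alice",     "last_login": "2024-03-01", "active": True},
    {"user": "bob-temp",  "last_login": "2023-11-15", "active": True},   # contractor, long gone
    {"user": "ci-runner", "last_login": "2024-02-28", "active": True},   # machine identity
    {"user": "carol",     "last_login": "2023-09-02", "active": False},  # already disabled
]

def stale_accounts(accounts, as_of, max_idle_days=90):
    """Flag active accounts (human or machine) unused past the idle window:
    prime candidates for removal or scoping down."""
    cutoff = as_of - timedelta(days=max_idle_days)
    return [a["user"] for a in accounts
            if a["active"] and datetime.strptime(a["last_login"], "%Y-%m-%d") < cutoff]

print(stale_accounts(accounts, as_of=datetime(2024, 3, 15)))  # ['bob-temp']
```

The already-disabled account and the recently used machine identity are skipped; only the long-idle contractor account surfaces for review, which mirrors the hygiene discipline Michael describes.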
And you want to make sure that your identities are very tightly controlled and scoped down to only that which is required for the task and nothing beyond it. The other aspect CISA points out is that you want to make sure you have alerts that actually matter. One of the problems right now, and I think this is common across the industry, is that people are just getting buried under alerts. And when you're trying to sort and find the needles in the haystack and you've got multiple haystacks of alerts, that makes the life of security people difficult.
It's also probably frustrating and a recipe for burnout when you're just plowing through false alert after false alert, trying to find the one that matters. So one of the things we do is really tune down the alert noise, by up to 98 or 99%, and give you that which matters, that which is actionable today, so you can hit the things that matter fast. And when you look at security, one of the biggest problems the good guys face is that time is against them. It typically takes 250 or so days to detect an intrusion in a system, particularly if you're dealing with a nation-state actor that's using the low-and-slow technique.
Yep. It takes months and months. They could be in there creeping around for months and you'll never know. They're just doing recon and things.
They're figuring out what's valuable. Exactly. They're being exceptionally stealthy and careful. They'll creep around and do all this stuff.
You've got to catch them early. And that's where using something like Composite Alerts allows you to catch them very early, sometimes within hours of them getting into your system, and limiting and basically blocking them or shoring up the system. The other aspect is having good identity management: using CIEM or some other solution allows you to limit what they can do. That's about keeping the blast radius down.
So you have a carefully segmented set of privileges and a carefully segmented network, such that even if they get into one piece, they can't get into the rest. Think of it like ships or submarines with multiple partitioned hull compartments, where if one gets breached, you can seal it off and the vessel will still continue. And you can get home. Yeah.
Yeah. And I think that's exactly where this advice, having good identity management, having good segmentation, having good privilege management can actually limit the damage the bad actors can do. And it can contain them to different pieces of the system so they can't creep out throughout the entire infrastructure. And then you've got to obviously look at user behavior, look at entity behavior, look at what is normal for a human actor, look at what's normal for a machine actor, and learn from that.
So when you see something that deviates from the norm, you can flag it early. So even if somebody gets in through a compromised user credential, which might be a valid credential, you want to make sure that the behavior is normal for that kind of user. Right. I'm an engineer.
I can write code. I'd probably write caveman code if I wrote it today because it's been so long since I've done it. But I could. But in my role, I shouldn't be poking around in our code system and I shouldn't be making changes in there.
Now, of course, I don't want to do that. But if I even attempted it, that would be an unusual thing for me to do. And it should be flagged as, hey, that's anomalous, this person has no business doing that. That's user behavior.
And of course, there's machine behavior, where certain machines shouldn't be communicating outside of certain paths or trying to engage in certain activities. These are all things you have to learn: using machine learning to observe the environment and learn what's normal, and then flag what's not normal. That's key to protecting the system against malicious bad actors who are determined, who are skilled, frankly, and who are stealthy.
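The learn-then-flag loop described above can be sketched very simply. This is a toy illustration of per-identity behavioral baselining, not Lacework's actual model; the user names and action labels are hypothetical, and a real system would use statistical or ML scoring rather than an exact set of seen actions.

```python
from collections import defaultdict

# Toy sketch of user/entity behavior baselining:
# record what each identity normally does, then flag deviations.
baseline: defaultdict[str, set[str]] = defaultdict(set)

def observe(user: str, action: str) -> None:
    """Learning phase: fold an observed action into the user's profile."""
    baseline[user].add(action)

def is_anomalous(user: str, action: str) -> bool:
    """Detection phase: anything outside the learned profile is flagged."""
    return action not in baseline[user]

# Learn what's normal for a privacy officer versus an engineer.
for act in ("read-policy-doc", "review-dpa"):
    observe("privacy-officer", act)
observe("engineer", "ssh-prod-box")

# A privacy officer pushing code is out of profile and gets flagged,
# while the engineer's SSH session matches their baseline.
assert is_anomalous("privacy-officer", "commit-to-repo")
assert not is_anomalous("engineer", "ssh-prod-box")
```

Note that even a valid, compromised credential is caught by this shape of check: the detector cares about whether the behavior fits the identity's history, not whether the login itself succeeded.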
This is not just some kid using social engineering to grab a credential and do something silly. This goes beyond. This is real security. This is where the rubber hits the road.
I remember a fun story. We used Lacework at Pure Storage back in the day. And there was some production outage or emergency. I had to get into some box.
I used a break-glass credential to get into some box in some environment; I don't even remember the details. But what I do remember is that Alex, who was sitting right next to me, also on the security team, as soon as I SSH'd, like five seconds later, he turned to me and said something like, did you just go into this box as root? And I was like, whoa, are you looking at my screen? But no, it was just the technology.
It was just Lacework flagging the access as unusual. And that's what's key there, John. It's hard to write rules for every possible scenario that could occur. You can cover some of them.
But when you're dealing with bad actors who are skilled and determined and have an unlimited set of resources and people to work on this problem and attack you, rules aren't going to cut it. They can be helpful, but you need, I believe, actual learning of what's normal in the environment, and flagging of deviations. We think that's the single most effective way to quickly identify what's going on before the bad actor can exfiltrate your data, ransom you, or conduct other bad activities. As you were going through a lot of the new controls that the SEC's put into place for publicly traded companies, a lot of that actually resonated with what's required for just a healthy security operations posture.
Also, if you're going after a certification, like a SOC 2, type 1 or type 2, or your ISO 27001, I think a lot of that stuff overlaps in that space. So if you're going. Oh, I was going to say, what's really interesting here, John, is that if you remember back in 2018 when GDPR was coming into force, like there was a mad scramble by everybody getting ready for GDPR. And there's huge amounts of consultants out there and everybody offering GDPR services and data identification and all this kind of stuff.
But what GDPR did was the European Union took their market power and said, we're going to use our market power to force change for good. Whether their motivation was to slow down US tech companies, as some people have alleged, or to just improve privacy in general, which I think is a good thing. I think it's a good thing too. It did do a good thing.
Yes. What GDPR did was that it forced companies who wanted to do business with or in Europe to take privacy seriously. And prior to that, I think a lot of companies had not taken it very seriously because they weren't forced. So GDPR used a combination of a threat of very significant fines and regulatory action and also, frankly, embarrassment by the companies, putting their reputation at risk if they didn't comply with these rules.
So all of a sudden, companies started taking privacy seriously. And now I think we're in a much healthier environment from a privacy perspective, where companies across the US, companies in Europe, anybody doing business with or in Europe, has to follow these rules. And generally, I think companies are much better about following them now than they were five or six years ago. Absolutely.
Absolutely. And they just, the EU just did it again with all of those AI regulations. I will commend the EU for what they're doing here. They're also doing it for security.
Yes. So what's really interesting is there's a new law called NIS-2, which came into effect on October 17th of 2024. And a second one called DORA, the Digital Operational Resilience Act, which came into effect on January 17th of 2025. They're effectively counterparts. And NIS-2 expands the original NIS directive, which was somewhat limited in scope, to really enhance and broaden what is covered.
And companies that are operating in critical or important industries, things like energy, transport, banking, water, digital infrastructure, which many of us work in, that's things like cloud, telecom, data centers, tech, trust services, all the things we generally work on in Silicon Valley, including space, postal and courier services, waste management, chemicals, food manufacturing. Anything that's required to make a modern economy run now falls under NIS-2. And it means that companies that wish to do business with or in Europe need to follow these rules.
And the rules are quite prescriptive in terms of requiring security technology, risk analysis, vulnerability analysis, AI for detecting anomalies. All of this stuff that is generally considered good security practice and hygiene has gone from being a nice-to-have recommendation to a need-to-have, required to do business in Europe. The U.S. is following suit. So the SEC cyber incident materiality reporting, one of the reasons for that is to basically ensure that companies flag to investors if they have an incident, if it's material, and also share information with the SEC to make sure people are aware that these attacks are happening. The only way you can really target these is by understanding the pattern. Where are the attacks happening?
What sectors are they in? Yeah. You know what's crazy is when businesses do business with other businesses in the B2B space, there's oftentimes like a DPA involved, a data protection agreement, especially in Europe. And there's already clauses around like required disclosure times.
And often you'll make a commitment to have a public announcement or at least inform customers about potential security incidents that may have happened. And it's just crazy that we need legislation to follow through and to make this a little bit more mainstream, a little bit more central, bringing it to the B2C space and the consumer space, maybe, is what's happening here. Yeah. I actually think you're absolutely right there, John.
But it's also the case, I think, that companies tend to be a lot more afraid of regulators than they are of other companies. And look, if you agree to something, you should always comply with what you agreed to. That's part of the whole purpose of a data protection agreement. It's a contract, yeah.
Exactly, it is. But I think many companies perceive a greater threat from regulators. If a regulator can get on your case, go after your management, hold your management liable, or block your management from operating in the industry, that suddenly has much greater teeth than the prospect of contractual damages around a DPA breach. So do you think errors and omissions insurance is going to go way up?
We'll see, we'll see. I do think the bigger issue is that management of companies are going to take this far more seriously than they may have in the past. And security has often been considered, oh, it's a cost, it's expensive. Yeah, but it's a lot cheaper than having your name dragged through the papers, paying a ransom, and having customers lose trust in you and move their product or service away from you.
No, I heard, oh, go ahead. I was going to say, so that's NIS-2 in Europe. But there's also DORA, which applies primarily to the financial space: banking, brokerages, exchanges, insurance, anything financial. It has almost a superset of rules, particularly for the financial space. And some of them are very prescriptive around technologies you have to use, making rules that you need things like anomaly detection, risk and vulnerability management, a whole variety of areas that were considered nice-to-haves, but now they're need-to-haves.
And we're actually seeing a lot of interest from partners, from customers, just a lot of inquiries. There's a lot of education and discussion going on around this topic. And it's great. It's about time, because people are realizing that security really does matter.
And it's not just about money. It's not just about data. It's about people. It impacts people's lives.
It impacts people's health. It impacts their privacy and their personal life. It impacts their investments and their companies and their money they spend on R&D. So these all, the security rules all go back to protecting people, protecting investments, protecting economy, protecting trust.
That leads into a perspective on security that I bump into all the time. And that's that security is a cost center. And it can be. And oftentimes that's how it's viewed from the business side of the house.
I often find it's much more productive to see security as a growth opportunity for the business. And so what that means is like you dot your I's and you cross your T's in the security space. All of a sudden more markets are opened. You can sell into healthcare.
You can sell into defense. You can sell into finance. Yeah, that's exactly true. And it's also a case you build up customer trust.
So, I head up privacy at Lacework, and one of the things we care an awful lot about is privacy by design. I work very closely with our engineering and tech teams on building out products in such a way that they are privacy-protecting products. And we take that philosophy seriously.
Personally, I think security and privacy are two sides of the same coin. They're closely related. They absolutely are. Security is about what you can do if you access the data.
Privacy is about what you should do if you get access to the data. What's the right thing to do? Right. It's often a question that I've heard from founders.
It's like, when is the right time for a startup that is growing to actually start grappling with this and maybe pull in a security team? And I've always been of the opinion.
It's as soon as you have private data. That's when that starts. That's when you need to take it seriously. Exactly.
As soon as you go from having only what I'll call economic data, code, which is important, of course, for the investors, to actually having data on people, be they your employees or your customers or your prospects or your users. It's their data, and they're entrusting you with it. You need to handle it properly. And if you don't, you're violating their trust and potentially putting your company's reputation at risk.
And that's problematic. Definitely. So if we fast forward into the future just a little bit, would you describe for us what does success look like for your team at Lacework and for Lacework in general as a company? I think at a high level, our goal is to keep our customers secure, keep our partners secure and keep their customers' data secure.
And I think if we can do that well and help protect them against nation-state attackers, ransomware gangs, crypto miners, other bad actors, digital vandals, other people trying to get in and disrupt their systems, that's success. And do that at scale, do it repeatedly, and do it as widely as possible. That's spectacular. So more of the same, just continue solving that problem.
Yes. Continuing to solve the problem. Giving back to the security community. Yeah.
And doing it in a way that protects the people. Because it all goes back to people at the end. Either the employees of the companies, the users of the companies, their users or their users' data. Keeping trust and making sure the data is protected and appropriately used, particularly personal data and the economic data.
So things like code and trade secrets and confidential data. Yeah, absolutely. And there's such a wide range of threat actors out there that we've discussed and talked about. Yes.
One of the really interesting bits is, speaking about people, is people change over time. And if you had an opportunity to go back and meet a younger Michael, I'll let you decide how far back you'd like to go. But if you could meet your younger self, would you have any advice for yourself? Yes, I would.
When I was younger, I tended to be more introverted. And I wasn't as active in networking or getting out, or speaking, publishing, or getting in front of people. I would say to anybody at any stage in their career: get out, network, engage with people, make connections. Because technology is great, but the business we're in is generally all about people, who you know, and your ability to help folks and influence them.
So I think that's it. Make connections. As I jokingly say, get out from behind your keyboard and get in front of people. And I do think that's helpful advice in general for anybody on any career path.
I would echo that 100%. If you take the mentality that you do bring something interesting to the table and that other people would love to meet you, then it becomes so much easier. That's one of the mind tricks I've used over the years, because I was one of those very introverted engineers, until at some point I had that epiphany.
It sounds like you also had it: what actually matters at the end of the day is people. Yes. Yeah. I also like educating on the topic.
So I really enjoy just getting in front of people and teaching about this stuff. If I wasn't doing this, I'd probably be a lecturer or teacher somewhere. I do enjoy that aspect of it. Just don't enjoy writing papers.
Yeah. I don't enjoy judging people either. But Michael, this has been absolutely spectacular. Thank you so much for all of the valuable insight, your time, the energy.
And would you like to leave our listeners with any final words of wisdom? So one thing I advise, I mentor younger attorneys and students who are in law sometimes. And one of the things I advise is just continually learn. Always keep learning.
Always keep open to new ideas. Keep learning, advancing your career, advancing your qualifications. I do try to write on various topics, just to share things I've learned. There are a couple of recent articles that are probably relevant to this audience.
One is on product privacy by design. It's called Product Privacy Done Right. We can link that below. The other one was the ransomware article, Don't Let Your Company's Reputation Be Held for Ransom.
And I think those are just viewpoints on things I've observed. And there's, if you will, a whole hopper of articles on my LinkedIn, if people want to go read them, on intellectual property and legal operations and things like that. But I do think that when we develop skills, or knowledge, or just useful tips, it's a good thing to share them.
Don't hoard them. Share them with others who are sometimes more junior and also could benefit from learning from them. Yeah, that's great advice. Network.
Use LinkedIn. Connect with Michael. We'll put a link to your LinkedIn. We'll put a link to all the articles.
And let's get out there and start talking with each other. That's amazing, spectacular advice. Thank you. And thank you for the opportunity to speak here.
It's always a pleasure. I greatly enjoyed working with you in the past, and it's great to reconnect here. The pleasure was a two-way street, and I have all the gratitude in the world that there are very smart people out there working on these tough problems. And I look forward to building the future together with the entire security community.
And also a huge thanks to all of our listeners for tuning in to another episode of the Security Podcast of Silicon Valley, a Y Security production. I'm your host, John McLaughlin, wrapping up the show with Michael Moore. Thank you so much, Michael.
Thank you, John.