75. How to Use AI Without Giving Up Your Data (with Jonathan Mortensen)

Hello, everyone, and welcome to another episode of the Security Podcast of Silicon Valley. I'm one of the hosts, John McLaughlin. I'm joined today by Sasha Sienkiewicz, our other host. And we have an amazing guest, Jonathan Mortensen, the founder, president, and CEO of Confident Security.
Thanks, John. Thanks, Sasha, for having me. Welcome, J-Mo. So I know starting a company can be a little bit challenging sometimes, and you got to be a little bit crazy to do it.
And it's amazing that you're starting one here in the security space, and it's kind of the intersection of security and AI. Maybe you'd like to share a little bit with our listeners about exactly what prompted you to start Confident Security and what you guys do there. Yeah, thanks for asking. Well, first, I would say I've always been entrepreneurial, and this is my third one.
I sold my last company to Databricks. This one was very much about privacy. And I think that for many years, there's been kind of two sides of the debate around privacy. It's like, well, what do you have to hide?
Privacy is not important, whatever. I think there are now a lot of proof points that say privacy is important, and it's only being magnified by AI. The reason why is we're worried about social security numbers leaking and all of your passwords leaking. But now your privacy is even more important, because the data that you're giving to the AI could be used to train the AI, right?
And maybe that data is private, like it's a healthcare record, but maybe that data is also how you do your job. And so training the AI to do your job is maybe something that will be on the minds of certain people. And it's certainly maybe on a personal level, but for businesses, they still have highly proprietary data sets that are internal, and they don't want those to be used to remove their competitive advantage. So when I left Databricks, I was very interested in privacy.
And I worked at a cybersecurity company before, and I just saw that there wasn't a large model that you could control that gave you actual privacy and security guarantees. There was no option out there. You could use Meta if you wanted a large open model, but there's no privacy there. You could use Apple, but it's only their models.
You don't control it. And so I said, why? Why shouldn't there be something that is really privacy first AI? And that's what we built Confident Security to do.
So if we were to boil it down to a single-sentence problem statement, would it be: you lose control over data and how the data is being used once you send it to an LLM vendor or LLM provider? Yeah. Once you throw it over the wall, you have no idea what's happening behind the scenes. And they can give you assurances, but those assurances aren't always followed through on.
So, you know, once you throw it over the wall, who knows what they do with your data? So in other words, there are a lot of contractual commitments and obligations that a vendor might claim, or a vendor might state that there's zero data retention. But from a technical point of view, there is no guarantee that that will be the case. That's right.
And the incentives are not aligned toward that, right? Right. It's not just the contract saying zero data retention; OpenAI has an incentive to train on the data, right? That's going to be their core competitive advantage as, you know, all of the open data becomes less valuable and gets used by all the vendors.
You mentioned a very interesting point about enterprises, and enterprises having lots of proprietary data. That data is extremely valuable. We started talking about data being the new oil or the new gold.
And today, when these enterprises want to use LLMs or AI technology in general, they often have to deploy something in-house in order to guarantee that the data is not being trained on, or that there's no retention of that data. But is there a better version of that approach, or a better approach in general? Yeah, I mean, there's a variety of approaches. Confident Security is doing an approach that allows those enterprises to still use a remote service, but have a guarantee, a technical guarantee, about what's happening with their data.
If you're Disney or you're Pfizer, a contractual promise probably isn't enough for your proprietary data. And so, yeah, they either have to buy a bunch of GPUs and run it themselves, or figure out, you know, some other service that can manage that all for them. And that's what we do, right? We make it really easy for them to run remotely, but have real technical guarantees.
What I've heard from folks is that it's easy if you just want to take an open model and run it internally on your data. Easy. Like, just a simple open model, it's like running Ollama. But the second that you want to do anything more complicated, you have a lot of different model weights, you're trying to serve it to everyone in your organization.
Any sort of production-level thing, and suddenly that little VM with an attached GPU that was running Ollama is no longer what you want. And you end up spending a lot of money for that. And for good reason; I think there's a lot of value to unlock. There are scale questions, but it almost sounds like at Confident Security, you're taking, you know, the services that are offered, maybe by Bedrock, maybe Vertex AI from Google, or maybe even Fireworks.ai, and just using rigorous security controls, technical security controls, to provide a layer of privacy, so that people can confidently use the service without worrying about, oh, what are the legal ramifications going to be? Where's my data going to end up? Because at a very core level, it's impossible for Confident Security to see the requests, the contents of the requests coming in, right? Yeah, I've been kind of not directly answering what the product is yet.
I've kind of been motivating the problem. But yes, we essentially make a guarantee that we can't see the data. And our liability and indemnification clauses reflect that, right? So we have unlimited liability and indemnification for data breaches in our contracts, which is unheard of, but...
That's what I was about to say. That's totally unheard of and unprecedented.
But we can't see the data. And it feels like, if we say that we have a technical control and our contracts don't reflect it, what's the point? So our product is based off the same reference architecture as Apple's Private Cloud Compute. We call it a provably private AI inference engine.
And we use all the same techniques that Apple's PCC architecture uses. And I'm happy to describe what PCC is for some of the viewers. But yeah, that's...
So you can think about it as: we can't see your data, we can't leak your data. Only you can see it; only you can encrypt and decrypt it. But we can run the model.
And in fact, we can also guarantee that the model is private. So if you had a custom model you wanted to run at scale, we could guarantee that that remains proprietary too. That sounds really powerful. I know that there's a lot of interesting, fine-tuned models out there, a lot of them proprietary.
And to double-click on the problem... So if I'm a founder of a company, you know, and we have an AI innovation, a new AI product out there, and I want to be able to guarantee to the customers that I'm selling to, maybe those are B2B customers, maybe it's a regulated industry, you know, those large enterprise deals.
And I'm being bombarded with questions around privacy and security guarantees and technical controls to help protect all of that sensitive data that's passing through my innovative AI startup. I could use Confident Security to help gain a lot of those technical controls at the LLM level. Exactly. So if you have those concerns from your customers, you're in healthcare, finance, or legal, or you're selling into healthcare, finance, legal, you can swap your endpoint, right?
We have a standard OpenAI API endpoint, right? Everything's standard. You can swap your usage of any other model provider to us and get those amplified liability and indemnification clauses, as well as the technical controls that say, essentially, we're beyond compliant, right? Like, we can't see the data.
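The endpoint swap described here can be sketched in a few lines. This is a hypothetical illustration: the base URL, model name, and API key are placeholders, not Confident Security's actual values; the point is only that an OpenAI-compatible request shape makes switching providers a one-argument change.

```python
import json

# Hypothetical base URLs; the real Confident Security endpoint may differ.
CONFIDENT_BASE_URL = "https://api.confident.example/v1"
OPENAI_BASE_URL = "https://api.openai.com/v1"

def build_chat_request(base_url: str, api_key: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions request. Because the API
    shape is standard, switching providers is just a base-URL swap."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "some-model",  # whatever model the provider serves
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Same application code, different provider: only the first argument changes.
req = build_chat_request(CONFIDENT_BASE_URL, "sk-...", "Summarize this contract.")
```

The application logic, prompts, and response handling stay untouched; only the URL (and credentials) move.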
It's effectively like we never saw it. So any sort of compliance framework, there's no at-rest, you know, things to worry about. There's no access control to worry about. And those are kind of the fundamentals of most of the compliance stuff, which is like, where does the data end up?
Who can access it? And the answer is no and no. Amazing. And then if I have this amazing team of researchers and they're churning through new models and we're fine tuning and we're training, you know, our own in-house models, I can actually upload those models into Confident Security.
You can. It's a white-glove service right now. Yeah. So it's not as easy as just clicking a button in the UI.
But yes, in fact, you encrypt the weights yourself and then give us the encrypted weights. And then when you make requests, you include that decryption key so that the confidential environment can use those weights without ever showing us the weights. And I can see how this is absolutely critical for B2B deals. But what if I was building something that was more B2C aligned?
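The weight-upload flow J-Mo outlines can be sketched roughly as follows. This is a toy illustration of the idea, not the actual protocol: the XOR "cipher" stands in for real authenticated encryption, and in the real system the key would itself be wrapped so only the attested environment can unwrap it.

```python
import secrets

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR keystream: illustration ONLY, not a real cipher.
    A real deployment would use authenticated encryption (e.g. AES-GCM)."""
    stream = (key * (len(data) // len(key) + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. The customer encrypts the proprietary weights locally.
weights = b"\x00\x01proprietary-model-weights..."
weight_key = secrets.token_bytes(32)
encrypted_weights = toy_stream_cipher(weight_key, weights)

# 2. Only the encrypted blob is uploaded; the provider never sees plaintext.
# 3. Each request carries the key (in the real system, wrapped so that only
#    the attested confidential environment can unwrap it).
request = {"weights_key": weight_key, "prompt": "classify this record"}

# 4. Inside the confidential environment, weights are decrypted for inference.
decrypted = toy_stream_cipher(request["weights_key"], encrypted_weights)
assert decrypted == weights  # round-trip succeeds only where the key is usable
```

The design point is that the provider stores and serves ciphertext it cannot open; decryption happens only inside the confidential environment.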
I could imagine that there's going to be some use cases out there too. Yeah, I think people are increasingly sharing increasingly private data, right? Sensitive personal data because of the value of the LLM, right? And I think partially because we anthropomorphize it, it's like a little more human.
And the interface is so human. Like, oh, I could just share this picture, or I can just share my agenda for the week or whatever. And we're, I think, a little more free in that, for a variety of reasons. The signal that I've gotten is that consumers still don't care that much about privacy. Obviously, people use WhatsApp, people use Signal, people use Telegram, although I would not say that Telegram is anywhere near as private.
But that's still a small proportion. A lot of people, right, if you have to make a trade, you want something now, you need something now versus dealing with privacy consequences later, including getting your data trained on, you're going to make the trade as a consumer. So we would be happy to help any B2C company who says, hey, we actually want to make really strong guarantees to our customers or where security is a primary feature of our product. It would be awesome, for example, if we were in Signal.
And then it's kind of like, if you think of Signal as end-to-end encrypted messaging, we're Signal for AI, right? An end-to-end encrypted pipe to an AI model. We'd be happy to help. But we haven't been targeting that audience. I'd love to be in a world where everyone gets this level of privacy, where you're controlling what data you're giving to whom. And the default is no one else can see it but you.
Signal for AI. I love that. That's just a nice little one-liner zinger. Very recently, privacy has become front and center.
Maybe it's not front and center yet, but it's becoming more and more popular. People are starting to talk about privacy a lot more often. When you see Apple's billboards, privacy is almost always the first word on the billboard. Why do you think that is the case?
Why do people and companies care about privacy more than they used to 10 years ago? Well, Apple does it to differentiate itself in the market. I think that's one major thing. And because they're a premium product, they're not getting value from your data, or they don't need to get value from your data, right?
When you sell something cheap, you're essentially reselling the user to someone else, for ads or other reasons. I think that's one thing. There have also been a ton of high-profile data breaches over the last couple of years. Now, the Snowflake breaches weren't really Snowflake's fault; it was more around two-factor enforcement.
But that was pretty massive in terms of the amount leaked. Earlier this year, DeepSeek leaked a million-plus records of chat logs with DeepSeek. That's one thing. And like I said before, I think there's this heightened awareness that there's such direct value in your data.
Like, if I let you watch how I work every day and give up that privacy, you can train on it. And then, hopefully, not put me out of a job, but that's the risk. And I think some people haven't been explicitly saying that. I was actually talking with someone recently who wrote a piece on exactly this: every time you make that trade, you trade privacy for some future loss, right?
It's more expensive than you think. So, I think that's why privacy is becoming more important. But I don't know what you guys, what have you guys heard? There is a saying that if you don't pay for the product, you are the product.
And often, if you get a service for free, you usually trade something for that free service. And usually, that's your data. We hinted at this a little bit earlier: data is the new gold, it's the new oil, it's the new trade currency. And that is especially true in this age.
We're recording this in 2025. There are a lot of papers out there that talk about the fact that we've exhausted all of the publicly available information for the training or for the future training of the models. And at this point, it's the proprietary data that very few companies have their hands on. That is extremely valuable.
Yeah, I think I just saw, I mean, YouTube is a great example, right? Google's got a huge advantage. They have a huge amount of YouTube data. Some of it's publicly available, but certainly they're in an advantageous position to use it.
But I heard that Meta is now going to start training on your feeds, all of the private stuff maybe that you've sent in your chats and other things, because they need a differentiation. It's funny, because I think everyone, or for me at least, I was like, well, AI is changing everything. We've got these new crazy models; OpenAI's and Anthropic's models are incredibly good. But in five years, the value that OpenAI and Anthropic are going to hold is the data that they've captured from the users using it right now that the competitor doesn't have.
And the models otherwise would be identical. And the reason why you stay is because of your data: the special context that they have about you and the special training data that they have. So as always, data remains the proprietary thing. It's absolutely becoming more and more valuable every day with all of the new opportunities.
And we're exhausting the amount of publicly available data. And so now all of the private data is becoming much more valuable. And maybe companies will feel compelled to change terms of service, maybe monetize all of the data. But Confident Security is putting itself in a position where it will not be able to retroactively change anything; anything that comes in stays ephemeral and protected with cryptographic mechanisms that ensure privacy both today and well into the future.
Yeah, it would be impossible. We don't hold the private keys; the private keys are generated inside a confidential environment we can't access. And so once the data is encrypted with those keys, it can only be decrypted in that confidential environment, and that environment is thrown out after the request. So yes, we couldn't log the data and then decrypt it later on.
Unless we had a giant, you know, quantum computer in the future. But that's several threat models deep. Sure, sure, sure. You know, I guess everything always comes back to the threat model asking the question secure against who or what.
And for us, we're saying it's secure against us, right? The primary threat model that we're selling as solved is that we can't tamper with your data. And so all the things that we do make it so that it's as hard for us to see your data as it would be for any, you know, third-party attacker. Yeah.
So essentially you're saying nothing in the infrastructure itself can see inside the confidential compute module. Yep. And what's inside the confidential compute module is attested to. So we know exactly what's running in there, and you can check what's running in there, and what's running in there does not log all of your data.
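The attestation idea, that you can check exactly what code is running inside the confidential environment, can be sketched as a measurement comparison. This is a simplified illustration with hypothetical values; real TEE attestation also involves hardware-signed quotes and a certificate chain from the chip vendor, which are omitted here.

```python
import hashlib

# The published, independently audited measurement of the inference server.
# (Hypothetical value; real measurements cover the whole software stack.)
AUDITED_MEASUREMENT = hashlib.sha256(b"inference-server-v1.2;logging=off").hexdigest()

def verify_attestation(reported: str) -> bool:
    """Accept the environment only if it reports exactly the audited code.
    Real schemes (TPM / TEE quotes) also verify a signature chain from the
    hardware vendor, which this sketch omits."""
    return reported == AUDITED_MEASUREMENT

# An environment running the audited build passes...
good = hashlib.sha256(b"inference-server-v1.2;logging=off").hexdigest()
# ...one running a build that logs requests does not.
bad = hashlib.sha256(b"inference-server-v1.2;logging=on").hexdigest()
assert verify_attestation(good) and not verify_attestation(bad)
```

The key property is that even a one-byte change to the server software (say, enabling logging) changes the measurement, so the client can refuse to send data to it.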
That's the most important part. You actually enable large enterprises, or any size business, that understand they have to evolve and adapt to the new technology that is taking over. Everything that we touch on a daily basis is using some flavor of AI. And if you are an existing company and you would like to participate in the future economic benefits of using AI, you have to start adopting.
But maybe there are not a lot of good options to do it safely and securely, in order to protect the crown jewels, the data sets that you have. Hence, you enter the market with the company and the product that you guys built. Right. Yeah.
Essentially, it's like you need the value of AI, but you're blocked from adopting it because you don't want to give out the crown jewels. You can use confident security and not have to make that trade off. Now, just to be clear, we charge a little bit more on a per token rate for the AI. But I think that that upcharge is way less than the cost of giving up your proprietary data.
That's essentially the way I think about it. Share with... Oh, sorry. No, it's good to pay for privacy, I think.
It aligns incentives. It absolutely does. You know, I look back at just the macro view of the internet: Facebook came out, and MySpace even before all of that, and everyone was just putting stuff out on the internet. No one had any idea.
Like there was no foresight into looking into the future and, you know, realizing all of that could be harvested in the future by these AI models that we have today to actually train models against your voice, against your tone, you know, the essence of who you are. And I think a lot of people put a lot of stuff on the internet without realizing the long-term implications there. And now it's all out there. But it doesn't mean like we have to continue down that road going forward.
We can be much more privacy conscious, security aware. And I really feel like that shift is happening right now, right underneath our feet. People are more aware of what's going out on the internet, more concerned about their privacy, more willing to pay for a service instead of being the product, you know, with advertising and all of that connected data largely being used to advertise back to consumers. I think people are just done with it and are happy to pay for services that add value to their lives, and then professionally as well in the workspace.
So I totally get it. I mean, you start to see with AI, the trade-off is so much clearer, in that if I pay 20 bucks a month for something like Augment, the value is so high that the business can very easily justify it. And so it's like, okay, I'm making a very clear trade, and I know I want to pay a premium for this thing because it just gives me so much value. On the consumer side, you mentioned something earlier about the quantum computer threat model.
And I was thinking about it, but I'm not really concerned about it, because I'm sure you guys build crypto agility into your product, meaning that you can plug in different quantum-resistant algorithms as that becomes a more and more pronounced threat. So even though people talk about quantum breaking all of the encryption, et cetera, I'm not really concerned about it that much, because if you have crypto agility built into your product, you can adapt to threats fairly quickly. Yep. Yep.
That's right. That's right. We're not post-quantum right now, but once the vTPMs support post-quantum, we're in. Well, just for all of our listeners, Sasha, what is crypto agility?
On a high level, it's just the ability to plug in new algorithms moving forward, so you can ensure that data is safely guarded using new types of encryption. And you can do it on the fly, without breaking changes. Super important feature at the cryptography level for any product. Hopefully the entire internet has it.
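Crypto agility as Sasha describes it can be sketched as algorithm-tagged ciphertexts with a pluggable registry. The cipher below is a toy XOR stand-in for illustration only; the point is the dispatch mechanism, which lets a stronger (say, post-quantum) algorithm be registered later while old data still opens, with no breaking changes.

```python
from typing import Callable, Dict, Tuple

# Registry of named algorithms: each entry is (encrypt, decrypt).
CIPHERS: Dict[str, Tuple[Callable, Callable]] = {}

def register(name: str, encrypt: Callable, decrypt: Callable) -> None:
    CIPHERS[name] = (encrypt, decrypt)

def seal(name: str, key: bytes, data: bytes) -> tuple:
    # Tag the ciphertext with the algorithm that produced it.
    return (name, CIPHERS[name][0](key, data))

def unseal(tagged: tuple, key: bytes) -> bytes:
    # Dispatch on the tag, so old data opens even after new algorithms ship.
    name, blob = tagged
    return CIPHERS[name][1](key, blob)

def toy_xor(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a real cipher; XOR is its own inverse.
    return bytes(a ^ b for a, b in zip(data, (key * len(data))[: len(data)]))

register("toy-v1", toy_xor, toy_xor)
# Later, without breaking changes, a post-quantum scheme could be added:
# register("pq-hybrid-v2", pq_encrypt, pq_decrypt)

tagged = seal("toy-v1", b"k3y", b"secret data")
assert unseal(tagged, b"k3y") == b"secret data"
```

Tagging every ciphertext with its algorithm identifier is what makes the on-the-fly migration possible: new writes use the new scheme while reads of old data still dispatch to the old one.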
We'll see when that day arrives. Yeah, we can hope for that. JMO, what's been the proudest day that you've had so far as an entrepreneur? Great question.
So the company I did before this was bit.io. It was a serverless Postgres company, before its time possibly, given the recent acquisitions of Neon and Crunchy Data by Databricks and Snowflake, respectively. And yeah, we built a serverless Postgres product.
And really, I think, like building a confidential inference platform, building a serverless database is a non-trivial amount of infrastructure and software engineering, which is kind of my style, I guess. I like really getting into the complicated infrastructure stuff. But the proudest moment, I think, was when the team was firing on all cylinders and we got our first sale, for $22. It was the culmination of a ton of effort from everyone on the team.
I think that was probably my proudest moment. But it was $22 of MRR or ARR? It would have been MRR. So we had created something of value, right?
Somebody was willing to pay for this thing. We had helped them along with their software journey and gave them a place to store state really quickly. So. Which is a very important point.
You mentioned firing on all cylinders. The point here is it takes a lot of work to get that first sale. But once you start seeing the sales increase, it's a proud moment. You know that you have a product people are willing to pay for.
And all of the time that you spend prior to that moment is starting to pay off. And it's important to stick with it. It's important to grind through all of the difficulties and challenges. Because let's be honest, when you start a company, there are a lot of those.
And how you react to challenges is what matters. Yeah. Yeah. It's, before that first sale, you're like, are we doing the right thing?
I mean, even when you do it, you still kind of think that. But all before that, yeah, you're just hoping and believing that what you're building is valuable. And then when you land the sale and you get the market confirmation, you start getting the feedback from all of your new customers. That's really special.
Yeah. It's an important mark. For everyone, the team too. I mean, the team put in a ton of work.
Yeah. No, I love the focus on the team and how that's a shared, proud moment with the entire team. Yeah. I played a lot of soccer when I was a kid.
And the team analogy is what works for me, right? We're all playing different positions on the field while trying to score a goal. Yep. Yep.
No, I get that entirely. The Bay Area is where a lot of innovation is happening right now. What do you think will be the biggest pain point using AI in the next year that has not been addressed yet? The biggest barrier to using AI over the next year.
In order to adopt AI technology in a responsible manner, what is the biggest pain point that should be addressed that has not been addressed yet? "Responsible manner" is an interesting phrase there. But I think that right now we're, for a lot of use cases, on the cusp of something being accurate enough. What I mean by that is, with Claude 4 and OpenAI o3, I think everyone has finally been like, okay, yes, this is going to be incredible for coding.
And of course it's noisy and it makes errors, but it's, you know, net positive. This promise that we've been seeing is getting there. I think there are still a lot more use cases that are right on the edge, whether it's building a system to improve accuracy, or fine-tuning your models, or coming up with a data set to do evaluations, your own personal evaluations, and not relying on just the benchmarks. There's that classic line, and I'm paraphrasing, but once a measure becomes a target, it ceases to be a good measure, right? If that's what you're striving toward.
So I think creating your own benchmarks internally. And I know a lot of folks who are doing that, right? And you mentioned accuracy, which is tightly coupled with data inputs. In general, it's garbage in, garbage out, which is where the proprietary data sets and the quality of the retrieved data make a lot of sense.
And then the question is, how do you handle the data controls, or security controls, around the data sets that go into the models? And it's full circle back to data governance. How do you ensure that the data you share with models stays private and secure? Yeah.
And with this whole agentic thing, I feel like where you put the security controls and how you delegate permissions for what those agents can access, it's going to be chaos for the next year. Now, there are a lot of companies coming out to try to tackle that. Actually, so many seem to have just relabeled their IAM product as agentic IAM. Like, how do you even keep track of what agent is doing what, where, and when it should have permission to do it and access what data?
I think that's going to be complicated. It's a funny thought, like, oh, all of these IAM companies thinking, no one's using our products, so maybe the agents will use our product. There's this new market of agentic users, and they're really not even users.
I'm not sure what to call them. Machines. It's funny that you mention companies adding agentic to their pitch, as if that changes what the company does. Back when crypto was just exploding, and by crypto I mean the cryptocurrency.
When I say crypto, I usually mean cryptography, but in this context, it's the cryptocurrency. There was a tea company that added the word crypto to its name, or to its product, and then the valuation of the company just exploded. I feel like we're in a similar space, but in technology. So how about the most difficult day that you've had as an entrepreneur so far, J-Mo?
What was that day on your journey? That's another good question. I mean, was it any day before we got the first sale? Any day before we got the first sale?
Every one of those days was so challenging. Every day seems to get just more new challenges. You keep thinking that you're going to like, oh, now I know how to handle this thing. So it's hard to pinpoint a specific day.
You know, I've sold two companies. The M&A process is certainly pretty stressful. There's like a lot of lawyers and everyone else at the table has done N of these and I've done two of them. It's just like fundraising where the investor has done N fundraisers and you've certainly done much less than N.
And so it's a lot to learn really quickly with a lot at stake. So I remember those as maybe the most stressful. I don't know if that's the same thing as most difficult, but certainly stressful, when you can't really know the exact right answer and everyone else at the table seems to. What's the biggest learning from that M&A process?
You've done two of them. Well, companies are bought, not sold. That's the most classic line, right? Don't seek to get your company sold.
Wait for the call. Wait for the call. And when you have a buyer who's interested, now you have a price on the market. And the most important thing with any negotiation is to have a BATNA, the best alternative to a negotiated agreement.
Because if you don't have the BATNA, particularly for those situations, you have no way to say no, essentially, right? If someone says like, I'll buy you for a dollar and no one else says, I'll buy you for $2, then at some point, if you need to take the $1, the market value is what the highest bidder is willing to pay. Of course. Exactly.
I mean, but it's so important to recognize that the second you give away your ability to have a BATNA, or if you don't seek one and understand what the cost of it all is, then you essentially lose all power. And unfortunately, it's a trade. And in those trades, you have to know what your value is and have alternatives. What legacy would you like to leave, J-Mo?
Wow. Okay. That's a good, that's a good one. In business or in life?
I mean, it could be you personally, in business or in life, or it could even be in the context of Confident Security. I'd love for Confident Security to be a catalyst for having a bunch more privacy around AI. If all of the major AI model vendors, all of the GPU resellers, all of the SaaS platforms and big businesses who own their own GPUs or resell access to GPUs, which is effectively what everyone's doing, are doing that with private inference, and Confident Security either provided that functionality or spurred everyone on to do it, I'll be happy.
I think everyone will benefit from privacy, or at least the choice around privacy and what they share. What do you think will incentivize businesses or consumers to opt in to more services that are privacy-aware or built around privacy? Will it be some type of economic incentive? Will it be compliance or regulations?
What do you think will drive privacy adoption? I think regulation is going to be one. Now, privacy is like a double-edged sword, right? In the same way that encryption generally is kind of a double-edged sword around regulatory frameworks, right?
A lot of people don't want encryption. I tend to be on the side of encryption is better. So I think regulation is one. And then I think, you know, there's going to be some, unfortunately, there's going to be some cost, some data leak or some, you know, like I said earlier, you know, OpenAI has a case that has caused them to retain data.
Like, I don't think that case is going to cause an issue per se, but there's going to be some situation where someone accidentally retained data that they said they weren't going to, and then either accidentally trained on it or someone leaked it. And it was super proprietary, but they needed the value from AI. Yeah. And there's a lot of value in using essentially application-layer encryption, where even if there are backups of the data, even if the data is never formally deleted, because it's encrypted from the start you can guarantee secure deletion by destroying the keys.
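The crypto-shredding pattern described here, secure deletion by key destruction, can be sketched like this. The XOR "cipher" is a toy stand-in for real encryption; the point is that once the per-record key is gone, every remaining copy of the ciphertext, backups included, is unreadable.

```python
import secrets

def toy_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only; real systems use AES-GCM etc.
    return bytes(a ^ b for a, b in zip(data, (key * len(data))[: len(data)]))

class ShreddableStore:
    """Records are encrypted at write time, so backups only ever hold
    ciphertext. Deleting the per-record key makes every copy, backups
    included, permanently unreadable."""

    def __init__(self) -> None:
        self.keys: dict = {}   # in practice, kept in an HSM or enclave
        self.blobs: dict = {}  # may be replicated and backed up freely

    def put(self, rid: str, plaintext: bytes) -> None:
        self.keys[rid] = secrets.token_bytes(32)
        self.blobs[rid] = toy_xor(self.keys[rid], plaintext)

    def get(self, rid: str) -> bytes:
        return toy_xor(self.keys[rid], self.blobs[rid])

    def shred(self, rid: str) -> None:
        del self.keys[rid]  # ciphertext may linger in backups, but is now noise

store = ShreddableStore()
store.put("r1", b"healthcare record")
assert store.get("r1") == b"healthcare record"
store.shred("r1")
assert "r1" in store.blobs and "r1" not in store.keys
```

Deleting one small key is fast and verifiable, whereas scrubbing every replica and backup of the data itself is usually impractical; that asymmetry is why the pattern works.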
Yep. Absolutely. So I think, I suspect that will be the primary motivator. It's just privacy laws.
And again, I think corporations, unlike consumers, can kind of quantify the value of their proprietary data and make informed business decisions based on that. Generally, obviously, not everyone. And I think this is an excellent point. We see it more and more often in the space, in the marketplace, where security and privacy controls are becoming differentiators when two products are being considered for the same function. Products with more security and more privacy built in tend to win contracts over those that don't have them.
Great. I want that to create that competitive pressure. Yeah, that's interesting. I love that.
And I love us moving towards a future that's more secure than where we've come from. I don't know. It kind of reminds me: when I was in graduate school, I received a phone call from a news reporter who wanted a comment. She looked me up and saw I was a security person at the University of Minnesota.
And she asked me what I thought about people posting stuff on Facebook and then suffering a huge data leak. They had gone through one of their very first data leaks back when I was in graduate school, and I was kind of overly simplistic, you know, maybe a little bit too sharp. And I said something along the lines of, well, if you want your stuff private, maybe you shouldn't put it on the internet.
And well, there's truth to that, but it's also dismissive, I think, and almost blames the victim. I think that there are ways to build technology that respects people's privacy. As a security person, I'm sure you feel this responsibility too.
It's like, how do we build that future where you can put stuff on the internet, share things with your family or your loved ones, and not have to worry about constantly being advertised to? And if that world requires me to pay a little bit every month for the service, and it adds value to my life, I'm super happy to do that. Same for all of this new AI technology.
You're going to be left way behind if you're in a white-collar job and you're not using AI services or, you know, AI-augmenting platforms to help accelerate what you're doing. It just feels like you're at a huge disadvantage. Yeah. Now this assumes that you have the money to pay for the privacy versus the free version, right?
And you need this technology to not get left in the dust, but that's more of a societal commentary. I think that's a very important question, and I could do a whole show just on that one question. I love that you asked it, because it does implicate a lot of the responsibility and the societal and economic forces at play here. So these are complex topics for sure.
Though, J-Mo, I'm super curious. You know, we've done a lot of talking about the future, but maybe we talk about the past for a second. If we could go back in time and you could meet your younger self and I'll let you decide how far back in time you'd like to meet your younger self. Would you meet your younger self?
Would you take that opportunity to meet yourself? And if you would, what advice would you have? Hmm. Would I meet my past self?
Probably. I think, uh, I like to have a phrase which is like trust past J-Mo, which is like, I probably made a decision in the past that made sense. And so like, there's a reason why we're here, but maybe there should also be a trust future J-Mo line. Love that.
Trust past J-Mo. Yeah. I mean, I can't even prove my past self exists, but if anything is a constant, it's time. So, anyhow, maybe that past version of me did something rational, but then you sometimes have to disagree with your past self.
Anyhow, where would I go? I mean, I already told you one item: I'd go back and say, spend more money on marketing and sales. On a personal level, I'd go back to my college years.
I would say, take more classes for fun. While I was in it, and maybe other people feel this too, it was like, I need to get the grades so that I can get the degree so that I can move on. And then the second I got the degree, I was like, oh, but I didn't take that class.
I should have stayed in that aerospace class. That was really interesting. Why didn't I stay in it? But at the time it felt rational:
I don't have time for this aerospace class, it has nothing to do with my degree. So that's that. And I think the last one is, and I know I said I played a lot of soccer and I like the team analogy, but: don't play soccer.
No, really, don't play soccer. Play another team sport that's less damaging to your musculoskeletal system. There are a lot of tackles that tend to get you close to the ground. Yeah.
And if you're competitive, you know you're going to play hard. And I'm competitive. But being physical is good for our mental health and other things. Maybe there's a version of soccer that I can be competitive at and not ruin my body. All right.
Well, thank you so much for such vulnerable shares and such insightful advice. It's been an honor to have you on the show, J-Mo. Yeah. Thanks for having me.
The honor is mine. Uh, thanks for asking all the questions, and fingers crossed that when we talk again, there's a bunch more privacy out there. I'm looking forward to that future, but it's not just going to show up. It's going to be super smart, super dedicated folks like yourselves building that future.
And so all of the gratitude in the world for tackling these hard problems. I'm excited to see that future become a reality, and a huge, huge thank you to all of our listeners for tuning in to another episode of the security podcast of Silicon Valley. I'm John McLaughlin, one of the hosts, joined by Sasha Sienkiewicz, the other host, and this has been a Y security production. Thank you, everyone.
Thanks. Excellent show, J-Mo.