53. Lorenzo Thione, Co-Founder and Chairman of StartOut, on AI's Role in Security and the Power of Diverse Investment

Hello everyone, and welcome to another episode of the Security Podcast in Silicon Valley. I'm here today with a very special guest, Lorenzo Thione. Hey, thank you for having me. How's it going?

Great to have you on the show. Lorenzo is a philanthropist, an LGBTQ advocate, an investor, and the co-founder and chairman of StartOut, the only LGBTQ incubator. You're also the managing director of Gaingels, a syndicate of investors, and a gay angel investor yourself. That's right.

Yeah. Well, welcome to the show. Thank you. So share with us, do you think of yourself as a security person?

So no, not specifically. I do not consider myself a security person. I guess my security knowledge stems from a good enough education in computer science. I did take security courses in college, for what it's worth, and they were pretty eye-opening.

Knowledge in cryptography and so on. What is really interesting, though, is that I have gotten significantly more interested in the intersection of how artificial intelligence, in its most current and advanced forms, represents both an opportunity to improve our security and a risk of seeing more attack surfaces and more sophisticated ways in which, you know, these can be breached. Because with very advanced technical models being open sourced and available, with computational costs going down significantly, and with the wide accessibility of data, both on the open web and on the dark web, you actually start to really run the risk of being completely unprepared.

If all you have is the traditional ways of protecting yourself from cyber threats. And I am particularly interested in this because I'm an active investor in the artificial intelligence space. One of the areas that I focus on is what I have defined or called containment. The idea here, and it's broader than just cybersecurity, is that you introduce these really sophisticated and powerful models and powerful technologies.

And what you have is, on one hand, the exacerbation of the seriousness of the risks that already existed, anything from fraud to cyber intrusion to data security issues. At the other end of the spectrum, you have completely new problems that are created by these technologies, including things like proper copyright attribution and detection of misinformation. And obviously these dovetail, because when you work on, say, deepfake detection, you are also acting against the potential of those deepfakes being used for fraud or for other types of social engineering and so on. But just generally speaking, it is an area I'm very interested in.

So I try to take it one day at a time and learn as much as possible. I've made some good investments in the area. And I continue to look at how these technologies are applied and at the papers being written around the intersection between artificial intelligence and cybersecurity. Yeah, that sounds like an incredible intersection of worlds there.

I'm sure it's grabbing everyone's attention. There's a lot of attention being paid to the AI space, especially at the intersection of AI and security. Actually, one of the things that you posted on LinkedIn really grabbed my attention. And it was this whole story of: your phone rings, someone calls you, and, oh, it's your best friend.

And you pick up and you hear the voice of your best friend. But actually, are you sure it's really your best friend you're talking to? It could just be a deepfake, you know? I wish that were a scenario that was sci-fi and made up and far from being possible.

The reality is that we are already dealing, as a society, with these advanced types of scams and social engineering. In fact, I can't remember the name, but a politician was actually testifying to Congress about some of these issues. I'm sure we will talk on this podcast about regulation and policy and how that also needs to come into play in this domain. But this politician was telling the story of having been in a minor car crash, which was completely unrelated.

But, you know, this contextually gives you the sense that he was already in a heightened state of mind, not necessarily paying attention to every single cue, and he got a call from his son saying that he was in prison, I think, or something, and needed money to be sent to some address or whatever it was. The thing was, I believe he wised up to the situation and ended up not falling for it.

But the witness basically described how incredibly shocking the situation was, because all of your defenses go down when your primordial brain, the one that acts on instinct and reacts really fast, is getting all of these signals that say: this is your son, this is his voice, he speaks with the same affectation, and he's saying things that theoretically could be true. So your defenses completely go down and you become much less rational and much less wise to those risks. And this is happening. This is happening already.

The question, though, and I don't know if you want to talk about that or not, becomes: to what extent is technology the culprit? And to what extent are we as a society losing track of the true balance of benefits and costs that any new transformational technology really brings? I think generally, as a society, but also as evolved apes, we overestimate the dangers, the risks, the costs more than we are able to see the long-term benefits that accrue exponentially to each one of us and to society as a whole.

And I think it's really easy with AI right now to fall prey, on one hand, to the doomsday narratives and the criticisms around the technology being bad, opening up all these opportunities for fraudsters, and the push to keep models closed source and whatnot. But at the other end, there's also a risk of being far too Pollyannaish: in reaction to those negative sentiments, technologists, inventors, and investors may tend to basically say, ah, whatever, there's no problem, let's just keep going as fast as we can with no guardrails in developing these technologies.

And as always happens in life, I think it's a good lesson that the truth lies somewhere in the middle, and we should walk into this with open eyes. I believe we should be investing rather than negating the benefits of the technology itself or trying to constrain the application of these very transformative enabling factors that are creating opportunities for productivity gains, for completely new applications and products, and ultimately for an increased standard of living. I think the right thing to do is to invest in new technologies and in advanced methods to fight against the misuse or even the side effects of these.

Some people call it an arms race, and to some extent it is, but that's the story of humanity and of its technological progress. Everything from the invention of currency to industrialization to, God knows, the internet has been used for fraudulent transactions and fraudulent stuff, and cryptocurrency is another good example of that, right? Instead of constraining the technology, we need to invest in figuring out how to have good cops against the thieves and robbers and fraudsters who want to manipulate that technology for their own gain. Yeah, I couldn't agree more.

I think of technology as a tool. It turns out that I believe wholeheartedly that 99.9999% of people in this world are good, have good intentions, and want to help make the world a better place. We all have different ideas of how to go about doing that, but at our core, we are good.

But it turns out, you look around and there are some 0.00001% of folks who would just push the giant red button, creating chaos, up to no good. So there are evil people in the world, and hence, when a new technology comes out, I think of it as a tool. It is itself not inherently good.

It is itself not inherently bad. It is how we use it that matters. And we can use it for good. We can also use it for some pretty evil things too.

And I think that's partly where some of the shock and the disbelief comes from. Our lizard brain has evolved to assume that, even with a very low probability that there's a lion hiding in the bush, when the bush moves, it must be a lion. So we run. Our fear reaction to things is baked into our evolved hardware, inside our bodies, because it helped our ancestors survive. And now we're here in modern society and we get those same knee-jerk reactions.

And so we see a new technology and we come up with a way that someone might be able to use it for something bad or something evil. And we all share this knee-jerk reaction of trying to run in the other direction. And really, most of the uses around us, and I think where most of humanity wants to use it, is to build something good, but we still have to deal with the edge case, I would say: that there are evil people in the world, and hence security.

Right. And so on that post, on the post that you shared on LinkedIn, we had a very brief back and forth. And I was like, oh, this is a good point, an interesting scenario: a deepfake, someone calls you up, and all of a sudden you hear a trusted voice, but it's not who you think it is.

You can't trust the voice on the phone anymore. Just like, I think, everyone has kind of been trained not to trust those websites that don't have the little lock icon, that are asking for your password, and that maybe smell a little funny. Maybe we'll get to a place for something like that. I wish, I wish the majority of people even knew how to look for the lock icon and to stay away from phishing websites.

And otherwise, I think technology companies have done a lot to try to guard against the ignorance or, you know, the naivete of users, and that has helped. When I talk to people now about this scenario, which maybe until two years ago would have seemed pretty far-fetched in its implementability but is now already a reality, I tell people to just stop relying on what you see and what you hear, or rather on the characteristics of what you see and what you hear, and to rely, much like we do with security and passwords, on knowledge. Right.

And the one thing that is still going to be difficult is for someone who's not supposed to have certain knowledge to display it, to show it, to give it to you. And so with your loved ones, especially the people you wouldn't think twice about sending money to if they were in trouble, make sure you have passwords and shared secrets that you don't discuss with anyone else and that you can use in very critical situations to make sure the other person knows it's actually really you. You're the managing director of the Gaingels? Yes.

Gaingels. Gaingels. Gaingels. Not the Gaingels.

It's like angels with a G, with a hard G, in front of it. Gaingels. I love it. Would you like to share with our listeners a little bit about it?

Yeah. You mentioned some of my background, but I sort of started my career as an operator and entrepreneur; I started an AI company about 20 years ago. After a successful acquisition, one of the things I turned my attention to was the inequities and imbalances that existed in the venture capital ecosystem in terms of who this incredible engine for innovation and wealth creation was really benefiting. And it was benefiting, and to some extent still is, a pretty rarefied, small group of people who ultimately were all connected through the same socioeconomic, educational, and financial backgrounds, which overlapped a lot with other sociological classes and groups.

And it just so happened that in 2008, women were already being noticed to be receiving a fraction of the capital of their male counterparts when founding companies. And there were pretty much no female general partners at large, established venture capital firms. The situation was even more dire when it came to underrepresented ethnic groups like African Americans. And nobody was paying attention to the additional inequities that existed with respect to the LGBTQ community's access to capital.

A lot of that, to some extent, not as much when you looked at gay white men, who could exercise privilege in other ways and get access to the right networks of people that way. But certainly when we started looking at other dimensions of the LGBTQ community. And so what emerged was the idea of creating a space and an organization that would help LGBTQ entrepreneurs learn from each other, mentor each other, network with each other, support each other, and gain more success in starting and growing their businesses. And ultimately, the idea was to help create a more accessible and more equitable venture ecosystem across multiple dimensions.

And that organization is StartOut. You mentioned it. I'm co-founder and founding chair of the organization. Ultimately, one of the things the organization couldn't do was invest directly in the companies it sought to help.

And partially that was because of the nonprofit nature of the organization. So we started Gaingels, and you jokingly said gay angels, which is certainly part of the portmanteau that brought us to that name, even though it's spelled with an I, as in gain, as in profit, right? And it was a fortunate misspelling, or decision. But we started out as a network of investors directly supporting LGBTQ founders.

And we did that as a hobby slash side project for a number of years. The reality, though, was that we one day realized the opportunity and the need were much broader, and that the ecosystem, fast-forward to 2018, was still woefully underserved and underrepresented when it came to multiple minorities. And also that when you look at a company or a partnership or a cap table, you're not really focusing on just one kind of diversity. You may have a team that has two women founders and no representation from other groups.

And again, this is not about quotas. I despise quotas. It's really about the idea of creating paths to access for people who have the talent and the means but not the opportunity, because of the circumstances in which they grew up and lived or were born and all of those things. And recognizing that, I think creating more paths for access, and more incentives for companies and investors to become more diverse, was going to be a net positive for the world.

And so we basically started changing our organization into an investor that would no longer limit itself by which companies or teams or founders it would invest in, but rather would invest in companies with an eye to making money as investors, for ourselves and for our members, but with a value-add proposition for the companies we invest in: that we would help them build a truly inclusive organization, helping them with recruitment, with adding leadership, governance, board members, and advisors from underrepresented pools of talent that they would not normally have access to.

And ultimately, the decisions about who to bring on and who to hire always remained with the company, aimed not at filling a quota but at creating more opportunities. And the last piece of that, which was not something anyone was talking about back in 2018, is the importance of diversifying the cap table. The idea here is that, yes, investors are the ones who give you the money to get started, but if everything goes well, investors are also the ones who benefit enormously from the wealth and the value that the team and the company create in the world.

And if it's always the same people who get to invest in the best companies, then it's always the same people who get wealthier and more powerful and then recycle that money and that power and that wealth into more companies. And there's no path for this wealth generation to reach communities that historically were just not there when this entire ecosystem got started.

And so a big piece of what we do is create an opportunity for individual investors, many of whom are from ethnic and sociological backgrounds that make them underrepresented as investors in venture, to have the opportunity to invest, sometimes small amounts, into some of the best venture-backed companies out there. And it becomes this really interesting flywheel where companies want us involved because we're value-add and we're great. We do what we say we do, we're good at our work, and we're easy to work with.

And that generates goodwill from the big venture capitalists, who see us as a value add to their portfolio companies; it creates an opportunity for the optics to be better for those portfolio companies, and it creates the opportunity for those companies to ultimately do what they all say they want to do but don't necessarily know how, which is to create truly inclusive organizations that reflect the reality of their customers and of the world they're in. And for our members and investors, we create this incredible opportunity to invest in companies they would normally not have access to.

And that flywheel started spinning really fast in 2018 and got us to where we are today: one of the most active and largest investment groups in the venture capital space, and the only one of meaningful size dedicated to supporting and promoting more inclusivity in venture capital across the intersection of capital, leadership, and governance. All the gratitude in the world for sharing and for being part of the change that you'd like to see in the world. I think that's so important.

And speaking as a founder, when I was founder and CEO of Peacemaker, we went through the StartOut program, and it was just an incredible experience, like nothing I'd ever expected or been through before. And the camaraderie and the support and being together and learning and growing together as a group was really special. So all of the gratitude in the world for starting StartOut, the only LGBTQ incubator in the world, and for having us at Peacemaker. So, in terms of breaking this flywheel, this echo chamber that seems to be so prevalent in today's distribution of capital.

How do you see AI, or even the intersection of security and AI, as an opportunity to break some of that? Do you guys invest pretty heavily in the AI space? We do. Yeah.

We invest very heavily in the AI space, a combination of my own background, experience, expertise, and network, as well as generally where capital wants to go right now and where a lot of founders are realizing the opportunity is. I mean, there's no question. You can size this however you want, but there are multiple reports, and all of the big consulting firms have done analyses, saying the impact that AI is going to have on the global economy is in the many trillions of dollars added.

It is probably the first time in a very long time that we're seeing a technology that has the opportunity to grow the pie instead of just reallocating it in some, quote unquote, more efficient way. And that is exciting. That is exciting to investors. It's exciting to innovators and founders.

It's exciting to users and workers everywhere. I think everyone is experiencing their own stories of how AI is helping them do their work better and faster, or how AI is coming into the products we use every day and making those better. And so there's a huge opportunity to invest. We do this both through our groups and syndicates, where people can join and get access, and through traditional venture funds that invest in multiple companies throughout the life of the fund, like any other VC.

When it comes to security, we also have a long history of investing in security businesses, because again, that's been, in many dimensions, a story of innovation and growth, right? Anytime you have a new risk, you have a new opportunity. And that equation just accelerates when you introduce AI into it. And personally, it is one of my theses, one of the pillars of what I am focusing on investing in: this intersection of AI-first technologies designed to combat, counteract, or contain the misuse, side effects, and bad outcomes that a bad actor in control of AI, or using AI, can inflict.

So it's very much a focus of mine. As far as how this helps break the circle of limited allocation, or echo chamber, as you described it, I don't think it does so in any more or less of a way than all the other ways in which we operate. Maybe the one thing we can say is that because AI is such a general, once-in-a-generation change to the technological landscape, and because AI is believed to be this potentially massive increment to productivity and value, and therefore there is profit to be made by investing in AI early, it is also the most competitive place for investors to put capital.

And in the public market, competitiveness just determines price, right? Price goes up. But private markets are not symmetric like the public markets are, meaning not everybody has access to invest in certain companies. And we believe that, again, because of the flywheel, the goodwill, and the value add that we've demonstrated to the ecosystem, we are regularly able to access, and therefore provide that access to, investors who on their own would simply not have the ability to invest in some of these businesses.

And so it becomes a really great competitive advantage to invest as part of Gaingels. And that in turn creates more of the opportunity to bring change, both for those companies and for the ecosystem at large. Yeah, it sounds like there's power in numbers. There is absolutely power in numbers.

It's not just a matter of mass. It's also a matter of having demonstrated, over a period of time, that we effectively do what we say we do and that we make an impact and a difference for the companies we invest in. We invite every company that joins our portfolio to look at and, if they have no objection, to sign what we call our Gaingels letter. It is a non-binding pledge, designed to be non-binding for a lot of reasons, because this is really about alignment of interests; it's not about dragging organizations into some unified standard that is supposedly good for everybody.

But we ask them to consider things that are important to us, including parity in hiring and recruiting when you're looking at pipelines across multiple dimensions of inclusivity and diversity; applying the same to the board of directors and advisors you're building, which provides governance and control, and then to the cap table; as well as some broader ways in which we believe corporate responsibility needs to impact society, including philanthropy. We are signers and partners of Pledge 1%, the organization that encourages corporations to assign 1% of equity, profit, service, or product to causes they care about, philanthropic causes that they and their employees care about.

And the adoption of non-discrimination standards that include and provide benefits for many of the categories that are still excluded by law. No, it's important to be the change that we want to see in the world. And it's good to encourage each other, to remind each other what that is, what that might look like.

I remember seeing the pledge when we were part of StartOut, and I remember thinking, oh, 1%, oh, that's not enough. And I changed it to 10. That's very generous. Good for you.

I mean, I would have loved to have given more on top of that, had it hockey-sticked, but you know, it wasn't in the cards for that one. Look, 1% of an organization like Salesforce, and Marc Benioff has been one of the generating forces behind this foundation, is a massive amount. So if we can actually create a culture where people believe that, like you said, it's a small enough piece of the pie that it should take barely a second thought to actually do this, it can still have an incredible impact on the world when those outliers achieve absolutely great things.

And with AI coming and this pie getting bigger, if we believe that, if we really believe that, then on global poverty, on climate change, on so many of the causes that really can't be solved by any one party and really need our collective effort, we really can make a huge step forward. No, thank you. I appreciate that. So, I'm broadcasting from San Francisco.

And when you go for a walk in the Castro, which is the gay district in San Francisco, they have placards on the sidewalks, all of historical LGBTQ figures. And one of the placards is of Alan Turing. And Alan Turing has a really interesting history.

I believe a history unlearned is a history repeated. And Alan Turing is regarded as the father of computing. He helped the Allies win World War II by breaking the German Enigma machine with his machine, with his mathematics. And he did his PhD, and in his thesis...

He introduced this idea called the Turing test. Have you heard of the Turing test? The Alan Turing test. For all of our listeners out there, the Turing test is whether or not you can detect whether you're interacting with a machine or a human.

If you can make that distinction, then the machine is deemed to have failed the Turing test. So if you can't tell whether you're interacting with a human or a machine, and it is actually a machine and you get that wrong, then the machine is passing the Turing test. And for the longest time, this has kind of been held as a gold standard of how powerful computers can be, how well they could imitate humans and human behavior. And now, with all of the LLMs, it feels like these machines are passing the Turing test.

And I'm just curious what you think Alan Turing, if he were around today, would think of all of this ingenuity. Well, so the Turing test is a very interesting intellectual exercise. And there are a few of these intellectual exercises. The Chinese room is another one.

I don't know if you're familiar with this, but it says: if you give a person who does not speak Chinese an enormous, unlimited set of rules that say, if someone passes you a piece of paper with this exact sentence, you can look it up in this dictionary, and this is the translation of that sentence in Chinese, and you return it. And from the outside, you're basically looking at this and asking: does the person inside the room understand Chinese? You know that there's a person inside the room, you know that they just have these materials with them, and they are conversing with you in Chinese.

Right. So these are interesting intellectual exercises that help us, you know, gain a better understanding of what we mean when we use words like intelligence or understanding. They're words that work for us because we only communicate with other human beings, and we have this shared knowledge or understanding, if you want, of what these words mean, how we use them, and what we use them to refer to. But in reality, they're very ill-defined exercises.

If you actually wanted to use them as tests, right, which in our normal way of describing tests means having objective, repeatable qualities, they're most certainly not objective, because the whole point is that the person administering the Turing test makes a subjective decision as to whether they think the other party understands Chinese or is a real person, right? So they're not objective. They're hardly repeatable. Right.

And ultimately, the Turing test especially only proves or gives you information about how good that machine is at fooling a human. And fooling a human is a very different skill from what we normally mean by full intelligence, or even superhuman intelligence, which is yet another very ill-defined term, like AGI, right? Right. Totally ill-defined. We really don't know.

But back when the Turing test really started to become something that people talked about or taught in schools, I can't remember exactly the years, it must have been in the seventies that this happened, there was one of the first examples of a program in the nascent area of artificial intelligence. I don't know if you've ever heard of a program called ELIZA. No, I have not.

Okay. I don't remember who wrote it. Some psychologist slash sociologist slash computer scientist; a lot of those things overlapped in the early days. But it was this very basic program that was designed to act as a talk-therapy bot.

And it would do very simple things, like respond to half of the things you said with phrases like, that's interesting, tell me more about that, or, how do you feel about this? Very stock phrases.

But the thing is, a number of people, when first encountering this and not knowing what it was, were like, wait, am I talking to another person? There was enough variety and smartness to those rules to fool a person. A good example, not with computers, is fortune tellers and necromancers and all of these kinds of things, where ultimately it's just people who have developed a very good skill at listening, mirroring, asking the right questions, and being generic enough that things pattern-match into what people expect to hear as a response. And what they get on the other side is conviction.
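A minimal sketch of that kind of ELIZA-style rule matching, in Python; the keyword patterns and canned replies here are illustrative assumptions, not the original program's script:

```python
import random
import re

# Hypothetical ELIZA-style rules: match a keyword pattern, answer with a
# canned, open-ended therapy-bot phrase (illustrative only).
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I),
     ["Is that the real reason?", "What else could explain that?"]),
]
FALLBACKS = [
    "That's interesting. Tell me more about that.",
    "How do you feel about this?",
    "Please go on.",
]

def respond(utterance: str) -> str:
    """Return a canned response based on the first matching rule."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel anxious about work"))
    print(respond("It's because my mother never listens"))
```

The point is only that a handful of reflective templates plus a fallback list can feel surprisingly conversational, which is part of why early users were fooled.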

Sometimes people would go to their deathbed absolutely swearing that that person could talk to the spirits, right? Well, I mean, you're referring to Chris Voss and some of his negotiation tactics there too. Right. Mirroring is a very well understood tactic in manipulation and persuasion.

And there are a number of them, right? But the point is, the Turing test was never going to be the way in which we determined whether or not machines are intelligent. First of all, because we haven't really defined what intelligence is and why we care about it. I was just reading a really interesting LinkedIn post, I think, by someone who was saying we are really focused right now.

And I think he was coming at it from a negative angle. I don't think that's a bad thing necessarily, but we're really focused on getting computers to do things that humans are fairly good at, and to do them better and faster so that humans don't need to do them anymore, including things like painting and creative efforts and so on. I do think there's enormous goodness in that, but he was clearly coming at it from a negative angle. The piece that I agreed with him about was when he said, why are we not focusing, especially, on getting machines to do things that humans are terrible at doing?

Things humans can't do at scale, can't do repeatedly, can't do correctly. I can't remember all the examples he was giving, but it was pretty insightful about the way we sometimes define intelligence in really odd ways.

And we're focused on getting machines there almost as if it were a God-like creative effort, right? This idea of building a being in the image of ourselves, as opposed to figuring out what is the most benefit-accruing thing that we, collectively as a society, as a species, can actually focus on building. So, I love that. Alan Turing is a great figure. I love the Turing test.

I think it sparked so many conversations and important insights, but it's hardly the right test to think about when really looking at artificial intelligence. No, that's very insightful. That's very insightful. Do you believe in the singularity, the idea of a point in time where AI will start to compound on top of itself and use itself to generate better versions of itself, and so on and so forth?

I think the idea of the singularity, of exponential accrual and therefore acceleration beyond what we're used to thinking of when we think of innovation and progress, is absolutely compatible with everything we know about the world, the physics of it, the kind of evolutionary pressures that push toward making things better and removing the things that don't work quite as well. Whether or not it will work exactly the way that Kurzweil and others have hypothesized, I think, is really more the territory of belief and differing opinions and so on. But it's absolutely compatible. I just don't know what we derive from it.

Because we have, I think, at least three ways we can think of it. There's the creation of a completely new type of life, right? Right. Which self-replicates, has its own needs, demands, and desires, and that could lead to it seeing itself as superior to humans and therefore wiping the human race out.

It also could simply be a better tool that becomes more and more powerful because it lacks the very old, very primitive elements of self-preservation, needs and wants, and self-realization that humans and all organic living creatures, as we understand them, really have. And I think it would be a good thing if people did not get it into their heads that the right thing to do is the Frankenstein mode, where we really want to build a being that has those same kinds of wants and needs and pressures and is also endowed with this superhuman ability to calculate, compute, remember, and manipulate the world.

That would probably result in creating a super life form. And everything we know about life forms that are superior to others in our world is that they tend to subjugate and limit and constrain the freedom and the opportunities of the life forms they see as inferior to them. And then there's the other possibility, which is that we are always in control of that replication, that increase in speed, and that the singularity never really happens. But I think all of these are legitimate scenarios that people need to be able to discuss.

And I don't think any of them drives to the conclusion that we should stop what we're doing right now, that technology is bad, that artificial intelligence is this and that. Because in reality, the impact that we can have on saving the planet, on making food much less scarce, plentiful, I guess, abundant, and on solving global health, all of these are things that are likely to be helped quite significantly by the developments in artificial intelligence that we're seeing now and that we will be seeing in the next decade. Yeah, definitely.

I heard someone put it that AI is just a fancy parrot, that it reads things and knows how to regurgitate what it's heard. And when you put it in that context, it sounds harmless. It's just...

Well, it's not a fancy parrot. It's not a stochastic parrot. I think that people who say that LLMs specifically, not artificial intelligence broadly, that LLMs are stochastic parrots, have a very poor understanding of how these algorithms and these machines actually work, and they have a poor understanding of how human brains work.

To the extent that we have an understanding of it, there are a lot of similarities between multi-layered neural networks, which are at the core of transformers, which are at the core of large language models, and the neural networks in the brain. They work in a lot of similar ways, creating a chain of events that is stochastic in nature and that selects the most fit next signal, whether it's a word or something that needs to go to the next layer; the most fit next signal that optimizes some function, some reward function that our brain or those networks have learned. And these models work much the same way. It is not averaging out the content in them.

It is not predicting the most probable, as in most frequent, chain of text. Anyone who says that has a very poor understanding of what these machines, this machinery, do. And what's more, it is very clear that they are able to generate sequences that have never been generated before, that have never been seen in the training text, at least not in full, and that demonstrate at least some level, not a full level, but some level of understanding and abstract reasoning, which is completely incompatible with the idea that they are just stochastic parrots, whatever that really may imply.
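To make the "most fit next signal" idea concrete, here is a minimal, hypothetical sketch of temperature-based next-token sampling in Python; the tiny vocabulary and scores are invented for illustration and are not taken from any particular model:

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
    """Sample the index of the next token from a softmax over the model's scores.

    Lower temperature concentrates probability on the highest-scoring tokens;
    higher temperature makes the draw more random. Either way the choice is
    stochastic, not a lookup of the most frequent phrase in a corpus.
    """
    scaled = logits / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.default_rng().choice(len(probs), p=probs))

# Toy example: scores a model might assign over a made-up five-word vocabulary.
vocab = ["cat", "dog", "ran", "sat", "the"]
logits = np.array([2.1, 1.9, 0.3, 1.2, -0.5])
print(vocab[sample_next_token(logits)])
```

The draw comes from a learned scoring function over context, which is the sense in which the next signal is "most fit" rather than merely most frequent.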

It's obviously being used as a derogatory, diminishing term that says, oh, these things, they will never approach what humans can do. These things are not anywhere near what humans can do, but the idea that they will never approach it is, I don't know, wishful thinking for some and doomsaying for others. Yes, yes. So, no, I appreciate the tribute to Alan Turing and engaging with the question of the Turing test a little bit.

I really believe that his story is not told enough, and not many people actually know that Alan Turing... There's a great movie.

I forgot what it was called. The Imitation Game. Yeah. And the way he was mistreated by his government and ultimately driven to suicide, all of these things are just...

. . I think only recently, the UK government kind of issued an apology. A postpartum, yeah.

Apology. I do believe so. Yeah. It's a good tribute to all of the contributions that he made, not just to society, but to the LGBTQ community as well.

That's right. If he were alive today, would you have any sentiment or message for him? I mean, I guess I would be delighted to meet him and have a conversation with him and discuss all the ways in which the world is computation and computation is physics. And I think the best, not talking about a gay icon in this case, but the best approximation to that, and someone I admire very much and would love to one day have an opportunity to meet and converse with, is Stephen Wolfram.

And he in a lot of ways has picked up some of that work and really shown the extent to which a better understanding of the world is a better understanding of computation and algorithms. And as a geek, I think I would have a field day with that. Oh, amazing. Where would you take him for dinner?

I would probably make dinner. You would probably make dinner? Oh, that's perfect. It's easier.

It's less noisy. You know exactly what it's going to be. Also, if I'm going to bring a celebrity to a restaurant, we're going to get interrupted every moment by people wanting to take their selfie with them. So I'd rather invite them over and make dinner myself.

And speaking of LGBTQ figures out there, there seem to be quite a few, especially in the tech community. I'm sure you're familiar with Peter Thiel, or maybe with Kara Swisher. Peter Thiel, yes. Very spiky personalities.

What are your thoughts on their approach to life and business and diversity, capitalism? People are interesting. People are different. I love when people voice their opinion.

We need more of that. We need more freedom of speech, not less speech. And I think the marketplace of ideas is how not just innovation but social progress actually happens. I have no problem with people being, not contentious, but controversial in various ways.

I certainly don't agree with everything Peter has ever said or with everything Kara has ever said. Peter served on the board of directors of my very first company. I learned a ton from him. This was 20-plus years ago.

And I think it's great that there's no one dimension to any single person. So I am not surprised that the LGBTQ community includes people who can be incendiary, who can be controversial, who can have opinions that are sometimes reviled and sometimes vehemently disagreed with. And like I said, there is enormous space between many of the things that Peter has said, and where he aligns politically, and myself, but it just speaks to the fact that we need more speech, not less. I agree wholeheartedly.

I think there's something very special about creating something new in the world, bringing a new idea to the table. You have to have a little bit of this crazy; you have to be willing to break the rules. And whenever I get a sense of that, even if it is a little bit controversial, it raises the eyebrow and gets you to rethink how society or the world could be, or could be shaped, if we introduce something new, a different idea, or a different way to think.

And so, at its very core, diversity in every sense of the word, I think. Absolutely diversity of ideas. Absolutely. Yeah. And I think diversity of ideas comes from diversity of experiences.

If we bring our experiences with us, we have no choice but to see the world through them. They're sort of the lenses that help us understand what's going on. And if you can find that alignment, a common goal, a little bit of crazy, where you focus on something out there in the future, and you can agree with other people that this is where we want to go, even if there are maybe different answers to the question of how we get there, that, guided by our diverse experiences, can be just very exciting.

And I see a lot of that in the LGBTQ community, because in some sense, I think, we're very comfortable breaking the rules, so to speak, which is good. And it gets our creative, bubbly juices flowing. We're about out of time, but do you have time for one last question? Oh, yes.

If you could meet your younger self, would you? And would you have any advice? Yes, I would. And I would give myself the advice of sleeping more.

When I was young, I thought sleeping as little as I could get away with was a superpower, something that made me more productive. And in reality, I now understand the critical importance that sleep, nutrition, exercise, and mental health have for your productivity, your creativity, and your ability to impact the world. So I would go back. It's amazing how sleeping is very pleasurable, but people just kind of have this aversion to it.

And I would go back and tell myself, sleep more, not less. If you need to give up the partying and the fun, sometimes that's the right thing to do. That's fine. I don't know if I would listen.

Einstein? I don't know if it was Einstein who said it, but the saying is, I'll sleep when I'm dead. And I was like, no, I reject that. To sleep only when you're dead just means hastening the time that...

That it will fast approach. Yes. Exactly. Lorenzo, thank you so much for joining us on this episode of the Security Podcast in Silicon Valley.

I'm your host, John McLaughlin. This is a Y Security production. And thank you to all of our listeners for tuning in for this episode and stay tuned for another episode. Lorenzo, thank you.

Thank you so much. Thank you.