93. The Conversation Nobody’s Having About AI (with Jacob Andra and Stephen Karafiath)

Hello everyone and welcome to another episode of the Security Podcast of Silicon Valley. I'm your host, Jon McLachlan. I'm joined today by two very special, amazing guests, Stephen and Jacob. Welcome.

The co-founders of Talbot West. It's great to be here. No, thank you for joining. Would you like to share with us and all of our listeners what you guys do at Talbot West?

Yeah, thanks, Jon. Talbot West is an AI-first digital transformation company. What I mean by that is we do business process consulting, but with an AI-native focus. So if you can picture a Venn diagram of two overlapping circles: one is your traditional business process consulting, deep understanding of business systems and processes, and the other is the AI expertise.

And often these two circles don't overlap. They're two different skill sets, and we sit right at the intersection and kind of bring both to bear to help companies modernize and adopt technology. Would you add anything to that, Steve? I mean, I think that's a really good kind of overview of the bridge between the technical and the business.

I'm sure we'll get more into it. Right. Thank you so much for that. Jacob was giving me a little bit of background, but you guys co-founded this company?

Yeah, so Sasha and myself, we co-founded Y Security together, maybe two years ago. We just used it as a vehicle to start helping folks in our network, amazing people out there starting companies. They needed a little bit of security help, but it made absolutely no business sense to hire even just one person full time. And so we could step in and provide some of that guidance, some of that leadership, some of that rubber-hits-the-road work, really early in a company's lifecycle, before, you know, you'd usually even hire your first security person.

And then word got around that we were doing this and it kind of took off. Here we are, you know, two years later, 35 folks on the team. It's kind of like a VC model, but we bring with us an entire team of experts who can jump in on an on-demand basis and help. I can really see the niche that that filled because, you know, I worked at Oracle, where security was paramount.

But, you know, those are million-dollar contracts with Fortune 500 companies. Then moving into the startup world, it's like nobody thinks about security, because we don't have the budget for it, and everything's shoestring architecture and all of that. And then, you know, decisions that could have been made early aren't made until later. And it's very painful.

So to be able to start to even proactively architect, let alone implement that stuff early, I see tremendous value there. Might as well take the best practices in early, right? You have to. It's the only way to do this.

You can delay it, but then it becomes very apparent, very costly, you know. And I think the trick is really to focus on the business need. There's a way to do security just for the sake of security. Like if you give a security person a budget, yeah, sure, they can burn it.

But the really interesting piece, and the piece that really drives us, is: can you focus on what matters? Can you focus on how security can be part of your go-to-market engine and really use it as a market distinguisher, which leads to amazing outcomes like closing deals faster, closing deals in new regulated markets, maybe even leading to more attractive M&A terms? Sasha, welcome to the party. Hi, guys.

Hello. A bit late. Hopefully not too late to the party. Not at all.

Not at all. Well, it's great to have you, Sasha. Stephen and Jacob here are the co-founders of Talbot West. I was sharing with them a little bit about why security, but I think it's just a great opportunity for all of us to get to know each other.

And these two, you're broadcasting from Salt Lake City, right? That's correct. Amazing. In terms of security, do you guys think of yourself as security people?

We are not security experts, so we do love to bring in people like yourselves for specific security stuff. There is obviously a strong overlap between all of this. And so, for example, Steve, this might be a good time to talk about one of our aerospace clients. What do you think?

There is a strong security tie-in. Yeah, I think, you know, to answer your question, having spent decades at Oracle, you know, secure by default, encrypting data at rest, data in transit, all of that, those concerns were always paramount. So I've always been security-adjacent. I've run security-related teams, but it's never been my focus.

And I spent most of my career trying to steer clear of, say, the public sector and a lot of the federal security regulations, stay away from FedRAMP, all of that. But, you know, Jacob here has sucked me into some of these large digital transformation contracts for a company that does a lot of high-tech aerospace contracting for the DoD. So, you know, I'm dusting off my knowledge of CMMC Level 2 compliance, secure enclaves, the ways that we need to help them see that they might have been saving some money with these on-prem deployments, but really they need something like a FedRAMP-certified GovCloud.

So I can delve into security, but I would prefer to let experts handle it, the same way I could do my own taxes, but I think a security expert or a tax expert will do it better than I would. Yeah. And so with this particular aerospace client, for example, we're not exactly trying to solve their security issues. We're trying to shine a strong spotlight on the decision-making pathways, the dependencies and which ways those dependencies flow, and how all of this stuff ties together, because it turns into such a tangled rat's nest.

And really our recommendation, I mean, we're making a variety of recommendations to them, but one is to engage a firm that's an expert in CMMC Level 2 compliance and security issues. And we're kind of showing how a lot of their decisions are going to be gated by whether they're going to do a GovCloud or on-prem, et cetera, et cetera. And just showcasing these forking pathways and dependencies to them, really not trying to conclusively answer those questions for them, but, you know, convincing them to hire the right people for that. In general, there's a lot of moats, especially in organizations that have been established for a very long time.

And when I hear government, when I hear aerospace, I hear a lot of opportunities for improvements in the process itself. How do you guys see the modern technology stack helping to address some of the biggest moats that large organizations have today? Why don't you take that one, Steve? Sure.

Well, I think the good news is that today, with so much available on demand as a service, things that are already certified in a FedRAMP environment, not as much of the legwork has to be done internally by your own team as it used to be. You can outsource a little bit more of it, as long as you're careful not to run afoul of the regulations, which, as you're alluding to, is where the huge opportunity comes from: the onus of the huge amount of sometimes conflicting regulation that's there. So I think what we've found as a general rule is, instead of trying to reinvent the wheel, go out and figure out the best practices, see what other companies have done, see what other offerings are new to the market, especially in the AI space: a lot of things around threat detection, whether you're going to have packet sniffers running at different levels of your network, and what's actually going to happen with that data, where it's going to be logged.

There's stuff out there now that literally didn't exist a year ago. So hiring you guys, hiring us, hiring experts who live and breathe that every day makes people realize that there's solutions out there that they never even dreamed of. Yeah. And I think part of Sasha's question, Sasha, correct me if I'm wrong, is just around the inefficiencies in these organizations.

And this is government, this is the private sector. The bigger the organization, the more inefficiency. And that's a big focus of ours: helping to shine a light on that. And of course, you're never going to get it perfectly efficient.

A large organization, if you can even move the needle one or two degrees, you're talking massive impact on bottom line and profitability and all of that. So we're just looking to showcase the ways in which AI and adjacent technologies, automations, different types of solutions can be brought to bear. And we look in a very holistic system of systems way, not just plugging an isolated solution in and thinking that's going to solve it. Because there again, back to this idea of dependencies, you've got to map it.

You've got to showcase the impact of how this depends on that. You've got to have your data sources right, address those at the root level and all of this. And so there's a lot of time spent on the evaluative process. And we do it in three rounds.

So one round is more of a shallow dive across all of these, ranking them across five criteria. Then there's almost a semifinal round where you have just a small handful, and out of that there's a clear winner. And you do a really deep dive on that one and provide a pretty comprehensive roadmap: what it would look like to implement, what it would drive in ROI and all that. And it's very obvious where you should start.
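As a toy illustration of that three-round funnel: a shallow scoring pass ranks everything, a small semifinal bench survives, and a single winner goes to the deep dive. The five criteria, weights, candidate names, and scores below are all invented for illustration; they are not Talbot West's actual APEX criteria.

```python
# Hypothetical sketch of a three-round evaluation funnel. The criteria and
# all candidate scores are invented stand-ins, not a real client ranking.

CRITERIA = ["roi", "feasibility", "risk", "time_to_value", "strategic_fit"]

def shallow_score(candidate: dict[str, int]) -> float:
    """Round 1: quick unweighted average across the five criteria (1-10)."""
    return sum(candidate[c] for c in CRITERIA) / len(CRITERIA)

def run_funnel(candidates: dict[str, dict[str, int]], semifinalists: int = 3):
    """Round 1 ranks everything; round 2 keeps a small bench; round 3 is the
    deep dive on the single clear winner."""
    ranked = sorted(candidates,
                    key=lambda name: shallow_score(candidates[name]),
                    reverse=True)
    bench = ranked[:semifinalists]   # round 2: the semifinal bench
    winner = bench[0]                # round 3: deep-dive / roadmap target
    return winner, bench

candidates = {
    "invoice_automation": dict(roi=9, feasibility=8, risk=7, time_to_value=9, strategic_fit=6),
    "chatbot_support":    dict(roi=6, feasibility=9, risk=6, time_to_value=8, strategic_fit=5),
    "demand_forecasting": dict(roi=8, feasibility=5, risk=4, time_to_value=5, strategic_fit=9),
    "doc_search":         dict(roi=5, feasibility=7, risk=8, time_to_value=7, strategic_fit=4),
}
winner, bench = run_funnel(candidates)
print(winner, bench)  # invoice_automation wins the shallow round here
```

A real engagement would presumably weight the criteria and re-score the bench in the semifinal round rather than reusing round-one averages; this sketch only shows the shape of the funnel.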

And then you have your second-round bench of, you know, after you do your number one, here's how these others stack up. And they're tied together in a kind of stepping-stone format, because this is another key differentiator of Talbot West: we don't look at problems and pain points and implementations in

landscape, and that change is going to be driven by AI? Largely AI integrations, the ability to create data orchestration layers. I mean, it's not just AI. It's a lot of other things.

But yeah, it's all converging on that. Whether or not the year 2030 is the exact right target, the point is companies that work with us are going to be much closer to that through working with us than they would have been on their own. And they'll be much more on track to be competitive a few years from now when that is the big differentiator, whether or not you have this level of orchestration and coordination across your company. Yeah, I think just to build on that for a second, the kind of total organizational intelligence that Jacob's building kind of these roadmaps for, you know, that's a little farther in the future than some of the first steps.

But the cool thing is we're seeing a preview of that with our own internal AI-driven intelligence on our customers. So we're essentially using the latest cutting edge, and our own research on top of neuro-symbolic AI, to create an internal picture that we use to get all of the information about customers in one place, whether that's being fed from meeting transcripts, technical documentation, everything else. We're creating our own natural-language-queryable intelligence source on these customers. And, you know, Jacob had what honestly was a pretty brilliant idea to prop these up with the natural life cycles of most companies.

So whether that's everything from quote to cash, or whatever the business process involved is, let's map that whole thing out. And let's stand that up like a skeleton. And then we can use neuro-symbolic AI to create a knowledge graph, a framework that defines the attributes that we want to know about this company. And then we can use, you know, the latest LLM technology that understands natural language to go through that piece by piece and start putting meat on those bones.

So even ahead of the total organizational intelligence vision, which honestly most companies probably won't get to for a couple of years, we're starting to see, eating our own dog food, organizational intelligence about our customers today. How do you ensure sustainable and continuous adoption of new technologies? Oftentimes, when a new tech shows up, there's a lot of hype, there's a lot of excitement, and there's an initial bump in adoption and investment and excitement, but then over time it just stagnates and things go back to normal, where we keep grinding through the old technology stack that we're okay with unless it breaks, just like you guys mentioned earlier.

I'll address that in two ways. One, by roadmapping and scoping it correctly. This whole APEX process, we're making sure we apply it to the right things. So this whole MIT report, where 95% of AI projects fail to see any meaningful ROI, we think that that's largely driven by improper scoping and a lack of understanding of the dependencies and all of the things required for a technical implementation to succeed.

And we certainly see a much higher success rate than that with our clients. So I think part of that is just improper scoping, you're just trying to shallowly fit a solution in without understanding the landscape. And then there's the human element, are you properly training the people, doing your change management and all of that. And then the other piece of that is just there's such a frenzy right now over large language models and a fundamental misunderstanding of what their actual capabilities are and aren't.

And one of the things I'm doing right now is documenting a lot of the very specific categories of ways they just fail. And they do. And these ways are persistent across releases, leading me to believe that it's fundamentally structural to the way that these neural networks function. It's not something that they're going to throw more compute, more scaling laws at and outgrow it.

And so I'm documenting these and eventually will publish a paper that I think will be the most comprehensive mapping of the different categories of errors, logical and otherwise, that they make. And so a big piece of this is not falling into the large language model frenzy: properly scoping the right tool for the right task. Is this a task for a large language model? Is this a task for some type of machine learning solution?

Is this an ensemble job, where it's complex enough that you decompose it into subtasks and bring in multiple different capabilities and kind of orchestrate them together? And so I think that's a big part of it too: not buying into the hype, and actually steering clients correctly. And when I hear the sort of rhetoric where people conflate large language models with AI, and they'll make all of these claims about AI, by which they mean a large language model, claiming it can do all this stuff, I just roll my eyes. That's not serving clients; that's not steering them properly.
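That "right tool for the right task" decomposition might look something like the toy router below. The task kinds and the three stub handlers, standing in for an LLM, a trained statistical model, and a deterministic rule engine, are invented for illustration.

```python
# Hypothetical sketch of decomposing a job into typed subtasks and routing
# each to a different capability. The handlers are stubs: a real system
# would call an LLM, a trained model, and a rule engine respectively.

from typing import Callable

def summarize_with_llm(payload: str) -> str:
    return f"summary({payload})"    # stand-in for an LLM call

def forecast_with_ml(payload: str) -> str:
    return f"forecast({payload})"   # stand-in for a trained model

def validate_with_rules(payload: str) -> str:
    return f"valid({payload})"      # stand-in for a deterministic rule engine

ROUTES: dict[str, Callable[[str], str]] = {
    "language": summarize_with_llm,   # open-ended text -> LLM
    "numeric":  forecast_with_ml,     # prediction -> classical ML
    "policy":   validate_with_rules,  # hard constraints -> rules, not an LLM
}

def orchestrate(subtasks: list[tuple[str, str]]) -> list[str]:
    """Run each (kind, payload) subtask through the capability suited to it."""
    results = []
    for kind, payload in subtasks:
        handler = ROUTES.get(kind)
        if handler is None:
            raise ValueError(f"no capability registered for {kind!r}")
        results.append(handler(payload))
    return results

job = [("language", "meeting notes"), ("numeric", "Q3 demand"), ("policy", "export rules")]
print(orchestrate(job))
```

The point of the routing table is exactly Jacob's: the LLM is one capability among several, chosen per subtask, rather than the default answer to every problem.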

Yeah, I think something that I really appreciate that Jacob is always bringing us back towards is kind of a middle of the road approach, not overly pessimistic, not overly optimistic, but delivering our clients what we actually discern to be the state of the landscape today. And to answer your initial question, my own personal view is we're probably in the equivalent of the dot-com boom and bust of the late 90s. I think especially around LLMs, I'm seeing those same patterns coming up. And that one was exciting because that was towards the beginning of my career and so much fun, even the bust.

But I think LLMs are being overly hyped. I even drank the Kool-Aid: once that transformer paper from Google came out in 2017, all of a sudden we could translate all sorts of languages almost immediately. And all of a sudden, the machines really weren't just passing the Turing test, they were obliterating the Turing test. So by those metrics, I'm like, maybe, maybe if we just throw more vectors at this thing, maybe if we just throw more scaling at it, it'll be like Moore's law. And that was happening for a while, back around GPT-2, GPT-3, 3.5. And so I was kind of thinking maybe we were going to get to a more generalized intelligence, an AGI that could literally just take a single sentence and execute on tasks and actions reliably, and be more reliable than a human.

And in some ways, in some limited cases, we are getting close to that. But the idea that LLMs specifically are going to be this artificial general intelligence, I think that balloon has been deflated for me at least. Jacob's helped me see that as well. But I think the counterpoint to that is there's a lot of other AI technologies that people are totally ignoring.

You know, the stuff that I did historically with Oracle back when we did fraud detection for major credit cards or when we're doing predictive sales analytics, you know, we weren't using any neural networks for any of that, but it was about like mapping with logic, you know, a lot of different interdependencies and making predictions based on that and like Bayesian inference algorithms. All of that stuff is ripe to be combined with the natural language of LLMs to be bigger than the sum of its parts. I will not be surprised to see a bust in the investment into just scaling LLMs and data centers and throwing more GPUs at these things. I think there is a lot of demand for it, but the idea that it will keep outperforming itself at the same scale that it has seems fanciful at this point.

And honestly, maybe this is just my own hope, but Jacob and I could use a breath of fresh air if the pace of AI development just stagnated for, let's say, five years. Awesome. We've got plenty of work to do and plenty of ways to implement the technologies that are currently available to help customers. And, you know, when the landscape's shifting on a monthly basis, I think it's harder for companies to predict or start implementing.

So that's about where I think we're at: some amount of boom and bust on LLMs, but actually tons of untapped potential in other fields of AI that nobody's talking about or hyping. Yeah, that's a really interesting take, and I appreciate that, seeing all of the change and all of that unfold so quickly. You don't get into this with all of your clients, right? These are sort of the properties of the underlying technology that's driving all of this change.

But, you know, with that appreciation of the fundamental technology, what do you think is the easiest win that you see for an organization to just be like, hey, I'm new at AI. I can see that there's a lot of change happening. I don't want to be left out. Where do I start?

Where do I begin? Two different answers. One is: hire Talbot West to do our APEX process and we'll tell you your easiest win. But short of that, you know, just start playing with ChatGPT.

Know where to use it, know where not to use it. ChatGPT, Claude, you know, Gemini, et cetera, this batch of chatbots. There are amazing, easy efficiencies you can get by knowing how to create custom GPTs, leverage them properly for certain types of knowledge tasks, not putting sensitive data in them, et cetera, et cetera. And so, using them in a very narrow, scoped way in a company, you know, you can be up and running in one day, provided your people know how to do it.

And you can find all this stuff on YouTube. And we do have clients that are like, hey, we don't want to have to learn this stuff, come in and just help us learn ChatGPT really well. And so we can do that as well.

Some of our engagements involve that. But yeah, that's a great one as long as you

Data and using natural language, there are opportunities, like some of the products he was talking about for this other kind of side venture we started, BizForesight, that I do think are revolutionary kinds of architectures and technologies, and that's so much fun. I get so much value out of that as a programmer. But the downside is when I think that something's going to work really well, and then it seems like just a rat's nest of bugs I need to sort out. But as I start going down that rat's nest, it's like, oh, wait a second.

There's a fundamental philosophical problem with doing this whole thing unstructured in natural language. And it's not just, you know, oh, we just need a better LLM or a better machine learning algorithm. It's like, this is fundamentally not going to work.

And I know that's part of exploring and pushing the envelope. And, you know, Jacob reassures me that we're trying to do research here, so sometimes stuff's not going to work out. But it can get really frustrating to me if I thought we were right on the cusp of something and then I have to go a totally different direction. Yeah. For me, I've thought about what it would be.

This hasn't actually happened, but what would be like the worst day is if a client was totally unhappy. If for any reason a client didn't have an amazing outcome, I think that I would personally have a hard time with that. And I realize, to a level that's probably not totally psychologically healthy, that I need to disconnect myself from that.

But I do feel like this strong commitment to that. Well, I mean, it sounds like you're a true partner. But we've talked a little bit about the future. And I'm very curious if you had the opportunity to meet your younger self, would you take that opportunity?

And if you would, would you have any advice for yourself? That's a great question. I think if I could go back to all of the younger versions of myself, it would be mainly a message of compassion and acceptance: it might be hard now, but it's all going to be OK. And a lot of the things that you're catastrophizing or worried about are going to work out.

Yeah, that's a beautiful message. I think I'd do similar. I think in addition to having a lot of compassion for myself and the challenges I was going through, it would be a message of put myself first a lot sooner. Don't put up with other people's BS and let their agendas run my life and actually make that decision way sooner to do what's right for me.

And that would have actually set me up a lot better. No, that's amazing. I really appreciate that. You guys make a great team.

Thank you so much for joining on our show today. And thank you for all of our listeners for tuning in to another episode of the Security Podcast of Silicon Valley. Thanks for having us. Yeah, appreciate it.

It's been lovely talking to you guys today. Thank you, guys. Absolute pleasure. Huge thank you.

I'm your host, Jon McLachlan. I was joined by my co-host, Sasha Sinkevich. And this has been a Y Security production.