74. Augment Code: AI Agents, Engineered to Ship (with Scott Dietzen)

Welcome, everyone, to another episode of the Security Podcast of Silicon Valley. I'm one of the hosts, John McLaughlin. I'm joined by the other host, Sasha Sienkiewicz. And we have an amazing guest to share with everyone today.
Dietz, the CEO of Augment Code. John, Sasha, great to be with you. Welcome, Dietz. Maybe for all of our listeners out there, would you like to share a brief overview of Augment and the mission?
So you can think of us as the anti-vibe coding company, if you'd like. We first looked at LLMs four-ish years ago. We saw the potential to disrupt software engineering. You know, our customers are generally large enterprises that have codebases of many tens of millions of lines.
It's a really difficult task to evolve these large software systems, you know, the ones that run the world. In fact, you could say there's no simple change in a hundred million line codebase, right? It just doesn't exist. AIs are actually phenomenally good at managing detail across, you know, really large pieces of information.
And so they become a great co-conspirator for software engineers to work with in trying to build and evolve these complex systems. So that was what we set out to do. We set out to crack the problem of how we can teach these coding AIs to understand, you know, these Herculeanly large, complex legacy systems and make the software engineer's job much more pleasurable, much more productive, and yield higher quality software. Because most software in the world disappoints, right?
It's not as reliable as it should be. It has security vulnerabilities. It doesn't have all the features that we want. It can be hard to use.
You know, we've had, I believe last year, $2.5 trillion lost in U.S. economic output solely due to software failures.
So how can we make our software so much better? I think, you know, if we can all deliver higher quality, easier to use software with all the features we want, we'll unlock an unbelievable amount of economic potential. The anti-vibe coding co-conspirator for all engineers everywhere working on like ugly, large legacy code bases connected to big enterprise systems. That's it in a nutshell.
You know, from the outside, like, it looks spectacular. Where do you see Augment on this journey? So the big disruption that's happened over, you know, the course of this year is agents. So, you know, the role of the developer is going to move to being a tech lead for a team of agents.
And the most sophisticated developers are already very comfortable in that kind of role because they do it today for humans. But now you have, you know, a suite of agents at your beck and call to do, you know, the less fun, more tedious tasks, and you're able to tee up a bunch of them in parallel and then check on their work. And this is critical, right? We named the company Augment because we think we are still a long way from not needing human insight and intellectual guidance to yield ideal results.
You know, the quality of the code, the decision-making about what architectural approach, what data model, database technology, which cloud service, microservices architecture, you know, the aspirations for the software are still motivated by humans and humans still need to check the results of the work. But I think we are seeing, you know, we're no longer talking about 30, 40% improvements, right? We're talking about 300, 400% productivity improvements with the introduction of agents. Agents are very interesting.
It's the extension of the human force, hence the augmentation of the workforce. But very few people talk about the importance of the data that goes into the agent logic, the context engine. It's important to provide really good data as an input in order to have a good output. Yeah, I think this is a theme that's going to continue, right?
These AI assistants started with code completions, where context was helpful, right? If you wanted to reuse code rather than add new code. But with every higher level of abstraction, as you go into chat to ask questions about a complex code base, or you think about next edit, context matters more. Next edit is where the AI watches you make a change and then infers what else you might need to change along that journey. Every step up that abstraction hierarchy, and agents are the next logical step, requires more insight into how the program works, how it's built, why.
Otherwise, the systems can't be effective, right? Agents need more autonomy. And that autonomy depends on their having knowledge of what's happening inside the software in order to make the right call. So, you know, we spent years of research honing the context engine of Augment to be able to handle large, complex code bases.
It's, you know, now paying us back many times over in the land of agents where our agents are vastly better than the alternative software engineering agents on the market because of that deep understanding of the code that they're working on. Right. So there is a very strong foundation to build that accelerated augmented force to continue the business initiatives, whatever the customers want to execute on. Agreed.
And it makes the software engineering role much more fun, right? You can spec out a new implementation and then say, hey, crank me out a bunch of unit tests and run them. And, you know, do a quick security pass over this to see if you see any concerns, right? I mean, if something breaks in the build pipeline, you can have an agent troubleshoot it so that, you know, humans don't have to go through a lot of the manual tedious work that has historically been associated with these large software projects.
Yeah, that's spectacular. You mentioned security a little bit. What we've seen is like during a red team exercise or penetration exercise, we always prefer to give those white hat hackers access to the code base of whatever it is that they're looking at. And traditionally, like there's some security tools out there.
There's some static analysis things out there. You can run those things. And Augment is certainly not a replacement for those things. But now, in addition to that, with Augment in the mix, you can actually ask it to do very high level, very detailed analysis of a code base and check for paradigms that seem a little bit fishy or might be able to be exploited.
And so even just in that recon space, without even getting into the delicious stuff that all of the agents bring to the table, you can chat with the code base. It really comes to life nicely with that context engine and the context window. I think it's a great point. Most of the discussion around coding AIs is centered on software engineering productivity.
I don't think we focus nearly enough on the higher quality, more feature-rich software that results. Right. And I think these LLMs are going to be a phenomenal tool for helping us measure and improve software quality. You know, I do think there are new attack vectors that we have with AI.
Right. These large language models, they're not deterministic. So, you know, we can't necessarily test all the code paths and know where the vulnerabilities are. And we can't try every possible prompt and know how the LLM is going to behave.
So we're going to have to use systems like Augment in order to help further harden our applications against new security vulnerabilities that are going to come as just an inherent result of AI being an integral part of the software system. So, you know, I think this is the next level up for security. CISOs everywhere are going to be challenged to make sure that these new systems are secured. And, you know, we have to keep in mind that attackers may bring AI to bear in trying to hack into our systems.
So we've got to do, you know, a better job of defense than we ever have. And I have to say congratulations in that department. As I understand it, Augment was the first coding platform to achieve SOC 2 Type 2 out there. And that was last year.
And then this year, the very first coding platform to achieve ISO 42001 certification. And so it sounds like security is absolutely top of mind. It is a priority. And not just in the traditional SaaS, you know, here's a SaaS service.
Let's secure the data. Let's secure all of the endpoints. There's a lot of traditional security requirements for hosting a SaaS service. But now there's all of these AI questions, you know, in the model.
I see the certifications as recognition of work that's been going on for a very long time. The company came together not just with great AI research, because we do have a very talented AI team, you know, that joined us from places like Google and Meta and NVIDIA. But we had, and we continue to have, an extremely strong systems team, you know, people that helped design Snowflake, that helped design Databricks. And so, you know, from the outset, we had a very mature attitude about what it was going to take to build a highly secured cloud service, you know, given customers are entrusting us with their software.
And so we need to lead in that regard. So, you know, the fact that we've been first with each of those proof points, those attestation points along the way, I think is really reflective of how we designed security into the solution from the start, and why security has been very much part of that journey. We're grateful for all of your contributions in helping us build the most secure AI coding platform on the market today.
What is the most common question that you hear from prospects and customers around data privacy, especially and explicitly their source code? So they want to understand that state-of-the-art protection is being brought to bear. They want to understand that there's no way for intellectual property to leak. We have had reports from the model providers that some of our competitors are actually selling customer code to the large model providers for training.
We think that practice is very unethical. Defaulting some customer into sharing their software for training sort of implies maybe you're going to use it internally. But if you then turn around and sell their code to a third party for them to train on, that feels like very unethical behavior. And so we've tried at every juncture to offer state-of-the-art protection, and we've never taken on any sort of behavior that, you know, would be considered unethical, where somebody's proprietary IP is crossing a boundary into another organization without their knowledge.
Yep. At this point, it's pretty clear that data is the new oil. All of the models that are available on the market have been trained on all of the publicly available information. And at this point, data is the new gold standard.
Yeah, I think, you know, the open question is, are models going to start to plateau because we've tapped all of the publicly available software data in the world? Because, you know, there aren't really algorithmic advantages today or compute advantages that are material between the different large language model providers. I do think there's an opportunity around synthetic data. But for that to help us to improve these models further, we need really strong evaluation functions, right?
So the space of possible program designs is not unlike the space of Go games, you know, Go playing strategies. But with Go, there's an outcome: you see whether you won or lost the game. We don't necessarily have that with software. And so until we have a good quality measure, I think it's hard to believe we can get a big return on synthetic data to continue to improve the models.
And so I'm thrilled with all the progress the models have made. And, you know, it certainly looks like we can continue this runway through next year. But then I wonder if the models are going to start to plateau in their capabilities. What do you think is the most important connection with the end user?
How can we capture the interest of the developers? Where do we need to meet developers in order to add value to their daily grind? They have a lot of pain points. They have a lot of tickets.
They have to execute a lot of tasks that are in hand. How can we add value to those developers? Two primary things. One, I think it starts with don't disrupt the developer unduly.
Right. So we have structured our product as a plug-in to their existing development environment. So, you know, if somebody's using VS Code in the Microsoft ecosystem with the Microsoft Marketplace and all the Microsoft security and support, we can preserve that. The second thing comes back to the context.
You know, if you're vibe coding, you don't need context, right? Because it's a new problem from scratch that you're specking out right there and you're playing around. Without context, all you can do is vibe code. With context, the engineer has a true partner.
The AI understands what the engineer is doing, may understand the code base better than that engineer does, and so can help that engineer be more effective. And so, you know, these are the kinds of things that an AI for software engineering is capable of, where these vibe coding solutions just can't cut it. No, I love that. And I remember the first time that I noticed that some of the other platforms had forked VS Code.
I think the word that came to mind when I saw that was: whoops. Because, you know, good luck trying to stay up to date with all of the changes that all of your amazing engineers are pushing to the product at the same time as what's coming out of Microsoft. Those two products, like, it's easy at first, but then they diverge. And then it just becomes more and more challenging as time goes forward.
It's a huge risk to undertake. Agreed. I mean, it basically puts you in the IDE business. You know, I mean, the team that builds VS Code is not small.
It's a larger engineering team than most of the AI coding startups have on their entire staffs. So, you know, the work required to keep their forks of VS Code current with the capabilities that Microsoft and JetBrains are offering to the marketplace, I think, is going to be very painful over time. It is painful, 100%, but there is also the lost opportunity in terms of what else you could be doing with those resources. But when you fork something and you cannot keep up with the maintenance of the project, there is an exposure.
AI is an amazing tool. AI is used everywhere. It's used in healthcare, in finance, in government, etc. But it's also used by attackers.
There are a lot of red team members, or aspiring red team members, who want to find vulnerabilities. And AI tools enable people to do that much faster. When you fork something and you're not able to keep that project up to date, you're exposing yourself to a bit more discovery activity by those members of society. Agreed, right?
I mean, Microsoft has a large security team that does due diligence on their software, and that's why people opt into the Microsoft ecosystem. So I do think enterprises should be careful before they make this decision to opt out. You know, I just have to say, like, thank you. Thank you for taking security so seriously.
It goes back to one of these things, because as an entrepreneur and as someone who just enjoys, like, bringing new things into this world, I think that one of the really important pieces of that is being able to adopt all of that new magical technology. And to really have an impact and to really change the world, it needs to be easy to adopt. I think all new tech has got to be easy to adopt. This is one of the lessons I learned early in my career with some software that was very hard to use and wasn't necessarily robust.
And I was a new grad student coming into the workforce, and I was very naive about how easy and reliable commercial software had to be. So that's a lesson I've taken and brought to every company that I've been part of: you know, it's got to be unbelievably easy or you'll never get the mass adoption that, you know, we startups aspire to. And I really love that dedication to security that you're mentioning, that really was there from day zero. There are things in Augment that you just couldn't add on at later points in time.
And one of them, just to point out, is that proof of possession mechanism, which is part of the AuthZ. The answer to the question "Are you allowed to see this piece of data?" is actually driven by the answer to the question "Do you already have that data?" Right. And so if you already have that data, of course, you're authorized to see it.
And it's just a nice, beautiful example of a well thought out, well engineered, security conscious architecture that would be impossible to add later on. And it solves tons of engineering questions around like, hey, how do you de-dupe? But there it is. Proof of possession.
And people don't realize that AI can be a way for intellectual property to leak. Right. And so if the AI I'm working with is informed by code that I'm not allowed to see, all of a sudden I get insight into algorithms that I'm not supposed to be privy to. And so this proof of possession architecture really prevents the AI from inadvertently informing users things that they're not privileged to have access to.
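For readers who want a concrete picture, the proof-of-possession idea can be sketched in a few lines of Python. This is a hypothetical illustration, with all class and function names invented here, not Augment's actual implementation; the premise is simply that a caller may retrieve a chunk only by presenting its content hash, something it could only compute if it already holds the bytes:

```python
import hashlib
from typing import Optional


class ChunkStore:
    """Hypothetical content-addressed store where possession is authorization."""

    def __init__(self) -> None:
        self._chunks: dict[str, bytes] = {}  # content hash -> chunk bytes

    @staticmethod
    def fingerprint(data: bytes) -> str:
        # The key is derived purely from the content itself.
        return hashlib.sha256(data).hexdigest()

    def put(self, data: bytes) -> str:
        h = self.fingerprint(data)
        self._chunks[h] = data
        return h

    def get(self, claimed_hash: str) -> Optional[bytes]:
        # Authorization IS possession: only a caller that already holds the
        # bytes can compute claimed_hash, so serving them leaks nothing new.
        return self._chunks.get(claimed_hash)


store = ChunkStore()
h = store.put(b"def secret_algorithm(): ...")
assert store.get(h) == b"def secret_algorithm(): ..."  # caller proved possession
assert store.get("0" * 64) is None                     # no hash, no data
```

As a side effect, the content hash doubles as a natural de-duplication key, which is the engineering win around de-duping mentioned above.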
That's right. There is an interesting study that predicts that about 50% of all of the enterprise software engineers will use ML powered coding tools by 2027. What are your thoughts on that prediction? And what do you see as the biggest hurdle for the organizations to adopt AI?
I think it's going much faster than that. You know, in the early days, which, you know, now means nine months ago, right? We were discussing, our customers were seeing 40, 50, 60% productivity improvements with the product. Now with agents, you know, we're talking about factors like two, three X kinds of improvements. Especially, you know, the best developers seem to be getting the biggest bang for the buck and seem to be the ones that love our product the most.
Because they see that increased productivity. I mean, the industry can't afford to ignore those kinds of improvements, right? You know, again, any piece of software that I've ever been aware of has a long wish list associated with it: architectural enhancements, debt that needs to be paid down, new features that should come, you know, security vulnerabilities that should be followed up on. What if we could pay that all down, right?
What if we could deliver, you know, all the software we aspire to? You know, if we just fix our current software, it's going to create all these other ideas about other software that we would like to have in the world. And so, you know, I just see so much potential for these tools to not just make software engineering a lot more fun and more productive, but to deliver all the software that a business could use to help itself grow and proliferate and thrive. Maybe you'd like to share with everyone what's been your proudest day so far on the journey with Augment.
I can't point to one single day. You know, there's been a bunch of customer victories, you know, our first six-figure deal, our first seven-figure deal; those are natural milestones. I was tremendously enthusiastic about self-service, getting the product to the point where we were comfortable that every developer in the world that wanted to try it would have a good experience. I would say that probably is my single favorite thing so far.
No, that's incredible. And thank you for sharing, too. I'm curious as you look into the future. Like, there's been so much progress made that even nine months ago feels like a long time ago. If you look into that future, and I'll let you decide how far we'd like to look into the future.
What does success look like? What is the end game here with Augment? So first and foremost, success would be that we are the leading product for real software engineers. You know, the people that are looking after what we call the software that runs the world, the tens-of-millions-of-lines existing production code bases.
We would like to deliver the best and leading solution into that marketplace. So, you know, I think there's a supreme amount of work to continue to do better in that regard. You know, we see agents as being able to help automate the full software development lifecycle, not unassisted. Right.
So our vision is that it is the combination of human intellect and agent machine intellect, that powerful combination, that can allow us to streamline all of software engineering, all of software development, because developers do a lot more than write code. And so we want to be able to help them all the way along the journey. And we think the agents are going to grow up and live independently in the cloud alongside engineers. Like there'll be agents that wake up in the event of a site failure or a break in the CI/CD pipeline.
Or a security attack. Right. The agents can wake up and start doing work and then report to their human masters on exactly what's been going on and what they recommend in any given situation.
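The event-driven loop described here can be sketched roughly as follows; all of the event names, findings, and handlers are hypothetical illustrations, not Augment's actual design:

```python
from dataclasses import dataclass


@dataclass
class Report:
    """What a woken agent hands back to its human reviewer."""
    event: str
    findings: str
    recommendation: str


def on_event(event: str) -> Report:
    # Route each production event type to a specialized agent. Each agent
    # investigates and reports back rather than acting fully autonomously.
    handlers = {
        "site_failure": lambda: Report(event, "pod OOM-killed", "raise memory limit"),
        "ci_break": lambda: Report(event, "flaky integration test", "quarantine the test"),
        "security_alert": lambda: Report(event, "suspicious login burst", "rotate credentials"),
    }
    return handlers[event]()


report = on_event("ci_break")
print(f"[{report.event}] {report.findings} -> recommend: {report.recommendation}")
```

The key design point is the return type: the agent's output is a report for a human, keeping the "human masters" in the loop that the conversation emphasizes.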
So this vision of, you know, many cooperating agents to look after all of the production software, I think, is going to make the job of caring for, maintaining, and evolving these large software systems so dramatically better. I don't think there's a single point when we hit that. But, you know, we're certainly seeing that inflection already, you know, where agents can take on work. You know, six months ago, you would struggle to have an agent do something that took you 10, 15 minutes.
Now I think an agent can take on an hour, 90 minute kind of task. Next year, hopefully that doubles and doubles again. And so we continue to build up our capabilities so that agents are able to do ever more without human assistance. But there's no shortage of problems to solve.
These businesses that use this as sort of a private equity play, to cut their teams and do the same work with fewer people, are missing a colossal opportunity. Because every business I know that's software dependent could grow substantially if it had better, more capable software. And so we think the investment thesis, the, you know, software abundance opportunity, is going to come as a result of these coding AIs. And that's the world we want to help deliver: where all the software that you want and aspire to is available to you.
And this is, again, where the context engine is super important. The relevance of the information that you extract from the knowledge in the source code and feed into the logic that produces the output is super important. Yes. You know, there were naive approaches where, you know, some people thought they would fine-tune these models on company software.
First, people get nervous that there is a potential for leaking IP. But, you know, much more so, the software changes constantly and training models is expensive. And so the idea that every time you pull a branch or do a check-in, you're going to train a special model for those engineers that are working on it just doesn't hold up. Right.
You want your AI to be current. It can't be looking back in time, because it won't help you make the right decisions about the work you're doing inside of the software. And then the other naive approach was the long context folks that wanted to pass, you know, a 100 million line code base as context to a large language model. You know, with these models, of course, the cost of context scales with the square of its length.
And so any notion of passing in that much context is ludicrous. It will be horrifically slow. But it's not like the model can assimilate that much data at once anyway. Right.
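A quick back-of-envelope calculation makes the quadratic point concrete; the tokens-per-line and chat-window figures below are rough assumptions for illustration, not measurements:

```python
# Why full-codebase context is infeasible: self-attention cost grows with
# the square of the context length.
TOKENS_PER_LINE = 8          # rough assumption for source code
chat_tokens = 8_000          # a typical chat-sized context window
codebase_tokens = 100_000_000 * TOKENS_PER_LINE  # the whole repo as context

# Quadratic scaling: the cost ratio is the square of the length ratio.
relative_cost = (codebase_tokens / chat_tokens) ** 2
print(f"~{relative_cost:.0e}x the attention cost of the chat-sized prompt")
# -> ~1e+10x the attention cost of the chat-sized prompt
```

Under these assumed numbers, stuffing the repo into context costs on the order of ten billion times more than a normal prompt, which is why a retrieval-style context engine is the practical alternative.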
Just think as a human, if you had to read 100 million line code base and then started getting asked random questions about particular parts of it, you would have to refer back. You would have no chance of having memorized the entire code base. And these models are the same way. And so the approach we've taken in building the context engine we think is absolutely the right one.
And it's going to continue to pay back for the lifetime of the company. Yeah, I really see that and love the analogies to, you know, the human side of things. What do you think might be the most challenging piece to get to that vision, to get to that future? Truthfully, there's a lot of developer education that still needs to happen.
You know, what we see is the most forward-leaning developers, the ones that jump on the next latest and greatest thing every time to try it out. They are having huge success and, you know, just falling in love with the platform. But I would say there's still a lot of opportunity to educate the rest of the marketplace in how capable these tools are and how to take advantage of them. Many of the developers, you know, on Augment have moved to what I call metaprogramming, where they don't necessarily directly manipulate the code themselves anymore.
They count on the AI to do it for them. So they spec out to the AI what they want. They let the AI generate the code. Then they look at it and they tweak it.
But they do it through the AI rather than directly manipulating the code. It's a very different paradigm, you know, for developers. And, you know, some of them even need to see it in action, right, by watching how productive another developer is through taking advantage of all of these capabilities. And so there's that old quote, you know: the future is already here, it's just not evenly distributed.
I think there's a huge opportunity for the rank-and-file engineer to embrace these productivity returns from the new tech. So instead of molding and building a Lego piece, you can actually build a whole model out of many Legos. Yes. And do it in parallel, right?
You can have one agent, you know, work on whatever, the hull to your ocean liner, and others could start building the cabin and the engine and the propellers. So, you know, seeing how the pieces all fit together, that's still human insight. But, you know, being able to be an architect to orchestrate a bunch of agents to do this work much faster and much less painfully. You know, programming is very tedious because computers need an extremely accurate description of the behavior that you want.
Most of our programs don't even have any abstract specifications, right? The spec is the software itself. And so the fact that the AI can assimilate that specification and then bring it to bear in helping you evolve the software is incredibly powerful. Yeah, we're moving up this stack.
Everyone used to work at the compiler level. Then Java came along at some point, claiming to be the universal language that could compile to all of these different platforms. But now we're moving up another level of the stack, where you can speak in human language and have the certainty and trust in a platform to produce results that align with the company's vision and the company's current posture. Yes, I've been around long enough that, you know, I wrote some BLISS and some machine code early on.
So I'm, you know, a huge believer in the third and fourth generation languages in terms of the higher levels of abstraction that they bring to bear. But I do think AI is the biggest productivity unlock that we will see in this industry, you know, throughout its entire history. And it may be the last one that we need in terms of improving how we get software into the world. No, I love that.
And I really see that as well. Back in school, I studied computer science. And of course, because I'm a security guy, I zoomed right in on all of the security pieces of computer science. But if you were a young person today and here we are, we're entering this world.
Maybe you're studying computer science, maybe electrical engineering, but something in that engineering space. And you're surrounded by all of this change with AI. Would you have any advice for all of those students out there who are facing this change as well, like with us, but they're earlier in their careers and their journeys? I am still a big believer in human intelligence and the role that human insight has in complementing machine intelligence.
I gave a talk at my alma mater probably 12 years ago where the big concern was that all the interesting work had been done in computer science. And I just found that to be the most preposterous statement. I think clearly AI has unlocked that, but there's so much more in the science of information and so on. I think it's a wonderful discipline.
It's obviously been very rewarding for me. And today Augment is the fifth startup that I've gotten to be part of. All of them depended on large scale software. And, you know, the first four involved horrific pain.
You know, I did not understand before getting into the real world how hard it is to make software reliable enough, easy to use enough, secure enough. You know, I had come out of a grad school where, you know, testing was what users were for, not something that we had to do in advance. And so that naivete kept me from recognizing the amount of pain necessary to deliver great software. And so it's so much fun, given I did a thesis in machine learning, albeit a long time ago, to see those two threads come together, to see machine learning constructively brought to bear in making our software better.
Post-Pure Storage, I wasn't sure I was going to work again. But I had major fear of missing out on these two threads, machine learning and large scale software, coming together, and I did not want to miss this fun. I'm curious, if you could meet your younger self and bring your younger self any piece of advice, what advice would you have?
I think I hit one of the themes earlier, right? At the first startup, I learned how hard it is to make software commercially viable. You know, to get mass adoption, software has to be really easy and it has to not break and do things wrong.
Ironically, as software ages, it can grow a lot of hair, right? If you look at the Oracle database today, it's a very painful product to use and adopt. But in the early days, I can tell you it was incredibly easy to build and deploy those applications. And they just ran. You know, similarly with open source and Linux.
I remember sharing with my Windows friends that we had Linux servers that had been up for years and never been rebooted. And they were just like, how is that possible, right? You know, they just didn't understand that software could be done and be that reliable. So I think the expectation of what software needs to get that broad commercial adoption was something that I learned painfully at the first startup, but then I've been able to exploit it ever since.
I guess the other thing I would say is trust your instincts. That first startup went sideways for a while. We had customers, but it never took off. It never inflected up.
I remember saying to my mentor, you know, boy, five years in, I knew we could hit the moon. I knew we could crash and burn, but I didn't know we'd be somewhere in between. And, you know, if I'd really trusted my instincts, then I would have realized, you know, that we were missing the target more dramatically than I knew. And so trust your instincts and, of course, hone them through hard work and good luck.
Are there any lessons in that journey that you translate into the AI-driven startup, especially in terms of building the team, shaping the culture, navigating the market? I would say across the board, yes. Entrepreneurship is a team sport. So getting everyone to check their ego at the door and focus on building the best possible team and the best possible solution.
Avoiding politics. You know, politics kills organizations, when you start seeing managers talk about I and me and how they carve up the pie relative to other managers. This seems to happen along the journey. And it's so horrifically painful. Whereas if you can keep the team viewing the company through external lenses, you know, how do we better serve customers?
How do we better compete and, you know, win business in the marketplace? And then trust that the pie is going to be fairly apportioned. One of the things we strive to do at every company is tell people they don't need to negotiate their own offers, because you've got a leadership team that's going to work really hard to get rewards right.
Because you don't want to reward negotiation. I know at some of the larger companies in Silicon Valley, I've heard managers tell engineers, look, I'd love to give you a raise because you deserve it, but you need to go get an offer from a competitor before I can give it to you. That's ludicrous behavior to me, because you should reward people for the quality of their work, not, you know, the fact that they could potentially defect somewhere else.
Another thing I like is radical transparency. Something that happens in many organizations is information is used politically because it's passed selectively down the tree. If I in the CEO role have some piece of information and I share it with certain people but not other people, then that information becomes a source of power that can be used politically. So it's much better to broadcast as much information as you can so that everybody has access to it.
One of the things we do at Augment is share our board decks with the entire company, as well as the results of those conversations. Because we want everyone to have insight into not just what's going well, but what we need to work on. Where are the areas we need to get better? And you can arm a team by giving them that insight.
If you just think your job is to cheerlead and not point out where the company needs to get better, you end up missing a huge, huge opportunity. And that culture builds internal trust. But internal trust translates into the external trust established between the company and its customers, and that's a very important part of the entire relationship between organizations.
A hundred percent agree. And I will say it's a challenge when you've got hundreds of thousands of developers using your product all around the world. You've got to do everything in your power to make sure each of them is having a great and safe, trustworthy experience. But it's amazing to have that feeling of we're all in the same boat together.
We're traveling, you know, through this future and to this destination. And everyone has each other's backs. That culture is always the aspiration. As I said, entrepreneurship is a team sport.
And so it's all about the team and never about the individual. That legacy was left at Pure Storage as well. So I joined Pure Storage after you had gone on to your next thing, but it stuck. And I like to say that, you know, that culture, it kind of spoils people because it's so good.
It's so positive. It's so productive. So thank you. It was very much a team effort, right?
I use the stone soup analogy. I didn't build the Pure culture or define it myself, but I did bring the pot and I talked about the importance of culture. And then everybody showed up and chipped in different things. And, you know, we kept the best and shed the rest and ended up with a really good culture.
All the gratitude in the world. And I'm so thankful that such smart individuals are focused on these really challenging, really interesting problems of how do we make AI adoptable? How do we make it fun? How do we get it into all of the products and all of the engineering processes that already exist today in real enterprises?
So all the gratitude in the world. And vice versa. I could not be more thrilled to have YSecurity on this journey with Augment. You've contributed hugely.
We would not have come so far so fast so well without you guys. Thank you. It's humbling and it's exciting. And we're grateful to be part of the journey.
Ditto. And with that, a huge thank you for joining us for this episode of the Security Podcast of Silicon Valley. I'm John McLaughlin, one of the hosts, joined with Sasha Sienkiewicz, the other host. And we had the great honor and pleasure to hear from Dietz, the CEO of Augment today.
Thanks again for having me. Great fun. Thank you.