Episode 7
Building a start-up on shifting AI foundations, with Dave Slutzkin
While most workers focus on how to use AI tools in their jobs, some entrepreneurs are building new businesses on top of these tools, or leveraging AI in other, unique ways. But it's a fast-paced world, and to build an actual, lasting business you have to deal with the changing expectations (or skepticism) of customers and the changing abilities of GenAI platforms.
In this episode, Anthony and Kris discuss the intersection of AI, data governance, and productivity with Cadence founder Dave Slutzkin. Dave takes the pair through his experience building on this shifting foundation, the problem his start-up is focused on solving, and his take on why many companies abandon vibe coding too soon.
They also discuss:
- 00:00 Introduction to AI and Data Governance
- 02:50 The Role of AI in Creative Industries
- 05:33 Dave's Background and AI Focus
- 08:25 Challenges in Organizational Communication
- 10:41 AI's Impact on Individual Productivity
- 13:33 Navigating AI's Relevance and Guardrails
- 16:04 Addressing Data Abstraction Issues
- 18:46 Vibe Coding and Productivity Gains
- 26:07 The Trust Dilemma in AI Development
- 28:06 Navigating the Hype Cycle of AI
- 31:01 The Consumer Impact of AI
- 33:55 The Future of AI: Predictions and Challenges
- 40:19 Governance and Regulation in AI
- 45:04 The Human Element in AI Automation
Transcript
Anthony Woodward (00:01)
Welcome to FILED, a monthly conversation with those at the convergence of data privacy, data security, data regulations, records and data governance, or even AI governance these days. I'm Anthony Woodward, the CEO of RecordPoint, and with me today is my co-host Kris Brown, our EVP of partners, evangelism and solution engineering. How are you, Kris?
Kris Brown (00:21)
Mate, how are you? It's good to hear from you again. Yeah, I think, you know, as I said, certainly this season we've spent a lot of time talking about AI. So, probably time to stick it in the front there, I think, and start talking about it from the get-go.
Anthony Woodward (00:35)
Yeah, yeah. I think the new intro coming, some work for us to do here on the FILED podcast. But today is another conversation about AI, I think.
Kris Brown (00:44)
Yeah, look, I think, you know, again, we spent a lot of this season talking about AI and AI governance, as you just mentioned. Today, we're going to take a little bit of a different perspective on the industry as a whole, talking to someone who is building tools around the AI technology pieces. So, today on the podcast, we've got Dave Slutzkin, a startup investor, consultant, founder, general raconteur. Welcome, Dave.
Dave Slutzkin (01:12)
Hello, thank you. Good to meet both of you and great to be here.
Kris Brown (01:16)
Yeah, no, look, thank you. And as I said, look, I'm going to get this out of the way early: the general raconteur. You know, I have been accused of being an exaggerated storyteller, for want of a better definition, but I really do like that. So, I'm going to steal that one if you don't mind. Certainly, for those who might be seeing some of the clips and other things, you know, Dave, I do apologize that you're a suffering Arsenal supporter.
This is the first recording of our podcast after Liverpool has gone on to win. So, very, very much apologies there for Arsenal coming in second yet again.
Dave Slutzkin (01:53)
I'm glad you had that ready to flick up as the virtual background. That's good to see.
Anthony Woodward (01:58)
I'm sorry, I missed all this. Did Liverpool win something?
Kris Brown (02:02)
Oh, did Liverpool win something? Please, as a suffering Arsenal supporter yourself, please, please let's not go there. It could be a very long podcast, and it's got nothing to do with AI. But again, I think the other interesting thing, David, and we've probably got a little bit more in common there outside of a love of the EPL, is that, as I understand it, once upon a time you did a little bit of a radio show, especially in the EDM space. As I said, I'm a suffering DJ myself from many years ago.
Dave Slutzkin (02:30)
Yeah, back in the good old days I did a show on community radio here in Melbourne, Australia on a station called 3RRR, which is actually a stone's throw from my house now, a fantastic community radio station. I did that for seven or eight years till the kids came along, and yeah, a lot of electronic music and EDM and a whole bunch of stuff. It was really fun. Radio is a fantastic medium. It still exists. It still has a place even in this, you know, magical future that we currently inhabit.
Kris Brown (02:49)
I think that... Yep.
Look, and I think even from my side, as a suffering club DJ, it's, you know, looking at the tools and things that they have access to now: are these AI-based tools looking to replace the DJ? But you know, it's still all about the vibe for me. I think the interesting thing is, in that room, be it in a radio station or in a booth, understanding what the people want to listen to and understanding how to make them get up and dance or enjoy themselves.
It's still that piece that I think is missing. Certainly, they can stick two songs together at the moment, but it's interesting watching all of those spaces.
Dave Slutzkin (03:33)
It's interesting, not to take this on too much of a tangent, but when we think about AI: AI is theoretically gonna replace filmmakers, and it's gonna replace musicians, and it's gonna replace DJs. But is that actually true, and is that what people want? There's something about knowing that there's a human mind which is crafting the experience for you in real time, whether it's as a DJ, whether it's as the musicians who are on stage, or in the case of a movie, whether it's been created by an auteur. Maybe if it's a
new Marvel movie, maybe it doesn't matter so much. But personally, I feel like that human touch is what comes through in great art, or in a lot of art, and I don't know that AI is going to magically replace that.
Anthony Woodward (04:12)
Yeah, look, I mean, I think, you know, just wanting to go back a little bit: I do like that you used the word vibe today. I think it's very in tune with the concepts of vibe coding, but before we get into that and talk about some of those things, it'd be really great today to dive in a little bit and talk a little bit about your background. I know that, you know, I was looking before the podcast at some of the places you've been, and I'm like, I think we have been passing in the night.
Places we never actually met before, but I'm like, how? I was there and he was there. And yeah, you've been around a bit and done quite a bit, you know, particularly here in Australia.
Dave Slutzkin (04:48)
Not only that, I ran a company called SitePoint and you're running a company called RecordPoint. So, clearly, we're basically siblings. I've been writing code for 30, 35 years, since I had an Apple IIe originally that we got from my uncle a very long time ago. Writing code professionally for 25, and running companies and startups for 15-plus as CTO, but also as CEO or GM in various places. So, I've been lucky to bounce around a whole lot of things, and fell into startups
Anthony Woodward (04:55)
Exactly.
Dave Slutzkin (05:18)
in about 2008-2009 when, certainly in Australia, that barely existed as an industry at the time. I was lucky to meet the right people and got to go in and run some things and learn a whole lot, and do half an MBA around the same time. And I've been lucky to invest in a bunch of things and meet a whole lot of really interesting founders, and be in a position to start a few things myself over the years, including Cadence, which we might talk about a little bit later. Although we don't have to; I'm happy to talk about whatever makes the most sense here.
But yeah, I've been really lucky to, you know, I was into computers when I was, you know, 12 years old and it turns out that computers were going to be the story of the next 35 years. Who would have guessed? Whether they'll be the story of the next 35, whether you get to actually program, is a whole other question.
Kris Brown (06:03)
Yeah, it is interesting, and I think, you know, certainly you've had an opportunity to build a bunch of cool things. As you said, you've been involved in that space. You're now a little bit more focused on AI, and we will dive in a little bit to Cadence, but what was the turning point? Yeah, I'm kind of interested, you know, just looking at some of the things that you have done in the background: what made you decide to focus in this area around AI?
There's obviously the gold rush, and we're all racing out to make sure that we're a part of this. RecordPoint themselves have sort of been playing in this space for quite some time as well. What convinced you that it was worthwhile joining or being a part of this?
Dave Slutzkin (06:40)
Basically, the problem that I, or that I and my co-founders, wanted to solve is something that we had looked at on and off for six or eight or ten years, but now it's possible to solve it. That's the main reason. The secondary reason is I'm always very interested in where the world's going, and
I would be and have been looking very closely at AI: researching, trialing, testing, trying to work out what I think is going to happen to the world, partly because I've got an eight year old and an 11 year old and I want to know what world I'm bringing them into. But also, I'm very curious; I tend to like to understand where I think things are going. So, I've been deep in AI for various reasons, personal reasons, just for curiosity and interest, for years now. But also, as I said before, we'll talk about it.
Okay, the problem that we're trying to solve is one of organizational communication, especially in software engineering or dev or product teams: making sure that the right information goes to the right people at the right time. That's been very hard historically. We have platforms like Jira or Slack or whatever, which put kind of band-aids over communication, especially in a remote world, which is where just about every company is as of the last five years.
But now with AI, we think it's possible to really help communication, as opposed to just band-aiding the way that people have historically done it. So, yeah, it's basically that the technology is now at a point where we can solve a problem that we've all experienced and looked at over many years and wanted to solve. So, it was a very clear motivation for us to jump in and actually do it.
Kris Brown (08:17)
Yeah, look, I think the interesting thing there is it's a very similar conversation to how I came to be here at RecordPoint. Twenty-odd years ago, we were looking at the traditional information governance model of: you deploy something to a desktop, it connects into the authoring tool, like a Word or an Excel or a PowerPoint. And certainly, I think this audience will be very familiar with some of that add-in mentality. At the time that I finished writing a document or sending an email, I should tag it, and I should do this or that or the other.
And while that was mildly successful in some organizations, it was wildly unsuccessful at many others. And as the technology has grown, obviously through machine learning and then on into AI, that ability to remove that problem of almost apathy around information governance was the piece. And so, I think it's a very, very similar story to our own as well, in that sense that we've been drawn to the technology stack
because it's able to help us solve problems. We've been talking about these problems for a very, very long time. And certainly, I think the first time I spoke about this problem with peers was the early 2000s, possibly even the late 1990s. And it just wasn't possible. There were lots of other tools: there were workflows that were built, there were prompts that were built, there were UIs that were built, all sorts of things built to try and help you do the next step, make it easier, make it simpler, make it faster.
To remove the problem altogether is again another level of problem solving. And I think we're at a unique inflection point from a technology perspective, because we've got these tools now, agentic-style AI, that can do things for us. It is very, very interesting.
Dave Slutzkin (09:53)
And you're right, I think the user experience fundamentally matters, and I think that's what we start to unlock with these tools. It's not that AI is necessarily magical, though in some spaces it might look like it. But if it unlocks certain user experiences that were not previously possible, that's what matters, especially in the field of governance or policy, these sorts of fields where people should do things. But the problem with should is that you have to make it easy for them, otherwise they won't do it or they'll do a bad job of it.
Kris Brown (10:24)
Easy and show value, right? Like, you could make it as easy as anything, but if I don't see any value in it, I won't do it either. So, the should comes with both of those. I think that's a really strong point.
Anthony Woodward (10:36)
Yeah, I mean, I think it's really interesting, you know, hearing that background and the linkages there. And in that conversation about should, one of the things I'd really like to understand is the problem you're solving with Cadence, really diving in more deeply. But probably even to come back a little bit in terms of the framing of the AI conversation: we on this technology side, even on this data side, are seeing this coming at such a speed, but that isn't a speed of the real world.
Right? It's not a speed that the average folk are going to cope with. You've sort of unpacked a little bit of what it looks like you're doing with Cadence; it looks like you're trying to solve some of that problem. Is that fair?
Dave Slutzkin (11:15)
Yes, I think that is fair to some extent. What we're trying to solve in the short term... there's sort of two horizons for the way we think about this. In the short term, we want to solve problems for individuals, and then we grow out and we earn the right to start solving problems for organizations themselves. But only if we can solve problems for individuals; exactly what we were talking about before about the should. If I'm asking individuals to, you know, in some way, use a system that's going to give information to their manager or to
the rest of the organization, there's a limit to how effective that's going to be if it doesn't feel like there's value for them, exactly as you said before, Kris. There has to be value for the individual, and the individual has to see value. So, what we're focusing on to start with is that challenge, especially in growing technology organizations where too much is happening. The thing that we hear frequently from individual contributors, from engineers or designers or product managers, is: we have too many meetings and I still never know what's going on.
They actually want to know what's happening in their organization. They end up sitting in a whole bunch of meetings or reading a whole bunch of Slack channels or looking at a whole bunch of notifications, some of which are relevant to them, most of which aren't. So, it's hard to get the signal from that noise. What we're trying to do in the very short term with Cadence is...
use AI to help them, to help the AI understand what the person is doing, and then the AI can help the person understand what is relevant to them out of the noise, the undifferentiated notifications and messages that are happening throughout their organization. Give them a narrowcast, as opposed to a broadcast, understanding of what's going on in their organization and how it is relevant to them specifically.
Which means that, first thing in the morning, ideally what we can show someone is: here's three dot points of what matters to you from the rest of the organization in the last 24 hours. Here's 45 dot points of things that don't matter to you. Feel free to browse through them if you want, but we know what you care about, we know what you need to know about, we know what you're working on. Here's what actually matters to you right now.
Here are decisions that other people have made that affect you. Here are changes to strategy that might affect you. Here's cultural things. Here's communication that might affect you. But everything else, the fact that the marketing team just signed a new client, that's cool, that's interesting. You might want to be informed about that, but maybe it's not specifically relevant to your role right this second. What we see in a lot of people is that they feel the need to stay across 30 or 40 or 50 Slack channels.
and a whole lot of GitHub notifications and a whole lot of Jira notifications and a bunch of other things. And that's actually quite hard in practice. So, periodically they declare bankruptcy on that and just ignore everything. And the rest of the time it just gradually builds up and up until such time as they do that. I hope that makes some sense because then later on we try to solve problems for the organizations. But as I say, we have to solve the problems for the individuals first. Exactly what you said, Kris.
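To make the narrowcast idea concrete for the technically curious, here is a minimal sketch of the kind of relevance filter Dave describes. Everything in it, the topics, the scoring, and the threshold, is an illustrative assumption rather than anything Cadence has disclosed:

```python
from dataclasses import dataclass

@dataclass
class Update:
    source: str      # e.g. "slack", "jira", "github"
    text: str
    topics: set[str]

# Hypothetical profile: what we believe this person is working on right now.
PROFILE = {"payments-service", "q3-migration", "on-call"}

def relevance(update: Update) -> float:
    """Score an update by its overlap with the user's current work."""
    if not update.topics:
        return 0.0
    return len(update.topics & PROFILE) / len(update.topics)

def digest(updates: list[Update], threshold: float = 0.5):
    """Split the day's firehose into 'matters to you' and 'browse if you like'."""
    matters = [u for u in updates if relevance(u) >= threshold]
    rest = [u for u in updates if relevance(u) < threshold]
    return matters, rest

updates = [
    Update("jira", "Payments cutover moved to Thursday", {"payments-service"}),
    Update("slack", "Marketing signed a new client", {"sales"}),
]
matters, rest = digest(updates)
print(f"{len(matters)} dot points that matter, {len(rest)} you can browse later")
```

The split mirrors the behavior described above: a few items surfaced as mattering, with everything else still browsable rather than hidden.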
Kris Brown (13:50)
Hmm.
And so, let me posit this, Dave. Ultimately, I then see that you're freeing up time for the developer, the team member in an organization like that, hopefully with the goal of increased productivity, which is a solve up the chain for your organization. Using AI is very important there, but I look at
poor examples of something similar, which is my feed on Insta or TikTok or Facebook, where for months there I got nothing but dolphin videos and I'm not entirely sure why. What are the guardrails there that you're thinking about from a productivity perspective? Or is this part of the problem that you're trying to solve: what is relevant to me?
Dave Slutzkin (14:36)
We have to be really close to you, we have to be really close to the individual contributor or the line manager or whoever it is, to understand what they're working on and what is relevant to them at any given time.
That's the guardrail: we keep them in the loop on what matters to them and what doesn't matter to them, what they're working on and what they're not working on, what they care about and what they don't care about. It's staying very close to the individual in various ways. I won't necessarily go into detail about the specifics of the product right now. We're in very early testing. We're testing with a few trusted people and we're not launching publicly quite yet. But what we feel like we can do is get really close to the individuals. Again, that's something that AI enables. Get really close to the individuals, understand what they care about.
Kris Brown (14:59)
The feedback loop or something there.
Dave Slutzkin (15:18)
Maybe dolphins are something they care about right now, but maybe not. Maybe we need to give them the feedback channel to say to us: actually, that's enough dolphins, tell me about this other thing instead. Yeah.
Kris Brown (15:29)
It's difficult. You've got to scroll quickly, just so we're clear. You've really got to scroll quickly.
If it looks like a dolphin video, don't pause.
Dave Slutzkin (15:35)
You can't pause, otherwise TikTok
gets too excited. But maybe it knew, maybe it knew that you needed dolphins at that time in your life and maybe that's what got you to here.
Kris Brown (15:44)
Possibly.
I might have to talk to my psych about that.
Dave Slutzkin (15:48)
Yes.
Anthony Woodward (15:52)
So, I think, you know, exploring that slightly more deeply: I get the point of curating the conversation, and I get your point of being able to bring things in and out. But one of the issues we're really seeing as we talk about AI, certainly from this kind of data governance view, right, is that you are bringing through levels of abstraction that, you know, effectively
are interpreting things that are occurring in the organization. And so, we've seen some examples of this ourselves. I'm really interested in how you're trying to tackle it: where, as you bubble up change requests, as you bubble up pull requests from GitHub or other pieces of communication, the
tooling we have today isn't building agentic capabilities that are truly personalized and tuned to you. And I know that's sort of what you're focused on, but the LLMs themselves don't get there. And so, what you actually get is an atrophy of the data coming through and the way that's being summarized, to the point where it's a level of, you know, 1984 groupthink. How are you thinking about tackling that issue? Because it's bubbling through the industry as a big one in, you know, all the conversations I'm having.
Dave Slutzkin (17:05)
The problem at the moment is that AI is so transactional, and all the big model companies are working on giving it memory in various ways and trying to help it understand a little more, but that's proving to be hard. There are limited context windows, when I think about the specifics of the way that it works. Technically, we're up to broadly million-token context windows, but that's not actually a million tokens, because it has to do selective attention across those windows. So, there's actually a limit to how much it can know in any given situation, which is a challenge for any model. You can retrain, you can train a whole new model on your
Anthony Woodward (17:25)
All right.
Dave Slutzkin (17:33)
personal organizational data, but that's not something that anyone is doing yet. That will happen at some point in the next five or six years. The challenge is having enough of that context to understand what actually matters in a given situation. Otherwise you end up with, as you said, Anthony, this sort of, almost a downward spiral.
It's not quite garbage in, garbage out, but it's information that gets gradually attenuated each time it's used, and falls further and further each time it's used, to the point where it's not actually super useful anymore, and potentially damaging at some point. What we're trying to do, what we believe we have to do to be effective at this, is have a canonical data store which is actually correct,
and which we know is correct, and where we put a human in the loop whenever anything goes into that canonical data store. So, we engage with each of the individual contributors; it's another reason we have to engage with individual contributors. We have the concept of certainty built fundamentally into our system: how certain is the LLM about this particular thing? If we're talking to you, Anthony, in some context and you say something about Kris, and we don't know what Kris's role actually is, we need to ask you the question so that we can understand that. We then put the data into a data
store, which is our canonical data store, some sort of structured data store, whether it's an RDBMS or something similar. We put it into a structured data store, and then that data is, I'm going to say, correct, at least as correct as we can make it, and we have some concept of certainty around which bits are particularly correct. The smarts that we then provide are deciding which bits of that context are important for any given use case of the AI, use case of the LLM, because we know that we can't throw it all into a context window. It's just too big. It's not only slightly too big; it's too big by multiple orders of magnitude. You're talking trillions or tens of trillions or hundreds of trillions of tokens that would have to go in for it to fully understand organizational context.
All your GitHub pull requests from the last five years and all your Jira tickets from the last 10 years and all your Slack messages from the last decade, whatever that might be. It's just too much context. And the foundation models, the models themselves, won't be able to magically cope with that context anytime soon. So, our value add is: we have that canonical data store, we pull the data out, we help individuals put data into that canonical data store so that we actually understand. And we have a taxonomy that we put over that, so that we know broadly
what's happening in a technology organization, and put it in there, and then we funnel that into the LLM at each stage. But what we're not doing is using the output of that LLM to go back in without a human in the loop first. So, what we hope that leads to is that our data is continually, at least, not getting worse, and hopefully continually improving in accuracy and correctness. So, then we can base our conclusions on something that's actually accurate,
Anthony Woodward (20:08)
Right.
Dave Slutzkin (20:24)
as opposed to something that is garbage in, garbage out.
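A rough sketch of the pattern Dave outlines here: an LLM extraction only enters the canonical store once a certainty threshold or a human reviewer has cleared it, and a selector funnels the most certain, relevant facts into a bounded context window. The threshold, the tokens-per-character heuristic, and the schema are all assumptions for illustration, not Cadence's actual design:

```python
from dataclasses import dataclass

CERTAINTY_THRESHOLD = 0.8  # assumed cutoff; below this, a human must confirm

@dataclass
class Fact:
    subject: str
    claim: str
    certainty: float  # how certain the LLM is about this extraction
    confirmed: bool = False

canonical_store: list[Fact] = []  # stand-in for an RDBMS table

def ask_human(fact: Fact) -> bool:
    """Human-in-the-loop gate: confirm or reject before anything is stored."""
    answer = input(f"Is it true that {fact.subject}: {fact.claim}? [y/n] ")
    return answer.strip().lower() == "y"

def ingest(fact: Fact) -> None:
    """Low-certainty extractions never enter the store unreviewed, so LLM
    output is never fed back in as if it were ground truth."""
    if fact.certainty < CERTAINTY_THRESHOLD:
        if not ask_human(fact):
            return  # discarded: don't let attenuated noise accumulate
        fact.confirmed = True
    canonical_store.append(fact)

def context_for(topic: str, token_budget: int = 4000) -> str:
    """Select only the most certain, relevant facts that fit the budget."""
    relevant = sorted(
        (f for f in canonical_store if topic in f.claim or topic == f.subject),
        key=lambda f: f.certainty,
        reverse=True,
    )
    selected, used = [], 0
    for f in relevant:
        cost = len(f.claim) // 4  # rough tokens-per-character heuristic
        if used + cost > token_budget:
            break
        selected.append(f"{f.subject}: {f.claim}")
        used += cost
    return "\n".join(selected)
```

The design choice worth noticing is the one-way flow: data goes store-to-LLM, never LLM-to-store without review, which is what keeps the downward spiral Anthony describes from starting.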
Anthony Woodward (20:28)
So, the example being that Kris likes dolphins and Liverpool, so therefore he likes swimming in the River Mersey. Like, that's the logical conclusion.
Kris Brown (20:37)
Talk about a good way to drown.
Dave Slutzkin (20:38)
It's an obvious conclusion, isn't it? Do they have dolphins in Liverpool these days? Maybe not so much. Maybe an otter.
Anthony Woodward (20:40)
Exactly.
Kris Brown (20:44)
Maybe a porpoise or two, I don't think there's any dolphins. Yeah,
probably.
Anthony Woodward (20:51)
So, I guess the question then comes, you know, we haven't brought up and we made the joke about it earlier, sort of the notion of vibe coding and bringing some of these concepts together. How do you see that? Because it's not just a fundamental shift in how we're communicating, which is one element, you know, of being able to be more, you know,
integrated into your teams and your organization and the department and letting AI feed you more effectively. And let's be frank, you know, we had elements of this that might've had a greater human curation in times gone by. I worked many years ago on some really awesome RSS feeds that were really built for big organizations. And yeah, they were human curated, but they were really, the RSS was designed and the atomic structure we created was targeting down to a developer to go, what do you need to know when you log in this morning? So, these aren't new concepts.
But it's awesome that we're doing that. Where does that take us when we talk really about the productivity gains rather than just the team cohesion gains?
Dave Slutzkin (21:53)
That's a good question. Vibe coding is something that we... Let's define it first. I'm going to take the broad definition of vibe coding, which is coding using AI coding assistants. In some ways they might be agentic, they might be AI-enhanced CLIs, they might be AI-enhanced IDEs.
And then, which one is cool? You know, when we're recording this, it turns out that Claude Code is the cool one. But if we don't release this for two weeks, something else will be cool by the time the podcast is released, right? That's changing really fast at the moment.
Anthony Woodward (22:27)
You've got to be the kind of person that's
always not working with the cool one, so that you're ahead when it changes.
Dave Slutzkin (22:31)
Yeah,
I think that's extremely important to be able to say to people that you were using something before it was cool. I'm on a WhatsApp group of a bunch of people who are somewhat younger than me who are keeping me up to date with the cutting edge in all this space.
Vibe coding, we use a lot internally. We use those AI coding tools and AI coding assistants a huge amount internally and a lot of startups that I advise and work with and have invested in are doing similar. There are definitely productivity gains to be had here, but they aren't free. To some extent they were, right? Like Copilot when it was first released a few years ago, it was just some nice auto-complete.
That was essentially a free productivity gain. It wasn't a huge productivity gain, but it'll gain you, you know, five to 25%, depending on what sort of task you were working on, how boilerplate it was, a bunch of those things. That stuff was broadly free. But the next...
generation of them, which is more what I think of as vibe coding, hasn't turned out to be free, because you actually have to change the way you're thinking about coding. Some of us, I've been coding for 35 years, those of us who've been in that position are fairly stuck in our ways.
We have particular ways that we know we interact with the system, we interact with the compiler or the interpreter or the IDE or whatever it is, particular ways that we debug, particular ways that we do things. But with vibe coding, that is kind of different. You don't usually start out coding; you usually actually write a lot more documentation than you would have in the past, because you need to write documentation to guide the LLM. That's not something that every developer immediately clicks with when they first start doing it. I've seen quite a few people go on this journey
and get there eventually, but not be there for the first day and the first week, and potentially even the first few weeks, of trying to code using AI coding assistants substantively. It's not as easy as it looks like it should be. Or maybe that's not true: it's not as easy as the hype would suggest it should be. Because everyone's saying, this is amazing, what I can do.
Partly that is true. It is amazing what you can do if you're building simple things, if you're building prototypes or websites from green fields, or these sorts of things. But if you're working on existing code, it's more complicated. I talked before about the context window in the context of Cadence, but the same is true in the context of AI coding tools. You can't put your whole code base into one context window. It's too big. It doesn't fit. So, you can't give the LLM all the context,
which means that the coding tools themselves, whether it's Cursor or Claude Code or Roo Code or Cline or whatever it might be, have to try and work out what to put into the context window. And your job as a developer turns out to be, when you're writing documentation, to give them enough information so that they can understand what should go into the context window, so that when they write the code for you, it's approximately representative of what you wanted. If you don't do that, then you get crap. And when you get crap,
when done naively, the developer thinks: this thing's useless, it's not ever going to be useful. But that's because they didn't guide it effectively. It's very similar in my mind to when you're training a junior developer, or someone you're trying to help develop in their career. It's quite possible to give them a task with no context and say, hey, go do this. And then they bring it back and you say, well, that was bad. You did a bad job of that task. But did they actually, or did you just not tell them enough?
So, the parallels between AI coding assistants and humans aren't necessarily accurate, but maybe they are broadly useful: think about this as a person, broadly defined,
who needs context in order to do a good job, and your job partly is to give them enough context to do that. In the same way that it is if you're a manager; in the same way that it is if you're a senior developer working with a junior developer. So, there are productivity gains to be had there. I don't think it's making people necessarily twice as fast
on most enterprise software, especially if it's existing code. I don't think it's quite that level, but maybe it's 50 to 60 to 70% once people get good at it, once the organization gets good at it, and once the scaffolding is there and the structure is there. I think that's generally representative of what I've been seeing out there. So, there's definitely some positives that come out of vibe coding, broadly defined. It's not
a cure-all though, and it takes effort. That's one thing I'm seeing with organizations: this is a new way of working, they have to put in the effort, and when the organization doesn't put in the effort, they don't see the benefit, and then they think it was a waste of time. Well, it was, because they didn't do enough to get over the hump.
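Dave's description of coding tools "working out what to put into the context window" is essentially a ranking problem under a token budget. Here is a deliberately crude sketch of that idea; real tools like the ones he names use embeddings, repo maps, and AST analysis rather than keyword counts, so treat this as a skeleton under stated assumptions, not how any particular tool works:

```python
import os

TOKEN_BUDGET = 100_000   # assumed context budget for the model
CHARS_PER_TOKEN = 4      # rough heuristic for estimating token cost

def estimate_tokens(path: str) -> int:
    return os.path.getsize(path) // CHARS_PER_TOKEN

def score(path: str, task_keywords: set[str]) -> int:
    """Crude relevance signal: keyword hits in the file's contents."""
    try:
        with open(path, encoding="utf-8", errors="ignore") as f:
            text = f.read().lower()
    except OSError:
        return 0
    return sum(text.count(k) for k in task_keywords)

def pick_context(files: list[str], task_keywords: set[str]) -> list[str]:
    """Greedy selection: most relevant files first, until the budget is spent."""
    ranked = sorted(files, key=lambda p: score(p, task_keywords), reverse=True)
    chosen, used = [], 0
    for path in ranked:
        cost = estimate_tokens(path)
        if used + cost > TOKEN_BUDGET:
            continue  # skip files that would blow the budget
        chosen.append(path)
        used += cost
    return chosen
```

This is also why the documentation Dave mentions matters: a design doc full of the right names and terms gives the ranking step far better signal about which files belong in the window.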
Kris Brown (27:05)
Yeah, it's not that panacea, right? Like, you're not just going to go, well, we're going to go this way now and do the thing. You've got to put the effort in. I think probably there's an interesting correlation there: you know, RecordPoint, we're 15-plus years old and have been doing things for a long time. That codebase is very, very large. Building something new, a startup would probably take an advantage from this and get productivity.
Choosing to go that direction might actually speed up some things. But again, as you say, you've still got to go: this is the direction we're headed, we're going to get good at developing this way. And that kind of leads to my
question, slash more of a statement, Dave, and maybe I take a position and just sort of see where you're at. But this is a new industry, and there's always going to be that ability to try and take advantage. Builder.ai is the recent thing, for the audience. Ultimately, there was a tool based out of London. So, being based out of London, it's obviously
supposed to be very important and well looked after and secure and trustworthy and all those things. But while claiming that they were helping and building code using AI and being efficient, it turns out it was 700-plus developers in the backend: you'd give them a prompt, as you just said, the senior developer giving the junior developer the prompt, and then there'd be just a whole bunch of people pedalling in the background to pull this out. It does remove trust in this space.
Do you think this adversely affects that AI journey, especially around the vibe coding? As you said, the hype, I think, was a really interesting thing you said there: the hype that this is gonna be so much better and faster and all the rest of it, and that's not necessarily as true or as cracked up as it's supposed to be. And then we have hits like this that come to the industry where it's like, this is just straight out...
Probably the key here is: do you think that there will be more of this, or is it once bitten, twice shy in this industry?
Dave Slutzkin (29:00)
We know this is a hype cycle, right? At least those of us in this conversation are probably old enough and gray-haired enough to have been through a couple of these now, and maybe this one's bigger. And I think there's probably more substance to this one than there ever was to blockchain and crypto, for instance. But it's still a hype cycle. So, there's still going to be people who take advantage of that. There's still going to be people who read promises that don't necessarily exist or that are
only implied. So, one thing that I have seen in organizations that I've tried to help is that often what is happening is that execs, especially CEOs, are saying: we should be more productive because of AI. We should be able to increase the span of control of our managers from six to 20 because of AI. Why? Well, just AI. You just sort of wave your hands a bit and say AI.
Kris Brown (29:48)
bare fingers.
Dave Slutzkin (29:50)
And that is going to hurt. That's going to hurt a lot of, you know, it's going to hurt the organizations themselves. It's going to hurt the industry. It's going to hurt the perception of what is possible, because everyone... no one quite knows.
This reminds me actually of a period about five years ago, which some might remember, when there was some new illness that popped up in the world called COVID-19. And we had to engage in this process of collective sense-making. Everyone was trying to work out: what is actually happening, how can I predict the future, what is going to happen here, and what action should people take? It's actually relatively similar to that in some ways. Yeah, some stuff is going to be possible that wasn't previously possible with AI.
No one quite knows what the bounds of that are. No one quite knows where this curve is gonna plateau. It will, it'll top off and plateau at some point. Is that already happening? Has it already happened? Will it happen next year? Will it happen in 10 years? No one quite knows that. So, we're all having to engage in this process of collective sense-making to try and understand what that is. And people arrive at different positions on that. Also, in a situation like that, there are people out there who will take advantage of that.
To your point about Builder.ai: the people who will say, no, this is totally possible. You've read the hype; now we are delivering on the hype. But are you actually delivering on the hype? Well, no, but enough people are going to believe it that that's fine. So, I think it's incumbent upon especially CEOs and execs in bigger companies,
that there's a lot of work for them to do, to get across this stuff well enough. It also reminds me of the early days of cloud in, you know, 2013, 14, 15, around that time: DevOps, agile, all of these buzzwords that popped up in enterprise IT, where the CIO would read it in a magazine and say, we need to go to cloud, without really understanding it. I think the same is true of AI.
It's hard, but it's incumbent upon them to get across this stuff and actually understand what's possible, but that requires having trusted people to talk to about it, and that's not easy to find in a hype cycle.
Anthony Woodward (31:40)
It is different...
yet.
It is different this time though, and I'm with you: you know, blockchain, and even to a lesser extent some of the DevOps, it was just different ways of retooling things already done. Crypto really wasn't impactful for the enterprise, and maybe its day is still to come, but it's seeming less likely.
This is different because it's not the CIO. The conversations I'm having, it's not the CIO talking about it. It is the CEO. It is the operations people. It is the investor class. And maybe, again, they've gotten into the groupthink of the hype cycle, but they're saying this is revolutionary and different, and we want to lean in on what impacts are going to occur out of that.
Now, I agree with you, I think a lot of them are still going to go to their chat interface of choice and say, hey, write me my wedding speech, and it's doing an amazing job of that, and they're going, isn't this going to be cool? It's going to write every email for me. But also, I sit here going, I don't write email anymore. You know, I have a draft response to most emails that go in my inbox, because I've set up a process that does that.
And then I just curate that and send it back. So, there are some confirming truths to those things if you lean into it in reality, right? So, I would say it is different, because it is visibly integrating into our daily lives. And I get it's not the entire population just yet; I get that not everyone's experiencing it. But I've never seen tooling like this that has had those profound impacts at that C-level so quickly.
Dave Slutzkin (33:29)
And as you might be able to tell, I have nuanced views on AI. I'm not a booster, but I'm also not a doomsayer. I think there's a lot that's really positive about what will happen. I love the phrase 'this time it's different', because we say it every time, and it always turns out to be somewhat true and also somewhat not true. You're right, I mean, blockchain and crypto aside, which turned out to have very little substance, but
Anthony Woodward (33:45)
All right.
Dave Slutzkin (33:55)
I probably disagree with you on cloud. I think cloud did actually have some fundamental impacts on organizations.
Anthony Woodward (34:01)
That's a very Australian view though, I will hesitate to say. You look at the data of the hyperscalers' impact in Europe; maybe in the US it's a bit mixed. But you even go, and this is where I say it's different, right? And this is the data I present for my argument.
Dave Slutzkin (34:04)
Yeah.
Anthony Woodward (34:17)
The impact of what we're seeing with AI and agentic elements, and I know those are different things to unpack, and it's probably not for this conversation: you look at the take-up of that in Europe, you look at the take-up of that in South America and the US, and it is going faster than the take-up of cloud from the hyperscalers.
That wasn't the experience. Australia led the world. Australia and New Zealand really led the world in the take-up of cloud and the digital transformation and moving into those processes. I think one of the things that I've really observed, and I'm just coming off a six-week trip around the US, is there is a different context going on. Those folks didn't lean into SaaS as quickly. They're having to do that because of AI, but their take-up of AI is so much faster than this country right now.
Dave Slutzkin (34:59)
Yeah, the interesting part about AI is that it's got this consumer side to it; as you said, it's writing the speech for your wedding, whatever that might be. Because it's got the consumer side, people can immediately touch and feel the power of it, which you couldn't do with DevOps. You couldn't do with...
You could potentially with crypto in some ways, because you'd buy Bitcoin and it'd go up and you'd think, everything needs to be blockchain now, which probably led to extra hype there. But because this has this consumer side, I know many people who are not at all particularly tech-centric or tech-savvy who are deep into ChatGPT, and they use it for lots of stuff at work, whether or not it's sanctioned.
They use it on their personal device if they need to, because the organization doesn't sanction it, but they know, they have already seen, how much effort it's going to save them in their daily workflow. So, I know a lot of people who use a lot of ChatGPT at work.
It has that consumer side, it has that individual side; it doesn't need the whole organization to start using it in order for it to really have significant impact for you individually. That's what we're going for with Cadence: if no one else in the organization uses it, but you just use it, it should still have significant impact for you. So, I think you're right. I think it is different from that perspective, in that it's much more obvious and available even for execs. Not only are they hearing the hype, but they can also feel it themselves, which makes them drive it more.
Anthony Woodward (36:21)
Yeah, look, it'd probably be good to, you know, we've kind of walked around a few things here, put a crystal ball on it: where do you see this going? Because this is the hard bit, right? I think this is the bit where we start to debate that difference to the others, because it could fizzle out, right? One of the really big challenges I see with this technology, this conversation, is
it's a little bit like the experience you had with Uber, but on absolute steroids, in that Uber was unbelievably cheap. It was so much cheaper than getting a taxi, and now it's more expensive. It's more convenient, you use it, I still use it, but, you know, I certainly noticed the price rise.
We're not really paying for AI today, right? Other people are subsidizing it. And when we come to recognize those costs, maybe the productivity gains or the future isn't all there. Where do you see that going? Because I think there are some really big icebergs still to get through for this technology.
Dave Slutzkin (37:17)
That's... yeah, completely agree. There's two graphs, right? There's the amount of cost that these companies are gonna have to attempt to recoup from users, which is only going up, and will continue to go up.
VC cash or investor cash is going to get less available. That said, potentially not that much less available for a while, but at some point that's going to be less available. The other curve, the one coming down, is the cost of actually delivering the service. We know that's going to keep coming down, because that's what happens in computers. It's always happened; it'll continue to happen. What we don't know is where those two lines are going to meet, and if they're going to meet soon enough.
If ChatGPT personal is costing you two grand a month, are you going to pay it? Probably not quite yet. Are people going to pay 200? People are already paying 200. So, okay, fine. The queries are still costing, even at 200 a month; it's fairly well documented, I think, that OpenAI is still losing money on ChatGPT at 200 bucks a month. Less so in the AI coding stuff. Yes, they're still losing money per token on every request,
but they are more aggressively attempting to recoup costs with per-token pricing and a bunch of other stuff. And again, that's gonna come up to meet it. The question is, and the value that's provided is a slightly different curve there, can the value that's provided, or the perceived value, come up quickly enough to meet the costs dropping on the other side? Now, that probably does happen,
but it's guesswork at this point. My gut feel, looking closely at this for a while, and especially very closely in the last six months, is that the model performance itself of the current architectures, transformers, the transformer-based LLMs, is probably topping out. We're probably at the point where we've seen all of the really big gains from that. There's a huge amount of money going into researching further in that direction, but also into researching other architectures. That may come next week, maybe next month, maybe next year. But we can't know, and I don't think anyone in the industry even particularly knows. They're boosting it because, you know, everyone knows which side their bread's buttered on.
But I don't think we know what's going to happen in the future. I do think that it looks to me like transformer-based LLMs are about as powerful as they're going to get. They're going to get incrementally more powerful from here, but not the same as the jump from GPT-2 to GPT-3 to GPT-3.5, which was huge; Gemini 2.5 has been similar, a bunch of those things have been really big leaps. It's slowing down. So, I think what that means is
what companies are looking for now is ways to deploy this effectively, so that they're getting significant value at the current cost, because they know that costs are gonna go up, and if they don't know the costs are gonna go up, they probably should know the costs are gonna go up. And the other thing,
maybe more nefariously, is that what the providers are doing is trying to get embedded enough so that even if they're not quite adding enough value, they'll be able to charge through the nose, because they're already embedded in processes, embedded in organizations; the organization has restructured itself around using AI and LLMs in various contexts. So, even if they wouldn't have made that choice originally, they now have to, because the organization has been restructured, and you're not going to restructure the thing again straight away, even though it's now much more expensive.
It's
a really good question, Anthony. These things will move in some interesting directions over the next six or 12 or 18 months. You know, I can put my futurist hat on and try and predict where this is going to end up. Probably the thing that I feel most confident about is that it has roughly plateaued currently, but we just don't know what might be around the corner. I suspect, though... the first time I built a neural network, which is what these are all based on, was in
2000, 2001, in my honors year of computer science. I built a recurrent neural network using what passed for memory back in those days. It took a long time for that stuff, 20-plus years, for that stuff to turn into something that was really transformative. It's quite possible that with a lot of money going into it, we'll find another transformative thing around the corner, but it's also possible it takes another 20 years.
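Dave's earlier two-curves point, prices rising to recoup costs while serving costs fall, can be played with using toy numbers. None of the figures below are real; they are invented purely to show how the break-even question works:

```python
# Toy numbers only: invented for illustration, not real provider economics.
price_per_month = 200.0  # what the user pays today
cost_per_month = 350.0   # assumed cost of actually serving that user today
cost_decline = 0.70      # assumed: serving cost falls 30% per year
price_growth = 1.15      # assumed: prices rise 15% per year

year = 0
while cost_per_month > price_per_month:
    year += 1
    cost_per_month *= cost_decline
    price_per_month *= price_growth
    print(f"year {year}: cost ${cost_per_month:.0f}/mo vs price ${price_per_month:.0f}/mo")

print(f"Under these assumptions, the curves cross in year {year}.")
```

Change the assumed decline and growth rates and the crossover year moves accordingly, which is exactly the "where do the lines meet, and soon enough?" question.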
Anthony Woodward (41:37)
Yeah, like, I mean...
It wasn't a neural network in its purest form, but in 1999 I built a recurrent neural network at university as well. So, I completely know what you're saying. And the reality is, the RNNs of those days, right, are still exactly the same models that everything is actually building on. That's slightly off the point. Quick rapid-fire question then for you in terms of that future: do you think we're going to be trapped? You know, it sounds like you think we're going to be trapped,
even by the GPU rate and the rest of it, because, like, one of the biggest issues of this whole conversation at an infrastructure level is there's really only one supplier, and that really caps the innovation. I mean, is that why you think we're plateaued, that the innovation keeps being capped?
Dave Slutzkin (42:26)
No, no, I don't think it's Nvidia being the one supplier. I mean, there are companies like Groq. Not Elon's Grok, but the other Groq. There's a bunch of people who are also making chips, and I think that's gonna move pretty fast. So, I'm sort of not worried about the one-supplier thing. My impression, talking to people who know more about this than me, is that the chip stuff is still moving pretty fast. It's moving faster than Intel were in their heyday. So, it's not so much that that's plateaued; it's just that there are only incremental benefits, incremental cost decreases, available there. It's not that there's an orders-of-magnitude cost decrease
Dave Slutzkin (43:01)
available from hardware, at least based on the current architecture again. So, I don't think it's so much the one-supplier thing and, you know, whatever. I think it's more just that the architecture needs to do a lot of computation, and doing a lot of computation, until quantum computers maybe change everything, is still going to take a lot of time.
Anthony Woodward (43:04)
Mm-hmm.
Kris Brown (43:30)
I'm gonna bring us back to sort of just that adoption piece that you've spoken about.
Let's talk about not the CEO this time, but the developer.
Why would the developer refuse to do these things? I look at my own life as a poor man's developer, having finished my software engineering degree. Again, we're all sort of the same vintage.
I think in one of the last episodes we recorded, Anthony very much changed my world, in that he's a bit of a vibe coder and has built himself a bit of a process for making decisions around helping children choose food on a day-to-day basis, which I think, you know, genuinely, we still need to take a good hard look at, Anthony. I think there's definitely a model there where we can subscribe that out. I'm already throwing a fistful of cash at it. But if I look at the
Anthony Woodward (44:13)
Just tell me your kids'
WhatsApp addresses. I'll have it sorted this afternoon.
Kris Brown (44:17)
Yeah, we need to do that thing, right?
you know, the vibe coding element, the use of AI. You know, we understand, and I think we spoke about it just now, the CEO very much seeing the benefit from a consumer side. I think Anthony used the example of the wedding speech, but even in and of himself using it for other tasks to eliminate things that don't make a lot of sense. We spoke a little bit about that gap,
where it's the senior developer talking to the junior developer, the ability for the product manager to talk to the team; you're trying to solve that communication problem. Are there other governance problems here, like code quality, reliability, trust in the system doing it right, whatever else? Are there governance issues or security or legal risks that you think will be sort of baked into some of this that's coming in?
Where does regulation fall in here from where you're coming from?
Dave Slutzkin (45:15)
I mean, it's fascinating. It's a fascinating question, regulation, because we know the governments are always going to have trouble keeping completely up to speed with the cutting edge of technology in general. That's not an easy thing for governments to do. They get advice from, you know, McKinsey or PwC or whoever it is, which may or may not be accurate. But also, governance is going to be really important in this space, because there are going to be these non-technological implications of AI if
magic happens and it kills a whole bunch of jobs. That's a problem for governments generally. So, there's a policy question there: should we allow this to be used, as a government
or as a society more generally? We know the EU has gone a bit harder on regulating AI than the US has, and it seems, from my understanding, they're rolling that back a bit, because there's sort of only two positions you can take on it right now: there's no AI, or lots of AI. There's not really a sensible middle ground, because it's too hard to specify what that sensible middle ground is in a way that people can understand. I wouldn't be surprised if the EU
rolled back completely and went Wild West the same way the US has, and just said, you know, anything goes for a period. Yeah. Not really.
Anthony Woodward (46:31)
And just for our listeners, that's actually not our expectation. It can probably,
again, be another topic for a debate. I think the problem is that the technocrats in Brussels have gotten too technocratic. So, it's not that the legislation is incorrect; it's that it doesn't talk about the actual human interactions. It tried to talk about the technology. And, not to try to debate you, but where we see it going, and Kris and I have talked
about this a bunch already, is really that it'll be more an evolution toward what are the true human boundaries that sit around AI, rather than describing, you know, what data goes in and what does bias mean and those things. But yeah, sorry.
Dave Slutzkin (47:13)
No, I think that's
a really interesting point, and you guys are closer to this than I am, right? I only have conversations with people who seem like they know what they're talking about, but you're obviously really close to this. I think that makes a lot of sense. I think it's hard to regulate technology generally for those reasons. You say, well, you're not allowed to have a neural network with more than 72 billion parameters. It's like, okay, what does that mean? What even is the point of that? Not that that's what the legislation has said, but...
Anthony Woodward (47:27)
Very nice.
And even define a parameter, right? Like it gets really silly, right? Yeah.
Dave Slutzkin (47:42)
Totally.
No, we have 71 billion parameters, but they're all multi-parameters. Yeah, exactly. That's right.
Kris Brown (47:46)
As defined this way, right? As defined this way, so we're all good. Yeah, and look, I think
Anthony Woodward (47:47)
Yeah.
Kris Brown (47:52)
the point to make here is that we're actually, yeah, in that space where, and I'm gonna roll all the way back to the beginning of the recording, there's a human element to this. Do we want the element of art, of expertise? You know, today we can't feed it the whole code base.
But there's senior engineers in businesses that do understand how it all hangs together. That's why they're here. And we're not trying to replicate or remove; it's how do we add on to it? I think the regulation piece here, and probably the crux of the question, and maybe I'm answering it myself, is: how do we ensure from a governance perspective that we're getting value, that it makes sense, that we're not trying to put everybody out of jobs? That's not what we're trying to do. It's about productivity.
Everybody's trying to create, to bring it back up to that level of art. We've been automating jobs in manufacturing for a lot of years. It didn't remove them; it just meant that there were other jobs these people would have to go on to. So, what makes sense? I think that's where the tough decisions for governments and governance are going to come.
Dave Slutzkin (48:59)
I read someone saying the other day that the theory that this will automate away all white-collar work is based in that standard kind of Silicon Valley tech viewpoint that every job is full of useless work; every job is actually easy, it's just that people make it harder. Lawyers? That job's easy. It's just writing some stuff down. They always write the same stuff down. Why do they make it so hard and charge so much? Real estate agents, that job's easy. Accountants, that job's easy. That view misses
the human elements of that. We've seen this with software; I've built a lot of software in the last 25 or 30 years. What you see with software is that writing the code is usually not the hard part.
The hard part is working out what it should do, and talking to the users to understand that, and talking to the users to understand their feedback, and understanding how you should situate that based on where they're coming from and what else you're hearing from other people. It's working with the other individuals in your organization to make sure you're pulling together the right viewpoints and the right understanding, so you're not biased in particular ways, and a bunch of that. Most of building great software is not going like this on a keyboard, and that's the stuff that's easiest to automate. But does that mean it all ends up automated? This is not manufacturing, it turns out.
It's not assembly-line manufacturing, and I suspect the same is probably true of a lot of other white-collar jobs that I am less familiar with. It's not as simple as: task comes in, black box performs task, result goes out.
It's actually more complicated than that, and that's why this stuff is probably not going to be super-automatable. Now, what that means for regulation and governance, I don't know. It's a really good question. I don't think anyone has an answer to it. No one that I've spoken to has a great answer. Potentially you guys do. But a lot of organizations are trying to work out, at the very base level, ROI on the spend.
Because yes, a lot of money is now going out towards OpenAI, towards Claude, or towards Microsoft or whatever, for these features. Are we actually getting ROI on that? I don't know. And then they need to work out: are we putting data into that that shouldn't be going into it? And where's that data ending up? Is it ending up on servers in countries we don't want it to be in? Or is it being used for training, or whatever other things? And they all give you a tick box to say, my data's not being used for training, but that's probably not actually the biggest challenge:
that your data is now going somewhere and you have no visibility of where it's ending up. I think there's a huge amount of really interesting questions that organizations are going to have to answer about that. Maybe people have great answers for it, but I haven't heard any yet personally.
Anthony Woodward (51:31)
Yeah. And I think that, you know, probably wraps up our conversation today, and it's been a really fascinating one. It's something I used to say for years through the early 2000s, right: all automation is just the acceleration of a substandard set of business processes. And that's a lot of what we see, right? Because you actually need humans to deal with the substandard element of those business processes.
This has been a really fascinating conversation. I wish that we could really dive on, and definitely, because of the forecast, I'm going to make sure that Adam, who's the producer of this podcast, has you booked for a year's time, so we can get you back and find out if that plateau has actually happened in AI, and how real those things are. Yeah.
Dave Slutzkin (52:06)
I'm
Hang on, when I put
the futurist hat on, that comes with an implication that no one would ever call me on that projection.
Anthony Woodward (52:20)
You've clearly
not met Kris Brown. But yeah, no, look, absolutely. I think that is fascinating. And the reality, right, is we are still sitting... I think, to wrap today up with a really fascinating conversation, we probably all agreed there's still a lot that is unknown about what we're stepping into. There's a lot of excitement. There is a lot to actually prove; there are observable elements of ROI, though whether those are longer-term returns versus short-term returns is probably debatable. But there is something really here, and I think that's the super interesting piece of it.
Dave Slutzkin (52:56)
I completely agree. And yeah, it's been a fascinating conversation. So, thank you for inviting me on. Great to chat.
Anthony Woodward (53:02)
No, thank you. Thanks everybody for listening. I'm Anthony Woodward.
Kris Brown (53:06)
I'm Kris Brown. We'll see you next time on FILED.