Shining a light on Shadow AI with Rob Williams
Employees aren’t waiting for their organizations to implement sanctioned AI models – they are hearing about the benefits of GenAI and are using free consumer versions of ChatGPT and co, which by default use any data you enter to train the model. This is Shadow AI, and it’s a big problem.
AI consultant, and founder and CPO of proddy.io, Rob Williams joins us to discuss why this phenomenon is occurring and what companies can do to combat it.
They also discuss:
- Why it’s important to embrace early adopters of AI, rather than seeking punitive actions
- Building internal AI best practices
- The place of data governance in AI adoption
- Predictions for the next year of AI in business
Resources:
Transcript
Anthony Woodward: Welcome to FILED, a monthly conversation with those at the convergence of data privacy, data security, data regulations, records, and governance. I'm Anthony Woodward, the CEO of RecordPoint, and with me today is my co-host Kris Brown, our Executive Vice President of Partners Evangelism and Solutions Engineering.
Today we welcome to the pod Rob Williams, who is a fantastic guest and an expert, consultant, and founder of proddy.io, an AI-powered product development assistant. Rob, welcome to FILED.
Robert Williams: Good to be here.
Kris Brown: So, Rob, just for the listeners, maybe take a moment just to sort of describe who you are, how you've come to be here, and certainly feel free to give us a little bit more detail about proddy.io.
Robert Williams: So, yeah, my name's Rob Williams, as you've said. Mainly my career is focused on product management, so worked for a number of different organizations as either product manager or then into the Chief Product Officer role. And I spent quite a lot of that time in live entertainment. So, I worked for a lot of large venues, worked for a lot of software providers providing ticketing platforms to the likes of Wembley Stadium, et cetera. And then in the last couple of years I've left that industry behind, really.
And I left to start up a product management consultancy, and the week I left was the week that ChatGPT was released. And so, I just saw the writing on the wall. I'd been watching the progress of AI for a long time. And so, I decided that was the point at which, instead of setting up a product management consultancy, I was gonna set up an AI consultancy.
So, for the last two and a half, three years, I've been running an AI consultancy. I work with a bunch of great people at a company called SoHouse AI Advisory. One of the things I've done throughout that is really learning by experimentation. So, setting up proddy.io, setting up an app called Stoke... in all, three AI startups that we've launched, and one that we're about to, all to do with, really, for me, learning this new field.
My kind of day job is advising organizations. I've been lucky enough to advise the American government, the agriculture and insurance industries, and, again, some of my original live entertainment colleagues and clients. So, a number of different organizations, on using AI for productivity gains, efficiency gains, process automation, or using it within their own products.
So, really trying to help organizations ride this wave of generative AI, and the maturity of some of the traditional forms of AI, in order to gain value. So, that's my story. I suppose on the side, I'm also a keen AI advocate myself, so I use it for pretty much everything, and I'll talk about that more.
Kris Brown: Look, and I know we'll dive into that topic a little bit further in. I do want to come back just very quickly ‘cos I'm a mad English Premier League fan. I'm sure I've probably said that on the pod before.
So, realistically I should have known you many years ago for the hookup at Wembley, 'cos I used to get to a couple of games at Wembley, and getting tickets from said agency, doing it the right way, was always very, very difficult. But look, for today's episode, we're focusing on the concept of shadow AI, where employees of a company use free consumer versions of AI models to complete their work.
You know, like myself, your ChatGPTs, your Copilots, et cetera. And look, I've seen stats that say that half of all employees are doing this, and that nearly 70% of workplace ChatGPT usage is through public, non-corporate accounts, with higher numbers apparently for things like Bard and Gemini. But this is the problem, because those free consumer versions of these AI models will use that prompt data, the information you're putting in, for training.
You know, personally, having a son who's involved in a number of sports, we're always looking for additional ways to fundraise and other things. It's funny, for myself, that ChatGPT now knows who I'm talking about. It's been able to help, without too much prompting, with the details of the sport and other things.
So, they're absolutely using this data to train on. For those of us in corporate land, if an employee was to paste sensitive PII into these engines for analysis, that's gonna make its way into the model, and certainly there have been lots of examples of that. What's your experience on the ground? As you said, you've been advising people on this; what are you seeing in this space?
Robert Williams: I mean, certainly it's a really interesting phenomenon from a couple of perspectives for me. One is that we are trying to add productivity to organizations. And so, if the organization's already unofficially using AI, then that creates quite an interesting situation. And I've personally been in some discussions with engineering teams where some of the engineers have actually outright said, we don't want you to do this 'cos we're already using this technology.
Our managers haven't realized we're using it. We are getting a day a week to ourselves. We're still outputting the same volume and quality of code we always were. And you are about to come in and ruin this. I've actually had these conversations with people, and yeah, the recent stats are showing 60 to 70% of knowledge workers are using AI.
It's funny; I think a lot of this problem is because people haven't, and maybe now they're starting to, but haven't really grasped what is happening. And I remember very similar conversations happening years ago when Facebook started to become really popular.
Twitter at the time started to become really popular. At that point, I was actually kind of in IT management, like my very early start, and I remember having conversations with people and saying, like, you're not gonna ban this stuff. You don't realize this isn't just a website. This is becoming a fundamental part of people's lives.
So, to ban people from, you know, using some of these tools in the workplace, never mind even if it's just at your desk, but just in the workplace generally, and to start to penalize people for using these tools. These are social connection tools that are being used in order to allow people to remain mentally healthy.
And now in hindsight, maybe there are also loads of challenges there, but I still stand by that. And I think what's happening with AI is people aren't realizing that these tools are becoming fundamental to the way people live, work, and breathe. And so, you know, it's really difficult to even consider a shadow ban or a ban, sorry, on shadow AI.
I'll show you this thing, and it's called Limitless, the Limitless AI pin. And what this does then is every morning it sends me a little message and it says, you know, this is what you can do to make yourself a better person from all the things that you said yesterday.
It's not focusing on what anyone else is saying, but what it's doing is giving me actually some really harsh feedback on a daily basis. To say, you know, you need to be less defensive. Rob, yesterday when you were speaking to this person, you were acting as though you knew everything, and they were only trying to help.
And yesterday when you did this, it's giving me, you know, solid feedback. That's a personal development piece of software. When I wear these, which are the Meta Ray-Bans, which again, I'm not gonna put them on 'cos they'll beep and glow and everything, they have, like, Llama built in, they have a camera built in, et cetera.
So, what's happening is this technology is becoming so pervasive. And rather than embrace that, and rather than say, okay, we know you are going to use this stuff, what organizations are doing, and I think this is all part of that initial, real, tangible panic that security organizations and IT departments had about what was gonna happen to this data.
What they did was they just locked down really tight. And so, I'm seeing on the ground a resentment towards that and a lack of understanding as to why there are these bans in place.
What I'm not really seeing is IT departments and organizations, except for the really progressive ones, embrace the technology and take the attitude of, like, right, we understand you are gonna start using this stuff, and here is the responsible way to do it. The irony is, for some of these tools, it's just a case of educating your employees to click a button and then they're opted out.
But because of the clampdown, rather than the kind of encouragement and education, you're sort of seeing this rebellion happening. Which, again, for anyone who's worked in IT, throughout all of our careers, the kind of shadow element of usage of any new tool that comes along is something that we all have to kind of understand.
And I've always been a proponent of understanding it, embracing it, and then guiding the employees, because, you know, it's adding tangible benefit. The other thing I'd say, just quickly, about what also isn't happening, and one of the things that I've been suggesting organizations do, is to embrace those employees and ask them why, what their use cases are, why they're using this technology.
'Cos actually, the interesting thing about shadow AI use within an organization is, it's probably the single best way you can identify the solid use cases for this technology within your organization. 'Cos these employees are testing it for you, figuring it out, refining it, and you've probably got an incredible set of workflows and prompts and tools that you could bring in.
So, to summarize, I think it's absolutely real. I believe the stats that I see. You know, I would say, depending on the organization, 60 to 70% of people are using this technology. I think it's just a bleed over from the fact that more and more people are using it in their personal lives. But what I'm not seeing yet is a really organized response to that, and the kind of understanding of it.
I'm seeing a lot of life track. I'm having people come in and tell me, well, how do we stop it? And I'm saying, you don't. You are not gonna stop this. Like this is just gonna continue to grow and grow and grow.
Kris Brown: Yeah. Thanks, Rob. That's super interesting and almost quite poignant.
We held a dinner recently for a number of executives in the financial services industry. I was in Sydney a couple of weeks ago and we asked the question around the table, where is everybody at? The executives all took turns, and one of the larger financial services organizations, a global brand, actually gave a really surprising answer.
It was actually the opposite of the whole lockdown approach, and of where I thought they would've been, being that bigger, probably a little bit more, you know, conservative brand in the market, right? They've actually gone the other way and pushed a whole bunch of training to everybody in the organization, so they understand that the tools exist.
They've started to push the tools on them and say, you know, take the time to make this a part of your day-to-day workflow. Take that opportunity to sort of see and feel, and they've obviously got people watching what's going on and trying to generate those use cases, like where are we seeing value in this?
And I think the rest of the table, and I'm obviously representing their thoughts as well as my own here, were quite surprised by, A, that larger, quite conservative global brand being engaged in that way. Mm-hmm. But also, then, the other side of it: encouraging its use and formally pressing people to actively use AI in their day-to-day, as you said, back to trying to get that productivity. And I know we sort of focused a little bit there on the employee side, but
in your role and in the organizations you're speaking with, what are leaders saying about this? So, you know, obviously shadow AI, and I think, you know, the social media aspect is exactly the same. You know, obviously at a defense level you understand why the phones go into cabinets and watches are taken off, and they don't want those other things.
There's a lot more espionage and sabotage and these other things that go on. For most of us, day to day, we're probably not worried about those things. What are you hearing from the leaders as it relates to shadow AI? We had a couple of board members at this dinner as well, and the same thing was true.
They were very much, you know, we don't wanna end up on the front page of the newspaper. We don't want to have the mistakes made, but what are they really saying when you're interacting with them?
Robert Williams: A lot of the organizations I work with are starting their journey to, you know, adopt AI.
And so, there's a real split of a lot of people, and this is maybe a gross generalization, but a lot of the time I tend to find that there is less AI knowledge and understanding at the top of the organization. So, there's sometimes a complete kind of ignorance of what's happening. So, like, we don't think people are really using it here.
So, I often run these surveys as part of training or an inspiration day that I'll do across an organization, and I'll actually kind of anonymously ask. And generally speaking, leaders are really surprised at the widespread use of the technology. And again, it comes back to what I said earlier, that this has become something that's far beyond just a single tool that we use for one purpose.
What I also find is, again, there is this real feeling of risk and, like, danger around these tools. It's interesting: when we started this conversation, when you were first talking, you were talking about this view that the data that goes into these AIs ends up training the model and can end up coming out when someone else is just querying questions and stuff.
And I would say that's the number one challenge I come across for people adopting AI full stop in organizations. And so, at leadership level there is this extreme paranoia around that concept. I think personally that for me, the jury is out on really how accurate a description of a risk that particular concept is.
Now, I completely understand, if you are giving these models data and you're having a conversation with these models, then that's stored. And if OpenAI got hacked, then, you know, there is a risk there, in the same way as if you are emailing someone or you're storing that data in iCloud or in Microsoft 365.
And one of the things that I talk to these organizations about is, and I'm gonna say something now that's maybe a little bit controversial, I'm not suggesting that we don't pay extreme attention to security. And, you know, if you are an organization that has any risk of any of that data going in, then there are enterprise plans, ChatGPT Enterprise, that are GDPR compliant and so on.
These models, the way that they are trained, the way that a neural network works, is really about burning data into the network. It's burning patterns, and really, that's what it is. It's not text, it's patterns and linkages of text burned into that network.
And so, unless you are Coca-Cola and every single employee is putting the formula in all the time, the risk, actually, in my view, is relatively low, because you are not going to influence that model. And the reason you're not gonna influence it is 'cos, if you imagine a situation where everyone who unticked that box and started talking to ChatGPT was influencing the actual training data of the model, the actual weights, in real terms, then what would the AI be saying?
It would not know the difference between authoritative knowledge and, you know, just someone saying something. It would become a mess. And so, in my view, there's been an overblown risk there, which I think was born out of the very, very first versions of ChatGPT specifically, where they were heavily weighting that initial training because they were still kind of building out the model.
They didn't have the option for you to turn this stuff off in the product. There were a load of developers and various people who gave specific examples, although with those examples, if you look into every single one, there was always some element of that information being on the public internet. And I think that is where the core of ChatGPT, of these models, is trained: the public internet.
A lot of the time the data doesn't leak from people using the model. A lot of the time it leaks from bad security around a public internet site, or data that's on the internet, or data that's behind a paywall, and a lack of understanding that these things, and this is where there is a challenge, these things are trained through paywalls.
They're trained on data that you wouldn't necessarily think is public, but in my opinion, they're trained primarily on that public data. And actually, the feedback data, the thumbs up, the thumbs down, it's not used to train the model. It's used actually to create a better experience for users in terms of the UI and the way the model behaves.
And so that's something... it's an interesting thing to pitch, especially if you're pitching it to the CTO or the InfoSec team, because, you know, you almost have to have a rolling caveat at the bottom that says, I'm in no way suggesting that you should, you know, encourage this. But the reality, I think, is that it's an overblown risk.
And so that's what I see. I see this real fear that is holding the leadership back, and then the flip of that, when I start to show them what this technology can do, of understanding and epiphany. Being held back on AI technology right now is a death sentence, I feel, for a company. In the same way, let's be really clear, regulation is not going to work here, because you are regulating against, like, GDP.
You literally are regulating against the productivity of the country. So, I'm watching to see what happens across the EU, but I think we need to really recognize that this train has left the station, the wave is rolling forward. And these leaders, when they start to see what you can practically do, that's when they start to embrace it.
I do think there is this real kind of wake-up moment now that needs to occur: you know, the hype train has gone, but the product is still accelerating, the technology is still accelerating faster than it ever was. I do a lot of speaking, and again, I talk about Gartner's hype cycle.
If you've come across it, I love it. I talk about it all the time. This has broken the hype cycle.
Anthony Woodward: I've never heard anybody say they love the hype cycle.
Robert Williams: That's the thing. Harsh. I see why. I love it. I love it because it's a really good way of explaining why people misunderstand, like, hype. You know, when you look at something like blockchain, it's my way of saying: this isn't hype, but it is currently at the peak of the cycle. It will eventually come down and it will eventually get into the plateau, and it's just about how long that takes. The hype cycle, for me, reflects the media, really. It's a harsh light on the media. But this one, what's happened is, I think people have thought that this is hype because of the hype cycle.
The media have thought, I know how this works: we boom it up and then it'll crash, and then we can all have a laugh. You know, they did that. They've done that a few times. They've tried to have a laugh. They've tried to say the models have plateaued, this isn't happening. But yet Llama 4 comes out two days ago, a new model comes out every six weeks, Operator comes out.
Claude Code, like, by far the most incredible piece of development software I've ever seen, is released. You know, so it's broken the hype cycle. And I think once you can start to get that across to leaders, then I think they start to understand. And that's not to say it's easy to implement, and it's not to say everything should be AI, but it's to say that it's crucial that you have a plan.
Kris Brown: To dive in there though, just sort of quickly, back to the point you're making about the concern or the risk.
It's almost a little bit back to we're blaming the technology for a poor understanding of data, right? And obviously we'll talk about this a little bit, but it's that fear that the stuff you don't want to come out will come out.
And even in the private models that they're building, they're using co-pilots or other things internally. It's because they were just feeding it everything. They didn't have a good understanding of what was going in. So, of course it's going to return things you don't want it to. And I think there's that blurred line between the public and the private.
Implementations are paused. We've seen lots and lots of, again, statistics about projects that have kicked off and then, you know, paused, because they say, oh, well, we didn't really have a good understanding of our data. We want to take advantage of it, but actually, the first step is understanding what I have.
So, I take the point there that that certainly very much has that link and that's caused that risk problem. As you say, there's that lack of understanding at the top level too, about how this really works.
Anthony Woodward: I was gonna say, and it probably draws on the same point, the boardrooms I'm talking to are a super interesting conversation.
Very much, the boards are hyped up about AI, so that's great, 'cos at least they're talking about it. The boards are curious and inquisitive about the impact, and they're fearful of the downside risk, which I think is that parameterization.
But because of the lack of technology depth in the boardroom, they don't have a framework for the conversation. For instance, I was literally talking to a midsize board last night here in Australia, trying to explain to them differential privacy and how these things affect the models. And there are ways to add statistical noise and there are methods to deal with these things, but that language and that understanding of the math is complex and isn't as easy to grapple with.
And how we start to build that language into senior leadership, I mean, how are you approaching that? 'Cos until we understand these pieces... 'cos that's the big boogieman, I think, isn't it?
Robert Williams: Yeah.
Anthony Woodward: AI, you know, explain it to me. Well, you know, I sat down, and no offense to my mother, I think she's a brilliant woman, but I sat down with her, she wanted to understand how ChatGPT worked, and my whiteboard drawings were, at the best of times, a mess. I'd drawn this thing out for her, and it was just a mess. How do you approach that, Rob?
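For listeners who haven't met the "statistical noise" idea Anthony mentions, here is a minimal, illustrative sketch of one common differential privacy technique, the Laplace mechanism. The count, sensitivity, and epsilon values below are made up for illustration, not anything discussed in the episode.

```python
import numpy as np

def laplace_count(true_count: int, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    """Release a noisy count. Adding or removing one person changes the true count
    by at most `sensitivity`, so Laplace noise with scale sensitivity/epsilon bounds
    how much any single individual can shift the published number."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: report roughly how many staff used an unsanctioned AI tool
# this month without revealing whether any particular person is in the count.
print(laplace_count(true_count=412))
```

The boardroom-level takeaway is the trade-off: a smaller epsilon means more noise and stronger privacy, a larger epsilon means more accuracy and weaker privacy.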
Robert Williams: So, it's a really, really good point, Anthony, and I come across this a lot. So, in a typical engagement, I'll end up in a situation where, and I've just had this happen, literally just had this happen, the leadership are bought in. They are like, right, okay, we get it.
We can see this is really important. We're building out strategy. We're gonna move forward. And they'll engage with you on a basis of, right, we want you to build these five tools, or you know, we want you to look at this particular solution. And one of the things that I try and push back on really, really hard, ‘cos it happens time and time again, is until you understand the technology first, you shouldn't choose what you're gonna do.
And so, the way I actually do that, what I tend to do, is a full day with the C-level executive leadership team, or whatever structure it is, the highest team there is, of really just understanding what this is: starting with the history of AI, and why are we talking about generative AI and not all these other forms.
Oh, you know, what is it, and what is a large language model? And the really fascinating thing about that is that every couple of weeks I run one of these sessions, and every couple of weeks I have to rewrite it, because, you know, we are learning and evolving even our understanding of LLMs on a weekly basis.
But I think, to your point, you are right. It's absolutely crucial that the first thing that is baked into the organization is an understanding of what this technology is. And it takes a day to really kind of get to a point where you've even got, like, a good enough high-level understanding. And what is fascinating is that then, with an understanding of some of the risks, you know, moving towards agentic models, some of the ways that these things work, what starts to happen is they say, okay, right, now I wanna completely revise what I was trying to achieve in the first place. I say, you know, I'm an AI consultant, and I say to all of my clients:
Don't hire an AI consultant. It's not a good long-term solution. Don't outsource your AI. So, what I pitch is, what I wanna do is build a sustainable AI practice within your organization. And so again, that's what isn't happening, I think. And once people start to understand it, what they start to say is, okay,
yeah, what you need to do is start to actually build this stuff up from within, and that starts with an understanding. But it's challenging, right? 'Cos there is so much misinformation out there, and some of it is very well-meaning misinformation. So, there are still a lot of people, and both of you may be proponents of this model, that will say that these are next token predictors.
That's what they do. And then you look at some of the most recent research from Anthropic, and what they are seeing is, when an LLM is trying to match a rhyme, the network is firing and it's actually thinking about the second rhyme before it writes the first word. And so, what they're seeing is these emergent properties of large-scale neural networks, where they are displaying inherent reasoning in a non-reasoning LLM.
And they're displaying forward and backward thinking, yet apparently they're next token predictors. And so again, you have this, and it is misinformation, but as I say, it's well-meaning misinformation. It was us figuring out what we had with the data that we had at the time. And so that's, you know, just this incredible extra layer of complexity.
And what I find fascinating about this topic is now go back and think about all of the development of like traditional programming and coding. You know, there has never been a moment when there was a lack of understanding of the structure or the way that that worked. We understood all the way through all of the history of software development.
We understood every single aspect of every single thing that was occurring. Whereas within ai, and especially now within these scaled up language models, you know, it's really a black box and we don't understand what's going on inside it. And so how can anyone really understand? And every new model that comes out is a scale up of capability and a scale up of some of those challenges.
So, yeah, so I totally agree. I think that for me is the approach. Go in, teach First, learn. That has to be the way you do it. And then once you understand it, you can start to figure out what you wanna do with it.
Anthony Woodward: Yeah, you've bristled me a little bit on the next token thing. I'm gonna have to drill in. I would absolutely a hundred percent say they're still next token predictors.
The point of reasoning is they're now doing a next token of pattern as opposed to a next token of word, and it is still a mathematical concept running through the neural network to identify the patterns across that, as opposed to the language and words that are going to be used. And in fact, I can prove some of that.
When you move to languages that are not English, particularly some of the languages in Asia, the models start to behave in very strange ways. And so, I know they're working on that, and I'm a hundred percent positive it will be solved, because it is just a different mathematical pattern. I would still fundamentally stand on my view that they have very sophisticated pattern mimicking processes, but it's still another form of token.
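To make the next-token framing concrete, here is a toy, illustrative sketch of what "predicting the next token" means: the network scores a vocabulary, the scores become probabilities, and one token is sampled at a time. The tiny vocabulary and logits below are invented for illustration; a real model produces them from billions of learned weights.

```python
import numpy as np

vocab = ["the", "model", "predicts", "patterns", "tokens"]
logits = np.array([1.2, 0.4, 2.0, 1.7, 0.1])  # hypothetical scores from a network

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> str:
    """Turn raw scores into probabilities (softmax) and sample one next token."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return np.random.choice(vocab, p=probs)

print(sample_next_token(logits))  # generation is just this step, repeated
```

Whether the internal patterns that produce those scores amount to "reasoning" is exactly the disagreement Anthony and Rob are having here.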
Robert Williams: Yeah, you should have a look at the Anthropic research. It only came out like a couple of weeks ago.
Anthony Woodward: I haven't read it yet, but I'll be there.
Robert Williams: One of the tests they did was exactly what you have just said, because they expected exactly this, and what they started to find, even across multiple languages, was this very common usage of some of the neurons. But again, I think a lot of this is an inherent property of the scaling up of these networks as well. So, it's interesting to see where the behavior starts to change organically as they scale. But yeah, it's a very contentious topic, and, you know, I get you.
Kris Brown: I think it's really, really interesting.
Anthony just made it a little bit easier; the T-1000's not coming through the wall anytime soon. But ultimately, though, you were talking before about that AI hype, and I sort of touched on it a little bit, but you've got that AI demand that's coming from the employees, coming up from the bottom.
You've got the board, as we've just discussed, that are bringing it down from the top as well. What's the challenge, though? How do we, you know, in your experience, how do we deliver this? You spoke a little bit about the learning and building the practice internally, but for the audience, there have gotta be more practical steps to this.
We talked a little bit, as I said, about how even not understanding the data element has created this risk. What are the key elements of how we deliver safe, well-governed AI for the enterprise?
Robert Williams: Yeah, I think there's a couple of components. So, the first I'd say is to embrace your employees and understand what's happening.
So, that's step one. When we're talking about safety, and when we're talking about shadow AI specifically, then what we're talking about is what you might deem the kind of misuse of AI across the organization. The first thing I've seen some organizations do, which I think is a really great move, is to do an amnesty.
And just say, look, we are looking at doing our policy right now. We're trying to understand what happens. We want to survey everyone and understand how you are all using AI, 'cos we want to embrace it and use it well. I think that kind of positive attitude towards we are moving forward with this then allows you to open the door up to people from all across the department and the company to say, well, I've been using it for this, and I've been using it for this.
And so that concept's great, because ultimately what you're trying to do is get an understanding of what is happening out there. And you're never gonna police this stuff away, so you need to understand it. I think, then, an understanding of the fact that, just like back in the early days of social media and the internet, some of those people who were always messing around on social media, they became the social media manager for the company, and they became very successful social media managers for the company, and they were expressing this passion and interest in a specific piece of technology.
Once that was realized to be actually quite a useful piece of technology, then they were brought forward. So, in the same way, you're trying to build a sustainable AI practice. So, find out who your people are, who the people are out there who are passionate about this stuff.
Bring them in and actually work with them. So, I think there is this concept of understanding the issue across your organization and then actually capitalizing on that. Because sometimes it's just a case of ticking a box, them understanding the challenges and the concerns, and that, you know, they should never put PII in there, or customer information in there, that sort of side of things.
But really embracing them. Once you start to get that feeling, and the feeling of how much this technology is being used, then it is just about strategy, it's about policy, it's about understanding where you're gonna use this stuff and picking the appropriate tools. Again, all of these platforms have enterprise versions with full compliance, with SSO into your own environment, with role-based security, all these different levels and tools. So, for me, that's the key. The key is, first of all, understanding usage, then embracing the people who are using it, and starting to use them to understand what you can do to leverage it.
Then it's just looking at that picture you've now got and saying, right, what do we need to implement across the organization to allow this to continue and to be able to control it? And for some organizations that literally is just saying, we're gonna set up ChatGPT Teams, or we're gonna set up Gemini, or we're gonna use Copilot. It's funny, you were talking earlier about some of the challenges that lots of data create. I'm personally not a fan of Copilot, 'cos I think one of the issues that's happened is that bad implementations of Copilot, with a lot of access to a lot of data, have caused some of this challenge.
And so, I would say what's almost more important sometimes than policing external usage is correctly implementing internal tool sets. 'Cos that really can get you in trouble. I've seen this, when someone in a certain department is asking what the salary of the CEO is, and this thing's accessing the HR records and pulling out the salary of the CEO, because, you know, that saved the HR team a lot of time, but now all of a sudden we've got a massive problem.
And if I was looking at AI use from a security and risk perspective, that's really where the risk lies: once you get to implementation, how you implement something across the organization. But in terms of shadow AI, that's exactly where I would focus: bring all those people in, get to know them, understand what they're doing, and then capitalize on that.
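Rob's CEO-salary example is essentially a permissions problem: the internal assistant should only be able to retrieve what the asking user is already allowed to see. Here is a minimal, illustrative sketch of that idea; the document store, roles, and `retrieve_for_user` helper are hypothetical stand-ins, not any specific Copilot or RAG product's API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: set[str]  # e.g. {"hr"} or {"all"}

def retrieve_for_user(query: str, user_roles: set[str], store: list[Document]) -> list[str]:
    """Return only documents the requesting user is entitled to see; the model never
    receives HR records just because they would be 'useful' for the answer."""
    visible = [d for d in store if d.allowed_roles & (user_roles | {"all"})]
    # A real system would also rank `visible` against `query`; omitted for brevity.
    return [d.text for d in visible]

store = [
    Document("CEO salary: ...", allowed_roles={"hr"}),
    Document("Expense policy: ...", allowed_roles={"all"}),
]
# An engineer asking about the CEO's salary only gets the expense policy back,
# because the HR record is filtered out before any LLM call is made.
print(retrieve_for_user("what is the CEO's salary?", user_roles={"engineering"}, store=store))
```

The design point is that the access check happens before retrieval results reach the model, rather than hoping the model declines to answer.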
Anthony Woodward: Yeah. There's so much to unpack in those things around the capitalization. What do you predict a year from now looks like? I'm not even gonna ask further than that, because I think it's just so far out and the change is too fast. Do you think that we'll be at a place where organizations are starting to look at those controls so they can capitalize on it?
Because the big issue that we're observing is the failure rate's still quite high, and the hype has created a lot of interest in the boardroom, like we talked about earlier. Me personally, and I know Kris is the same, we are really integrating it into our work practices, but the way we're doing it is using Anthropic and some of these better tools.
What we're seeing in corporate land is a really different flavor of what's rolling out.
Robert Williams: Yeah, I say this a lot: quite often the killer app is not the automation tool in this team or the customer success piece over here. The killer app quite often is broad adoption of AI tools with good training across all staff.
And so, you know, I go on about this, but, like, going into an organization, rolling out just ChatGPT Teams across everyone, and training everyone and showing them how to use it responsibly, often yields a far greater return on investment than this whole piece of work over here. I think part of the challenge you have right now is, when you get into those big enterprise deployments, then, again, I'm gonna be contentious and I don't mean to be, but then, like, traditional engineering gets involved.
And in my experience building with engineers with AI, what happens a lot of the time is they try and programmatically engineer the solution rather than allow the AI freedom, especially if we're talking GenAI, the freedom to do what it needs to do. When we are building out an automation, I'm a big fan of giving the AI an unstructured, low rule set, allowing it to create the data itself.
Allowing, like, very narrow use case, very small agentic AI models that will go in and perform one task. And then what you see is an incredibly low hallucination rate, an incredibly low failure rate, and you create these really interesting structures. But when I'm working with engineers, when we have a problem, what will happen is we will try and fix it in code.
And I'm saying this phrase more than ever: whoa, whoa, whoa, just let me talk to it. And actually, I'll go away and talk to it, and I'll make it output the JSON better than it's outputting now. You don't need to program controls around that, because the big challenge with large-scale enterprise AI implementations and automations is that, again, this stuff is moving so quickly
that they are outta date within days, and you need to be able to move up to that next model. But if you put in all these layers of control, the next model probably doesn't need those layers of control. And so now you've broken it, because, you know, you're treating it like a child. It's now a teenager and it won't like those constraints.
So, the most successful implementations I've worked with are very loose: exposing all the prompts and allowing us to change them as we go, giving the AI trust to build the data structure, not enforcing it, and even creating some dynamic, self-creating data structures where we're not even defining fact types, like this is a risk, this is a requirement, this is a project manager.
All we're doing is saying: this is a fact, this is a fact category, this is the fact copy, and we're letting the AI create its own categories, almost dynamically generating the database. So, I think what you're seeing at the moment is just people not understanding how this stuff works, and they're still trying to place the old rules in the new world.
And where we are actually headed is much looser, and it's much more about giving trust to the AI and, Sam Altman said this, often building for the next model: understanding where it's headed, trying to almost hit that point that might be six months out, building for that model, and kind of ignoring the fact that it's not working right now on the basis that it will work then. But yet that's not the way these enterprise installations of AI go.
The one other thing I'd say that causes it to fail is there is still a lack of understanding of just how much impact several people within the organization who are really anti-AI can have. And so buy-in from the organization upfront, and an understanding of who your champions are and of who the challengers might be, I think is really, really important.
Like, in my experience, again, a lot of these fail because whoever it might be, you know, a VP in QA or engineering or marketing or whatever, is just really like, no way is this gonna work, I'm not gonna let it. And I've seen that happen time and time again as well. And it's, you know, a fragile new world, so it's quite easy for someone just to destroy it by forcing it down a road that you don't want it to go down.
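As a rough illustration of the loose pattern Rob describes, one small agent, one task, and the model inventing its own fact categories instead of a fixed schema, here is a minimal sketch. The prompt wording and the `call_llm` helper are hypothetical stand-ins for whatever model client you use, not a specific product's API.

```python
import json

# The prompt deliberately leaves the categories open: the model, not the code,
# decides how to classify each fact, which is the "dynamic data structure" idea.
FACT_EXTRACTION_PROMPT = """Read the notes below and reply with JSON only, shaped as:
{{"facts": [{{"fact": "<statement>", "category": "<any category you choose>"}}]}}
Invent whatever categories best describe each fact; do not use a fixed list.

Notes:
{notes}
"""

def extract_facts(notes: str, call_llm) -> list[dict]:
    """One narrow agent, one task: turn free text into facts with model-chosen categories."""
    raw = call_llm(FACT_EXTRACTION_PROMPT.format(notes=notes))
    return json.loads(raw)["facts"]

# If the JSON drifts or the categories get sloppy, the fix Rob suggests is to
# adjust the prompt text ("talk to it") rather than wrap the agent in rigid
# code-level controls that the next model generation may not need.
```

The trade-off is deliberate: you give up a fixed schema in exchange for an automation that survives model upgrades with a prompt tweak instead of a rewrite.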
Kris Brown: Yeah, no, I think those are great points, and I think, again, having that opportunity to look at this through your eyes there, Rob, is super interesting for the listeners. I know certainly we could continue to talk, and I'd love to maybe put the offer out there that in six months' time, or, you know, towards the end of the season, we may even come back and hit you up and go again and ask, well,
how far did we get along this way? Has everybody taken the opportunity to do those things? I think there's a lot more here to unpack, and I really do appreciate the opportunity to have had this chat with you today.
Anthony Woodward: I'm working on the mathematical proof. I'm gonna find Rob in the UK, in London somewhere, I'm gonna find a pub, and I'll take him through it.
Kris Brown: No, he's just up the road from me in Ontario, mate.
Robert Williams: Did you notice how I stealthily avoided ending on a real downer? Which is what would've happened if I had said what I think the world will be like in one year. So, you know, I skipped that one; I'll save that for next time.
Beautiful.
Anthony Woodward: We're Australian, we don't know what you're talking about.
Kris Brown: Yeah, absolutely not. Excellent. Look, thanks again, Rob. I really do appreciate it. Certainly, for the listeners here, we will definitely be looking out for Anthony's proof points that he'll send out, and there'll be something in the news as you tear it to pieces.
Again, I'll probably have to end up standing in the middle, but that's all good.
Anthony Woodward: We'll get it set into the pod notes.
Kris Brown: Absolutely, I'm sure that people will be enthralled and waiting for your mathematical proof. As always, thanks for listening. I'm Kris Brown.
Anthony Woodward: And I'm Anthony Woodward. I'll see you next time on FILED.