10

How big a deal is agentic AI?

Is agentic AI the future of work or just the latest buzzword for LinkedIn influencers? Anthony and Kris offer their take, along with advice for making the most of AI tools.

Along the way, they cover the definition of an agentic system, and why the technology is an inevitable evolution of LLMs. They also delve into the thorny issue of who bears responsibility when an AI agent makes mistakes.

They also discuss:

  • Defining agentic AI
  • The role of autonomy in AI
  • Information governance and agentic AI
  • Challenges in data disposal
  • Legal responsibilities and AI
  • The future of AI in governance
  • The evolving role of developers
  • Tools for enhancing efficiency
  • Understanding data classification
  • The positive impact of data governance
  • The future of agentic AI

Resources:

Transcript

Anthony: [00:00:00] Welcome to FILED, a monthly conversation for those at the convergence of data privacy, data security, records, and governance. I'm Anthony Woodward, and with me today is my co-host, Kris Brown, our Executive Vice President, President of President's Executive Evangelism and Executive Solution Engineering President.

I think.  

Kris: Yes. Thank you very much. That's getting more ridiculous as we go along. I love it. But you know, thanks mate. As I said, we're separated again: you are in the US and I'm here in Australia. Our topic today is one we've been touching on a lot this year, near and dear to our hearts, AI, and especially agentic AI.

And I think anybody who's been listening will have heard us; they could probably play buzzword bingo, we've been saying agentic AI a lot. Before we start, let's define it. I asked ChatGPT this morning for a definition of agentic AI, but I like the way IBM summarizes it 'cause it's a little bit simpler.

I'm sure you'll have an IBM joke there for me as well. But they state that an artificial intelligence agent is a [00:01:00] system that autonomously performs tasks by designing workflows with available tools. That's a really simple explanation, I guess.

If we start there today, let's have that conversation now. How do you define agentic AI?  

Anthony: It's funny. I really dislike that definition, because it basically describes any piece of software that's ever been created. If you were to include that, everything's already agentic today, right?

Correct. It's so broad. But it's an interesting starting point, 'cause it's actually hard to properly define agentic AI. And the reason it's hard is exactly that: in many ways, many of the things we've been talking about around agentic AI are workflows and capabilities we already have today.

But what it really comes down to is that phrase "autonomously performs." Does a piece of software autonomously perform something because it's been programmed to do it? What we've really got to break apart in that definition is that it's not programmed to do it. It's been given models, and then it makes decisions to do it.

[00:02:00] And there is a really fine line that I think the average person looking at something will not be able to discern. In fact, to be frank, there are times where I'm not sure I can discern the difference, and I think I have some understanding of the space.

Kris: Look, and I think the reason why I wanted to start simple, and it's a really great point, is that if I look at the definition ChatGPT gave me as an example, it's very much around plan, act, observe, and adapt. And that's that piece of: well, hang on, I know some stuff, I have some inputs.

I understand what you want me to do because, as you say, you've got a model, there are things there that can help me make decisions, and then over time I take feedback. It's like you pat it on the head, per se; that's probably a little bit overreaching, but it's that autonomy piece that agentic really brings into play here.
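The plan, act, observe, adapt loop Kris describes can be sketched in a few lines of Python. This is a toy illustration only, not any vendor's API: the `plan` and `act` functions here are hypothetical stand-ins for an LLM planner and real tool calls.

```python
# Minimal sketch of the plan-act-observe-adapt loop behind most agent
# frameworks. The "planner" is a stub; in a real agent it would be an
# LLM call choosing the next tool from the goal and the history so far.

def plan(goal, history):
    # Hypothetical planner: stop once we have acted at least once.
    return {"action": "done"} if history else {"action": "search", "arg": goal}

def act(step):
    # Hypothetical tool dispatch; a real agent would call search APIs, etc.
    return f"result for {step['arg']}"

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):               # bound the loop: autonomy needs limits
        step = plan(goal, history)           # PLAN: decide the next action
        if step["action"] == "done":
            break
        observation = act(step)              # ACT, then OBSERVE the result
        history.append((step, observation))  # ADAPT: feed it back into planning
    return history

print(run_agent("summarise my inbox"))
```

The design point is the bounded loop: the agent "adapts" by feeding each observation back into the next planning step, and the step cap keeps the autonomy inside limits you set.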

I think if we want to draw a difference: LLMs (I won't say ChatGPT, because obviously there are agents within ChatGPT and other things it can do now) very much respond to a prompt. It's searching, it's [00:03:00] helping you to define things using other models of text and other pieces.

For me, agentic AI is really about doing things. It's taking the next step: hey, I'd like you to help me understand something, and then plan, and then maybe even act. And it's the "maybe even act" piece that I think is really interesting. Take the simplest of use cases: I'd like you to read and summarize all my email, write me some formal responses, and provide me a task list in order of importance, and I'll work my way through it and get those out to the partners, suppliers, or other people that I'm working with.

That's a huge piece of everybody's day, I would have to imagine. Personally, I use tools to help me with this as well, because I get hundreds of emails a day. Most of them I don't need to read. It's great to get that summary.

Using an agentic AI, I'm getting it to do a lot of that heavy lifting for me; it's making me more efficient. Even talking to some of our graduate developers here at RecordPoint, having them have this conversation about agentic AI, it's really, really interesting.

There's a [00:04:00] ton of stuff to unpack here.

Anthony: Yeah. I think for us in the content management world, when we're thinking about governance and risk, to have an agent, or a co-pilot, or whatever we're describing it as these days, really means you can assign it a task and it completes that task without further instruction.

That's the point you were leaning into about your email. There are some real bookends to that today; as we envisage this world a year from now, maybe it's different. One of the things I've been playing with is having my calendar managed by an agentic endpoint, and man, does that mess up fast.

Kris: No wonder I could never get a slot.  

Anthony: I'm sorry, Kris. I've already bungled some meetings with you, I know. I've turned it off because it couldn't cope. Even doing what seems like a simple thing (here are some slots, here are some bookings, I prefer 30-minute meetings and not hours) and giving it those parameters, it still couldn't cope, because there are too many decisions.

They're obvious to a human but not [00:05:00] obvious at face value, right? Say I've got a meeting invitation. I'm sitting here in Seattle, and I've been spending some time in Redmond with Microsoft, but there's travel time in that. The AI agent didn't understand that I happened to be sitting in Redmond.

Therefore, there was a half hour it needed to book around, and maybe I could have given it those rules, but I didn't. The next meeting I had was booked into a time where I needed to travel back to the office here in Bellevue. There are things that today those autonomous agents can't do well. They will get better, but I think that notion of autonomy is being able to give it a goal and have it complete the task successfully for you.

Kris: I wanna lean into something that Dave Slutzkin spoke about on a previous podcast, which was that the reason we're now really looking at agentic is that it's a way around the limitations of LLMs. If you think about that from an evolution perspective, maybe it's part and parcel of the evolution itself.

We are defining AI in general; we're trying to get stuff done. We're trying to use the power of computing and the power of modeling. But I think Dave's point was more along the lines of: this is the natural evolution. [00:06:00] LLMs have been around since 2018, right?

Like GPT-2 and GPT-3. There were other prompts and things coming out much, much earlier than this wave of media around it. We are at that GPT-2 level for agentic. It's not able to do the decision handling around the calendar piece.

I can't imagine you are the first, but there's not enough information for it to go: well, actually, I need these parameters. As you said, with email it's probably been done a lot. The parameters are relatively simple. It's not always getting it right, but that's okay; I'm the checkpoint.

Calendars are that next piece. I think we've got a real opportunity here: these things will grow and get better. You know, early crystal ball call from you: where are we gonna be in 12 months? I think it's really interesting to see. This is all about us, and RecordPoint and FILED are about the information governance space.

Let's talk a little bit about the information governance agent. Where does that start? I've got some ideas, but where does that start, and possibly where does it go, if we pull that crystal ball out again?

Anthony: Yeah. And I think it does depend how far we're forecasting out, right?

In [00:07:00] terms of the crystal ball and what we're asking of it. But if we go to something that's relatively predictable, you know, 12, 18 months, two years from now, the reality is that we will have more connected systems that we can assign goals and activities. So, if we talk about information governance, the traditional world even of records: well, what do we do in records?

We create file plans or classification schemes. We apply those file plans and classification schemes to data, and classify that data against them, which results in retention and potentially other signals. So, those are the three elements, right? And then ultimately we'll act on those file plans or that classified data to do something with it: disposal, transferring it, whatever it is, out of those source systems.

If we take that information management theory at the records level, then many of those functions can, within 18 months, be performed by an agent.

Looking at the regulations, looking at the matching of data from a classification perspective, [00:08:00] applying the classification inside a system in place: those are capable of being agentified. I'm not sure that's a word, but I've heard it a lot on podcasts. I think in two years there will be a human in the loop for exceptions, overrides, and control. So, it's not really replacing someone who does those functions at the front end of that process, but it is augmenting and speeding up their workflow at the back end of that process.
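As a rough sketch of what Anthony describes, agentified classification with a human in the loop for exceptions and override, the category names, keyword classifier, and confidence threshold below are all invented for illustration; a real system would call a trained model against an actual file plan.

```python
# Sketch of agent-assisted classification with a human exception queue.
# Records the agent is confident about are classified automatically;
# everything else is routed to a person for review and override.

CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off, not a recommendation

def classify(record):
    # Hypothetical classifier returning (file_plan_category, confidence).
    # A real implementation would score against the organization's schedule.
    if "invoice" in record.lower():
        return ("FIN-7-Finance", 0.95)
    return ("GEN-1-General", 0.40)

def route(records):
    auto_applied, needs_review = [], []
    for record in records:
        category, confidence = classify(record)
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_applied.append((record, category))   # agent acts alone
        else:
            needs_review.append((record, category))   # human-in-the-loop queue
    return auto_applied, needs_review

done, review = route(["Q3 invoice from supplier", "meeting scribbles"])
print(done, review)
```

The shape matches the point above: the agent speeds up the bulk of the work, while exceptions still land with a person who retains override and control.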

The disposal element I spoke about, well, that's a more complex process, because it isn't just: hey, get some data, here's some rule set, match stuff to the rule set, now let it sit here for a period of time while we watch a counter or maybe an event from an external system. Agents are gonna be quite good at that part.

When we go to disposal, someone has to appraise the data. They can do a summary, but summaries aren't data appraisal, because there's a lot more metadata or context. It's a little bit like managing the calendar again: we've got all these things that are around my calendar event, the single calendar event.

It's not too hard to put in my diary. Is there a blank space? Yes, I can give Kris an appointment. [00:09:00] But knowing what's before and after, knowing what the impacts of these things are, is much harder. Evaluations for disposal can be a bit like that, right? 'Cause we're now gonna get rid of data out of the enterprise.

We wanna think about: what came before it? What's after it? What does this relate to? Is this significant today but wasn't significant when it was created? Those agents, I think, two years from now, in managing just this very simple process of records management, are gonna face more complexity and need more assistance around how humans sit inside that loop.

And so that's probably a longer timeframe.  

Kris: I like that. And I think that's the interesting piece, right? The complexity of the task isn't always obvious. Disposal seems fairly simple from the outside: I've captured the object, it's got a classification.

That classification is regularly reviewed per the SMEs, there's reporting built into systems, and organizations can do these things regularly once that retention comes up. But when there are millions of records, there's an attestation-level decision that's made by a data owner, an executive, [00:10:00] whoever that might be in your organization. It's almost a feel, right?

Is this information risky? What is it that's in it? From a summary perspective, yes, AI could help me with a summary, but what is it about it that gives me the confidence to go: let's get rid of things? Disposal has naturally been the piece of work that most organizations have struggled with, because someone does have to make that decision.

They're taking ultimate responsibility for it. The whole point of this is: I kept it, we made a decision to actively remove it, and then it was destroyed. That's why I no longer have it, Mr. Court of Law, when I end up in trouble or am trying to defend a position. And it's interesting, because destruction and over-retention are the biggest problems in our industry.

An agent isn't instantly going to be able to improve on that, for that same reason. But obviously, having the agent do that regular review, having the agent constantly give you that confidence, should allow organizations to make that attestation much stronger.

That's where there's a gap today. The gap today is very simple: classification, [00:11:00] review, and management. Organizations don't have a hundred percent of their environment managed. It's probably the same problem we've got on the other side. We've talked about this at length too, about governance around AI and being able to make good decisions about what should go into the models, for the same reason.

But I think, as you said, in time that complexity will still be there, but the agents will improve their ability to handle it. The question I have for you is: at what point, from a regulatory perspective, do we accept that it's the AI's responsibility here?

I know that's a political and legal gambit around AI making decisions, but can an organization ultimately remove that responsibility from themselves? I don't believe they can, but what are your thoughts?

Anthony: I think they're different things. Removing responsibility from yourself versus using an agent that you make effective through the instructions you give it are slightly different things.

I don't know that any organization, within the current parameters we have from the legal system, is going to be able to escape its responsibility. But I do [00:12:00] think, if we take the crystal ball out beyond the 12 to 18 months, agents will be able to do these things without fail and be better at them than humans, because there is the other side of this: it's not like it was perfect before agents.

And it won't be perfect with agents either, right? They're gonna make mistakes. But when they're dealing with large volumes of things, they are eventually going to be statistically more reliable than humans in terms of the failure rate. Now, I don't think that's the case today.

Kris: What about the traceability, right?

We're getting back to provenance, almost. We really love a good word, us information governance people. It's: how do we prove that we did what we did, and the way in which we did it?

Anthony: I think we're describing a narrow use case of the broader notion of records management.

And I do wanna say that, because I think it's important to acknowledge there's much more to this; information governance is even greater than that. The agents being able to carry out an activity like this will just become integrated into the business's workflow over time (and it does go a little bit back to the definition we talked about earlier). The notion of responsibility will be enshrined in the instructions given to [00:13:00] that agent, which will continue to need to be monitored, because that is the responsibility of the organization, no different than having a human do it.

Because you've still gotta monitor the human, right? You can't just assign the task out: hey, new person, I've given you an employment contract, you're gonna do all the things in your job description, you'll always do them great, and I don't need to do anything else.

That doesn't absolve the organization of its responsibility, and it's exactly the same in this agentic world. There has to be monitoring. You know, I think there is a lot of conversation, particularly in the technology world, where people are like: ah, we'll be able to just get rid of everybody and agents will do everything.

But they've just become another player in the team. They will not function without human observability.  

Kris: That really ties into a couple of conversations I've had. Here at the business, we have a mentor program where we talk to our younger staff, and they're involved with the executives inside the organization.

One of the graduate engineers asked a question about this push into agentic and the way in which things are moving. You know, I think we spoke about it on the last podcast, in terms of how the sausage [00:14:00] is made; we had a really good conversation about inventors versus consumers.

Are we moving to a point where the records management or information governance space very much needs to make a decision about whether they are consumers or, for want of a better term, the inventors? The question was very much: look, I'm a junior developer, and obviously there's a gen AI that is apparently able to build code at the same level as a junior developer.

At what point do I become redundant? I sort of said, the interesting thing for me is: if there's a tool out there that allows me to now vibe code and perform like a junior developer in the team, why isn't it true that a junior developer can just be even more efficient with these tools?

You've already got a set of skills beyond mine, so your prompts should be better. Your ability to pull this together should be better. Should you be 20, 30, 40, 50, 60% more efficient? Shouldn't a junior developer now actually be able to do more with these tools than Joe Average? Why isn't that true?

Are we getting to the point where it's not so much that we are removing people, but that your training, your expertise, your degree, your history, your experience, [00:15:00] with agentic AI at your side, is now the equivalent of having multiple team members? We should be more efficient, we should be better, and we should have higher expectations of the level of information governance because of agentic AI.

Anthony: Yeah. I started my career in a law firm. I'm not old enough to have worked before the internet, and you are not either, Kris. The weirdest thing ever was walking into an organization having learned about and used the internet in my studies.

And look, it was the earlier days of the internet. We didn't have all the tools we have today, but we certainly had enough, and the firm still had a library. I'm like: why is this still here? As a junior person working in that firm, probably two years prior to me arriving, someone would ask you about an esoteric case or something, and you would go up to the library, find the book, and maybe it wasn't even in that library.

You'd go across the road (this was in Sydney) to one of the university law libraries [00:16:00] or potentially the state library, pull that data, and check it. I didn't have to do that. I would just search AustLII, or go search for X, Y, Z, and it was on the internet, and that was just native.

And I think we're just at that tipping point. When we talk about this tooling, there's a generation of folk just about to walk out of university, and this agentic piece becomes lingua franca. We're almost still at the library stage of checking a book and checking the case, versus the next stage: I went to Google and I found it, then I double-checked it was the right one, and I never left my desk.

Kris: I think we had that conversation last week; "how the sausage is made" was the terminology we used. It's just part of what they do.

They are just consumers. They will just walk in and have an expectation around this. And I think, for those of us who have been in the industry for a while, it's not so much scary as it is helpful. It's going to make things better and more efficient.

I'm still in awe of the pizza agent. How's that going?  

Anthony: Look, due to user feedback, it's currently offline. I am sitting here in Seattle. My children have [00:17:00] decided that they are adult enough to manage their own processes, and it is now actually not in use. Interestingly, the agent caused two siblings to communicate more effectively, and they've now taken that back to a human set of processes.

Kris: So, tell me that wasn't the goal in the beginning.

Anthony: Look, you know, they may listen to this podcast, so I don't wanna reveal too many secrets. But what I am looking at doing, and was talking about the other day: for those out in the listening community who want their own agent, we are looking to make it an open source project.

Kris: Awesome. We'll look forward to the links being posted, and we'll do some advertising; maybe we might even find a sponsor for that. We won't choose sides today, but we can certainly choose sides.

Let's offer some advice. What are the tools that we should be looking at for our own day-to-day? Do you have some examples of things you are fond of? They don't have to be plugs, but certainly things that people should be looking at and thinking about, even if it's just a matter of getting involved in this space.

Anthony: I wanna hear what yours are too before we get too far down. But look, let's go with the top two or [00:18:00] three. You have to be playing with ChatGPT right now; it has to be the first one on anyone's list. The beauty of ChatGPT, and you touched on it earlier, is that you can actually add agents to it.

They've done a pretty good job. In fact, as others on the podcast may know, I've had a bias to using Gemini and some of the Google tool set, and it's very good as well. But what I have seen in my playing of late with ChatGPT is that the ability to add agents, bring in other content, and play with it is very easy. You can do it in Google; it's a little harder. For the average person who's keen to go do that, I'd go and play with ChatGPT for sure. It gives you a great walled garden to play in, and it's all contextualized to what you are doing at any point in time.

The second tool worth playing with, if you wanna lean into technology and coding and the rest of it, and to give them a plug (it's part of what I built the pizza app in), is Lovable, which is a great vibe-coding app. There are others out there too, Windsurf and others, but Lovable's very easy to use and get started with.

And if you wanna play, [00:19:00] go play with that.  

Kris: I'm probably not as detailed as you in terms of the development pieces, but I have been using things on my personal side. So, again, Gemini; I will give it a plug on the Gmail side. For personal email, I was one of those people that got a Gmail account on the first day, when names and things were really easy, and it's a blessing and a curse.

It's really easy to tell people what your email address is. The curse is that it's really easy for people to guess what your email address is and just send you crap. So, that mailbox gets a ton of work, just because it's signed up to everything ever.

And so, I use the tooling in Gemini to pull all of that out, because I also get my bills there, and I tend to miss them because there are a thousand unread emails and I don't know which ones are bills and which ones aren't. Apologies to my local government, 'cause I never pay my rates on time.

I've been using the bits and pieces from Superhuman as well, trying to play with it on top of my work accounts, with limited success there. Interestingly, on the agents in ChatGPT: I've had a personal account with ChatGPT for a while, but we've also moved more recently to an [00:20:00] account here at RecordPoint, where I've been starting to play with those.

The kids do a lot of stuff, especially with the personal account on ChatGPT, building out a lot of workflows, especially for my son. We might have mentioned before, but he's an athlete. So, meal planning, understanding his calorie intake, and things like that. There's a lot of stuff there.

He's got a Whoop, which he wears to manage and deal with his recovery. It was really, really good at predicting he was gonna get sick a couple of weeks ago. You know: get more vitamin C into him, he's absolutely getting run down, the recovery's not coming back.

You could say, yeah, you're a parent, you tend to see it, but I had data telling me this was happening, because you're not obviously watching all of his sleep and other things. There's a lot of stuff around the house that we are doing in that space.

I've made the house more intelligent. Once upon a time, I'd break a geofence with the car and the garage door would open; then I'd break the geofence leaving the house and the garage door would close. That was always really simple. Sometimes you'd go for a walk with the dog, forget that you've got your phone with you, and break the geofence.

And when you come [00:21:00] back, the garage door's wide open, and the house has been wide open for the last 45 minutes while you've been walking the dog. So, you start having to have it work out: oh, I want the garage door to be open because I'm in the car, it's connected to Bluetooth, I've been traveling to these locations, and it's normally this time of day. It's getting smarter. I'm still pulling up to the house and finding the garage door's not open every single time, so we've got some work to do there. But I'm using Home Assistant and some ChatGPT agents to start to do those sorts of things.
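The extra context Kris is layering onto a bare geofence rule can be expressed as a small decision function. The signal names and speed threshold below are made up for illustration; they are not Home Assistant's actual API, just a sketch of combining several signals instead of trusting the fence alone.

```python
# Toy version of context-aware automation: open the garage only when
# several signals agree, not merely when the phone crosses the geofence.
# All signal names and thresholds here are illustrative.

def should_open_garage(inside_geofence, connected_to_car_bluetooth,
                       speed_kmh, typical_arrival_time):
    if not inside_geofence:
        return False
    # Dog-walk case: phone crosses the fence, but there's no car
    # Bluetooth and the speed is walking pace, so keep the door shut.
    if not connected_to_car_bluetooth and speed_kmh < 10:
        return False
    # Final check against the usual time-of-day arrival pattern.
    return typical_arrival_time

# Driving home at the usual time:
print(should_open_garage(True, True, 40, True))
```

The point mirrors the calendar story earlier: the single trigger is easy, and it's the surrounding context that makes the decision hard.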

It's really interesting. But for me, even when we talk to our customers, it's the interesting things that they come up with, and there are some obvious ones. More recently, I was talking with someone who's very interested in what we're doing around signaling and understanding PII and other things in the data.

And it's like: how do I just have that dealt with, right? We know that this data shouldn't be in these sources. We've defined these things. It gets seen. We know where we want to quarantine it. Make it move, don't destroy it, but make it move and take these next steps.

And I think as we start to think through those really simple processes, we're gonna get more and more [00:22:00] complexity around these things. I can't wait to see what the team produces for us, but also what the customers ask for and what they're really trying to achieve with their data. It's a really interesting space.

Anthony: I think there's a lot of confusion worth separating here about these technologies. Some of the things we're trying to do at RecordPoint are to help smooth out that confusion, and we've got some products coming out that deal with it. On the notion of data, I think everybody's gone: oh look, large language models have built models, they went and indexed the internet, like everything from the beginning of time, and now they answer questions.

But that's not really what we're talking about. Agentic, I guess, is using some of that knowledge, but it's actually trying to apply a much more narrow set of data (and I'm using "data" here rather than "model," because I think people get confused between the two). Without getting too technical, there's a thing called RAG, retrieval-augmented generation, which means that you can mix data with a model to make decisions about things.
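The RAG pattern Anthony mentions can be shown in miniature. In this sketch the "retriever" is a naive keyword-overlap ranking and the "generator" just assembles the prompt; in practice those would be a vector store and an LLM call. The documents are invented examples.

```python
# Bare-bones RAG shape: retrieve the documents most relevant to the
# question, then hand them to the model alongside the question. The
# retriever here scores simple word overlap; a real system would use
# embeddings and a vector store, and send the prompt to an LLM.

DOCUMENTS = [
    "Contracts over $50k require legal review before signature.",
    "Records in category FIN-7 are retained for seven years.",
]

def retrieve(question, docs, k=1):
    def overlap(doc):
        return len(set(question.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def answer(question):
    context = "\n".join(retrieve(question, DOCUMENTS))
    # In a real pipeline this prompt goes to the model; here we return it.
    return f"Context:\n{context}\n\nQuestion: {question}"

print(answer("How long are FIN-7 records retained?"))
```

This is exactly the "mix data with a model" idea: the model's general knowledge stays fixed, and the narrow, governed data set is injected at question time.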

Most of the implementations we're now seeing, when we're [00:23:00] talking about agentic, use some form of that, right? So, to bring it back to the context where I think it links to your point: when we're playing with these tools, what we wanna do is give the model, or the LLM, or the GPT, or whatever we call it.

Those are all slightly different things in some ways, but for the purposes of today we'll treat them the same. Give it the information that's necessary to do the task, and then help it make those decisions. When we talk about the pizza application, it's really simple, right?

It's got a simple set of tasks: it knows how to communicate with those couple of individuals to then make a choice. One of the things I had the LLM do is then make a recommendation to them: oh, you both don't want pizza, so today it's Chinese. And now the model's gonna make different decisions, 'cause it has a memory of those things.

But I'm not exposing all of them to that, if that makes sense. And so one of the things for the listeners to think about, in the adoption of these technologies, particularly in the information governance space, is: what are the different data feeds your business needs from the data you [00:24:00] have, and the processes you need to create within these AI agents?

And then you can start to think about the constraints that come with this. It looks a lot like building ontologies and classifying data, which you've done in the past, because those are linked to the things you've done. You can apply these things in your own life too, so look at your own facets of what you do and start to experiment. Pick your LLM of choice: Copilot, ChatGPT, Gemini. They all have flavors of being able to do this.

Kris: Yeah, and I think the other thing you're hitting on there is that understanding and classifying your data is now as important as it has ever been. Take a simple example: as a business, we want to make good decisions about contracts. What you want to have is a well-classified set of all of the good contracts and bad contracts, classified in such a way that you understand them, so that you can feed that model and help it make those decisions, and constantly tell it: this is good, [00:25:00] this is bad. That's gonna help you create an agent that can do that work for you and be more efficient for you. It's a very simple case.

And so I think the ontologies that we have been building, the understanding of the data that we've been trying to capture, and the recognition that the organization needs to understand all of its data are now valuable to these processes, be it RAG, be it an LLM, be it a model of some other form.

You're now in a place where this is key. I don't think at any other time in my career, Anthony, have we been talking about the positivity of using data in your organization in the right way. For years it's been: if you don't handle it properly, there will be a fine, there will be a problem, there will be a hack. It's all been negative.

The positive here is that the data now has such value to your business, you need to do the information governance thing.

Anthony: Yeah, absolutely. For the listeners, it means that being in this information governance world, you can get ahead of this conversation.

'Cause you [00:26:00] already have a lot of the tools that allow you to embrace this change. As I've spoken more recently to a lot of folk in information governance, or again in records management, there's been a little bit of fear. But as you start to explain and unpack how relevant they are to it, the fear quickly goes away.

And I'd encourage all the listeners: go sign up, grab Copilot or ChatGPT, and start to think about the application of it, the risk inside of it, and how you can help assist the company's business model by driving those things.  

Kris: Excellent.  

Anthony: We've covered quite a few things, Kris, but I know often in these discussions, you bring up a bunch of legal issues and other things to unpack.

Are you completely comfortable with the application of those things?  

Kris: I think you solved my problem before, right? Like, it's not perfect now, and we have this chase of perfection. Probably the biggest thing in any sales cycle that we have to communicate to customers is that even today, with AI-based classification and data capture and management, perfection doesn't exist.

We haven't [00:27:00] ever had it. I think there are still plenty of challenges. I asked the question earlier: the perceived ability to hand off responsibility just doesn't exist. You will never be able to do that. But the flip side is you can become a consumer of a tool. You can understand what it's doing, you can actually review it, and we are back to the process.

I think more than ever we're in this place of: have a process, have governance, have review reports. Attest to those things, know that they're true, and accept that there's risk. We've had plenty of guests on this podcast who have said there are levels of risk that you have to accept.

You can't spend your way to perfection. You could put thousands of people with high-level information governance degrees on checking every single record in your organization, and you might get close to perfection, but humans still make mistakes. So perfection isn't the goal.  

Anthony: The goal is being able to defend a situation when it occurs, to the satisfaction of whatever regulatory, legislative, moral, or ethical standards apply. Right? And [00:28:00] whilst those are harder to put strong financial metrics on, it's an organization's responsibility to interact that way.

And we see that in the real world. I don't want to call out that there's an airline in Australia that had a bit of an oopsie and leaked a bunch of data. You know, there's a real question there: were they investing enough? Was it appropriate for the situation, et cetera? Now, that's a privacy breach rather than necessarily a data control breach.

But it's those things you wanna satisfy when the event happens, ‘cos the event will happen. And I think that's the piece you wanna think about. Think about that in this agentic world, because it is a different metaphor to engage with.  

Kris: And, you know, I think what's been interesting is the internal things that we've been doing.

So, in playing with agentic AI in this space, we've set up things like tools to ask questions, and set the teams on it to go break it. The goal was: there's information hidden in this dataset, and we don't want you to have access to it. We've provided rules and boundaries for the agentic AI and the LLM to [00:29:00] not return these things.

Team, try and fool it into telling you something else. And, you know, there are certainly lots of people out there using tooling for responding and reacting through HubSpot or LinkedIn. I'm sure there's going to be a spate of people trying to trick their agentic AI into liking.

Or responding, and gaining an advantage, Instagram style. How do I go viral? If I can trick someone with a million followers into responding or liking, that's going to draw attention to my thing. If that person's using an agentic AI, how do I do those things?

There's this loop where there's the good, the bad, and the ugly with everything. The how to build a better mousetrap, as you said. What are the rules that you're putting on it? More recently, with some of our LinkedIn posts, I've reused things and posted things and got reactions from individuals, and the reaction came so quickly that they can't possibly have been sitting at their desk and seen this thing.

It had to have come separately. My inner evil mastermind is like, how do I get that [00:30:00] person to react more, in a way they don't intend, just for fun even?

Anthony: Look, I think this is a subject we at RecordPoint are extremely passionate about, and I think we're gonna keep talking about this. I was always interested in blockchain, but I never found it particularly applicable. At the time there was a lot of talk about Web3 and how it was gonna change how data was stored, and I was certainly not on that bandwagon.

I feel completely differently about this. It is going to change how we interact with ourselves, with the world, with data at large, 'cause everything is data ultimately. And that's gonna impact how we go to work and the things we do at work.

But that's really exciting, because it doesn't mean that everything's gonna be thrown out and different. It just means that we're going to be more effective and hopefully find things more enjoyable in the long run. And I think that's the real passion driving us.

The beauty is that at the center of this is data and information, and there is a real opportunity to make that so much more effective in the world.

Kris: It's super exciting. I can't wait to see what comes next. I can't wait to interact [00:31:00] with people and hear how they think. The more we talk about this, the more interesting it gets, because everybody's got a perspective, and at the moment, there don't appear to be any bounds on what we might be able to do.  

Anthony: It is really compelling in terms of the opportunities it can present. I'd love to talk more about it, but I think we're probably at the end of today's podcast. Thank you all for listening. I'm Anthony Woodward, and if you've enjoyed today's episode, please leave us a review on your podcast platform of choice.

We're on LinkedIn under RecordPoint. Head to recordpoint.com/filed for the full FILED experience, including the newsletter and many other assets we have. If you have any feedback or you're keen to become a guest on the podcast, please hit us up via email at filed@recordpoint.com. We'd love to hear from you.

Thank you very much.

Kris: And I'm Kris Brown. We'll see you next time on FILED.
