15
Why are enterprises hesitating when it comes to AI? With Jason Tan
Enterprises are cautious about AI for good reasons, but those who hesitate for too long are going to be left behind. In this episode, Anthony and Kris meet AI strategist, AI ethicist, and founder of Engage AI, Jason Tan, who discusses the current state of AI adoption and the cautious stance of enterprises, compared with the proactive approach of startups.
They discuss the importance of AI governance, the challenges posed by data and algorithmic biases, and the need for human oversight in AI deployment. Jason also touches on the future of AI and the importance of educating the next generation in a world increasingly influenced by AI technology.
They also discuss:
00:53 Guest Introduction: Jason Tan
01:38 Jason's Career Journey
04:44 AI in Business: Opportunities and Challenges
09:10 AI Governance and Ethical Considerations
15:16 Human Bias in AI
30:30 Future of AI and Final Thoughts
Resources
- 📏 Benchmark: How much PII does the average organization store?
- 📑 Blog post: Mitigating AI risk in your organization
Transcript
Anthony Woodward: [00:00:00] Welcome to FILED, a monthly conversation with those at the convergence of data privacy, data security, data regulations, records, and governance. I'm Anthony Woodward, the CEO of RecordPoint, and with me today is my co-host Kris Brown, our Executive Vice President of Partners, Evangelism, Solution Engineering, and any other task for the day.
Kris Brown: And all other things on my JD, he says. Yes, thanks, mate. That just gets worse every single time. But yeah, no, fantastic. Good to see you. You've landed safely in Seattle and you've managed to avoid any government shutdown issues.
Anthony Woodward: Yeah, that was pretty easy. A slightly delayed flight, but that happens every day of the week.
So it was a nice flight over. I waved at air traffic control and wooed them. But yeah, poor guys,
Kris Brown: they deserve all of our support at this point in time.
Anthony Woodward: I was thinking about this on the flight over. I've spoken to Jason a couple of times.
Jason Tan, who we have on today, has a fantastic background: AI strategist, AI ethicist, founder of Engage [00:01:00] AI, and he publishes a newsletter that I've subscribed to called Behind the Scenes. How are you, Jason?
Jason Tan: I am good. Thanks for having me, and super excited to share some of the things I've done. Also, good to hear that you're okay with all the airport craziness that is going on. Good that you landed okay.
Anthony Woodward: There's always the reality of the experience versus what you hear in the media. Obviously just my experience, but with what's going on: straight through TSA, nobody there, the airplane was only half full. It was a very pleasant flight.
Jason Tan: That's good.
Kris Brown: I guarantee you that will not be my experience on Saturday, but we'll see what happens. Great to have you here, Jason. Let's start at the top for all of our listeners. I'd love for you to tell us your story. How did you get to where you are now, and what's going on?
Jason Tan: Yeah, absolutely. Well, I started my career working for Suncorp. For those of you who are in the States, Suncorp is the second largest insurer here in Australia. I worked with the actuaries to build the [00:02:00] algorithmic pricing, and I often tell people I am part of the reason why their home insurance and their motor vehicle insurance is so expensive, because
I built the algorithmic pricing and did a bunch of really advanced things that people didn't quite realize. Fast forward a couple of years: I did data engineering as my boutique consulting business, where I worked to build clients' BI dashboards and their advanced analytics.
And along the way, I discovered something really interesting as part of the prospecting. When you're the business owner, a lot of the time you spend is on prospecting: finding clients, winning deals, and closing deals. The one thing I discovered that was really interesting, when I was doing that consulting research, is that a lot of my target clients, i.e., the heads of customer,
when they were posting on [00:03:00] LinkedIn, often did not get much engagement. And it's completely understandable, because they don't write crazy stuff, they don't write controversial topics, they don't write like influencers. And that gave me an opportunity: if I just engaged with some of those people a number of times, often it became really, really easy for me to connect with them, send a DM, and then invite them to my podcast, where I shared how a lot of these heads of analytics at various enterprises were using advanced analytics, were using data, to really run the business.
Fast forward to 2022, when ChatGPT was introduced to the world. I realized that I could actually productize that entire strategy. We built an MVP over a weekend, and we got [00:04:00] 2,000 users in a week. In fact, we even had a paying customer on the second day who paid for the entire year.
So that is something I focused on over the last two and a half years: we built Engage AI and we scaled it to a hundred thousand users around the world. And yeah, that is my background. I recently exited the company, but it has been fun. I have been building AI applications that people actually want to use.
Kris Brown: So I have a bone to pick with you then, Jason. I've just received my Suncorp home insurance uplift for the year, and we need to have a chat, right? But anyway, I won't hold it against you, I promise. Again, given that you're deeply involved in the marketplace in this space,
let's start with a simple question, and I know you've answered it in part, obviously, in your own experience. As you said, ChatGPT has joined us, the opportunity is out there, and agentic AI is a bit of a theme for today. How are people really capitalizing on this AI moment?
Jason Tan: [00:05:00] That is a good question. I have been talking to many of my contacts over the last three months, and to my surprise, everyone is still sitting on the fence. Well, at least in Australia; my conversations with a lot of different companies suggest that people are still sitting on the fence. A lot of them have very much just implemented it in the sense of adopting ChatGPT or maybe Google Gemini, and they are not too sure exactly what the next step is. So to my surprise, not that many people are trying to introduce a new product or new opportunity with these new capabilities of AI that we have known
for the last two and a half years. So I think that was a real surprise to me. On the other hand, if I speak to some of the startups, generally they are trying to do a lot more with the AI, and they certainly try to introduce AI into their product. [00:06:00] However, I think the biggest challenge, or the biggest gap, that I see is that a lot of the companies who do try to introduce AI very much build a chatbot that is similar to ChatGPT. I actually think that is not the right solution for the market, for the customers that they are serving. They should think about how to incorporate the AI into the business operations and solve some of the problems.
In saying all that, they are still trying to hire people just to adopt AI in the organization. Whereas in startup land, people are certainly introducing AI, and you see a lot of new companies trying to tap into this opportunity. So that is what I discovered. Is that the same as what you found?
Yeah.
Kris Brown: I think I agree, and I was probably going to ask you a follow-up along the lines of: what do you think is slowing the enterprise side? Obviously, in the startup world, we understand that you're always trying to do more with less, [00:07:00] and there is this tool out there that's helping you.
You know, in reading some of the Behind the Scenes articles, you give some great examples about how organizations like Lovable and others have grown enormously from a really small base of employees. But why do you think that is? And I'm going to lead you here a little bit. Do you think it's because we have a bit of respect for that governance and regulation element, and people are a little bit worried about, well, what am I putting in there? Or is it just that we don't know what we don't know?
Jason Tan: Well, there are many reasons, and I think one of the reasons those enterprises are often slow is because there are just too many meetings, and approval would be required. And they also have to think about a lot of those things, right? But equally, in terms of the enterprise, I think what startup people often underestimate is that the enterprise has an existing reputation to protect. So it's not as easy as for a startup, which has [00:08:00] got nothing to lose by introducing AI. For a lot of these enterprise companies, it means they have to worry about the governance, they have to worry about the ethics. They have to worry about what happens if the AI says some crazy thing that puts them in the headlines of the newspaper for being a racist AI, all those sorts of things. And equally, from all the conversations that I'm having, one thing about a lot of these new roles that the enterprises are hiring for, the head of generative AI, is that they are not only about adopting AI, but also about
setting up the AI governance and the data governance, to make sure that they are protecting their existing reputation, because they have got something to lose, right? So as much as we like to laugh about how slow the enterprises are, we often forget that that's because they have billions of dollars of revenue that they can't simply risk
along with their [00:09:00] reputation.
Anthony Woodward: I'd be interested, though: what do you think these risks are? Because clearly, on one side, as you say, the startups just dive in because there's value to be created and they can quickly accrete that, but that has some risk. And then on the flip side, the enterprises are looking at it going, well, until I've gotten rid of most of the risks, I can't lean into it.
What are those risks, and how do we look at that in terms of what people have to embrace with these innovations?
Jason Tan: I think there are two things that I see a lot in the enterprise world in terms of adopting AI, right? Number one is putting the AI into customer service, and the other one is,
if they have a big engineering team, introducing AI to the engineers to write software, to write code. I think the second part is not so much of a risk, but if you think about the first part here, where AI is used for customer service, that is a really big risk that they have to worry about, in that they have to make sure [00:10:00] the AI is functioning as expected, with guardrails surrounding the AI so that it would not say anything crazy,
would not say anything racist, would not in effect insult the customer. So those are the risks they have to worry about. But more importantly, the challenge with the form of AI that we have got with the LLMs now is that it's a probabilistic system rather than a deterministic system.
So in the old days, when I was building the algorithmic pricing for Suncorp, very often when something went wrong, we literally were able to read the code, trace every single step, and figure out exactly what went wrong so that we could fix it. But the LLM, the form of AI that we are using now, unfortunately is [00:11:00] not exactly that way; it is almost like a black box.
When you ask one question or give it an instruction yesterday, then give exactly the same instruction today, it could come out differently. So with those things in a black box, where you don't really know and cannot be a hundred percent sure exactly how it is going to work, you've got to worry.
So that is the thing that is stopping enterprises from adopting it. But more importantly, there is the search and retrieval: is the AI retrieving the correct information? Those are the things they have to be sure of if they are putting the AI in to serve the customer.
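As an editorial aside, the distinction Jason draws between traceable deterministic code and a probabilistic LLM can be sketched in a few lines. The functions, prices, and canned replies below are invented for illustration; they are not Suncorp's pricing logic or any real model.

```python
import random

def deterministic_price(age: int, claims: int) -> float:
    # A rules-based pricing function: the same inputs always produce the
    # same output, and every step can be read and traced when it goes wrong.
    return 500.0 + age * 2.5 + claims * 120.0

def probabilistic_reply(prompt: str) -> str:
    # Toy stand-in for an LLM: the answer is *sampled* from a distribution,
    # so repeating the same prompt can yield a different reply each time.
    candidates = [
        "Yes, that is covered by your policy.",
        "It depends on the details of your policy.",
        "Please contact our support team.",
    ]
    return random.choices(candidates, weights=[0.5, 0.3, 0.2], k=1)[0]

# Deterministic: reproducible and auditable.
assert deterministic_price(40, 1) == deterministic_price(40, 1)

# Probabilistic: the same question, asked 50 times, usually produces
# more than one distinct answer.
replies = {probabilistic_reply("Am I covered for flood damage?") for _ in range(50)}
print(len(replies))
```

Real LLMs can be made more repeatable by lowering the sampling temperature, but as Jason notes, they remain probabilistic systems rather than traceable rule sets.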
Anthony Woodward: Yeah, well, an interesting point.
I'd love to understand, when you think about an AI stack, and we should probably even give the audience a definition: when you say AI, which we've said plenty of times, what do you mean by that?
Jason Tan: Well, my definition of AI is [00:12:00] technology that is able to perform the functions that a human can do.
That is my high-level view, from my experience of working in this field for close to 20 years now. The good side of what OpenAI has done is that it has got so many more companies and so many more people talking about AI. That is the good side of it. The unfortunate side is that a lot of the time, when they talk about AI, they very much just talk about ChatGPT or LLMs.
But there is a bigger world of AI out there, of what AI can do. For example, computer vision: when you build it right, it can perform functions that a human can do. But very few people understand, let alone talk about, that topic, because a lot of people, when they talk about AI, actually refer to ChatGPT. Is that the [00:13:00] same experience that you have found?
Anthony Woodward: Yeah, to some extent. I guess you're touching on where I was going with my question: AI is used too loosely right now to mean so many things, I agree. And so folk have stereotyped it to mean a certain process that looks a lot like ChatGPT. And look, when you're an executive; I've spent my day writing a one-page brief to investors based on a question they asked me at nine o'clock last night.
And I love my investors, and thank you very much for giving me the opportunity to do such things. You know, hopping off a plane and not sleeping is always fun. But my use there, and I think for a lot of executives in business this is not dissimilar, is: I've got to do a thing, I want to do it quickly,
I'm not thinking very straight, help me get my thoughts together. It's so fantastic at that, right? It took something that probably would've taken me four or five hours and I was able to do it in an hour. So, you know, that's magical, right? I got to go to bed rather than it being my task. But that's only one version, one way of looking at how the AI is implemented, or what benefits we can get from the evolution we've seen, right?
These models that are fine-tuned for an application versus, you know, retrieval-tuned; your LLMs are really different, right? And I'm really interested in your experience, right? This sort of fine-tuned Suncorp model: I'm going to give you a series of questions to answer, you are going to come back and answer those, but it's finely tuned to this data set. Versus this wider LLM and its ability to do retrieval-augmented generation, or its ability to give me a response through that, which comes with different risks.
Jason Tan: Yeah, it does. It does come with different risks, and I would say there are multiple stages to doing a lot of those things right. Number one is that you fine-tune on a very, very specific subset of the data and the information that you want to use to serve your audience, or your customer, whether internal or external.
So you certainly want to do that one thing. And then the second step is also the search and [00:15:00] retrieval, because a lot of the time the information is just exactly like how we humans would respond.
Anthony Woodward: So Jason, you've written about the human biases behind algorithmic decisions, and it'd be really interesting to understand that. Because, as before, when we were talking about the definition of AI, you said it was anything replacing a human, which to me is almost any piece of software;
most software is replacing a human function of some sort, right? The simplest thing, like using Excel in a spreadsheet, is replacing a human who would tabulate that by hand with a pencil, right? So aren't human biases in everything, behind every algorithm? And, you know, do you even want to get rid of them?
Jason Tan: Well, by that definition Excel is also a bit of AI; let's not worry about that definition. But I think when I say the AI replacing, what I mean by that is a lot of those things that are much more fluid, [00:16:00] in a way that the AI can do, rather than just a system that is very, very rigid.
Nevertheless, let's come back to your question in terms of human bias. From my experience, I have found that there are two types of bias. Number one is data bias. Number two is algorithm bias. So let's talk about data bias. Whenever I talk about data bias, it reminds me of a project.
One of the projects that we did for a university was very much trying to understand how the health of personal investors would impact their return on investment. One of the things we had to do was look at the profile photos of these people, and then we trained an algorithm so that we could estimate their BMI.
Now, before we [00:17:00] started building the algorithm, the thing we had to figure out and do was assemble training data for doing the facial recognition: looking at a profile photo and then coming out with a BMI calculation. To do that, we actually had to buy data.
We had to source data, and one source that was a lot easier to obtain was mugshot photos, i.e., people who have their photo taken before they go into prison. And the data that we got from there was very specifically from one state.
As we were training on the data, we started to realize that there was a bias in the data itself: it was skewed to certain races, and it meant that we were getting really, really good at predicting [00:18:00] the BMI for certain races, and not really good at predicting the BMI for other races, simply because there was not enough data in there.
So that data bias itself is a challenge, right? And if you try to put that into the current era and context: we always talk about OpenAI training on however many trillions of pieces of data from the internet. Take a step back and think about all the different kinds of writing, all the different kinds of writing material, that we can find on the internet, whether we like it or not.
I think that is a data bias that is skewed in a certain direction, whether rightly or wrongly. The underlying data bias is going to skew in certain directions simply because there is a lot more of certain data to be trained on by the AI, by the big tech companies, skewing and weighting it in certain directions, right?
So [00:19:00] that is the data bias. In terms of the algorithm bias, that comes down to your engineers: how they are building the algorithm, how they are tweaking the algorithm, and equally what sort of guardrails are put on the algorithm today. Over the last two and a half years, with all these big tech companies from Google to X to OpenAI, you can equally see that different companies have a different taste and a different direction in terms of the algorithm,
where one allows certain things but another company doesn't necessarily allow them. So those are the two biases that I think will always be introduced. They will always be there, and that is something we probably have to worry about, to a certain extent. That also comes back to one of the articles I wrote a couple of months ago, where [00:20:00] I talked about AI sovereignty. Part of the reason I wrote it is because there are so many conversations saying that, well,
Australia should have its own AI, and it should have its own values, based on the values of Australians. I totally get that. At a high level it sounds great, because we shouldn't just use the values of America or China. So that's great. But when it comes to the actual implementation, how do we know that it is actually voicing
the different individuals and different groups of people within the country? Those are the things where humans are going to make their judgment, and that will introduce certain bias. So that is my take.
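As an editorial aside, the data-bias problem Jason describes, where under-represented groups end up with a worse model simply because there is less data about them, can be sketched numerically. The group labels, BMI figures, and sample sizes below are made up for illustration; the point is only that estimation error grows as a group's sample size shrinks.

```python
import math
import random
import statistics

random.seed(0)

def sample_bmi(true_mean: float, n: int) -> list[float]:
    # Simulated "ground truth" BMI observations for one demographic group.
    return [random.gauss(true_mean, 4.0) for _ in range(n)]

# Imbalanced training set: plenty of examples for group A, very few for B.
group_a = sample_bmi(27.0, 5000)
group_b = sample_bmi(24.0, 20)

# The uncertainty of any estimate built from a group's data shrinks roughly
# with sqrt(n), so the under-represented group ends up with a worse model.
stderr_a = statistics.stdev(group_a) / math.sqrt(len(group_a))
stderr_b = statistics.stdev(group_b) / math.sqrt(len(group_b))
print(f"uncertainty for group A (n=5000): {stderr_a:.3f}")
print(f"uncertainty for group B (n=20):   {stderr_b:.3f}")
```

The same sqrt(n) arithmetic applies whether the model is a BMI estimator trained on mugshots or an LLM trained on internet text: whichever group contributes less data gets noisier predictions.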
Kris Brown: My head's sort of starting to throb a little bit there, Jason.
Like, you know, it's kind of interesting, because you go: we want to create a model that doesn't have bias, and then we're going to introduce an individual who has to [00:21:00] choose whether the data coming in is biased or not. But they won't get to do that at any form of scale, so they would need to introduce an AI, or an algorithm, that's going to help them filter it.
Which would also then mean that someone has to determine which biases they want to introduce or not, and that in itself has its own bias, and the paradox continues. And so I agree with the position of "wouldn't it be great if", but there's also that: how do you implement those things?
Mm.
Anthony Woodward: And I think an AI that looks at the AI that checks the AI, and then that AI checks the other AI; it's perfect.
Kris Brown: I was trying very hard not to do that, but it's exactly what I said. So it's an interesting one, and I think that leads probably to a bit of a follow-up question for me, which is: assuming that there's not much we can do about that, or that maybe we shouldn't do much about that and should just check for it. And again, I'm scratching my head as I say these things out loud, because even in using a ChatGPT or a Gemini, asking questions, getting it to perform outcomes for me, and getting
[00:22:00] results, I still need to review and check. And I think
Jason Tan: A hundred percent.
Kris Brown: There's that education element there, in that we need to help people understand it's there to help. As Anthony said, when your boss tells you you have to do a bunch of stuff, you can get it done on the plane really quickly, punch out the other end, and maybe get some sleep.
Thanks, boss. But at the same time, there is that element of: you still have to be cognizant that this is your output. And then, escalating that up to the business, you still have to be cognizant as an organization of what you are using AI for and how you are regulating that.
And I think we're starting to see a lot of examples of AI adoption outpacing regulation and governance. But there is a very large portion of organizations now saying that they've got some form of AI governance policy in place. Some of the policies that I've seen are "thou shalt not use AI", which is probably not a great one.
But having a policy is fine; it's now about practice. How can organizations continue to [00:23:00] accelerate that innovation without compromising on trust or compliance?
Jason Tan: So that is really where the organizations have to implement and use certain technology to put that in place,
to put in that safeguard, to put in that guardrail. They have to do something about it. It has got to be more than just a checkbox to say, have we done this, have we done that. I think they have to start exploring what technology is available out there to help them do the AI governance,
to help them do the data governance, so that the employees who use AI to perform certain functions will always be bounded by the policy that they believe in, and so that the employees are checking and reviewing. One of the words that I often use in describing this scenario is "taste". I think in five to ten years' time, and especially [00:24:00] in my kids' generation (my kids are seven and five), they literally are going to grow up AI-native. How much taste do they have in terms of their work? Do they
just rely on whatever the AI gives them, without checking, without adding their thoughts, without adding their personality? Or do they actually want to review, and add their thoughts and their personality? We are going to have people of all different kinds. You are always going to have the people who just take whatever the AI gives them.
On the other end of the spectrum, you're going to have the people who say, you know what, I don't want to use AI at all. And then you're going to have a very, very large group of people in the middle, spanning all different parts of the spectrum in terms of how much exactly they are going to audit and add their [00:25:00] personal thoughts into the output they
produce with the help of AI.
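As an editorial aside, one concrete shape the governance technology Jason describes can take is an automated pre-flight check on prompts before they leave the organization. This is a hypothetical sketch, not any real product's policy engine; the pattern names and rules are illustrative only.

```python
import re

# Hypothetical governance policy: block prompts containing obvious PII
# before they are sent to an external AI service, and record why.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def allowed_to_send(prompt: str) -> tuple[bool, list[str]]:
    # Returns (ok, reasons): ok is False when any PII pattern matches.
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
    return (not hits, hits)

print(allowed_to_send("Summarise our Q3 churn numbers."))
print(allowed_to_send("Email jane.doe@example.com about her claim."))
```

A real deployment would go well beyond regexes (named-entity detection, per-policy allowlists, audit logging), but the point stands: the guardrail is enforced by the system, not by hoping every employee has the same taste.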
Kris Brown: Yeah, and I think, as you said, it's that level of understanding that you need to check. It's that level of understanding the data that you are feeding in. It's putting those guardrails in place, and it will continue to happen. Even here in Australia, more recently, there was a very large organization involved in a government contract where they produced a report which clearly had hallucinations all the way through it,
with made-up links and other things. And, you know, the report should have been produced more cheaply for both sides. Obviously, ultimately, having AI workers do that work means they should be able to do things faster and cheaper, and there's a benefit to that. But at the same time, there should have been checks and balances in place, so that the influence of AI on that document wasn't so great that the document was effectively wrong.
I think there's that balance, and certainly these are the sorts of things you've [00:26:00] addressed in some of your writing.
Jason Tan: Yeah. So in that particular case, I think, unfortunately, the individual who produced that report with the help of AI had very low taste for his own output.
That is why the AI governance system is so important. Because if you expect everyone to behave exactly the same as you do, you are going to be disappointed, a hundred percent guaranteed, right? Everyone has different levels of expectation. To solve that problem, to make sure that doesn't happen, you want to have AI governance in place, a governance system in place, to make sure
that it checks and prevents all those sorts of things from happening. That is the importance of the AI governance system. And for organizations that are still on the fence about AI, I think that's the first thing they should look at. One of the things I often use as an analogy is [00:27:00]
comparing AI to crypto, or to blockchain, where after all these 10, 15 years we are yet to see a good, actual application that is leveraging the blockchain. Whereas for AI, in just two and a half years, we are seeing so many applications. So AI is not going to go away; AI is definitely going to stay, so you don't want to be left behind. But if you are worried about how it would impact
your reputation, then an AI governance system is really the one thing you should start seeking out.
Anthony Woodward: The more I've worked in the space and the more I've spoken to people: we, as you know, technologists in the space, want to talk about explainability, interpretability, provenance, transparency, and then we talk about that as a nuanced way to establish trust.
But we do that [00:28:00] because we don't fundamentally have the capability to explain why AI makes a decision. That's the one problem, right? In a classical algorithm, I can trace you through, with all the bugs that I've inserted and any code written, and I can at least tell you the logic, eventually. When we go through a nuanced understanding of an AI system, and these are beginning to become very complex large language models, or other
neural networks that can be applied, that ability to create traceability through to the decision consequence is very difficult. And you know, I was reading a study the other day, right? When you do something like look at radiology, AI interpreting radiological reports, AI is amazing:
generally considered, and there's plenty of literature around this, to be better than humans. Not outstandingly better, but definitely better. And there was another study around doctors' malpractice, analyzing all the different [00:29:00] malpractice cases and trying to map them back to decision criteria.
Really bad, and really bad because these are really human interactions that occur, that have so many things that reweight them in terms of an interpretation. So there are these different uses for AI, and I don't think we're talking about what's good and what's bad in terms of that use, and what's a good set of systems for the use and what's a bad set of systems for the use.
When you are out talking to people, and I know you've written some articles around this, how do you talk about this good and bad, and this establishment of trust? Because we can't do trust the way we'd traditionally go, which is: trust is,
I can show you how it worked. Or, in the case of a human, I can go and say, why did you make these decisions? Please tell me, and the human will answer. The AI can't do that, and that's why it's different.
Jason Tan: Yeah. Well, one thing that I often tell people, right, is that there are certain use cases where you can be very comfortable putting the AI in, [00:30:00] and it is okay when it goes wrong; those are certain use cases, especially some business use cases.
However, there are some business use cases where you can't afford to be wrong, you can't afford to make a mistake, and that is where you have to put the human in the loop. So that is really number one. And number two, I actually think that we should work on the basis, with the expectation, that the very AI we are seeing today
is perhaps just a second stage of the AI, and with all these really intelligent people who are doing the research, who are finding ways to improve it, we should work on the basis that it will improve, it will get better, where one day it can be a deterministic system. And the third thing
that we know for sure already is that the [00:31:00] imperfect system we have got right now already has 800 million people as weekly active users of ChatGPT. You know, it's not going to go away, that's for sure. So AI, whether we like it or not, is not going to go away. And often I tie it to the fundamentals of human nature, right?
Is this thing going to make my life easier? Yes. Is this thing going to save me a lot of time? Yes. Given that those two conditions are satisfied, we are going to want to use it, and it will improve over time. So my view is that if you are not sure, and if you are concerned, put a human in the loop.
What you need to understand, though, is that this thing is not going to go away, and if you don't do it now, someone else is going to come up with a different way of disrupting your [00:32:00] business with the use of AI, and by the time you realize it, it's probably too late for you to do something about it. Meanwhile, the other party, who was taking certain risks while also safeguarding themselves against those risks by putting the human in the loop and
implementing an AI governance system, is a lot more ready than you are, because you did nothing with AI. And in the near future, when some very, very intelligent people figure it out and fix this problem, they wouldn't have to worry about it. Right? So that is my view of it.
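As an editorial aside, the human-in-the-loop pattern Jason recommends is often implemented as a confidence-based router: an AI answer goes out directly only when the system is confident enough, and everything else is queued for a person. The threshold, labels, and example answers below are invented for illustration.

```python
def route(answer: str, confidence: float, threshold: float = 0.8) -> str:
    # High-confidence answers are sent directly; low-confidence ones are
    # held for human review, so mistakes in high-stakes cases get caught.
    if confidence >= threshold:
        return f"SEND: {answer}"
    return f"HUMAN_REVIEW: {answer}"

print(route("Your claim is covered.", 0.93))
print(route("Your claim is covered.", 0.41))
```

In practice the threshold would be tuned per use case: lower for low-stakes tasks where being wrong is acceptable, and effectively 1.0 (always review) for the cases Jason says you can't afford to get wrong.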
Kris Brown: I was going to ask you, as a bit of a finish-up question, what you think the future looks like, but you've kind of just answered it. So I've got one quick follow-up on that, Jason. Can you remember a time when your mind was changed when it came to AI? What was the point in time, and what was the pivot, that led to
the position you've just taken, which is: I know that we're going to keep using this; you [00:33:00] asked yourself those couple of questions. Do you remember what that time was?
Anthony Woodward: This is my new standard question. You can't steal my question.
Kris Brown: I'm stealing it. It's a good one. Bad luck.
Jason Tan: I think my mind was changed not against using AI; my mind was changed when companies like Google and OpenAI introduced AI that can make video and also graphic images
a lot better than a few years ago. It's mind-blowing. I think it's mind-blowing. You could imagine that without the AI, if you had an actual graphics person doing that, it would take at least a few hours, right? But now you can write a prompt, and it comes out with those videos or those images in less than a minute.
Yes, they are not perfect. But remember, just about two years ago they were horrible, and in just two years they are getting so good [00:34:00] already. What will happen in 10 years? So my mind was changed, not the other way around in terms of "I don't trust AI" or "I don't want to use AI" or "AI is not going to be useful". My mind was changed to:
I have to teach my kids how to use AI. I have to teach my kids how to grow up in this world of AI, where AI is going to perform a lot of those things much, much better than they do. How are they going to have a fulfilling life, and also a fulfilling income, a job or a business if they are into that,
for the rest of their life? I think that is what I have to think about all the time.
Anthony Woodward: Look, thank you very much for joining us today, Jason. This has really been a fantastic and super interesting conversation. For the listeners out there, where can they find you?
Jason Tan: Find me on LinkedIn; search for Jason Tan.
Jason Tan has got the blue [00:35:00] verified badge. Or find me on my Substack, jpcpen.substack.com. Those are the two places that would be best to find me.
Anthony Woodward: Fantastic. Well, as I said, thank you for joining us; it's been a fantastic conversation. Thanks all for listening. I'm Anthony Woodward, and if you've enjoyed today's episode, please give us a review on your podcast platform of choice, and share it on LinkedIn, like Jason. Head to recordpoint.com/filed for any additional
FILED experiences. I believe we even do an outtake of all the errors that Kris had in each of the previous FILED episodes. There are some up there that Kris has done, and they're quite amusing. So head over to recordpoint.com/filed for the outtakes. And if you have any feedback, ideas for an episode, want to become a guest, or just want to come say hi, email us at filed@recordpoint.com.
Kris Brown: And I'm Kris Brown, the guy who makes all the mistakes. We'll see you next time on FILED.
Jason Tan: And thanks for having me. It has been great.
Kris Brown: Thanks, Jason. Cheers mate. Appreciate it.
