What does it take to be an AI-ready business in 2025?
In 2025, every business wants to innovate with AI, but far fewer understand how to balance their AI ambitions with governance, or how to maintain customer trust throughout the transformation.
Alyssa Harvey Dawson, a board member at organizations including AppLovin and AI 2030, and formerly of companies like HubSpot, Sidewalk Labs and Netflix, shares her experiences and insights on balancing innovation with risk management, the role of data in AI solutions, and the importance of maintaining customer trust through responsible data use.
This episode also delves into how board members can effectively oversee AI initiatives without needing to understand the technical intricacies, grounding AI governance in practical business solutions and risk management frameworks.
Also: what does AI have to do with improv?
They also discuss:
00:00 Introduction and Guest Welcome
00:51 Alyssa Harvey Dawson's Career Highlights
03:47 Starting with AI Governance
05:52 Data Governance and AI Integration
09:12 Board-Level AI Governance
12:21 Risk Management and AI
17:25 Future of AI Governance
30:53 Improv and AI Governance
35:07 Conclusion and Final Thoughts
Resources:
- 📏 Benchmark: How much PII does the average organization store?
- 📑 Blog post: Mitigating AI risk in your organization
Transcript
Anthony Woodward: Welcome to FILED, a monthly conversation with those at the convergence of data privacy, data security, data regulations, records, and governance. I'm Anthony Woodward, the CEO of RecordPoint, and with me today is my co-host Kris Brown, our Executive Vice President of Partners, Evangelism and Solution Engineering, which continues to be a mouthful.
We'll sort that out another day, but today we have a fantastic guest, who I've really been looking forward to having on the podcast. We have Alyssa Harvey Dawson, who is a board member at organizations including AppLovin and AI 2030, which I can't wait to talk more about, as well as being a former general counsel and tech exec at companies including HubSpot, Sidewalk Labs and Netflix.
Welcome to the podcast today, Alyssa.
Alyssa Harvey Dawson: Thank you very much for having me. I'm really happy to be here.
Kris Brown: Beautiful. And look, welcome to the podcast today, Alyssa. It's great to have you on. We're gonna spend a lot of the time today talking about AI governance and, [00:01:00] especially with your experience, how boards can play a role in that. I'd love for you to just give us maybe that quick tour through the resume, especially being involved with companies like Autodesk, Netflix, eBay, Sidewalk Labs, HubSpot. What stood out for you? What's been the thing that you've done in your career where you look at it and go, this was the thing?
Alyssa Harvey Dawson: That's a great question, and first of all, thank you for being excited to have a conversation on AI governance. When I read what your podcast is about, you don't find a lot of people who are just like, I wanna talk about AI governance, but it's so crucial these days. So I'm really happy to be here.
When I think about my career, I feel like I've been fortunate to work at kind of that intersection of law, technology, and innovation. And you mentioned a bunch of the names of the companies that I've been involved with, but the through line for each of them was that they were always seeking to be ahead of the curve.
They were innovating and creating, sometimes in new areas: streaming technology, streaming [00:02:00] TV at Netflix. You knew it was there, but it hadn't had the same type of impact as what they were creating. And that was what their goal was: to set out and create it. Another company that you didn't mention, that people probably don't know as much about, was a company called Harman. I went there to head up IP because they were the first company... they were a tier two automotive supplier, and they worked with all of the premier automotive companies, and they were the first ones to put basically navigation, connectivity, everything that we now take for granted in cars, into cars.
And so I went there because they were seeing big tech come in to say, hey, we think we can own this piece. And Harman's like, hey, we already did this, we know automotive and we know how to do it better. So they were like, we've been innovating long before you guys thought this was interesting.
And so, don't forget us. Samsung eventually bought them, because I guess they agreed. But again, the great thing that was always there was the desire to move the needle, push the needle forward beyond the cutting edge, [00:03:00] and move things. And I played my part from a legal perspective, whether that was IP governance; I eventually ended up doing data governance, and responsible AI through Sidewalk Labs was the beginning of that journey. All of it was helping to manage risk, but really to manage it so that the innovation can keep moving forward. That was my thing: how can we help the growth and direction of the company and minimize the bad stuff that can happen?
Anthony Woodward: Fantastic. And we love that you're here to talk about AI governance, 'cause it's something Kris and I can probably talk about for three or four days, which we won't keep you that long, but thank you for making the time. There's so much to unpack when you start with a new company, and you've started with a few. How do you start thinking about these things, with your background in legal and in thinking about data governance and AI governance? What do those first 90 days look like, and how do you start unpacking it?
Alyssa Harvey Dawson: You know, it really starts with understanding what the business is about.
What we're building, where we're trying to go, what's the strategy, what are the business [00:04:00] drivers, you know? I just don't think there's kind of like a world for a legal or any support team outside of the business, right? You have to be integrated in and understand, and have an appreciation for, where the business is trying to grow, what it's trying to do.
And so for me, that was always the starting point: I am here to understand what this business is about, and then to figure out how my function, my area, my role, if it's in the C-suite, is going to be able to advance that strategically. So you always start at the baseline of understanding the business and the industry that it's in and who it's trying to serve, the customers and the problems it's trying to solve. So that's kind of the lens with which I went in. For me, I was a journalist.
I don't know if you guys were journalists, but I was a journalism major. And one of the things that you learned in traditional journalism was, you know, understand as much as you could about your subject matter so that you can report on it in a fair and accurate and complete way.
And so I kind of [00:05:00] go into companies with that mindset of trying to understand as much as I can about that environment.
Anthony Woodward: I think you can tell that I'm sadly just a law and technology graduate. And Kris has way more skills, I think, than me on the journalism side.
Kris Brown: Let's just say I spent a lot more time behind a microphone, but we won't say that it's journalism.
Anthony Woodward: What he used to do... Kris was a DJ. A lot of time, very, very late nights, in many, many nightclubs.
Alyssa Harvey Dawson: Later we can talk about that. I had a friend who ran a nightclub in DC, so you might have gone to it, actually. But that's for a...
Anthony Woodward: Very good chance, for another time.
So when we think about those programs, to boil it down to today's topic on AI governance: what are the key criteria, and let's even timestamp it, October '25, 'cause this is moving so quickly, that we should be thinking about for AI governance? And how would you think about integrating it into that plan in a new role?
Alyssa Harvey Dawson: Yeah. You know, so, you mentioned, I serve on a public company board. I serve on private [00:06:00] company boards and an advisory board. I'm a co-chair of the AI governance subcommittee of AI 2030, which is about enabling people to develop
responsible AI and put it into the mainstream, such that by 2030 we're building things intentionally and responsibly, ideally avoiding some of the negative consequences. One of the things I realized when I started this journey, with what at that time we called machine learning, deep learning, not just we, everyone called it that, was that you really needed to understand what data and information you had or were trying to get access to, because it's the data that dictates what you're gonna be able to do with that innovation.
There's just an inter... it's intertwined: understanding the data, understanding the innovation. And ultimately, if you're in a business where you're trying to serve the needs of your stakeholders, you wanna also understand how to [00:07:00] keep the trust that you have when you are using either your own data, or your customers' data, or your business partners' data.
And so to me, those things are intertwined. And so it really starts with: what data do I have? And how am I going to treat and use that data in a responsible and trustworthy way, so that it ends up having the outcomes that we wanted to have, and avoids some of the negative things that could happen if I wasn't thinking about that in the right way?
So it starts with the data.
Kris Brown: You are in the right place. It doesn't get any easier or simpler than that: it all starts with the data. I think all of those compliance efforts come back to understanding what you have, being able to know what you're then going to do with it, and being able to dictate, within that framework, why you are doing those things.
Because you might be doing things with sensitive data, with private data, but as long as you've got that framing... We are talking about how you do these things in terms of setting up that AI governance, and, as you mentioned, you've had a bit of history as well of looking after data governance inside organizations.
There is this natural blend. How [00:08:00] do you get buy-in to take that moment to say, well, this is what we need to do before we rush ahead? Because right now there's the MIT study that was out more recently; you know, a very large percentage of these early adopter programs had AI failing. Probably the outcome being that most of it was: we really weren't aware of what it was gonna do with the data, and it didn't feel like we were quite prepared for that.
How do you give that sort of direction to a business? I think more from that board-level position that you've had the experience in.
Alyssa Harvey Dawson: Yeah, so I mean, it's ultimately about a partnership, and it goes back to something I was saying earlier, which is: when you come into a company, or a position as a board member, you're trying to understand and work with management to understand the business drivers and what it is that the company does.
If it's a technology-oriented company, what that technology differentiator is. And I mention that because being AI-ready means you are moving away from having a knee-jerk reaction, like, [00:09:00] everyone's in a frenzy: oh, I have to have something that's AI. You kind of forget the business fundamentals, right?
You forget that what you're trying to do is come up with a thoughtful solution to a business problem that your customers have. And so to me, the thing that you do when you come into companies is you don't say to them, hey, I'm gonna wave a bunch of checklists and compliance things at you. You kind of need to bring it back to: oh, what problem are we trying to solve for the customers?
What information, what data are we using that would help us solve that problem? And then you go into the: okay, if that's what we're trying to do, have we thought about this outcome? Have we thought about that outcome? And you start to have that dialogue, but it's all grounded around the business solution and the problem that you're trying to solve.
And I think that if you do it from that vantage point, you are talking with your partners, whether that's in product, engineering, the compliance teams, the security [00:10:00] teams. If you're all grounded by the business problem that your AI-driven solution is trying to solve, then I think you're at least talking from the same playbook, and you're showing that you have an interest in furthering that innovation and doing it in a way that's smart and responsible. So I try to ground it in the business and make sure that people know we're on the same team.
And that's the business team, it's the customer team, it's helping our stakeholders. And normally most people would nod along with that. They're not gonna be like, oh, I don't care about helping stakeholders, or, no, we're just trying to make up random stuff, we're not trying to solve a problem.
People are trying to solve a problem usually. And so that kind of helps you to sort of be on the same playing field, and then you can introduce those other concepts.
Kris Brown: Yeah. Excellent. And I think you kept coming back to that customer trust: having that understanding, being a part of all of the elements of the business, and bringing them back to that grounding.
Like I said, I really do like that.
Anthony Woodward: Yeah. I love the application of the real framework, that kind of leadership technique of [00:11:00] isolating those pieces out. But I think on the ground, in reality, there are a number of problems that folks are trying to tackle. So yes, grounding in the business is awesome, but we start to get into the complexity of what data, what contracts, what controls, and then how do I step into that?
Because there are some hard lines there, right? Employee data is tricky. Customer data is tricky. There is legislation out there, like the CCPA. So how do you bring that grounding of the business context, bring all of that, into this conversation?
Alyssa Harvey Dawson: Yeah. Well, I think, once you're grounded in the business, that helps you to also take a step back and say: what data and information do we need to use in order to solve that problem? What do we have? What might you need to bring in? And then you can get a little more granular, right? So, say you are in the health space and you're coming up with a revolutionary solution or product that is going to help [00:12:00] recommend potential therapies for people.
And to do that, you think that you need access to large volumes of individuals' health data to make those predictions worthwhile and usable. So I just hit on one of the most sensitive data areas, right? Information about people's health and welfare, information that could impact their welfare.
And so you already now know that you're dealing with some of the most sensitive of sensitive information. And so there are obviously laws that govern what you can do with that information and data. I'm not gonna get into HIPAA, and any act that comes out when it's talking about, you know, high-risk data; like, you're hitting on it.
But now that you've isolated that that's the type of data and information you have, that should dictate some of the protections, right? The considerations, the concerns that you want to place around that data, because of the category that it sits in. So [00:13:00] I talk about framing, for businesses, high-, medium-, and low-risk data and situations, and based on where you fall, right?
You're gonna treat those things differently, and you're gonna do it for very obvious reasons once you've isolated what you're trying to do with it. And so, I don't know that anyone would be surprised when you take a step back and say: we're talking about people's sensitive health information and data, so I'm going to wanna keep that very safe and secure.
Probably anonymized, protected; it needs to be accurate; you wanna be transparent. You know, these are all responsible AI principles, but that is a natural fallout based on the data and information that you wanna use. And again, you're gonna have partners in your product and your engineering teams wanting to do the same thing.
And I certainly found that in any company that I was at: people cared about keeping the trust of the customers. And [00:14:00] so once you talk about and isolate out the data, and don't treat everything the same, I think you can come up with reasonable tiers of risk.
Make sure that your business understands how to tier those risks. You wanna help them to determine what the company's risk appetite is. And then you wanna see thoughtful, business-grounded strategies to protect some of the things that are the highest risk. And then, you know, maybe you're taking a bigger risk with some of the things that are in the low-risk categories, maybe some more internal development.
Things that have less, you know, hair around them, if you will. So that's how I think you ground yourself in what to do: you really isolate what's important, what's the type of data and information, and then you come up with appropriate strategies based on where you fall, if that makes sense to y'all.
Anthony Woodward: It makes great sense at the data level. I wanna ask the more complex question, in two forms though. As a board, how do I really understand the algorithmic impact assessment? Because you [00:15:00] even have Anthropic out there today saying this is black magic and we can't even tell you how the black box computes things.
And I know that's the kind of extreme example, alongside the ability to then describe that so a board can sign that risk off. I completely get that at a data level we can quantify it; we can do this in a way that I think most board members are equipped with the skills to do. When we get to the algorithmic conversation, how does that sign-off happen?
Alyssa Harvey Dawson: Yeah. So, you know, board members don't have to be engineers, right? We are not making the models. I have a journalism background, right? So I surely am not gonna sit in the shoes of an engineer and pretend to understand the complex calculations that are taking place.
And if you are trying to do that, you're just gonna knock yourself out of the water as a board member, 'cause you're never gonna get there, unless you happen to be that board member that has that skillset; most don't. So what can you do? What should you be thinking about? Again, your business should be able to answer the question [00:16:00] about what it is that they're trying to do.
Not in a convoluted, you know, engineering, algorithmic way, 'cause we're not trying to make sure that you're doing it the right way from a computer science degree perspective, but like, literally: what are we trying to do? Oh, we're trying to predict A, B, and C from X, Y, and Z is explainable to a board.
And to do that, we're gonna take this information, and we're gonna use this information, and spit out A, B, and C. I'm like, okay. So now you can process that, you can figure it out, so you can understand what data's being used. You can understand how the AI fits into the business model, and then that opens you up to think about the risk.
So if you just told me, going back to the healthcare example: I'm gonna be using highly sensitive data to predict whether or not a therapy is a good avenue for someone's treatment. I don't really think I need to know exactly how that thing is doing it. I can, though, ask different questions.
I'm like, okay, so how are we guarding against inaccuracies, right? How are [00:17:00] we ensuring that we're having the right level of information to be able to make that prediction? Those are the questions that you can ask around it, because you can sort of see what the bad outcomes would be, and you would hope that your engineers are also asking the same questions, because they too
want to have something that works, right? There's no sense in doing that project if, at the end of the day, your recommendations are wrong: they're gonna be recommending or telling people that therapies work or don't work, and that's inaccurate. Like, your project's a failure, right? So no one's about having a failure.
And so it naturally flows that you're gonna do that. You can even raise bias, right? It's like, oh, you know, how are we clear that the test subjects... that this is gonna be a good outcome for Black or brown people who also might benefit from this gene therapy? I'm obviously making all this up, but right.
But you know it's gonna be useful to a wider population, 'cause we [00:18:00] want it to be sold and used by a wide population. And so it just goes back to asking the practical questions, moving away from the black magic, and just asking what the thing does and what outcomes you're trying to find.
And then thinking through: what are some of the outputs that could be concerning, and what are the types of things that we're doing to guard against those?
Kris Brown: I think it's a great explanation, and it helps the practitioners who would be listening to the podcast go: we do need to just simplify these things and talk about it in a natural-language way that people can understand, because the board members will understand that.
They'll ask those critical questions about: how do I protect the business? How do I ensure we're doing the right thing? As you said, how do I ensure customer trust remains through this process? In reading the article that you published, and it's a little bit of a leap here, but the setup for it is:
we've got that AI governance framework. I love the fact that there's a diagram there where there is a massive overlap between AI governance [00:19:00] and data governance. And you've touched on it a little bit here as well through the chat. But how do we, and certainly the practitioners in this world, how do we continue to beat that drum?
Understanding what your data is: what do the boards really see? We're obviously in the data governance space, selling into very large organizations who have got lots and lots of data. But it's still been a long process, if you've been in this space for 20-plus years.
Selling to these organizations, it's always that story of, you know, why do I need to do this? What's some advice you could give to the listener, the practitioner who's out there going: I'm a part of a large organization, we know they're looking to use AI. I think this is almost a new opportunity for data governance teams to be like, we're a big part of what needs to happen here.
But what are your thoughts there, Alyssa, around how they can communicate back up to boards around why data governance is so important here?
Alyssa Harvey Dawson: Yeah. I mean, it really goes back to what I was saying. It's relevant [00:20:00] because the solution that you're likely trying to get to with AI is dependent on the data and information that's going to be driving it.
Right? And so the two are just interwoven. I don't know, and you guys might, because I don't know all the things that are going out there, but most of the solutions and the things that I hear about what AI can do depend on massive amounts of data. I mean, that's why people are so excited by Gen AI and its ability to process millions upon millions of bits of information out there at amazingly fast speed and to come up with solutions, outcomes. It's almost natural that you are going towards that, because the success of your solution usually depends on the amount of information that you're gonna be able to feed into that algorithm so that the outcome is actually going to be worthwhile.
I think that if you just stick with: what is the solution trying to do? Who is it [00:21:00] trying to help? Whether that's a customer support solution: how is it gonna be helpful? Because it's helping you to process all the thousands of questions that customers have in a faster way.
What are those thousands of questions? It's data, right? It's information. It's what you have come to access as part of your company. So you bring it back, you tie it back to: what are we trying to do? What are we trying to prove? What solution are we having? And you move it away from something that seems foreign or different, right?
I think we probably get into trouble when we treat it like it's something that's so different than other stuff that we've seen before, 'cause then people are probably trying to figure out a different framework. I'm like, is it different because of this? It's like, no, no, we're going back to the same thing.
When I think about board oversight, making it a part of your regular enterprise risk management solution, just like we did with cyber, grounds it back into something that's real and tangible that people know about, as opposed to making it ephemeral. Bring it back to the basics, and I [00:22:00] think you'll connect more with people.
Kris Brown: Yeah, I think even in the example that you were very skillfully making up before, with the health piece: if you go back to that low, medium, high risk, it's having data governance in place to know what those things are, having access to all of your data, and having those classifications against it as well.
I think it all flows down to that, you know, what is it I'm trying to do and how am I going to do it? Here is that piece, for the listener, for the data governance practitioner out there. This is a whole new time where you are going to be a part of the solution.
Alyssa Harvey Dawson: I love that. You are part of the solution, right? You are going to help make a better product, right? Because at the end of the day, that's what the company, that's what the business is trying to do. They want that competitive differentiator.
They want something that's gonna sell, and you can be part of enabling them to do that in the best way. Once again, you're seen as part of the team, part of the crew that's enabling that to happen. I think you're invited in more than if you're [00:23:00] seen as a checklist person, in a way that's not connected.
I mean, it never worked before, quite honestly, because anything that's detached from what people are trying to do to drive revenue and return on investment is gonna seem like, you know, why am I bothering? So the more you can connect what you're doing with the returns and the results that people are trying to have, the better off you're gonna be at gaining traction and attention.
And I've learned that the hard way. By no means am I, like, that perfect legal practitioner who did not make this mistake many times. But you then realize: how can I show that what I'm doing is actually connected to what they're trying to do? And then you see the aha.
You see the: oh, yes, come in, let me help explain to you what's happening here. And it's such a better position to be in as a person who's figuring that out. It's such a better place to be, on the inside, than to just be trying to peek in from the outside and figure it out.
Kris Brown: I really appreciate you saying that. I think that is a key takeaway, even just from this podcast in general: be a part of that, [00:24:00] show them how you're adding that value, and that will help to elevate you into the group, and then you become part of the solution. And there's all the other elements, the other upsides, of good data governance.
But this is one of those unique moments where everybody needsthis. And I do appreciate that.
Anthony Woodward: I'd love to drill into something you did talk about there, though, around boards treating AI as just another component of your enterprise risk process. You know, if I could draw you out on it: do you think we need to be getting closer to what we see with the SEC cyber incident disclosure, for AI? Because we don't have some of those same controls yet occurring in a way that exposes things to the public light of day.
Is that a track you see us going down, or do you see a different method in that space?
Alyssa Harvey Dawson: Yeah, I definitely see a trend. I was just at a dinner the other day talking about AI governance with other board members of different companies.
We were talking about: what are the trends, what are people seeing? There was a Harvard Law article on corporate governance that was showing, in [00:25:00] 2025, that there was a growing trend towards boards adding AI governance and a risk review to either their standing committees, like sometimes you have an audit committee that's looking into this, like privacy, data governance, cyber, usually with audit, or your risk committee.
But you know, sort of adding that into the mix, and that trend was increasing. By the 2026 proxy season, if you haven't incorporated this into your risk factors already, well, that's kind of shocking; it's definitely gonna be there. And it should also be a standing part of your board meeting; people are updating their charters to be more explicit about it being a part of them.
And so I do think that it falls squarely in there. It's a risk, right? You're managing a risk. So why wouldn't you put that in there and treat it as such? You know, it just makes sense to do that.
Anthony Woodward: For the first time ever, I think, Kris, we're starting to see that risk, around data risk, around records risk, actually appearing on the board, which is fantastic.
It's something I think we've been talking about for [00:26:00] years. Is there a blast radius for boards to think about? And what I mean by that is: is there a different framework we need to apply here, in your view, when we talk about AI? 'Cause the blast radius is so much bigger and can have conceptually wider impacts.
Particularly, you know, as you take yourself out a year, a couple of years from now, and AI is so intertwined in the way people work, which is what we're expecting. Is there a model to think about that yet, in your view? Or is it really just the same methods we've always used?
Alyssa Harvey Dawson:I don't know that the methods have to be that different.
Yes, the blast radius could be very different for companies, especially if you're dealing with some of the higher-risk, more sensitive areas and outcomes that could be generated from using an AI tool or a model. But I don't know why that would need to feel so different than your top risks, right? You know, when you're sitting back and saying: what are some of the top existential risks to [00:27:00] the company?
You know, as a board, you wanna hear about those, because we don't wanna hear about all of them; companies have a lot of risks. So you're focusing on those. If your AI solution, product, feature falls into that, then that should be part of the discussion. Or, quite honestly, it could be that you're not having an AI solution or differentiator and your competitors do; that should fall into the competitive factors or the environmental ones.
And so I don't know that we have to recreate the risk framework. I think we just need to use the risk framework and see where AI fits into it or not, because I also fundamentally believe that if you are dealing with AI in a low-risk scenario, maybe for your company it's more about your internal operations and productivity and efficiency, which is great, but it might not be as existential.
I don't know that you need to bump it up into a different category just 'cause it happens to do with AI. I think that's artificially inflating the risk. [00:28:00] So I think we're better off trying to use the frameworks that we have, which I think are good at identifying those high, medium and low risks and which ones to focus on as businesses.
Putting it in there, and then using that review to help inform how much attention should be paid by the board. I'm generally of the mindset that if we don't have to create new things for people to learn and figure out, and we can use ones that are working, why don't we? We were looking at, you know, things like the NIST framework at other companies. And what are we talking about? Transparency, fairness, accountability. I mean, it's not that far off from what you were asking yourself. So let's not recreate the wheel unnecessarily. And when you do, you do, but let's not do it just because.
Anthony Woodward: I was listening to one of your podcasts, or one of the podcasts you were on, more correctly, and I'd love to know: what's your intersection between improv and AI governance?
Where do you see those connecting? Because in that podcast you talked a bunch [00:29:00] about the legal side, but I was keen to explore the data side.
Alyssa Harvey Dawson: Oh, that's so funny. I can't even remember what podcast that was, where I mentioned improv.
Anthony Woodward: It was...
Alyssa Harvey Dawson: Oh, oh, oh. That was fun. That was fun. Okay, so with improv, you have to remain agile. With improv, you're a part of a partnership, right? Improv only works if the group works together, and I already talked about the importance of people being part of the solution. At the end of the day, with improv, like I already said: being agile, being part of the solution.
You have to be a great listener. You have to be a great listener. And so that goes back to understanding what the business is trying to do, understanding the industry, and really opening your ears. There are just so many great trainings from improv. And you know, the last thing I'll say is: with improv, you're trying to have fun, right?
AI has the potential to be transformative, which improv can be as well. [00:30:00] To get there, there are gonna be complex things to work through, but if we do this the right way, in the responsible way and the safe way, and we have amazing things come from it, that could be a lot of fun.
So let's not forget that the fun and transformative part of this new innovation is something to create as well. So those are the intersections I see.
Anthony Woodward: Great answer. We should explore that one further.
Alyssa Harvey Dawson: I did not have on my bingo card that we'd talk about improv.
Kris Brown: That's what we do, Ali. I think you mentioned a little bit that you're obviously a part of AI 2030, where you're looking to be a part of that future. Where do you see this going? Let's make it two years into the future, 'cause everything's moving so very quickly; I don't think we can even go five at this point in time.
Obviously you're discussing those things with that group. But two years from now, where do you see this?
Alyssa Harvey Dawson: If I had my way, we would have a structure where it is just commonplace for companies to embed developing AI responsibly into data and design workflows. It's part of the DNA of what people are building, and it's there because [00:31:00] people have realized that trust, accuracy, fairness, accountability are fundamental, business-critical principles that, whatever you do, you can't go without.
And so, because you care about that, you're gonna care about building things the right way and the best way possible. And you don't have to sacrifice speed or innovation to incorporate that. That's sort of my nirvana: where people are just talking about it as if it's part of the everyday; it's mainstream.
When I think about AI governance and building frameworks for AI 2030, that's what I would like to put out there in the world, so that people see it as not some complex thing that's hard to understand or figure out, but something that's everyday: everyone's job, from the C-suite on down, to incorporate.
Kris Brown: And I would think, again, it goes back to AI by design.
Alyssa Harvey Dawson: Right? Privacy by design, AI by design, security by design. It's just in the DNA.
Kris Brown: Yeah. And I think, [00:32:00] again, for that listener, for that data governance practitioner, this is the thing to hold onto.
You've got that power of helping an organization to ask those questions about the data that you've got. Ensuring that we're using the right data, the right way, to get those outcomes should be the goal of that practitioner: to be a part of it. I think that's a great little snippet of why we should be doing this.
I really liked that. I really appreciate it. That was great.
Anthony Woodward: It's been really great having you on the podcast today, Alyssa. I did have one kind of parting question that's been gnawing at me.
Alyssa Harvey Dawson: This is the curve ball, isn't it?
Anthony Woodward: I'd love to know the last time you changed your mind on AI. Like, was there a thing you thought was a big risk but you flipped on it 180 degrees? I always find those learning moments, when we perceive something as being one thing, and then we actually discover, as we pull the thread, it's completely different. Just those critical junctures.
Alyssa Harvey Dawson: Yeah. Yeah. I mean, without sort of divulging confidences, I would say: during a discussion
with the stakeholders and [00:33:00] the business, thinking that the thing that was being proposed, the AI solution, was going to cause harm, and then using some of those tools that I mentioned (listening, connecting it with the business outcome, calibrating the risk), realizing that, oh, okay, this is less scary. So this is something that can be easily greenlit, and we don't have to spend a lot of time on it. I think it's important to have those aha moments, and to exhibit that you're having those aha moments, so that people can know that, yes, you're able to listen, comprehend, understand, and shift gears based on the data and facts that are actually presented to you at any given time.
I think that's incredibly important. It's important to get that trust from your internal stakeholders, and to then be considered part of the solution as opposed to part of the problem.
Anthony Woodward: Thank you for joining us, spending some time with us, and sharing what is a ton of amazing insight. I really appreciate the time you spent with us [00:34:00] today, Ali.
Alyssa Harvey Dawson: I appreciate being here. You know, at the end of the day, I think customers, investors, and employees want trust. They want transparency, they want accountability. And you can use responsible AI governance to get there. And I think that we're positioned to get there. I do feel like I hear people talking about it more,
at the early stages, than, say, people were talking about privacy and data governance. And so to me that's a very hopeful sign. It's appearing faster.
Anthony Woodward: It's amazing what the threat of science fiction and Skynet has created.
Alyssa Harvey Dawson: It is.
Anthony Woodward: Well, thanks everyone for listening as well. I'm Anthony Woodward.
Kris Brown: And I'm Kris Brown, and we'll see you next time on FILED.
