Legislation like the EU’s AI Act helps to encourage safe and ethical use of GenAI, but compliance can be challenging. Learn key steps you can take to prepare for these laws now to reduce your risk.
When Roberto Mata filed a complaint against Avianca, a Colombian airline, in the Supreme Court of New York, alleging he was injured by a metal serving cart striking his knee on a flight from El Salvador to John F. Kennedy International Airport, he probably didn’t expect the case to open up new rifts in legal theory, computing, or the nature of reality.
Unfortunately for Mata, his lawyers, Steven A. Schwartz and Peter LoDuca, leveraged a new generative AI tool, ChatGPT, to research a rebuttal to Avianca’s request to dismiss the case. The result was an “Affirmation in Opposition” that cited a variety of legal cases, none of which Avianca, or anyone else, could locate. Worse for Schwartz, LoDuca, and poor Mata, the cases did not exist: ChatGPT had hallucinated them; the LLM had made them up. Schwartz later said he was “mortified” to learn ChatGPT was not a search engine.
While this is not the only case of GenAI-enabled legal fantasy (another involved Donald Trump’s former fixer, Michael Cohen), it is surely the most high-profile: the cautionary tale most people will remember when using such tools.
So, what happened in this case? Was the GenAI app operating poorly? Was it hacked to provide poor output? In fact, ChatGPT in this case operated exactly as designed. It was the data that was to blame.
In 2017, The Economist declared that data was the new oil: the world’s most valuable resource. But just as with oil, data must be processed and curated before it is of any real use. We don’t fill up our cars with crude, after all.
Nowhere is this truer than with Generative AI, where the adage “garbage in, garbage out” takes new form. GenAI brings a variety of risks: poorly secured data, issues with transparency and explainability, gaps in accountability, and poor-quality data. These issues would be bad enough on their own, but they exist alongside an evolving legal landscape, including privacy laws and novel AI regulation such as the European Union’s AI Act. Businesses need help navigating this territory and, unfortunately, they probably can’t rely on ChatGPT for guidance. Let’s take a look.
As the tech world races to re-organize itself around artificial intelligence, securing AI initiatives has become crucial. Yet in the rush to adoption, most organizations have missed an essential step: only 24% of Generative AI initiatives are secured, leaving the rest at risk of data breaches and exposure of sensitive information.
Organizations must establish robust frameworks to protect AI data, models, and usage. Instead of fearing AI, we should focus on deploying it securely to enhance cybersecurity and better manage data breaches.
Including sensitive data in AI models can lead to severe legal penalties and non-compliance with data protection laws, causing reputational damage and loss of public trust. CISOs are especially concerned about security risks from unvetted AI models and the potential for models to memorize and expose sensitive training data. Such concerns have led to some CISOs cancelling Microsoft Copilot projects entirely.
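One common mitigation, sketched below in Python, is to screen enterprise text for obvious identifiers before it ever reaches a model or a prompt. The regex patterns here are illustrative assumptions only; a production pipeline would use a vetted PII-detection library plus human review, since names and contextual identifiers slip past simple patterns.

```python
import re

# Illustrative patterns only: real pipelines need far broader coverage
# (names, addresses, medical terms) from a vetted PII-detection library.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifier spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +1 (555) 010-9999."))
# Output: "Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED]."
# Note the name "Jane" survives: simple patterns miss names entirely.
```

Even in this toy example, the name slips through, which is exactly why pattern-matching alone is not enough.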
Just as with decisions made by humans, AI decision-making needs to be traceable to its sources. Why did this system make a particular recommendation? On what data was a given output predicated?
Data that lacks transparency and explainability hinders trust and compliance in AI systems. Without clear documentation on data usage and decision-making processes, auditing and justifying AI model outcomes becomes challenging. This opacity can lead to regulatory compliance issues and erode stakeholder confidence.
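One practical way to build that documentation is an audit trail that records, for each output, which model version and which source documents were involved. The sketch below is a minimal illustration; the record fields and the JSONL log format are assumptions for demonstration, not a prescribed standard.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class OutputAuditRecord:
    model_id: str            # versioned model identifier
    prompt_hash: str         # hash of the prompt, so the log holds no raw text
    source_documents: list   # IDs of the documents that informed the output
    produced_at: str         # UTC timestamp

def log_output(record: OutputAuditRecord, path: str = "ai_audit.jsonl") -> None:
    # Append one JSON line per output so the trail is easy to query later.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

prompt = "Summarize the refund policy for premium customers."
log_output(OutputAuditRecord(
    model_id="support-assistant-v3",
    prompt_hash=hashlib.sha256(prompt.encode()).hexdigest(),
    source_documents=["doc-1042", "doc-1713"],
    produced_at=datetime.now(timezone.utc).isoformat(),
))
```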
When AI models lack clear accountability and ethical guidelines, their effectiveness and ethical use are undermined. With the EU's AI Act in effect, a lack of transparent processes and clear responsibilities may cause stakeholders to delay AI adoption, increasing oversight and validation costs. In Australia, similar requirements have been laid out by the Federal Government, though they are currently voluntary.
On top of it all, "noisy data," or non-useful and irrelevant data, can severely degrade AI model performance and accuracy. There is often far more of such data than legitimate data: think test files or drafts. Training AI models on such data results in unreliable outcomes, increased computational costs, and inefficient use of resources, leading to a higher financial burden on organizations. If something is risky and not useful, why bother at all?
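A first pass at curation can be as simple as excluding documents that trip obvious noise heuristics before training. In the sketch below, the corpus/ directory, the filename markers, and the 50-word threshold are all illustrative assumptions; real curation also relies on metadata, deduplication, and human review.

```python
from pathlib import Path

# Assumed heuristics for demonstration: filename markers and a length floor.
NOISY_MARKERS = ("test", "draft", "copy of", "~$", "backup")
MIN_WORDS = 50  # very short files rarely add training value

def looks_noisy(path: Path) -> bool:
    name = path.name.lower()
    if any(marker in name for marker in NOISY_MARKERS):
        return True
    try:
        return len(path.read_text(errors="ignore").split()) < MIN_WORDS
    except OSError:
        return True  # unreadable files are excluded rather than guessed at

corpus = [p for p in Path("corpus").rglob("*.txt") if not looks_noisy(p)]
print(f"kept {len(corpus)} documents for training")
```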
Many countries are working on AI-specific legislation, but while we wait for these wider directives to become law, organizations must comply with existing privacy legislation where relevant. The EU AI Act explicitly states that the principles of the EU’s General Data Protection Regulation (GDPR) still apply.
The US National Institute of Standards and Technology (NIST)’s AI Risk Management Framework includes “privacy-enhanced” as a key characteristic of what it calls “trustworthy AI”. Meanwhile, Singapore’s Personal Data Protection Commission Model AI Governance Framework offers detailed, readily implementable guidance to address key ethical and governance issues when deploying AI solutions.
There are three sources of risk when it comes to GenAI. Each must be considered when deploying a GenAI model within the organization.
First, the training data itself. The LLMs that GenAI apps leverage are trained on hundreds of gigabytes of data, often scraped from the internet. This means that Personally Identifiable Information (PII) could be included without the necessary legal basis and safeguards.
Second, your enterprise data. As well as general training data, you also need to consider the data your organization feeds into these apps, which may include customer or employee PII (e.g., an employee’s age, customers’ medical conditions).
Third, the generated content. By their nature, GenAI apps equip enterprises with new content, which might contain personal information or offer sensitive or personal insights obtained by inference.
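Because generated content is itself a risk surface, one hedge is to gate outputs before they leave the organization. The sketch below flags outputs containing likely identifiers for human review; its single regex is an assumption for illustration and would not catch sensitive insights obtained by inference, which require semantic review.

```python
import re

# Assumed pattern for illustration: emails and US-style SSNs only.
IDENTIFIER = re.compile(r"[\w.+-]+@[\w-]+\.\w+|\b\d{3}-\d{2}-\d{4}\b")

def release_gate(generated_text: str) -> str:
    """Route model outputs containing likely identifiers to human review."""
    return "HOLD_FOR_REVIEW" if IDENTIFIER.search(generated_text) else "RELEASE"

print(release_gate("Your order ships on Tuesday."))       # RELEASE
print(release_gate("Reached J. Doe at jd@example.com."))  # HOLD_FOR_REVIEW
```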
With GenAI, as with all projects that leverage data, transparency is key. If employee or customer personal data comprises part of your GenAI project, define the outcomes you plan to achieve. Communicate that purpose in a clear and friendly way to the individuals whose data you are using, ahead of the project launch date.
If your organization copies data or uses it for training, you must state this and ensure you have legal grounds to do so. Allow customers to stay in control with robust consent practices: make it easy for them to withdraw consent, and to contact you for more information or to exercise their privacy rights. Ensure there is a human in the loop to review content ahead of publication.
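In practice, consent can act as a literal gate on what enters a training set. The ledger below is hypothetical (a real system would query a consent-management platform keyed to each data subject), but it shows the principle: no recorded, unwithdrawn consent means no training use.

```python
from datetime import date

# Hypothetical consent ledger, keyed by data subject ID.
consent_ledger = {
    "cust-001": {"training_use": True,  "withdrawn_on": None},
    "cust-002": {"training_use": True,  "withdrawn_on": date(2024, 3, 1)},
    "cust-003": {"training_use": False, "withdrawn_on": None},
}

def eligible_for_training(subject_id: str) -> bool:
    entry = consent_ledger.get(subject_id)
    # No record means no demonstrated legal basis: exclude by default.
    if entry is None:
        return False
    return entry["training_use"] and entry["withdrawn_on"] is None

records = ["cust-001", "cust-002", "cust-003", "cust-004"]
training_set = [r for r in records if eligible_for_training(r)]
print(training_set)  # ['cust-001']
```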
Privacy is only one piece of the GenAI governance puzzle, but getting it right will allow your organization to use GenAI more responsibly and ethically, building customer and employee trust.
At all stages of working with GenAI (when considering what data to process and store, when engaging with partners and platforms, and when designing GenAI-driven experiences), always consider five key principles: transparency, accountability, oversight, human agency, and fairness.
The EU AI Act represents a significant legislative advancement, aiming to govern the development and deployment of AI throughout the European Union. It uses a risk-based framework to strike a balance between fostering innovation and ensuring safety, emphasizing the importance of human rights, transparency, and alignment with EU principles. This regulation is set to influence the global landscape of AI, establishing new benchmarks for AI oversight both within and outside the EU.
As part of the Act, AI systems are categorized into four tiers based on their risk to individuals and society: unacceptable risk (such practices are banned outright), high risk (subject to strict obligations before deployment), limited risk (subject to transparency obligations), and minimal risk (largely unregulated).
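As a rough illustration of how a governance team might catalog its systems against these tiers, consider the sketch below. The use-case-to-tier mapping is an assumption for demonstration only; actual classification requires legal analysis of the Act and its annexes, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: documentation, oversight, conformity checks"
    LIMITED = "transparency obligations, e.g. disclosing AI-generated content"
    MINIMAL = "no new obligations beyond existing law"

# Hypothetical internal inventory mapped to tiers for tracking purposes.
USE_CASE_TIERS = {
    "social-scoring": RiskTier.UNACCEPTABLE,
    "cv-screening-for-hiring": RiskTier.HIGH,
    "customer-service-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```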
Organizations working on high-risk AI systems must document their training, testing, and validation processes. This ensures transparency and accountability, enabling better oversight and public trust in AI technologies.
The Act emphasizes human oversight over AI systems, particularly those involved in sensitive areas like employment, healthcare, and law enforcement. This ensures that AI systems augment, rather than replace, human decision-making.
The EU AI Act proposes the establishment of national supervisory bodies across EU member states, coordinated by the European AI Board. These bodies will ensure compliance and oversee the implementation of the regulations.
This Act is especially important for those doing business in the EU, which, in a globalized, online world, is a large number of businesses.
Non-compliance with the EU AI Act can attract financial penalties of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
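To make that scale concrete, the ceiling is simply the larger of the two figures; a sketch of the arithmetic:

```python
def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
    # Ceiling under the Act: EUR 35 million or 7% of worldwide
    # annual turnover, whichever is higher.
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a business with EUR 2 billion in annual turnover:
print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```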
Businesses operating in the EU will have to comply or risk major fines, as well as the leakage of private data when feeding data into GenAI apps. The US is already taking steps towards AI regulation, and we can expect further developments in the coming years, influenced by the EU AI Act.
RecordPoint can help with AI Act compliance by optimizing the management of organizational data used in AI models. The platform helps ensure AI traceability and compliance with the EU AI Act and other emerging AI regulations. It supports AI lifecycle management and compliance, enabling secure and effective AI technology deployment while fostering transparency and accountability.
GenAI brings with it a number of risks to corporate and customer data, and these must be addressed to enable safe and secure usage. We have all read the stories of GenAI usage negatively impacting an organization and its customers. We don’t need to ask ChatGPT to dream up potential doomsday scenarios.
Legislation like the EU’s AI Act helps to encourage safe and ethical use of GenAI, but compliance can be challenging. Organizations may need help to understand their data, ensure transparency and accountability, and achieve compliance. The time to prepare for these laws is now.