RecordPoint offers guidance on responsible AI regulation

RecordPoint believes AI regulation should protect citizen privacy. The company provided this perspective to the Australian government, which had invited public feedback.

Miles Ashcroft

Head of Risk

August 10, 2023

We believe the Australian federal government should regulate artificial intelligence (AI) systems to ensure they safeguard sensitive citizen data.

RecordPoint provided this perspective as part of a submission to the Australian federal government, which has invited feedback from the general public on responsible AI regulation.

The feedback was in response to an in-depth discussion paper on responsible AI produced by the Department of Industry, Science, and Resources. While many related federal government initiatives to regulate AI are underway, this discussion paper sought systemwide feedback on actions that should be taken across the economy on AI regulation and governance.

This discussion is happening in the context of increased global regulatory action to address the risk of AI and with the acknowledgment that AI brings many potential opportunities and challenges to the country.

As an organization focused on leveraging machine learning (ML) to build products that protect privacy and reduce risk, our submission focused on reducing the risk of AI systems exposing citizens’ personally identifiable information (PII).

We have republished excerpts from our submission below:

Q: What potential risks from AI are not covered by Australia’s existing regulatory approaches? Do you have suggestions for possible regulatory action to mitigate these risks?  

Our response:

Risk 1: Presence of PII in the model/training data  

Unlike jurisdictions such as the European Union, Australia's current framework gives no consideration to training or model data. If platforms are built on models or training data containing PII, bad actors can resurface this PII with a carefully constructed query.  
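One mitigation for this first risk is to screen training corpora for PII before a model ever sees them. The sketch below is a minimal, hypothetical redaction pass; the regex patterns and placeholder format are illustrative assumptions, not a prescribed standard, and a production system would need far broader detection (names, addresses, health data, and so on).

```python
import re

# Hypothetical patterns for two common PII types; real pipelines use a
# much larger detection suite, often including ML-based entity detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    # Australian mobile numbers, with optional spaces or dashes.
    "PHONE": re.compile(r"\b(?:\+61|0)4\d{2}[ -]?\d{3}[ -]?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with type placeholders before the text
    enters a training corpus."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jo at jo.citizen@example.com or 0412 345 678."
print(redact(sample))  # Contact Jo at [EMAIL] or [PHONE].
```

Note that the name "Jo" survives: pattern matching alone cannot catch named-entity PII, which is one reason regulation of training data, rather than voluntary filtering, matters.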


Risk 2: Disinformation risk

Generative AI systems such as large language models (LLMs) attempt only to satisfy a query; they do not consider whether the answer is truthful. A hallucination occurs when an AI generates false information. Hallucinations can cause harm at the individual level, by defaming people, and in the aggregate, as disinformation and inaccurate data begin to drown out the truth.

Q: How can the Australian Government further support responsible AI practices in its agencies?  

Our response:

Australian government agencies should work with industry to create a resource kit focused on the following:

  1. Data Governance and Privacy: Strengthen data governance and privacy regulations within government agencies. Ensure that AI systems handle sensitive data responsibly and implement procedures to safeguard citizen data.
  2. AI Ethics Guidelines and Frameworks: Develop and update comprehensive AI ethics guidelines and frameworks tailored to government agencies. These guidelines should emphasize the importance of fairness, transparency, accountability, and privacy in AI deployments.
  3. AI Education and Training: Invest in AI education and training programs for government employees to enhance their understanding of AI technology and its ethical implications. This education will help ensure that personnel involved in AI projects are well-informed about responsible practices.
  4. Establish AI Review Boards with Industry: Create multidisciplinary review boards comprising experts in AI ethics, law, and the social sciences. These boards can assess proposed AI projects for compliance with ethical standards and provide valuable insights for improvement.
  5. Open Data, Synthetic Datasets, and Collaboration: Encourage the sharing of AI-related research and best practices among government agencies. Open data initiatives can promote transparency and collaboration while allowing agencies to learn from one another's experiences.
  6. Bias Mitigation and Fairness: Implement measures to detect and mitigate biases in AI systems, especially in law enforcement, welfare, and decision-making processes. Consider the impact on vulnerable or marginalized communities and ensure fairness in AI applications.
  7. Public Engagement and Consultation: Involve the public and relevant stakeholders in discussions about AI deployment in government agencies. Seek feedback on AI policies and initiatives to ensure the technology aligns with societal values and expectations.
  8. Promote Ethical AI Suppliers: Encourage agencies to use AI products and services from vendors that follow responsible AI practices. Consider ethical considerations in procurement decisions for AI solutions.

Q: Where and when will transparency be most critical and valuable to mitigate potential AI risks and improve public trust and confidence in AI?  

Our response:

The government should mandate transparency across public and private sectors, with traceable controls. Transparency should be mandated in the following ways:

  • The training data used for a given AI model:
      • Where it came from;
      • Any personal information (PI), PII, or copyright-protected material present in the data;
      • Possible biases.
  • How organizations are deploying these models.

A good model is the nutrition label on food. We are not all nutritionists, but standardized food labels (together with a broader social context for what is "healthy") allow us to understand what is in our food and make informed decisions about what to eat.  

In the same way, the presence of “AI nutrition labels” would allow customers to make informed decisions on whether and how to interact with a given platform, business, or agency.
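To make the analogy concrete, such a label could be a small machine-readable schema published alongside a model. The field names below are assumptions for this sketch only; no such standard currently exists.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AINutritionLabel:
    """Illustrative 'AI nutrition label' for a model. Field names are
    hypothetical, not drawn from any published standard."""
    model_name: str
    training_data_sources: list[str]
    contains_pii: bool
    contains_copyrighted_material: bool
    known_biases: list[str] = field(default_factory=list)
    intended_uses: list[str] = field(default_factory=list)

# Example label for a fictional model.
label = AINutritionLabel(
    model_name="example-model-v1",
    training_data_sources=["public web crawl", "licensed news archive"],
    contains_pii=False,
    contains_copyrighted_material=True,
    known_biases=["under-represents non-English text"],
)
print(json.dumps(asdict(label), indent=2))
```

Just as a shopper scans a food label for allergens, a customer or regulator could scan such a record for `contains_pii` before deciding whether to interact with a platform.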

AI regulation is still in its infancy

While governments worldwide continue to evolve their regulatory frameworks, most recently in the European Union, we look forward to further opportunities to provide feedback on AI regulation and its privacy implications.
