Meta AI’s social feed is a privacy nightmare
Is making AI queries public a good idea? Or a privacy disaster in the making?
Hi there,
Welcome to FILED Newsletter, your round-up of the latest news and views at the intersection of data privacy, data security, and governance.
This month:
- Italy’s privacy watchdog fines Replika for its poor privacy practices
- The DoJ investigates claims that Coinbase customer service agents accepted bribes for personal data
- A broad coalition of groups and individuals is voicing its opposition to any changes to the GDPR
But first, do people really want to make their AI queries public?
Meta AI’s social feed supercharges Shadow AI risk
Lately, we’ve talked a lot about the dangers of Shadow AI, where employees use free versions of Generative AI (GenAI) tools for business purposes, inputting sensitive or confidential information. Because these users are typically on a free plan, this sensitive data is then used as training data for the publicly accessible model, creating accidental data breaches and putting their organization at risk.
But new moves from tech giants have brought this potential source of data exposure out of the shadows. What if, in addition to sharing sensitive data with a large language model (LLM) assistant like Meta AI, you also shared it with the world?
Meta’s social AI gambit
Meta recently launched a standalone Meta AI app, complete with a Pinterest-like social feed for the AI era, where you can view AI queries shared by other users and comment on those you find interesting. Currently, queries are not public by default (though Meta does have a spotty track record when it comes to abruptly changing privacy defaults), and users have to click “share” to make them public.
But browsing the feed, amongst the AI-generated images of puppies and bodybuilders, it’s not too difficult to find examples of people seemingly unaware their chats are public, like someone querying “the difference between demonic possession and multiple personality disorder” or requesting a ‘draft complaint letter to “Bronwyn”’. It’s hard to imagine why people would deliberately share such specific and often personal chats.
Meta AI seems to be confusing people by mashing up two distinct experiences – using social networks and chatting with LLMs – to create a third paradigm that combines Facebook’s frictionless sharing experience with the power of an LLM. The result is people asking about very personal, sensitive subjects and then sharing the results – like Reddit, except it’s not anonymous and you are primarily asking an AI.
What this platform is actually for is up for debate, but currently, people are using it to apply for jobs, do their jobs, talk about their first day at their job, and look for a new job. It seems difficult to believe that accidental data disclosures won’t follow. And once the data is out there, it’s out there.
AI governance is (still) essential
OpenAI, while apparently no longer pursuing a fully for-profit structure, is still desperately in search of growth to justify its high valuation and rounds of funding. Its CEO, Sam Altman, has also discussed adding a social element to the platform. The ‘AI social platform’ model may become a popular one across the industry, making it vital you get your AI governance under control now. If your team knows the right tools to use, how to use them safely and compliantly, and is seeing value from using them, they will be less inclined to seek out Meta AI to do their job or get some ideas on dealing with “Bronwyn.”
Don’t wait on regulation to solve the issue (redux)
Finally, if you’re hoping for regulation to help you navigate this change, you were given fresh reason for concern this month with the news that the US House of Representatives was looking to grab the wheel from the states and put a moratorium on state-level AI lawmaking. The move is part of a package of legislation that would roll back state-level AI laws and make the federal government the sole AI regulator for US tech firms. This is all part of the push to speed up AI development to outcompete Chinese rivals, which we covered in April’s edition of FILED. Critics have said the move – which is very early in the legislative process and may never become a reality – would amount to a gift to tech firms, which had asked for an end to the patchwork of state-level laws.
Whether or not this new effort is successful, it’s a sign that lawmakers may have different priorities, and it’s still up to you to manage AI governance within your organization to ensure these tools – whether Meta AI or good ol’ Copilot – are used safely. The AI risk remains; it’s up to you to respond.
🕵️ Privacy & governance
What you need to know about the FTC’s COPPA amendments.
More than 100 groups and individuals have banded together to object to the proposed changes to the GDPR (which we covered in last month’s newsletter).
🔐 Security
🔓 Breaches
🧑‍⚖️ Legal cases & breach fallout
The Department of Justice is investigating Coinbase’s contracted customer service agents in India, who allegedly accepted bribes in exchange for giving criminals access to user data.
Amnesty International urged Thai authorities to investigate claims of state-sponsored cyberattacks against human rights organizations and pro-democracy activists following the leak of internal government documents that appeared to suggest such activity.
The latest from RecordPoint
📖 Read:
Data mesh vs data fabric: what’s the difference?
Unblocking AI: Why 73% of orgs have stalled their Copilot rollout — and how to move forward safely.
In the modern world, safeguarding against data risk should be a top priority for your business. Our guide shows you how to identify different types of data risk, conduct an assessment, and manage any potential threats.
Read our guide to assessing, mitigating, and managing data risk.
I wrote in SCWorld about the data governance lessons in Elon Musk's DOGE campaign.
Also from me, a look at the intertwined nature of data and AI.
And we're pleased to announce a strategic partnership with Oceanic Consulting Group (OCG), a premier boutique advisory firm redefining financial services strategy in Australia. Learn more in this news piece on our website.
🎧 Listen:
CKAN Project co-steward and Link Digital Executive Director Steven De Costa joins Anthony and Kris to discuss how information governance fits into the open data world.
AI consultant Rob Williams joins FILED to discuss the rise of shadow AI, and what companies can do to combat it.
Josh Mason, CTO of RecordPoint, was a guest on the DOS Won’t Hunt podcast, along with Katie Klein, vice president of marketing for Comcast Business, to discuss the pain points of introducing new technology into legacy operations.