The Tea hack shows the burden of collecting too much user data
The Tea hack shows that when your userbase isn’t anonymous, you need to protect their data.
Welcome to FILED Newsletter, your round-up of the latest news and views at the intersection of data privacy, data security, and governance.
This month:
- 16 million PayPal usernames and passwords are for sale
- A guide to AI governance for SMEs and founders
- And an interim report from Australia’s Productivity Commission proposes radical changes to the Privacy Act, leading to a rebuke from the Privacy Commissioner
But first, get the tea on why laws requiring identity verification raise the stakes for companies offering online services, with the Tea hack a worst-case scenario.
If you only read one thing:
The Tea "hack" offers an object lesson in the dangers of identity verification
Last month’s Tea data breach, which saw 72,000 images and 1.1 million DMs belonging to its female userbase leaked, offered a vivid case study of several trends we discuss in this newsletter: the rapid growth of AI making application development accessible to all, data breaches caused by easily avoidable cybersecurity mistakes, and the unintended consequences that come with rapidly introduced privacy regulations.
As a quick reminder, Tea was a women-only app built for researching male partners or anonymously warning other women about men. The app was vibe coded by an AI-native founder with less than six months of coding experience. He may have been well-meaning, but he made obvious mistakes, like leaving that trove of 72,000 images and 1.1 million DMs sitting in unencrypted databases, accessible with minimal effort. The breach has led to users’ images and locations being posted to (of course) 4chan, the creation of a “hot or not”-style app, and an interactive map of the userbase.
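To show how avoidable that mistake was: encrypting sensitive files before they reach storage is a few lines of work. Here is a minimal sketch in Python using the widely available cryptography library; the key handling is deliberately simplified (a real deployment would pull the key from a secrets manager or KMS), and a local file path stands in for whatever bucket or database a production app would actually use.

```python
from pathlib import Path

from cryptography.fernet import Fernet

# Illustration only: in production this key would come from a secrets
# manager or KMS, never generated or stored alongside the data it protects.
FERNET = Fernet(Fernet.generate_key())

def store_verification_image(user_id: str, image_bytes: bytes) -> Path:
    """Encrypt a verification image before it ever touches storage."""
    ciphertext = FERNET.encrypt(image_bytes)
    # A local file stands in for a private bucket here; the point is that
    # what lands in storage is ciphertext, so an exposed store leaks
    # nothing usable without the key.
    path = Path("verification") / f"{user_id}.bin"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(ciphertext)
    return path

def load_verification_image(path: Path) -> bytes:
    """Decrypt on read; only holders of the key can recover the image."""
    return FERNET.decrypt(path.read_bytes())
```

Encryption at rest is no substitute for access controls, but it does mean a misconfigured or exposed store leaks ciphertext rather than people’s faces.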
The key issue was that Tea inherently needed an exclusively female userbase, so to use the app, users had to verify their gender by submitting a government ID and a selfie. This self-imposed burden meant the app was storing not only all the regular data that comes with being a social network – IP addresses, location data, the content of direct messages and posts – but also the far more sensitive data used to gain entry. Removing the option of anonymity raises the data storage risk. And recent age verification laws mean more services will have to take on the burden of storing this data.
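There is also a softer option than storing verification data indefinitely: check the documents, record the outcome, and discard the inputs. Below is a hedged sketch of that “verify, then delete” pattern; provider_check is a hypothetical stand-in for whatever third-party verification API a service uses, and whether this pattern satisfies a given law’s record-keeping rules is a question for counsel, not this newsletter.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass(frozen=True)
class VerificationRecord:
    """The only thing persisted: who was checked, the result, and when."""
    user_id: str
    verified: bool
    checked_at: datetime

def verify_then_discard(
    user_id: str,
    id_image: bytes,
    selfie: bytes,
    provider_check: Callable[[bytes, bytes], bool],
) -> VerificationRecord:
    # provider_check is a hypothetical callable wrapping a verification
    # vendor's API. The ID image and selfie pass through memory only and
    # are never written to disk, so there is nothing sensitive left to
    # breach later; only the boolean outcome below gets stored.
    verified = provider_check(id_image, selfie)
    return VerificationRecord(user_id, verified, datetime.now(timezone.utc))
```

A service that keeps only a verified flag and a timestamp has far less to lose than one sitting on a vault of government IDs.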
The UK asks for proof, while users seek workarounds
There was a certain irony in this high-profile data leak arriving just as the United Kingdom’s Online Safety Act came into force, requiring websites with adult content operating in the UK – a definition that includes services like Discord and Reddit – to “robustly” age-check users. Websites that ignore the new law can be fined up to £18m or 10% of worldwide revenue, whichever is greater.
The law’s introduction has caused havoc for a wide variety of unrelated sites and forums, like Reddit’s r/cider, sent VPN downloads skyrocketing, and inspired novel workarounds, like Discord users using a “selfie mode” in the video game Death Stranding to bypass the filter.
More countries line up with their own laws
The UK is not the only country seeking to de-anonymize the internet. In December, Australia’s ban on under-16s using social media goes into effect. The law will apply to platforms that meet the government’s definition of an “age-restricted social media platform”: one whose sole or significant purpose is enabling social interaction between two or more users, and which allows users to post material on the service. While the big platforms like Facebook, Instagram, and YouTube are the law’s targets, smaller platforms that meet that definition and are not exempt could be swept up. It all means the number of platforms holding sensitive user data is about to increase drastically.
To pick another example, Colorado’s new children’s data protection framework goes live in October and applies to all companies offering online services, products, or features to Colorado residents, irrespective of their revenue or the volume of data they process.
Even those who play by the rules are now at risk, as these requirements set companies up for massive data breaches. As the adage that on the internet, nobody knows you’re a dog becomes less true, the burden of storing all this data falls on companies that may be unprepared for the responsibility.
Data breaches happen no matter the security measures. We’re not saying don’t collect data – under these laws, that may not be an option! – we’re saying that when you do, you need to respect it and protect it, with a solution beyond a vibe-coded MVP. These new laws mean more sites are suddenly required to collect sensitive data. We need them to take that burden more seriously than Tea did.
🕵️ Privacy & governance
Australia’s Privacy Commissioner Carly Kind responds to the Productivity Commission’s interim report on the Privacy Act, which proposed that organizations no longer be required to meet the Act’s prescribed requirements. No surprise: she doesn’t agree. “Such a system would be, at its best, unworkable,” she says.
Don’t wait for the Australian government’s second tranche of Privacy Act changes, argues Peter Leonard, Chair of ADMA’s Regulatory and Advocacy Working Group: a wait-and-see approach creates real legal risk.
AI web browser assistants like OpenAI’s ChatGPT, Microsoft’s Copilot, and Merlin AI track and share sensitive user data, including medical records and social security numbers, a new study has found.
🔐 Security
🔓 Breaches
16 million PayPal usernames and passwords are for sale on a cybercrime forum; change your password now.
HR giant Workday disclosed a data breach after attackers gained access to a third-party customer relationship management (CRM) platform in a recent social engineering attack.
🧑‍⚖️ Legal cases & breach fallout
No one wants to delete any of the thousands of photos on their phone: that’s creating real data breach risk.
The CISO role is evolving in an AI and zero-trust era.
7 legal considerations for mitigating risk in AI implementation.
🤖 AI governance
An internal Meta policy document allowed AI chatbots to “engage a child in conversations that are romantic or sensual” and to offer false medical information.
A guide to AI governance for SMEs and founders.
The latest from RecordPoint
📖 Read
Case study: A major US insurer was gearing up for potential M&A, but data chaos was hindering its readiness. Read how RecordPoint helped the insurer earmark 40% of its electronic records for destruction, saving more than $100,000 a year in storage and returning an estimated $11 million in employee productivity.
To truly get a handle on your unstructured data, you’ll need support from stakeholders across the business. That can be easier said than done, but there is a way forward: Explore our buy-in guide for tackling the unstructured data beast within your organization.
🎧 Listen
Is agentic AI the future of work or just the latest buzzword for LinkedIn influencers? In the latest episode of FILED, Anthony and Kris offer their take. Plus: the AI tools they recommend.
And in this mid-season episode of FILED, Kris Brown and Anthony Woodward sit down with John Maloney, a barrister, Melbourne Law School lecturer, and co-host of the podcast Don't Praise the Machine, to discuss some of 2025’s biggest stories and what we can expect for the second half of the year.