Shadow AI is a people problem
When shadow AI is on personal devices, you need more than technology.
Hi there,
Welcome to FILED Newsletter, your round-up of the latest news and views at the intersection of data privacy, data security, and governance.
This month:
- Austria says Microsoft violated the GDPR
- Qantas customer data hits the dark web
- California Governor Gavin Newsom signed a landmark bill to regulate AI companion chatbots
But first, when it comes to shadow AI on personal devices, technical solutions only get you so far.
If you only read one thing:
Shadow AI is also a people problem
Last month we released RexCommand, a free, centralized hub to help companies of all sizes safely and quickly progress through their AI governance maturity journey.
Our goal in launching this free tool is to tackle a major problem for the industry: AI usage is skyrocketing, but governance has lagged. Some numbers for your perusal:
- 40% of employees are using banned AI tools to help boost their productivity at work.
- Although around 44% of organizations now have some sort of AI governance policy in place, most have yet to turn policy into practice.
To combat this, RexCommand gives organizations the ability to manage workplace personal AI usage at the company policy level, by adding AI risk assessment and explainability checks directly into existing workflows. This ensures that every new AI-enabled system aligns with governance requirements from the start. RexCommand connects to organizations’ procurement processes and integrates with SaaS tools, and we’re also working on connectors to mobile device management (MDM) platforms such as Intune.
Left to their own devices
One question we kept coming up against was, “OK, we can see how RexCommand can monitor shadow AI on company devices, and that’s great, but what about personal devices?”
It’s a fair question: employees looking to do more with AI tools often reach for personal devices and personal browsers, and their employers are naturally worried about the potential for a data breach.
The thing is, from a technological point of view, there’s not much employers can do about this problem. RexCommand isn’t spyware, and it can’t reach into employees’ personal devices to ensure they’re not using ChatGPT to analyze customer data. Employers would be entering into a tricky legal area if they deployed any such software on employees’ personal devices.
But technology isn’t the only solution; shadow AI is a people problem as much as it is a technological one.
The real solution is policy, education, and providing teams with safe, approved alternatives.
This is something we discussed on FILED way back in April, when we caught up with technologist Rob Williams. His take was that to really solve it, you needed to bring shadow AI out of the shadows, and partner with those employees who were so eager to adopt AI that they did so before it was permitted.
Rather than install bossware on employees’ personal devices, companies should bring employees into the solution: explain why shadow AI is damaging, what approved tools they should use instead, and in general educate them on their AI policy. You have to trust your employees to play by the rules once they know what they are.
But you can't do any of that if you don't have a policy to begin with, and that's where an AI governance tool like RexCommand comes in, giving organizations the visibility and structure to actually operationalize responsible AI use, not just talk about it.
🕵️ Privacy & governance
Is your cyber insurance ready for AI and data privacy risks?
🔐 Security
🔓 Breaches
The Clop ransomware gang has claimed the hack of Harvard University.
"Highly significant" cyber-attacks rose by 50% in past year, UK security agency says.
Three in four Australian organizations will boost cyber budgets this year, thanks in part to AI.
UK trade union Prospect is notifying members of a breach that involved data such as sexual orientation and disabilities.
🧑‍⚖️ Legal cases & breach fallout
Attacks targeting Fortra's GoAnywhere managed file transfer software recently exploited on-premises installations where system administrators exposed the management console to the internet.
AT&T is paying out $177 million for data breaches; learn who's eligible and how to file a claim.
🤖 AI governance
California Governor Gavin Newsom signed a landmark bill to regulate AI companion chatbots, making it the first state in the US to require AI chatbot operators to implement safety protocols for AI companions.
Why AI obedience may be more dangerous than AI rebellion.
A practical guide to implementing AI ethics governance.
The latest from RecordPoint
📖Read
A new piece from RecordPoint’s CEO, Anthony Woodward, on Insurance Thought Leadership: Unstructured data sprawl is threatening many insurance carriers. The solution, strong governance, has a useful side effect: enabling the transformative adoption of AI.
RecordPoint’s CTO, Josh Mason, was quoted in an InfoWorld article on “nonfunctional requirements for AI agents”. He emphasized the importance of ensuring privacy and compliance concerns are at the core of these new products.
And Head of Product Joe Pearce wrote in Dark Reading on why security leaders need to focus their attention on collaboration platforms like SharePoint, because if they don't, threat actors like Volt Typhoon will.
🎧 Listen
In addition to writing about Volt Typhoon, Joe Pearce gave us the inside story on our new solution, RexCommand, mentioned above. He dove into the story behind the free tool that is the easiest way to operationalize AI policy, enforce governance across the AI lifecycle, and prove compliance, all from one platform.
And Superposition founder David Cohen discussed how he helps startups grow with AI, the communications gap in data consultancy, and the growing pressure to adopt AI.