
The shadow AI problem: When good intentions create greater risks
48% of employees in AI-restrictive organisations still paste sensitive data into uncontrolled tools. Discover why blanket bans multiply security risks rather than containing them, and how the average enterprise came to run 67 AI applications, 90 percent of them unlicensed.


The unintended consequences
Organisations are absolutely right to worry about data security, protecting intellectual property, and maintaining GDPR compliance. These aren’t trivial concerns—they are fundamental to running a responsible business in 2025.
But here’s where it gets interesting (and slightly terrifying for your IT department). Picture this scenario, which we’ve heard countless times from professionals across the UK: your organisation, sensibly concerned about security, decides to ban all generative AI tools. Meeting adjourned, policy drafted, IT blocks implemented. Job done, right?
Not quite.
According to recent data, 27 percent of organisations have tried outright bans on generative AI. Yet nearly half of employees—48 percent—still paste sensitive data into uncontrolled AI tools. That’s almost half your workforce going rogue with company data.
Even more startling? The average enterprise now runs 67 separate AI applications, with 90 percent unlicensed. That’s not prohibition working—that’s prohibition driving use underground, like a corporate speakeasy for productivity tools.
The dilemma
When organisations realise they need to address AI adoption, they typically see two options:
- Option one: Purchase proper business licences with data protection controls, prevent model training on your data, and ensure full GDPR compliance. Provide staff with access to powerful, end-to-end AI tools like the 100+ gecco assistants, all within a secure, controlled environment—whether through ChatGPT, Claude, or Gemini.
- Option two: Block all staff from using generative AI tools and hope for the best.
Unfortunately, many still choose the second option. The result? Competitors surge ahead with AI solutions that cut costs, save time, and empower teams, while those who ban AI fall behind.
The security multiplication effect
Here’s what makes shadow AI particularly troublesome: each unsanctioned application creates its own data retention rules, audit gaps, and contractual liabilities. Instead of containing risk, you multiply it exponentially.
When Sarah in accounting uses her personal ChatGPT to analyse a financial report, or Tom in HR uploads candidate CVs to an unknown AI tool, they’re not being malicious. They’re simply trying to work better, faster, and smarter. But each action opens up a new blind spot: no audit trail, no governance, no certainty where sensitive information is stored—or how it’s being used to train future models.
The control paradox
The irony is that organisations implementing blanket bans often have the best intentions. They want to protect data, people, and reputation. But in trying to maintain complete control, they lose it entirely.
When you implement proper AI tools that cover multiple business needs, you regain genuine control over your data rather than the illusion of it. A centralised, licensed solution means:
- You know exactly which AI tools your team is using
- Data stays within your controlled environment
- Usage can be monitored and audited
- Compliance is met by design
- Your team gains productivity without introducing risk
Moving forward with confidence
The solution isn’t to fight the tide of AI adoption—it’s to channel it properly. Solutions like gecco’s provide that secure middle ground: enterprise-grade security, GDPR compliance by default, and data never used for model training. Your team can continue benefiting from the productivity of AI, this time safely, legally, and under your control.
Because the choice isn’t really between using AI or not. It’s between using AI properly or watching it seep through the cracks of your infrastructure like digital water finding its level.
The last thing any organisation needs in 2025 is to discover their competitive disadvantage came bundled with a data breach. There’s a better way, and it starts with acknowledging that the genie is already out of the bottle.


