Your employees are already using AI. The question is whether you know about it.

Cyberhaven's research found that 38% of employees are sharing confidential company data through consumer AI tools: ChatGPT, Claude, Gemini, Copilot. Not through sanctioned enterprise deployments with data handling agreements. Through personal accounts, on personal devices, with no audit trail and no governance.

The broader number is worse: 78% of employees are using AI tools their employer hasn't approved. They're summarising client documents, generating financial analysis, drafting legal correspondence, and processing sensitive data through systems their IT and compliance teams don't know exist.

This is shadow AI. And it's about to become a very expensive problem.

The regulatory clock is ticking

The EU AI Act's major enforcement milestones begin August 2, 2026. That includes transparency obligations, high-risk system requirements under Annex III, and governance structures that most organisations haven't built yet.

If you're operating in the EU or processing EU citizen data, which includes most UK businesses with European clients, you need to be ready. Not "aware." Ready. With documented governance frameworks, risk assessments for every AI system in use (including the ones your employees adopted without telling you), and human oversight structures that can withstand regulatory scrutiny.

This isn't theoretical. IBM's Institute for Business Value reports that 82% of organisations say AI risks have accelerated their need for governance modernisation. They know the gap exists. Most haven't closed it.

Why governance fails when it's treated as a blocker

The instinct in most organisations is to respond to shadow AI with a ban. Lock down the tools, restrict access, send a memo. This doesn't work. The tools are too useful, the workarounds too easy, and the productivity gains too visible for people to stop using them just because a policy document said so.

The organisations that get this right treat governance as an enabler, not a blocker. The goal isn't to stop people using AI. It's to channel that usage into managed, monitored, compliant systems that the organisation controls. Governance that makes AI adoption faster, not slower, by removing the ambiguity that forces employees to make their own decisions about what's acceptable.

A proper governance framework does three things:

First, it maps what's actually happening. A shadow AI audit identifies every tool in use, every data flow, every risk. You can't govern what you can't see. Most organisations are genuinely surprised by what they find.

Second, it provides sanctioned alternatives. For every unapproved tool your employees are using, there should be an approved path that's at least as convenient. If the governance framework makes people's work harder, they'll route around it, and you're back to shadow AI.

Third, it builds the compliance structure. Risk classifications, human oversight requirements, data handling protocols, incident response procedures. The EU AI Act doesn't require perfection. It requires a demonstrable, documented framework that shows you're managing AI risk systematically.
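The third element, the compliance structure, often starts as a simple machine-readable register of AI systems and their controls. A minimal sketch in Python, assuming a hypothetical register format (the risk tiers loosely mirror the EU AI Act's categories, but real classification depends on the use case and Annex III; the field names and checks here are illustrative, not a legal standard):

```python
from dataclasses import dataclass, field

# Illustrative risk tiers, loosely following the EU AI Act's categories.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystem:
    """One entry in a hypothetical AI system register."""
    name: str
    vendor: str
    risk_tier: str                  # one of RISK_TIERS
    data_categories: list = field(default_factory=list)
    human_oversight: bool = False   # is a human-in-the-loop defined?
    dpa_in_place: bool = False      # data processing agreement signed?

def compliance_gaps(register):
    """Flag systems whose controls don't match their risk tier."""
    gaps = []
    for s in register:
        if s.risk_tier == "high" and not s.human_oversight:
            gaps.append((s.name, "high-risk system without human oversight"))
        if s.data_categories and not s.dpa_in_place:
            gaps.append((s.name, "processes data without a DPA"))
    return gaps

register = [
    AISystem("ChatGPT (personal accounts)", "OpenAI", "limited",
             data_categories=["client documents"]),
    AISystem("CV screening model", "internal", "high",
             data_categories=["applicant data"], dpa_in_place=True),
]
for name, issue in compliance_gaps(register):
    print(f"{name}: {issue}")
```

Even a toy register like this makes the audit concrete: every shadow tool discovered becomes a row, and every missing control becomes a listed gap rather than an unknown.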

The cost of waiting

Informatica's 2026 survey of 600 data leaders found that three out of four say governance hasn't kept pace with AI adoption. Nearly 7 in 10 organisations have adopted generative AI, and almost half have moved into agentic AI. The tools are already deployed. The governance is months, or years, behind.

The gap creates three categories of risk:

Regulatory risk. EU AI Act non-compliance penalties scale to 7% of global annual turnover for the most serious violations. Even for mid-market companies, that's a material number.

Data risk. Confidential client data, financial information, proprietary IP, all flowing through systems with no data processing agreements, no audit logs, and no retention controls.

Operational risk. Decisions being made based on AI outputs that no one is verifying, using models that no one has validated for the use case, with error rates that no one is measuring.

What I'd do in your position

If I were sitting in your seat (CTO, COO, or board member of a company with 50+ employees), I'd do three things in the next 30 days:

Run a shadow AI audit. Not a survey. A proper audit. Map the tools, the data flows, and the risks. Accept that you'll find things you didn't expect.

Build a governance framework that enables, not restricts. Classify your AI use cases by risk level. Provide sanctioned tools for the high-value, high-frequency use cases. Build monitoring into the approved stack.

Set a timeline for EU AI Act readiness. August 2026 isn't far. If you need to be compliant, work backwards from that date and identify every gap.
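On the first step: in practice, a shadow AI audit usually starts from network or proxy logs matched against known consumer AI endpoints, before moving on to interviews and data-flow mapping. A minimal sketch, assuming a simple log of requested URLs (the domain list and log format are illustrative assumptions, not a complete inventory):

```python
from collections import Counter
from urllib.parse import urlparse

# Consumer AI endpoints to flag; extend this for your own estate.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai",
              "gemini.google.com", "copilot.microsoft.com"}

def shadow_ai_hits(proxy_log_lines):
    """Count requests per flagged AI domain from proxy-log URLs."""
    hits = Counter()
    for line in proxy_log_lines:
        host = urlparse(line.strip()).hostname
        if host in AI_DOMAINS:
            hits[host] += 1
    return hits

log = [
    "https://chatgpt.com/c/abc123",
    "https://claude.ai/chat/xyz",
    "https://chatgpt.com/c/def456",
    "https://example.com/",
]
print(shadow_ai_hits(log))
```

A count like this won't tell you what data was shared, only where traffic is going, which is why the audit then has to map data flows and not just tools.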

If you'd like help with any of that, let's talk. I've built governance frameworks alongside production AI systems, not as a separate workstream, but as part of the deployment. It's the only way it works.


Related: Most AI transformations are performance art · Why 80% of AI projects fail to deliver ROI · The AI Transformation Sprint

Ready to make AI actually work?

Tell me what you're working on. I'll respond personally. If there's a fit, we'll take it from there.

Currently accepting one new client alongside existing commitments. Second slot opens Q3 2026.