What We Do in the Shadows: The Elephant in the AI Strategy Room

27 March 2026

Many organisations believe they don't have an AI strategy yet.

That's the comfortable version.

The uncomfortable version is that they already do. It just hasn't been designed intentionally.

Across industries right now, employees are using AI tools in their daily work. Often without approval. Often without anyone upstairs even knowing about it. Often with sensitive data flowing through systems that haven't been vetted, assessed, or even identified.

This is shadow AI. And it's far more widespread than most leadership teams know (or want to acknowledge).

The data isn't ambiguous in the least. Over 80% of employees use unapproved AI tools. More than 90% of companies show evidence of employee-driven AI usage. Over half of those employees have put sensitive data into AI systems their employers know nothing about. One in five organisations has already experienced a breach linked to this behaviour.

These figures come from sources like UpGuard, Reco, Menlo Security, MIT, IBM, Gartner, and others, based on surveys of thousands of employees and IT leaders and on telemetry data from enterprises. Prevalence varies by company size, industry, and region, but the trend is clear: employee-driven AI use far outpaces official approvals, creating security, compliance, and data leakage risks.

The reason is not a mystery. Employees aren't waiting for a strategy, because they don't need one. They're responding to pressure to move faster, to the availability of powerful tools, and to the entirely rational desire to do less mindless work. From where they're sitting, using AI makes sense. From where leadership is sitting, it's uncontrolled adoption, and it's about to become a very big problem indeed.

A lot of organisations respond to this by restricting access. Block the tools. Lock down the systems. This rarely works. Employees use personal accounts, access tools outside corporate networks, and route around whatever controls have been put in place. Which leaves you with the worst of both worlds: AI being used across your organisation, but invisible, ungoverned, and not aligned to anything you've actually decided.

AI adoption will not be stopped, but the real issue isn't the presence of AI. It's the absence of structure around it.

Without clear policies, defined use cases, knowledge governance, or decision frameworks, AI becomes another layer of complexity sitting on top of existing complexity. And complexity, when nobody's managing it, creates risk. Not someday, now.

Regulation is beginning to catch up. Across regions, legislation is emerging that will require organisations to demonstrate they understand how AI is being used, that data exposure is managed, that accountability exists, and that there's actual governance in place. For a lot of organisations, this is going to be a reckoning. Not because they've been using AI, but because they won't be able to show where, how, or whether it's been used responsibly.

So whether you designed the protocols or not, your organisation already has an AI strategy. It looks like employees choosing tools individually. Usage patterns emerging organically. Decisions being made without oversight. It's a strategy, just not a defensible one.

If you, as a decision maker, are still asking "should we adopt AI?", that decision has already been made, by the people who work for you. What you should be asking is how you'll take control of usage and future adoption.

Because a storm is coming. And the organisations that will weather it are the ones with the foresight to plan, align, and take control now.

Structure before AI. Most organisations are doing it in reverse.

Shadow AI isn't a threat to be eliminated. It's a signal that your organisation is already evolving, whether you're steering it or not.