I want to name something that everyone in the room knows but nobody says out loud.
Most AI transformations are performance art. They exist to demonstrate that something is happening, not to deliver results. The innovation lab is staffed. The pilots are running. The quarterly board update has an AI slide with a roadmap and a timeline and a budget. And the P&L hasn't moved.
This is not a controversial observation. BCG's own research admits that 60% of AI consulting engagements generate no material value. Only 5% achieve anything at scale. McKinsey puts the number of organisations achieving true AI transformation at one in twenty. Gartner predicts that 60% of AI initiatives will be abandoned through 2026.
The industry has a name for this. They call it "pilot purgatory." I think that's too kind. Pilots are experiments that lead somewhere. What most organisations are doing is theatre.
What AI theatre looks like
You can spot it from the language. "We're exploring." "We're evaluating vendors." "We have a pilot running with promising results." "We're building our AI strategy."
These phrases have one thing in common: they describe activity, not outcomes. Nobody is measuring anything. Nobody has defined what success looks like. And nobody has the authority, or the incentive, to kill something that isn't working.
The typical pattern runs like this:
A consulting firm is engaged to "develop an AI strategy." They deliver a 60-page deck with a maturity assessment, a vendor landscape, and a phased roadmap. The organisation selects two or three use cases for pilots. The pilots are built by a small team with enthusiastic support from one sponsor. The demos look impressive. Then the pilot hits production-readiness requirements (security review, data governance, integration with existing systems, change management) and stalls.
Six months later, the pilot is still "in progress." The team has moved on to the next pilot. The first one quietly dies. The board update still shows green.
I've seen this pattern at organisations of every size, from funded startups to FTSE 250 companies. The technology is real. The results are not.
Why it happens
Three structural reasons.
First, incentives are misaligned. The people evaluating AI tools are not the people whose P&L depends on the outcomes. When the innovation team's KPI is "number of pilots launched" rather than "revenue impact of deployed systems," you get exactly what you'd expect: lots of pilots, no production systems.
Second, the process is backwards. Deloitte's research shows that organisations taking a work-redesign approach, rethinking processes before selecting tools, are twice as likely to exceed ROI targets as those that start with the technology. But redesigning processes is hard, political, and slow. Buying a tool and running a pilot is fast, visible, and feels like progress.
Third, nobody has done it before. Most organisations don't have a senior leader who has built production AI systems. They have people who have evaluated them, recommended them, and managed pilots of them. The gap between "this demos well" and "this runs in production at scale" is enormous, and it's a gap that only operating experience can bridge.
What the alternative looks like
The organisations that get real results from AI share a few characteristics.
They start with the P&L, not the technology. Before selecting any tool or model, they identify a specific business process where AI can deliver measurable improvement, and they define what "measurable" means upfront. Not "efficiency gains." A number. A KPI. A target.
They redesign the workflow. Prosci's research across 1,107 organisations found that 63% of AI failures trace to human factors, not technology. The organisations that succeed don't bolt AI onto existing processes. They redesign the process around what AI can and can't do, with clear decision delegation: what the system decides, what a human decides, and where the handoff happens.
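What "decision delegation" means in practice is easier to show than to describe. Here is a minimal sketch of an explicit handoff rule; the case categories, thresholds, and fields are hypothetical illustrations, not a recommendation or a description of any specific deployment. The point is that the boundary between what the system decides and what a human decides fits on one screen and can be argued about in a governance meeting.

```python
# Illustrative only: a minimal decision-delegation policy for a customer-case
# workflow. Categories, thresholds, and fields are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTONOMOUS = "system resolves; humans audit a sample"
    HUMAN_REVIEW = "system drafts; a named human approves before anything goes out"
    HUMAN_ONLY = "system stays out; a human handles it end to end"


@dataclass
class Case:
    category: str            # e.g. "billing_query", "refund_request", "complaint"
    model_confidence: float  # 0.0 to 1.0, from the model's own scoring
    value_at_risk: float     # what a wrong decision could cost, in GBP


def route_case(case: Case) -> Route:
    """Decide who owns the decision: the system, a human, or a human with a draft."""
    # Hard boundary: some categories are never delegated, regardless of confidence.
    if case.category in {"complaint", "legal"}:
        return Route.HUMAN_ONLY
    # High confidence and low financial exposure: the system decides.
    if case.model_confidence >= 0.90 and case.value_at_risk <= 250:
        return Route.AUTONOMOUS
    # Everything else: the system prepares, a human signs off.
    return Route.HUMAN_REVIEW


if __name__ == "__main__":
    print(route_case(Case("billing_query", 0.95, 40)))    # AUTONOMOUS
    print(route_case(Case("refund_request", 0.70, 900)))  # HUMAN_REVIEW
    print(route_case(Case("complaint", 0.99, 10)))        # HUMAN_ONLY
```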
They deploy to production in weeks, not quarters. At AdBrain, we had an agentic AI system handling real customer cases within weeks of starting the build. Not because we cut corners. Because we scoped tightly. One process. One set of decisions. Full governance. Measured KPIs from day one. The result: 67% autonomous case resolution, 23% sales lift. Production, not pilot.
They kill what doesn't work. This is the hardest part. Most organisations don't have kill criteria for AI initiatives. If it's not working after 90 days, they add more resources or extend the timeline. The right move is to kill it, learn from it, and redirect the investment to something that can deliver.
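Kill criteria only work if they are written down before the build starts. A hedged sketch of what that might look like, with made-up KPI names, targets, and timelines:

```python
# Illustrative only: a written-down kill rule evaluated at a fixed checkpoint.
# The KPI name, target, and 90-day window are examples, not recommendations.
from dataclasses import dataclass


@dataclass
class KillCriteria:
    kpi_name: str         # the business KPI the system is measured on
    minimum_value: float  # the least the KPI must reach by the checkpoint
    checkpoint_day: int   # when the decision is made, counted from kickoff


def decide(criteria: KillCriteria, measured_value: float, day: int) -> str:
    """Return 'continue', 'kill', or 'too early' based on the agreed rule."""
    if day < criteria.checkpoint_day:
        return "too early"  # no extensions or extra resources before the checkpoint
    if measured_value >= criteria.minimum_value:
        return "continue"
    return "kill"  # redirect the budget; write up what was learned


if __name__ == "__main__":
    rule = KillCriteria(kpi_name="autonomous_resolution_rate",
                        minimum_value=0.40, checkpoint_day=90)
    print(decide(rule, measured_value=0.52, day=90))  # continue
    print(decide(rule, measured_value=0.18, day=90))  # kill
```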
The 90-day test
If you want to know whether your AI programme is performance art or real transformation, ask yourself three questions:
Can you name one AI system in production that is measured by a business KPI? Not "deployed." Not "being used by the team." Measured. With a number. That someone reports on.
Do you have kill criteria for your current AI initiatives? Is there a defined point at which you'd stop investing in a pilot that isn't delivering?
Has anyone in your leadership team built a production AI system before? Not evaluated one. Not managed a pilot. Built one, deployed it, and measured the results.
If the answer to any of those is no, you're closer to theatre than transformation. That's not a judgement. It's a diagnosis. And the good news is that the fix is faster than you think.
One production system. One set of measured KPIs. One honest assessment of what's working and what isn't. That's the starting point. Everything else is decoration.
If you'd like to have that conversation, I'm here. No deck, no maturity assessment. Just an honest view of where you are and what would actually move the numbers.
Related: Why your AI spend isn't showing up in the numbers · Agentic AI in 2026: what actually works in production · How I approach AI transformation