I sat through an AI strategy review with a leadership team last month where the head of transformation opened the deck with a slide titled "AI Use Cases In Production." There were 47 of them. Customer service summarisation. Sales call coaching. Marketing copy generation. Code completion. Document classification. Meeting minutes. Onboarding assistants. The list filled the screen in 9-point font.

I asked one question. "What's the EBIT impact?"

The room went quiet, because nobody had computed it. The use cases existed. The value didn't.

This is the new shape of AI theatre. I called the previous shape performance art three months ago. That piece was about pilots that never reach production. This one is about pilots that do reach production and still produce nothing on the P&L.

The trap has moved from pilot purgatory to use-case purgatory. Counting deployed assistants is the new measuring of pilot velocity.

Why use-case counting persists

I understand why this keeps happening. Use cases are easy to count. They are easy to put on a board slide. They give a head of transformation something concrete to point at when the CEO asks "what are we doing with AI?" Forty-seven is more impressive than four. Activity is easier to measure than outcome.

There's also a procurement reason. Most enterprise AI platforms are sold on use-case breadth. Vendors love to show a tile-grid of pre-built assistants because it makes the platform look comprehensive. The customer side answers in kind by deploying as many tiles as possible. Both sides are optimising for the wrong thing.

The result is what I keep seeing in mid-market and PE-backed businesses: a portfolio of small productivity improvements, none of which is large enough to move a business metric, all of which add to the AI budget. Twelve months in, the conversation with the CFO becomes uncomfortable. The honest answer is "I cannot draw a line from any of these to revenue, cost, or risk."

Lakhani, Spataro and Stave coined a useful phrase for this in a March 2026 Harvard Business Review piece. They called it being "pilot-rich but transformation-poor." That is exactly the shape of the problem. Hundreds of working pilots, none of which has changed how the business operates.

The Krishna reframe

At IBM Think 2026 earlier this month, Arvind Krishna put the alternative more directly than I have heard a major-vendor CEO put it: the enterprises pulling ahead are not deploying more AI, they are redesigning how the business operates. IBM positioned the AI operating model around agents, data, automation, and hybrid governance, and framed the shift as "from improving parts of the business to changing how the business operates."

That is not the same statement as "deploy more AI." It is the opposite statement. ServiceNow's Bill McDermott has been making the same point in different words. AI value comes from auditable execution across workflows, data, policy, CRM, HR, security, and IT, not from disconnected assistants sitting alongside human work.

The HBR authors put the same point in a single line that should be printed on the wall of every transformation office: when you drop AI into one step of a process you have not redesigned, "the bottleneck simply shifts." Their example is an agent that drafts a complex contract in seconds, only to have the contract sit in a manual legal review queue for two weeks. The pilot worked. The cycle time did not change. The business did not get faster.

The point is that AI's economic impact does not live in the assistant in the middle of a process. It lives in the redesign of the process itself.

What "value stream" actually means

A value stream is the end-to-end sequence of steps that delivers a specific outcome the business cares about. It usually crosses multiple teams, multiple systems, and multiple decision points. The interesting metric for a value stream is throughput, cycle time, error rate, cost per unit, or customer impact, not the number of tools used inside it.

Examples of value streams worth redesigning:

Customer support resolution. From inbound contact to closed ticket. Measured in time to first response, time to resolution, customer effort score, cost per resolution.

Sales lead to revenue. From inbound lead to closed deal. Measured in conversion rate at each stage, sales cycle time, cost of customer acquisition.

Month-end financial close. From period end to signed-off financial statements. Measured in days to close, manual adjustments, audit findings.

Underwriting or claims processing. From application to decision. Measured in straight-through processing rate, cycle time, loss ratio.

Engineering migration. From legacy system to modern stack. Measured in throughput, defect escape rate, cost per migrated component.

Each of these is a candidate for end-to-end redesign with AI in the middle, agents at decision points, workflows at routine steps, and human judgment at exceptions. The metric for success is not "we deployed an AI assistant in step 4." It is "this value stream now operates at half the cycle time and a third of the cost."
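To make that success metric concrete, here is a back-of-the-envelope sketch of what "half the cycle time and a third of the cost" does to one stream's unit economics. Every figure is invented for illustration; plug in your own baseline.

```python
# Hypothetical baseline for a single value stream (all figures invented).
volume_per_year = 120_000        # units processed per year, e.g. resolved tickets
baseline_cost_per_unit = 18.0    # cost per unit, in your currency
baseline_cycle_days = 10.0       # end-to-end cycle time

# Redesign target: half the cycle time, a third of the cost.
redesigned_cost_per_unit = baseline_cost_per_unit / 3
redesigned_cycle_days = baseline_cycle_days / 2

# The number the CFO actually wants: a line from the redesign to cost.
annual_saving = volume_per_year * (baseline_cost_per_unit - redesigned_cost_per_unit)

print(f"Annual cost saving: {annual_saving:,.0f}")
print(f"Cycle time: {baseline_cycle_days:.0f} -> {redesigned_cycle_days:.0f} days")
```

A single stream at this (hypothetical) scale yields a seven-figure saving you can point to in an EBIT conversation, which is precisely what a grid of 47 deployed assistants cannot do.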

The Microsoft data the room missed

Microsoft's 5 May Work Trend Index reported a finding that should be on every transformation lead's desk. Organisational factors, including culture, manager support, and talent systems, accounted for more than twice as much of the perceived AI impact as individual factors did. The same Copilot deployment, in two organisations, produces wildly different outcomes depending on how the work itself is structured.

This is why use-case counting fails. You can deploy an excellent meeting summary assistant in two companies. In the first, where meetings already proliferate and follow-through is weak, the assistant generates more meeting noise. In the second, where meetings are tightly run and decisions are tracked, the assistant compresses cycle time. Same tool, opposite outcomes, because the surrounding operating model is different.

Use cases live in tools. Value lives in operating models. You cannot use-case your way to a redesign.

How to pick three value streams

If you are leading AI strategy at a mid-market or PE-backed business, I'd kill the use-case inventory and run a different exercise. Take it to your leadership team in the next month.

Map your top eight value streams. Not eighty. Eight. The ones that, if you compressed cycle time by 30% or reduced unit cost by 20%, would move the business in a way the board would notice.

Pick three. Use three criteria: measurable economic impact, achievable within 6 to 12 months, and a sponsor with real authority. The third criterion is the one most exercises skip. A value stream without an empowered owner becomes another deck.
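If it helps to make the shortlisting step mechanical, the three criteria can be sketched as a simple score. This is illustrative only: the stream names, scales, and scores below are hypothetical, not a prescribed method, and the judgment calls still belong to the leadership team.

```python
from dataclasses import dataclass

@dataclass
class ValueStream:
    name: str
    economic_impact: int    # 1-5: size of the measurable economic effect
    achievability: int      # 1-5: realistic within 6 to 12 months
    sponsor_authority: int  # 1-5: does the owner control the whole stream?

    def score(self) -> int:
        # A stream without an empowered owner fails outright,
        # mirroring the criterion most exercises skip.
        if self.sponsor_authority < 3:
            return 0
        return self.economic_impact * self.achievability * self.sponsor_authority

# A hypothetical shortlist drawn from the mapped streams (scores invented).
candidates = [
    ValueStream("Customer support resolution", 4, 4, 5),
    ValueStream("Sales lead to revenue", 5, 3, 4),
    ValueStream("Month-end financial close", 3, 5, 5),
    ValueStream("Claims processing", 5, 3, 2),   # strong case, weak sponsor
    ValueStream("Engineering migration", 3, 2, 4),
]

top_three = sorted(candidates, key=lambda v: v.score(), reverse=True)[:3]
for v in top_three:
    print(v.name, v.score())
```

Note what the sponsor gate does in this invented example: claims processing has the strongest economic case on paper and still scores zero, because a value stream without an empowered owner becomes another deck.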

Commit to redesign, not augmentation. The temptation is to bolt AI onto the existing process. Don't. Map the current process, identify which steps can be eliminated, which can be automated end-to-end, which need a human decision, and which can be combined. Then design the future state, with AI in the middle, and migrate.

Measure outcomes, not adoption. The metrics are cycle time, unit cost, throughput, quality, customer satisfaction. Not seats licensed. Not assistants deployed. Not prompts entered. Hold the leadership team to those metrics quarterly.

Tell the rest of the business to wait. This is the hardest part. Every other team will want their own AI use case. The answer for the next year is "we are concentrating effort on three streams, we will share what we learn." A focused redesign of three value streams will produce more EBIT impact than fifty deployed assistants. I have watched this in both directions.

The board reframe

For boards and investors reading this, the simplest test of an AI strategy is: name the three value streams you are redesigning, and the metrics you'll be measured against in 12 months. If the answer is a list of deployed tools, you don't have a strategy. You have a procurement plan with AI in the title.

The companies pulling ahead in 2026 are not the ones with the longest use-case inventory. They are the ones who picked three things and rebuilt them properly. The operating model is the moat. The use cases are the marketing.

If you'd like to talk through which value streams in your business would benefit most from end-to-end redesign, get in touch. I've helped leadership teams pick the right three more than once, and it is almost always not the three they would have chosen alone.


