A statistic that should concern anyone investing in AI: according to multiple industry surveys, roughly 80% of companies have deployed generative AI. And roughly 80% report no material impact on earnings.

That's not a coincidence. That's a pattern. And it's worth understanding what's causing it.

The easy explanation (and why it's wrong)

The tempting interpretation is that the technology isn't there yet. That LLMs hallucinate too much. That the models aren't reliable enough. That we need better tools before AI can deliver business value.

This is comfortable because it requires no action. Just wait.

It's also wrong. The organisations delivering measurable ROI from AI are not waiting. They have fundamentally different approaches to how they build and deploy AI systems, and those approaches are not technology-dependent. They're architectural and organisational.

What the 20% do differently

1. They solve the data problem first

Every serious AI deployment that delivers business outcomes starts with a data architecture question, not a model question.

The question isn't "which LLM should we use?" It's "can we reliably retrieve the specific information this AI system needs to make good decisions?"

In most organisations, the answer to that second question is no. Customer data is fragmented across CRM systems. Product data doesn't conform to a consistent schema. Historical decisions weren't recorded in a way that makes them usable as context. Case notes are in unstructured text with no metadata.

Before any AI system can work reliably in production, someone has to make this data retrieval problem tractable. The organisations that skip this step and go straight to building on top of raw, fragmented data end up with AI systems that work inconsistently and unpredictably. Which is worse than no AI at all, because it erodes trust.

The 20% treat data architecture as a prerequisite, not a detail.
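To make that retrieval question concrete, here is a minimal sketch of the precondition the 20% establish first: case notes carry structured, normalised metadata, so context can be filtered reliably before any model is involved. Everything here is illustrative; the `CaseNote` fields and the in-memory store are invented stand-ins for a real data layer.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CaseNote:
    case_id: str
    product: str   # normalised to one schema, not free text
    created: date
    text: str

# A tiny in-memory store standing in for the real data layer.
NOTES = [
    CaseNote("C-1", "home", date(2025, 3, 1), "Customer reported water damage."),
    CaseNote("C-2", "motor", date(2025, 4, 2), "Windscreen claim, resolved same day."),
    CaseNote("C-3", "home", date(2025, 5, 9), "Escalated: disputed excess amount."),
]

def retrieve_context(product: str, limit: int = 5) -> list[str]:
    """Return the newest notes for a product as model-ready context lines."""
    matching = sorted(
        (n for n in NOTES if n.product == product),
        key=lambda n: n.created,
        reverse=True,
    )
    return [f"[{n.case_id}] {n.text}" for n in matching[:limit]]

print(retrieve_context("home"))
```

The point is not the twenty lines of code; it is that this function can only exist once `product` means the same thing everywhere and notes have dates at all. That is the work most organisations skip.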

2. They define the right problem to solve

There's a pattern in AI project failures that goes like this:

  1. Organisation decides to "add AI"
  2. Team picks a use case that seems tractable: a chatbot, a recommendation engine, a document summariser
  3. The use case gets built and deployed
  4. Nobody uses it, or it doesn't move any metric that matters
  5. Conclusion: "AI doesn't work for us"

The failure happened at step 2. The use case wasn't connected to a problem that the business actually has, in a workflow that people actually use, with a metric that the business actually tracks.

The 20% start from the business problem: what is the high-volume, high-friction, high-cost workflow that AI could change? Then they work backward to the AI architecture, not forward from "what's possible with LLMs."

For the insurance brokerage I worked with at AdBrain, the problem was specific and measurable: thousands of customer service cases per month, inconsistent handling, slow resolution, agents unable to scale. AI was the solution to that problem, not a technology experiment running parallel to the real business.

3. They invest in production engineering

The gap between "the demo works" and "the system works in production" is where most AI projects disappear.

Production AI requires:

  • Retrieval that works at scale, not just in a controlled test environment with clean data
  • Graceful degradation. What happens when the model is uncertain, when the data is missing, when the case is outside the training distribution?
  • Observability. Can you see what the system is doing, why it made each decision, and where it's failing?
  • Human-in-the-loop design. When should the AI handle something autonomously, and when should it escalate? And when it escalates, does the human get the context they need?
  • Audit trails, especially in regulated industries, but increasingly in any consequential business process

The 20% treat production engineering as a first-class concern from the start. The 80% treat it as a problem to solve after the demo works.
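The degradation, escalation, and audit requirements above can be sketched in a few lines. Everything here is illustrative: the `CONFIDENCE_FLOOR` threshold, the reason codes, and the in-memory `AUDIT_LOG` are invented stand-ins for whatever your stack actually provides.

```python
import time
from dataclasses import dataclass, asdict

CONFIDENCE_FLOOR = 0.8  # assumed threshold; tune per workflow

@dataclass
class Decision:
    case_id: str
    action: str      # "auto_resolve" or "escalate"
    reason: str      # recorded so a human reviewer can see *why*
    confidence: float
    timestamp: float

AUDIT_LOG: list[dict] = []  # stands in for an append-only audit store

def decide(case_id: str, confidence: float, context_found: bool) -> Decision:
    """Route a case: handle autonomously only when the preconditions hold."""
    if not context_found:
        d = Decision(case_id, "escalate", "missing_context", confidence, time.time())
    elif confidence < CONFIDENCE_FLOOR:
        d = Decision(case_id, "escalate", "low_confidence", confidence, time.time())
    else:
        d = Decision(case_id, "auto_resolve", "confident", confidence, time.time())
    AUDIT_LOG.append(asdict(d))  # every decision is recorded, not just failures
    return d
```

Notice that escalation carries a reason code: when the case reaches a human, they see whether the system gave up because data was missing or because the model was unsure, which are very different problems to fix.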

4. They stage the rollout

"Big bang" AI deployment almost never works. The 20% deploy incrementally: start with a small fraction of real traffic, measure outcomes against real business metrics, build confidence in the system before scaling.

This isn't just risk management. It's how you build institutional trust in an automated system. The people whose work the system affects need to see it work. Leadership needs to see the metrics move. Compliance needs to see the audit trail.

Staged rollout also allows you to catch failures early, in production conditions that your test environment will never fully replicate, without catastrophic consequences.
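One common way to implement the traffic fraction is deterministic bucketing: hash a stable identifier and route a fixed percentage of cases to the AI path. A minimal sketch, with the helper name and percentages invented for illustration:

```python
import hashlib

def in_rollout(case_id: str, percent: int) -> bool:
    """Deterministically assign a case to the AI path.

    Hashing the ID (rather than sampling at random) keeps the
    assignment stable across retries and redeploys, so outcome
    metrics for the two paths stay comparable.
    """
    digest = hashlib.sha256(case_id.encode()).digest()
    bucket = digest[0] * 256 + digest[1]  # uniform over 0..65535
    return bucket < (percent / 100) * 65536

# Ramp by changing one number: 5 -> 20 -> 50 -> 100.
sample = [f"case-{i}" for i in range(10_000)]
share = sum(in_rollout(c, 5) for c in sample) / len(sample)
print(f"{share:.1%} of traffic on the AI path")
```

The single `percent` knob is the whole rollout control: turning it up is a one-line change, and turning it down is your rollback plan.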

5. They treat AI governance as a competitive advantage

With the EU AI Act in full effect and organisations increasingly asking hard questions about AI risk, the companies that have built mature AI governance frameworks are at a commercial advantage over those that haven't.

This doesn't mean governance as bureaucracy. It means:

  • Clear ownership of AI decisions (the CAIO role or equivalent)
  • Documented decision logic for automated systems
  • Regular accuracy and bias auditing
  • Clear escalation paths when systems fail
  • An approach to EU AI Act risk categorisation for the AI systems you're deploying
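As a sketch of what "documented decision logic" plus risk categorisation can look like in practice, here is an illustrative system register. The tiers, controls, and system names are all invented for this example, and none of this is legal advice on the EU AI Act itself.

```python
# Controls each (assumed) risk tier triggers in this hypothetical framework.
RISK_TIERS = {
    "minimal": [],
    "limited": ["transparency_notice"],
    "high": ["transparency_notice", "human_oversight", "accuracy_audit", "logging"],
}

# A register of deployed AI systems mapped to an assigned tier.
AI_REGISTER = {
    "marketing-copy-assistant": "minimal",
    "customer-chatbot": "limited",
    "claims-triage": "high",
}

def required_controls(system: str) -> list[str]:
    """Look up which controls a deployed system must evidence."""
    return RISK_TIERS[AI_REGISTER[system]]

print(required_controls("claims-triage"))
```

A register this simple is still more than most organisations have: a single place that answers "what AI do we run, what tier is it, and what controls does that commit us to?"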

Only about 1 in 5 companies have this in place today. That's a gap, and a gap creates differentiation.

What this means for your organisation

If you're building or deploying AI, the questions above aren't abstract. They're the questions your enterprise customers' IT and legal teams will ask before signing a contract. They're the questions an acquirer's technical due diligence team will surface during an exit process. They're the questions investors will ask when they want to understand why your AI investment is defensible.

Building with these considerations in mind from the start is rarely slower or more expensive than building without them. It's usually faster, because you don't have to rebuild the parts that break.


If you want to pressure-test your AI approach against these patterns, get in touch.


Related: Agentic AI in 2026: what actually works in production · Most AI transformations are performance art · Shadow AI is your next audit finding

Ready to make AI actually work?

Tell me what you're working on. I'll respond personally. If there's a fit, we'll take it from there.

Currently accepting one new client alongside existing commitments. Second slot opens Q3 2026.