McKinsey's latest State of AI survey found that 88% of organisations now use AI in at least one business function. But only 6% qualify as "AI high performers", defined as those attributing more than 5% of EBIT to AI.
That gap, between using AI and benefiting from AI, is the real story. The technology is deployed everywhere. The value is concentrated in a few. And the reason is almost always the same: most companies treat AI as an engineering tool, not a business-wide operating system.
Every process in every business in every industry can be optimised with AI. I don't mean hypothetically. I mean right now, with tools that exist today, at a cost that makes the ROI obvious within weeks. Shopify CEO Tobi Lütke made the right call when he told his company that no one can request headcount without first proving AI cannot do the job. The burden of proof has flipped. It's no longer "why should we use AI?" It's "why aren't we?"
The engineering tunnel vision problem
Most companies that "do AI" are doing it in one place: software development. They've bought Copilot licences. Maybe they're experimenting with agentic coding tools. The engineering team is six months ahead of everyone else in the organisation.
Meanwhile, the finance team is still manually reconciling spreadsheets. Marketing is writing every piece of content from scratch. Customer support agents are copy-pasting the same responses they've been using for three years. Legal is reviewing contracts clause by clause.
Lakhani, Spataro and Stave described this same pattern in a March 2026 Harvard Business Review piece as "islands of productivity": tools that boost individual output but exist as isolated wins, difficult to convert into scaled, trustworthy enterprise systems. That phrase matches what I see almost everywhere. A high-performing engineering function. A back office that has barely changed in a decade.
This is a massive missed opportunity. The biggest AI gains I've seen are not in engineering. They're in operations, support, and back-office functions where the work is repetitive, the volume is high, and the cost of doing it manually is quietly enormous.
Function by function: what works today
Finance and accounting
AI is genuinely good at financial operations right now. Invoice processing, expense categorisation, anomaly detection in transactions, cash flow forecasting. These are not experimental use cases. They're production-ready.
IBM automated over 90% of their finance journal entries using watsonx Orchestrate. HPE uses intelligent agents to automate quarterly close, forecasting, and analysis. These aren't pilots. They're running in production at two of the largest technology companies in the world.
I've seen finance teams cut month-end close time by 40% using AI-assisted reconciliation. The tool doesn't replace the accountant. It does the tedious matching work and flags the exceptions that need human judgement. Fortune reports that CFOs are predicting AI will transform finance from retrospective reporting to real-time decision-making. The role is shifting from "what happened" to "what should we do."
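To make that concrete, here is a minimal sketch of the pattern, independent of any particular vendor's tooling: match the obvious pairs automatically and route everything ambiguous to a human. The field names and tolerances are illustrative assumptions, not a production configuration.

```python
# A minimal sketch of "match the routine, flag the exceptions".
# Field names, tolerances, and data shapes are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Entry:
    ref: str
    amount: float
    posted: date

def reconcile(ledger: list[Entry], bank: list[Entry],
              amount_tol: float = 0.01, days_tol: int = 3):
    """Auto-match unambiguous pairs; everything else goes to a human review queue."""
    matched, exceptions = [], []
    remaining = list(bank)
    for entry in ledger:
        candidates = [b for b in remaining
                      if abs(b.amount - entry.amount) <= amount_tol
                      and abs((b.posted - entry.posted).days) <= days_tol]
        if len(candidates) == 1:      # one clear counterpart: match automatically
            matched.append((entry, candidates[0]))
            remaining.remove(candidates[0])
        else:                         # zero or several candidates: human judgement
            exceptions.append(entry)
    return matched, exceptions
```

The accountant starts the close with the exceptions list, not with ten thousand rows.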
What doesn't work yet: fully autonomous financial decision-making. AI can surface insights and flag anomalies, but anything involving judgement calls on material financial matters still needs a human. And it should.
Marketing and content
This is where the gap between what's possible and what most teams actually do is widest.
Most marketing teams using AI are generating first drafts of blog posts. That's fine, but it's about 10% of the value available. The real gains are in research, personalisation, and distribution. AI can analyse your entire content library and identify gaps. It can segment audiences and tailor messaging at a scale that would require a team of ten to do manually. It can A/B test subject lines, optimise send times, and identify which channels are converting for which segments.
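As a small illustration of the testing side, this is what checking a subject-line A/B result looks like underneath the tooling. The counts are invented; the point is that the evaluation is routine statistics a tool can run on every send without anyone opening a spreadsheet.

```python
# Toy check of whether subject line B genuinely beats A, using a chi-square test
# on opens. Counts are made up for illustration.
from scipy.stats import chi2_contingency

results = [[420, 4580],   # subject line A: opened, not opened (5,000 sends)
           [495, 4505]]   # subject line B: opened, not opened (5,000 sends)

chi2, p_value, _, _ = chi2_contingency(results)
print(f"p = {p_value:.3f}")  # well below 0.05 here, so the lift is probably real
```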
HubSpot's 2025 State of Marketing report found that teams using AI across the full content lifecycle (research, creation, distribution, analysis) saw 3x the output with the same headcount. Not 3x the content. 3x the output measured by engagement and conversion.
What doesn't work yet: letting AI write your brand voice without heavy human editing. The tools produce competent copy. Competent is not enough. You still need a human who understands what makes your voice distinctive.
Sales
AI in sales is past the experimental phase. Lead scoring, call analysis, pipeline forecasting, proposal generation. These are all live use cases with proven ROI.
The most impactful application I've seen is AI-powered call analysis. Tools like Gong and Chorus analyse every sales conversation, identify what top performers do differently, and surface coaching opportunities for the rest of the team. One organisation I advised saw win rates increase 18% within a quarter after implementing structured call analysis with AI-generated coaching recommendations.
Proposal generation is another area where the ROI is immediate. A senior salesperson spending two hours crafting a custom proposal can get a strong first draft in fifteen minutes. That's not about replacing the salesperson. It's about freeing them to spend time on relationships and strategy instead of formatting documents.
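For the technically curious, a first draft is not much more than structured deal context plus a general-purpose model. This sketch assumes the OpenAI Python SDK and invented deal data; your CRM fields, prompt, and approved model will differ.

```python
# A minimal sketch of proposal first-drafting. The deal data, prompt structure,
# and model name are illustrative assumptions, not a recommended setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

deal = {
    "client": "Acme Logistics",                              # hypothetical
    "problem": "manual freight quoting takes 2-3 days per request",
    "proposed_solution": "AI-assisted quoting integrated with their TMS",
    "commercials": "Enterprise tier, 12-month term",
}

prompt = (
    "Draft a one-page sales proposal in our standard structure "
    "(situation, proposed solution, commercials, next steps) from this context:\n"
    + "\n".join(f"- {k}: {v}" for k, v in deal.items())
)

draft = client.chat.completions.create(
    model="gpt-4o",  # placeholder: use whichever model your organisation has approved
    messages=[{"role": "user", "content": prompt}],
)
print(draft.choices[0].message.content)  # a first draft for the salesperson to edit
```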
What doesn't work yet: AI closing deals autonomously. Sales is fundamentally about trust between humans. AI makes salespeople more effective. It doesn't replace them.
Customer support
This is where I have the most direct experience. The insurance brokerage case I led achieved 67% autonomous case resolution with an agentic AI system. Not chatbot-style deflection. Genuine resolution: retrieving policy information, making coverage determinations, communicating outcomes to customers.
Klarna's experience is instructive. Their AI assistant handled 2.3 million conversations in its first month, equivalent to 700 full-time agents. The headlines were triumphant. Then quality problems forced them to reverse course and rehire humans. They ended up with a hybrid model that may actually be more effective than either pure approach. The lesson: AI can handle the volume, but you need humans for the edge cases that destroy customer trust when handled badly.
The key insight from my own work: customer support is the function where AI delivers the most measurable, immediate ROI in most organisations. The volume is high, the queries are repetitive, the cost per interaction is well understood, and the quality bar is definable. If you're only going to pick one function to start with, start here.
What doesn't work yet: handling emotionally complex or genuinely novel situations. The best systems know when to escalate. The worst ones try to handle everything and erode trust in the process.
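The escalation logic does not need to be exotic. Here is a deliberately simple sketch of the gate I mean, with made-up intents and thresholds: the system resolves a case autonomously only when it is routine, the model is confident, and the customer is not already upset.

```python
# A minimal escalation gate. Intents, thresholds, and field names are
# illustrative assumptions, not the configuration of any real deployment.
from dataclasses import dataclass

@dataclass
class Triage:
    intent: str         # classified query type, e.g. "policy_lookup"
    confidence: float   # model's confidence in its own resolution, 0 to 1
    sentiment: float    # -1 (angry) to +1 (happy)

SUPPORTED_INTENTS = {"policy_lookup", "billing_query", "coverage_check"}

def route(t: Triage) -> str:
    if t.intent not in SUPPORTED_INTENTS:   # genuinely novel: human
        return "escalate"
    if t.confidence < 0.85:                 # uncertain: human
        return "escalate"
    if t.sentiment < -0.3:                  # distressed customer: human
        return "escalate"
    return "resolve_autonomously"

print(route(Triage("coverage_check", 0.92, 0.1)))     # resolve_autonomously
print(route(Triage("claim_complaint", 0.95, -0.8)))   # escalate
```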
Legal and contracts
Legal is the sleeper disruption story of 2026. Corporate AI adoption in legal more than doubled in a single year, from 23% to 52%, according to the ACC/Everlaw survey. In-house teams are responding: 64% now expect less dependence on outside counsel, and 61% are pushing for changes in how outside legal services are priced.
Tools like Luminance, Ironclad, and Thomson Reuters' CoCounsel can review standard commercial contracts in minutes, flagging non-standard clauses, missing provisions, and risk areas that a junior lawyer would take hours to identify. LexisNexis has deployed a four-agent system for legal research: orchestrator, legal research, web search, and customer document analysis.
I've seen legal teams reduce contract review time by 60-70% for standard agreements. The AI handles the first pass. The lawyer handles the judgement calls. This isn't about reducing legal headcount. It's about removing the bottleneck that legal review often creates in deal velocity.
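To show the shape of that first pass, here is a toy checklist-style review: confirm the standard provisions are present and pull out liability language for the lawyer to check against your standard cap. The clause list and patterns are illustrative assumptions; production tools use language models rather than keyword matching, and none of this is legal analysis.

```python
# A toy first-pass contract check: presence of standard provisions, plus any
# numbers attached to liability language, surfaced for a lawyer to review.
import re

REQUIRED_CLAUSES = ["governing law", "limitation of liability",
                    "termination", "confidentiality"]

def first_pass(contract_text: str) -> dict:
    text = contract_text.lower()
    missing = [c for c in REQUIRED_CLAUSES if c not in text]
    liability_terms = re.findall(r"liability.{0,80}?(\d+)\s*(?:%|percent|months)", text)
    return {"missing_provisions": missing, "liability_terms_to_review": liability_terms}

sample = """
This Agreement is governed by the laws of England (governing law).
Limitation of liability: aggregate liability is capped at 24 months of fees.
Either party may terminate on 30 days notice (termination).
"""
print(first_pass(sample))
# -> {'missing_provisions': ['confidentiality'], 'liability_terms_to_review': ['24']}
```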
What doesn't work yet: novel legal reasoning or complex regulatory interpretation. AI can tell you what a contract says. It can't yet reliably tell you what it means in the context of a specific business situation and regulatory environment.
Project management and reporting
This is the function where AI adoption is lowest and the opportunity is most underestimated.
AI can eliminate 80% of status reporting. It can pull data from Jira, Slack, GitHub, and your time-tracking tool, synthesise it into a coherent status update, flag risks based on velocity trends, and generate the weekly report that a project manager currently spends two hours compiling every Friday.
I've seen project leads reclaim an entire day per week by automating reporting and status synthesis. That's a day they can spend actually managing the project instead of describing it.
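The plumbing is mundane. This sketch pulls a week of issue activity from Jira and assembles the raw material for the weekly update; the URL, project key, and credentials are placeholders, and the same pattern extends to Slack, GitHub, and time tracking.

```python
# A minimal sketch of automated status synthesis from Jira. The base URL,
# project key, and credentials are placeholders, not a real configuration.
import requests

JIRA_BASE = "https://your-company.atlassian.net"     # placeholder
AUTH = ("status-bot@example.com", "api-token")       # placeholder credentials

resp = requests.get(
    f"{JIRA_BASE}/rest/api/2/search",
    params={"jql": "project = OPS AND updated >= -7d ORDER BY status",
            "fields": "summary,status,assignee"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

lines = []
for issue in resp.json()["issues"]:
    fields = issue["fields"]
    assignee = (fields.get("assignee") or {}).get("displayName", "unassigned")
    lines.append(f'{issue["key"]} [{fields["status"]["name"]}] {fields["summary"]} ({assignee})')

digest = "\n".join(lines)
print(digest)  # hand this digest to a language model to draft the narrative update
```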
What doesn't work yet: AI project managers. Prioritisation, stakeholder management, and the ability to read a room when a project is going sideways. These remain distinctly human skills.
Internal communications
Meeting summarisation alone justifies AI investment for most organisations. If your company has more than 50 people, you're spending thousands of hours per year in meetings where half the attendees are there "just in case." AI meeting tools (Otter, Fireflies, Copilot in Teams) can record, transcribe, summarise, and extract action items. The people who didn't need to attend can read the summary in two minutes.
Beyond meetings: internal knowledge bases that actually answer questions, onboarding documentation that stays current, policy documents that can be queried rather than read. These are all production-ready use cases today.
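The queryable-policy idea is simpler than it sounds. This toy example retrieves the most relevant policy snippet for a question using plain TF-IDF; a real deployment would use embeddings and a vector store, but the retrieve-then-answer shape is the same.

```python
# A toy "ask the policy" retriever. The policy snippets and question are invented;
# TF-IDF stands in for the embedding search a real system would use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policies = [
    "Expenses over 500 EUR require pre-approval from your department head.",
    "Remote work is allowed up to three days per week with manager agreement.",
    "Annual leave must be requested at least two weeks in advance.",
]

question = "Do I need approval before booking a 700 EUR flight?"

vectoriser = TfidfVectorizer().fit(policies + [question])
scores = cosine_similarity(vectoriser.transform([question]),
                           vectoriser.transform(policies))[0]
print(policies[scores.argmax()])  # the snippet an assistant would cite in its answer
```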
The compounding effect
Here's what makes internal AI enablement a competitive advantage rather than just an efficiency play.
Each function you optimise with AI frees capacity. That capacity can be redirected to higher-value work, or it can allow you to grow operations without proportionally growing headcount. When you do this across five or six functions simultaneously, the effect compounds. You don't get 10% efficiency in each area. You get a fundamentally different cost structure.
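A rough, entirely hypothetical calculation shows the scale. Free a plausible share of time in five functions and the capacity adds up to dozens of full-time equivalents; the headcounts and percentages below are invented purely to illustrate the arithmetic.

```python
# Illustrative arithmetic only: hypothetical headcounts and time savings,
# not figures from any client engagement.
functions = {                 # people in the function, share of their time freed
    "support":      (60, 0.40),
    "finance":      (25, 0.25),
    "marketing":    (15, 0.30),
    "legal":        (8,  0.30),
    "pm/reporting": (12, 0.20),
}

freed_fte = sum(people * share for people, share in functions.values())
total = sum(people for people, _ in functions.values())
print(f"{freed_fte:.0f} FTEs of capacity freed across {total} people")
# -> 40 FTEs of capacity freed across 120 people
```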
The trap, as Lakhani, Spataro and Stave noted in Harvard Business Review, is that "saved time is often re-absorbed into low-value activities, like more internal meetings or unnecessary emails, rather than being structurally harvested." Capacity does not capture itself. Someone has to redesign the role, change the budget line, or shift the headcount allocation. Without that step, the AI investment shows up as a vendor bill and nothing else.
The roles most exposed are the reporting-heavy ones in the middle of the organisation: finance analysts whose primary output is monthly decks, compliance officers compiling regulatory submissions, supply chain planners building forecasts from spreadsheets, procurement managers drafting RFPs. The organisations moving fastest are not eliminating these roles wholesale. They are reallocating the people in them toward analytical, judgement-heavy, and creative work that AI cannot yet do well.
The organisations I work with that understand this are not just saving money. They're moving faster than competitors who are still running every function manually. And that gap widens every quarter.
The common mistakes
Even with the right intent, most organisations get internal AI enablement wrong in predictable ways.
Mistake one: starting everywhere at once. Pick one function. Get it working. Measure the results. Then expand. The 80% failure rate applies to internal tools just as much as customer-facing ones.
Mistake two: no governance. When every department is buying its own AI tools, you get shadow AI at scale. Sensitive data flowing to tools nobody approved. Duplicate spend. Inconsistent quality. Internal enablement needs a lightweight governance framework from day one.
Mistake three: expecting AI to fix broken processes. If your sales process is a mess, AI will make it a faster mess. The organisations that see real results redesign the workflow before adding AI to it.
Mistake four: forgetting the humans. Every function-level AI deployment is a change management exercise. The finance team needs to trust the reconciliation tool. The legal team needs to believe the contract review is reliable. That trust is built through transparency, gradual rollout, and honest communication about what the tool does and doesn't do.
Where to start
If I were advising an organisation that hadn't yet done any cross-functional AI enablement, I'd tell them to pick the function with the highest volume of repetitive, well-defined work. For most companies, that's customer support or finance operations.
Run a focused pilot. Measure before and after. Be honest about what worked and what didn't. Then use that success (and those lessons) to build momentum for the next function.
The technology is ready. The tools are affordable. The ROI is measurable within a quarter, not a year. The only thing missing in most organisations is someone with enough authority and enough cross-functional understanding to see the full picture and connect the dots. That is not a technology problem. It is a leadership one.
If you want to map where AI can create the most value inside your organisation, get in touch.
Related: Why 80% of AI projects fail to deliver ROI · Most companies are adopting AI. Few are adopting it well · How I approach AI transformation