I don't sell a process. I adapt one.
Every client is different: different stage, different technology maturity, different organisational culture, different definition of success. But after four companies, dozens of advisory engagements, and production AI systems across insurance, defence, healthcare, and enterprise software, I've developed a thinking structure that works.
This is not a proprietary framework with a trademarked name. It's the distilled approach of someone who has built, shipped, and scaled AI systems, and who knows the difference between what demos well and what works in production.
The three-arc engagement
Every engagement follows three arcs. The length of each varies, but the sequence doesn't.
Arc 1: Diagnostic (Weeks 1–3)
Before I build anything, I need to understand what's actually happening, not what the slide deck says is happening.
Shadow AI audit. What AI tools are people actually using? Where is data flowing? What's sanctioned, what's unsanctioned, and what's the risk exposure?
Process mapping. Which processes are candidates for AI deployment? Where are the decision points? What's the current error rate, throughput, and cost per unit?
Data readiness. Is the data infrastructure ready for production AI? Not "do you have data?" but "can it be accessed, governed, and served to a production system at the required reliability level?"
Team assessment. Does the organisation have the skills to sustain AI systems after I leave? Where are the gaps?
The diagnostic phase ends with a clear, honest document: here's where you are, here's what's possible, and here's what I'd prioritise. No sugar-coating. If the answer is "this isn't ready yet," I'll say so.
Arc 2: Build and deploy (Weeks 3–8)
This is where most advisory engagements fail, because most advisors stop at the recommendation.
I don't stop at the recommendation.
Decision delegation architecture. For the selected process, we define precisely which decisions the AI system makes autonomously, which require human review, and where the escalation boundaries sit. This is the governance model built into the system, not bolted on after.
Tight-scope deployment. One process. One set of KPIs. Full governance from day one. The goal is a production system delivering measurable results within weeks, not a pilot that "shows promise" for months.
Kill criteria. Before we deploy, we define when we'd stop. If the system doesn't hit its KPIs within the agreed timeframe, we kill it and redirect the investment. No sunk-cost persistence.
Measurement from day one. Every system has defined KPIs that are tracked from the moment it goes live. Not vanity metrics. Business metrics. Cost per case, resolution rate, time-to-close, revenue impact. Numbers that appear on a P&L.
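To make that concrete, here's a rough sketch of what "kill criteria agreed before deployment" and "measurement from day one" can look like once they're written down rather than discussed. Everything in it is illustrative: the metric names, thresholds, and dates are placeholders, not figures from any specific engagement.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class KPITarget:
    """One business metric the system is accountable for from go-live."""
    name: str              # e.g. "cost_per_case", "resolution_rate" (illustrative names)
    target: float          # the agreed threshold
    higher_is_better: bool

@dataclass
class KillCriteria:
    """Stop conditions agreed before deployment, not negotiated afterwards."""
    review_date: date          # end of the agreed timeframe
    targets: list[KPITarget]

    def should_kill(self, measured: dict[str, float], today: date) -> bool:
        """Kill the system if the review date has passed and any KPI misses its target."""
        if today < self.review_date:
            return False
        for kpi in self.targets:
            value = measured.get(kpi.name)
            if value is None:
                return True  # no measurement at review time counts as a miss
            hit = value >= kpi.target if kpi.higher_is_better else value <= kpi.target
            if not hit:
                return True
        return False

# Hypothetical example: kill if, at the review date, cost per case hasn't dropped
# below 12.00 or the autonomous resolution rate is under 50%.
criteria = KillCriteria(
    review_date=date(2025, 3, 31),
    targets=[
        KPITarget("cost_per_case", target=12.00, higher_is_better=False),
        KPITarget("resolution_rate", target=0.50, higher_is_better=True),
    ],
)
```

The code itself isn't the point. The point is that the stop condition is explicit, checkable against the same numbers the P&L uses, and agreed before anyone has a sunk cost to defend.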
Arc 3: Sustain and scale (Ongoing)
A production system is only valuable if it stays in production and the organisation can evolve it without depending on me.
Knowledge transfer. The team understands the system architecture, the governance framework, and the monitoring approach. They can maintain and extend it.
Scaling roadmap. With one system in production and measured results, the next question is: what's next? The 90-day action plan identifies the next two or three processes, prioritised by P&L impact.
Governance evolution. The governance framework grows with the AI deployment. New systems, new risk categories, and new regulatory requirements are all incorporated systematically.
Decision delegation architecture
This is the core intellectual framework behind every AI deployment I build. It answers the question most organisations get wrong: what should the AI decide, and what should a human decide?
Most organisations draw this line based on comfort, not analysis. They keep humans in the loop for everything because it feels safer, which means the AI system adds latency without reducing cost, and the productivity gains never materialise.
The decision delegation architecture maps every decision in a process and classifies it:
- Fully automated. The AI decides and acts. For decisions where the error rate is within acceptable bounds, the consequences of errors are manageable, and the volume justifies automation.
- AI-recommended, human-approved. The AI proposes, a human confirms. For decisions where the error cost is high but the AI's recommendation quality is strong enough to save significant analysis time.
- Human-only. The AI provides context but the human decides. For decisions requiring judgement, empathy, or contextual knowledge the system can't access.
The boundaries between these categories are defined by data: error rates, cost of errors, volume, and regulatory requirements. Not by instinct. And they evolve as the system improves and the organisation builds confidence.
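As an illustration only, the classification can be expressed as a small rule over those same inputs. The thresholds, names, and signature below are hypothetical; in a real engagement they come out of the diagnostic phase and the client's own risk appetite and regulatory context, and they move as the system's measured error rates improve.

```python
from enum import Enum

class DecisionMode(Enum):
    FULLY_AUTOMATED = "ai_decides_and_acts"
    AI_RECOMMENDED = "ai_proposes_human_approves"
    HUMAN_ONLY = "human_decides_with_ai_context"

def classify_decision(
    error_rate: float,       # measured AI error rate for this decision type
    cost_of_error: float,    # expected cost of a single wrong decision
    monthly_volume: int,     # how often this decision is made
    regulated: bool,         # does regulation require a human decision-maker?
    max_auto_error_rate: float = 0.02,   # illustrative thresholds, agreed per client
    max_auto_error_cost: float = 500.0,
    min_auto_volume: int = 1_000,
) -> DecisionMode:
    """Classify one decision point using the data, not instinct."""
    if regulated:
        return DecisionMode.HUMAN_ONLY
    if (error_rate <= max_auto_error_rate
            and cost_of_error <= max_auto_error_cost
            and monthly_volume >= min_auto_volume):
        return DecisionMode.FULLY_AUTOMATED
    if cost_of_error > max_auto_error_cost and error_rate <= max_auto_error_rate:
        # AI is accurate enough to be worth reviewing, but errors are too costly to automate
        return DecisionMode.AI_RECOMMENDED
    return DecisionMode.HUMAN_ONLY
```

What matters is that the boundary between the three categories is written down, reviewable, and tied to measured numbers rather than to comfort.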
This is why 67% autonomous resolution worked at AdBrain. We didn't automate everything. We automated exactly the decisions that could be automated safely and profitably, and kept humans on the decisions that needed them.
What this is not
It's not a template. Two clients in the same industry and of similar size will get different approaches, because their data maturity, team capability, and strategic priorities are different.
It's not a maturity model. I'm not going to assess you against a 5-level framework and sell you a roadmap to level 5. I'm going to build something that works and measure whether it delivers.
It's not vendor-neutral consulting. It's implementation. I work alongside your team, in your systems, with your data. The deliverable is a working system, not a recommendation to build one.
If you want to understand how this would apply to your situation, let's talk. I can usually tell within 30 minutes whether there's a fit and what the first 90 days would look like.