CEO confidence in revenue growth has hit a five-year low. PwC's 2026 survey of 4,454 chief executives across 95 countries found that only 30% are confident, down from 56% in 2022. Mentions of "uncertainty" appeared in 87% of public earnings statements in early 2025. Glassdoor reviews mentioning uncertainty are up 80% year-over-year.

The five highest measurements of global economic policy uncertainty ever recorded have all come in the past five years. And this is before the full impact of AI reshapes entire industries.

I've led technology teams through several of these inflection points: at NHS Wales, where the wrong call affects millions of people, and at startups, where the wrong call means you run out of runway. The pattern I've noticed is that the skill separating the leaders who thrive from those who freeze is always the same: the ability to make good decisions when nobody knows what happens next.

Why uncertainty breaks most leaders

There's a neuroscience explanation for why uncertainty is so paralysing.

When the brain encounters uncertain threats, the amygdala fires and cortisol rises. A systematic review of the research found that the resulting stress response significantly impairs decision-making on uncertainty-based tasks. Worse, and this is the part that should concern every leader, elevated cortisol impairs metacognition. You make worse decisions and you don't realise it.

This creates a predictable failure pattern in organisations. The leader faces an uncertain situation, say, whether to invest heavily in AI or wait for the technology to mature. Stress rises. The brain's threat response activates. And instead of reasoning clearly, the leader defaults to one of two failure modes: analysis paralysis (delay the decision indefinitely, commission another report) or false certainty (pick a direction with excessive conviction, ignore disconfirming evidence).

Both are catastrophic. And both are everywhere.

What the evidence says actually works

Jim Collins and Morten Hansen spent nine years studying companies that outperformed their industry index by 10X in chaotic, uncertain environments. Their findings upend the conventional wisdom about visionary leadership.

10X leaders were not more creative, more visionary, or more risk-seeking than comparison leaders. They were more disciplined, more empirical, and more paranoid.

Three specific behaviours defined them:

Empirical creativity: "fire bullets, then cannonballs." Before making big bets, 10X leaders ran small, low-cost experiments to calibrate their aim. They didn't rely on analysis or intuition alone. They tested. Then, when they had empirical evidence that something worked, they committed resources aggressively. The comparison companies did the opposite: they fired cannonballs first, making large bets based on untested assumptions.

Fanatic discipline: 10X companies changed only 10-20% of their strategies over 20 years. Comparison companies changed 55-70%. Consistency, sticking with a validated approach even when the environment was chaotic, beat agility. This is counterintuitive. Most leadership advice says "be agile, pivot fast." The data says the opposite: find what works and hold the line.

Productive paranoia: 10X leaders built financial buffers and contingency plans before crises hit, not during them. They assumed bad things would happen and prepared accordingly. They carried more cash, had more contingency plans, and were more attuned to threats than their peers.

The most striking finding: luck was evenly distributed across both groups. What differed was what leaders did with it.

The OODA loop and pattern recognition

Colonel John Boyd developed the OODA loop (Observe, Orient, Decide, Act) from studying air combat. His insight was that the entity that cycles through this loop fastest doesn't just win. It creates confusion and disorientation in the opponent.

But Boyd's most important contribution was identifying orientation as the bottleneck. Not observation, not decision, not action, but the mental model you bring to the situation. Your culture, experience, prior assumptions, and cognitive biases all shape how you interpret what you observe. And when your mental model is wrong, faster action just means you fail faster.

Gary Klein's research on expert decision-makers confirms this. He studied firefighters, military officers, and surgeons making high-stakes decisions under time pressure. He found that experts don't compare options analytically. They use pattern recognition from experience to rapidly identify a workable course of action. The quality of their decisions depends on the quality of their mental models, which depend on the depth and breadth of their experience. I saw this firsthand when I served as an expert witness in a Hong Kong High Court case involving 39 casualties. The analysis that changed the outcome didn't come from better data. It came from a different way of looking at the same evidence.

This is why the best decisions under uncertainty come from leaders who have built diverse mental models through operating experience, not from leaders who have the best analysis or the most data.

What this means in the age of AI

The current AI transition is the most significant source of uncertainty in business since the internet. And most leaders are handling it badly.

EY's 2026 CEO Outlook found that 82% of CEOs are more optimistic about AI than a year ago, but 60% admit they've intentionally slowed implementation due to fear of errors. Half believe their job stability depends on successfully integrating AI. Gartner reports that 72% of CIOs are breaking even or losing money on AI investments.

The pattern is familiar. Uncertainty triggers the stress response. Leaders oscillate between two extremes: rushing to adopt AI without understanding the risks (false certainty) or delaying meaningful action while commissioning strategy decks (analysis paralysis).

Collins's research suggests a better approach, one I use with every organisation I work with:

Fire bullets first. Don't bet the business on a comprehensive AI transformation. Pick one high-value process. Build a focused AI solution. Measure it against a real business metric. Learn from what works and what doesn't. Then, and only then, fire the cannonball.

Be empirical, not ideological. The AI debate has become tribal: accelerationists versus sceptics, maximalists versus minimalists. None of that matters. What matters is whether the specific AI system you're building moves a specific number that your business cares about. If it doesn't, kill it. If it does, scale it.

Build the buffer. Productive paranoia in the AI context means keeping the skills and the people you'll need if the technology doesn't deliver what it promises. It means not cutting your junior engineering pipeline to fund AI tools that might not work. It means maintaining optionality, the ability to change course without catastrophic cost.
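For technology leaders who want the "bullet, then cannonball" gate above made concrete, it can be sketched as a simple decision rule: agree the metric and threshold before the pilot, then let the measurement decide. This is an illustrative toy, not a prescribed method; the metric names, threshold, and figures below are hypothetical.

```python
# Hypothetical sketch of a "fire bullets" gate for an AI pilot.
# The metric, threshold, and numbers are illustrative, not prescriptive.

from dataclasses import dataclass

@dataclass
class PilotResult:
    metric_name: str   # the business metric the pilot targets
    baseline: float    # value before the pilot
    measured: float    # value with the pilot running

def decision(result: PilotResult, min_uplift: float = 0.10) -> str:
    """Scale only on empirical evidence: a measured uplift over the
    baseline that clears a pre-agreed threshold. Otherwise kill it."""
    if result.baseline <= 0:
        return "kill"  # no usable baseline means the bullet wasn't measurable
    uplift = (result.measured - result.baseline) / result.baseline
    return "scale" if uplift >= min_uplift else "kill"

# Example: a pilot moving first-contact resolution from 40% to 46%
# is a 15% relative uplift, which clears a pre-agreed 10% bar.
print(decision(PilotResult("first_contact_resolution", 0.40, 0.46)))
```

The discipline is in fixing `min_uplift` before the pilot runs, so the scale-or-kill call is made by the evidence rather than by whoever argued loudest for the project.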

The Kodak lesson

Kodak invented the digital camera in 1975. Their leadership buried it because they feared cannibalising their film business. Despite a succession of new CEOs and mounting evidence that digital photography was the future, the organisation couldn't adapt. The result: an 80% workforce decline, a collapsed stock price, and a bankruptcy filing after more than a century of market dominance.

Nokia's engineers presented a full touchscreen phone prototype to management. The response: "That's not how phones work." The company lost roughly $100 billion in market value.

Blockbuster's executives dismissed Netflix as a "very small niche business" and turned down a $50 million acquisition offer. Netflix is now worth over $100 billion.

In every case, the failure wasn't a lack of information. It was a failure of orientation. Boyd's bottleneck. The mental models of the leaders in the room couldn't accommodate the change they were facing. So they didn't.

The leaders navigating AI well today, and I include Satya Nadella's transformation of Microsoft from a $300 billion company to a $2.5 trillion one among the best examples, share a different orientation. They treat their mental models as provisional. They expect to be wrong about some things. They test, measure, and adjust. And they maintain the discipline to hold their nerve when the uncertainty is highest.

Eisenhower put it best: "Plans are worthless, but planning is everything." The value isn't in the plan. It's in the thinking the plan forces you to do. And when reality diverges from the plan, as it always does, that thinking is what allows you to improvise effectively.

That's the skill. Not predicting the future. Not having the best strategy. Not being the most decisive person in the room. Just the ability to think clearly when nobody knows what happens next, and to act on that thinking with discipline.

It's the most important skill in business. And it's learnable.


If you're making technology decisions under uncertainty and want a second perspective, a fractional CTO engagement is one way to get it. Or simply get in touch.


Related: Most AI transformations are performance art · The training ladder is broken · NHS Wales: transformation at national scale · How I work

Ready to make AI actually work?

Tell me what you're working on. I'll respond personally. If there's a fit, we'll take it from there.

or take the free AI readiness assessment →

Currently accepting one new client alongside existing commitments. Second slot opens Q3 2026.