If it can be automated, it must be automated.

I mean that literally. Not as aspiration, but as economic inevitability. Every task that a machine can do faster, cheaper, and more consistently than a human will eventually be handed to a machine. This has been true since the spinning jenny. AI just compresses the timeline from decades to months.

The question most leaders are asking, "Will AI take our jobs?", is the wrong one. It's too binary and too abstract. The better question is specific and strategic: what can only humans do, and how do we get better at it?

That's where the real competitive advantage lives. Not in resisting automation, but in ruthlessly embracing it so that your people can focus on the things that actually require a human mind.

What AI does better. Accept it and move on

Let's be honest about this, because denial wastes time.

AI is better than humans at processing large volumes of structured information. It's better at pattern matching across datasets too large for any person to hold in their head. It's better at maintaining consistency across thousands of decisions. It's better at operating at scale without fatigue, mood, or distraction.

McKinsey estimates that 60-70% of current work activities could be automated with existing technology. Not future technology. What exists today. An insurance brokerage I worked with achieved 67% autonomous case resolution. Not because the technology was extraordinary, but because most of those cases genuinely didn't require human judgement. They required pattern matching, data retrieval, and rule application. Machines do all three better than people.

Here's the finding that should challenge any comfortable assumptions about human-AI collaboration. Stanford HAI's 2025 AI Index reported that in a healthcare diagnostic trial, GPT-4 alone achieved 92% accuracy. Physicians alone scored 74%. And physicians assisted by GPT-4 scored 76%. Adding humans to AI made it worse. Not better. Worse. The humans second-guessed correct AI diagnoses and introduced errors that the AI alone would not have made.

This is deeply counterintuitive. It's also a useful corrective. The comfortable narrative that humans will always add value "in the loop" needs qualifying. For many tasks, they don't. The human edge is real, but it's narrower than most people want to believe.

Pretending otherwise isn't loyalty to your workforce. It's a failure of leadership. The organisations that try to protect people from automation end up protecting them into irrelevance, doing work that machines do better while competitors pull ahead.

The irreducible human capabilities

So what's left? More than you might think. And it's the work that actually matters.

A Harvard Business School study of 758 BCG consultants mapped what the researchers call the "jagged frontier" of AI capability. Consultants using GPT-4 completed 12% more tasks, 25% faster, with 40% higher quality, but only for tasks within the AI's capability frontier. For tasks outside the frontier, consultants using AI were 19% less likely to produce correct solutions. They trusted the AI when they shouldn't have. The best performers were the ones who knew exactly where the frontier was, and handled the other side themselves.

Strategic intent. Knowing what's worth doing

AI can optimise any objective function you give it. What it cannot do is decide which objective function matters. It can tell you the most efficient path to a goal. It cannot tell you whether the goal is worth pursuing.
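
To make the distinction concrete, here's a minimal sketch (a toy quadratic, nothing more): a standard optimiser will find the minimum of whatever function it's handed. Nothing in the machinery asks whether that function is the right one to minimise. That question sits with whoever wrote it.

```python
# A toy objective. The optimiser treats it as given; choosing it is the
# human step that no amount of optimisation replaces.
from scipy.optimize import minimize

def objective(x):
    # Minimise distance to the point (3, -1). Why that point? The code
    # neither knows nor cares. That's strategic intent, and it lives upstream.
    return (x[0] - 3) ** 2 + (x[1] + 1) ** 2

result = minimize(objective, x0=[0.0, 0.0])
print(result.x)  # ~[3.0, -1.0]: a perfect answer to whatever question you asked
```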

This is not a small distinction. The most expensive mistakes in business aren't execution failures. They're strategic ones. Building the wrong product perfectly. Optimising the wrong metric efficiently. Scaling the wrong business model flawlessly.

I've seen AI projects fail not because the technology didn't work, but because nobody asked whether the problem was worth solving. The 80% failure rate in AI projects isn't primarily a technical problem. It's a strategic one. Someone has to own the intent. Someone has to say, "This is worth doing" or, more importantly, "This is not worth doing." That someone is human.

Taste, judgement, and opinion

Here's something that doesn't get discussed enough in the AI conversation. Taste matters.

Not aesthetic taste, though that too. Business taste. The intuition that comes from twenty years of experience in an industry. The ability to look at a proposal and know, before the data confirms it, that something is off. The judgement to say "this is technically correct but strategically wrong."

AI has no opinions. It has outputs calibrated to probability distributions. It will give you the statistically most likely answer, which is often the most mediocre one. The organisations deploying AI well aren't the ones deferring to AI outputs. They're the ones whose leaders have strong enough judgement to know when the AI is right and when it's confidently wrong.
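
Here's a toy illustration of what "calibrated to probability distributions" means in practice (the candidates and numbers below are invented): if a model scores possible answers by likelihood and returns the top one, it returns the consensus answer by construction.

```python
# Invented probabilities over three candidate recommendations. A system that
# returns the most likely option picks the consensus answer by construction.
candidates = {
    "the safe, consensus recommendation": 0.46,
    "the contrarian but well-argued bet": 0.31,
    "the genuinely novel strategy": 0.23,
}

# Greedy selection: take the mode of the distribution.
answer = max(candidates, key=candidates.get)
print(answer)  # -> "the safe, consensus recommendation"
```

The point isn't that consensus answers are wrong. It's that nothing in the selection step can prefer the bold option. That preference is an opinion, and opinions are supplied by people.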

Block's CTO Dhanji Prasanna said it well: code quality has "nothing to do" with product success. YouTube triumphed despite "terrible" code. The decisions that matter are about what to build, not how to build it. That's taste. That's judgement. And no model currently possesses it.

I've built products where the data pointed one way and experience pointed another. Experience was right more often than most data scientists would like to admit. Not because data doesn't matter, but because data only tells you what happened. Judgement tells you what it means.

Lakhani, Spataro and Stave warned about this in HBR. A relentless focus on efficiency, they argue, "risks hollowing out the human capabilities, such as judgment and storytelling, that differentiate high-value work." The organisations that treat AI purely as a cost-reduction tool tend to lose the capacity to do the work AI cannot do.

Human-to-human relationships

No amount of AI sophistication replaces the trust that forms between people who have worked through difficult problems together. A client doesn't hire me because an algorithm recommended me. They hire me because a conversation revealed that I understood their situation in a way that felt different from the others.

This isn't sentiment. It's economics. B2B sales cycles, board decisions, partnership formations, key hires. The transactions that shape organisations happen between humans who trust each other. AI can prepare the analysis. It can surface the options. It can draft the presentation. But the moment where someone decides to commit resources, take a risk, or change direction, that moment is human.

The shift from individual contributor to orchestrator makes this more important, not less. As AI handles more of the technical execution, the human skills of communication, persuasion, and relationship building become a larger proportion of what determines success.

Ethical reasoning and contextual wisdom

AI systems optimise. That's what they do. They find the path of least resistance to whatever objective you've defined. They don't ask whether the objective is ethical. They don't consider second-order consequences on people who aren't represented in the training data. They don't feel uncomfortable when something is technically legal but plainly wrong.

Every organisation I've worked with has faced decisions where the right answer wasn't the most efficient one. Where serving a customer well meant absorbing a cost. Where doing the ethical thing meant leaving money on the table. These decisions require a kind of contextual wisdom that comes from lived experience, from having been in situations where the "optimal" choice turned out to be the wrong one.

The governance questions around shadow AI are a good example. An AI system will happily process sensitive data if you let it. Knowing when it shouldn't, and why, requires human judgement about risk, reputation, and responsibility that no model currently possesses.

The strategic argument. Offload everything else

If these are the irreducible human capabilities, the strategic implication is clear: automate everything that isn't on this list.

Not gradually. Not cautiously. Aggressively.

Every hour your best people spend on work that a machine could do is an hour they're not spending on strategic thinking, relationship building, or the kind of difficult judgement calls that actually determine your organisation's trajectory. It's not just wasteful. It's competitively dangerous.

The dual-stream approach I advocate is exactly this: run two tracks simultaneously. On one track, automate every process that can be automated. On the other, deliberately invest in developing the human capabilities that machines cannot replicate. The organisations that do both will outperform those that do either one alone.

This means restructuring roles, not eliminating them. It means telling your team: "We're going to take away the parts of your job that a machine does better, and we're going to expect you to become exceptional at the parts only you can do." That's a harder conversation than "AI is coming for your jobs," but it's a more honest and more productive one.

The "for now" problem

I want to be honest about something. The human edge is shrinking.

Five years ago, I would have included creative writing on the list of irreducible human capabilities. Today, AI produces competent prose that passes most quality bars. Three years ago, I would have included basic code architecture. Today, agentic AI systems are making architectural decisions that are, in many cases, good enough.

The boundary between "human only" and "machine capable" moves in one direction. It moves towards the machine. Slowly in some domains, startlingly fast in others. Anyone who tells you that strategic thinking, relationship building, and ethical judgement are permanently safe from automation is making a prediction about a technology whose trajectory they cannot know.

Ben Goertzel, CEO of SingularityNET, claims AI will surpass human strategic thinking "in about two years." I think he's probably wrong on the timeline, but I take the direction seriously. The WEF Future of Jobs Report identifies creative thinking, resilience, and curiosity as the fastest-rising skills employers value. These are the human edge skills. And yet a quieter risk is emerging in the data. Critical thinking is atrophying in teams that lean too hard on AI outputs. Junior staff who don't first form their own view before consulting the model lose, over time, the muscle that makes them a useful check on it. We may be losing the very capabilities that make us irreplaceable, even as we still hold them.

I don't think these capabilities are safe forever. I think they're safe now, and they'll be safe for long enough to matter strategically. The organisations that double down on developing these capabilities in their people will have a decisive advantage for the next five to ten years. And five to ten years of competitive advantage is, in business terms, an eternity.

The right response to a shrinking edge is not to pretend it isn't shrinking. It's to maximise the advantage while it exists. Play the hand you have, not the hand you wish you had or the hand you fear you'll be dealt.

What this means for you

If you lead an organisation, three things follow from this argument.

First, stop protecting people from automation. Every task you shield from AI because "it's always been done by humans" is a task where you're choosing nostalgia over competitiveness. Free your people to do the work that only they can do.

Second, invest in the human skills that matter. Strategic thinking, judgement, relationship building, ethical reasoning. These aren't soft skills. They're the hardest skills. And they're the ones your organisation's future depends on. When AI handles scale and speed, the bottleneck becomes human judgement: the precision of the questions you ask, the depth with which you interpret model reasoning, and your ability to turn AI-generated ideas into better decisions. Lakhani, Spataro and Stave put it more bluntly in a recent Harvard Business Review piece: "the AI last mile is not blocked by technology. It is blocked by unresolved questions regarding operating models, governance, and human identity." Treat these skills with the same seriousness you treat technical capability.

Third, be honest about the timeline. The human edge is real, but it's not permanent. Build your strategy around it, but don't build your identity around it. Stay adaptive. The line will move again.

The most valuable people in any organisation have always been the ones who know what's worth doing, not just how to do it. AI hasn't changed that. If anything, it's made it more true than ever.


If you're working out where the human edge matters most in your organisation, get in touch.

