Andrej Karpathy coined "vibe coding" in February 2025: "you fully give in to the vibes, embrace exponentials, and forget that the code even exists." It was a throwaway tweet. It became Collins Dictionary's Word of the Year. And a year later, Karpathy himself declared it passé.
His replacement term is more interesting: agentic engineering. "The new default is that you are not writing the code directly 99% of the time. You are orchestrating agents who do and acting as oversight."
That one sentence captures the biggest shift in software engineering in a generation. And most of the industry hasn't caught up with what it actually means.
The old 10x myth
The "10x engineer" was always a flawed concept. The idea that one developer could produce ten times the output of another was based on a misunderstanding of what software engineers actually do. Writing code was never the bottleneck. Understanding the problem, designing the system, and making the right trade-offs: that was always the hard part.
But the myth persisted because, in the old paradigm, implementation speed was visible and valued. The engineer who could bang out a feature in a day while others took a week looked ten times more productive, even if the feature was poorly designed and would need rewriting in six months.
AI has made that version of productivity meaningless. When any developer can direct an AI agent to produce code at machine speed, typing speed is no longer a differentiator. The skill that used to look like 10x, fast and fluent code production, has been commoditised overnight.
What hasn't been commoditised is the thing that was always more important: judgement.
What orchestration actually looks like
Boris Cherny, the head of Claude Code at Anthropic, runs 5 local sessions and 5-10 remote sessions in parallel, each in its own git checkout. He shipped 22 pull requests in a day, then 27 the next, every line written by AI. He hasn't edited code by hand since November 2025.
That sounds like effortless productivity. It isn't.
What Cherny actually does is architectural direction, quality control, and context management at a pace that would overwhelm most engineers. Each session needs a well-scoped task. Each result needs review against the broader system. Roughly 10-20% of parallel sessions get abandoned because the agent went in an unexpected direction. The team maintains structured documentation that tells the agent what conventions to follow and what mistakes to avoid, and every time the agent makes a new kind of error, they update the documentation so it doesn't happen again.
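Cherny's setup is specific to his tooling, but the isolation mechanism generalises: each parallel session gets its own checkout so agents can't clobber each other's work, and abandoning a session is cheap. A minimal sketch in Python, using git worktrees (the function names and layout are illustrative assumptions, not a description of Anthropic's actual tooling):

```python
import subprocess
from pathlib import Path

def start_session(repo: Path, task_branch: str) -> Path:
    """Create an isolated checkout (a git worktree) for one agent session.

    The worktree lives next to the main repo, named after the task,
    so parallel sessions never share a working tree.
    """
    worktree = repo.parent / f"{repo.name}-{task_branch}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add",
         "-b", task_branch, str(worktree)],
        check=True,
    )
    return worktree

def abandon_session(repo: Path, task_branch: str) -> None:
    """Discard a session whose agent went in an unexpected direction."""
    worktree = repo.parent / f"{repo.name}-{task_branch}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "remove", "--force", str(worktree)],
        check=True,
    )
    subprocess.run(
        ["git", "-C", str(repo), "branch", "-D", task_branch],
        check=True,
    )
```

The point of the worktree approach over plain clones is that all sessions share one object store and one set of remotes, so spinning up or tearing down a session costs almost nothing.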
This isn't "vibe coding." This is engineering management at a different level of abstraction. The code is written by machines. The thinking (the decomposition, the verification, the architectural coherence) is entirely human.
At StrongDM, three engineers built what Simon Willison called "the most ambitious form of AI-assisted software development I've seen yet." No human writes code. No human reviews code. The humans design specifications, curate test scenarios, and watch scores. They built digital replicas of Okta, Jira, Slack, and Google Docs to run thousands of automated test scenarios per hour.
Three people. Zero human code. And the system works because those three people have the engineering judgement to design the right specifications and the right tests. Which is vastly harder than writing the code would have been.
The dirty secret
Tanmai Gopal, CEO of a billion-dollar-plus startup, said something that cuts through the hype: "70% of the effort required to make AI useful relies entirely on unwritten business context that exists only in human heads."
This is the part that gets lost in the "AI writes all the code" narrative. The code is the easy part. It was always the easy part. The hard part is knowing what to build, why to build it, and how it fits into the system that already exists.
An AI agent can write a feature. It cannot understand why the previous architect made certain trade-offs, what implicit assumptions the existing system relies on, which stakeholders care about which outcomes, or how a change in one subsystem will cascade through the business process it supports.
That context, the messy, human, organisational context, is the orchestrator's primary raw material. And it's the thing that makes the role harder, not easier, than the old "just write the code" paradigm.
The competence drain
DHH, the creator of Ruby on Rails and one of the most respected voices in software development, described his experience with AI coding tools in strikingly physical terms: "I can literally feel competence draining out of my fingers."
He loves using AI for drafts, API lookups, and second opinions. But he keeps AI code in a separate window. He doesn't let it drive. His analogy: "You're not going to get fit by watching fitness videos. You have to do the sit-ups."
Dave Farley, author of the definitive book on continuous delivery, made the same point more directly: "AI is not going to replace software engineers. But it is going to expose the ones who never learned to think like engineers in the first place."
There's a real tension here. The orchestrator role requires more engineering judgement than the implementer role, not less. You need to evaluate AI-generated code for correctness, security, performance, and maintainability. That means you need to understand code at a level that many developers never reached even when they were writing it themselves.
The METR study found that developers using AI were 19% slower, yet believed they were 20% faster. That perception gap is what the competence drain looks like from the inside. You feel productive because the tool is doing things at machine speed. But your ability to critically evaluate what it produces is atrophying. And you don't notice until something breaks in production.
What the orchestrator needs to be good at
If the role is shifting from implementer to orchestrator, the skill profile shifts with it. Based on what I've seen work, both in my own practice and in the organisations I advise, here's what matters:
Problem decomposition. The single most important skill. An AI agent can execute a well-scoped task. It cannot take a vague business requirement and figure out what "done" looks like. The orchestrator breaks ambiguous problems into clear, testable units of work, each one small enough for an agent to handle and specific enough to verify.
Architectural judgement. AI generates code that works in isolation. The orchestrator ensures it works within the system: that the data model is consistent, the abstractions are right, the performance characteristics match the requirements, and the changes don't introduce subtle regressions elsewhere. This requires the kind of deep system understanding that only comes from experience.
Verification discipline. Gergely Orosz, one of the most followed voices in software engineering, identified this as the skill that's becoming more valuable as AI generates more code: the ability to read code critically and catch what's wrong. Not just syntax errors. Architectural flaws, security vulnerabilities, subtle logic errors that the agent will never flag. CodeRabbit's data shows AI produces 75% more logic errors and 8x more excessive I/O operations than humans. Someone has to catch those.
Context maintenance. The orchestrator holds the business context, the system context, and the user context in their head simultaneously, and translates that context into instructions the agent can act on. This is what Tanmai Gopal's "70% unwritten business context" means in practice: the orchestrator is the bridge between what the organisation needs and what the machine can do.
Knowing when not to use AI. This might be the most underrated skill. Not every task benefits from AI delegation. Some problems require the kind of deep, focused thinking that gets worse when you're managing parallel agent sessions. The best orchestrators I've worked with are deliberate about when they direct agents and when they sit down and think.
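These skills are judgement, not code, but the decomposition step can be made concrete. A hypothetical sketch of what a well-scoped unit of work might carry before it's handed to an agent (every field name here is illustrative, not a standard):

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    """One well-scoped unit of work for an agent session.

    The point is that each task carries its own definition of
    "done", plus the context the agent can't infer on its own.
    """
    title: str
    context: str                     # the unwritten business context, written down
    acceptance_criteria: list[str]   # what "done" looks like, concretely verifiable
    out_of_scope: list[str] = field(default_factory=list)  # guards against drift

    def is_verifiable(self) -> bool:
        # A task an agent can be trusted with has at least one
        # concrete, checkable acceptance criterion.
        return bool(self.acceptance_criteria)
```

A spec like `TaskSpec("Add rate limiting to /login", context="Brute-force attempts observed in prod logs", acceptance_criteria=["Return 429 after 5 failed attempts within 60s"])` is small enough for one session and specific enough to verify; a spec with an empty criteria list is a signal the decomposition isn't finished yet.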
The identity problem
There's something uncomfortable about this transition that doesn't get discussed enough.
An anonymous junior engineer at a large San Francisco tech company told the SF Standard: "I'm basically a proxy to Claude Code. My manager tells me what to do, and I tell Claude to do it. The skill you spent years developing is now just commoditised to the general public. It makes you feel kind of empty."
Michael Parker, VP of Engineering at TurinTech, put it differently: "I used to be a craftsman. Now I feel like a factory manager of Ikea. I'm just shipping low-quality chairs."
These aren't edge cases. A lot of experienced developers are grappling with the same feeling. The thing they were good at, writing elegant, efficient, well-structured code, is no longer the thing that matters most. The new thing that matters, orchestrating AI agents, maintaining context, verifying output, doesn't feel like craftsmanship. It feels like project management.
I think this is a temporary perception. Orchestration at its best is a craft, one that requires deeper understanding of systems, architecture, and engineering principles than implementation ever did. But it's a new craft, and the industry hasn't yet built the identity, the language, or the career ladders around it.
The organisations that figure this out first, that build roles, progression paths, and recognition structures for the orchestrator, will attract and retain the best engineering talent. The ones that don't will lose their best people to the companies that do.
What this means for engineering leaders
If you lead an engineering team, the transition from implementer to orchestrator isn't optional. It's happening whether you manage it or not.
The question is whether it happens deliberately, with clear expectations, appropriate training, and workflow design that sets people up to succeed, or whether it happens chaotically, with developers independently experimenting with AI tools, accumulating technical debt, and losing skills they'll need when the agent produces something wrong.
I'd start with three things:
Redefine what "senior" means. If your promotion criteria still centre on code quality and implementation speed, they're measuring the old game. Senior engineers in the agentic era are defined by judgement, system thinking, and the ability to direct AI effectively, not by how fast they can write a function.
Build verification into the workflow. AI-generated code needs different review practices than human-written code. The failure modes are different: more duplication, more security vulnerabilities, more plausible-looking logic errors. Your code review process needs to account for this explicitly.
Invest in the transition. Your best implementers won't automatically become your best orchestrators. Some will. Others will need support, training, and time. The ones who make the transition will be extraordinarily valuable. The investment is worth it.
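As a sketch of the second recommendation, a pre-merge gate might route agent-authored or unusually large changes onto a stricter human review path rather than the default one. The commit-trailer convention and the threshold below are assumptions for illustration, not an established standard:

```python
def needs_extra_review(commit_message: str, insertions: int) -> bool:
    """Decide whether a change goes through the stricter AI-code review path.

    Heuristics only: agent-authored commits and large diffs get a
    human architectural review before merge, not just the automated
    checks. Both signals here are illustrative assumptions.
    """
    # Assumed team convention: agent-written commits carry a co-author trailer.
    agent_authored = "Co-Authored-By:" in commit_message
    # Illustrative threshold: large diffs hide plausible-looking logic errors.
    large_change = insertions > 400
    return agent_authored or large_change
```

The specific signals matter less than the principle: the review process should know which code came from an agent, because the failure modes it needs to catch are different.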
The title "software engineer" might survive. Or it might evolve into something else. What won't change is the need for people who can think clearly about complex systems, make good decisions under uncertainty, and translate business problems into technical solutions. That's always been the job. The tools just changed.
If you're navigating this transition in your engineering organisation and want to think it through with someone who's been in the seat, get in touch.
Related: The training ladder is broken · AI made developers 19% slower: here's what they were doing wrong · Agentic AI in 2026: what actually works in production