The Space Between 'Coding Is Solved' and Monday Morning
Boris Cherny is right. And he's describing a different planet than the one most engineering teams live on.
Boris Cherny says coding is largely solved. He leads the Claude Code team at Anthropic, where his engineers ship anywhere from 10 to 30 pull requests a day. Yes, you read that right. Productivity per engineer is up 200%, compared to single-digit gains at companies with hundreds of engineers working on the same problem. Some engineers spend hundreds of thousands of dollars a month on tokens. Boris himself hasn’t edited a single line of code by hand since November.
And it isn’t just engineers at Anthropic; it’s PMs and designers too. Everybody codes. Even their finance staff (well, maybe not all of them). This is what AI adoption looks like at one of the most technically sophisticated companies on earth. So when he proclaims that coding is largely solved, he’s right.
If you’re an engineering leader who just happened to listen to that episode on your commute, it’s okay to feel a little gobsmacked. This is a real gap in the industry. You might even be sitting at your desk right now, wondering what any of it means for your team, where one faction of engineers has slowly become power users of AI tooling while others quietly avoid the tools. It’s hard to tell whether your “AI adoption” is actually working or just happening on paper.
You’ve likely been accumulating signals like this from other companies for months too. You hear it on podcasts; you come across articles reporting results that look nothing like yours. It’s hard to pin down exactly what’s going on, because it’s not as if those companies have access to tools your team doesn’t.
Boris is describing a real place. Yet most engineering leaders are unfortunately living in a different one. Something needs to change, but what is it?
What’s True at the Frontier
So what makes Anthropic’s results so impressive? Hint: it’s not better prompting or aggressive adoption mandates. And sure, for a frontier lab, it’s not exactly a shocker that they’re having success using their own tools. Claude Code is genuinely great. But that’s not where the secret sauce lies.
Anyone hired onto Boris’s team came in with the expectation that AI is the default way of working. Teams were kept small on purpose, forcing people to lean on AI tooling to keep up, and so the roles naturally blurred. Which is why you hear about engineers, PMs, designers, even finance folks all building things with Claude Code.
Okay, fine. They’re a frontier lab. Of course their organization uses AI well. But Sherwin Wu, the leader of the API team at OpenAI (where supposedly 95% of engineers use Codex daily and 100% of PRs are reviewed by it), admits something interesting about what he sees outside their walls: most enterprise AI deployments have negative (yes, negative) ROI. And the reason, he concedes, is that adoption was pushed top-down without bottom-up support.
The companies that see real change are the ones that modified how people work together, what roles mean, what gets rewarded, and what quality looks like. AI tooling is table stakes. Everything around it is what generates the results.
What’s True in the Field
A study [1] of over 10,000 developers tells a different story. Individual output jumped significantly: 21% more tasks completed, 98% more pull requests merged. The numbers look great; put them on the dashboard! But then review queues ballooned 91%, PR sizes grew 154%, and bugs per developer increased 9%. Individual productivity shot up, but the systems around those individuals couldn’t absorb it. And the net effect? No measurable company-level improvement. Ouch.
What adoption does exist is concentrated in a handful of power users; beyond them, it’s thin. Experienced engineers are often the most hesitant: they’ve seen enough technology rollouts to be cautious. Meanwhile, the engineers who adopted early are carrying a disproportionate load, and they’re burning out from it.
Code reviews have shifted. When a reviewer can’t tell what was thought through and what was generated, the conversation drifts from “is this change correct?” to “did you actually write this?” It’s an insidious problem, and it’s happening on teams that otherwise function well.
So What’s Actually in the Way?
So why haven’t these companies just done what Anthropic did? If the tools are the same, what’s actually in the way? The answer looks different depending on where you sit.
Engineers sense the rules are changing in real time, and the updated version hasn’t been written. Champions picked up the flag and ran with it, but the rest of the org hasn’t caught up. The most experienced people on the team have reservations, and those reservations deserve a real answer. Leaders can see the gap between the metrics and the reality, but the cause isn’t obvious.
What connects all of it is something that’s easy to feel and hard to talk about. It’s trust.
The Scribe and the Conversations That Aren’t Happening
The printing press changed everything. The tedious task of copying books by hand would soon go away forever. Boris talks about finding a historical document about a scribe from this era. The scribe was excited: finally freed from the tedium of the job, they could focus on what they were truly passionate about, their illumination. Painting beautiful ornate initial letters, decorative borders, miniature illustrations in gold leaf and vivid pigments. They cared about their art. They cared about the entirely separate craft of bookbinding.
Boris identifies with that scribe because coding was always instrumental for him. In high school he got a TI-83 graphing calculator, and one of the first things he learned to code was an algebra solver that let him cheat on a math test. He quickly shared it with his friends, and sure enough, they all started getting A’s. When the teacher found out, Boris was told (in kinder words) to knock it off. For him, code was never the point; it was a way to get something done. Now the tedious part of software engineering is being automated, and what’s left is the interesting part: figuring out what to build.
But not every engineer is Boris’s scribe. For some, the craft of code was the art. The elegant type system, the clean refactor, the naming that makes the next person’s life easier. The printing press didn’t free them from tedium. It automated the part they loved.
Boris knows this is painful. “In the meantime, it’s going to be very disruptive, and it’s going to be painful for a lot of people.” He joined Anthropic because he’d read enough science fiction to know “how this thing can go” and felt he had to help it go better. That impulse is sincere. And Anthropic makes a massive investment in making sure its AI behaves well. Unfortunately, there is almost no comparable investment in what happens to the humans on the other end of it.
Most engineers aren’t at either pole. They’re not Boris, thrilled about what’s next. They’re probably not convinced the sky is falling either. They’re somewhere in between, where things are probably going to be fine, but their job is changing in ways they can’t predict and nobody has checked in to ask how that’s going for them.
Those conversations need to happen. They mostly aren’t, because the loudest voices in this space are either marveling at what the frontier can do or warning about what it will destroy. The people in the middle don’t have a narrative. They just have a feeling that something is off.
Monday Morning
Boris Cherny is right that coding is largely solved, at Anthropic, for a team that was built from scratch around that assumption. For everyone else, the tools are available and the conditions aren’t. The gap between those two realities is where most engineering organizations are living right now, and it’s where the actual work of AI adoption needs to happen.
And it’s not about getting people to use the tools more. It’s about rebuilding the conditions that make those tools actually work inside a team of real people with real histories and real concerns about what’s changing.
It’s about whether people can trust the output, trust each other, and trust that the organization is going to be fair about how it all shakes out.
The companies that figure this out will be the ones where leaders had the conversations that aren’t happening yet. Where experienced engineers felt their judgment still mattered. Where “adoption” meant something deeper than usage metrics on a dashboard.
Nobody remembers which tools a company used. They remember how it felt to work there.
[1] Faros AI, The AI Productivity Paradox Report, 2025: faros.ai/ai-productivity-paradox