The code is yours. All of it.
AI generates the code. You sign the commit. That's not a workflow detail — it's the entire conversation we're not having.
I use AI to write code every day: Claude Code or Cursor, depending on the context. I'm not here to defend abstinence or pretend the tools don't work. They work, and anyone who hasn't incorporated them into their workflow is wasting real time on tasks that don't deserve human attention.
But there's a conversation the industry is avoiding. And the longer it takes, the more expensive it gets.
AI generates. You sign. This is not a detail
42% of all committed code today was generated by AI. Developers estimate that number will reach 65% by 2027. Generation speed increased. Code volume increased. Accountability was not distributed along with it.
When production breaks at 3am, the one who opens the terminal isn't Cursor. It isn't Claude. It's you. And "the AI generated that part" isn't an explanation for any stakeholder, any postmortem, any user who lost data.
In 1999, Andrew Hunt and David Thomas wrote something that seemed obvious at the time: you own your code. The Pragmatic Programmer wasn't making a legal argument — it was making a professional one. If you wrote it, you understand it, you stand behind it.
Twenty-six years later, authorship got more complicated. The principle didn't.
The gap nobody wants to name
96% of developers don't fully trust the code AI generates. But only 48% always verify before committing. That gap isn't laziness. It's capitulation disguised as productivity.
CodeRabbit analyzed thousands of pull requests and found that AI code produces 1.7x more issues than human code — not syntax errors the linter catches, but logic bugs and edge cases that pass CI and explode in production. The code looks right. Tests pass. The problem surfaces when a real user hits a specific path the AI inferred incorrectly.
This has a name in systems safety literature: silent failure. And it's the most expensive kind of bug — because you don't know it exists until it's already cost you.
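A hypothetical illustration of the pattern (the function, numbers, and bug are mine, not drawn from any study cited here): pagination code that looks right, passes its happy-path test, and fails only when a user lands on an exact page boundary.

```python
# Hypothetical illustration of a "silent failure": code that looks right
# and passes the obvious tests, but breaks on a path nobody checked.

def total_pages_naive(total_items: int, per_page: int) -> int:
    # Plausible-looking generated output: correct for most inputs...
    return total_items // per_page + 1

def total_pages(total_items: int, per_page: int) -> int:
    # Correct version: ceiling division, at least one page, no off-by-one
    # when total_items is an exact multiple of per_page.
    return max(1, -(-total_items // per_page))

# Happy path: both agree, the test suite is green.
assert total_pages_naive(95, 10) == 10
assert total_pages(95, 10) == 10

# The path a real user eventually hits: an exact multiple.
assert total_pages(100, 10) == 10        # correct
assert total_pages_naive(100, 10) == 11  # silent extra empty page
```

The naive version survives CI because nobody wrote the boundary test; the cost only appears when a user paginates to a blank page 11.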
Pressure is not an excuse. It's a test
I've heard variations of the same sentence in different contexts: "the pressure to ship doesn't leave time to review properly." I understand the context. I don't accept the argument.
An experienced engineer who capitulates every time there's pressure isn't an experienced engineer; they're an engineer who hasn't yet learned that saying no is part of the job. Not heroism. Basic technical competence.
I've seen this happen far from AI: 49MB of page weight to load 4 headlines. No developer wakes up one day and decides to put 5MB of tracking JavaScript ahead of rendering any text. It's cumulative: one tracker here, an A/B test library there, a widget that "barely weighs anything." The degradation happens too slowly to trigger an alarm, yet fast enough to become irreversible.
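One way to make that degradation trigger an alarm is an explicit budget enforced in CI. A minimal sketch, with hypothetical paths and thresholds: fail the build the moment shipped assets creep past an agreed limit, instead of letting each "barely weighs anything" addition accumulate silently.

```python
# Minimal sketch of a performance-budget gate for CI.
# Paths and byte limits below are hypothetical examples.
from pathlib import Path

def check_budget(budgets: dict[str, int]) -> list[str]:
    """Return a violation message for every asset over its byte budget."""
    violations = []
    for rel_path, limit in budgets.items():
        asset = Path(rel_path)
        size = asset.stat().st_size if asset.exists() else 0
        if size > limit:
            violations.append(f"{rel_path}: {size} bytes over budget of {limit}")
    return violations

# Hypothetical budgets; in CI, exit non-zero when this list is non-empty.
BUDGET_BYTES = {"dist/app.js": 300_000, "dist/vendor.js": 500_000}
```

The exact numbers matter less than the existence of a number: once a limit is written down, exceeding it becomes a decision someone has to sign, not an accident nobody noticed.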
The same mechanism operates with AI. Each unverified line is a micro-decision in which you transfer risk without transferring responsibility. On the team you lead: who has real authority to refuse a merge nobody fully understood? Or has that become tech debt, a card in the backlog that nobody will ever resolve?
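One concrete way to give that authority teeth, on GitHub, is its CODEOWNERS convention combined with branch protection that requires owner review (the paths and team names below are hypothetical):

```
# .github/CODEOWNERS — hypothetical paths and teams.
# Later patterns take precedence, so the catch-all goes first.
# With "Require review from Code Owners" enabled in branch protection,
# these people can actually block a merge they don't understand.
*                @default-reviewers
/src/payments/   @payments-team
/infra/          @platform-leads
```

The config doesn't review anything by itself; it just makes "who can say no" an explicit, enforced answer instead of a cultural assumption.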
The problem being built in silence
A METR randomized controlled trial measured 16 experienced developers working on their own open-source repositories. Result: with AI, they took 19% longer. The same developers estimated beforehand they'd be 20% faster. After finishing, they still believed they had been faster.
The perception gap is as significant as the productivity gap. You feel faster because you're generating more. But generation without comprehension isn't speed — it's volume. And volume without accountability is debt disguised as delivery.
But there's something worse than the developer who doesn't verify their own AI code: the training pipeline for future engineers is being silently destroyed.
Stanford research found that employment of developers aged 22 to 25 fell nearly 20% since the 2022 peak. AI is exceptionally good at replacing exactly the work that trained junior developers: the simple CRUD, the boilerplate, the scaffolding. The tacit knowledge that makes a senior worth their salary (debugging instinct, the memory of architectural decisions that aged badly, understanding why the obvious solution fails under load) doesn't come from prompts. It comes from having made mistakes in real contexts and paid the price.
If the tasks that trained juniors disappear, where does the next generation of seniors come from? Who trains them? That bill will arrive, and it will be expensive.
What changes in practice
Treat every line AI generates the way you'd treat a PR from a junior developer you're mentoring. Someone competent enough to be useful, but who isn't yet accountable for the consequences of what they ship.
You read that PR carefully. You understand what it does. You think about the edge cases. You approve it or you don't. And when it goes to production, your name is on it — because you made the call.
That's not a workflow limitation. It's the definition of the job.
"You own your code" in 1999 meant: don't blame the compiler, don't blame the requirements, don't blame the framework. In 2026 it means the same thing — with one more line: don't blame the AI.
The question isn't how much AI you use.
The question is: who signs the commit?
If the answer is you, you'd better understand what's in it. Act like it.