Hi there,
Something uncomfortable is on the record about experienced developers using AI. What they think is working, and what actually is, have stopped being the same thing.
Last year, METR ran a controlled trial. 16 senior open-source developers, hundreds of tasks from repos they already maintain, randomized whether AI was allowed.
The devs predicted AI would make them 24% faster. After the trial, they still felt 20% faster. The measurement said they were 19% slower.
Not because AI isn't powerful. Because they were running the old playbook with a new tool plugged in.
The developers getting real speed from AI dropped five reflexes they used to be rewarded for. Here they are.
1. Writing code before you've specified what it should do
Stack Overflow ran its 2025 developer survey, the biggest annual temperature-check in the industry. Two-thirds said their biggest frustration with AI is "almost right, but not quite". Almost right is what you get when the AI fills in what you didn't specify.
The old reflex is to open the editor, start typing, and let the structure show up as you go. It worked when you were both the thinker and the typist, but now it hands the thinking to the agent.
Every feature that goes straight from "users should be able to X" to code is one where your agent quietly makes a dozen decisions for you. Getting some of them wrong is the default, not the exception.
Do less on the keyboard, and more before you open it.
2. Reviewing AI output line by line
That's how you review a junior dev who might have misnamed a variable. AI doesn't misname variables. It writes clean, confident code.
The things that break production sit a layer above the line. Missing validation, wrong abstractions, a duplicate utility you already had, a package pulled in for no reason.
A 2025 analysis of AI-generated code on public GitHub found security weaknesses in 29.5% of Python snippets and 24.2% of JavaScript ones. Flaws like these rarely announce themselves in any single line.
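Here's a toy sketch of what that looks like in practice (the function and names are invented for illustration, not drawn from any real codebase). Every line reads cleanly on its own; the problem is what's absent:

```python
def update_email(users: dict, user_id: str, new_email: str) -> bool:
    """Update a user's email address."""
    user = users.get(user_id)
    if user is None:
        return False
    # Each line above is tidy and correct in isolation. The flaw is
    # structural: nothing validates new_email, so "" or "not-an-email"
    # is accepted without complaint.
    user["email"] = new_email
    return True

users = {"u1": {"email": "old@example.com"}}
update_email(users, "u1", "not-an-email")  # succeeds silently
```

Line-by-line review approves every statement here. Only a review of what the function should reject would catch it.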
3. Measuring your value by how much code you wrote this week
The old scoreboard was lines, commits, and PRs merged. AI lifts all three.
Google's 2025 DORA report records the other half. More instability, more change failures, more rework. Code ships faster, breaks more often, and takes longer to fix.
Volume was never value. The engineers who'll still be sharp in three years are already counting something else. Decisions made, systems designed, and the things that didn't break because you caught them before they ever got written.
4. Treating AI as a tool on the side
Open a chat, paste a question, copy-paste the answer back. Every time you do that, a new tool is running through an old workflow.
GitClear's 2025 analysis of 211 million lines of code shows what happens when that scales. Duplicated blocks of five or more lines grew eightfold in 2024. Refactoring dropped from a quarter of all changes in 2021 to under 10% in 2025.
More new code. Less shaping of what's already there.
The real shift is building AI into the loop. Generation, review, and refinement stop being three tabs and become one motion.
5. Trusting passing tests as proof the code works
Tests used to verify what you specified. When the same AI writes both the code and the tests in one session, the tests verify what the AI inferred you meant.
Green CI doesn't mean the code does what you asked. It means the AI's interpretation of what you asked is consistent with itself.
If the AI also wrote your tests, you don't have tests. You have a second draft of the same interpretation.
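A toy illustration of that second draft (an invented example, not from any real project). Suppose the spec said discounts apply to orders over $100, the code inferred "$100 or more", and the test was written from the same inference:

```python
def discounted_total(total: float) -> float:
    """Apply a 10% discount to qualifying orders."""
    if total >= 100:  # the spec said strictly over $100
        return round(total * 0.9, 2)
    return total

# The test encodes the same misreading, so CI stays green
# while the boundary behavior violates the spec.
def test_discount_at_threshold():
    assert discounted_total(100) == 90.0  # passes; the spec says it shouldn't

test_discount_at_threshold()
```

Green CI here is agreement between two copies of one interpretation, not evidence the spec was met.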
None of this is technical, and that's the point. The tools aren't the hard part anymore. What's left is judgment, the call on where to let AI run and what stays yours.
That's the work. And it's what we mean when we say unlearn.
Unlearn opens Tuesday, April 28 at 14:00 CEST.
Alex GS
Tech Education Lead
P.S. The hardest thing about unlearning is that nobody tells you when to start. The feedback loop is slow enough that by the time it's obvious, you've spent a year building the wrong muscles.