AI coding tools will make you significantly faster. That's the promise. After two years of using AI daily, I can tell you: the promise is real. But not in the way the headlines suggest.
The studies show impressive numbers. Junior developers completing tasks up to 40% faster. Code written in half the time. Productivity gains that would make any manager excited.
What they don't show is the learning curve. Or the months where everything felt slower. Or the strange realization that the bottleneck was never what I thought it was.
First, I got slower
The first months of coding with AI were frustrating. I'd ask for something, get something close but not quite right, and then spend time fixing it. I'd accept a suggestion, realize it didn't fit, and rewrite it anyway. The tools promised speed but delivered friction.
I started questioning whether I was doing something wrong. I was spending more time reviewing AI output than I would have spent just writing the code myself.
So I kept experimenting. Different tools, different approaches. Slowly, something started to click.
What the studies miss
Look closer at the research and patterns emerge.
A study across Microsoft, Accenture, and a Fortune 100 company found that junior developers gained 27-39% in productivity. Senior developers? Only 8-13%. Yet seniors end up shipping more AI-generated code than juniors.[1] The difference isn't productivity. It's judgment. A randomized trial with experienced open-source developers found something even more surprising: they were 19% slower with AI. But here's the fascinating part: they believed they were 20% faster.[2]
The studies measure task completion. They don't measure the learning curve. Microsoft research suggests it takes eleven weeks before developers see real productivity gains. Eleven weeks of feeling faster while actually being slower.
The real bottleneck
For me, typing was never the bottleneck. I've been writing code for thirty years. My fingers know where the keys are. Syntax flows without thinking.
But knowing what to type? That's different.
The real bottleneck is understanding. What problem am I actually solving? What edge cases exist? How does this change affect the rest of the system? What's the right structure for this? What will break? What assumptions am I making?
AI can help with that. It's a brilliant sparring partner. Maybe the best I've ever had. It can ask questions I hadn't considered, suggest angles I might have missed. But it can't do the understanding for me. You can't outsource clarity about a problem you haven't fully grasped.
When I didn't know what I wanted to build (or how to structure it), AI made things worse. It generated confident-looking code for the wrong problem, and filled the void with plausible nonsense.
The silence before building
The most important part of my work is silent.
No keyboard. Sometimes no screen. Just thinking about what needs to happen. Walking through the problem in my head. Or sketching on paper. Asking questions, of colleagues or of myself.
Leslie Lamport, who won the Turing Award for his work on distributed systems, once said that code was never meant to be a medium for thought. "It constrains your ability to think when you're thinking in terms of a programming language," he argued. Code makes you focus on the trees while missing the forest.
He's right. The best solutions come before you touch a keyboard. They come in the shower, on a walk, in a conversation. They come when you're not trying to write code.
That silence can't be skipped. AI can be a thinking partner during this phase, but only if you're driving the conversation, not waiting for solutions. Used wrong, it fills the silence with noise. Suggestions for problems you don't understand yet. Solutions to the wrong question.
The thinking has to come first. AI can help you think, but it can't think for you.
Directing AI is a skill
I build things many times faster than before. Not only because AI improved, but because I learned how to use it. When to invoke it. When to ignore it. How to phrase a question so it understands what I mean. When to trust the output and when to think for myself. Which tasks it handles well and which ones it makes worse.
This took months. Not days, not weeks. Months of wrong turns and wasted time and slowly building intuition.
The experienced developers in those studies, the ones showing only small gains? They were probably still learning. Still in the phase where AI creates as many problems as it solves.
A new layer of skill
Programming with AI is a different skill than programming without it.
It doesn't replace experience. It requires a new layer of experience on top of everything you already know. You need enough knowledge to judge the output, enough context to ask the right question, and enough taste to know when it's subtly wrong.
The data backs this up. Projects that rely too heavily on AI-generated code see a 41% increase in bugs.[3] Experienced developers have learned, often the hard way, that AI confidence and AI correctness are different things.
Junior developers show bigger gains on benchmarks because their bottleneck was knowledge. They didn't know the syntax, the patterns. AI fills that gap.
For experienced developers, knowledge was never the problem. The challenge was always making the right decisions. That same judgment now helps them know when to trust AI. So they end up shipping more AI-generated code, not because AI helps them more, but because they know when it works.
When it works
AI works when I know what I want and how I want to build it.
When I have a clear picture in my head and only the execution remains, it's like having someone type for me, faster than I ever could, while I move on to the next problem. I describe what I need, AI produces code, I review and adjust, we move on. Fast. Fluid. As if we're working together.
But when I'm still discovering what I want, when the problem itself is unclear, AI gets in the way. It answers before I've finished asking. It fills the silence I needed.
The skill is knowing which phase you're in. And using AI differently in each: as a thinking partner when exploring, as a builder when executing.
The wrong metric
"How much faster are you with AI?" is the wrong question.
There's an old distinction in software development: building the thing right versus building the right thing. Junior developers obsess over the first. Is the code clean? Does it follow best practices? Senior developers know that none of that matters if you're building the wrong thing.
AI is exceptionally good at building the thing right. It knows the patterns and the best practices, and it can write clean code faster than you can type it. It just can't choose which pattern fits your situation.
And it can't tell you if you're building the right thing. It can't tell you that the feature nobody asked for will solve the problem everyone has. It can't tell you that the elegant solution will create three new problems. That requires judgment and experience that no model has.
Speed assumes you know where you're going. It assumes the destination is fixed and only the travel time varies. But most of software development isn't like that. Most of it is figuring out where you're going in the first place.
The right question is: "Do you know what you're building?"
If yes, AI makes you much faster. The typing that was never really a bottleneck becomes even less of one. You can focus on the next problem while AI handles the current one.
If no, AI makes you feel faster. It gives you the sensation of progress. Code appears on screen. Things seem to happen. But you're not moving toward the right destination. You're just moving.
I'm faster with AI now. Much faster. But only because I spent months learning when not to use it.
The bottleneck was never typing. It was knowing what to type. AI can help you figure that out, but it can't figure it out for you.