AI Didn't Make Me a 10x Better Engineer. Discipline Did.
Written by Imran Gardezi, 15 years at Shopify, Brex, Motorola, Pfizer at Modh.
Published January 28, 2026.
10 minute read.
Everyone's talking about how AI makes engineers 10x more productive.
That's bullshit.
"AI didn't make me faster. It just removed every excuse I had for staying slow."
I wasn't blocked by skill. I was blocked by discipline.
And by the end of this, you'll know exactly why, and what to do about it.
The Excuse Era Is Over
For years I told myself stories. "I need to learn more frameworks before I can ship." "When I have more time. More money. More clarity." Story after story. And they felt legitimate. I wasn't making excuses. I was being "strategic." That's what I told myself. The problem with strategic procrastination is that it feels productive. You're reading, researching, planning. You can point to the activity and convince yourself you're making progress. But nothing ships.
Don't know the syntax? AI knows. Never built a mobile app? AI scaffolds the whole thing. Every skill I said I needed to acquire first, AI provides access to it instantly. The barrier to entry for any technical skill has effectively collapsed to zero. You can't hide behind "I don't know how" anymore because AI knows how. The only question left is whether you'll do the work.
"And you know what's left when excuses disappear? Just you. And your discipline. Or lack of it."
AI is a mirror. It reflects back whatever you bring to it. Bring clarity, get clarity. Bring confusion, get more confusion. This is why two engineers can use the exact same AI tools and get wildly different results. The tool isn't the variable. The person directing the tool is the variable.
For someone with ten years of experience, AI is a turbocharger. All those patterns I've seen, all those mistakes I've made, AI lets me express that experience faster. I know what good architecture looks like, so I can prompt for it, recognize it, and refine it. The AI accelerates the execution of judgment I already possess.
For someone without experience? AI is dangerous. You're generating code you don't understand. Shipping systems you can't debug. Building technical debt at 10x speed. The output looks professional, passes basic tests, and goes to production. Then six months later, something breaks in a way the developer can't diagnose because they never understood the code in the first place.
But here's where this gets interesting. AI can be the best teacher you've ever had. Or the worst crutch. It depends on whether you use it to learn or to skip learning. If you treat every AI interaction as a learning opportunity, asking why the code works, what the tradeoffs are, what would break, you build genuine understanding. If you treat AI as a copy-paste machine, you build a house of cards.
Vibe Coding Is Dangerous
Everyone's talking about "vibe coding." Just prompt and ship. Let the AI handle it.
"Vibe coding works, if you're a veteran with a bayonet."
You know what good code looks like. You can spot when AI goes off the rails. You know when to stop prompting and start thinking. You have taste. For a senior engineer, vibe coding is shorthand for "I'm using AI to execute decisions I've already made." The prompts are precise because the thinking was precise first.
But if you're inexperienced? Vibe coding is a kid with a machine gun. The output looks impressive. The destruction is silent and delayed. You won't know what went wrong until production tells you, and by then the damage is done.
Here's a scenario I keep seeing. Junior developer uses AI to build an auth system. Works. Tests pass. Ship it. Six months later, security vulnerability. The AI generated token validation that didn't check expiry correctly. The developer didn't catch it because they didn't know what correct token validation looks like. This isn't a hypothetical. I've personally reviewed codebases where AI-generated auth code had subtle flaws that would have allowed session hijacking. The code looked clean. The tests passed. The vulnerability was invisible to anyone who didn't already know what to look for.
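The article doesn't show the flawed code, so here's a hedged sketch of the kind of check that was missing, assuming a JWT-style token with an `exp` claim. The names (`decode_payload`, `is_token_valid`) are mine, not from the reviewed codebase, and signature verification is deliberately out of scope (a real system must verify the signature first, e.g. with PyJWT):

```python
import base64
import json
import time

def decode_payload(token: str) -> dict:
    """Decode the payload segment of a JWT-style token (header.payload.signature).

    Signature verification is omitted here for brevity; production code must
    verify the signature before trusting any claim in the payload.
    """
    payload_b64 = token.split(".")[1]
    # Restore stripped base64url padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_token_valid(token: str, now=None) -> bool:
    """Reject tokens with a missing or past `exp` claim.

    The subtle flaw described above: AI-generated validation often checks
    structure and signature but never compares `exp` to the clock, or treats
    a missing `exp` as "never expires". Both must fail closed.
    """
    now = time.time() if now is None else now
    try:
        claims = decode_payload(token)
    except (IndexError, ValueError):
        return False  # malformed token: fail closed
    exp = claims.get("exp")
    if not isinstance(exp, (int, float)):
        return False  # missing expiry: fail closed, not open
    return now < exp
```

The point isn't this particular function. It's that nothing in the happy path tells you the expiry comparison is absent; only a reviewer who already knows the checklist will notice.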
"Treat AI like a junior developer. You wouldn't merge a junior's PR without reviewing it. Don't merge AI's code without reviewing it either."
If you're a team of one, you ARE the reviewer. That's why judgment matters more than speed.
Experience isn't about writing code anymore. Experience is about judgment. Knowing what to build. What NOT to build. When the AI is bullshitting you. That takes reps. Reps you can't shortcut with a better prompt.
The Jevons Paradox
Let me tell you about the Jevons Paradox.
In the 1860s, an economist noticed something counterintuitive about the steam engine James Watt had improved a century earlier. Everyone thought more efficient steam engines would mean less coal. The opposite happened. More efficiency led to more usage. Total coal consumption went UP. The efficiency didn't reduce demand. It made steam power economically viable for applications that were previously too expensive, creating entirely new categories of consumption.
"Same thing with AI and work."
AI makes coding more efficient. But that doesn't mean there's less work. The bar goes up. What used to take a team of five to build? One person can build now. So expectations adjust. Solo founder who built an MVP in six months? Now expected to do it in six weeks. The efficiency gains don't give you breathing room. They raise the baseline of what's considered acceptable output.
Here's a pattern I keep seeing. Team adopts AI tools. Productivity goes up. And within months, the expectations go up faster. I watched a small team double their output, and still feel more pressure than before. The founder wanted more features. The board asked why they needed so many engineers. Same people. Same hours. Twice the output. And somehow, more pressure. The productivity gain got absorbed into higher expectations almost immediately, leaving the team running faster on the same treadmill.
"AI didn't make the work disappear. AI made the expectations escalate."
The Two-Prompt Rule
So what actually makes you faster? Here's the rule that changed everything for me.
In the last three teams I worked with, developers were spending more time in ChatGPT than in their editor. That's not productivity. That's procrastination with extra steps. They'd craft elaborate prompts, iterate on the output, ask follow-up questions, refine the response, and two hours later they had a solution they could have written in thirty minutes if they'd just opened the editor and started typing.
"The prompt is not the work. The work is the work."
For research and exploration, prompt all you want. Go deep. But for execution? Here's my rule.
Two prompts. Max.
First prompt: feed it your spec and constraints, let it generate. Second prompt: have it review against your test cases. Two prompts. Not twenty. Constraints in. Verified output out. That's it. Pick. Execute. Debug. Ship. The discipline of limiting yourself to two prompts forces you to do the thinking upfront. You can't be vague in two prompts. You have to know what you want before you ask for it.
Now, for anything touching security, payments, or data integrity, throw the rule out. Slow down. Review every line. AI doesn't get a free pass on the stuff that can sink your company. These are the areas where a subtle bug can cost you everything: customer trust, regulatory compliance, real money. The two-prompt rule optimizes for speed in low-risk areas so you can invest your careful attention where it actually matters.
For everything else? Last month I rebuilt an existing API layer, eight endpoints, auth middleware, error handling, from a spec I'd already mapped out. I had the patterns in my head. Two prompts in. Shipping in under an hour. Old me would have spent the morning in ChatGPT "exploring options." If it's wrong, I find out fast. And fix it. That's faster than prompting my way to perfection.
It's about stopping yourself from hiding in prompts instead of building.
The Discipline Stack
There are three layers to this discipline stack, and most people only think about the third one.
Layer one: Environmental discipline.
Phone in another room. Slack closed. One terminal. One editor. One problem. The physical setup matters more than people admit. Every notification, every open tab, every visible distraction is a decision your brain has to make: do I engage with this or stay focused? Eliminate the decisions by eliminating the stimuli.
I don't start coding until I can describe the problem in one sentence. If I can't, I'm not ready to code. I'm ready to think more.
Here's an engineering ritual I swear by. Before I open the editor, I write the commit message. Not the code. The commit message. "Add rate limiting to auth endpoints." That's it. Now I know exactly what done looks like. Everything else is noise. This one practice has saved me more time than any AI tool because it prevents the most expensive kind of waste: working on the wrong thing.
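To make the scope of that one-sentence commit concrete: "Add rate limiting to auth endpoints" bounds to something roughly this size. A hypothetical illustration, not the article's code; the class name and numbers are mine, and a real deployment would keep one bucket per client behind middleware:

```python
import time

class TokenBucket:
    """Minimal in-memory rate limiter: refills `rate` tokens per second,
    allows bursts up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if this request fits within the limit."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

When the commit message is written first, "done" is unambiguous: the bucket exists, it's wired to the auth routes, requests over the limit get rejected. Everything outside that sentence waits for another commit.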
Layer two: Mental discipline.
Fewer decisions, better decisions. Decision fatigue is real. Every decision drains the tank. By 3pm, the tank is low. That's when you make bad decisions, or avoid them entirely. The best engineers I've worked with don't have more willpower than everyone else. They structure their days so that willpower isn't required. The hard decisions happen in the morning. The routine execution fills the afternoon.
The two-prompt rule IS mental discipline. Don't waste decision energy on prompt optimization. What to work on? Whatever's highest priority, already decided. Which approach? First reasonable option.
Another ritual: I timebox every task. Thirty minutes. If I'm not making progress in thirty minutes, I stop and re-scope. Either the problem is bigger than I thought, or I'm solving the wrong problem. Both are signals. Not failures. The timebox prevents the two-hour rabbit hole where you're too deep to see that you took a wrong turn an hour ago.
Layer three: Execution discipline.
The best teams I've worked with have one non-negotiable ritual: deploy every single day. Not "when it's ready." Every day. That constraint forces you to break work into small pieces. Small pieces mean small risks. Small risks mean fast recovery. That's not reckless. That's disciplined.
"Tomorrow morning, try this. Before you open your editor, write the commit message first. Just once. See what happens."
The Wake-Up Call
Now here's the story that really changed me.
I still remember my first day at my first real dev job. Someone said, "Just clone the repo from GitHub." I had no idea what Git even was. My stomach dropped. I stood there staring at the screen. Everyone else was typing. I was frozen. I thought: this is it. I'm about to be exposed as a fraud. That feeling, the imposter syndrome of being surrounded by people who seem to know something you don't, never fully goes away. But what you do with it determines everything.
That night I went home and told my now-wife I'd blown it. She asked what happened. I couldn't even explain it properly. I just said, "I didn't know something everyone else knew, and they're going to figure out I don't belong there."
Then I studied. All night. Git clone. Git commit. Git push. Over and over until my eyes burned. By Monday, I was pushing commits like I'd been doing it for years. Nobody ever knew. That's execution discipline. Not talent. Not tools. Just refusing to stay stuck. The gap between feeling stuck and being stuck is action. One night of discomfort closed a gap that could have ended my career before it started.
Then a few years later, a kid lapped me.
Two years in the industry. He shipped a full SaaS, week by week, in public, while I was building market-sizing spreadsheets and waiting until I felt ready. Same tools. Same AI. Same twenty-four hours in a day. I had more experience, more knowledge, more skills. He had more discipline.
Every Friday he'd post an update. Week one: landing page live. Week two: auth working. Week three: Stripe integration. Week four: first paying customer. Each post was a punch to my ego because I had no excuse. I could have done everything he was doing, faster, with better architecture. But I wasn't doing it. He was.
Meanwhile I was reading articles about market sizing. Watching videos about go-to-market strategy. Building spreadsheets. Researching. Always researching.
"He shipped something every week. I researched something every week. He had users. I had bookmarks."
That was my wake-up call. Not a gentle one. A gut punch. Experience means nothing without execution. He proved that.
The Real 10x
Let me show you what the real 10x actually looks like.
Old world: most of your day was boilerplate. Debugging. Searching Stack Overflow. The actual thinking, strategy, architecture, trade-offs, was a sliver of your time. You'd spend four hours writing code and twenty minutes making decisions. The ratio was heavily skewed toward mechanical work.
New world: AI handles all of that. What's left? The thinking. Strategy. Architecture. Decisions. Judgment. Now the ratio flips. You spend twenty minutes generating code and four hours making decisions about what to generate, how to structure it, and whether the output actually solves the problem. The engineers who thrive in this world are the ones who developed the decision-making muscle, not just the typing muscle.
A lot of heads-down engineers never developed that muscle. They were so busy with the execution grind that they never learned the strategic layer. Now the grind is gone and they're exposed.
I worked with a founder who had a five-person dev team and a four hundred thousand dollar budget. Eight months of "building." Every day, developers logged in and worked hard. But whenever the founder asked to see the product? Excuses. "Almost there." "Just fixing a few things." When they finally launched, core features were broken. Users churned immediately. The team was doing the mechanical work diligently. Nobody was making the strategic decisions about what to ship, when, and in what order. That's the gap AI can't fill.
"If you spent ten years writing boilerplate, AI took your job."
"If you spent ten years making architectural decisions, understanding trade-offs, AI gave you a weapon."
What did you spend your years doing? That determines whether AI helps you or replaces you.
The Close
Here's the truth nobody wants to hear.
"What's missing isn't tools. It's discipline."
"AI makes disciplined people unstoppable. AI makes undisciplined people dangerous."
The tools matter. But tools in undisciplined hands are just expensive distractions. This is about what you do between the prompts. Your habits. Your discipline. Your willingness to ship. The difference between an engineer who uses AI to ship production software every week and an engineer who uses AI to generate code they never deploy isn't the tool. It's the person.
Want to test whether your team has real discipline or is just vibe coding with extra steps? We built a free AI Code Review Checklist: seven questions your team should be answering before any AI-generated code hits production. And if you need a team that already ships with this kind of discipline, that link is there too.
"Build your discipline, and AI becomes a weapon. Ignore it, and no tool in the world will save you."
Two prompts or twenty. Ship daily or research forever. The tools are identical. The discipline isn't.
Key Takeaways
- AI amplifies existing capability, it doesn't create it. If you bring fifteen years of architectural patterns and hard-won judgment to AI, it turbocharges your output. If you bring confusion and no experience, AI generates professional-looking code you can't debug, review, or maintain. The tool is neutral. What you bring to it determines the outcome.
- The real constraint is discipline, not technical skills. Every excuse for not shipping (not enough time, not enough knowledge, not enough clarity) evaporated when AI arrived. What's left is whether you have the discipline to pick a problem, scope it, execute, and ship. Most engineers don't lack capability. They lack the habit of finishing.
- "Vibe coding" only works if you have years of reps behind you. A senior engineer vibe-coding is shorthand for executing decisions they've already made. A junior engineer vibe-coding is generating code they can't evaluate. The same activity produces radically different outcomes depending on the experience behind it. Treat AI output like a junior developer's PR: always review before merging.
- The Jevons Paradox means AI won't reduce your workload. Historically, efficiency gains don't reduce demand. They raise expectations. Teams that double their output with AI tools find that expectations triple. The productivity gains get absorbed into higher baselines almost immediately. AI doesn't buy you time. It buys you higher stakes.
- The Two-Prompt Rule separates builders from prompt optimizers. Limit execution tasks to two prompts: one to generate, one to verify. This forces you to do the thinking upfront and prevents the trap of hiding in prompts instead of building. Save your careful, unlimited review for security, payments, and data integrity where subtle bugs can sink the business.
Frequently Asked Questions
Will AI replace software engineers, or does it just make them faster?
AI replaces engineers who spent their careers doing mechanical work: writing boilerplate, debugging routine issues, searching Stack Overflow. For those engineers, the value they provided is now automated. But AI gives experienced engineers a weapon. If you spent years developing architectural judgment, understanding trade-offs, and making strategic decisions about what to build, AI lets you execute that judgment at 10x speed. The divide isn't "engineers vs. AI." It's "engineers with judgment vs. engineers without it."
How do I use AI coding tools without creating technical debt?
Treat AI-generated code like a junior developer's pull request. Never merge it without review. Use the Two-Prompt Rule for execution: one prompt to generate, one to verify against your test cases. For anything touching security, payments, or data integrity, slow down and review every line. The biggest source of AI-created technical debt is code that passes tests but has subtle flaws (like auth tokens that don't check expiry) that only someone with experience would catch. Build a review checklist and enforce it for every piece of AI-generated code.
What is the Jevons Paradox and how does it apply to AI productivity?
The Jevons Paradox, observed in the 1860s with steam engines, shows that when a resource becomes more efficient to use, total consumption increases rather than decreases. Applied to AI: coding becomes more efficient, but the total demand for software goes up. What used to take a team of five now takes one person, so expectations adjust. Solo founders who shipped MVPs in six months are now expected to do it in six weeks. Teams that double their output with AI face boards asking why they need as many engineers. The efficiency gains don't create breathing room. They raise the bar.
What daily habits separate disciplined engineers from undisciplined ones?
Three layers. Environmental discipline: phone in another room, Slack closed, one problem at a time. Write the commit message before writing any code so you know exactly what "done" looks like. Mental discipline: timebox every task to thirty minutes. If you're not making progress, stop and re-scope rather than going down a rabbit hole. The Two-Prompt Rule prevents wasting decision energy on prompt optimization. Execution discipline: deploy every single day. This forces small, low-risk changes and fast recovery. The compound effect of these habits is staggering.
Is "vibe coding" a legitimate engineering practice or a dangerous trend?
It depends entirely on who's doing it. For a veteran engineer with ten-plus years of pattern recognition, vibe coding is a legitimate acceleration technique. They know what good code looks like, can spot when AI goes off the rails, and have the judgment to course-correct. For an inexperienced developer, vibe coding produces code they can't evaluate, debug, or maintain. The output looks professional and passes basic tests, but subtle flaws (security vulnerabilities, data integrity issues, architectural dead ends) go undetected because the developer doesn't know what to look for. The practice itself is neutral. The experience behind it is everything.