The $75K Playbook: How We Ship Production Software in 90 Days
Written by Imran Gardezi of Modh. 15 years at Shopify, Brex, Motorola, and Pfizer.
Published January 21, 2026.
19 minute read.
Every agency will tell you what they build. Beautiful case studies. Polished testimonials. Nobody shows you how.
Today I'm showing you how.
Five phases. Twelve weeks. Real case studies at every step. Exact pricing. No "contact us for a quote" nonsense.
"This is the Costco Sample. I'm giving you so much for free that you either take this framework and do it yourself, or you hire me because you know exactly what you're getting."
Fifteen years at Shopify, Brex, Motorola, Pfizer. I've rebuilt twelve projects that other teams abandoned. And every single one failed for the same reason.
Not bad developers. Not bad technology. Bad sequence.
They coded before they understood the problem. They built features before they mapped reality. They launched before they had monitoring. They walked away before anyone could maintain it. These aren't isolated failures. They're a pattern so consistent that I can predict which projects will fail in the first conversation.
Today I'm giving you the sequence that prevents all four. The Modh Delivery OS. Five phases. Let's go.
The Pattern
Here's what it looks like.
A founder has a business that works. Revenue is coming in. But operations run on spreadsheets, tribal knowledge, and the founder's calendar. They know they need software. So they hire a team.
The team starts building. Week one: database schema. Week two: API endpoints. Week three: a login screen. Month two: a dashboard that shows data nobody asked for. Month four: "we're almost done." Month six: "we need to rethink the architecture." This trajectory is so common it has become almost a cliche in the industry, yet founders keep walking straight into it because the pattern is invisible from the inside.
Month eight: the founder has spent sixty, eighty, a hundred thousand dollars. The software technically runs. But it doesn't match how the business works. Users hate it. The team that built it is gone. Nobody documented anything.
"That's Expensive Rubbish. I've seen it happen with $50K builds and $500K builds. The budget doesn't matter. The sequence does."
Dev Purgatory looks different. That's the team that's been building for eighteen months. Always "almost ready." Always one more feature away from launch. The roadmap keeps growing. Nothing ships. The founder's spouse is asking when this thing will make money. The insidious part of Dev Purgatory is that everyone feels productive. Commits are flowing, standup meetings happen, Jira boards move from left to right. But production users never see any of it.
Expensive Rubbish. Dev Purgatory. And the third, Flying Blind: software running in production with no monitoring, no alerting, and no idea anything is broken until a customer tells you. Three enemies. Three failures. Same root cause: wrong sequence.
"The fix isn't better developers. The fix is a system."
Phase 1: Diagnosis
Weeks one and two. Not what the founder thinks happens. Not what the org chart says. Not what the last team documented. What actually happens. Who does what. Where delays happen. Where errors happen. What data exists. What metrics matter. Diagnosis is about stripping away the narrative and looking at the raw mechanics of how the business operates every single day.
Let me give you a real example.
Client comes to me. Revenue is running on Google Sheets. Multiple sheets. The founder was the workflow engine. Every task, every handoff, every status update went through one person. If the founder was out sick, the business stopped. The workflow logic was tribal. It lived in people's heads, not in any system anyone could point to.
"We didn't open a code editor. We opened a whiteboard."
We mapped the money path. Lead. Onboard. Deliver. Renew. Four stages. Every dollar in that business flows through those four stages. If you don't understand the money path, you're guessing. And guessing at seventy-five thousand dollars is a bad idea. The money path is the single most important artifact in Diagnosis because it forces you to follow the revenue, not the feature requests.
We mapped who touches the work at each stage. Founder. Admin. Operator. Customer. Contractor. Five roles. Each role has different permissions. Different views. Different needs.
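The money path and the role matrix are data, and it's worth pinning them down as data before any code exists. Here's a minimal sketch in Python; the stage names and roles come from the mapping above, but the edit-permission matrix itself is illustrative, not the client's real rules:

```python
from enum import Enum

class Stage(Enum):
    LEAD = "lead"
    ONBOARD = "onboard"
    DELIVER = "deliver"
    RENEW = "renew"

class Role(Enum):
    FOUNDER = "founder"
    ADMIN = "admin"
    OPERATOR = "operator"
    CUSTOMER = "customer"
    CONTRACTOR = "contractor"

# Hypothetical permission matrix: which roles may change work at each
# stage. (Read access is broader; this only models who can edit state.)
CAN_EDIT = {
    Stage.LEAD:    {Role.FOUNDER, Role.ADMIN},
    Stage.ONBOARD: {Role.FOUNDER, Role.ADMIN, Role.OPERATOR},
    Stage.DELIVER: {Role.OPERATOR, Role.CONTRACTOR},
    Stage.RENEW:   {Role.FOUNDER, Role.ADMIN},
}

def can_edit(role: Role, stage: Stage) -> bool:
    return role in CAN_EDIT[stage]
```

Writing it this flatly is the point: every ambiguity you surface here is a schema argument you don't have in week six.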
We found seven places where things fell through cracks. Seven failure points. The founder knew about two of them. The other five were invisible. They'd been losing revenue for months and didn't know it.
One failure point: the handoff from onboard to deliver. When a new customer signed up, the founder manually assigned tasks. If the founder forgot (and the founder forgot about once a week) the customer sat there for three days wondering if anyone was working on their project. That's not a technology problem. That's a system problem. But you can't see it without Diagnosis.
Deliverables from Diagnosis: a system map showing every step of the workflow, a constraint list showing what limits throughput, and a failure modes document showing where things break. That's it. No code yet. These three documents become the foundation for every decision that follows. Without them, you're building software based on assumptions, and assumptions at this price point are unacceptable.
Why no code? Because the most expensive code is code that solves the wrong problem. Diagnosis prevents Expensive Rubbish. Two weeks of mapping saves six months of rebuilding.
"Diagnosis costs time. Skipping Diagnosis costs everything."
Phase 2: Scope + Sequence
Phase two. Scope + Sequence. Week three. Define the smallest system that creates value. Not all the features. Not the full vision. The smallest set that changes the business.
Core user journeys. Not twenty. Three. The three journeys that, if they work, transform the operation. Entity relationships showing what talks to what and how data flows through the system. Permission model defining who sees what, who does what, who approves what. Milestone sequence laying out what ships first, what depends on what, and what order creates the most value soonest. And a definition of done that means production-grade done, not "it works on my machine." Real users can touch it without breaking it.
Here's what that looks like in practice.
The sales tracking client. Sales reps tracked calls inconsistently. Some used spreadsheets. Some used sticky notes. Some used nothing. Managers couldn't trust the numbers. The data was so unreliable that reporting was useless. Managers were making decisions based on gut feeling, not reality. This is a common scenario in B2B companies: the data exists somewhere, but it's scattered across so many formats and systems that it might as well not exist at all.
We scoped three journeys. That's it. Three.
Journey one: a rep logs a call. Journey two: a manager reviews a week. Journey three: an admin sees the truth.
"Three journeys. That's all we needed to define the entire data model, the permission structure, and the milestone sequence."
We sequenced milestones: data ingestion first. Build the pipeline that pulls from calendar events and meeting metadata. Define what counts as an outcome and how attribution works. Get the data right before you build anything on top of it. Data quality is the foundation that every other feature depends on. If the ingestion layer is unreliable, every dashboard, every report, every AI feature built on top of it will be wrong.
Then dashboards. Rep view, manager view, admin view. Different data, different permissions, same source of truth.
Then AI features. Summaries, review scoring, actionable follow-ups. AI comes last, not first. You can't add intelligence to data you haven't defined. You can't automate a workflow you haven't mapped.
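"Get the data right first" is concrete, not a slogan. A minimal sketch of the ingestion step, assuming hypothetical field names for the calendar events (the real shape depends on the provider and on how "outcome" was defined during Scope):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record shape for one logged call.
@dataclass
class CallRecord:
    rep: str
    started_at: datetime
    duration_min: int
    outcome: Optional[str]  # None until a rep or a rule assigns one

def normalize_event(raw: dict) -> Optional[CallRecord]:
    """Turn one raw calendar event into a call record, or reject it.

    If this layer lets junk through, every dashboard and AI feature
    built on top of it is quietly wrong.
    """
    # Reject events that can't be attributed to a rep.
    if not raw.get("organizer_email"):
        return None
    start = datetime.fromisoformat(raw["start"])
    end = datetime.fromisoformat(raw["end"])
    duration = int((end - start).total_seconds() // 60)
    # A zero-minute or negative event is calendar noise, not a call.
    if duration <= 0:
        return None
    return CallRecord(
        rep=raw["organizer_email"],
        started_at=start,
        duration_min=duration,
        outcome=raw.get("outcome"),
    )
```

Every rejection rule is a scoping decision made explicit, which is exactly why ingestion ships before any dashboard does.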
And here's the part that surprises clients. The scope document is usually smaller than they expected. They come in thinking they need forty features. We show them they need eight. The other thirty-two are either nice-to-haves that can wait, or solutions to problems that don't exist yet.
"The smallest system that creates value. That's the scope. Everything else is noise."
Phase 3: Build
Phase three. Build. Weeks four through ten. Seven weeks of shipping. Not seven weeks of coding. Seven weeks of shipping. There's a difference.
Coding is writing software. Shipping is delivering value that real people use. We ship in vertical slices.
A horizontal layer is: build all the database tables, then build all the APIs, then build all the UI, then pray it fits together at the end. That's how most teams work. It's a disaster because integration problems don't surface until the end when everything needs to connect, and by then you've invested months of effort into components that don't fit together.
A vertical slice is: build one complete user journey. End to end. Database to UI. Shippable on its own. Testable by a real user. Valuable before anything else ships.
Back to the spreadsheet client. We built four vertical slices.
Slice one: onboarding flow. A new customer signs up. They're assigned to an operator. Tasks get created automatically. The founder doesn't touch it.
"The founder saw that working and the look on his face was worth more than any case study. 'Wait, it just... does it?' Yeah. That's the point."
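The core of that slice is small. A hypothetical sketch of the assignment logic, assuming an in-memory workload map and an illustrative task template (the real build sits on the database, but the rule is the same):

```python
# Hypothetical sketch of the onboarding slice: a signup picks an
# operator and creates the starter tasks with no founder involvement.
# TASK_TEMPLATE and pick_operator are illustrative names, not real code.

TASK_TEMPLATE = ["Kickoff call", "Collect assets", "Configure account"]

def pick_operator(operators: dict[str, int]) -> str:
    """Least-loaded operator wins; ties break alphabetically."""
    return min(sorted(operators), key=lambda op: operators[op])

def onboard(customer: str, operators: dict[str, int]) -> dict:
    op = pick_operator(operators)
    operators[op] += len(TASK_TEMPLATE)  # track the new workload
    return {
        "customer": customer,
        "operator": op,
        "tasks": [{"title": t, "status": "open"} for t in TASK_TEMPLATE],
    }
```

Twenty lines of rule replaced the founder's calendar as the routing engine. That's what a vertical slice buys you: the whole journey, end to end, small enough to ship in days.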
Slice two: task assignment and completion. Operators see their tasks. They mark them complete. Status updates happen in real time. The founder sees a dashboard instead of a spreadsheet.
Slice three: client portal. Customers log in and see their project status. No more "hey, what's the update?" emails. The information is there. Self-serve. This single slice eliminated dozens of weekly status-update emails and gave the founder back hours of communication overhead.
Slice four: reporting dashboard. Revenue by stage. Throughput metrics. Cycle time. The numbers the founder needed to make decisions instead of guessing.
"The founder stopped manually coordinating after slice two. Not after the full build. After slice two. Two weeks of vertical slices replaced forty hours a week of manual coordination."
Each slice protected the data model. That's non-negotiable. The data model is the decision that compounds more than any other. Get it wrong and every feature fights the schema. Get it right and adding features is straightforward.
Validate Before You Code. Never after. Every slice got tested by real users before we built the next one. Not by the founder. By the people who would use the system every day. The operators. The admins. The actual humans whose Monday morning would change because of this software.
We caught three critical issues during slice testing that never would have surfaced in a demo. One: operators needed to reassign tasks mid-stream, and we hadn't modelled that. Two: the notification cadence was too aggressive. Users were muting alerts by day two. Three: the reporting view the founder wanted was not the reporting view the operators needed. The founder wanted revenue metrics. The operators wanted workload balance.
Without real-user testing between slices, we would have built the wrong reporting dashboard. And nobody would have known until month three of production. By then, the operators would have gone back to the spreadsheet.
Phase 3: The Stack
PostgreSQL. Redis. React. Next.js. Boring. Proven. Documented. Your engineers can find answers on Stack Overflow at 2am. Try that with the latest trendy framework. The value of a boring stack compounds over years. Every hire you make already knows the tools. Every problem you encounter already has a documented solution. Every library you depend on has been battle-tested by millions of other production systems.
"The sexiest architecture is the one still working in three years."
Use proven tools for the eighty percent. Clerk for auth, Stripe for payments. Only custom-build the twenty percent that's unique to the client's business. Don't build commodities. Every hour you spend reinventing authentication is an hour you're not spending on the features that differentiate your product.
Daily deploys. Staging environment. If your deploys are exciting, something is wrong.
"Boring is the goal. Boring deploys. Boring tech. Boring, reliable software that makes money every day without drama."
Phase 3: The Hard Case
Let me give you a harder example. Because not every project is a clean start.
The complex workflow migration. Real operations with allocations and approvals that the existing system couldn't represent. The schema had been built for a simpler version of reality. The business grew. The workflows got more complicated. The schema didn't keep up.
Every workaround made the system more fragile. Custom fields jammed into places they didn't belong. Manual steps to cover what the software couldn't handle. Engineers scared to touch the codebase because one change cascaded into three unexpected failures. This is the natural endpoint of technical debt that accumulates when a schema doesn't evolve with the business. The workarounds become load-bearing walls, and eventually nobody knows which ones are safe to remove.
"We didn't patch the old system. We designed the future schema. The actual target state. What the system needs to look like to represent reality. Not reality from two years ago, but reality now."
Then we shipped in phases. Old system running in parallel. New system taking over one workflow at a time.
Phase one: the simplest workflow migrated to the new schema. Data flowing to both systems simultaneously. Users interacting with the new interface for one workflow while everything else stayed on the old one.
Phase two: the next workflow. Same pattern. Migrate. Validate. Confirm.
Phase three, four. Each one with its own rollback plan. If phase three breaks, you roll back phase three. Not the whole system. Not a weekend of downtime. Not a prayer. This incremental approach is critical because it contains the blast radius of any failure. When something goes wrong (and something always does), you know exactly which workflow to investigate and exactly how to revert it.
Operations continued uninterrupted. No big-bang migration. No "we're shutting down for a weekend." No data loss. The team that ran the old system barely noticed the transition.
"That's what production-grade means. Not flashy. Not exciting. Safe. Predictable. Boring."
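The parallel-run mechanics above reduce to one pattern: dual writes plus a per-workflow flag. A minimal sketch, with lists standing in for the two databases and invented workflow names; the real cutover logic lives at the data-access layer, not in application lists:

```python
# Hypothetical per-workflow cutover flags. Rollback is flipping one
# workflow's flag, never a big-bang revert of the whole system.
MIGRATED = {"invoicing": True, "allocations": False}

def write(workflow: str, record: dict, old_db: list, new_db: list) -> str:
    # During migration every record lands in both stores, so the new
    # schema can be validated against the old one before cutover.
    old_db.append(record)
    new_db.append(record)
    # Reads are served by whichever system owns the workflow today.
    return "new" if MIGRATED.get(workflow) else "old"

def rollback(workflow: str) -> None:
    """Contain the blast radius: revert one workflow, not the system."""
    MIGRATED[workflow] = False
```

Because both stores hold every record, you can diff them nightly and only flip a flag once the diff comes back clean.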
Phase 4: Stabilize
This is where most teams stop. "It works. Ship it. Move on."
"Working is not the same as stable. Working means the happy path functions. Stable means you know when something breaks, you know why, and you can fix it without panic."
Performance testing. Hit the system with real load. Find the queries that slow down with a thousand rows. Find the API calls that time out when three users hit them simultaneously. Fix them before real users find them for you. Performance issues are the kind of bugs that don't show up in development because your test data is small and you're the only user. In production, these same queries can bring the entire system to its knees.
Edge cases. What happens when a user submits a form twice? What happens when a payment webhook fires but the order was already cancelled? What happens when the internet drops halfway through a file upload? These aren't theoretical. These are Tuesday.
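Two of those Tuesdays, the double submit and the late webhook, share one defense: idempotency. A hedged sketch with in-memory stores standing in for the database; the event and order names are invented:

```python
# Idempotency keys make the double-submit and the late-webhook cases
# safe. In production these stores are database tables, not dicts.

processed: set[str] = set()   # webhook event ids already handled
orders: dict[str, str] = {}   # order_id -> status

def handle_payment_webhook(event_id: str, order_id: str) -> str:
    # Case 1: the same webhook delivered twice. Process it once.
    if event_id in processed:
        return "duplicate-ignored"
    processed.add(event_id)
    # Case 2: payment lands after the order was cancelled. Don't
    # resurrect the order; flag the mismatch for a human to review.
    if orders.get(order_id) == "cancelled":
        return "refund-needed"
    orders[order_id] = "paid"
    return "paid"
```

Note that neither branch throws. Edge cases get a defined outcome, not an exception, because an unhandled path in a payment flow is money in limbo.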
Monitoring and alerting. Not vanity metrics. Operational dashboards. Response times. Error rates. Queue depths. The numbers that tell you if the system is healthy. Good monitoring is like having a vital signs monitor on your system. You don't wait for the patient to flatline; you watch the trends and intervene before things go critical.
Sentry for error tracking. Slack alerts when errors spike. When something crosses a threshold, you find out in minutes. Not when a customer emails you three days later.
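The spike logic itself is simple, which is why there's no excuse for skipping it. An illustrative sketch, not Sentry's actual mechanism: count errors in a sliding window and fire when the rate crosses a threshold (the window and limit here are arbitrary):

```python
from collections import deque

WINDOW_SEC = 300   # look at the last five minutes
THRESHOLD = 25     # alert past this many errors in the window
_errors: deque = deque()

def record_error(now: float) -> bool:
    """Record one error; return True when an alert should fire."""
    _errors.append(now)
    # Drop errors that have aged out of the window.
    while _errors and _errors[0] < now - WINDOW_SEC:
        _errors.popleft()
    return len(_errors) > THRESHOLD
```

When `record_error` returns True, that's your Slack message. Minutes, not days.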
Runbooks. Step-by-step guides for the most common problems. "The payment webhook is failing. Here's what you do." Written so that anyone on the team can follow them. Not tribal knowledge that lives in one person's head.
Remember: the system is going to break. Every system breaks. The question is whether you find out in two minutes or two weeks. Stabilize is the difference.
Phase 5: Transfer
Ensure the client team can carry the system forward without depending on me.
This is where most agencies create lock-in. Not intentionally. They don't say "you need us forever." They walk away without documenting anything. Same result. You're dependent on people who aren't there anymore. The knowledge walks out the door with the contractors, and the client is left with a system they own but don't understand. That's not a partnership. That's a hostage situation with a delayed timer.
"I want the client to not need me six months later. That sounds like bad business. It's the opposite."
Deliverables. An onboarding guide for new developers. "Here's the codebase. Here's how it's organised. Here's how to run it locally. Here's where things are."
An architecture overview. Not a hundred-page document nobody reads. A clear, honest explanation of how the system works and why decisions were made. "We chose PostgreSQL because of X. We structured permissions this way because of Y. This part of the codebase is more complicated than it should be. Here's why and what to do about it."
A common issue playbook. The ten things that will break and how to fix them. Written from experience because we've been running the system for ten weeks at this point. We know what breaks.
And a "how to add a feature safely" guide. Step by step. So a junior developer doesn't accidentally break production because nobody told them about the deployment process.
"Transfer is what separates a vendor from a partner. A vendor delivers software. A partner delivers capability."
When they don't need me, they trust me. When they trust me, they refer me. And when they have a new problem, they come back. Not because they're locked in. Because they choose to.
The spreadsheet client? Six months after Transfer, they added a new product line. New workflows, new roles, new reporting. Their in-house developer handled it. Used the architecture guide. Followed the "how to add a feature safely" steps. Shipped it in two weeks. No outside help.
That's the outcome. Not dependency. Capability.
"Every agency should be working to make themselves unnecessary. The ones that don't are selling lock-in, not software."
The Math
Right. Let's talk money.
You're buying judgment, not code. The typing is the easy part. And here's what's changed. AI handles the typing now. AI agents write the boilerplate. Generate the tests. Build the documentation. Set up the monitoring. That's sixty to seventy percent of what a 10-person team does every day. The mechanical parts. The shift from typing to judgment is the most important change in software engineering in the last decade, and most agencies haven't adjusted their pricing or their process to reflect it.
What AI can't do is the part that actually matters. Choose the right data model. Decide which features to build and which to kill. Know which architecture decisions compound and which ones create debt. Recognize the seven failure points before they cost you six figures. That takes experience. That takes judgment. That takes fifteen years of shipping production systems.
"One skilled engineer with AI agent-optimized workflows now ships what used to take a team of ten. But only if that engineer has the battle scars to direct the AI. Without judgment, AI just generates expensive rubbish faster."
So here's what I charge.
Strategy Session. Two thousand dollars. I map your system and tell you honestly whether what you're building is worth building. Sometimes the answer is "don't build this." Hearing that before you spend seventy-five thousand is a gift.
Dev Consulting Retainer. Five thousand a month. A fractional CTO embedded in your daily workflow. Architecture reviews, roadmap sequencing, debugging production incidents.
MVP Build. Twenty thousand dollars. Production-grade. Real auth. Real data model. Error handling. Observability. Documentation.
"A Modh MVP is not a demo. It is not a prototype. It is software you can charge money for."
Full Platform Build. Seventy-five thousand and up. The full Delivery OS. Five phases. Twelve weeks. Everything I showed you today. One senior engineer. AI agent workflows. The output that used to require a team and six months.
What you're paying for is not hours. Not lines of code. Not headcount. It's fifteen years of knowing which decisions compound and which ones don't. The discipline to say no to the twenty features that don't matter so you can ship the three that do. And now, the AI-augmented system that turns that judgment into speed that wasn't possible two years ago.
The ROI
Now let me show you the other side of the equation.
The migration client had five engineers spending twenty percent of their time on workarounds. Five engineers. Twenty percent. That's one full engineer's salary, a hundred thousand a year, spent working around a schema that doesn't match reality. And the workarounds weren't static. They grew more complex each quarter as the business evolved and the schema fell further behind. By the time they brought me in, the workaround code was nearly as large as the actual application code.
The spreadsheet client. The founder spent forty hours a week manually coordinating tasks. Price that time at even a fraction of what founder hours are worth and the coordination alone cost four thousand dollars a month. Forty-eight thousand a year. Manually copying and pasting between spreadsheets. Not counting the errors. Not counting the customers who waited three days because a handoff was missed. Not counting the opportunity cost of a founder who should be selling, strategizing, and growing the business instead of functioning as a human workflow engine.
The sales tracking client had managers making decisions on unreliable data. One wrong sales hire based on bad metrics costs fifty to a hundred thousand in salary, benefits, and lost pipeline. If the data that triggered that hire was wrong, you just burned six figures on a bad spreadsheet.
"The build costs seventy-five thousand dollars. The cost of not building compounds every single month. And the gap between those two numbers only gets wider."
The Decision
Here's the system. All of it.
Diagnosis. Scope + Sequence. Build. Stabilize. Transfer. Five phases. Twelve weeks.
Skip Diagnosis, you build Expensive Rubbish. Skip Scope, you enter Dev Purgatory. Skip Stabilize, you're Flying Blind. Skip Transfer, you're locked in forever.
"That's the Costco Sample. The whole system. For free."
You now have two options.
Take this framework. Apply it to your own project. Use the sequence. Ship something real.
Or book a Strategy Session. Two thousand dollars. I'll map your system. I'll tell you honestly whether what you're building is worth building. And if it is, I'll show you exactly how to get it shipped.
"I don't sell hours. I sell outcomes."
Outcomes over features. Systems over hacks. Speed with foundations.
If your software project feels unpredictable, it's not the technology. It's the sequence. Fix the sequence. Ship the system.
Key Takeaways
- Diagnosis before code prevents Expensive Rubbish. The most costly software is software that solves the wrong problem. Two weeks of mapping the money path, failure points, and actual workflows saves six months of building features nobody needs. If you skip this phase, you're gambling seventy-five thousand dollars on assumptions.
- Scope + Sequence prevents Dev Purgatory. Scoping to value (not features) means identifying the three user journeys that transform the operation, then sequencing milestones so the most valuable changes ship first. The roadmap stops growing because every item on it is tied to measurable business impact, not wish-list feature requests.
- Vertical slices deliver value from week one, not month six. Building one complete user journey end-to-end (database to UI) means real users test real software at every stage. Horizontal layers, where you build all the tables, then all the APIs, then all the UI, hide integration failures until the very end when they're most expensive to fix.
- Stabilize is the phase most teams skip, and it's the one that determines whether your software survives contact with real users. Performance testing, edge case handling, monitoring, alerting, and runbooks are what separate "it works on my machine" from production-grade. Without Stabilize, you're Flying Blind.
- Transfer separates a vendor from a partner. Documentation, architecture overviews, and "how to add a feature safely" guides ensure the client team can carry the system forward independently. The goal is capability, not dependency. The best outcome is a client who doesn't need you six months later.
Frequently Asked Questions
How long does it take to build production software from scratch?
The Modh Delivery OS runs five phases in twelve weeks: Diagnosis (2 weeks), Scope + Sequence (1 week), Build (7 weeks), Stabilize (2 weeks), and Transfer (1 week, overlapping with Stabilize). The timeline is achievable because each phase feeds directly into the next, and vertical slices mean you're shipping usable software within the first weeks of Build. Projects that skip phases often take longer because they spend months fixing problems that proper sequencing would have prevented.
Why do most custom software projects fail even with good developers?
The root cause is almost always wrong sequence, not wrong talent. Teams code before they understand the problem, build features before they map reality, launch before they have monitoring, and walk away before anyone can maintain it. Twelve projects I've rebuilt all failed for these reasons. Good developers executing in the wrong order produce the same result as bad developers: software that doesn't match how the business actually works.
What's the difference between an MVP and a production-grade build?
An MVP (as most agencies build it) is software that demonstrates a concept but lacks the foundations to scale: no proper auth, no observability, no error handling, no documentation. A production-grade build includes all of those from day one. The production-grade build costs more upfront but avoids the sixty to a hundred-and-fifty thousand dollar rewrite that cheap MVPs almost always require within six to twelve months.
How do you know if your current software project is heading for failure?
Three signals. First, you're more than two months in and real users haven't tested working software yet. Second, the team can't answer "what happens at 10x our current scale?" with specifics. Third, "we'll fix it later" appears in every conversation. If any of these are true, your project is likely heading for Expensive Rubbish, Dev Purgatory, or Flying Blind. A Strategy Session diagnoses which pattern you're in and how to course-correct.
Is $75K too expensive for a software build?
Compare it to the alternative. The spreadsheet client was spending $48K per year in manual coordination. The migration client was burning $100K per year on engineering workarounds. The sales tracking client risked six figures on a single bad hire caused by unreliable data. A $75K build that eliminates these costs pays for itself within a year, often sooner. The "expensive" option is nearly always the cheap one when you factor in the compounding cost of not building.