The Software Development Life Cycle didn’t die — it restructured. The phases that used to dominate (coding, testing) are now largely AI-executed. The phases that used to be treated as overhead (planning, analysis, design) are now where the real work happens.

Something changed quietly in 2024, and most studios missed it.

The product we used to charge $50,000 to build — you can now get a credible prototype for $10,000. That’s not a market correction. It’s a structural reset. And it has almost nothing to do with AI writing code faster.

The real shift is about where the work happens now. Which phases of software development actually create value. What it means to be a senior engineer, an architect, a product thinker, in a world where a well-prompted agent can scaffold an entire feature in an afternoon.

I’ve been building software for nine years — 120+ products across five countries, from USAID infrastructure to fintech to logistics platforms. In March 2026, Tiran and I made the decision to close a 40-person studio we’d built over a decade, because we saw this coming before most people were talking about it. What I’m writing here isn’t prediction. It’s what we’re living.

This is what the SDLC looks like now.

A quick definition before we go further: spec-first development is the practice of mapping every user journey, writing a complete product specification, and validating the full design before writing a single line of production code. It’s the methodology at the centre of what changed — and the reason some teams are getting dramatically better outcomes from AI than others.
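To make the definition concrete, here is a minimal sketch of a "spec gate" in Python: a check that refuses to call the spec locked until every mapped user journey has both steps and acceptance criteria. Everything here (the field names, the journeys, the `spec_is_locked` helper) is a hypothetical illustration, not a real spec format.

```python
# Illustrative sketch only: a minimal "spec gate" that refuses to declare the
# spec locked until every user journey is fully mapped and validated.
# All field names and journeys here are hypothetical.

def spec_is_locked(spec: dict) -> bool:
    """A spec is lockable only when every journey has steps and acceptance criteria."""
    journeys = spec.get("journeys", [])
    if not journeys:
        return False
    return all(j.get("steps") and j.get("acceptance") for j in journeys)

spec = {
    "product": "example-app",
    "journeys": [
        {"name": "sign-up",
         "steps": ["landing", "form", "verify email"],
         "acceptance": ["account created", "verification email sent"]},
        {"name": "checkout",
         "steps": ["cart", "payment"],
         "acceptance": []},  # incomplete: no acceptance criteria yet
    ],
}

print(spec_is_locked(spec))  # False: 'checkout' is not validated yet
```

The point of the sketch is the gate itself: production code waits on the weakest journey in the spec, not the strongest.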

25–35% of dev time is actually coding + testing (Source: Bain Technology Report 2025)

25–30% productivity gain with end-to-end process transformation (Source: Bain Technology Report 2025)

The model everyone is still using was already broken

The traditional SDLC is a waterfall dressed up in Agile clothes. Plan once. Gather requirements once. Design once. Then build for weeks. Test. Ship. Maintain — meaning: keep the lights on and process the bugs.

Agile improved the cadence. Two-week sprints, standups, retrospectives. But it didn’t change the fundamental assumption: that planning is overhead, and building is the real work.

That assumption was always wrong. AI just made it impossible to ignore.

Bain & Company’s 2025 research found something that should have stopped every engineering leader cold: writing and testing code accounts for only 25–35% of total development time from idea to launch.

Source: Bain Technology Report 2025 →

Most “AI-native” studios are using AI to accelerate a third of the problem. The other two thirds — planning, analysis, architecture, maintenance, documentation, client communication — they’re doing the same way they always did.

That’s not AI-native. That’s AI-sprinkled.

AI-native development means redesigning the entire delivery process — not just the coding step — so that AI is embedded in planning, analysis, design, testing, documentation, and maintenance. The spec is what makes that possible.

The most effective engineering teams of the AI era will not be those who code the fastest. They will be those who plan the most precisely.

↳ The key finding: coding and testing account for only 25–35% of total development time. AI-native development is not about accelerating that third — it’s about transforming the other two thirds.

What actually changed — stage by stage

Let’s go through the SDLC properly. Not as a checklist — as an honest account of what AI changes, what it doesn’t, and where the new leverage actually lives.

One thing to note before we start: the order has changed. It’s no longer a straight line. Phases 1, 2, and 3 now loop — you plan, analyse, design, then return to planning with what you learned from design, then refine the spec again before a single line of production code is written. This is not regression. This is the model working correctly.

Phase 01 · Planning
The old way: Scope doc written once, filed in Notion, ignored by week 3.
What AI changed: AI generates draft scope, risk analysis, and milestone structure in hours. The brief becomes the foundation of everything that follows.
What didn't change: The quality of the question determines the quality of the plan. Garbage intent, garbage output.

Phase 02 · Analysis
The old way: Requirements gathered across 3 meetings, written up in a PDF nobody reads again.
What AI changed: AI synthesises user research, competitor analysis, and raw input into structured requirement docs fast. What took days now takes hours.
What didn't change: Someone still has to know what problem is worth solving. AI can't interview your users for you.

Phase 03 · Design
The old way: Architecture defined once in isolation, rarely revisited until something breaks.
What AI changed: Design and planning now loop continuously. You spec, AI proposes architecture, you challenge it, you re-spec. Industry practitioners increasingly describe this continuous spec refinement as the defining architectural practice of AI-era development.
What didn't change: Architecture decisions still carry long-term consequences no agent can absorb. The human still holds the trade-offs.

Phases 04–05 · Coding + Testing
The old way: Separate phases, sequential, weeks apart. Testing was an afterthought bolted on at the end.
What AI changed: Agentic workflows collapse these into a single continuous loop. Code is written, reviewed, and tested in near-real-time. AI-assisted code review catches issues that would previously have reached production.
What didn't change: Human review is non-negotiable. AI without a spec generates functional-looking code that creates catastrophic technical debt. We've seen it.

Phase 06 · Deployment
The old way: Manual, terrifying, usually on a Friday afternoon. The ritual of holding your breath.
What AI changed: Already largely automated via CI/CD before AI. AI adds marginal orchestration improvements. The least changed phase in the entire SDLC.
What didn't change: The gains here were already captured by DevOps. This is not where the story is.

Phase 07 · Maintenance
The old way: Bug fixes. Server uptime. Responding to the same support ticket for the third time.
What AI changed: Maintenance now expands to include roadmap development, living documentation, customer success artefacts, onboarding materials, and marketing assets, all of which require versioned specs to stay coherent as the product evolves.
What didn't change: Human judgment on what matters to the customer. This phase now demands as much architectural thinking as Phase 3.
The SDLC under AI: where the leverage moved, and what it didn’t touch.


On the loop: 1 → 2 → 3 → back to 1

The most important structural change in the AI-era SDLC is one nobody in the tools conversation talks about: the front of the process is now iterative in a way it never was before.

Previously, you planned once, gathered requirements once, designed once, then started building. Revisiting the plan mid-project meant failure — scope creep, client conflict, budget overrun.

Now, the loop is the process. You brief → analyse → design → and then return to planning with the constraints that design revealed. You refine the spec. You design again. You return to planning one more time. Only then does a line of production code get written.

01 Plan → 02 Analyse → 03 Design → 01 Refine Spec → 04–05 Build + Test
Loop until the spec is locked — then ship. Not before.

GitHub’s Spec Kit — open-sourced in 2025 — describes this exactly: you don’t move to the next phase until the current task is fully validated.

Source: GitHub Blog — Spec-Driven Development with AI →

This isn’t a return to waterfall. Waterfall’s problem was that the feedback cycles were months long. This loop can happen in days. The discipline is the same; the speed is entirely different.
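The loop described above can be sketched in a few lines of Python. This is a toy simulation, not a workflow engine: the phase names come straight from the essay, and the `refinements_needed` counter is a hypothetical stand-in for "design revealed a constraint that sends you back to planning."

```python
# Hedged sketch of the phase loop: plan → analyse → design repeats until the
# spec stops changing, and only then does build + test begin.
# `refinements_needed` simulates how many times design sends you back to planning.

def run_sdlc(refinements_needed: int) -> list[str]:
    log = []
    spec_locked = False
    cycles = 0
    while not spec_locked:
        log += ["plan", "analyse", "design"]
        cycles += 1
        # In practice "locked" means design revealed no new constraints;
        # here we simulate that with a simple counter.
        spec_locked = cycles > refinements_needed
        if not spec_locked:
            log.append("refine spec")
    log.append("build+test")  # phases 04–05 run only after the spec is locked
    return log

print(run_sdlc(refinements_needed=1))
```

Note what the sketch makes visible: "build+test" appears exactly once, at the end, no matter how many times the front of the process loops. That is the structural difference from both waterfall (one pass, slow feedback) and code-first agile (build starts before the spec settles).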

↳ The key finding: in AI-native development, phases 1–3 now iterate continuously before any production code is written. The loop is not rework — it is the process.

On maintenance: the phase that grew the most

The traditional view of maintenance is: keep the lights on. Patch the bugs. Respond to tickets.

That view is now embarrassingly incomplete.

In an AI-native development model, maintenance is the full ongoing life of the product. That means roadmap development, living documentation, customer success artefacts, onboarding materials, and marketing assets, all evolving alongside the code.

All of this requires the same spec discipline as Phase 3. If you don’t have a versioned spec — a living document of what the product is, why it exists, and how it behaves — then every maintenance cycle starts from scratch. You’re rebuilding context every time.

AI makes maintaining living specs cheap. Which means the cost of not having them just went up.
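A versioned spec can be as simple as an append-only log. The sketch below is illustrative only: the `LivingSpec` class and its fields are hypothetical, and a real tool would persist versions to disk or git rather than hold them in memory.

```python
# Illustrative sketch of a "living spec": each maintenance cycle appends a new
# version instead of overwriting, so context is never rebuilt from scratch.
# Class and field names are hypothetical, not a real tool's API.

import datetime

class LivingSpec:
    def __init__(self, product: str):
        self.product = product
        self.versions: list[dict] = []

    def revise(self, body: str, reason: str) -> int:
        """Record a new spec version and return its version number."""
        self.versions.append({
            "n": len(self.versions) + 1,
            "body": body,
            "reason": reason,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return self.versions[-1]["n"]

    def current(self) -> dict:
        """The latest version: the starting context for the next cycle."""
        return self.versions[-1]

spec = LivingSpec("example-app")
spec.revise("v1: sign-up and checkout journeys", reason="initial spec")
spec.revise("v2: added refund journey", reason="maintenance cycle 1")
print(spec.current()["n"])  # 2
```

The design choice worth noting is append-only history with a recorded reason: the next maintenance cycle inherits not just what the product is, but why it changed.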

This is precisely why we built spec2web — a local-first tool that keeps the spec, the content, and the deployment history connected, so that every maintenance cycle starts from context, not from scratch.

↳ The key finding: maintenance in an AI-native model now includes roadmap development, living documentation, and customer success artefacts — all of which require the same spec discipline as Day 1.

The trap most teams fall into

I want to be honest about something, because I don’t think enough people are saying it.

The productivity gains from AI coding tools are real. Copilot, Cursor, Claude Code — they genuinely accelerate development. But the headline numbers — the 10–15% productivity boosts that most teams report — are not the ceiling. They’re the floor.

Bain’s research found that teams pairing AI with end-to-end process transformation saw 25–30% productivity improvements — more than double the gains from basic code assistant adoption alone.

The difference isn’t the tool. It’s whether the spec exists before the agent runs.

Without a spec, AI-generated code looks impressive and moves fast. It also creates what researchers increasingly call “architectural decay” — functional code that becomes progressively harder to maintain as AI-generated changes accumulate without architectural oversight. The team at GitClear observed a significant rise in duplicated code blocks in AI-assisted codebases, often without developers’ awareness.

Source: GitClear — Coding on Copilot: AI’s Impact on Code Quality →

We saw this pattern at Beta Launch. The fastest-moving projects were never the ones that started with the most momentum. They were the ones that started with the clearest brief.

Anyone can get a prototype now. The question is whether what they got is the right prototype.

The $50,000 product now costs $10,000. That changes everything.

Here is the thing that is actually keeping me up at night as a builder. Not in a worried way. In an "I need to figure out what this means" way.

Products we used to charge $50,000 to build — scoped, architected, full-stack, production-grade — now have a credible price point of $10,000 to $15,000. That’s not a race to the bottom. That’s a genuine structural shift in what’s possible.

It means two things simultaneously, and they're in tension: more people than ever can afford to build, and the build itself has never been worth less as a differentiator.

This is precisely why the planning, analysis, and design loop matters more than it ever has. When building was expensive, a bad spec hurt. When building is cheap, a bad spec is almost free — until you try to grow.

Gartner’s 2025 research put it clearly: by 2030, CIOs expect that zero percent of IT work will be done by humans without AI. The firms that thrive won’t be the ones who adopted AI fastest. They’ll be the ones who embedded the right process around it.

Source: Gartner, November 2025 →

The studios that will win in this market are not the ones with the best AI tools. They’re the ones that figured out how to make a $10,000 build as architecturally sound as what used to cost $50,000 — because the thinking happened first.

What this means for how you build

If I were advising a development studio in 2026, here is what I would say: treat the spec as the product, and lock it before production code is written. Redesign the whole delivery process around AI, not just the coding step. Keep human review non-negotiable. And version your specs, because every maintenance cycle that starts without context is a cycle you pay for twice.

The SDLC isn’t dead. The shape of it just changed — in a way that rewards clarity over cleverness, and punishes people who skip the thinking.

We built specshop.dev around this. Not as a thesis. As a practice. The spec is the product. Everything else follows.

If this resonates — or if you disagree with something — I’d like to hear from you.

Essay — Software Development · specshop.dev · March 2026