I keep hearing people talk about AI as a “10x developer tool.” That framing is wrong. It assumes the workflow stays the same and only the speed changes. That’s not what’s happening. The entire life cycle, the life cycle we built careers around, the life cycle that spawned a multi-billion-dollar tool industry, is collapsing in on itself.
And most people haven’t noticed it yet.
The SDLC you learned is a relic
This is the classic software development life cycle that most of us have learned:
graph TD
A[Requirements] --> B[System Design]
B --> C[Implementation]
C --> D[Testing]
D --> E[Code Review]
E --> F[Deployment]
F --> G[Monitoring]
G --> A

Each phase has its own tools, its own rituals, its own cottage industry. Jira for requirements. Figma for design. VS Code for implementation. A joke for testing. GitHub for code review. AWS for deployment. Datadog for monitoring.
Every step is discrete. Sequential. Handoffs everywhere.
This is what actually happens when an engineer works with a coding agent:
graph TD
A[Intent] --> B[Agent]
B --> C[Code + Tests + Deployment]
C --> D{Does it work?}
D -->|No| B
D -->|Yes| E[Ship]
style E fill:#d1fae5,stroke:#6ee7b7,color:#065f46

The stages collapsed. They didn’t just get faster. They merged. The agent doesn’t know which phase it is in, because there are no phases. There is only intent, context, and iteration.
AI-native engineers don’t know what the SDLC is
I’ve spent a lot of time talking to engineers who started their careers after Cursor launched. They don’t know what the software development life cycle is. They don’t know what DevOps is or what an SRE is. Not because they are bad engineers, but because they never needed to. They’ve never gone through sprint planning. They’ve never estimated story points. They’ve never waited three days for a PR review.
They just build things.
You describe what you want. The agent writes the code. You look at it. You iterate. You ship. Everything at the same time.
These engineers are none the worse for skipping the ceremony. Sprint planning, code review workflows, release trains, estimation rituals: they miss none of it. They skipped the whole orthodoxy and went straight to building.
And honestly? I’m jealous.
Every stage collapses
I’ll go through the SDLC and show what’s left of it.
Requirements gathering: fluid, not frozen
Requirements used to be handed off. A PM writes a PRD, engineers estimate it, and the specification is frozen before a line of code is written. That made sense when building was expensive. When each feature took weeks, you had to decide in advance what to build.
That limitation has disappeared. When an agent can generate a full version of a feature in minutes, you don’t have to specify every detail up front. You direct, the agent builds a version, you look at it, you adapt, you try a different approach. You can generate ten versions and choose the best one. Requirements are no longer a phase. They are a byproduct of iteration.
What is Jira when the audience isn’t people coordinating through a pipeline but agents consuming context? Jira was built to track work through phases that no longer exist. If your “requirements” are just context for an agent, then the ticketing system is no longer a project management tool. It is a context store. And it’s a terrible one.
System design: discovered, not dictated
System design is still important. But the way it happens is fundamentally changing.
Designing was something you did before you wrote code. You would put the architecture on a whiteboard, discuss the tradeoffs, draw boxes and arrows, and then implement it. The gap between design and code was days or weeks.
That gap is closing. Design becomes something you discover by giving the agent the right context, not something you dictate in advance. The model has seen more systems, more architectures, and more patterns than any individual engineer. When you describe a problem, the agent not only implements your design, but suggests architectures that are often superior to what you would have come up with on your own. You have a design conversation in real time and the output is working code.
You still need to know when an agent is over-engineering or missing a constraint. But you discover the design together with the agent rather than dictating it up front.
Implementation: this is now the agent’s job
This one is clear. The agent writes the code. Whole functions. Complete solutions with error handling, typing, edge cases.
I don’t personally know anyone who still types lines of code. We review what agents write, give them context, provide direction, and focus on the issues that actually require human judgment.
Testing: simultaneous, not sequential
Agents write tests alongside the code. Not as an afterthought. Not in a separate ‘test phase’. The test is part of the generation. TDD is no longer a methodology, it’s just how agents work by default.
The entire QA function as a separate phase has disappeared. When code and tests are generated together, verified together, and iterated together, there is no handoff. No throwing it over the wall to QA. The agent performs QA itself.
Code review: give up
The pull request flow should disappear. I was never a fan, but now it’s just a relic of the past.
I know that’s uncomfortable. Code review is sacred. It’s how you catch bugs, share knowledge, and maintain standards. It’s also a matter of identity. We are engineers, and reviewing code is what engineers do. But clinging to the PR workflow in an agent-driven world isn’t rigor. It’s an identity crisis.
Think about it. An agent generates 500 PRs per day. Your team might be able to review ten of them. The review queue never drains; it only grows. This is not a bottleneck worth optimizing. It’s a fake bottleneck, one that only exists because we force a human ritual onto a machine workflow.
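The arithmetic above is worth making explicit. A quick back-of-envelope sketch, using the illustrative 500-per-day and 10-per-day figures from the paragraph:

```python
# Back-of-envelope math: when agents generate far more PRs than humans
# can review, the backlog grows without bound. Figures are illustrative.
GENERATED_PER_DAY = 500   # PRs produced by agents
REVIEWED_PER_DAY = 10     # PRs a team can review by hand

backlog = 0
for day in range(1, 6):
    backlog += GENERATED_PER_DAY - REVIEWED_PER_DAY
    print(f"day {day}: {backlog} unreviewed PRs")
# After a single work week the queue holds 2,450 PRs and keeps climbing.
# The bottleneck is structural, not a staffing problem.
```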
graph TD
A[Agent generates PR] --> B[Waits for human review]
B --> C{Reviewer available?}
C -->|No| D[Sits in queue for hours/days]
C -->|Yes| E[Review + Comments]
E --> F[Agent addresses feedback]
F --> B
D --> B
style B fill:#fee2e2,stroke:#fca5a5,color:#991b1b
style D fill:#fee2e2,stroke:#fca5a5,color:#991b1b

This diagram should not exist. The whole flow is wrong.
Review needs to be rethought from the ground up. Either it becomes part of code generation itself, with the agent verifying its own work against the plan document, running the tests, checking for regressions, and validating against architectural constraints, or a second agent reviews the output of the first: adversarial agents probing the proposed change and trying to break it along every dimension. We already have the tools for that. Human-in-the-loop review becomes exception-based, triggered only when automated verification can’t resolve a conflict or when the change touches something genuinely novel.
What does a world without pull requests look like? Agents commit to main. Automated checks, tests, type checks, security scans, behavioral diffs, validate the change. If everything passes, the change ships automatically. If something fails, the agent fixes it. A human only intervenes when the system genuinely doesn’t know what to do.
graph TD
A[Agent generates code] --> B[Agent self-verifies]
B --> C[Second agent reviews]
C --> D[Automated checks]
D --> E{All clear?}
E -->|Yes| F[Ship]
E -->|No - resolvable| A
E -->|No - novel issue| G[Human review]
G --> A
style F fill:#d1fae5,stroke:#6ee7b7,color:#065f46

We spend our review cycles reading diffs that an agent can verify in seconds. That’s not quality assurance. That’s Luddism.
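As a sketch of what an exception-based gate could look like in code: all names here (`Check`, `review_gate`, the stub checks) are hypothetical, not any real tool’s API, but the shape of the decision is the point.

```python
# Sketch of an exception-based review gate: automated verification decides;
# a human only sees what the machines cannot resolve.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    run: Callable[[str], bool]   # returns True when the change passes
    agent_fixable: bool          # can the generating agent retry on failure?

def review_gate(change: str, checks: list[Check]) -> str:
    """Return 'ship', 'return-to-agent', or 'escalate-to-human'."""
    for check in checks:
        if not check.run(change):
            # Resolvable failures loop back to the agent; novel ones escalate.
            return "return-to-agent" if check.agent_fixable else "escalate-to-human"
    return "ship"

# Usage with stub checks standing in for real tests / type checks / scans:
checks = [
    Check("tests", lambda c: True, agent_fixable=True),
    Check("types", lambda c: True, agent_fixable=True),
    Check("security-scan", lambda c: True, agent_fixable=False),
]
print(review_gate("diff --git ...", checks))  # -> ship
```

The key design choice is that every failure has a default destination, and “human” is the rarest one.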
Deployment: decoupled and continuous
Agents are already writing deployment pipelines that are more complex and specialized than what most teams would ever want to build by hand. Feature flags, canary releases, progressive deployments, automatic rollback triggers, the kind of release engineering that used to require a dedicated platform team.
The key shift is that agents naturally decouple deployment from release. Code deploys continuously: each change, as soon as it is generated and verified, produces an artifact that sits behind a gate in production. Release is a separate decision, driven by feature flags or business rules.
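A minimal sketch of that deploy/release split, assuming a simple in-process flag store (the flag name and functions are illustrative):

```python
# Deploy/release decoupling in miniature: every verified build is deployed,
# but the new code path stays dark until a flag flips.
flags = {"new-checkout": False}   # deployed, not yet released

def render_checkout(user_id: str) -> str:
    if flags.get("new-checkout", False):
        return "new checkout flow"     # the released path
    return "legacy checkout flow"      # everyone sees this until release

assert render_checkout("u1") == "legacy checkout flow"
flags["new-checkout"] = True           # release is a config change, not a deploy
assert render_checkout("u1") == "new checkout flow"
```

In a real system the flag store would be external and per-cohort, which is what makes canaries and progressive rollouts possible.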
Some teams are already approaching true continuous deployment and release. Code is generated, tests pass, artifacts are built, and the change is live, all in one automated flow without a human between intent and production.
Where this goes next is even more interesting. Imagine agents that don’t just deploy code but manage the entire lifecycle of a release: monitoring the rollout, adjusting traffic splits based on error rates, automatically rolling back when latency spikes, and only notifying a human when something truly novel goes wrong. The deployment phase is not simply automated. It becomes an ongoing, self-adjusting process that never really ends.
graph TD
A[Agent generates code] --> B[Automated verification]
B --> C[Artifact produced]
C --> D[Deploy behind feature flag]
D --> E[Progressive rollout]
E --> F{Healthy?}
F -->|Yes| G[Full release]
F -->|No| H[Auto-rollback]
H --> I[Agent investigates]
I --> A
style G fill:#d1fae5,stroke:#6ee7b7,color:#065f46
style H fill:#fee2e2,stroke:#fca5a5,color:#991b1b

Monitoring: the last phase standing, and it must evolve
Monitoring is the only phase of the SDLC that survives. And it not only survives, it becomes the foundation on which everything else rests.
When agents ship code faster than people can review it, observability is no longer a nice-to-have dashboard layer. It is the most important safety mechanism in the entire collapsed life cycle. Every other safeguard, the design review, the code review, the QA phase, the release sign-off, has been absorbed or eliminated. Monitoring is what remains. It is the last line of defense.
But most observability platforms are built for people. Alerts, log search, dashboards: all designed for a human to look at, interpret, and act on. That model breaks when the volume of changes exceeds human attention. If an agent ships 500 changes per day and your observability setup requires a human to investigate every anomaly, you have created a new bottleneck. You just moved it from code review to incident response.
Observability without action is just expensive storage. The future of observability lies not in dashboards but in closed-loop systems, where telemetry becomes context for the agent that shipped the code, so it can detect and fix the regression itself.
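A toy sketch of that closed loop, with hypothetical helper names, showing telemetry being packaged as structured context for the agent rather than as a pager alert:

```python
# Closed-loop observability in miniature: an anomaly becomes a context
# payload for the agent that shipped the change, not a page for a human.
def detect_anomaly(metrics: dict) -> bool:
    # Toy detector: flag when the error rate crosses a fixed threshold.
    return metrics["error_rate"] > 0.05

def build_agent_context(metrics: dict, deploy_id: str) -> dict:
    # Telemetry becomes structured context for the fixing agent.
    return {
        "deploy": deploy_id,
        "symptom": f"error_rate={metrics['error_rate']:.2%}",
        "task": "investigate and fix the regression introduced by this deploy",
    }

metrics = {"error_rate": 0.12, "p99_latency_ms": 840}
if detect_anomaly(metrics):
    ctx = build_agent_context(metrics, deploy_id="d-4711")
    # In a real system this payload would be dispatched to the coding agent;
    # here we only show its shape.
    print(ctx["symptom"])  # -> error_rate=12.00%
```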
The observability layer becomes the feedback mechanism that drives the entire loop. Not a stage at the end. The connective tissue of the whole system.
graph TD
A[Intent] --> B[Agent builds, tests, deploys]
B --> C[Production]
C --> D[Observability layer]
D -->|Anomaly detected| E[Agent investigates + fixes]
E --> B
D -->|Healthy| F[Next intent]
F --> A
style D fill:#dbeafe,stroke:#93c5fd,color:#1e40af

The teams that figure this out first, observability feeding straight back into the agent loop instead of a human’s pager, will ship faster and more safely than everyone else. The teams that don’t will drown in alerts.
The new life cycle is a tighter loop
The SDLC was a wide loop. Requirements → Design → Code → Test → Review → Deploy → Monitor. Linear. Sequential. Full of handoffs and waiting.
The new life cycle is a tight loop.
graph TD
A[Human Intent + Context] --> B[AI Agent]
B --> C[Build + Test + Deploy]
C --> D[Observe]
D -->|Problem| B
D -->|Fine| E[Next Intent]
E --> B
style B fill:#ede9fe,stroke:#c4b5fd,color:#5b21b6

Intent. Build. Observe. Repeat.
No tickets. No sprints. No story points. No PR queues. No separate QA phase. No release trains.
Just a person with intent and an agent that executes.
So what’s left?
Context. That’s it.
The quality of what you build with agents is directly proportional to the quality of the context you give them. Not the process. Not the ceremony. The context.
The SDLC is dead. The new skill is context engineering. The new safety net is observability.
And most of the industry is still configuring Datadog dashboards that no one is looking at.
Boris Tane


