When Developer Time Stops Being the Constraint

Over the past few years, we put a lot of process in place for our development teams. Coding standards, linting, pull-request rules, QA gates, test layers, release checks, and post-deploy monitoring. We iterated on all of it to keep production stable and code maintainable.

Those practices were built around one core constraint: developer time was expensive, so rework had to be minimized. Over the last year, agentic AI has started to challenge that assumption.

The question AI is raising

If you listen to current AI conversations, especially around agentic workflows, a common story is emerging: go from idea to production with very little friction. Validate the specification at the beginning, validate the outcome at the end, and pay less attention to the code in between.

That model is compelling, but it raises questions for me.

The tension, for me, is practical: process still matters, code quality still matters, and test quality still matters. If humans are less involved in authoring and more involved in validation, accountability has to be made explicit rather than assumed.

Between promise and reality

From my perspective, the reality isn't black and white. AI is already useful, but the pace of change is hard to absorb. Things that felt unthinkable at the beginning of 2025 are normal at the beginning of 2026.

Over the last year, we tried many applications: code generation, review assistants, test drafting, codebase exploration, even repository guidance files that help tools understand our structure and standards. Some of it worked well, especially on newer projects.

The harder part is legacy code. Most of our systems were not designed with AI in mind. We carry old structures, accumulated complexity, and uneven documentation. In that context, even experienced developers need time to build understanding.

Expecting an agent to consistently do better without that context is optimistic.

Different reactions, temporary boundaries

That gap between promise and context explains why reactions vary so much. Some developers lean heavily into AI. Others push back because generated code misses local standards or feels slower to fix than writing directly.

We also started seeing side effects: very large PRs, uneven review quality, missing tests because people assumed the model handled everything, and sometimes authors who could not clearly explain what the change was doing.

So we set boundaries. For now, developers remain accountable for generated code. They need to understand it, test it, and explain it. That feels necessary today, even if I expect these boundaries to evolve as tooling and practices mature.

Reframing old problems

My own perspective has shifted in the last few months. Instead of spending most effort patching old code, I started using AI to plan migrations of selected system areas toward cleaner targets.

The approach was simple: make context discoverable, define the destination clearly, and let the model reason toward the target instead of overfitting to legacy patterns. The first POCs were faster and closer to what I needed than I expected.

That changed the question for me. For years, technical debt felt like something we acknowledged but rarely reduced in a meaningful way, because delivery pressure always won.

I built roadmaps to reduce debt with limited capacity, and we never moved fast enough to get ahead of it. At the same time, we went from nine development teams to three, and then two, while maintaining the same systems. In that context, debt is no longer a side issue. It competes directly with every new feature. If teams keep getting smaller, how do we reduce debt and still ship meaningful work?

Starting from the spec

One idea I keep coming back to is starting from the spec. Not from scratch, but from a well-defined plan: the features we actually need, business logic trimmed of the accumulation that doesn't need to be carried forward, and a proper test plan. Then let AI build a clean version of the system around that instead of endlessly patching the old one. If something is wrong, wipe it, refine the plan, and have the AI redo it. That may not sound optimal, but the constraint is no longer developer time. It has moved elsewhere: to the plan level, iterating and refining until it's clear, both for you and for the AI.

When iteration gets cheaper

This is the counter-intuitive part. We spent years protecting developer time by trying to avoid rework. Now, creating and discarding a POC can take minutes or hours instead of days or weeks.

That doesn't remove the need for review, security, or quality. It changes where effort belongs. More energy goes into framing the problem, validating behavior, and holding accountability. Less energy goes into preserving every early implementation choice.

I can't see the future clearly. But it feels increasingly likely that our main constraint is shifting from writing code to defining, reviewing, and governing what gets built.

Open questions

That leaves me with four open questions, the pressure points I keep watching:

- How far can generation scale before review quality degrades?
- How does accountability evolve when humans validate outcomes instead of authoring every line?
- How does architectural coherence hold up under faster iteration?
- How do newer developers learn the craft while it is still shifting?

I used to frame this primarily as a tooling upgrade. Over time, I started seeing it as a governance shift that changes where engineering judgment is applied.

- Patrick