Over the years, I've worked with many different people and lived through several cycles of product development. What still surprises me is how easily organizations forget, and how often the same patterns come back.
This is only my perspective, and I may be wrong about parts of it. But this is the loop I keep seeing.
Early feature work and momentum
Earlier in my career, most requests came from partners and affiliates. We rarely listened to clients directly. We were focused on pleasing the people who promoted our products.
We built features based on what others said would sell. Revenue was rising, so there was little pressure to challenge those assumptions. For a while, the approach seemed to work.
Adding structure and challenge
Later, the company adopted agile practices and introduced a definition of ready. Requests were reviewed and challenged by product and engineering before implementation.
If something was unclear, we pushed back and asked for more context. We also built mockups and proofs of concept, reviewed them with stakeholders, and iterated before release.
It wasn't perfect. Stakeholders were often ready to ship early prototypes, and we had to argue for cleanup before production.
Shipping without stopping
Features were still driven by partner requests, but now also by internal ideas. People felt they knew what users needed.
We shipped sprint after sprint and moved directly to the next request. We rarely stopped to validate whether what we built actually created value. Because revenue remained healthy, that gap stayed mostly invisible.
When revenue slowed
That confidence lasted until revenue slowed. The first reaction was restructuring: teams were shuffled, some people left, and ideas that were previously accepted were suddenly questioned.
Once the immediate turbulence passed, a different conversation began.
Measuring value
We decided ideas needed clearer measurement plans. ROI wasn't a new concept, but planning how to validate impact after launch was not common practice.
A team was created to enforce measurement and follow-up. It was not an easy transition. People felt blocked, and partners felt ignored.
But for the work that went through this process, we understood outcomes better. Some projects exceeded expectations. Some brought little value and were removed. Others improved user experience but returned less than they cost, forcing explicit tradeoff decisions.
Pushing back against complexity
That phase worked for a while, then the pendulum moved again. The process was seen as too heavy, and the dominant question became: why can't business units decide directly what engineering should build?
Layers were removed, teams were reshuffled, and developers moved closer to business units. Engineering time became harder to justify unless a request was framed as mandatory.
I don't think everything in that change was wrong. The previous model did create friction. But measurement became selective. We often measured the projects we were skeptical about, while business-driven projects passed with fewer questions.
Back where we started
This shift affected the engineering roadmap directly. Work that used to have dedicated space now had to compete for attention and be justified as urgent.
When revenue improved, some of that space came back, but not to previous levels. Looking at where we are today, the pattern still feels familiar: fewer projects are measured end-to-end, more decisions are made on intuition, and engineering investment remains hard to defend.
I still don't know whether this pattern is inevitable or just a habit we keep repeating. What changed for me is that I now look less for a permanent model and more for continuity: what we measure, what we stop measuring, and what we decide to remember when pressure changes.
- Patrick