Building Features Without Closing the Loop

Revenue, confidence, and measurement

Over the years, I've seen many ways of building features: some driven by instinct, others backed by months of experimentation. What has stayed consistent is how closely confidence tends to follow revenue.

When numbers are strong, decisions feel validated. The roadmap feels coherent. Priorities seem obvious. It's easy to believe the process is working as intended.

When revenue softens, the posture changes. We start looking for what broke. Dashboards get revisited. Assumptions get questioned. Feature performance suddenly matters more than it did when things were going well.

This is where I keep seeing the same pattern: measurement becomes reactive. We use it to diagnose what went wrong, not to continuously validate assumptions. That makes it harder to distinguish signal from noise, especially when market conditions shift for reasons outside our control.

Requests, value, and unfinished ROI

At the same time, feature requests continue to accumulate. Requests are visible, they feel concrete, and they often point to real friction. But they are signals, not proof of value.

The volume of feedback is easy to confuse with importance. The most vocal users often shape prioritization more than the most valuable ones. In content-heavy products, that distinction matters. Subscription revenue is usually well understood. Cost-to-serve, much less so. Engagement patterns vary widely, and not all activity contributes equally to long-term value.

Despite that, ROI is often treated as an upfront exercise. It appears in planning discussions and justification decks, where assumptions about impact and upside are documented clearly. What's less common is returning to those assumptions once the feature ships.

Rarely do we pause to ask whether behavior changed the way we expected, whether the right customer segment adopted the feature, or whether projected value actually materialized. Features get absorbed into the product, success and failure blur together, and attention shifts to the next initiative.

Unsettled bets

A few years ago, I read Don't Just Roll the Dice by Neil Davidson, and one idea stayed with me: product decisions are bets.

Not reckless ones, but bets nonetheless. They assume upside, carry risk, and are made under uncertainty. What struck me wasn't that we place bets. That's inevitable. It's that we rarely settle them.

The feature ships. The roadmap moves forward. But the original hypothesis often goes unexamined. Over time, development starts to resemble momentum more than deliberate learning, and decisions accumulate faster than understanding.

Simon Sinek's The Infinite Game helped me frame this differently. If the goal is long-term resilience, not short-term scorekeeping, then ROI can't be treated as a one-time checkpoint. It has to be part of an ongoing loop that keeps strategy connected to reality.

Closing the loop doesn't require perfect measurement. It requires continuity: a willingness to revisit assumptions not only when things go wrong, but also when they appear to go right.

Without that discipline, product confidence can drift away from product reality. And momentum, by itself, doesn't tell you whether you're creating value or just moving.

I used to read shipping speed as proof that the system was learning. Over time, I started treating it as only one signal, and often a noisy one.

What changed for me is that I now treat every shipped feature as an unresolved hypothesis until we deliberately revisit the outcome.

- Patrick