Cover illustration: a calm landscape with embedded title text, and one large path splitting into smaller glowing paths.

I have been working in web development for a long time. When I started, most of what we built was server-side. Like many people, I began with static HTML and JavaScript, then moved into PHP and session-based applications.

Coming from traditional software applications, I needed some time to get used to the web model. The idea that each page load essentially reset the experience, and that the user's context had to be rebuilt every time, felt inefficient to me at first. But that was the model, and over time I learned how to think within it. Later, I spent years building custom platforms and CMSs around that pattern.

More recently, like many companies, we started moving away from that approach. We split the frontend from the backend and built richer client-side experiences with React and single-page applications. That shift brought real benefits. Interfaces became more dynamic. Interactions often felt faster once the app was loaded. Teams also gained a clearer separation between presentation and business logic.

But that shift also changed where the work happens. When more of the application runs in the browser, more responsibility moves to the user's device. In some situations, that works well. In others, it becomes a problem. Not every user has a fast laptop, a recent phone, or a clean connection. If the site feels sluggish, unresponsive, or visually unstable, the architecture may look modern while still delivering a poor experience.

That is part of why some companies that pushed heavily toward client-rendered applications later reintroduced more server-side rendering. It is not because the earlier approach was automatically wrong. It is because they learned that where the work happens matters just as much as how modern the stack looks.

To me, that is the more useful way to frame performance: as a workload placement problem. Some work belongs on the server. Some belongs in the browser. Some can wait until after the first screen is visible. Some should never block the user at all. Once I started looking at frontend performance through that lens, the conversation became more productive than the usual debate about frameworks.

On the frontend in particular, one of the biggest constraints is the main thread. That is where much of the JavaScript execution, UI coordination, and interaction handling comes together. If too much happens there at once, the experience starts to feel heavy. Buttons respond late. Scrolling becomes less smooth. Inputs lag. Even when everything is technically working, the application stops feeling good.

The comparison that came to mind for me was a pattern we have used for years on the backend: offloading work. On the server side, when something is expensive or non-critical, we often move it elsewhere. We queue it, process it in parallel, or hand it to another service. We do not insist that every task happen in the same execution path if that creates a worse experience.
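To make the backend analogy concrete, here is a toy version of that offloading pattern. The `JobQueue` class is a deliberately simplified stand-in for a real message broker or task runner: enqueuing returns immediately, and the work is drained outside the caller's execution path.

```javascript
// Toy job queue: callers hand off work and return immediately;
// a separate loop drains the queue outside the request path.
class JobQueue {
  constructor() {
    this.jobs = [];
  }

  // Fast path: record the work, do not perform it here.
  enqueue(job) {
    this.jobs.push(job);
  }

  // Runs elsewhere: a worker process, a cron tick, a separate service.
  drain() {
    while (this.jobs.length > 0) {
      const job = this.jobs.shift();
      job();
    }
  }
}
```

A real system adds persistence, retries, and concurrency, but the core idea is the same: the expensive or non-critical task no longer blocks the path the user is waiting on.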

So the question became: can we apply more of that mindset in the browser? Browsers do give us some tools for this. Dedicated Web Workers can move certain calculations off the main thread. Service Workers can take responsibility for caching, offline behavior, and some network-level concerns. More specialized worklets exist for narrow rendering or audio cases. In the right situation, these can help keep the visible application more responsive.
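As a rough sketch, offloading a CPU-heavy calculation to a Dedicated Web Worker can look like the snippet below. The prime-counting function is a hypothetical stand-in for any expensive computation; the browser wiring is shown in comments so the core function stays self-contained.

```javascript
// worker.js: runs off the main thread, so this loop cannot make
// buttons lag or scrolling stutter while it works.
function primeCount(limit) {
  let count = 0;
  for (let n = 2; n <= limit; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  return count;
}

// Worker wiring (sketch): receive a limit, post the result back.
// self.onmessage = (e) => self.postMessage(primeCount(e.data));

// main.js (sketch): the main thread stays free while the worker computes.
// const worker = new Worker('worker.js');
// worker.onmessage = (e) => render(e.data);
// worker.postMessage(1_000_000);
```

The cost shows up in the seams: data crosses the boundary via messages, not shared objects, which is exactly the communication and state complexity mentioned above.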

On paper, that sounds like an obvious win. If a browser application is doing too much in one place, why not split some of the work out?

The answer is the same as in most architecture decisions: because the tradeoff is not free. Moving work away from the main thread adds complexity. You have to think differently about communication, state, debugging, browser support, and what code is actually allowed to run in those environments. You also do not solve every performance problem that way. For example, layout instability is usually addressed more directly by reserving space properly and designing stable loading states, not by introducing workers.

My team looked into this. We explored whether workers could help in meaningful parts of our application. The concept was attractive, but in practice we did not find a use case where the added implementation complexity clearly justified the benefit for our users.

That conclusion matters because it is easy to mistake available complexity for useful complexity. Good architecture is not about using advanced features because they exist. It is about knowing when the extra complexity will materially improve the product, and when it will only make the system harder to understand and maintain.

I still think this is an underused way of thinking in frontend development, not because every application needs workers, but because too many teams still treat the browser as if all logic should naturally flow through one busy path. To me, the more important shift is conceptual: modern frontend performance is not just about shipping less JavaScript or choosing a newer framework. It is about being intentional about where work runs, when it runs, and whether the user should feel it at all.

That includes more than background threads. It also means deferring non-essential data loads, deciding what should be rendered on the server, designing loading states that feel stable, and avoiding interfaces that jump around while content arrives. Users do not need to understand the architecture. They only need the product to feel fast, stable, and trustworthy.
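Deferring non-essential work can be as small as a helper like this one. `whenIdle` is a hypothetical name; it uses `requestIdleCallback` where the browser provides it and falls back to a plain timeout elsewhere.

```javascript
// Run non-essential work only once the main thread has a quiet moment.
// requestIdleCallback is browser-only, so fall back to setTimeout elsewhere.
const whenIdle = (fn) => {
  const schedule =
    typeof requestIdleCallback === 'function'
      ? requestIdleCallback
      : (cb) => setTimeout(cb, 0);
  schedule(fn);
};

// Usage sketch: analytics should never compete with the first screen.
// whenIdle(() => import('./analytics.js').then((m) => m.init()));
```

The point is not the helper itself but the decision it encodes: this work is real, and it still should not sit between the user and the first usable screen.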

Once you start thinking in terms of workload placement, the pattern stops being only a frontend concern. It starts to show up in other parts of engineering too, because the underlying question stays the same: what should happen here, what should happen elsewhere, and what is the cost of making that split?

With agentic AI becoming part of everyday development, we are starting to face a similar orchestration problem. At first, everything happens in one main context. Then people try delegating everything. After that, maturity sets in, and the real question becomes: what should be delegated, what context should go with it, and when does the overhead outweigh the gain? It is a different medium, but the judgment problem is similar. Whether we are talking about browser architecture, backend systems, or AI-assisted workflows, a big problem often becomes more manageable when it is broken into smaller, better-placed tasks. But the hard part is not the splitting. The hard part is deciding what is worth splitting in the first place.

That decision point will matter more over time. As tools improve, teams may become more willing to experiment with workload placement decisions that previously felt too complex or too expensive to pursue. Some of that may show up in frontend architecture, some in backend systems, and some in AI-assisted workflows. I am curious to see how that changes the way we build over the next few years, not because every system should become more complicated, but because the teams that understand where work should run will likely build the experiences that feel simplest to the people using them.

- Patrick