Navigating the future of frontend

Make sense of modern frontend meta-frameworks. Connect the dots between fundamental concepts old and new.

Rem · 8 Mar 2024

Introduction

The frontend ecosystem is in a period of transition. Up-and-coming frontenders navigate a complex environment of competing frameworks, concepts, advocates, preferences, and best practices.

Today, the web and web technologies power much of the world’s most used software.

From Java applets to Flash to the rapid evolution of Javascript, countless tools have been built to manage the broad spectrum of browsers, screen sizes, device capabilities, network conditions, and rising user expectations.

But throughout this pursuit of a fully-fledged software distribution platform, the fundamental constraints of the web have not changed.

In this post, we’ll explore how the modern Javascript meta-frameworks navigate these constraints.

We’ll build a high-level mental model of how fundamental pieces like compilation, routing, data loading and mutations, caching, and invalidation, all come together. This will help us make sense of the emerging architectural patterns, including the capabilities React Server components provide.

Despite the devil being in the details, we’ll see more convergence among frameworks than meets the eye.

By the end, we’ll be armed with fundamental concepts that persist beyond the rise and fall of frameworks and surface-level APIs, and better understand the direction the frontend ecosystem is heading towards.

The rise and fall of frameworks

The web platform is a slow-moving, comprehensive layer under the user-land tools built on top of it. Over time, the platform improves, removing the need for higher-level abstractions.

But platforms don’t always provide the necessary capabilities. The death of Flash saw the Javascript ecosystem fill in the gaps as the go-to technology for building rich interactive experiences.

The tech boom of the past decade saw many organizations leverage the web as their primary direct-to-customer distribution channel, creating a frontend specialization closer to thick-client desktop development than to the web development of previous generations.

Popular frameworks today like Angular, React, Vue, Svelte, Solid, Qwik, etc, solve client-side interactivity through composable components that render consistently as data changes.

The details lead to different trade-offs and are still debated: the best way to handle reactivity, a preference for templates over functions, or the appetite for complexity. But at a high level, they converge on similar concepts and offer similar capabilities.

A big theme is the rising complexity of the meta-frameworks: application-level frameworks built on top of these component frameworks. Let’s start our survey by understanding two universal cycles.

Capability versus suitability

A tool’s capability is its ability to express ideas. As it gains new capabilities, a tool becomes more complex.

The suitability of a tool is whether it’s fit for purpose for a given situation. In an organizational context, this means being satisfactory to as many people as possible without regularly hitting pitfalls or footguns.

Cycles of innovation that increase capabilities also bring complexity and confusion. In response, abstractions are built that rein in those capabilities or make them easier to use, and counter-trends often appear that favor more straightforward approaches.

Bundling the un-bundled

The phenomenon of Javascript fatigue comes from having to decide between the disparate capabilities of multiple technologies and making them work together.

Bundling is when we connect these capabilities together into a single offering that makes using them more accessible.

Bundled abstractions have to tackle many problems at once. As a result, they tend to get large and have levels of abstraction and magic that often make developers uncomfortable.

When these bundles are perceived as too slow or limiting, a new cycle begins, where pieces are broken off and innovated on in isolation.

It’s through this ongoing iteration and experiencing the pain of pitfalls over time that we learn what’s important and what’s not.

Several years of iterations in client-side component frameworks have led to each having its own “meta” framework that bundles up the disparate tooling and best practices. Many of React’s new APIs are unbundled capabilities intended for integration into one of these higher-level application frameworks.

Back to basics

“If you know the way broadly you will see it in everything” - Miyamoto Musashi

The two main ingredients of a computer system are computation and storage.

A system’s performance is constrained by operations that compute stuff and I/O (input, output) operations like fetching data from storage.

On the web, these fundamental constraints manifest as Javascript’s single thread and the network latency between users and servers.

All connections on the web involve a client and a server that follow a request-response pattern. The server is always the first port of call (even if it’s serving static files from a CDN). How we distribute these operations between server and client leads to different trade-offs.

Computation on the client allows for fast interactions, but too much, and the main thread becomes unresponsive. Servers don’t have this constraint, but asking them to do stuff incurs network latency. Balancing these constraints for interactive experiences on the web is key.

Reading and writing

Another essential ingredient is the ability to read and write data.

The web started as read-only static documents. Eventually, we began persisting data and dynamically generating HTML. Armed with the trusty form element, we could now perform writes, expanding the web’s capabilities for new kinds of applications.

In this model, user interactions on the client convert to synchronous HTTP requests. Updates to the experience after writing are coarse-grained, as the server responds with a freshly generated document that the browser reloads to display.

Eventually, we got XMLHttpRequest, which opened up even more capabilities. User actions could now be asynchronous, with fine-grained updates where only the relevant part of the page changes.
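As a rough sketch of that shift (the /api/search endpoint and markup below are hypothetical), a fine-grained update fetches data asynchronously and patches only the relevant fragment of the existing document:

// A sketch of a fine-grained, asynchronous update (endpoint and markup are hypothetical).
const results = document.querySelector('#search-results');

async function onSearch(query) {
  const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
  const items = await response.json();
  // patch only this fragment of the page; no full document reload
  results.innerHTML = items.map((item) => `<li>${item.title}</li>`).join('');
}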

In both these cases, the source of truth for what is rendered, and for the application state, is driven by the server.

By now, we know the story well. Over time, templates and application state shifted more and more to the client. Application state became client-driven, allowing for fast optimistic writes that mask the underlying network.

This is the dominant model that many people entering the industry over the past decade have cut their teeth on. In an industry where velocity is king, as features grow, there’s only one place for all that code to go.

When a single-machine approach becomes performance-inhibited, we can abandon it; otherwise, we enter the realm of distributed systems.

Distributed systems frontend

The mental model of a client-only approach is like a long-running desktop application that syncs with a backend asynchronously.

The shift to server-driven application state is one of the most significant mental model changes as most of the “back of the frontend” code moves back to the server.

The difference from server-first application frameworks in other ecosystems is that the capabilities of rich client-side interactivity and stable navigations are maintained.

The confusion arises from knowing when and how to leverage the performance characteristics of server-driven patterns with the capabilities of a client-driven approach within the same framework and product.

React Server Components go even further, pursuing a unified authoring experience of composable components interwoven across server and client, which is a significant shift for the industry’s most dominant framework.

Other language ecosystems are exploring similar concepts: for example, C#’s Blazor, Clojure’s Electric, and Rust’s Leptos.

Before we get lost in the weeds, let’s take a step back and ask: why now?

In addition to greater performance, let’s understand some key factors that have led us to this new direction of web development in the Javascript ecosystem.

  • One language to rule them all

    As the lingua franca of the web, there’s no other language as universal as JavaScript.

    When Node.js came about, we could write isomorphic code that ran on both the client and server. Early pioneering full-stack frameworks like Meteor embraced these capabilities.

    Meteor was an early example of an isomorphic Javascript framework with full-stack reactivity and RPCs, abstracting away the fact that the browser and server are very different environments.

    At the time, this all-in-one approach lost industry mind-share to less opinionated and looser approaches, like React, as the minimal view library.

    Since then, TypeScript has had an enormous impact, becoming a de facto standard for many developers and organizations.

    Tools like tRPC and the T3 stack use isomorphic TypeScript to provide end-to-end type safety, with code, types, schemas, and execution model co-located in the same repository.

  • Next-gen compilers and bundlers

    We can think of compilers as programs that transform, prepare, and optimize the code we write for later execution. We touch on how this plays out on the web in Building and delivering frontends at scale.

    Steady advances in bundler and compiler technology have resulted in iterations of fast next-gen bundlers rewritten from the ground up that are really good at managing Javascript module graphs.

    These capabilities allow frameworks to separate the module graphs for client and server code that execute in different runtimes.

    This idea of code extraction powers the unified client-server authoring experience, and much of the magic that needs to happen behind the scenes.

  • The Suspense is over

    Understanding the capabilities Suspense unlocks is key to grasping the server-first mental model emerging in the React frameworks, with variants being explored in other frameworks like Solid, Vue, Preact, and Astro.

    The key insight from a user experience perspective is that we can more intentionally design the loading phases of data-intensive experiences.

    A key insight from a performance perspective is that Suspense affords the parallelization of resource loading and rendering.

    Inspired by the concepts in BigPipe at Facebook, it mitigates the time the browser sits idle while waiting for the server to fetch data and render HTML.

    The client can instead start downloading resources like fonts, CSS, and JS as it encounters those tags while parsing the HTML, all while the server loads data in parallel.

    This mitigates the hit to TTFB and the slower largest contentful paint of a purely server-driven “fetch then render” model.

    But compared to simply flushing the <head> early and async loading everything from the client, this can be done with fine-grained control over phased loading stages, in contrast to a proliferation of unsolicited throbbers “popcorning” in and out of the page as data and code load in a waterfall, causing cumulative layout shifts.

    Beyond initial page loads, it also allows RSCs to stream down serialized virtual DOM for in-place transitions. The insight here from a technical perspective is that Suspense solves the problem of consistent rendering when rendering is async and can be streamed out of order (see the sketch after this list).

    More on that word salad

    A core problem a framework’s reactivity system solves is how to ensure a user interface renders consistently when data changes over time.

    Consistency means that what is displayed accurately reflects the current source of truth. It ensures that the UI doesn’t show out-of-date or different data from another element using the same data.

    The virtual DOM and signals are both approaches for this. A simplified understanding of the differences is that the virtual DOM is coarse-grained as it “diffs the view,” while signals are fine-grained and “diff the model.” Each approach has different trade-offs.

    Suspense solves the rendering consistency problem from another angle: when resources are loaded asynchronously as I/O-bound operations, such as when component trees load over the network.

    This means we can stream responses from different data sources and not have to manually manage race conditions and swap place-holder DOM content for the final DOM content when things arrive over the network out of order.

    It can also be cleverly combined with build time compilers to create new rendering methods like partial pre-rendering.

  • Advances in infrastructure

    While these capabilities have been brewing in the JavaScript ecosystem, cloud infrastructure powering the web has also rapidly evolved.

    Capabilities like streaming from a server runtime to the browser require backend infrastructure that supports them.

    Many server infrastructure providers now support this kind of capability. With the popularity of serverless and edge computing, we’re seeing new runtimes pop up, like Deno, Bun, txiki.js, and LLRT, built to start and run fast in edge and serverless environments and to implement web standard APIs like fetch and ReadableStream. Alongside this is the rise of boutique “frontend cloud” providers with solutions that abstract away the complexity of all the underlying infrastructure.
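To make the Suspense discussion above more concrete, here’s a minimal sketch of phased loading (the page, data, and skeleton components are hypothetical, and the exact data-fetching wiring depends on the framework):

// A sketch of intentionally designed loading phases with Suspense boundaries.
// The shell streams immediately; each boundary streams in as its data resolves,
// rather than the whole page waiting on the slowest fetch.
import { Suspense } from 'react';

export default function ProductPage({ id }) {
  return (
    <main>
      {/* part of the initial HTML shell */}
      <ProductHeader id={id} />

      {/* streams in when review data resolves, without blocking the header */}
      <Suspense fallback={<ReviewsSkeleton />}>
        <Reviews id={id} />
      </Suspense>

      {/* a second, independent loading phase */}
      <Suspense fallback={<RecsSkeleton />}>
        <Recommendations id={id} />
      </Suspense>
    </main>
  );
}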

Routers all the way up

“You must understand that there is more than one path to the top of the mountain” - Miyamoto Musashi

Routing is foundational up and down the stack. The Internet and the web can be seen as a series of routers. Routers are also the backbone of any framework: the first entry point for the compiler at build time and for the initial request, and the destination for many user interactions after.

The convenience and shareability of the URL (and QR codes) are fundamental to the web’s success as a software distribution mechanism. Routers are the connectors between URLs and the code and data that need to load.

One way to think of the router is as a state manager for the URL, and a route as a shareable destination within an application. This is an essential job because URLs are the primary input that determines what layouts to show and what code and data need to be loaded.

The router is tied to vital operations like data fetching and caching, mutations, and re-validations. Because of this, where the application router lives and does its work is fundamental to a frontend architecture.

Client versus server

In a traditional server-driven approach, the router maps incoming request URLs to handlers that fetch data and render HTML templates. Transitions between URLs result in a new document being generated and require a browser refresh.

In a client-driven approach, the router’s code must be downloaded to the browser, where after everything has bootstrapped, it starts listening to browser history changes: for example link clicks and back and forward navigation events.

From here, rather than requesting a brand new document, it maps changes to the URL to client component code that re-renders the existing document.

Client-side routing is central to an SPA architecture. Route transitions preserve the client’s current state, where existing JS, CSS, and other assets don’t need to be re-evaluated. Code and data can be preloaded before a transition is made. It also enables experiences like animations between route transitions (with similar UX patterns now being baked into the platform).
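To ground this, here’s a bare-bones sketch of what a client-side router does under the hood (the route table, page components, and render function are hypothetical placeholders, not any particular framework’s API):

// A minimal sketch of client-side routing (not a real framework's API).
// Instead of letting the browser request a new document, we intercept navigation,
// update the history, and re-render the part of the page mapped to the new URL.
const routes = {
  '/': HomePage,
  '/projects': ProjectsPage,
};

function navigate(path) {
  history.pushState({}, '', path); // update the URL without a full page load
  render(routes[path]);            // re-render within the existing document
}

document.addEventListener('click', (event) => {
  const link = event.target.closest('a[data-client-route]');
  if (!link) return;
  event.preventDefault(); // stop the default full-document navigation
  navigate(new URL(link.href).pathname);
});

// back and forward buttons fire popstate instead of a new request
window.addEventListener('popstate', () => {
  render(routes[location.pathname]);
});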

Most meta-frameworks provide server-driven application state while preserving the overall capabilities of client-side routing. The distinction between client and server starts to blur with the approaches Qwik and the RSC architecture take.

A History of Waterfalls

For some time, dynamic client-side routing was a common pattern. That is, having the router render routes as components anywhere in the tree.

This design allows for highly flexible and dynamic capabilities at runtime, like rendering a <Redirect /> component. However, to know what code and data to load, we must render the component tree to determine the routes.

In practice, this meant that many client-driven component architectures using this pattern ran into a fair amount of client-server network waterfalls.

A waterfall is a series of sequential network requests, where each request is dependent on the completion of the previous one. Waterfalls are the silent killer of performance that lurk in the network tab.
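As a hedged illustration of how that happens (the components, the useFetch hook, and the endpoints below are made up), a child can’t even begin fetching until its parent has fetched and rendered:

// A sketch of a client-server waterfall (hypothetical components, hook, and endpoints).
function ProjectPage({ projectId }) {
  const project = useFetch(`/api/projects/${projectId}`); // request 1
  if (!project) return <Spinner />;
  // <Tasks /> only renders after request 1 resolves...
  return <Tasks listId={project.taskListId} />;
}

function Tasks({ listId }) {
  // ...so request 2 can't start until then: two sequential round trips
  const tasks = useFetch(`/api/task-lists/${listId}`);
  if (!tasks) return <Spinner />;
  return <ul>{tasks.map((task) => <li key={task.id}>{task.name}</li>)}</ul>;
}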

Meta-framework routers converge on statically defined routes.

A popular manifestation of this is file-system-based routing - an intuitive way to map folder structures to URLs and then URLs to specific files in those folders, where a compiler generates routes by traversing the file system.

Having a configuration file define all routes is another simple and type-safe approach.
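As an illustration (the paths and component names are made up, not a specific framework’s conventions), both styles describe the same nested route tree:

// File-system routing maps folders to URL segments...
//   app/
//     layout.js            -> wraps every route
//     dashboard/
//       layout.js          -> wraps /dashboard/*
//       settings/
//         page.js          -> /dashboard/settings
//
// ...while a config-based router spells out the same nested tree explicitly:
const routes = [
  {
    path: '/',
    component: RootLayout,
    children: [
      {
        path: 'dashboard',
        component: DashboardLayout,
        children: [{ path: 'settings', component: SettingsPage }],
      },
    ],
  },
];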

These route definitions naturally form a hierarchical tree structure. Most routers create a nested route tree that maps URL segments to corresponding component sub-trees. We’ll see next why this is important to note, and how leveraging the URL is key to many server-first data loading patterns.

The new backend of frontend

After we’ve mapped the URL to a component tree, we’ll need to load code and data. As we’ve seen, a big theme of the meta-frameworks is reconciling the performance benefits of server-driven application state without giving up the capabilities of a component-based rich client-side approach.

Co-location is a big part of the component model and its ability to compose easily. There’s a trade-off here between the portability of components that manage their own data dependencies, i.e., fetch their own data, but risk creating unnecessary waterfalls when composed, versus components that accept data as a prop (or a promise), with fetching hoisted up to the route level.

We’ve talked about Relay being an example of a highly capable “back of the frontend” client-side library, allowing data co-location with components, but with hoisted fetches. This capability comes at a complexity and bundle-size cost and requires GraphQL. Let’s understand how these trade-offs are navigated when moving to the server, without bundling a client-fetching library or using GraphQL.

Best friends forever

Backend for frontend (BFF) is a design pattern familiar in a service-oriented backend environment.

The basic idea is that a tailored backend service sits directly behind each client platform (web, mobile, CLI, etc) and caters to the specific needs of that frontend application.

For example, with a server-driven approach like HTMX, the backend responds with HTML partials to a thin client that performs AJAX-style updates.

In the case of RSCs, it’s a tailored backend that returns serialized component trees to a thin, or thick client on a diet, depending on the experience.

Let’s understand some benefits of running this layer on the server.

  • Simplify client bundles by keeping most data fetching and data transformation logic on the server, including heavy transformation libraries (date formatting, internationalization, etc.), and any tokens or secrets out of code sent to browsers.

  • Compose multiple data requirements, prune data, and avoid over-fetching. As we saw with Suspense, slower API calls can be streamed down into a Suspense boundary and not block rendering.

  • Empower frontenders with a similar DX to GraphQL solutions by allowing product developers to specify the precise data requirements for an experience.

  • Leverage URL state - A golden rule of state management in a reactive system is to store the minimal representation of state and use that to derive additional states.

    We can apply this principle to the URL, where individual URL segments map to component sub-trees, and the current states of components within them. For example, with query params that map to the current search filter or currently selected options.

    From a performance perspective, managing state this way allows this application layer to fetch all code and data upfront, close to the server where the data lives.

    So in most cases, by the time we run on the client, we have all the information we need ahead of time without having to request back to the server from the client. This is a good position to be in for both initial loads and subsequent transitions.

    This also means we’re taking full advantage of the power of the web as a distribution mechanism — shareable links via URL that provide consistency in what you see and ensure that when you share it, other people also see things like selected filters, open modals, etc.

    With this in mind, the URL is a great place to store certain kinds of client state if it allows you to fetch code or data ahead of time.
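As a hedged sketch of leveraging URL state on the server (loosely modeled on how a server-rendered route might receive parsed query params; the component names and the fetchProducts helper are hypothetical):

// URL search params as the minimal state, read on the server ahead of time.
export default async function SearchPage({ searchParams }) {
  // minimal state lives in the URL...
  const filter = searchParams.filter ?? 'all';
  const sort = searchParams.sort ?? 'newest';

  // ...everything else is derived from it, close to where the data lives
  const products = await fetchProducts({ filter, sort });

  return <ProductGrid products={products} activeFilter={filter} />;
}

Sharing the URL now shares the selected filters too, since the view is derived entirely from it.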

Cache rules everything around me

Moving these layers out of the client means we can do more and cache more on the server. A fundamental principle of performance is do less. One way to do this is to do as much work ahead of time as possible and store the results in a cache.

There are multiple types of caches (and deeper layers within caches) that are important to understand at a high level.

A public cache stores the results of work where data isn’t sensitive or personalized. An example is a public CDN where the HTML output of a server build is cached.

A private cache is only accessible to individual users (or separate cohorts of users). Examples include an in-memory client-side cache of remote data, or the browser’s native HTTP cache.
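As a concrete sketch of how this split is expressed over HTTP (Node-style handlers; the render functions and req.user are hypothetical, and the right directives depend on the content):

// Public: safe for CDNs and shared caches, since every user gets the same HTML.
function handleMarketingPage(req, res) {
  res.setHeader('Cache-Control', 'public, s-maxage=60, stale-while-revalidate=300');
  res.end(renderMarketingPage());
}

// Private: personalized, so only the user's own browser may cache it.
function handleAccountPage(req, res) {
  res.setHeader('Cache-Control', 'private, max-age=0, must-revalidate');
  res.end(renderAccountPage(req.user)); // req.user assumed from some auth middleware
}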

A major source of complexity in any system is state management. In the frontend a good chunk of that is managing the frontend’s synchronization with the remote data it interacts with, which is really a type of cache management.

The new remote data cache

As we’ve seen, having an in-memory cache in the browser as the source of truth for views allows for optimistic writes for fast interactions. Whenever we have caches, we need to understand how they are invalidated. Let’s examine the different ways to interact with the client-side cache.

  • Manual cache management: This involves manually managing a normalized cache using state management tools like Redux. It requires imperative, direct cache updates for optimistic UI, which often need to be updated again when the response comes back.

  • Key-based invalidations remove the need for manual management. An example of a best-in-class tool for this is React Query, which handles a bunch of other tricky cache management problems. Apollo and Relay take a similar approach in that everything is handled for you under the hood.
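As a hedged sketch of the key-based approach with React Query (the query key and the fetchTodos/createTodo functions are placeholders):

// Key-based invalidation: reads are cached by key, and writes simply mark
// those keys stale so the library refetches them (placeholder API functions).
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query'

function useTodos() {
  return useQuery({ queryKey: ['todos'], queryFn: fetchTodos })
}

function useAddTodo() {
  const queryClient = useQueryClient()
  return useMutation({
    mutationFn: createTodo,
    // no manual patching of a normalized store; just invalidate the key
    onSuccess: () => queryClient.invalidateQueries({ queryKey: ['todos'] }),
  })
}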

Moving this layer to the server means moving the primary source of truth for the views. Having seen how cache management is done in a client model, let’s understand how it’s done in a server-first one.

Cache invalidation and Server actions

In a “traditional” request-response model, writes that update server state are associated with navigations because the browser needs to render a new document after updates. A typical pattern is the POST, redirect, GET request flow.

<!-- browser sends form data to the url passed to "action" -->
<form action="form_action.php" method="post">
  <!-- fields -->
</form>

Most frameworks converge on this pattern as a starting default for performing writes. This makes it easier for an SPA to be progressively enhanced (a PESPA).

The form’s action attribute takes a URL for an endpoint that receives the form data sent by the browser. Frameworks like Remix and SvelteKit send writes with form data to route-level server actions, whereas Next and SolidStart allow calling server actions anywhere in the component tree, making them more akin to RPCs.
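As a rough sketch of the route-level flavor (loosely Remix-flavored; the db helper and field names are made up):

// A progressively enhanced form posting to a route-level server action.
import { redirect } from '@remix-run/node';
import { Form } from '@remix-run/react';

export async function action({ request }) {
  const formData = await request.formData();
  await db.task.create({ title: formData.get('title') }); // hypothetical db helper
  return redirect('/tasks'); // the classic POST, redirect, GET flow
}

export default function NewTask() {
  return (
    // works without JavaScript as a plain form POST; with JS, the framework
    // intercepts the submit and updates the page in place
    <Form method="post">
      <input name="title" />
      <button type="submit">Add task</button>
    </Form>
  );
}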

Once we’ve written to the server state (the database and any server caches), instead of returning a brand new document, the client framework uses its reactivity system to diff the response and update the page in place.

One benefit of returning data encoded into a view, instead of just data, is that the response can return the updated UI in a single server round trip, compared to the browser having to do another GET after it receives the redirect to update the view; this is a benefit React Server Components have, as we’ll see next.

This approach is much more straightforward compared to manually managing a client-side cache, and also doesn’t require bundling a data-fetching library. But as we saw earlier, the request-response model has coarse-grained updates at the route (or nested route) level.

This is a good default for a large percentage of experiences. However, for certain features, we might still need the benefits of fine-grained cache management and client-side data loading.

For example, when polling, or when the coarse-grained request-response flow doesn’t map nicely to what you are building and you want to avoid re-running the server component or loader functions on writes.

The benefit of server actions that can be used anywhere in the module graph is that you can mix and match approaches where it makes sense. For example, you could hydrate a client-side cache with the results of a server action.

// client fetching and caching with RPC-style server actions
import { useQuery } from '@tanstack/react-query'

useQuery({
  queryKey: ['cool-beans'],
  // any function that returns a promise
  queryFn: () => myServerActionThatReturnsJson(),
})

There’s much more nuance in this area that we will need more time to explore. Let’s finish up by understanding some new capabilities React Server components provide combined with server actions, and how they intersect with other emerging technologies.

Multi-dimensional components

React server components are a significant paradigm shift. In their nascent stage, they’ve been hard to follow because there are many different ways to conceptualize them.

From an islands architecture perspective, various flavors of server components distinct from React are also being explored in other framework ecosystems like Nuxt and Deno’s Fresh.

All of the trade-offs React makes are in pursuit of preserving the component model and the compositional powers that come with it. Another way to understand them at an architectural level is as componentized BFFs.

From the client’s perspective, RSCs are components that have run ahead of time, e.g., during a static build, or on the server before the client runs.

A simple mental model is to think about them as serialized components. The idea of running React off the main thread by serializing the output of components has been brewing for some time.

This new capability allows React to express multiple architectural styles:

A static site built ahead of time, a server-driven architecture with HTMX-style AJAX updates, progressively enhanced SPAs, or a purely client-rendered SPA with a single entry point. Or all of these in the same application, depending on the specific experience. Beyond the potential performance wins, let’s explore some other interesting benefits of this kind of fluid architecture.
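As a minimal, hedged sketch of the “componentized BFF” idea (the db helper and component names are hypothetical): a server component runs ahead of time, keeps its data access on the server, and composes an interactive client component into its serialized output.

// A server component composing a client component (illustrative names only).
// This runs ahead of time on the server; only its serialized output and the
// client component's code are sent to the browser.
import { LikeButton } from './like-button'; // a module marked with 'use client'

export default async function PostPage({ params }) {
  const post = await db.post.find(params.slug); // server-only data access
  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.body}</p>
      {/* interactive island hydrated on the client */}
      <LikeButton postId={post.id} initialLikes={post.likes} />
    </article>
  );
}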

  • Composition across the network

    Server components provide the ability to share and compose full-stack slices of functionality and a new kind of frontend authoring experience.

    Within organizations, this is a new take on the model of separate frontend and backend teams, aligning more closely with teams that work in full-stack vertical slices or steel threads.

    For large organizations with standardized infrastructure, there’s a compelling use case for full-stack platform components that can be consumed and composed by product teams. The ability to compose the output of RSCs in a federated model is another emerging capability.

    It’s unclear how this will play out at the ecosystem level, but it will undoubtedly bring interesting changes to component API design. For example, packages may also export preload functions that can be hoisted up to the route level to avoid server waterfalls. Because this is a new paradigm, many best practices still need to be explored, and pitfalls to be found.

  • Server-driven UIs

    This is a concept that some large organizations like Airbnb and Uber utilize for finer-grained control of server-driven rendering of their native mobile frontends.

    The introduction of react-strict-dom provides an interesting combination of React Native and RSCs to more easily leverage these ideas across platforms beyond the web, including emerging platforms like AR and spatial user interfaces.

  • Generative AI UIs

    It’s hard to predict how the future of generative AI will play out in this space. But it’s here to stay. An emerging capability in this model is the ability to dynamically generate highly personalized, rich interactive experiences on the fly.

    A more down-to-earth example is situations where you need data before you know what components to render. Because the number of UI component types can grow indefinitely with something like this (e.g. CMS content types), this kind of dynamic rendering would otherwise require sending all the component code down to the client ahead of time, or introduce delays when lazy loading the different component types from the client.

    Having components end-to-end means we can stream components without growing massive bundles. An interesting exploration here uses AI function calling alongside the flexibility of server actions to return serialized interactive components.

The future of frontend

We covered a lot of ground again in this post and barely scratched the surface of some of the fundamental layers in a web application framework. This is not to mention how technologies like WebAssembly and WebGPU may come into play in unexpected ways. Or how other ecosystems outside the big Javascript frameworks make different trade-offs with stateful server approaches or the emergence of local-first development.

It’s exciting to be at the forefront of all these technologies. However, it’s also easy to get overwhelmed.

An essential skill to develop is identifying the inherent complexity of a problem, and the incidental complexity that arises from the solutions to that problem. For frontend young bloods, this means narrowing your focus to concepts that are foundational and unchanging.

A big part of engineering (and life) is making decisions and committing to a direction. The more you know about the users’ and team’s needs, the better trade-offs you can make and the more confident you will be in your decisions.

A key insight into understanding the evolution of technology is that it doesn’t always mean progress for all.

A capability or approach in one phase of innovation may be the exact sweet spot for what you are creating; if that’s the case, just keep on building and may the force be with you.

