
The reason the web changes so fast, and there are so many rewrites, is the same reason a puzzle whose pieces don't fit together keeps getting shifted around and restarted.

People are looking for a satisfying non-leaky abstraction to build upon and they don't find it with web technologies. They get close, but those last few pieces never quite fit, and we lack the power to reshape the pieces, so we tear out all the pieces and try again. Maybe this next time we'll find a better way to fit them together.



This is insightful. I remember thinking after the first generation of SPA frameworks, like Backbone, Ember, and (somewhat later) AngularJS, that maybe the second generation (React, Vue, etc.) would get it all sorted out and we'd arrive at stability and consensus. But that hasn't happened. The next generation was better in some ways, worse in a few, and still not quite right in many others.

Of course I hear plenty of people complaining that building apps on top of hypertext is a fundamental mistake and so we can't expect it to ever really work, but the way you put it really made it click for me. The problem isn't that we haven't solved the puzzle, it's that the pieces don't actually fit together. Thank you.


> [...] we'd arrive at stability and consensus.

Honestly, that's basically happened, but just a generation later.

Most modern frameworks are doing very similar things under the hood. Svelte, SolidJS, and modern Vue (i.e. with Vapor mode) all basically do the same thing: they turn your template into an HTML string with holes in it, and then update those holes with dynamic data using signals to track when that data changes. Preact and Vue without Vapor mode also use signals to track reactivity, but use VDOM to rerender an entire component whenever a signal changes. And Angular is a bit different, but still generally moving over to using signals as the core reactivity mechanism.
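To illustrate the shared core, here's a minimal sketch of the signal idea in TypeScript. The helper names are hypothetical, not any particular framework's real API:

    // A minimal signal sketch: track which effect reads a value, rerun it on writes.
    type Subscriber = () => void;
    let currentEffect: Subscriber | null = null;

    function signal<T>(value: T) {
      const subscribers = new Set<Subscriber>();
      return {
        get(): T {
          // Track whichever effect is currently running as a dependency.
          if (currentEffect) subscribers.add(currentEffect);
          return value;
        },
        set(next: T) {
          value = next;
          // Notify only the effects that actually read this signal.
          subscribers.forEach((fn) => fn());
        },
      };
    }

    function effect(fn: Subscriber) {
      currentEffect = fn;
      fn(); // run once so the signal getters can register this effect
      currentEffect = null;
    }

    // Usage: fill one "hole" in the template with dynamic data.
    const count = signal(0);
    const hole = document.querySelector("#count")!;
    effect(() => {
      hole.textContent = String(count.get());
    });
    count.set(1); // only this text node updates; nothing rerenders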

The largest differences between these frameworks at this point are how they do templating (i.e. JSX vs HTML vs SFCs, and syntax differences) and the size of their ecosystems. Beyond that, they are all increasingly similar, and converging on very similar ideas.

The black sheep of the family here is React, which is doing basically the same thing it always did, although even there I notice an increasing number of libraries and tools that are basically "use signals with React".

I'd argue the signal-based concept fits really well with the DOM and browsers in general. Signals ensure that the internal application data stays consistent, but you can still have DOM elements (or components) with their own state, like text boxes or accordions. And because templates in many of these frameworks return real DOM nodes rather than VDOM objects, you're generally working directly with browser APIs when necessary.
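For instance, continuing the hypothetical signal/effect helpers sketched above, an input element can own its transient state while a signal owns the application data:

    // The template produced a real DOM node, so browser APIs apply directly.
    const query = signal("");
    const input = document.createElement("input");
    document.body.append(input);

    // The element keeps its own transient state (focus, cursor, selection)...
    input.focus();

    // ...while the signal keeps the application data consistent, in both directions.
    input.addEventListener("input", () => query.set(input.value));
    effect(() => {
      if (input.value !== query.get()) input.value = query.get();
    });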


To egotistically comment on my own comment (unfortunately I can't edit any more):

This is all specific to frontend stuff, i.e. how you build an application that mostly lives on the client. Generally, the answer to that seems to be signals* and your choice of templating system, unless you're using React, in which case the answer is pure render functions plus additional mechanisms to attach state.

Where there's more exploration right now is how you build an application that spans multiple computers. This is generally hard — I don't think anyone has demonstrated an obvious way of doing this yet. In the old days, we had fully client-side applications, but syncing files was always done manually. Then we had applications that lived on external servers, but this meant that you needed a network round-trip to do anything meaningful. Then we moved the rendering back to the client which meant that the client could make some of its own decisions, and reduced the number of round trips needed, but this bloats the client, and makes it useless if the network goes down.

The challenge right now, then, is trying to figure out how to share code between client and server in such a way that

    (a) the client doesn't have to do more than it needs to do (no need to ship the logic for rendering an entire application just to handle some fancy tabs);
    (b) the client can still do lots of useful things (optimistic updates, frontend validation, etc.);
    (c) both client and server can be modelled as a single application rather than two different ones, potentially with different tooling, languages, and teams;
    (d) the volatility of the network is always accounted for and does not break the application.
This is where most of the innovation in frontend frameworks is going. React has its RSCs (React Server Components, components that are written using React but render only on the server), and other frameworks are developing their own approaches. You also see it more widely with the growth of local-first software, which approaches the problem from a different angle (can we get rid of the server altogether?) but is essentially trying to solve the same core issues.
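As a rough sketch of the RSC split (the file layout, db helper, and component names here are hypothetical; the 'use client' directive is real):

    // page.tsx -- a server component: runs only on the server, so it can
    // query the data store directly and ships no JavaScript for itself.
    import { FancyTabs } from "./fancy-tabs";
    import { db } from "./db"; // hypothetical data layer

    export default async function Page() {
      const articles = await db.articles.list(); // no client round-trip
      return (
        <main>
          <h1>Articles</h1>
          {/* Only this subtree ships JavaScript to the browser. */}
          <FancyTabs tabs={articles.map((a) => a.title)} />
        </main>
      );
    }

    // fancy-tabs.tsx -- a client component: interactive state lives here.
    "use client";
    import { useState } from "react";

    export function FancyTabs({ tabs }: { tabs: string[] }) {
      const [active, setActive] = useState(0);
      return (
        <div role="tablist">
          {tabs.map((tab, i) => (
            <button key={tab} aria-selected={i === active} onClick={() => setActive(i)}>
              {tab}
            </button>
          ))}
        </div>
      );
    }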

And importantly here, I don't think anyone has solved these issues yet, even in older models of software. The reason this development is ongoing has nothing to do with the web platform (which is an incredible resource in many ways: a sandboxed mini-OS with incredibly powerful primitives); it's that this is a very old problem and we still need to figure out what to do about it.

* Or some other reactivity primitive or eventing system, but mostly signals.


Frontend web development is effectively distributed systems built on top of markup languages and backwards-compatible scripting languages.

We are running code on servers and clients, communicating between the two (crossing the network boundary), while our code often runs on millions of distributed hostile clients that we don't control.

It's inherently complex, and inherently hostile.

From my view, RSCs are the first solution to acknowledge these complexities and redesign the paradigms closer to first principles. That comes with a tougher mental model, because the problem space is inherently complex. Every prior or parallel solution attempts to paper over that complexity with an over-simplified abstraction.

HTMX (and Rails, PHP, etc.) leans too heavily on the server, client-only libraries give you no access to the server, and traditional JS SSR frameworks attempt to treat the server as just another client. Astro works because it drives you towards building largely static sites (leaning aggressively on build-time and server-side routing).

RSCs balance most of these incentives and give you the power to access each environment at build-time and at run-time (at the page level or even the component level!). They make each environment fully powerful (server, client, and both), and they manage to solve streaming (suspense and complex serialization) and diffing (navigating client-side while maintaining state and UI).

But people would rather lean on lazy tropes, as if RSCs only exist to sell server cycles or to further frontend complexity. No! They're just the first solution to accept that complexity and give developers the power to wield it. Long-term, I think people will come to learn the mental model and understand why RSCs exist. As some React core team members have said, this is kind of the way we should have always built websites: once you return to first principles, you end up with something that looks similar to RSCs [0]. I think others will solve these problems with simpler mental models in the future, but it's a damn good start and doesn't deserve the vitriol it gets.

[0] https://www.youtube.com/watch?v=ozI4V_29fj4


Except RSC doesn't solve for apps, it solves for websites, which means its server-first model leads you to slow-feeling websites, or lots of glue code to compensate. That, alongside the immensely complex restrictions, leaves me wondering why it exists or has any traction, other than as a technical exercise and a new thing for people to play with.

Meanwhile, sync engines seem to actually solve these problems: distributed data syncing and client-side needs like optimistic updates, while also letting you avoid the complexity. And you can keep your server-first rendering.

To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX), and the only reason I think anyone really likes RSC is that there is so much money behind it, and relatively little behind sync engines. That said, I don't blame people for not even mentioning them, as they are newer. I've been working with one for the last year and it's an absolute delight, and probably the first genuine leap forward in frontend dev in the last decade, since React.
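For a sense of the model being described, a rough sketch (this createSyncClient API is entirely hypothetical, not Zero's or any real engine's interface):

    // Hypothetical sync-engine client: the library keeps a local replica
    // of the data and syncs it with the server in the background.
    import { createSyncClient } from "some-sync-engine"; // hypothetical package

    const client = createSyncClient({ server: "wss://example.com/sync" });

    // Reads hit the local replica, so they are instant and work offline.
    const openTodos = client.query("todos").where("done", false);

    // Writes apply locally first (optimistic by default); the engine then
    // pushes them to the server and reconciles conflicts when it reconnects.
    client.mutate("todos").insert({
      id: crypto.randomUUID(),
      text: "Ship it",
      done: false,
    });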


> Except RSC doesn't solve for apps, it solves for websites

This isn't true, because RSCs let you slide back into classic react with a simple 'use client' (or lazy for pure client). So anywhere in the tree, you have that choice. If you want to do so at the root of a page (or component) you can, without necessarily forcing all pages to do so.

> which means its server-first model leads you to slow feeling websites, or lots of glue code to compensate

Again, I don't think this is true. What makes you say it's slow-feeling? Personally, I feel it's the opposite: my websites (and apps) are faster than before, with less code, because server-component data fetching solves the waterfall problem and co-locating data retrieval closer to your APIs or data stores means faster round-trips. And for slower fetches, you can use suspense and serialize promises over the wire to prefetch, then unwrap those promises on the client, showing loading states while JSX and data stream from the server.
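A sketch of that streaming pattern (Suspense and the use() hook are real React APIs; the component and fetcher names are hypothetical):

    // post.tsx -- server component: start the slow fetch, but don't await it.
    import { Suspense } from "react";
    import { Comments } from "./comments";
    import { fetchComments } from "./data"; // hypothetical fetcher

    export default function Post({ id }: { id: string }) {
      // The unresolved promise is serialized over the wire to the client.
      const commentsPromise = fetchComments(id);
      return (
        <article>
          <h1>Post {id}</h1>
          <Suspense fallback={<p>Loading comments...</p>}>
            <Comments promise={commentsPromise} />
          </Suspense>
        </article>
      );
    }

    // comments.tsx -- client component: unwrap the streamed promise with use().
    "use client";
    import { use } from "react";

    export function Comments({ promise }: { promise: Promise<string[]> }) {
      const comments = use(promise); // suspends until the data streams in
      return <ul>{comments.map((c) => <li key={c}>{c}</li>)}</ul>;
    }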

When you do want to do client-side data fetching, you still can. RSCs are also compatible with "no server", i.e. running your "server" code at build-time.

> To me it's a choice between lose-lose (complex, worse UX) and win-win (simpler, better UX)

You say it's worse UX but that does not ring true to my experience, nor does it really make sense as RSCs are additive, not prescriptive. The DX has some downsides because it requires a more complex model to understand and adds overhead to bundling and development, but it gives you back DX gains as well. It does not lead to worse UX unless you explicitly hold it wrong (true of any web technology).

I like RSCs because they unlock UX and DX (genuinely) not possible before. I have nothing to gain from holding this opinion, I'm busy building my business and various webapps.

It's worth noting that RSCs are an entire architecture, not just server components. They are server components, client components, boundary serialization and typing, server actions, suspense, and more. And these play very nicely with the newer async client features like transitions, useOptimistic, activity, and so on.
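For example, a hedged sketch of a server action paired with useOptimistic (both are real React 19 APIs; the addTodo/TodoList names and db layer are hypothetical):

    // actions.ts -- a server action: callable from the client, runs on the server.
    "use server";
    import { db } from "./db"; // hypothetical data layer

    export async function addTodo(text: string) {
      await db.todos.insert({ text });
    }

    // todo-list.tsx -- a client component with an optimistic update.
    "use client";
    import { useOptimistic } from "react";
    import { addTodo } from "./actions";

    export function TodoList({ todos }: { todos: string[] }) {
      const [optimisticTodos, addOptimisticTodo] = useOptimistic(
        todos,
        (current, next: string) => [...current, next],
      );

      async function submit(formData: FormData) {
        const text = formData.get("text") as string;
        addOptimisticTodo(text); // the UI updates immediately...
        await addTodo(text);     // ...while the server action runs.
      }

      return (
        <form action={submit}>
          <input name="text" />
          <ul>
            {optimisticTodos.map((t) => (
              <li key={t}>{t}</li>
            ))}
          </ul>
        </form>
      );
    }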

> Meanwhile, sync engines seem to actually solve these problems

Sync engines solve a different set of problems and come with their own nits and complexities. To say they avoid complexity is shallow because syncing is inherently complex and anyone who's worked with them has experienced those pains, modern engines or not. The newer react features for async client work help to solve many of the UX problems relating to scheduling rendering and coordinating transitions.

I'm familiar with your work and I really respect what you've built. I notice you use zero (sync engine), but I could go ahead and point to this zero example as something that has some poor UX that could be solved with the new client features like transitions: https://ztunes.rocicorp.dev

These are not RSC-exclusive features, but they show that sync engines don't solve all the UX problems you claim they do without coordinating work at the framework level. Happy to connect and walk you through what a better UX for this functionality would look like.


Definitely disagree on most of your points here. I think you don't touch at all on optimistic mutations, and you don't put enough weight on the extreme downsides RSC forces on your code organization, the limits and downsides of forcing server trips, or the huge downsides of opting out (yes you can, but now you have two ways of writing everything and two ways of dealing with data, or you can't share data/code at all). In effect it's all or nothing; otherwise you really are duplicating a ton, and then the DX is even worse.

Many of the features, like transitions and all the new concepts, are workarounds you just don't really need when your data is mostly local and optimistically mutated. And the ztunes app is a tiny demo, but of course you could easily server-render it, split transitions, and do all sorts of things to make it a more comparable demo against what I assume you think are its downsides vs RSC.

I think time will show that RSC was a bad idea, like Redux, which I also predicted would not stand the test of time: interesting in theory but too verbose and cumbersome in practice, while other ways of doing things have too many advantages.

The problems they solve overlap more than enough, and once you have a sync engine giving you optimistic mutations for free, free local caching, and free realtime sync, you look at what RSC gives you above SSR and there's really no way to justify the immense conceptual burden and concrete downsides (two worlds and essentially function coloring, forced server trips, lack of routing control). I just bet it won't win. Though given the immense investment by two huge companies, it may take a while for that to become clear.


The web is simultaneously not really designed for apps and yet a sort of golden environment for app development, because it is the one thing that ships on nearly every consumer machine and behaves more or less the same, backed by an actual standard.

People bemoan the lack of native development, but the consuming public (and the devs trying to serve them) really just want to be able to do things consistently across phones, laptops, and other computing devices, and the web is the most consistent and most battle-tested thing available.


The resolution is in design, not engineering. Instead of tirelessly working around the web as a sort of "broken mobile", design with its strengths.

The difficulty is finding designers who understand web fundamentals.


>People are looking for a satisfying non-leaky abstraction to build upon and they don't find it with web technologies. They get close, but those last few pieces never quite fit, and we lack the power to reshape the pieces, so we tear out all the pieces and try again. Maybe this next time we'll find a better way to fit them together.

Also keep in mind that the web-standards puzzle is itself changing all the time, trying to make the pieces fit better, while developers design abstractions to catch up.

That's how you get the XMLHttpRequest -> Ajax -> axios -> fetch progression, and the history.replaceState situation.

In general, SPAs have pushed the web towards a less archive-friendly place. And PWA != SPA.


I think you're mixing some things up there.

The history of making HTTP requests in the browser has only been two APIs: XMLHttpRequest -> fetch. Fetch is an upgraded version of XMLHttpRequest with a promise-based API and better async/streaming support.

Ajax was a word used to describe the technique of making HTTP requests in the browser, and axios is a third-party library that wraps different APIs on different platforms to provide a unified interface. These were never separate browser APIs.
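For the record, the same GET request in both browser APIs:

    // XMLHttpRequest: the original, callback-based API.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/items");
    xhr.onload = () => console.log(JSON.parse(xhr.responseText));
    xhr.send();

    // fetch: the promise-based upgrade, usable with async/await.
    const res = await fetch("/api/items");
    console.log(await res.json());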


Developers certainly are prey to that impulse, but management tends to want ROI... Rewriting existing apps / services / etc. that work and have been refined over time is usually not a money-maker (unless they can't scale, say). But it is good for PMs and PdMs to say "my team built Z in just two months! (which does do exactly what Y did, but we lost the guy who wrote that...)"


And that's why React is the dominant framework in companies, and why frameworks are either passion projects from open-source developers (like Svelte) or come from big tech, which has the resources for it (FB & React, Google & Angular, MS & Blazor).


Rewriting apps is a huge money-maker. You need pretty much constant UI churn and product churn or consumers move on. This isn't necessarily the case with B2B, which is why we see a lot more "legacy" (aka stable) applications.


Clearly what we need is more, faster automated code generation.


You seem to be conflating JavaScript with web technologies. Lots of other web tech doesn't shift nearly as much as the JavaScript space does.




