Just my opinion, but server‑side rendering never really went away; the web is finally remembering why it was the default. First paint and SEO are still better when markup comes from the server, which is why frameworks as different as Rails + Turbo, HTMX, Phoenix LiveView, and React Server Components all make SSR the baseline. Those projects have shown that most dashboards and CRUD apps don’t need a client router, global state, or a 200 kB hydration bundle—they just need partial HTML swaps.
The real driver is complexity cost. Every line of client JS brings build tooling, npm audit noise, and another supply chain risk. Cutting that payload often makes performance and security better at the same time. Of course, Figma‑ or Gmail‑class apps still benefit from heavy client logic, so the emerging pattern is “HTML by default, JS only where it buys you something.” Think islands, not full SPAs.
So yes, the pendulum is swinging back toward the server, but it’s not nostalgia for 2004 PHP. It’s about right‑sizing JavaScript and letting HTML do the boring 90 % of the job it was always good at.
Having a server provide an island or rendering framework for your site can be more complex than an SPA with static assets and nginx.
You still have to deal with all the tooling you are talking about, right? You’ve just moved the goalpost to the BE.
And just like the specific use cases you mentioned for client routing I can also argue that many sites don’t care about SEO or first paint so those are non features.
So honestly I would argue for SPA over a server framework as it can dramatically reduce complexity. I think this is especially true when you must have an API because of multiple clients.
I think the DX is significantly better as well with fast reload where I don’t have to reload the page to see my changes.
People are jumping into Next.js because React is pushing it hard, even though it's a worse product and the motives behind the push are questionable.
> As a user, the typical SPA offers a worse experience.
Your typical SPA has loads of pointless roundtrips. SSR has no excess roundtrips by definition, but there's probably ways to build a 'SPA' experience that avoids these too. (E.g. the "HTML swap" approach others mentioned ITT tends to work quite well for that.)
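For illustration, this is roughly what that "HTML swap" pattern looks like with HTMX; the endpoint and markup here are hypothetical, but the point is a single round trip that returns a ready-to-insert fragment:

    <!-- One round trip: the server returns an HTML fragment, HTMX swaps it in. -->
    <div id="list">
      <!-- hx-get issues the request; keeping a real href means middle-click
           and "open in new tab" still behave like a normal link. -->
      <a href="/auctions?page=2"
         hx-get="/auctions?page=2"
         hx-target="#list"
         hx-swap="innerHTML">Next page</a>
    </div>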
The high compute overhead of typical 'vDOM diffing' approaches is also an issue of course, but at least you can pick something like Svelte/Solid JS to do away with that.
My biggest annoyance with SPAs is that they usually break forward/back/history in various subtle (or not so subtle) ways.
Yes, I know that this can be made to work properly, in principle. The problem is that it requires effort that most web devs are apparently unwilling to spend. So in practice things are just broken.
A small but silly one: breaking middle and right click functionality for links.
An auction site I use loads the list of auctions after the rest of the page, and also doesn't let you open links with middle click or right click > new tab, because the anchor elements don't have href attributes. So that site is a double dose of having to open auctions in the same tab, then going back to the list page and losing my place in the list due to the late load-in and failure to save my scroll position.
I would submit this as product feedback if you haven't. One of my favorite things as a dev working on client-facing things is when I get negative feedback that presumably has a pretty easy fix to at least part of it ("add 'href' to these links") where I can pretty quickly make someone's life a little easier.
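Something like this keeps the in-page navigation while restoring native link behavior; a sketch, where openAuctionInPlace stands in for whatever client-side routing the site actually does:

    <a href="/auction/123" class="js-auction-link">Vintage lamp</a>

    <script>
      document.addEventListener('click', (e) => {
        const link = e.target.closest('a.js-auction-link');
        if (!link) return;
        // Let ctrl/cmd/shift/alt-clicks (and middle clicks, which fire
        // "auxclick" rather than "click") keep the browser's default behavior.
        if (e.metaKey || e.ctrlKey || e.shiftKey || e.altKey) return;
        e.preventDefault();
        openAuctionInPlace(link.getAttribute('href')); // hypothetical in-page navigation
      });
    </script>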
This is not exclusive with an SPA. Even MPAs/SSR apps can have this issue. But I guess MPAs are probably not built with post load interactivity in mind and maybe that's why its less prevalent there.
This issue doesn't get enough attention; apart from the obvious bad UX, I find myself losing interest in a project after realising it's broken in so many subtle and non-subtle ways due to the underlying tech. I, like many others, got into programming for the joy of creating something beautiful, and attempting to follow (influencer-led) JS trends nearly killed my interest in this field for a while.
I still have some ptsd from payment gateway integrations via iframes about 6-7 years ago. If you thought SPAs are bad by themselves for history tracking imagine those banking iframes randomly adding more entries via inside navigation/redirection that you have to track manually.
A lot can be said for just putting a "back" button on a page. I still do it occasionally for this very reason. Then again, my user base for the apps I write are the most non-technical folks imaginable, so many of them have no concept of a browser back button to begin with. I am not being hyperbolic either.
I’m split on this. I used to agree with you but when I talked to internal users and customers, they really liked having a back button in the app. I would tell them the browser back button is there and we haven’t broken history so it should work to which they just often shrug and say they “just” prefer it.
My hypothesis is that they’ve had to deal with so many random web apps breaking the back button so that behaviour is no longer intuitive for them. So I don’t push back against in-app back buttons any more.
I think you're right on the money—those bad web apps that told people emphatically "do NOT use your browser's back button!" did the rest of us a lot of damage, as I really do agree that it trained many people to never press it unless they actually want to leave the app they're using.
I myself am guilty of (about 14 years ago now) giving an SPA a "reload" button, which had it go and fetch clean copies of the current view from the server. It was a social app; new comments and likes would automatically load in for the posts already visible, but NEW posts would NOT be loaded in, as they would cause too much content shift if they were to load in automatically.
Admittedly this is not a great solution, and looking back on it now, I can think of like 10 different better ways to solve that issue… but perhaps some users of that site are seeing my comment here, so yeah, guilt admitted haha.
It's okay if both buttons do the same thing. But OP (if I understood them correctly) proposed the in-app Back button as a hacky solution to the problem of browser one being broken, which kinda implies that they don't behave the same.
> Your typical SPA has loads of pointless roundtrips
This is an implementation choice/issue, not an SPA characteristic.
> there's probably ways to build a 'SPA' experience that avoids these too
PWAs/service workers with properly configured caching strategies can offer a better experience than SSR (again, when implemented properly).
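For example, a minimal stale-while-revalidate worker (assuming you register sw.js yourself): cached GETs answer instantly while the cache refreshes in the background.

    // sw.js -- stale-while-revalidate sketch for GET requests.
    const CACHE = 'app-v1';

    self.addEventListener('fetch', (event) => {
      if (event.request.method !== 'GET') return;
      event.respondWith(
        caches.open(CACHE).then(async (cache) => {
          const cached = await cache.match(event.request);
          const network = fetch(event.request)
            .then((res) => {
              if (res.ok) cache.put(event.request, res.clone()); // refresh in background
              return res;
            })
            .catch(() => cached); // offline: fall back to whatever we had
          // Serve from cache when possible, otherwise wait for the network.
          return cached || network;
        })
      );
    });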
> The high compute overhead...
I prefer to do state management/reconciliation on the client whenever it makes sense. It makes apps cheaper to host and can provide a better UX, especially on mobile.
Except for a user on a lower specced device that can’t performantly handle filtering and joining on that mass of data in JS code, or perhaps can’t even hold the data in memory.
> Except for a user on a lower specced device that can’t performantly handle filtering and joining on that mass of data in JS code, or perhaps can’t even hold the data in memory.
Just how low-spec and/or how much state-data are we talking about here? I ask only because I am downloading an entire dataset and doing all the logic on the client, and my PC is ancient.
I'm on a computer from 2011 (i7 870 @ 2.9GHz with 16GB of RAM), and the client-side filtering I do, even on a few tens of thousands of records retrieved from the server, still takes under a second.
On my private app, my prospect list containing maybe 4k records, each pretty substantial (as they include history of engagements/interactions with that client) is faster to sort and filter on the client than it is to download the entire list.
I am usually waiting for 10s while the hefty dataset downloads, but the sorting and filtering happens in under a second. I do not consider that a poor UX.
10s is painful. A server-rendered app should be able to deliver that data, already rendered, in closer to a fifth of a second. Fast enough that the user doesn’t even notice any wait.
> 10s is painful. A server-rendered app should be able to deliver that data, already rendered, in closer to a fifth of a second.
How do you know how large the dataset is? All you know from my post is that a dataset that takes 10s to download (I'm indicating the size of it here!) takes under a second to filter and sort.
My point is that if your client-code is taking long to filter and sort, then your dataset is already so large that the user has been waiting a long time for it already; they already know that this dataset takes time.
FWIW, the data is coming in as CSV, compressed, so it's as small as possible. It's not limited by the server. Having it rendered by the server will increase the payload substantially.
The JS processing and rendering time on an underpowered CPU is the issue, not the payload size. It's difficult to describe how excruciatingly slow some seemingly simple e-commerce and content sites are to render on my 2019 laptop, how slowly they react to something as simple as a mouseover, or how they peg the CPU, while absolutely massive and complex server-rendered HTML loads and renders in an eyeblink.
Yes, pure old school SPAs have at least one additional roundtrip on the first visit of the site:
1. Fetch index.html
2. Fetch js, css and other assets
3. Load personalized data (json)
But usually step 1 and 2 are served from a cdn, so very fast. On subsequent requests, 1 and 2 are usually served from the browser cache, so extremely fast.
SSR is usually not faster; most often it's slower. You can check for yourself in your browser dev tools (network tab).
> Your typical SPA has loads of pointless roundtrips. SSR has no excess roundtrips by definition
SSR also has excess round trips by nature. Without Javascript, posting a form or clicking a like button refreshes the whole page even though a single <span> changed from a "12 likes" to "13 likes".
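That gap is exactly what the partial-swap libraries close. A hedged HTMX sketch (endpoint and IDs are made up): the server answers the POST with just the replacement span, so one small round trip updates one element instead of the whole page.

    <button hx-post="/posts/42/like" hx-target="#likes-42" hx-swap="outerHTML">Like</button>
    <span id="likes-42">12 likes</span>
    <!-- The server responds with: <span id="likes-42">13 likes</span> -->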
This exactly. It seems like the last 10 years of JavaScript framework progress has been driven by DX, not UX. Like at some point everyone forgot this crap just needs to work at the end of the day, no user benefits from 3 rewrites over 5 years because the developer community decided functions are better than classes.
In my view, DX should be renamed "Developer Convenience" as we all know that convenience is often a trade-off.
Please forgive the self-promotion but this was exactly the premise of a conference talk I gave ~18 months ago at performance.now() in Amsterdam: https://www.youtube.com/watch?v=f5felHJiACE
We have been moving to localized cache stores and there aren't any client side loaders anymore outside of the initial cache generation. Think like Linear, Figma, etc
It just depends on what you are after. You can completely drop the backend and APIs and have a real-time, WebSocket-based sync layer that goes direct to the database. There is a row-based permissions layer still here for security, but you get the idea.
The client experience is important in our app and a backend just slows us down we have found.
>and have a real time web socketed sync layer that goes direct to the database
you might be able to drop a web router but pretending this is "completely drop[ping] the backend" is silly. Something is going to have to manage connections to the DB and you're not -- I seriously hope -- literally going to expose your DB socket to the wider Internet. Presumably you will have load balancing, DB replicas, and that sort of thing, as your scale increases.
This is setting aside just how complex managing a DB is. "completely drop the backend" except the most complicated part of it, sure. Minor details.
I assumed they meant a client side DB and then a wrapper that syncs it to some other storage, which wouldn't be terribly different than say a native application the relies on a cloud backed storage system.
Which is fine and cool for an app, but if you do something like this for say, a form for a doctor's office, I wish bad things upon you.
> We have been moving to localized cache stores and there aren't any client side loaders anymore outside of the initial cache generation. Think like Linear, Figma, etc.
There's no way around waiting for the data to arrive, be it JSON for an SPA or another page for an MPA / SSR app. For MPAs the browser provides the loading spinner. Some SPA router implementations stay on the current page and route to the new one only after all the data has arrived (e.g. SvelteKit).
With SSR all the data usually arrives in 100-200ms, in SPAs all the data tends to take seconds to arrive on first load so they resort to spinners, loading bars etc.
If you employ a "preload then rehydrate" data sync paradigm then you should never see a blank page -- except on initial JS load. This is just an improper data sync strat and has nothing to do with SPA.
"Devs are doing SPAs wrong" is irrelevant when 90% of devs do SPAs that way. That it's wrong doesn't help the fact that most SPAs have garbage user experience.
The obsession with DX tooling is exactly why JS is such an awful developer experience. They always chase something slightly better and constantly change things.
Maybe the answer was never in JS eating the entire frontend, and changing the tooling won’t make it better, as it’s always skirting what’s actually good for the web.
> The obsession with DX tooling is exactly why JS is such an awful developer experience.
I used to agree but these days with Vite things are a lot smoother. To the point that I wouldn't want to work on UI without fine-grained hot reloads.
Even with auto reload in PHP, .NET, etc you will be wasting so much time. Especially if you're working on something that requires interaction with the page that you will be repeating over and over again.
> Especially if you're working on something that requires interaction with the page that you will be repeating over and over again.
That’s honestly not that many things IRL. If you look at all the things you build, only a minority actually demand high interactivity or highly custom JS. Otherwise, existing UI libraries cover the bulk of what people actually need to do on the internet (i.e., not whatever overly fancy original idea the designers think is needed for your special product).
It’s mostly just dropdowns and text and tables etc.
Once you try moving away from all of that and questioning if you need it at every step you’ll realize you really don’t.
It should be server driven web by default with a splattering of high functionality islands of JS. That’s what rails figured out after changing the frontend back and forth.
> Even with auto reload in PHP, .NET, etc you will be wasting so much time
Rails has a library that will refresh the page when files change without a full reload, using Turbo/Hotwire. Not quite HMR but it’s not massively different if your page isn’t a giant pile of JS, and loads quickly already.
> What if you have a modal opened with some state?
Stimulus controllers can store state.
> Or a form filled with data?
Again, you can either use a Stimulus controller, or you can just render the data into the form response, depending on the situation.
> Or some multi-selection in a list of items that triggers a menu of actions on those items?
So, submenus? Again, you can either do it in a Stimulus controller (you can even trivially do things like provide a new submenu on the fly via Turbo), or you can pre-render the entire menu tree server-side and update just the portion that changes.
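For anyone who hasn't used Stimulus: state lives on the element and the controller reacts to changes. A minimal sketch of the modal case (all names are illustrative):

    // modal_controller.js
    import { Controller } from "@hotwired/stimulus"

    export default class extends Controller {
      static targets = ["dialog"]
      static values = { open: Boolean } // stored as data-modal-open-value

      toggle() {
        this.openValue = !this.openValue
      }

      // Stimulus invokes this automatically whenever openValue changes.
      openValueChanged() {
        this.dialogTarget.hidden = !this.openValue
      }
    }

And the server-rendered HTML it attaches to:

    <div data-controller="modal" data-modal-open-value="false">
      <button data-action="modal#toggle">Edit</button>
      <div data-modal-target="dialog" hidden>modal content here</div>
    </div>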
> I used to agree but these days with Vite things are a lot smoother.
Didn't everybody say the exact same thing about Node, React, jQuery...? There is always a new and shiny frontend JS solution that will make the web dev of old obsolete and everyone loves it because it's new and shiny, and then a fresh crop of devs graduates school, the new shiny solution is now old and boring, and like a developer with untreated ADHD, they set out to fix the situation with a new frontend framework, still written in JavaScript, that will solve it once and for all.
I still build websites now the same as I did when I graduated in 2013. PHP, SQL, and native, boring JavaScript where required. My web apps are snappy and responsive, no loading bars or never-ending-spinning emblems in sight. shrug
Except you can't really build PWAs with those technologies and most web content is now consumed on mobile. I used to do it like that as well, but clients want a mobile app and management decided to give them a PWA, because then we could use the existing backend (Perl, Mojolicious, SQL). I now agree with them if it keeps the lights on.
> I used to do it like that as well, but clients want a mobile app and management decided to give them a PWA
I'm quite surprised to hear this is a common thing. Besides myself, I don't know a single person who has ever installed a PWA. People in tech don't, despite knowing they exist; people outside tech don't know they exist in the first place.
Does management actually have any PWAs installed themselves?
People outside tech just get installation instructions and do not care if it’s app store or something else. This is how sanctioned Russian banks continue to serve their customers via apps, when they cannot get into app store. The number of users of PWA is probably on the scale of millions.
It definitely makes complete sense in that scenario, but remains a very niche usecase where people have no other option.
>People outside tech just get installation instructions
People outside of tech don't need instructions to install non-PWA, store apps. So all this does to me is reinforce that no one is installing PWAs outside of niche scenarios where 1. people basically have to use the app due to a connection to a physical institution 2. they are explicitly told how to do it 3. the app is not available on the stores for legal reasons.
> People outside of tech don't need instructions to install non-PWA, store apps.
Depends on age and tech awareness. Many still do, when they cannot rely on a family member to do it for them.
Overall, installing a PWA is no more complicated than getting something from a store.
They don't want to be subject to app store approval policies, shitty TOS, nor pay Google or Apple a 30% cut. Installing the app is easy: visit the web site, click the install banner, add to home screen, and you're good to go. On the developer side you get to deploy as often as needed.
Yes, the service worker thing is annoying, but you possibly don't need it if you have a server backend. It's basically a glorified website with a home screen icon. Most of the native vehicle, asset, or fitness tracking apps need a backend anyway, and they fail miserably when disconnected from the network.
> They don't want to be subject to app store approval policies, shitty TOS, nor pay Google or Apple a 30% cut. Installing the app is easy: visit the web site, click the install banner, add to home screen, and you're good to go. On the developer side you get to deploy as often as needed.
We don't care about people clicking it as it's not tiktok but an app that complements a certain hardware solution. If you don't have the hardware, you don't need the app.
Eh, I recently stumbled into an open bug in npm/Vite and wasted two days before just reinstalling everything and re-creating the frontend app. Hot UI reloads are cool, but such things kill any productivity improvements.
There’s just no way for the abominations that are HTML, JS, and CSS to be used in an accessible and maintainable way. It’s absurd that we haven’t moved on to better technologies in the browser or at least enabled alternatives (I weep for Silverlight).
'Twas before my time. What was so great about it? I remember needing it installed for Netflix like 15 years ago. Did you ever work with Flash? How was that?
If you ever worked seriously on anything non-SPA you would never, ever claim SPAs "dramatically reduce complexity". The mountain of shit you have to pull in to do anything is astronomical even by PHP's standards, and I hate PHP. Those days were clean compared to what I have to endure with React and friends.
The API argument never sat well with me either. Having an API is orthogonal: you can have one or not, and you can have one alongside an SSR app. In the AI age an API is the easy part anyway.
I disagree, the problem with an SPA is that now you have two places where you manage state (the backend and the frontend). That gives you much more opportunity for the two places to disagree, and now you have bugs.
No you really don’t. I’ve worked on exceptionally complex legacy applications with essentially no state in the front end. At most, you’re looking at query parameters. You just make everything a full page reload and you’re good to go.
You don't need an SPA to handle incrementing a counter. If a page needs dynamic behavior you add JS to it, whether it's just adding an in-memory counter or an API call to store and retrieve some data. It's not difficult to write JavaScript.
The problem with SPAs is that they force having to maintain a JS-driven system on every single page, even those that don't have dynamic behavior.
> You don't need an SPA to handle incrementing a counter. If a page needs dynamic behavior you add JS to it, whether it's just adding an in-memory counter or an API call to store and retrieve some data. It's not difficult to write JavaScript.
I agree with this. Sprinkle in the JS as and when it is needed.
> The problem with SPAs is that they force having to maintain a JS-driven system on every single page, even those that don't have dynamic behavior.
I don't agree with this: SPAs don't force "... having to maintain a JS-driven system on every single page..."
SPA frameworks do.
I think it's possible to do reasonably simple SPAs without a written-completely-in-JSX-with-TypeScript-and-a-5-step-build-process-that-won't-work-without-25-npm-dependencies setup.
I'm currently trying out a front-end mechanism to go with my high-velocity back-end mechanism. I think I've got a good story sorted out, but it's early days and while I have used my exploratory prototype in production, I've only recently iterated it into a tiny and neat process that has no build-step, no npm, and no JS requirement for the page author. All it uses is `<script src=...>` in the `<head>`, with no more JS on the rest of the page.
A codebase doesn't need that toolset to be an SPA. An SPA is just a website where all the site's functionality is done on the "root page", and it uses JS to load the data, handle navigation, etc. Doesn't matter whether that's all done through React in TypeScript and compiled by Vite or by handrolled JavaScript fetched in .js files.
> A codebase doesn't need that toolset to be an SPA.
That's kinda the goal I'm trying to reach. If you know of any SPA that doesn't come with all the baggage and only uses `<script src=...>`, by all means let me know.
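The skeleton really can be that small. A no-build sketch under those constraints (routes and markup are placeholders):

    <div id="app"></div>
    <script>
      // Hash-based routing plus innerHTML rendering: an SPA with zero tooling.
      const routes = {
        '#/':      () => '<h1>Home</h1>',
        '#/about': () => '<h1>About</h1>',
      };
      function render() {
        const view = routes[location.hash || '#/'];
        document.querySelector('#app').innerHTML = view ? view() : '<h1>Not found</h1>';
      }
      addEventListener('hashchange', render);
      addEventListener('DOMContentLoaded', render);
    </script>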
True, I shouldn't have said in memory. As the GP mentioned, you can store the counter value in a URL param. There are ways to achieve dynamic behavior without having to load or store values into JS memory.
That is more work both for the developers and the servers though. You need to re-render the whole page every change, rather than make a local change (or a tiny request if it needs to persist)
You misunderstood what I was saying. I was saying that you could write some plain old JS to catch the increment event and update the URL and the UI, plus some JS to read the data from the URL on page load and set the UI. No new server render, and that's maybe 5 minutes of writing JavaScript (compared to, say, setting up a React project and instantiating that whole beast from the page root down to the specific UI element that needs to be dynamic).
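Roughly like this, as a sketch (element IDs are made up):

    <button id="increment">+1</button> <span id="count"></span>
    <script>
      // Hydrate the counter from the URL on load...
      const params = new URLSearchParams(location.search);
      let count = parseInt(params.get('count') ?? '0', 10);
      document.querySelector('#count').textContent = count;

      // ...and write it back on every click; no server round trip needed.
      document.querySelector('#increment').addEventListener('click', () => {
        document.querySelector('#count').textContent = ++count;
        const url = new URL(location.href);
        url.searchParams.set('count', String(count));
        history.replaceState(null, '', url.toString());
      });
    </script>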
What's the business usecase for incrementing a counter?
We can sit here all day and think up counterexamples, but in the real world what you're doing 99% of the time is:
1. Presenting a form, custom or static.
2. Filling out that form.
3. Loading a new page based off that form.
When I open my bank app or website, this is 100% of the experience. When I open my insurance company website, this is 100% of the experience. Hell, when I open apartments.com, this is like 98% of the experience. The 2% is that 3D view thingy they let you do.
> What's the business usecase for incrementing a counter?
Notification count in the top right?
Remaining credit on an interactive service (like the ChatGPT web interface)?
So, maybe two(!) business use-cases out of thousands, but it's a pretty critical two use-cases.
I agree with you though - do all normal HTML form submissions, and for those two use-cases use `setInterval` to set them from a `fetch` every $X minutes (where you choose the value for $X).
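i.e. something like this (endpoint and element are hypothetical); everything else on the page stays plain HTML forms:

    // Poll the server for the badge count; failures just keep the stale number.
    async function refreshBadge() {
      try {
        const res = await fetch('/notifications/count');
        if (!res.ok) return;
        const { count } = await res.json();
        document.querySelector('#notification-badge').textContent = count;
      } catch {
        // Offline or flaky network: leave the last known value in place.
      }
    }
    refreshBadge();
    setInterval(refreshBadge, 5 * 60 * 1000); // $X = 5 minutes here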
In my experience it's just exceedingly rare to require this. My insurance company website has a notification thing, and it's actually static. You need to refresh the page, and considering how few and far between notifications are, and how common refreshes are, it works fine.
There's an entire domain of apps where you truly need a front-end. Any desktop-like application. Like Google Sheets, or Figma. Where the user feedback loop is incredibly tight for most operations.
Let's be honest -- the alternative is an API call with poor or no error-handling with the brilliant UX of either hanging with an endless loading indicator, or just flat out lying that the counter was incremented...
> or just flat out lying that the counter was incremented...
Which is what HN does and it sucks. It's very common for me to vote on a couple things and then after navigating around I come back to see that there are comments that don't have a vote assigned.
Of course the non-JS version would be even more annoying. I would never click those vote buttons if every vote caused a full page refresh.
You absolutely did. It was common practice to stuff things in cookies or query strings to retain state between trips to the server so that some JS could do its job.
Every form also normally ends up duplicating validation logic both in JS for client-side pre-submit UX and server-side with whatever errors it returns for the JS to then also need to support and show to the user.
Right, but validation logic and state transferred by the server isn't in-memory state. The fact that the pages completely reload on each request clears a lot of cruft that doesn't get cleared on pages whose lifetime is tens or hundreds of views.
Every SPA I come across, especially when using React, uses persistent state so that in-memory changes are synced to cookie/localStorage/server so they survive refreshes. Every popular state management library even supports this natively. And all of that state combined still requires less memory than any of the images loaded, or the JS bundles themselves.
I absolutely loathe that. State is the source of most bugs. If the page crashes then refreshing it should clear out the state and let me try again.
Anecdotally, it seems like I encounter a lot more web apps these days where refreshing doesn’t reset the state, so it’s just broken unless I dig into dev tools and start clearing out additional browser state, or removing params from the URL.
Knock it off with all the damn state! Did we forget the most important lesson of functional programming; that state is the root of all evil?
I’d rather have sluggish UI with proper feedback than potentially inconsistent states which I often experience with modern SPAs. At least that represents reality. Just today I was posting an ad on the local classifieds page, and the visual (client) state convinced me that everything was fast and my photos are uploaded. Turned out all state was lost and never reached the server, and I had to redo everything again.
Phoenix LiveView works pretty well without client-side state. Sure, if you just have a toggleable mobile menu you might sprinkle some JS for it, but other state lives on the server and the delta is sent to the client via WebSockets.
Fortunately every browser made in the last 25 years supports keepalive. e.g. Firefox (and, according to the reporter of this bug, Chrome) won't even let you disable it[0].
I've been a professional programmer for ~20 years and worked in a variety of languages on a variety of different types of projects, and TypeScript with Bun is mostly just fine. It lacks some low-level primitives I'd like to have available for certain projects (e.g. Go channels), and the FFI interface isn't as nice as I'd like, but it's basically serviceable for a very broad range of problems.
You should still know a language like Rust or Zig for systems work, and if you want to work in ML or data management you probably can't escape Python, but Typescript with Bun provides a really compelling development experience for most stuff outside that.
I agree. Nowadays I work on a mostly TS backend, with some parts in JS written before async/await was introduced, and I'm inclined to say TS is better than Python at most backendy things. Pretty much all I miss are SQLAlchemy and a sane numerical tower.
Python suffers from the same problems: its type system has many escapes and implicit conversions, making soundness opt-in and impossible to statically verify. Any language with an implicit cast from its bottom type to an upper type is unsuitable for use.
It reminds me of an older dev I met when I was just beginning who had worked even more years and said Fortran 95 was "fine". And he could use it to build pretty much anything. That doesn't mean that more powerful language features couldn't have increased his productivity (if he learned them).
There's something to be said for using the right tool for the job. There's also something to be said for maximizing your ability to hire developers. Software is a game of tradeoffs, and while I can and do still pick up modern hotness when warranted (e.g. Zig), sometimes the path to minimum total software cost (and thus maximum company value) is to take well trodden paths.
As a fun side anecdote: if you're doing scientific computing in a variety of fields, Fortran 95 is mostly still fine ;)
No, it’s footgunny and riddled with bugs. Most JS barely works and edge cases just aren’t addressed.
I’ve seen undefined make it all the way to the backend and get persisted in the DB. As a string.
JS as a language just isn’t robust enough and it requires a level of defensive programming that’s inconvenient at best and a productivity sink at worst. Much like C++, it’s doable, but things are bound to slip through the cracks. I would actually say overall C++ is much more reasonable.
You really need to learn a language to use it. As for undefined vs null, I find the distinction useful, particularly in a DB setting. Was the returned value null? You know, because the value is null. Did you actually load it from the database? Sure, because the value is not undefined.
> I would actually say overall C++ is much more reasonable.
This is where I know that some people are not actually programming in either of these languages, but just writing meme-driven posts.
JS has a few footguns. Certainly not so many that it's difficult to keep in your head, and not nearly as complex as C++, which is a laughable statement.
You've "seen null make it to the database," but haven't seen the exact same thing in C++? Worse, seen a corrupted heap?
I haven't seen null make it to the database, I've seen undefined. And here you demonstrate one of many problems - there's multiple null types!
In C++, there's only one null, nullptr. But most types can never be null. This is actually one area where C++ was ahead of the competition. C# and Java are just now undoing their "everything is nullable" mistakes. JS has that same mistake, but twice.
It's not about complexity, although that matters too. C++ is certainly more complex, I agree, but that doesn't make it a more footgunny language. It's far too easy to make mistakes in JS and propagate them out. It's slightly harder to make mistakes in C++, if you can believe it. From my experience.
C# introduced nullable reference types back in 2019, so it's been some time and now the vast majority of the ecosystem uses null-aware code. The only remaining warts are around a. codebases which refuse to adopt this / opt out of it and b. serialization.
It's not a different reality. To give perspective to what JS I've dealt with - I worked a couple years on a legacy webapp. It used vanilla JS and the only library used was jQuery. It heavily used iframes for async functionality in combination with XSLT to translate backend XML apis to HTML.
Opening up a 10K-line JS file is like jumping into the ocean. Nothing is obvious, nothing makes sense. You're allowed to just do whatever the fuck in JS. Bugs were always ephemeral. The behavior of the code was impossible to wrap your head around, and it seemed to change under your feet when you weren't looking.
Now, the backend was written in old C++. And yes, it was easier to understand. At least, I could click and go to definition. At least, I could see what was going in and out of functions. At least, I could read a function and have a decent understanding of what it should be doing, what the author's intention is.
The front end, spread across a good thousand JS files, was nothing of the sort. And it was certainly more buggy. Although, I will concede, bugs in C++ are usually more problematic. In JS usually it would just result in UI jankyness. But not always.
If you already know another backend language and framework, all you need to do is tell LLM or some code generator to convert your models between languages. There is very little overhead that way.
I greatly prefer Java with Spring Boot for larger backend projects.
That's not a JavaScript issue. It's the same for almost any language where you don't use some bignum type/library. This is something all developers should be extremely aware of.
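The classic JS example is integer precision past Number.MAX_SAFE_INTEGER, which BigInt sidesteps:

    // Number silently loses integer precision above 2^53 - 1.
    console.log(9007199254740992 + 1);   // 9007199254740992 (wrong)
    console.log(9007199254740992n + 1n); // 9007199254740993n (BigInt: correct)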
I have repeated this elsewhere. APIs for UI tend to diverge from APIs in general in practice.
For applications that are not highly interactive, you don't quite need a lot of tooling on the BE, and since you need to have a BE anyway, a lot of standard tooling is already in there.
React style SPAs are useful in some cases, but most apps can live with HTMX style "SPA"s
So here's the kicker: React Server Components don't need a server. They are completely compatible with a static bundle and still provide major benefits should you choose to adopt them (dead code elimination, build-time execution). This is effectively the design of Astro Islands, natively in React Server Components. Letting you write static and client-side dynamic code in a single paradigm through componentization and composition.
If you are curious, my most recent blog post is all about this concept[0] which I wrote because people seem to be misinformed on what RSCs really are. But that post didn't gain any traction here on HN.
Is it more complex? Sure–but it is also more powerful & flexible. It's just a new paradigm, so people are put off by it.
You can accomplish the "don't have to reload the page to see my changes" workflow with htmx, and it's still "server-side rendering" (or mostly server-side rendering). Legendarily, the fastest website on the internet uses partial page caching to achieve its speed.
What do you like about HTMX? I come from a world of plain JS usage -- no SPAs or the like. I just felt like HTMX was a more complicated way to write what could be simple fetch() requests.
Yeah, the JS could technically be shorter, but your example is functional enough to get the point across.
Going with your example, how would you do proper validation with HTMX? For example, the input element's value cannot be null or empty. If the validation fails, then a message or something is displayed. If the validation is successful, then that HTML is replaced with whatever?
I have successfully gotten this to work in HTMX before. However, I had to rely on the JS API, which is outside the realm of plain HTML-attribute-based HTMX. At that point, especially when you have many inputs like this, the amount of work one has to do with the HTMX JS API starts to look a lot like the script tag in your example, but I would argue it's actually much more annoying to deal with.
I appreciate the suggestion. Not sure I am a fan of this implementation though. It looks near identical to the HTMX JS API that is already backed into HTMX. Most of the annoyances I dealt with were around conditional logic based on validation.
After enough of the HTMX JS API, I figured, "What is HTMX even buying me at this point?" Even if plain JS is more verbose, that verbosity comes with far less opinions and constraints.
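For reference, the attribute-only version of inline validation looks something like this (endpoints are made up; the server returns an HTML fragment that is either empty or contains the error message, and the "next .error" relative target needs a reasonably recent HTMX):

    <form hx-post="/signup" hx-target="this" hx-swap="outerHTML">
      <input name="email" type="email" required
             hx-post="/validate/email"
             hx-trigger="keyup changed delay:300ms"
             hx-target="next .error"
             hx-swap="innerHTML">
      <span class="error"></span>
      <button type="submit">Sign up</button>
    </form>

Where this stops being enough -- cross-field rules, conditional UI -- is exactly where you end up in the JS API, which is a fair criticism.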
With an SPA you're writing two apps that talk to each other instead of one. That is, by definition, more complex.
> You still have to deal with all the tooling you are talking about, right? You’ve just moved the goalpost to the BE.
Now you're dealing with 2 sets of tooling instead of 1.
> And just like the specific use cases you mentioned for client routing I can also argue that many sites don’t care about SEO or first paint so those are non features.
There is no app which would not care about first paint. It's literally the first part of any user experience.
> So honestly I would argue for SPA over a server framework as it can dramatically reduce complexity. I think this is especially true when you must have an API because of multiple clients.
So SEO and first paint are not necessary features, but an API for multiple clients is? Most apps I've worked with for over 15 years of web dev never needed to have an API.
> I think the DX is significantly better as well with fast reload where I don’t have to reload the page to see my changes.
With backend apps the reload IS fast. SPA's have to invent tooling like fast reload and optimistic updates to solve problems they created. With server apps, you just don't have these problems in the first place.
If you truly need MVC to manage all things state, component communication, and complex IxD in the front-end, sure, but not every app has that level of front-end complexity to warrant an SPA, in my opinion.
As somebody with expert-level knowledge of MVC frameworks like Ruby on Rails and Phoenix, and experience building large-scale enterprise-grade apps using simpler front-end technologies like jQuery, Stimulus, and plain old JavaScript (with a little bit of React thrown in here and there), I found development cycles to be much faster with these simpler stacks overall. The complexity of the code base never turned into a liability creating significant overhead or bottlenecks for new engineers joining the team to jump in and understand the end-to-end workflows.
Fast forward to what I am doing today in my new job. We have a pretty complex setup using RedwoodJS along with several layers of abstraction over GraphQL (which I approve of) and a ton of packages and modules tied together on the front end with React, Storybook, etc., and some things I am not even sure why they are there.
I see new engineers joining our team and banging their heads to make even the smallest of changes, having to touch code in multiple different places to implement new features. I find myself doing similar things as well from time to time, and I always can't help but compare this to working in those MVC frameworks, where it was ridiculously easy to just throw logic in a controller and a service layer and put the UI stuff in the view templates. It all fit in so easily, and shipping features was super simple and quick.
I wouldn't discount React as a framework, but I am also starting to see some cracks caused by using TypeScript on the backend. This entire JavaScript world seems to be a mess you don't want to mess with. This is probably just me with an opinion, but using Turbo, Stimulus, and sprinkles of LiveView got me really, really far very quickly.
Interesting, because I think jQuery, although a nightmare, is a much smaller one than today's stack of React single-page apps. Everything from bundling to package management and the hell of modules and dependencies seems to be too much to maintain. I am probably going to be okay taking it on the front end, but I cannot take JavaScript on the back end.
The good news is GraphQL is very quick and easy to pick up and it gives that inbuilt functionality to fetch exactly the amount of data that we need. On top of it, it also has enough flexibility to integrate with your business logic.
So it can be a straightforward replacement for a traditional REST API that you would have to manually build.
As for disadvantages, I cannot think of any serious ones. It is a bit slower than hand-rolling your own REST API, but the difference is not severe enough to make you give up on it.
GraphQL APIs can easily DoS your backend if you don't configure extra protections (which are neither bulletproof nor enabled by default), they suffer from N+1 inefficiencies by default unless you write a ton of extra code, and they require extra careful programming to apply security rules on every field, which can get very complex very fast.
On the plus side, it does offer communication advantages if you have entirely independent BE and FE teams, and it can help minimize network traffic in network-constrained scenarios such as mobile apps.
Personally, I have regretted using GraphQL every time.
My biggest gripe is losing the entire layer of semantics that HTTP gives you. POST is the only verb and different error states are conveyed via error objects in the returned JSON.
This is probably true, and it can only be uncovered by rigorous testing. There is a bunch of layers of abstraction that won't be very obvious if you are using GraphQL as opposed to rolling your own REST API.
> Of course, Figma‑ or Gmail‑class apps still benefit from heavy client logic, so the emerging pattern is “HTML by default, JS only where it buys you something.” Think islands, not full SPAs.
> Every line of client JS brings build tooling, npm audit noise, and another supply chain risk.
IME this is backwards. All that stuff is a one-off fixed cost, it's the same whether you have 10 lines of JS or 10,000. And sooner or later you're going to need those 10 lines of JS, and then you'll be better off if you'd written the whole thing in JS to start with rather than whatever other pieces of technology you're using in addition.
Many interactions are simply better delivered from the client. Heck, some can only be delivered from the client (e.g. image uploading, drag and drop, etc.).
With HTMX, LiveViews, etc there will be challenges integrating server and client code... plus the mess of having multiple strategies handling different parts of the UI.
HTMX has a very nice drag and drop extension I just found, though. And old-school forms can include image files. The little image preview can be a tiny "island of JS" if you have to have it.
> The little image preview can be a tiny "island of JS" if you have to have it.
I would consider that the bare acceptable minimum along an upload progress indicator.
But it can get a lot more complicated. What if you need to upload multiple images? What if you need to sort the images, add tags, etc? See for example the image uploading experience of sites like Unsplash or Flickr.
HTMX just isn't the right tool to solve this unless you're ready to accept a very rudimentary UX.
None of what you described requires anything more than an isolated island with some extra JS. No need for complex client-side state, no need for a SPA framework, no bundling required, not even TypeScript. If you relied on DOM and a couple of hidden fields, 90% of this would be a few dozen lines of code plus some JSDoc for type safety.
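To make "a few dozen lines" concrete, here's a preview-island sketch for a plain multipart form (IDs are illustrative):

    <input type="file" id="photos" name="photos" accept="image/*" multiple>
    <div id="previews"></div>

    <script>
      // Previews for a normal <form enctype="multipart/form-data">; no framework.
      document.querySelector('#photos').addEventListener('change', (e) => {
        const container = document.querySelector('#previews');
        container.replaceChildren();
        for (const file of e.target.files) {
          const img = document.createElement('img');
          img.src = URL.createObjectURL(file);
          img.onload = () => URL.revokeObjectURL(img.src); // free the blob once drawn
          img.width = 120;
          container.append(img);
        }
      });
    </script>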
I could be misremembering, but didn't browsers used to have this built in? Like there used to be a status bar that showed things like network activity (before we moved to a world where there is always network activity from all of the spying), upload progress, etc.
I don't remember if it was in Firefox, but SeaMonkey even has a "pull the plug" button to quickly go offline/online in the status bar.
Bizarre that "progress" is removing basic functionality and then paying legions of developers to re-add it in inconsistent ways everywhere.
How so? I’ve found that Phoenix LiveView has made integrating the server and client code much simpler. It’s dramatically reduced the need to write JavaScript in general, including for things like image uploads. Or are you speaking of one of its many clones?
This is not my field, but my mental model was that server side mostly died when mobile apps started being mainstream, and treating the web app as another frontend for your common api was considered the best way to handle client diversity.
Was this not the case? And if so, what has fundamentally changed?
It's one of those things that's like "write one HTML file with zero styling, then you can have multiple different CSS files style the same content completely differently! Separation of Concern!" Sounds perfect in theory but just doesn't always work.
Having one API for web and mobile sounds good but in practice often the different apps have different concerns.
And SEO and page speed were always reasons the server never died.
In fact, the trend is the opposite direction - the server sending the mobile apps their UIs. That way you can roll out new updates, features, and experiments without even deploying a new version.
>In fact, the trend is the opposite direction - the server sending the mobile apps their UIs. That way you can roll out new updates, features, and experiments without even deploying a new version
Is that allowed by app stores? Doesn’t it negate the walled gardens if you can effectively treat the app as a mini browser that executes arbitrary code ?
Expo is the most popular React Native framework; it markets remote updates as a feature, highlighting that they let you skip App Review for updates, and Apple hasn't stopped them (Expo updates aren't exactly server-side mobile UI, but it's a similar idea).
However the code that takes this serialised UI and renders it, and maps the action names to actual code is shipped in the app itself. So, the app stores don't mind it.
This is what the GP is talking about.
It covers a surprising number of use cases, especially since many actions can be simply represented using '<a href>' equivalents -- deeplinks. With Lottie files, even animations are now server-controlled. However, more dynamic actions are still client-controlled and need app updates.
Additionally, any new initiative (think a new feature, or a temporary page for, say, Valentine's Day) is done with webviews. I'm not clued in on the review process for this.
Nevertheless, if your app is big enough then almost every rule above is waived for you and the review process is moot, since once you become popular the platform becomes your customer as much as you are theirs. For example, tiktok ships a VM and obfuscated bytecode for that VM to hinder reverse engineering (and of course, hide stuff)
Figma is a definite yes. But Gmail is an example we've been citing since the late '00s and somehow still use now. I thought it had been proven that we don't need an SPA for an email client. Hey is perfectly fine other than being a little slow, mostly due to server response time rather than Turbo / HTML / HTMX itself.
I still believe we have a long way to go and should innovate on partial HTML swaps. We could push this to the limit so that 98% of the web doesn't need an SPA at all.
Been a web dev for over a decade, and I still use plain JS. I have somehow managed to avoid learning all the SPAs and hyped JS frameworks. I used HTMX for one project, but I prefer plain JS still.
I was a jQuery fan back in the day, but plain JS is nothing to scoff at these days. You are right though: in my experience at least, I do not need anything I write to all happen on a single page, and I am typically just updating something a chunk at a time. A couple of event listeners and some async HTTP requests can accomplish more than I think a lot of people realize.
However, if I am being honest, I must admit one downfall. Any moderately complex logic or large project can mud-ball rather quickly -- one must be well organized and diligent.
People also forget just how far you can get without using client side JavaScript at all today. HTML and CSS have a lot of features built in that used to require JavaScript.
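A modal is the canonical example: the native <dialog> element gets you there with no script beyond the open call.

    <button onclick="document.querySelector('#confirm').showModal()">Delete</button>
    <dialog id="confirm">
      <!-- method="dialog" closes the dialog on submit; no JS handler needed. -->
      <form method="dialog">
        <p>Really delete this item?</p>
        <button value="cancel">Cancel</button>
        <button value="confirm">Delete</button>
      </form>
    </dialog>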
New inputs types have been glacially slow to come out and often underwhelming. Every new HTML thing I've seen (modals, datetime inputs, datalist select, etc) had better JS versions out for years before they released. I understand that the HTML spec is setting a baseline of sorts but most of the UI is ugly and sometimes not themeable/styleable.
The best approach is to use both. Which is why I never understood the pure server side or the pure "reactive" approach. Having to manage rendering in server side code is pure pain, and having to return DOM markup from inside a function is likewise just madness. They both break the separation of concerns.
The first framework I ever got to use was GTK with Glade and QT with designer shortly there after. These, I think, show the correct way to arrange your applications anywhere, but also it works great on the web.
Use HTML and CSS to create the basis of your page. Use the <template> and <slot> mechanisms to make reusable components or created widgets directly in your HTML. Anything that gets rendered should exist here. There should be very few places where you dynamically create and then add elements to your page.
Use javascript to add event handlers, receive them, and just run native functions on the DOM to manage the page. The dataset on all elements is very powerful and WeakMaps exist for when that's not sufficient. You have everything you need right in the standard environment.
If your application is API driven then you're effectively just doing Model-View-Controller in a modern way. It's exceptionally pleasant when approached like this. I have no idea why people torture themselves with weird opinionated wrappers around this functionality, or in the face of an explosion of choices, decide to regress all the way back to server side rendering.
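A minimal sketch of that template-plus-plain-DOM approach (names are illustrative):

    <template id="row-tpl">
      <li><span class="name"></span> <button class="remove">x</button></li>
    </template>
    <ul id="rows"></ul>

    <script>
      // Clone the template, fill it in, wire handlers: no markup built in JS strings.
      const tpl = document.querySelector('#row-tpl');
      function addRow(name) {
        const node = tpl.content.cloneNode(true);
        node.querySelector('.name').textContent = name;
        node.querySelector('.remove').addEventListener('click', (e) =>
          e.target.closest('li').remove());
        document.querySelector('#rows').append(node);
      }
      addRow('first item');
    </script>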
I completely agree with the sentiment that we don’t need SPAs and similar tech for news sites and dashboards and the myriad crud apps we use on a day to day basis but I think what you’re proposing is throwing the baby out with the bath water. How would a site like google maps, which I’m sure we can all agree is extremely useful, work in a Web 1.0 style world? It needs to dynamically load tiles and various other resources. The web being a place where we can host and instantly distribute complex cross-platform interactive software in a fairly secure sandbox is a modern marvel.
Wouldn't this make users pay for every possible feature they could ever use on a given site? For instance, in Google Maps I might use Street View 1% of the time, and the script for it is pretty bulky. In your ideal world, would I have to preload the Street View handling scripts whenever I loaded up Google Maps at all?
If you’re asking if it would incentivize us to be more careful when introducing additional interactive functionality on a page, and how that functionality impacted performance and page speed, I expect it would.
Thinking about how the web was designed today, isn’t necessarily good when considering how it could work best tomorrow.
> If you’re asking if it would incentivize us to be more careful when introducing additional interactive functionality on a page, and how that functionality impacted performance and page speed, I expect it would.
Not quite, I wasn't trying to make a bigger point about is/ought dynamics here, I was more curious specifically about the Google Maps example and other instances like it from a technical perspective.
Currently on the web, it's very easy to design a web page where you only pay for what you use -- if I start up a feature, it loads the script that runs that feature; if I don't start it up, it never loads.
It sounds like in the model proposed above where all scripts are loaded on page-load, I as a user face a clearly worse experience either by A.) losing useful features such as Street View, or B.) paying to load the scripts for those features even when I don't use them.
“Worse” here is relative to how we have designed sites such as Google maps today. The current web would fundamentally break if we stopped supporting scripts after page load, so moving would be painful. However, we build these lazy and bloated monolith SPAs and Electron apps because we can, not because we have to. Other more efficient and lightweight patterns exist, some of us even use them today.
If you can exchange static content, you need very little scripting to be able to pull down new interactive pieces of functionality onto a page. Especially given that HTML and CSS are capable of so much more today. You see a lot of frameworks moving in this direction, such as RSCs, where we now transmit components in a serializable format.
Trade-offs would have to be made during development, and with a complex enough application there would be moments where it may be tough to support everything on a single page. However, I don't think supporting a single page is necessarily the goal or even the spirit of the web. HTML imports would have avoided a lot of unnecessary compilers, build tools, and runtime JS from being created, for example.
How are you going to stop it, when you already are running JS? I can write a VM in JS that I can load, then I can load static assets after the page has loaded, and execute them in the VM. How would you block that?
I am thinking about a different time, when JS did less, and these decisions were being made.
Today, what you are saying is definitely a concern, but all APIs are abused beyond their intended uses. That isn’t to say we shouldn’t continue to design good ones that lead users in the intended direction.
We had that in the form of MapQuest, and it was agonizingly slow. Click to load the next tile, wait for the page to reload, and repeat. Modern SPAs are a revelation.
That ship has sailed. The web is nowadays an application delivery platform and there is no going back. Dynamic loading, iframes, and a whole host of other features all have their uses within that context - the issue is really their misuse and overuse.
"Right-sizing" is probably the most diplomatic take on all tech churn. It's the right way to look at it. It's not that we're done with it once and for all, it's just it's not the end all be all that conferences/blogs/influencers make things out to be. It's more of an indictment of the zealotry behind tech evangelism.
This honestly feels so much closer to the old jQuery approach of using JS for enhancements but ensuring you could gracefully degrade to work without it.
I think the confession that "Figma‑ or Gmail‑class apps still benefit from heavy client logic" is a telling one, and the reason I politely disagree with your thinking is that it relies on the app staying small forever. But that's not what happens. Apps grow and grow and grow.
I've heard people say they just want "Pure JS" with no frameworks at all because frameworks are too complex, for their [currently] small app. So they get an app working, and all is good, right until it hits say 5000 lines of code. Then suddenly you have to re-write it all using a framework and TypeScript to do typing. Better to just start with an approach that scales to infinity, so you never run into the brick wall.
The problem is that people are frequently using SPA JS frameworks for things that are clearly not Gmail or Figma -- i.e. news websites and other sites with minimal interactivity or dynamic behaviour. If you are genuinely building an 'app'-like thing, then of course you need some kind of JS SPA framework, but too often people are reaching for these tools for non-app use cases.
I say if you have any reactivity whatsoever you need a framework. If you don't, your code will be crap, and there's really no getting around that. Once you start doing DOM calls to update the GUI you've got a mess on your hands instantly, and it will only get worse over time.
Yeah, wouldn't want to rewrite the frontend in a new framework. Good thing the SPA frameworks are so stable and solid; when I choose one I will surely be able to use it for a good, oh, 3 to 6 months.
React hooks came out in 2019. That's 6 years ago. And they are still the way to write client components. Unless you're moving everything to server components (which you most likely can't and shouldn't anyways) you would be writing the same react code for 6 years.
We still have Vue 2 apps running strong. Our experimental stuff is on Vue 3, which is backwards compatible with Vue 2 for the most part if you avoided mixins (which even in the Vue 2 days was the common advice).
People who say stuff like this have obviously never actually used modern day FE frameworks, because they have all been very stable for a long while. Yes, APIs change, but that's not unique to JS/frontend, and also nothing really forces you to update with them unless you really need some shiny new feature, and at least IME Vue 3 has been nothing but gold since we got on it.
I agree. My preference is React, but I've got 4 years of Vue experience and so I know Vue is good too, and mature. There are just some people who are anti-framework entirely, and they're never actual professional web developers, but mostly hobbyists or college dabblers, who've never been involved with a large project, or else they'd know how silly their anti-framework attitude is.
This feels sarcastic, but in reality, ever since React switched to using hooks I've largely written the same style of React code for years. You don't have to live on the edge.
Well, any non-framework interactive app that ever reaches the point of being "bloated" is necessarily a train wreck, regardless. You can have massive apps that use a framework and have a good design too. However, if you have a massive app without a framework, then it's absolutely guaranteed to be a dumpster fire. So frameworks help manage bloat, but the lack of a framework fails completely once the project is large.
That’s absurd, that’s like saying we should only use C++ for backend code because my CRUD business app might one day scale to infinity. Better be safe than sorry and sling pointers and CMake just in case I need that extra juice!
imo, even if the only "interactivity" a web app has is just a login page, then even that alone is enough to warrant using a framework rather than doing direct DOM manipulation (or even worse, full page refreshes after a form submit).
It's not about using the most powerful tool always, it's about knowing how to leverage modern standards rather than reinventing and solving problems that are already solved.
I’ve been saying this for a long time. It takes very little effort to spin up a react app so there’s little point in starting a project without it or whatever front-end framework you prefer.
As I’ve become more senior I’ve realized that software devs have a tendency to fall for software “best practices” that sound good on paper but they don’t seem to question their practical validity. Separation of concerns, microservices, and pick the best tool for the job are all things I’ve ended up pushing back against.
In this particular case I’d say “pick the best tool for the job” is particularly relevant. Even though this advice is hard to argue against, I think it has a tendency to take developers down the path of unnecessary complexity. Pragmatically it’s usually best to pick a single tool that works decently well for 95% of your use cases instead of a bunch of different tools.
I agree, just use React from day one. The reality is that web pages are hardly ever perfectly static, and once there's any dynamic nature to them at all you need something like React, or else you'll have a train wreck of JS DOM-manipulation calls before you know it. React is perfect. You just update your state and the page magically re-renders. It's a breeze.
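For illustration, a minimal sketch of that model (the component and its state are made up):

    import { useState } from "react";

    // Update the state, and React re-renders the button from it,
    // with no manual DOM calls anywhere.
    export function Likes() {
      const [likes, setLikes] = useState(0);
      return <button onClick={() => setLikes(likes + 1)}>Likes: {likes}</button>;
    }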
Multiple times in this thread you have been taking the hardline stance that a framework is always necessary while stating that others are saying the same in the opposite direction. In reality, most people seemingly advocating for non-React are actually saying to start simple and add the complexity where and when it’s needed.
Further, being against a bloated framework is not the same as being against frameworks. Those frameworks are really just codified principles. It's possible for a team to come up with, or use, existing principles without adopting a framework.
Finally, "always use React" brings other costs. You need a team to build your system twice (a backend API and a separate frontend client). That means you need bigger teams and more funding to do the same thing, and so on. You add complexity at the team level and at the software level when using frameworks. The person above you said that blindly "following best practices" is bad while stating a "best practice" of always starting with React. That particular "best practice" not always being the best practice is the entire point of this thread.
Your reading of my point is very strange. When I said “best practices” I clearly meant the most commonly repeated best practices. If I cast doubt on those best practices then clearly I intend to replace them with other best practices that I think are better, and that’s what I did. And suggesting better practices doesn’t imply that I think people should blindly follow them.
> Most people seemingly advocating for non-React are actually saying to start simple and add the complexity where and when it’s needed.
In my experience that's actually not the case. That might be what people claim, but in my professional experience some people really don't like frontend work, and they try to avoid frontend frameworks because they think it'll make their work more tolerable. What usually happens is they start out "simple", but pretty quickly product requirements come in that are hard to meet without some framework, and then there's a scramble to add one or hack it into some parts of the app.
Exactly right. The "start simple and add the complexity where and when it's needed" approach is a badly flawed way of thinking in web apps, because once an app becomes too big to manage without BOTH a framework AND a type-safe language (TypeScript), you realize everything you've done up to that point must be reworked line by line. That costs you weeks, you have to retest everything, and you will make mistakes as you try to fix the mess. It's a mess that's easily avoided, just by using a framework from day one.
You can't just switch horses in the middle of the stream. You have to ride back to the ranch, saddle up on a different horse, and restart your entire journey on a better horse.
I went out of my way to say it's only my preference to use React, but that Vue is fine too. So the thing I have a "hardline stance" (your words) on, if anything, is that a framework should be used for any interactive web app.
Having been a web developer for a quarter century, I know how tempting it is (yes, for small projects) to try to just wing it and do everything without a framework, and I know what a tarpit that way of thinking is. If you disagree, then you were certainly welcome to share your own opinion.
I remember reading their blog post last year about how moving from the pages router to the app router in Next.js helped their SEO. This time they are moving from Next to React+Inertia.js because of growing bills from Vercel, even though deploying the same app on your own VPS instead of relying on a cloud provider would probably solve the issue. Nonetheless, I still don't understand the yearning for complexity: does a book tracking app really need GraphQL, a separate frontend framework, and a complicated build process, or could all that have been solved by sticking to a monolithic RoR app with HTML templates deployed on a VPS from the very start?
I stumbled upon hardcover when I was looking for a book info api and saw that goodreads discontinued theirs. Although it's pretty rough around the edges I've been using it extensively since then.
As far as I understand hardcover was really created because goodreads discontinued their api and the team at hardcover saw how many people relied on it for a myriad of different niche projects.
If hardcover was just a replacement for the goodreads platform, then I'd agree with you. But it's not. It's there for the api, with the platform around it intended as a way to ensure free access to the api for everyone.
And from that pov choosing GraphQL makes a lot of sense imo. You can't anticipate all the cool and different things people might want to do with it, so they chose the most flexible api spec they could.
On the other hand, I'm not sure if a complete Rails rewrite was the right choice. The app was slow and sluggish beforehand, with frequent UI glitches, and it still has those same issues. Their dev blog claims significant performance increases, but as a user I haven't noticed a big difference.
Sticking with Next.js, but moving to a self-hosted instance and then iteratively working on performance improvements, would've been (imho) the better way forward. I see no reason why Next.js somehow fundamentally couldn't do what they're trying to do but Rails can. Especially with just 30k users (which, to be clear, is a great achievement, just not impressive from a technical standpoint).
Thanks for the comments! You hit on a lot of why our app is structured the way it is. I agree too, we could've put those investments into Next.js rather than migrating to Rails. The difference was that with Rails I could envision what the endpoint looked like (codebase, costs, caching, dev env, deployment, hosting options, etc). If we were to invest that time in Next.js, some of those answers were (and still are) unclear. Agree we could still get there, it just wouldn't be as clear a path.
That's a fair argument.
And to be clear (because my original comment might read as negative), I do like hardcover a lot. It might not work sometimes, but I still use it to track all my reading, because the ui is charming, because it has a good, open api and because it's very clearly made by people who really like reading.
Wishing you all the success you can get!
Every webapp built with something other than GraphQL ends up with an ad hoc, informally-specified, bug-ridden, slow implementation of half of GraphQL. Yes, a book tracking app absolutely needs GraphQL.
Do you need a separate frontend framework? No, probably not, and that's exactly the problem that Next solves - write your backend and frontend in the same place.
Do you need a complicated build process? No. You want your build process to be just "run npm". And that's what something like Next gets you.
"Monolithic RoR app with HTML templates on VPS" would introduce more problems than it solves. If Next-style frameworks had come first, you would be posting about how RoR is a solution in search of a problem that solves nothing and just overcomplicates everything. And you'd be right.
> Every webapp built with something other than GraphQL ends up with an ad hoc, informally-specified, bug-ridden, slow implementation of half of GraphQL.
Not remotely true. There are plenty of web apps that work just fine with a standard fixed set of API endpoints with minimal if any customization of responses. Not to mention the web apps that don't have any client-side logic at all...
GraphQL solves a problem that doesn't exist for most people, and creates a ton of new problems in its place.
The value of GraphQL is also its downfall. The flexibility it offers to the client greatly complicates the backend, and makes it next to impossible to protect against DoS attacks effectively, or even to understand app performance. Every major implementation of GraphQL I've seen has pretty serious flaws deriving from this complexity, to the point that GraphQL APIs are more buggy than simpler fixed APIs.
With most web apps having their front-end and back-end developed in concert, there's simply no need for this flexibility. Just have the backend provide the APIs that the front-end actually needs. If those needs change, also change the backend. When that kind of change is too hard or expensive to do, it's an organisational failing, not a technical one.
Sure, some use-cases might warrant the flexibility that GraphQL uses. A book tracking app does not.
> With most web apps having their front-end and back-end developed in concert, there's simply no need for this flexibility
But also no problem with it. There might be some queries expressible in your GraphQL that would have severe performance problems or even bugs, sure, but if your frontend doesn't actually make queries like that, who cares?
> Just have the backend provide the APIs that the front-end actually needs. If those needs change, also change the backend.
Sure, but how are you actually going to do that? You're always going to need some way for the frontend to make requests to the backend that pull in related data, so that you avoid making N+1 backend calls. You're always going to have a bunch of distinct but similar queries that need the same kind of related data, so either you write a generic way to pull in that data or you write the same thing by hand over and over. You can write each endpoint by hand instead of using GraphQL, but it's like writing your own collection datatypes instead of just pulling in an existing library.
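To make that concrete, here is a rough sketch of the kind of single-round-trip query being described (the schema and field names are invented):

    // One POST fetches a user plus nested lists and books, where a fixed REST
    // API might need 1 + N + N*M calls, or a bespoke endpoint per screen.
    const query = `
      query UserShelf($id: ID!) {
        user(id: $id) {
          name
          lists {
            name
            books { title author { name } }
          }
        }
      }`;

    const res = await fetch("/graphql", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query, variables: { id: "123" } }),
    });
    const { data } = await res.json();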
> There might be some queries expressible in your GraphQL that would have severe performance problems or even bugs, sure, but if your frontend doesn't actually make queries like that, who cares?
People with bad intentions can make those slow queries happen at high volume with custom tooling, they don’t have to restrict themselves to how the frontend uses the queries
> People with bad intentions can make those slow queries happen at high volume with custom tooling, they don’t have to restrict themselves to how the frontend uses the queries
Depends how your system is set up. I'm used to only allowing compiled queries on production instances, in which case attackers have no way of running a different query that you don't actually use.
Do you understand that decompilers and reverse engineering are a thing?
Adversaries are not restricted to using your system the way you designed your system. GraphQL queries are trivial to pull out of Wireshark and other sniffers. If you deliver it to the browser, any determined-enough adversary will have it, period. I wouldn't be surprised in the least if it is already a thing for LLM models to sniff GraphQL endpoints in the quest for ever more data.
> Do you understand that decompilers and reverse engineering are a thing?
Do you understand how compiled queries in GraphQL (or even an old-school RDBMS) work? All that gets sent over the wire is the query id. There's physically no way to make the server execute a query the author didn't write.
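A minimal sketch of that setup, assuming a generic executor function (everything here is illustrative, not any specific library's API):

    // The client sends only a query id; the query text lives server-side in
    // an allowlist, so arbitrary query strings simply cannot be executed.
    declare function executeGraphQL(query: string, variables?: unknown): Promise<unknown>; // stand-in executor

    const persistedQueries: Record<string, string> = {
      "userShelf.v1": "query UserShelf($id: ID!) { user(id: $id) { name } }",
    };

    async function handle(body: { queryId: string; variables?: unknown }) {
      const query = persistedQueries[body.queryId];
      if (!query) throw new Error("unknown query id"); // reject anything not compiled in
      return executeGraphQL(query, body.variables);
    }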
Shrug, databases have been doing the same thing since the 1970s (and consider also e.g. regexes). Turns out a flexible, expressive language for writing queries isn't always the most secure or performant thing to use as your wire format.
Exactly. People pull this argument right out of their hat, already prepared, when spending just a few moments thinking about it would give one pause.
The tools and patterns to limit these (very common, in any kind of system) drawbacks are so well-established that it's a non-issue for anyone sincerely looking at the tech.
> Sure, some use-cases might warrant the flexibility that GraphQL uses. A book tracking app does not.
I agree! If you're in control of the experience, then I wouldn't choose GraphQL for a limited experience either.
The project started because Goodreads was retiring their API, and I wanted to create something better for the community. I have no idea how people will use it. The more we can provide, and the more flexible it is, the more use cases it'll solve.
So far we have hundreds of people using the GraphQL API for all kinds of things I'd never expect. That's the selling point of GraphQL for me - being able to build and figure out the use case later.
But I would never want to create a GraphQL API from scratch (not again). In this case, Hasura handles that all for us. In our case it was easier than creating a REST API.
GraphQL is the new mongodb. This fancy new thing that people want to use and makes no sense in reality and just causes more problems than it solves. It solves a very specific problem that makes sense at Facebook. It makes 0 sense for companies that have a web app or web and mobile app. And nothing else. Anyone deciding to use graphql is making a dumb decision.
I sometimes have to write integrations with external data providers (most of them being government agencies), and they love graphql, where (IMHO) it makes a lot of sense. They provide data about some entity¹ split into X fields, your application needs maybe 20% of them, and thanks to graphql you don't have to request anything but those 20%. When loading hundreds of millions of records, it saves you from loading, parsing, and then throwing away gigabytes of unnecessary JSON.
1: one example being tax records with all associated information about tax collecting agencies and taxpayers — it's a lot of data
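To illustrate the field-selection point, a sketch (the schema is invented, loosely following the tax-records example):

    // Request only the handful of fields the integration needs; the other
    // ~80% of each record is never serialized, sent over the wire, or parsed.
    const query = `
      query TaxRecords($after: String) {
        taxRecords(first: 500, after: $after) {
          id
          period
          amountDue
          payer { id name }
        }
      }`;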
Facebook does not really use GraphQL in their public apps. The apps call named queries defined server side, so GraphQL is just a glorified RPC mechanism, not a query language.
> ad hoc, informally-specified, bug-ridden, slow implementation of half of GraphQL.
Every time I hit the "should we use GraphQL" question in the last decade, we balked, because we already had fast REST-like APIs and couldn't see how it would get faster.
To your point it was more of a mish-mash than anything with a central library magically dealing with the requests, so there is more cognitive load, but it also meant we had much more control over the behavior and performance profile.
People hate on GraphQL and every time I read it, I just default assume they haven't used it at scale and don't understand the benefits, or fail to grasp just how hard frontend dev is for anything non-trivial. It has worked so remarkably well, and scaled from app to app to app in an almost copy/paste kind of way (all type-safe!), that it is easily my favorite tech, along with Relay.
We've been using it in production for 10 years. Would I change a single thing? No. Every day I come to work thankful that this is the tech stack that I get to work on because it _actually works_ and doesn't break down, regardless of size.
Greenspun's rule worked in favour of Common Lisp (to the extent that it did...) because CL solves a lot of hard problems that ad-hoc solutions do poorly, like automatic memory management, DSLs, etc.
But lots of apps can do with a lightweight pull API that can be tailored, fits the applications' access control model (as points of contrast to GraphQL) and it's less work and less risk than finding, integrating and betting on a GraphQL implementation.
How so? You've got all the same debuggability that you'd have with rest - sure you need to look at your requests and responses through your tools rather than directly, but that was already the case unless you're not using HTTPS (which is a bigger problem). Throw up GraphiQL on one of your developer/admin pages and you've got easier exploratory querying than you could ever get with an old-style Rest API.
Simple questions like "which teams own the slowest endpoints" suddenly become a nightmare to compute with GraphQL. There's a reason why every industry moved to division of labor.
Security also looks annoying to manage: yeah, sure, the front-end can do whatever it wants, but nobody ever wanted that.
Shrug. Your tracing tools need to understand your transport protocol (or you need to instrument all your endpoints), sure, but that's always been the case. Likewise with security. IME the stuff that's available for GraphQL isn't any worse than what's available for raw HTTPS and is often better since you have more information available (e.g. if you want to redact a particular object field from all your responses depending on the caller's authorisation, it's much easier to do that in GraphQL where that field only exists in one place than in a bunch of handwritten endpoints where you have to find every response that field might appear in).
We used NextJS on a couple of projects where I work and are already phasing them out. The reasons are manifold, but a few key factors:
* difficult auth story. next-auth is limited in a few ways that drove us to use iron-session, such as not being able to use a dynamic identity provider domain (we have some gov clients who require us to use a special domain). This required us to basically own the whole openid flow, which is possible but definitely time we didn’t expect to have to spend in a supposedly mature framework.
* because the NextJS server wasn't our primary API gateway, we ended up having to proxy all requests through it just to attach an access token without exposing it on the client (roughly the shape sketched after this list). The docs around this were not very clear, and it adds yet another hop with random gotchas like request timeouts/max header size/etc.
* the framework is very aggressive about getting you on their cloud, and they make decisions accordingly. This was at odds with our goals.
* the maintainers aren’t particularly helpful. While on its own this would be easy to look past, there are other tools/frameworks we use in spite of their flaws because the maintainers are so accessible and helpful (shout out to Chillicream/HotChocolate!)
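For reference, the proxy pattern from the second bullet looks roughly like this in App Router terms. This is only a sketch: the paths, API_BASE, and getServerToken are hypothetical, and the exact handler signature varies across Next versions.

    // app/api/proxy/[...path]/route.ts
    import { NextRequest, NextResponse } from "next/server";

    declare function getServerToken(req: NextRequest): Promise<string>; // e.g. decrypt a session cookie

    const API_BASE = process.env.API_BASE!;

    export async function GET(req: NextRequest, ctx: { params: { path: string[] } }) {
      const token = await getServerToken(req); // the token stays on the server
      const upstream = await fetch(`${API_BASE}/${ctx.params.path.join("/")}`, {
        headers: { Authorization: `Bearer ${token}` },
      });
      // every client request pays this extra hop, per the drawback noted above
      return new NextResponse(upstream.body, { status: upstream.status });
    }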
What did you move to? We've been using NextJS as a frontend with some helpful server-side/API handling, but the backend is done in Django. We are basically just using ReactJS with the conveniences of NextJS (like file-based routing).
We have some other projects that are Angular, and NextJS was sort of a proof of concept for us. The goal is to just have one front-end framework for our devs to work with (and keep up to date, ugh!), so we’re folding those deploys back into our Angular family of features.
Have you checked out https://astro.build/ yet? You can drop in any framework where you need it, so if you need to bring along those Angular components you can but you can also lean on React if you need it.
Not OP, but when I was considering Next.JS and doing a deep investigation, I concluded that, for server-side rendering, I'm quite happy to use Kotlin and Ktor (my backend is also Kotlin; I have a lot of client types, which is why they're separate), and I've been happy with Ktor's HTML DSL + htmx for speed.
And Kotlin + Ktor feels very good to write in on serverside. Fast, easy and fluent to write in, like Ruby; but with Java's ecosystem, speed and types.
How are you doing reusable components with the HTML DSL? From the little bit of Ktor I've tried, this was something I could not figure out, and it kinda just pushed me away since I couldn't find anything.
    call.respondHtml {
        body {
            div(CSS_CLASS_NAME) {
                radioButtonWithLabel(MORE_CSS_CLASS_NAME, "group", "id") {
                    +"Text for the label"
                }
            }
        }
    }
More complicated examples just extend that quite a lot.
I've also got whole files dedicated to single extension functions that end up being a whole section that I can place anywhere.
---
And then to test those single-function components, I'll do something like this:
    class SingleSectionTest {
        private suspend fun ApplicationTestBuilder.buildApplicationAndCall(
            data: DataUsedForSection
        ): Document {
            // Stand up a throwaway route that renders just the section under test
            application {
                routing {
                    get("test") {
                        call.respondHtml {
                            body {
                                renderSingleSection(data)
                            }
                        }
                    }
                }
            }
            val response = client.get("test")
            return Jsoup.parse(response.bodyAsText())
        }

        @Test
        fun `simple test case`() = testApplication {
            val data = DataUsedForSection("a", "b", "c")
            val body = buildApplicationAndCall(data)
            // all the asserts
        }
    }
And so on. Is this what you were wondering? Or would you like a different sort of example?
That's pretty much what I was looking for, thank you for sharing! I think that's probably the nicest way I've seen to do it, annoying that Kotlin seems to force you to add it as a child (? idk the word...) of an existing HTML tag, wish you could just import a component or something :(
Kotlin calls those "Extensions" [0], and yeah, they can be pretty annoying. I first learned about them in C# and I had a bunch of frustrations with them, especially when I found some code on StackOverflow or Microsoft's docs and they'd use one of those methods and it just plain wouldn't be there for me because I hadn't downloaded the right nuget package or anything.
I've gotten more used to them and I get why they can be so great now; but there's still some real annoyances with them I just can't shake (like the import problem and the related "where is this code?!" problem)
---
Purely importing a component that's just a simple class, like you can do in Java/Typescript with their jsx and tsx files would be pretty cool, yeah. You could fake it by making a data class and adding a
fun renderInto(tag: HtmlBlockTag) { ... }
type method, but with how Ktor's DSL is implemented, you're still going to need to connect it into the giant StringBuilder (or whatever) that the DSL is building to.
To help me get something close to that idea, I tend to dedicate whole files to bigger components and even name the file the same as the root HtmlBlockTag Extension method (like RadioButtonWithLabel.kt if I did it for the earlier one). Those files are great because you can put a bunch of helper methods in the same file and keep it all contained, a lot like a class.
Great that it's working for them, but as an end user, the feel of the site is nearly unusable. There's 1s+ of delay on every interaction: pressing the "home" button from the explore tab takes 1.85 seconds (on a gigabit connection) before the home view becomes active, with no other feedback to the end user except the "Home" icon becoming active.
You cannot just blindly trust the page speed metric; it should be impossible to miss things like this when you are actually using the site. Compare the experience to something like GoodReads, which uses plain old SSR, and you'll immediately notice the difference.
True, the 1s+ delays (both on mobile and desktop!) and no spinners is a very annoying UX.
I've been saying this forever, and this is a great reminder for the React-hating folks here on HN: usually it's the developer's fault a site is slow, not the framework's.
That is one of the slower pages on the entire app. I'd like to move that one to use an InertiaRails.deferred setup, so it loads instantly with a loading spinner - like you'd see with Suspense + RSC. (Hardcover founder here)
I didn't realize you were trying mobile initially. When I tried mobile, it also seems incredibly slow. My desktop didn't have this issue (when going from explore page to home). Everything seems a lot slower on mobile.
I truly wonder what people do when they want JS full stack, both frontend and backend, especially with a DB involved. The ORM situation looks pretty fragmented, or you write pure SQL. And then you still have to decide on the backend. Going raw with Express? Next.js (well known, but with a questionable agenda), Remix, Astro, TanStack, and so on. It's a mess, because you always have to recalibrate and re-evaluate what to use.
I often see myself going back to Ruby on Rails for my private stuff. It's always a pleasure. On the other side, there are so few Rails people available (compared to JS) that it's not viable for any professional project. It would be irresponsible to choose that stack over JS, and often Java, for the backend.
Yep. The ORM situation in JS is not great. There’s no one go-to, and it seems like the question often prompts a patronizing response about how ORMs aren't really necessary. Kysely is really great, but it’s not an ORM.
My take: the JS ecosystem tends to avoid abstraction for whatever reason. Example: they don’t believe that their web framework should transparently validate that the form submission has the correct shape because that’s too magical. Instead the Right Way is to learn a DSL (such as Zod) to describe the shape of the input, then manually write the code to check it. Every single time. Oh and you can’t write a TS type to do that because Reasons. It all comes off as willful ignorance of literally a decade or more of established platforms such as Rails/Spring/ASP.NET. All they had to do was steal the good ideas. But I suspect the cardinal sin of those frameworks was that they were no longer cool.
I have a hard time relaying this without sounding too negative. I tried to get into SSR webdev with TS and kept an open mind about it. But the necessary ingredients for me weren’t there. It’s a shame because Vite is such a pleasure to develop with.
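For readers who haven't seen it, the Zod pattern being described looks roughly like this (a sketch; the schema fields and the save call are made up):

    import { z } from "zod";

    declare function save(data: { email: string; age: number }): Promise<void>; // hypothetical persistence

    // Describe the shape in the DSL...
    const SignupForm = z.object({
      email: z.string().email(),
      age: z.coerce.number().int().min(13),
    });

    // ...then manually check it at every boundary, every single time.
    async function handleSubmit(raw: unknown) {
      const parsed = SignupForm.safeParse(raw);
      if (!parsed.success) return { errors: parsed.error.flatten() };
      await save(parsed.data); // typed as { email: string; age: number }
    }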
The curse of being an experienced developer is watching good things go away, and then get re-invented and everyone hails them as a major innovation without any awareness that this has existed for a long time.
Someone will steal the good ideas eventually. And everyone will act like it’s the first time this idea has ever come up. I’ve seen it happen a few times now, and each time it makes me feel ancient.
What's wrong with TypeORM (besides being javascript of course)? Works alright, creates migrations based on entities automatically, and I really haven't had any issues with it. Even having several different dbs in the same project is straightforward.
We had to write a migration layer on top of Prisma that can run arbitrary code to do things like transactions in migrations. Kind of a bummer that something like that's not built-in to the system but it was also trivial to put together.
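Presumably something in this direction (a sketch only: the Migration type, _migrations table, and runner are inventions layered on Prisma's transaction API, not built-in features):

    import { PrismaClient, Prisma } from "@prisma/client";

    const prisma = new PrismaClient();

    // A migration is just an async function handed a transaction client,
    // so it can mix raw SQL with arbitrary application code atomically.
    type Migration = { id: string; up: (tx: Prisma.TransactionClient) => Promise<void> };

    async function runMigrations(migrations: Migration[]) {
      for (const m of migrations) {
        await prisma.$transaction(async (tx) => {
          const done = await tx.$queryRaw<unknown[]>`SELECT 1 FROM _migrations WHERE id = ${m.id}`;
          if (done.length > 0) return; // already applied
          await m.up(tx);              // SQL + code in one transaction
          await tx.$executeRaw`INSERT INTO _migrations (id) VALUES (${m.id})`;
        });
      }
    }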
Well, we're not the "go to" yet :-) but if you want an entity-based ORM that isn't just a query builder, Joist has several amazing features (no N+1s) and great ergonomics https://joist-orm.io/
We currently have two major apps, one in TypeScript and one in Rails. I have to hire devs for both, and I have not experienced it being any more difficult to find a Rails developer than a Node/TypeScript developer. If anything, I think finding a Rails developer with relevant experience is even easier, because the stack is so much more standardized. With people with Node experience, there is a huge chance that they won't actually have any experience with the libraries we are using, even though they've used other libraries in the Node ecosystem. With Rails, however, pretty much everybody with experience in a Rails app will be able to jump into our application and will see a lot of stuff that is familiar right out of the gate.
I'm personally an Elixir Phoenix fanboy now, so I don't choose Rails as my first choice for personal projects, but I think it is an excellent choice for a company. In fact, I would probably recommend it the most over any framework if you need to hire for it.
I really hope that Elixir / Phoenix will gain more traction.
It is very easy to write a server with it, hosting and deploying is painless, upgrading it (so far) has been painless, linting and debugging has been a breeze.
If you're coming from Ruby, then learning Elixir requires a small mental adjustment (from Object Oriented to Functional). Once you get over that hump, programming in Elixir is just as much fun as Ruby! :)
Agree. The one thing LiveView is missing is 1:1 Tailwind and shadcn libraries that are more or less interchangeable with the huge ecosystem around them. I just want to be able to pull in popular components from the wider dev community. There are some commendable attempts at UI libraries for LiveView, but they are too opinionated stylistically or just slightly off mainline shadcn. I don't really want to hang my hat on this type of thing and later get burnt when it's no longer maintained (which is more often the case). Also, the AI tooling for Elixir is greatly lagging, which is disappointing, as the language is particularly well suited for it.
That's a point which shouldn't be underestimated: almost every Rails codebase looks mostly the same, while I've never seen two similar Node projects. Standardization also has advantages for training and hiring.
Can't speak to ORMs, but I'd have a look at SolidStart. If you need an API, add in tRPC. End result is highly typed, can do SSR, and once you get used to it, it's a much better experience than using React.
I still haven't found an ORM with JS that really speaks to me.
> there are so few rails people available (compared to js) that it's not viable for any professional project
I don't think this is true; Shopify is a Rails shop (but perhaps it's more accurate to say it's a Ruby shop now). It feels easy to make a mess in Rails though, imo that's the part that you could argue is irresponsible
The JS ecosystem would be so much better if developers concentrated on contributing to libraries rather than writing new frameworks. After about 10 years of JavaScript, I recently moved over to .NET, and I'm finding that my team can focus on actually developing features rather than maintaining the plumbing.
Yes, my experience as well. Last year I had to pick the stack for a small app at work that needed an SPA (3D viewing of large data sets using three.js and agGrid, if anyone cares), with long-term stability as a very high priority.
Long story short: I ended up choosing ASP.NET Core with Minimal APIs. The main reason was indeed EF Core as ORM, which I consider as one if not the best ORM. In the Node world there's so much promise (Prisma, Drizzle, ...) but also so much churn.
I think there are a lot of Rails developers available. The concern many devs have is that they fear they can't get a job anymore (they're mostly mistaking the downturn of the entire market for one limited to Rails).
Why is that a mess? From my experience you have to recalibrate your project constantly. The framework that's hip and new today will be phased out in 5 years.
I don't mean that rewrite hell is a permanent state, but you will always be rewriting parts of your project. I'd rather choose an ecosystem where the friction for rewriting is minimal.
If you need to rewrite everything every 5 years, then you are making the wrong technical decisions up front.
Choose boring tech that doesn't change, since it's already mature and battle-tested, and because it is not beholden to the whims of some VC money or whatever.
React itself (not Next.js) doesn't change a lot and will let you run your app for the next decade at least.
Same with any boring PHP, Ruby, Python, Java, or .NET framework out there.
You might need to upgrade versions, but there will very seldom be breaking changes where you have to rewrite a lot.
Frontend and Backend developers have never really been good at talking, for as long as I've been a developer.
As a historically backend developer, I've tended to dislike HTML/JS/CSS. It's a meaningfully different paradigm from Swing/AWT, WinForms, Android UX, etc. That alone was enough to frustrate me and keep me on the backend. To learn how to build frontends, I've since had to learn those three. They're finally becoming familiar.
BUT, for front-end developers, it meant learning "yet another language"; and a lot of these languages have different/obnoxious build systems compared to npm and friends. And then, as anyone who's ever changed languages knows, they'd have to learn a whole bunch of new frameworks, paradigms, etc.
Well, they would have, but instead, some of them realized they could push JavaScript to the backend. Yes, it's come with *a lot* of downsides; but for the "Get Shit Done" crowd, and especially in the world of "just throw more servers at it" and "VC money is free! Burn it on infra!", those downsides weren't anything worth worrying about.
But the front-end devs, now "full stack devs" but really "JavaScript all the things" devs, continued to create in a visible way. This is reflected in all the friggin' LinkedIn job postings right now that require Next.JS / Node.JS / whatever for their "full stack" positions. One language to rule them all, and all that.
Just some ramblings, but I think it's strongly related to why people would choose Next.JS __ever__, given all its downsides.
When I see articles and discussions about web + stack, I can't help but ask: "What problem are they actually solving?" The answer is always: put text on screen.
When your business goal is to put text on screen, the next logical step is to ask how much time and money the tech stack really saves. I have never found a developer who answers that question with a number. That's a really big problem.
I get where you're coming from but that's actually quite a bit of an oversimplification even for many web apps outside of the 1% for which a lot of modern web development solutions and frameworks seem to have been created.
For one thing it doesn't take any account of input. When someone draws something with Figma or writes something in Google Docs or buys something from Amazon - or indeed any online shop at whatever scale - or gets a set of insurance quotes from a comparison site or amends something in an employee's HR record or whatever it may be the user's input is a crucial part of the system and its behaviour.
For another, we're not just putting text on the screen: we're putting data on the screen. And whilst data can always be rendered as text (even if not very readably or comprehensibly), depending on what it represents, it can often be more meaningfully rendered graphically.
And then there are integrations that trigger behaviour in other systems and services: GitHub, Slack, eBay, Teams, Flows, Workato, Salesforce, etc. Depending on what these integrations do, they can behave as inputs, outputs, or both.
And all of the above can result in real world activity: money is moved, orders are shipped from warehouses, flow rates are changed in pipelines, generators spool up or down, students are offered (or not offered) places at universities, etc.
You are confusing information with data. I suggest reading about the DIKW model. Nonetheless, the relational ontology of content has no bearing on the tech stack used to display it, which is why well-written content on paper does not require a tech stack to achieve what you describe.
If you reduce things so much that all detail is lost, you can't really reason about the original thing any more. The obvious counterpoint here is, you try turning amazon.com into a plain TXT file and see how much sales increase.
I suppose you could have custom CSS (e.g. via Stylebot) remove 90% of the elements and all but one of the pictures, but would that really make the amazon purchasing experience better?
Maybe it would. Because maybe the pictures might load consistently for once, instead of this fat mess, where opening the pictures randomly lags, scrolling the page to the reviews is random, and the back button works depending on the orbit of the planet.
Even the search box itself lags when typing because somehow the text input is synced to the autocomplete search?
What kind of answer would you expect to a question like that? I couldn't tell you how much time and money I save writing in a programming language instead of raw machine code but I can rest assured that it's the right call.
Are people on this site just stuck in the 90s or something? The product I work on is nowhere near Figma or Google Docs level of complexity, but we're still MILES away from "just rendering text on screen".
That's about as absurd a statement as saying all of Backend is just "returning names matching a certain ID" for how out of date and out of touch it is.
I know two reasons for server-side rendering: (1) site indexing, (2) time to first screen update. With faster networks and client devices (2) isn't as important as it used to be.
The reasons I prefer client-side rendering: (1) separation of concerns: UX in the front, data/business in the back; (2) even as a back-end dev, I prefer Vue for front-end work over rendering text + scripts in the backend that then run in the browser; (3) at scale, it's better to use the client hardware for performance (other than initial latency).
I don't think any business goal of anywhere I've worked for the past 10 years has been "put text on a screen". It's usually more like, "interactive application that can view and manage complex data representations in intuitive ways". Charts, forms, graphs, error validation, nested tables, consistent styling, nice animations are all vital components that are much much easier with a modern web tech stack.
But you are not just putting text on screen. That is a drastic simplification. To put text on screen, we had TV teletext/videotext. You can also just put a .txt file as your index.txt and serve that as your website. Or create a PDF from your word document.
You won't need any developers at all for that.
Please don’t confuse method for intent. People tend to make that mistake as an equivocation error to qualify a mode of action. They do what they know and then extrapolate why they do it from what they have done.
This is overly reductive. Sure, you can say all webdev does is put letters on screen. Oh, and graphics - don't forget about those. Just letters and graphics! Oh wait, that actually describes everything anyone has ever coded.
It's like saying that the entire job of a politician is to speak words out loud. You're reducing a complex problem to the point that meaningful discussion is lost.
Are people really going around with an estimate of how much time and money a specific tech stack saves? You'd come up with a number for this, and it'd be accurate, I assume. Like if I were to say Node+TypeScript+Express vs. Golang, you'd have an answer. If you get that right more often than not, then you're really good at it in a way most people aren't.
“Always” is doing a lot of heavy lifting there. At my last few jobs the goals have involved interactive visualizations, 3D model viewers and peer-to-peer screen sharing. There is a huge diversity of business goals outside of things that can be reduced to “put text on screen”.
I can't speak to the technical aspects here (I'm only familiar with Next.js, not Rails, so it's unclear to me how much of the article is just a reflection of the author's own comfort with Rails versus a more technically suitable architecture). But I do find it really weird that a company which apparently has multiple software engineers is worried about infrastructure costs amounting to less than $1k a month... Seems penny-wise, pound-foolish to be worried about hosting bills.
We have one developer (me) and we're bootstrapped and not yet profitable. That means I'm paying the difference every month in hosting and working for free. The site makes it look like we're a lot more put together than we are.
Was using TeamCity, then dropped it when moving to another system.
The broader point was basically that the Rails UI integration tests took a very long time, and required the whole system up, and we had a pretty large team constantly pushing changes. While not 100% unique to Rails, it was exacerbated by RoR conventions.
We moved much of the UI to a few Next.js apps where the tests were extremely fast and easy to run locally.
I’ve written a bit of rails and still don’t really get what the raving is about. It was perfectly fine, I didn’t find anything extra special about it.
Having just hit severe scaling issues with a Python service, I'm inclined to only write my servers in Go or Rust from now on. It's only a bit harder, and you get something that can grow with you.
What makes Rails stand out is the focus on convention-over-configuration as a guiding principle in the ecosystem which results in a lot less code (have you seen these relatively thin models and controllers?), as well as established dependencies and the lack of tendency to bikeshed in libraries (geocoder or devise for example have been mostly stable over close to a decade, with few popping up to replace it)
> What makes Rails stand out is the focus on convention-over-configuration as a guiding principle in the ecosystem which results in a lot less code
Convention over configuration and less code is fine, but unfortunately Rails is not a great example of it IMO. The "rails" are not strong enough; it's just too loosey goosey and it doesn't give you much confidence that you're doing it "the right way". The docs don't help much either, partly because of the history of breaking changes over releases. And the Ruby language also doesn't help because of the prolific globals/overrides and implicitness which makes for "magic".
So you're encouraged/forced to write exhausting tests for the same (normally dumb CRUD) code patterns over and over and over again. Effectively testing the framework moreso than your own "business logic", because most of the time there barely is any extra logic to test.
So I'm also surprised it gained the reputation is has.
ActiveRecord may be both the best and worst part of Rails. Currently the largest scaling problem that I'm facing is with all the before_* and after_* callbacks which run per model record rather than a batch of changed records. This is an N+1 query landmine field for which ActiveRecord offers no solutions.
I agree that ActiveRecord isn't particularly opinionated about how to deal with updates to batches of records, but there are multiple ways of approaching this and AR won't get in your way.
upsert_all[1] is available to update a batch of records in a single write that does not invoke model callbacks.
activerecord-import[2] is also very nice gem that provides a great api for working with batches of records.
It can be as simple as extracting your callback logic into a method (def self.batch_update) and running that logic after the upsert.
By upsert_all not invoking model callbacks, it's admitting that the ActiveRecord approach doesn't scale.
"It can be as simple as extracting your callback..." Isn't this the kind of repetitive thing a framework should be doing on your behalf?
To be fair, ActiveRecord isn't a fault Rails invented. Apparently it's from one of Martin Fowler's many writings where each model instance manages its own storage. Even Fowler seems to say that the DataMapper approach is better to separate concerns in complex scenarios.
TBF no framework will do everything perfectly, and having clean escape hatches is pretty good in itself.
Even outside of batch processing, there will usually be a few queries that absolutely benefit from being rewritten in a lower layer of the ORM or even plain SQL. It's just a fact of life.
The prevailing sentiment is that once you hit scaling issues with frameworks like Rails or Django you should have enough resources to simply throw money at the problem either in the form of more hardware, cloud computing, or better software engineers that can identify bottlenecks and optimize them.
Since most websites will never scale past the limitations of these frameworks, the productivity gains usually make this the right bet to make.
Github can be horribly slow sometimes (tested from different machines and through different ISPs) and difficult to tell if that is down to the framework used to render the pages or any other parts of the system.
Agree, the new code viewer is horrid. It also breaks right-click navigation between files for no reason. Not once have I triggered the interactive editor mode on purpose, but it happens all the time accidentally when trying to select/highlight a line. So frustrating…
Hard disagree on this. I went with this sentiment and deeply regret it. With LLM-assisted coding it's very fast and easy to write a Go or even a Rust server. They have fewer bugs and can actually do things like threads, which you end up working around in Python/Ruby.
I am not sure why people are comparing a web framework with writing your own code in another programming language.
How many CVEs have been reported against your custom code, say, the middleware you wrote to parse a request's params before using them in an SQL query?
Of course, if you are good at this and have a lot of experience, then even using an LLM I am sure your code is more than fine. But on average I think it is safe to say that any LLM-generated middleware or library code that handles HTTP requests and makes the information available to business logic probably has a lot of bugs (some subtle, some very visible).
For me the power of Rails is that, if you do CRUD web apps, it is battle-tested and has a long history of giving you what you need to build that CRUD app's business logic fast. It's the knowledge that was put into designing a web framework that covers 90% of the operations you need, so you can focus on your custom business logic.
They honestly really haven’t though. I’d’ve thought they would’ve by now, but I still find bringing up a backend with something like Go to be annoyingly tedious and feature-incomplete in comparison.
Like yeah, I know you can do it. But it was much more effort to do things like writing robust migrations or frontend templates. I’d love to find something in Go or Typescript that made me feel quite as productive as Rails did.
Preach. I found the whole "just use stdlib" culture in Go so annoying. I love the language (both Go and Ruby actually), but Go's ecosystem and tooling is eons behind.
Maybe I am comparing apples and oranges, not sure.
If you're thinking about going back to SSR, I think you owe it to yourself to check out Phoenix LiveView (Elixir) and play with it for an afternoon.
I've built a few apps in it now, and to me, it starts to feel a bit like server-side React (in a way). All your HTML/components stream across to the user in reaction to their actions, so the pages are often very light.
Another really big bonus is that a substantial portion of the extras you'd typically run (Sidekiq, etc) can basically just be baked into the same app. It also makes it dead simple to write resilient async code.
It's not perfect, but I think it's better than RoR
I've been curious for a while now. One thing that gives me pause though is how Phoenix LiveView apps perform when you're dealing with high latency. I'm aware that many apps will be serving primarily the US market and so might not recognise this as much of an issue. I'm also aware that I could deploy 'at the edge' with something like fly.io. Still, when I run a ping test to 100 different locations around the world from NZ, the majority of results are 300ms+. That seems like it would have a pretty noticeable impact on a user's experience.
TL;DR: Are most Phoenix deployments focused on a local market or deployed 'at the edge', or are people ignoring the potentially janky experience for far-flung users?
While it's true that Phoenix LiveView's default is to have all state on the server, there are hooks to run JavaScript behavior on the frontend for things like optimistic updates and transitions. This gives plenty of ways to make the frontend feel responsive even when the roundtrip is 300ms+.
I haven't done a lot of optimistic updates with LiveView yet. I'm not sure how sanely you could really achieve it (because it seems you'd lose the primary benefit: server-side rendering / source of truth).
However, there are a few mechanisms you can use to note that the page is loading / processing a LV event that can assist the user in understanding the latency. e.g., at the very least, make a button spindicate. I've experienced (in my own apps) the "huh is the app dead?" factor with latency, which suggests I need to simulate latency more. If the socket is unstable or cannot connect, the app is just entirely dead, though the fallback to longpolling is satisfactory.
I think it would really shine for internal apps due to the sheer velocity and simplicity of developing and deploying it.
In the worst case, you could fall back to using regular controllers or APIs controllers, so I still see it being a "better version of Ruby" overall. However, if we're going back to this, I would rather use SolidStart and do it all in TypeScript anyway.
At the end of the day, I'm very torn between the resilience/ease/speed of Elixir and the much better type system in TS. The ability to just async something and know it will work is kind of crazy for improving performance of apps (check out assign_async)
> the majority of results are 300ms+
Another thing to consider is that a lot of apps (SPA powered by API) take 300~1000ms to even give you a JSON response these days. So if you can get by with making a button spin while you await the liveview response (or are content with topbar.js) I think you can get roughly close to the same experience.
> deployed 'at the edge'
The nice part of Elixir is you could probably make a global cluster quite easily. I've never done it though. You could have app nodes close to users. I think you'd have to think of a way to accelerate your DB connection however (which probably lives in 1 zone).
Yes, unfortunately that is the big weakness of LiveView. It also suffers from what I call the elevator problem, where LiveView apps are unusable with unstable connections and flat out stop working in an elevator or areas with spotty connections.
However, Elixir and Phoenix is more than just LiveView! There’s also an Inertia plugin for Phoenix, and Ecto as an “ORM” is fantastic.
I fell in love with React because of its SPA approach, as opposed to SSR.
You must imagine my chagrin when React started moving towards rendering on the server(SSR, Server Components, etc). I was happy to move to a full client implementation. Sadly, SEO cannot be ignored.
I think if Rails had focused on giving real first party support to interoperability with whatever frontend framework you brought to the table it would be so much bigger right now. They put a lot of work into Hotwire but I just want to use React, and I'm sure others want to use what they're familiar with.
I've built api only. It would be sick if it were easier to sprinkle react/vue/svelte/whatever in your haml views if you only needed a little bit of interaction but didn't want to spin up a whole other frontend.
I'm hardly an expert with Rails, and I integrated React twice, on two very different sites, using API controllers. The nice thing about React is that you can limit it to an island on the page, and don't need to buy into the router, etc. That said, I did disable Hotwire to make my life easier.
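The island pattern in question is tiny; a sketch (the component and element id are made up):

    import { createRoot } from "react-dom/client";
    import DatePicker from "./DatePicker"; // hypothetical component

    // The server renders the whole page; React only takes over this one
    // placeholder div, with no client router or global state involved.
    const el = document.getElementById("date-picker-island");
    if (el) createRoot(el).render(<DatePicker name="published_on" />);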
Yeah, but I wish in an alternate reality DHH had chosen a different route. If you go API-only, then you lose half of what makes Rails great. It would be sick if you could render React/Vue/Svelte easily in your haml views and not have to have a JS repo and then figure out JWTs and auth.
Dunno, I loved Rails, built monoliths, built API-only, but when I tried sprinkling a bit of React into my views (say you only need a little bit of interaction, or want to use a React date picker), there's all these sharp edges.
The reason I want it to be bigger is that user base helps the single dev, with help resources, up to date gems, and jobs.
> Our hosting bill grew from $30 in April to $142 by June, $354 in August. Hardcover was growing, but 10x cost increase in a few months was too much.
without ANY irony or sarcasm, i just want to appreciate that it's funny how that happens completely without explicit desire or intention to have this effect from the developers of Next (i'm serious, don't hate me guys, we are friends, i do believe that this ofc is not intended)
i'm sure there's a good and meaningful explanation (that I'm interested in reading) but lots of little microdecisions compound when the developer of the framework does not also experience it as a paying customer (or, more subtly, the developer of the framework wants to serve the 10000x larger enterprise customer and needs to make choices to balance that vs the needs of the small)
I feel like the elephant in the room here is that their back end was in RoR before Next.js and remained that way the entire time. They then switched from next.js to a framework designed, in part, with RoR in mind. It seems unsurprising that they had a much better experience using a thing that was tailored to their use case.
I love Next.js. I have used other frameworks including RoR and there is nothing like it (except Svelte or Nuxt but I view them as different flavors of the same core idea). But I only use it to make monoliths. I can see getting frustrated with it if I was using it alongside a different back end.
Rails is probably one of the most intuitive frameworks I have ever used. No doubt it is highly opinionated, but it hides all the complexity for small to medium applications.
It all depends what you’re working with. ActiveRecord can get gnarly. The rest of it is pretty easy to understand once you know what methods are called.
I think what confuses people is Ruby’s meta programming. The ability to dynamically define named methods makes rails seem far more magical than it actually is. You also can’t tell the difference between method calls and local variables.
It is too much to hold in my head at once sometimes. I can understand how it all fits together but the lack of types means I’m holding a lot more in my head at once.
I wonder why there's a Next.js vs. SSR debate at all. Next.js is a hybrid and performs quite well. Contrasting with other SPA frameworks, Next.js produces prerendered HTML output for fast first loads, efficient JS chunks, config switches to eagerly load those chunks (i.e. when hovering over a link, or preloading all n+1 links after page render), and efficient image (pre-)loading depending on breakpoint (usually the Achilles' heel when comparing to a pure SSR solution).
I would really be interested in real world performance metrics comparing load times etc. on a stock nextjs app using defaults vs. rails and co.
NextJS has a lot of significant drawbacks, which is why there's an ongoing (and healthy) debate:
- Cost
- Complexity
- Learning curve
- Scalability
- Frequent changes
- And, surprisingly, bad performance compared with direct competitors
Nowadays, NextJS is rarely the best tool for the job. Next and React are sitting in the "never got fired for buying IBM" spot. It is a well-earned position, as both had a huge impact on innovation.
Do you need best in class loading and SEO with some interactivity later on? Astro with islands. Vitepress does something similar.
Do you need a scalable, cost efficient and robust stack and have moderate interactivity? Traditional SSR (RoR, Django, .NET MVC, whatever) with maybe some HTMX.
Do you have a highly interactive app? Fast SPA like Svelte, Solid or Vue.
By default, NextJS generates all assets and JS chunks with a SHA-256 hash in the filename, essentially making them immutable. As outlined in the NextJS docs, I serve my assets folder with `Cache-Control: public, max-age=604800, immutable`. In a webapp your users open on a semi-daily basis, that means all assets and resources will be cached forever, or until you re-deploy a new version of the app. The data comes via REST (in whatever backend language you want to use), so I don't see how any SSR setup can outperform NextJS here.
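For what it's worth, a traditional SSR app can use the same trick for its fingerprinted assets; a minimal Rails sketch, assuming the built-in public file server:

    # config/environments/production.rb
    # Fingerprinted assets never change content under the same filename,
    # so they can be served as immutable too.
    config.public_file_server.headers = {
      "Cache-Control" => "public, max-age=604800, immutable"
    }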
The whole isomorphic framework trend has always scared the poo out of me. I feel like it's just asking for security issues.
For people who commonly use these frameworks -- is it common to have issues where data or code intended only for server execution makes its way onto the client? Or are there good safeguards for this?
Next.js has introduced directives such as 'use server' and 'use client' that you enter at the top of the file, much like 'use strict'. If you attempt to use server code in a client file, for example, it will get caught by the TS compiler/linter.
But for sure the lack of clear lines for where the server ends and the client begins has always been a pain of these kinds of framework offerings.
Not just accidental inclusion but intentional insecure inclusion. An FE developer gets a BE ticket (because why not, that's the whole point, right?) and forces something through all the proper channels, leading to trusted (server) code running on the client.
I switched from Rails to the node.js ecosystem back in the 3.2 to 4 transition, however looking back I share a similar sentiment as the OP.
I recently started the migration back. My approach thus far has been to pull the "administrative" part out into Rails to benefit from all the useful conventions there, but keep the "business services" in JS or Python and have the two communicate. Best of both worlds, and the potential of all of rubygems, npm and pypi combined.
Reminds me of what I did to bring AI into my SpringBoot Java app. I just created a Python-based web service (microservice) that deploys as part of my Docker stack, and now I get the benefit of everything going on in the AI world, which is mostly Python, with no lag. Meanwhile other Java developers are busy trying to port all that stuff over into the Java language. To me that porting is just a waste of time. Let AI stay in Python. It's a win/win, imo. Of course I had to learn Python, but as a Java dev it came easy. Other Java devs are [mostly] too stubborn to try Python, if you ask me. Sorry if this drifted off topic, but it shows you don't have to be a purist; you can just do what works and is easiest.
The right tool for a given problem is usually much more ergonomic and productive. To me purism of language or tooling is a disservice to an engineer’s instinct of solving a problem. Use Python where it is a strong option. Use Spring Boot where it makes sense.
BTW, I’m also on a similar trajectory, using a mix of Java, Python and Node.js to solve different problems. It has been a very pleasant experience compared to if I had been bullish on just one of these languages and platforms.
I think that's very smart, thanks for sharing! With the prevalence of coding agents currently the cost of context/language switching is much lower and these best-of-breed multilang setups are likely to become more prevalent in the future.
Right, and when I "learned" Python it was basically by asking an AI agent to generate whatever I wanted to do, and then looked at what it generated. For example, I'd just say stuff like "How does Python do hashmaps?" or "How can I loop over this array", etc. AI wrote most of my AI Python code!
This is a good approach I think. Rails is outstanding at delivering a good CRUD experience and data model management - so I find it powerful to build the data model and admin tools using it, and allow other frameworks to access either the database or specific services as needed. Best of all worlds!
IMO the problem with Next is that it can’t decide whether it wants to be a framework for client side apps that require interactivity or server side rendered mostly static content sites. To support both it has codeveloped some baffling features in React like RSC which have made it far less fun to work with.
“use client”, server actions that aren’t scrutable in a network tab, laggy page transitions, and, until recently, inscrutable hydration errors: these are some of the recent paper cuts I experienced with Next.
I’d still use it for new projects, but I am keen to use TanStack Start when it’s ready
I’m personally really interested in the next wave of frameworks that make local-first development intuitive, like One or something that bakes in Zero.
I'm thankful that I don't work on projects that have SEO needs. SSG (for JS frameworks specifically) feels too unstable for me. I get the value, I understand why people need to do it, but it just makes everything more complicated. Also, I'm not sure if you can have an offline site with SSG? They might be compatible but I'm not sure. I know some SSG is essentially "SPA with the first page rendered already" so maybe that can work offline?
I looked at InertiaJS and it feels like too much "magic" for me personally. I've never used it so I could be wrong but it feels like too many things you have to get working perfectly and the instability in the JS ecosystem makes me worry about adding too many layers.
Tangential but I noticed a certain common conflation between pre-rendering and server-side rendering. Very often plain SSR is all it takes for good SEO performance, SSR in this case simply being rendering the page on-demand before serving it to the user.
Pre-rendering (as popularized by static site generators) is the additional step that increases complexity significantly, sometimes security issues too when session-protected cached pages are mistakenly served to the wrong users.
Server Side Rendering is awesome and FAR simpler, more productive, and immensely less buggy. Hoping we continue to see people leave the JS framework madness behind.
Like you say, trying to learn to do something in ASP.NET Core can feel like pulling teeth. I'm usually surprised to find there's a first-party library that does what I need, yet I couldn't find it for hours.
I have complaints about Laravel, but I think it's a lot easier to find examples, and modern PHP has static typing improvements. But I would much rather use C#
Yep. It’s ASP.NET. Arguably ASP.NET’s ORM is better than ActiveRecord even. With Blazor SSR you can use components on the server. IMO Blazor SSR needs a bit more time to bake, and hot reload is a huge mess currently. But the stack is great and will probably be undervalued simply due to the fact it is in C#.
It’s really more so because it’s Microsoft. And this is a shame, since C# has been IMO one of the great languages for at least 5 years now (C# 9/.NET 5), and has only gotten better since then.
I've been loving C# since .... 6 or 7? About 10 years now. It's genuinely a great language - took a lot of the rough spots of Java and fixed them and then kept getting better.
Languages with stronger types, like TypeScript, (unfortunately) perform much better than dynamic languages like Ruby, Elixir or even plain JS in an AI editor world. Because the editors are type-smart, you can quickly pop the type error into the AI chat and it will attempt to correct it. The feedback cycle is just insane. I really hate to say it, but TypeScript has won.
Have you seen any studies that validate this? I feel this would be the case, but I can’t say I’ve actually seen it work out. Cursor writes better Elixir code for me than it does Kotlin, or at least it anecdotally seems so. I find it confusing.
I remember an akin experience many years ago, talking to John Brant and Don Roberts, who had done the original refactoring browser in Smalltalk. Java was on its meteoric rise, with tons of effort being dumped into Eclipse. They, and others with them, were eager to port these techniques to Eclipse, and the theory was they'd be able to do even more because of the typing. But Brant/Roberts found that, surprisingly, it was more difficult. Part of the problem was the AST. Java, while typed, had a complex AST (many node types) compared to that of Smalltalk (late/runtime typed), which had a very simple/minimal AST. It was an interesting insight.
No studies, other than some serious experimentation on my own. I’m a strong Elixir dev, but Cursor and friends are just more productive with TypeScript due to the editor type-checking cycle and training. Though José is working on a new MCP project to help: https://github.com/tidewave-ai/tidewave_phoenix
Even a cursory glance at the runtime performance difference between these two frameworks reveals that either this project won't scale to the point that cloud costs are relevant or they have a dubious prioritization of DX over deployment economy. We are talking orders of magnitude fewer RPS for Rails.
I don’t understand your integration of performance and cloud costs here.
“Deployment economy” is also new.
Rails has a very strong track record of matching internet scale.
Cloud is highly optimized for traditional server applications. From my experience with Next.js, it is the opposite: a lot of deployment components that don’t naturally fit in, and engineering required to optimize costs.
Quite simply: at certain threshold counts of users you will be forced to add many more cloud instances/pods running Rails than you would need running node.js (or Java or go or many others). But it doesn't stop at instances because this will also require more persistent disk / object storage, more logs, more alerts, more notifications from the cloud provider that instance xyz needs to be restarted (due to a firmware upgrade or whatever), etc. etc. All of these have human management overhead costs and most of them increase monthly financial costs.
It's less expensive now with Rails than our hosting was with Next.js. If there was more traffic, we'd save even more money in comparison. That was mentioned in the post.
Exploring web programming from the frontend view (e.g., Island Architecture) is always intriguing. I'd love to see how JS/SPAs surpass the SSR/HTTP paradigm with LLM/AI. Until then, focus on mastering infrastructure, protocols, databases, and APIs — engineering over designing.
For my most recent project, it was Spring Boot with Java at the backend and Solid.js at the frontend, with a REST API in the middle. It has worked very well. Solid has a very solid signal-based, React-style stack. Spring Boot is mature and stable and covers pretty much everything you need at the backend. The only wrinkle is we needed to connect to a number of different database systems and the default DataSource couldn’t do the job; we ended up writing our own multi-tenant DataSource. We code-gen most of the backend code for the hundreds of DB tables, so the path from DB to frontend is automated. The whole project took 4.5 months with 1.5 junior developers and .5 architect and senior dev. One advantage is that the business side had been nailed down solid, with a firm spec and feature set.
I followed the same journey but was unimpressed by Rails attempt at modernization with Hotwire. Decided to give Elixir + Phoenix a try and immediately fell in love. Just like I had with rails years ago. I highly recommend people to check it out, liveview is a game changer for building modern web apps without the complexity of JS, and without the baggage of using Rails to do it. And performance is mind blowing.
Funnily enough, a JS library that I often use with Elixir, Phoenix and LiveView is StimulusJS (which is part of Hotwire). I have also written a hacky Stimulus controller to integrate it with Phoenix hooks for full integration.
Not sure about Rails, haven't used it in more than a decade, but NextJS was a major contributor to massive burnout. Of one thing I'm certain: Phoenix is my last web framework. I love it to bits, and I hope to retire before it stops being cool.
Is this article comparing apples and oranges? For example
> loading the entire homepage only takes one query [if you're logged out]
You can do this with Next.js SSR - there's nothing stopping you from reading from a cache in a server action?
They also talk about Vercel hosting costs, but then self-host Rails? Couldn't they have self-hosted Next.js as well? Rails notoriously takes 3-4x the resources of other web frameworks because of its speed and memory use.
Yep! It'd be possible with Next.js. The difference is how it's organized. In Next.js with RSCs, we were fetching data for each part of the page where it's used (trending books, live events, blog posts, favorite books). Each of those could be its own cache hit to Redis.
One advantage of Rails is the controller. We can fetch all data in a single cache lookup. Of course it'd be possible to put everything needed in a single lookup in Next.js too, but then we wouldn't be leveraging RSCs.
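A rough sketch of what that single lookup might look like (the model scopes are hypothetical):

    class HomeController < ApplicationController
      def index
        # One round trip to the cache for everything the homepage needs,
        # instead of one hit per page section.
        @homepage = Rails.cache.fetch("homepage/v1", expires_in: 10.minutes) do
          {
            trending_books: Book.trending.limit(10).to_a,
            live_events:    Event.live.to_a,
            blog_posts:     BlogPost.recent.limit(5).to_a,
            favorite_books: Book.most_favorited.limit(10).to_a
          }
        end
      end
    end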
I tried self-hosting Next.js on Digital ocean, but it crashed due to memory leaks without a clear way to understand where the leak was. Google Cloud Run and Vercel worked because it would restart each endpoint. We have more (and cheaper) hosting options with Rails.
I think from a business perspective, the hiring pool for Rails is small and younger engineers don’t have an interest in learning Rails (check recent university hackathons). It takes a decently long time (2-3+ months) to upskill a non-Ruby engineer to be productive in Rails (although this is dampened by AI tools these days), and many senior non-Ruby engineers aren’t interested in Rails jobs, whereas you can get a Node or Java engineer to come to your Go shop and vice versa. Rails can also be hard to debug if you work in a multi-language shop; you can’t map your understanding of Java or TypeScript over to a Rails codebase and be able to find your way around.
All that being said I still use (and like) Rails, currently comparing Phoenix/Elixir to Rails 8 in a side project. But I use typescript w/ Node and Bun in my day job.
If you "screw up and succeed" by gaining many users/customers, any Ruby or Python framework provides orders of magnitude fewer requests per second on the same VM or hardware than a comparable solution deployed with Node.js, Go, Java, C# (including .NET Core on Linux), or Rust. And this will quickly balloon your cloud compute costs to keep up.
I believe(?) stats have shown that Java, C#, Elixir, Rust and friends are going to be quite fast, but Node.JS is going to be even slower than Ruby. At least, Next.JS (which is on top of node, I think?) will be.
Sounds improbable. Unlike Ruby, JavaScript has multiple runtime implementations with capable JIT compilers that sometimes let it even compete with Java on numeric code. Ruby is very, very far away. Please note that Elixir is also in the same single-threaded performance ballpark as Ruby and Python, of course it does not suffer from any of the single-threaded bottlenecks the other two do though.
I have now done several Google searches to - well, admittedly, to try and counter your argument; but what I've since found is:
* Every friggin' benchmark is wildly different [0, 1]
* Some of these test pages are obnoxious to read and filter; **BUT** Javascript regularly finds itself to be **VERY** fast [0]
On a more readable and easily-filtered version (that has very different answers) [1],
* plain Javascript (not Next.js) has gotten *REALLY* fast, serverside
* Kotlin is (confusingly?!) often slower than JS, depending on the benchmark
^-- this one doesn't make sense to me
^-- in at least one example, they're basically on par (70k rps each)
* Ruby and Python are painfully slow, but everyone else sorta sits in a pack together
I will probably be able to find another benchmark that says completely different things.
Benchmarking is hard.
I'm also having trouble finding the article from HN that I was sure I saw about Next.JS's SSR rendering performance being abysmal.
FWIW web-frameworks-benchmark is bad and has a strange execution environment, with results which neither correlate nor are reproducible elsewhere. TechEmpower has also gotten way worse; I stopped looking at it because its examples perform too little work and end up being highly sensitive to factors not related to the languages chosen, or may be a demonstration of underlying techniques optimizing for maximum throughput, which in the real world is a surprisingly rare scenario (you would probably care more about overall efficiency, reasonable latency and reaching a throughput target instead). TechEmpower runs on very large machines, where you get into the territory that if you're operating at such scale and hardware, you're going to (have to) manually tune your application anyway.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/... is the most adequate, if biased in ways you may not agree with, if you want to understand raw _language_ overhead on optimized-ish code (multiplied by the willingness of the submission authors to overthink/overengineer, you may be interested in comparing specific submissions). Which is only a half (or even one third) of the story because the other half, as you noted, is performance of frameworks/libraries. I.e. Spring is slow, ActiveJ is faster.
However, it's important to still look at the performance of most popular libraries and how well the language copes with somewhat badly written user code which absolutely will dominate the latency way more often than anyone trying to handwave away the shortcomings of interpreted languages with "but I/O bound!!!" would be willing to admit.
Post author here. I've been developing in Ruby and Rails for almost 20 years. Here are some of the downsides in my opinion.
- The global interpreter lock (GIL) in Ruby makes it less performant than async/threaded programming in JS (and some other languages)
- Rails creates a monolith rather than a bunch of independent endpoints. If you have a large team, this can be tricky (but is great for smaller teams who want to move fast)
- How Rails integrates with JS/CSS is always changing. I recommend using Vite instead of the asset pipeline, unless you're going with the standard Rails Stimulus JS setup (see the sketch after this list).
- Deploying Rails in a way that auto-scales the way serverless functions can is tricky. The favored deployment is to servers of a fixed size using Kamal.
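On the Vite point above, the setup is small; a sketch assuming the vite_rails gem:

    # Gemfile
    gem "vite_rails"
    # then run: bundle install && bundle exec vite install

    # app/views/layouts/application.html.erb
    <%= vite_client_tag %>
    <%= vite_javascript_tag "application" %>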
The biggest downfall in my experience has been it can be a massive pain to find out where a method is defined in a huge codebase, especially with all the crazy ways in which one can declare methods. You can spend a non-trivial amount of time just trying to find the definition for a method.
Sorry if this sounds like a stupid question but - is there no "Go to definition" command in an IDE that can help with something like this? I mean, I understand that there is, but it doesn't work well with Ruby. Why?
Other people have mentioned "dynamic typing" as being the reason for this, but that's not actually true. The real reason is two Ruby features: `define_method` and `method_missing`.
If you have a class `Customer` with a field `roles` that is an array of strings, you can write code like this:

    class Customer
      ROLES = ["superadmin", "admin", "user"]

      ROLES.each do |role|
        define_method("is_#{role}?") do
          roles.include?(role)
        end
      end
    end
In this case, I am dynamically defining 3 methods: `is_superadmin?`, `is_admin?`, and `is_user?`. This code runs when the class is loaded by the Ruby interpreter. If you were freshly introduced to this codebase and saw code using the `is_superadmin?` method, you would have no way of finding where it's defined by simply grepping. You'd have to really dig into the code, which could be further complicated by the fact that this might not even be happening in the Customer class; it could happen in a module that the Customer class includes/extends.
The other feature is `method_missing`. Here's the same result achieved by using that instead of define_method:
    class Customer
      ROLES = ["superadmin", "admin", "user"]

      def method_missing(method_name, *args)
        if method_name.to_s =~ /^is_(\w+)\?$/ && ROLES.include?($1)
          roles.include?($1)
        else
          super
        end
      end
    end
Now what's happening is that if you try to call a method that isn't explicitly defined using `def` or the other `define_method` approach, then as a last resort before raising an error, Ruby checks "method_missing" - you can write code there to handle the situation.
These 2 features combined with modules are the reason why "Go to Definition" can be so tricky.
Personally, I avoid both define_method and method_missing in my actual code since they're almost never worth the tech debt. I have been developing in Rails happily for 15+ years and only had one or two occasions where I felt they were justified and the best approach, and that code was heavily sprinkled with comments and documentation.
That code is *literally* calling class_eval with a multi-line string parameter, where it inlines the helper name (like admin, user, whatever), to grow the class at runtime.
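For the curious, the shape of that technique looks something like this (a sketch, not the actual Rails source):

    class Customer
      ROLES = ["superadmin", "admin", "user"]

      ROLES.each do |role|
        # Builds the method definition as a string, then evaluates it
        # into the class at load time.
        class_eval <<~RUBY, __FILE__, __LINE__ + 1
          def is_#{role}?
            roles.include?("#{role}")
          end
        RUBY
      end
    end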
It's been widely understood in the Ruby community for some time now that metaprogramming—like in the example above—should generally be limited to framework or library code, and avoided in regular application code.
Dynamically generated methods can provide amazing DX when used appropriately. A classic example from Rails is belongs_to, which dynamically defines methods based on the arguments provided:
    class Post < ApplicationRecord
      belongs_to :user
    end

This generates methods like:
- post.user - retrieves the associated user
- post.user=(user) - sets the associated user
- post.user_changed? - returns true if the user foreign key has changed
Aren’t all these enhancement methods that are added dynamically to every ActiveRecord object a major reason why regular AR calls are painfully slow and it’s better to use .pluck() instead? One builds a whole object from pieces, the other vomits out an array?
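For reference, the difference in question, as a minimal example:

    # Instantiates a full ActiveRecord object per row, with type casting,
    # dirty tracking, and all the dynamically defined methods attached:
    User.where(active: true).map(&:email)

    # Skips model instantiation and returns raw column values:
    User.where(active: true).pluck(:email)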
Mainly because of the dynamically typed nature of the language. Not limited to Ruby/Rails. My colleagues used RubyMine because of this. I'm using Neovim with LSP, it's ok but nowhere near Go for example.
I tried Rails pre-version-1, in its early stages. I always felt like Rails is very powerful and lots of things feel like magic, until you come to a point where you want something that isn't implemented that way.
You can prototype stuff very fast with Rails, and it's a mighty tool in the right hands.
Rails is a sharp knife. There is Rails way to do things. You may of course choose to do them differently (this is a contrast with other toolkits that fight this hard), but you are going to have to understand the system well to make that anything but a giant hassle.
With Rails, the way it scales is statelessness. You have to map the front-end actions to individual endpoints on the server. This works seamlessly for CRUD stuff (create a thing; edit a thing; delete a thing; list those things). For other use cases it works less seamlessly. NB: it works seamlessly for nested "things" too.
Complex multi-step flows are a pain point, e.g. when you want to build data structures over time, where between actions on the server (and remember, you must serialize everything you wish to save between each action) you have incomplete state. Concretely: an onboarding flow which sets up 3 different things in sequence with a decision tree is going to be somewhat unpleasant.
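One common workaround is persisting the partial state explicitly between steps; a sketch with hypothetical names:

    # Persist incomplete wizard state explicitly between requests.
    class Onboarding < ApplicationRecord
      # columns: user_id, step (integer), data (jsonb)
      belongs_to :user
    end

    class OnboardingsController < ApplicationController
      def update
        onboarding = Onboarding.find_by!(user: current_user)
        # Merge this step's answers into the accumulated state, then advance.
        onboarding.update!(
          data: onboarding.data.merge(params.require(:answers).permit!.to_h),
          step: onboarding.step + 1
        )
        # onboarding_step_path is a hypothetical route helper.
        redirect_to onboarding_step_path(step: onboarding.step)
      end
    end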
You must keep most state on the server and limit FE state. Hotwire works extremely well but the FE must be structured to make hotwire work well.
I've actually found it to work pretty well with individual pages built in React. My default is to build everything with Hotwire and, when the FE gets too complex, to fall back to React.
Rails is nobody's idea of a fast system. You can make it perform more than well enough, but fast it is not.
Upsides, my take: it is the best tool to build websites. The whole thing is built by developers for developers. DX and niceness of tools are valued by the community. Contrast with eg the terrible standard lib that other languages (hi, js) have. Testing is by far the most pleasant I've used, with liberal support for mocking rather than having to do DI. For eg things like [logic, api calls, logic, api calls, logic, db calls] it works incredibly well. It is not the most popular toolkit and it's not react, so that can count against you in hiring.
You have to be comfortable with the ORM in every layer - it lives inside your domain models, rather than in another layer shuffling DTOs to presentation/rendering. It also makes it easy to avoid separation of concerns, stuff all your logic in a controller method, and call it a day.
The upsides is that by not trying to hide the database and pretend it doesn't exist you can avoid a whole class of work (and the safety abstractions provided) and be incredibly productive if the requirements align.
You have to program it using Ruby which is not a good language. It's slow. It doesn't have good static type annotations (as far as I can tell the community "gets it" even less than in Python).
Rails also uses way too much magic to dynamically construct identifiers and do control flow.
The over-use of magic and the under-use of static types makes it extraordinarily difficult to navigate Rails codebases. It's one of those things where you have to understand the entire codebase to be able to find anything. Tractable for tiny projects. For large projects it's a disaster.
Rails is a bad choice (as is Ruby).
My favourite web framework at the moment is Deno's Fresh. You get the pleasure of TSX but it's based around easy SSR rather than complex state management and hooks. Plus because it's Deno it's trivial to set up.
Inertia is so nice. It rescues you from the Hotwire mess. You choose your own frontend framework (React, Vue, Svelte) while at the same time not spinning up an API or dealing with client state, etc.
> hitting a GraphQL API (Hasura) for getting data, and caching as much as possible using Incremental Static Revalidation. The first load was often a bit slow, but caching helped.
Why do you need GraphQL here?
If your developer workstation can't send a few KB of data over a TCP socket in a reasonable amount of time due to the colossal amount of Zoomer JavaScript abstraction nonsense going on, something has gone terribly wrong.
The whole idea of needing "islands" and aggressive caching and all these other solutions to problems you created -- that you have somehow managed to make retrieving a trivial amount of data off a flash storage device or an in-memory storage system of some kind slow -- is ludicrous.
Yeap. Once I squinted hard enough at GraphQL, I realized it was a tantrum against coordinating front end calls with back-end API signatures efficiently, masquerading as a solution. A classic end-around.
What's funny is that people struggling after deploying it now think that they have invented the N+1 problem.
Many SPA websites don't need to be SPAs, and the overhead in terms of complexity vs "old-fashioned" server-side ajax calls (even using something as "ancient" as jQuery) is not worth it, and do not improve the user experience.
That’s funny, after using it on a couple projects I felt that it was under-engineered/lacked some basic things I was used to having in other frameworks.
Most websites are significantly simpler to build and maintain with SSR and traditional tools. An entire generation has forgotten this it seems and still thinks in terms of JS frameworks even when trying SSR.
As one example, take this website, which serves the page you wrote your comment on using an obscure Lisp dialect and SSR.
The majority of websites do in fact still do that and add interactivity with ajax. Works great (far better and far simpler than react and similar frameworks).
The oldest web apps - web email clients being probably the canonical and most familiar example - didn't do dynamic refresh at all, because there was no way to fetch data from the server, so you couldn't do it even with JS. Any user action that required any part of the page to be updated involved a (very visible) whole page refresh. You could limit the scope of refresh with frames to some extent, but it was still very noticeable.
Microsoft introduced XMLHttpRequest in 2000 for this exact reason - its original purpose was to allow the newly introduced Outlook web UI to fetch data from the server and use that to update the DOM as needed. This was then enthusiastically adopted by other web app authors (most notably Google, with GMail, Google Maps, and Google Talk), and other browsers jumped onto the bandwagon by providing their own largely compatible implementations. By 2006 it was a de facto standard, and by 2008 it was standardized by W3C.
The pattern was then known as AJAX, for "asynchronous JS and XML". At first web apps would use the browser API directly, but jQuery appeared right around that time to provide a consistent API across various browsers, and the first SPA frameworks showed up shortly after (I'm not sure if I remember correctly, but I think GWT was the first in 2006).
While this is mostly true, there were similar techniques even before XMLHttpRequest. iframes could communicate with parents, and also JSONP. I think JSONP was mostly pioneered as a technique after XMLHttpRequest, but the iframe trick did work (I even used it! just a tiny 16x16 iframe communicating with the parent element by calling functions on window.parent, worked great on IE5).
Oh, I'm familiar with the history. I was thinking maybe you had similar concerns (lack of dynamic content) with something modern like HTMX which is a modern take on Server Side Rendering -- but it does in fact include mechanisms for AJAX-like calls.
I use nextjs with static exporting (so technically no SSR). You get SEO and quick first page loads. Once the user is logged in, I use CSR only and the data loads via REST.
Rails is still wonderful. But someone should fork Rails so it ceases to be associated with DHH. CEOs who reveal who they really are become really toxic to the brand. We've seen that happen with Tesla.
The problem is CEO worship. You should worship Rails, not DHH. It's okay that he sometimes has other opinions, we don't have to agree with them to benefit from Rails. And there are a couple of things differentiating him from Musk: first, Rails is a byproduct of his business, not his business. 37Signals isn't even shilling some cloud service like Vercel is with NextJS. Second, DHH isn't involved in government.
He became very unpopular for his no politics at work stance at the time, but it seems to have ultimately been the right call in the long run. The toxic individuals left and 37signals is stronger than ever.
Beyond Rails and 37signals, I'm most familiar with him as a car racer, photography enthusiast, and his recent "buy once"/post-subscription software advocacy.
He’s politically naive. I agree with him on much, such as not making the workplace political, and that cancel culture and DEI have in many cases gone mad, but his tolerance, even gentle celebration, of Trump in the name of free speech is a classic example of the paradox of tolerance.
However he is right in many cases, and I don’t expect anyone to be right all the time, myself included. It’s strange to look for political leadership from a programmer anyhow.
Americans don't seem to understand nuance, so when DHH posts about support for people's right to protest, how he loves being a father, how he doesn't want politics in the workplace and doesn't proclaim the sky is falling because of politics they seem to think he's the devil.
Or maybe American culture warriors should chill a bit. First they tried to cancel Matz, now DHH for holding opinions that are considered progressive everywhere in the world except SV.
Just eww... you were an expert at Rails 10+ years, failed to become an equivalent expert at Next.js so you went back to what you're used to. You just didn't dive in deep enough.
I was the same expert level with Python, now I'm using trpc, nextjs, drizzle, wakaq-ts, hosted on DO App Platform and you couldn't pay me enough to go back to Python, let alone the shitstorm mess that's every Rails app I've ever worked on.
I've also not seen the 1s Next.js page loads you had, but I'm confident I could figure out a fix if that became a problem.