I want to preface this by saying I have nothing against React, I have used it professionally for a couple years and it's fine and perfectly good enough.
That being said, React is slow. That is why you need useTransition, which is essentially manual scheduling (letting React know some state update isn't very important so it can prioritise other things), something you don't need to do in other frameworks.
useOptimistic does not improve performance, but perceived performance. It lets you show a placeholder of a value while waiting for the real computation to happen. Which is good, you want to improve perceived performance and make interactions feel instant. But it technically does not improve React's performance.
It is pretty well established at this point that React has (relatively) terrible performance. React isn't successful because it's a superior technology; it's successful despite being an inferior technology. It's just really difficult to beat an extremely established technology: React has a huge ecosystem, so many companies depend on it that the job market for it is huge, etc.
As to why it is slow, my knowledge isn't super up-to-date (I haven't kept up that well with recent updates), but in general the idea is:
- The React runtime itself is 40 kB so before doing anything (before rendering in CSR or before hydrating in SSR) you need to download the runtime first.
- Most frameworks have moved on to using signals to manage state updates. When state changes, observers of that state are notified and only the minimal amount of code runs to update the DOM surgically. React instead re-executes the code of entire component trees, compares the result with the current DOM and then applies changes. This is a lot more work and a lot slower. Over time, techniques have been developed in React to mitigate this (memoization, React Compiler, etc.), but it still does a lot more work than it needs to, and these techniques are often not needed in other frameworks because they do a lot less work by default.
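To make the contrast concrete, here is a toy sketch of the signal pattern in TypeScript. This is illustrative only: `createSignal`, the subscriber set, and the fake DOM node are all made up for this example and are not any real framework's API.

```typescript
// Toy signal: observers register once, and a state change runs only
// those observers -- no component tree is re-executed.
type Subscriber = () => void;

function createSignal<T>(initial: T) {
  let value = initial;
  const subscribers = new Set<Subscriber>();
  return {
    get: () => value,
    set: (next: T) => {
      value = next;
      subscribers.forEach((fn) => fn()); // surgical update
    },
    subscribe: (fn: Subscriber) => { subscribers.add(fn); },
  };
}

// A plain object stands in for a real DOM element.
const count = createSignal(0);
const fakeDom = { textContent: "0" };
count.subscribe(() => { fakeDom.textContent = String(count.get()); });

count.set(5); // only the one subscriber above runs
```

Real signal implementations (Solid, Svelte 5 runes, Vue refs) track dependencies automatically and batch updates; the point here is only that the update path is a direct subscription, not a tree-wide re-render and diff.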
The js-framework-benchmark [1] publishes benchmarks testing hundreds of frameworks for every Chrome release if you're interested in that.
> It is pretty well established at this point that React has (relatively) terrible performance.
> it is slow
You're not answering my question, just adding some more feelings.
> The React runtime itself is 40 kB
React is < 10 kB compressed https://bundlephobia.com/package/react@19.2.0 (add react-dom to it). That's not really significant given the author's figures; the header speaks of up to "176.3 kB compressed".
> Most frameworks have moved on to using signals to manage state updates. When state changes
This is not about kilobytes or initial render times, but about rendering performance in a highly interactive application. Signals would not impact rendering a blog post, only a complex app's UI. The original blog post does not measure this; it's out of scope.
I don't know how Bundlephobia calculates package size; let me know if you're able to reproduce its figures in a real app. The simplest Vite + React app with only a single "Hello, World" div and no dependencies (other than react and react-dom), no hooks used, ships 60+ kB of JS to the browser (when built for production, minified and gzipped).
Now, the blog post is not just using React but Next.js, which will ship even more JS because it includes a router and other things that are not part of React itself (which is just the component framework). There are leaner and more performant React meta-frameworks than Next.js (Remix, TanStack Start).
> This is not kilobytes or initial render times, but performance in rendering in a highly interactive application
True, but it's another area where React is, relatively speaking, a catastrophe.
The large bundle size on the other hand will definitely impact initial render times (in client-side rendering) and time-to-interactive (in SSR), because it's so much more JS that has to be parsed and executed for the runtime before even executing your app's code.
EDIT: It also does not have to be a highly interactive application at all for this to apply. If you change even a single value that is read in a component deep within a component tree, you will definitely feel the difference, because that entire component tree is going to execute again. (Even though the resulting diff will show that only that deeply nested div needs to be updated, React has no way of knowing that beforehand, whereas signal-based frameworks do.)
And finally I want to say I'm not a React hater. It's totally possible to get fast enough performance out of React. There are just more footguns to be aware of.
React's bundling system and published packages have gotten noticeably more complicated over time.
First, there's the separation between the generic cross-platform `react` package and the platform-specific reconcilers like `react-dom` and `react-native`. All the actual "React" logic is built into the reconciler packages (i.e., each contains a complete copy of the actual `react-reconciler` package plus all the platform-specific handling). So, bundle size has to measure both `react` and `react-dom` together.
Then, the contents of `react-dom` have changed over time. In React 18 they shifted the main entry point to `react-dom/client`, which then ends up importing the right dev/prod artifacts (with `react-dom` still supported but deprecated).
Then, in React 19, they restructured it further so that `react-dom` really only has a few utils, and all the logic truly lives in the `react-dom/client` entry point.
So yes, the full prod bundle size is something like 60K min+gz, but it takes some work to see that. I don't think Bundlephobia handles it right at all - it's just automatically reading the main entry points for each package (and thus doesn't import `react-dom/client`). You can specify that with BundleJS, though.
The complete Framework Desktop with everything working (including said Ryzen AI Max 395+ and 128 GB of RAM) is 2500 EUR. In Europe the DGX Spark listings are at 4000+ EUR.
It's a different animal. Ryzen wins on memory bandwidth and has an 'AI' accelerator (my guess: matrix multiplication). Spark has several times lower bandwidth, but much better and more generic compute. Add to that the CUDA ecosystem with libs and tools. I'm not saying Ryzen is bad; actually it's a great poor man's Mac substitute. $2K for the 128 GB version on Amazon now.
Online only at https://frame.work AFAIK. I don't think people shelling out 2-4k for an AI training machine are concerned whether or not they can find it at a hardware store locally or online, but I may be wrong.
I moved away from using Tailwind CSS, but still use their "preflight.css" [1]. It doesn't really care about backwards compatibility (IE stuff), but it does a great job at unstyling everything so you have a clean cross-browser base to work with (a button will look like plain text until you add your styling).
I took a look thinking that this might actually be useful and somehow came away with an even lower opinion of Tailwind. It’s all like this:
/*
  1. Add the correct height in Firefox.
  2. Correct the inheritance of border color in Firefox. (https://bugzilla.mozilla.org/show_bug.cgi?id=190655)
  3. Reset the default border style to a 1px solid border.
*/
hr {
  height: 0; /* 1 */
  color: inherit; /* 2 */
  border-top-width: 1px; /* 3 */
}
Why can’t they do anything reasonably? It would be easy to put each comment against the actual code it describes, but instead they use this weird comment index up front.
Could you write a few words on why you moved away from TailwindCSS?
I am an amateur dev (I write open-source software useful for me and possibly others) and I am prone to the "front-end diarrhea syndrome": when I see something cooler, I jump on it and afterwards regret the time spent.
I am on TailwindCSS right now and I am afraid to learn the drawbacks, but one must be courageous in life.
It has some good points (and vanilla-extract IS awesome), but it's also a bit unfair, e.g. ignoring Tailwind affordances like `@apply`.
That said, for learning the "right" way to think about and use CSS, https://every-layout.dev is hands-down the best resource I've encountered in my 20+ years working with websites.
Thank you, this is a nice post. On the other hand, the author is happy to have HTML and CSS generated with JS, which is weird as well (I know - React).
Because his article is about bringing CSS back to its glory (more or less :)), while at the same time he's happy with his HTML being generated by JS, instead of having a pure semantic HTML section.
I am not saying that is good or bad (I use Vue for instance), just a bit contradictory.
Now, to be fair, they wrote the article in 2025, and Tailwind linting was only released five years prior (in 2020) ... five years is hardly long enough to learn relevant tech for your industry /s
The rest of the article seemed similarly ill-informed, with the author fixating on meaningless byte-size differences in contrived examples. However, he ignores the fact that Tailwind is used on some of the most performant sites on the Internet. He also ignores the fact that (for 99% of sites at least) sacrificing a k or two of bandwidth is well worth it for a major increase in developability.
With Tailwind you completely get rid of stylesheets: that alone is huge! There's a reason why so many devs use Tailwind: they don't worry about minimal file size differences, but they do care about massive savings in development time and complexity reduction.
You described a lot of orthogonal points and highlighted your opinions, more than pointed out flaws in the article.
I use Tailwind at work at a large company, and it's... Okay. Its biggest strength is the documentation, since most companies have poorly documented style guide/component library.
I'd never use it for a personal project though. It's fine to disagree
I find it just inevitably leads to massive classes that pollute markup. Makes it a lot harder to parse HTML and figure out the structure. There are ways to make it a bit better but it's still not great (for me).
It's even worse with conditional styling, since you're not toggling a single class on and off but dozens.
You can't use string interpolation for classes, e.g. `<div class={`bg-${color}-400`}>` (Tailwind won't include the classes in your CSS because it doesn't recognize that you want all or a subset of the bg-<color>-400 classes). They recommend using maps, but this becomes very verbose once there are more than a few options.
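The map workaround looks roughly like this sketch (the color names and the gray fallback are my own example values, not anything Tailwind prescribes):

```typescript
// Tailwind scans source files for complete, literal class names, so a
// template string like `bg-${color}-400` is invisible to it. The
// workaround is a lookup map of full class strings.
const bgClass: Record<string, string> = {
  red: "bg-red-400",
  green: "bg-green-400",
  blue: "bg-blue-400",
};

function badgeClass(color: string): string {
  // Fallback for unknown colors (my assumption, not Tailwind's).
  return bgClass[color] ?? "bg-gray-400";
}
```

Every supported color needs its own entry, which is exactly what makes this verbose once there are more than a few options.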
I'm using Svelte which has a great built-in styling solution with component-scoped styles by default, and modern CSS with nesting is a lot more compact than it used to be.
On macOS you can use OrbStack [1] for a much better experience working with Docker on Mac, as well as quickly spinning up headless Linux VMs (WSL on Mac kind of thing). The free tier will probably be rug-pulled someday, but I've been using it for a couple of years and it makes my life a lot easier when I'm on macOS.
All of these tools run Linux under virtualization and present it to the Mac user, so they naturally have all the same limitations as running a VM. They are effectively like WSL.
P.S. I believe that specific company already did some level of rug pull early on: they started charging people who had come to rely on the free tier after leaving Docker (which had started charging earlier), so I would be wary of relying on them.
You can argue all day whether it’s ok to do this, and I’d absolutely say it’s fine, even laudable that they’re trying to make a real business where you have to pay for a product. Great for them!
But “rug pull” is absolutely still a correct description of what’s happening, because it was free, and now it’s not. Here’s a nice rug, but you have to get off of it by $DATE because we’re going to pull it. It’s a rug pull.
If it wasn’t a rug pull, I’d be able to keep standing on the rug (the free version.)
Very strange logic. If we follow your example, going to the dealership and taking a car for a test drive is a rug pull because eventually the car dealer will ask you to pay for the car?
No, because that would be absurd. You're not "following my example", you're using reductio ad absurdum. Any phrase can sound stupid if you take it out of context like that.
To make a non-fallacious analogy: If a ride sharing service gave car rides for free for a month, and a friend said "I'm going to use this instead of buying a car", you would very rightly say "they're going to pull the rug on the free rides, you may want to rethink that". And that would be a perfectly valid thing to say, even if the company told everyone the free rides were only for a month. Because the purpose of the discussion is whether it's a good idea to depend on the free service or not.
You seem hung up on this, like it's a judgement call or something. Maybe just free yourself of negative connotations with the term. It's fine to do this. I don't think it's a problem whatsoever.
The phrase is useful for what the metaphor implies: Likening using the product to sitting on a rug. If you start getting used to your place on the rug (putting your stuff on it, eating dinner on the rug, etc), you have to be aware that they're going to pull it, so you have to have a plan for when that happens (either pay or switch to a competitor.) Being aware of this is important: If you start developing a workflow that depends on this kind of software, you have to understand that it won't be free in the future and that you should either not depend on it, or be willing to pay. This is all fine.
The fact that you don't like the negative connotation doesn't mean the phrase isn't applicable.
There's a way to do that. Don't call it free beta with no pricing attached. Call it "free trial for X period" and ideally advertise the price ahead of time as it was always done in the past. Not calling it a "trial" is not an accident. It is deliberate and that's what makes it a rug pull.
NT was designed to have different subsystems [1]. The Win32 subsystem was a layer on top of NT and a peer to the POSIX subsystem, and IIRC even an OS/2 one. The idea behind WSL1 was certainly not new in the context of NT. FreeBSD also has a Linux subsystem.
I agree - we could already run Linux VMs on Windows anyway. The change in architecture from a native subsystem on Windows to just another VM was very disappointing. Sure, they improved FS performance, but they broke a lot of magical stuff that could be done earlier. FS performance for WSL1 should have been improved by enhancing the Windows API.
Technically WSL1 is superior, sure, but is that what users want? Seems like users are happily using VMs to run Docker on Mac and many don't even know it is VM underneath. I suppose max compatibility with Linux kernel is more of a desirable behavior, plus the engineering cost of WSL2 is minuscule compared to WSL1, especially once it is done. WSL1 would require continuous reimplementation of complex Linux kernel features as Windows syscalls.
Indeed, I misunderstood "Docker and virtualization isn't the same" as "Docker and virtualization on Mac is not as good as on Linux". Now I understand you meant "Docker and virtualizing Linux isn't the same as bare metal Linux".
You can in principle use USB passthrough with a USB WiFi card to do that, I believe, but it sucks to carry around a USB dongle, especially since Macs seem to barely have USB ports...
It's a terrible bicycle. If it was extremely affordable because it's mostly recycled plastic acquired for cheap, then it would make sense as a product but the 1200 EUR price tag is absolutely demented.
> presumably you would've been able to replicate it completely in the first X years of using vim, and then there is no hell anymore?
I agree with this, but being able to ssh into a server and just grab Helix instead of copying over my Vim config and whatever else it depends on is really nice. Makes your dev env feel a lot more portable (although also more barebones than a crazy Vim config)
It's not like Helix is as ubiquitous as Vim on a random remote server. If that's not the case, surely the effort to install Helix is bigger than copying a Vim config?
This was my plan, but then I got really into using LSPs, and I'm not about to install every LSP on every server I ssh into.
Currently I just use sshfs to mount whatever directory I want to work on locally; it's nice to have all LSPs available and also see my changes reflected on the server instantly.
Glad it works for you! My dot files are in a private repo, I don't want to bother with adding new SSH keys to my GitHub account and then remembering to cleanup everything for what could be a one-time use. `apk add helix` is way easier for me.
> you will spend on average 1/3 of your time not playing when you came to play
The buy phase is playing. It's coordinating with your team on what loadouts to get to best counter the enemy team. The decision making is fun and is play. Just because you're not shooting at other players doesn't mean it's not playing.
But generally, for other recent games I have to play, that is a strong complaint that I have.
Like BO6 is awful for that for example.
One example I noticed in the past years is with racing games like Burnout. The original game was perfect for a quick relaxing session: you start the game, and you spend most of the time playing (driving). In the later versions of Burnout, you lose hours waiting for unskippable intro, pre-race, and post-race cinematics. And the whole interface makes it painful to string together a long session of "really playing".
This mostly happens when games are online, even though there is no real need for a "loading" phase that reloads the same assets 50 times when you play the same game over and over.
Zig makes allocations extremely explicit (even more than C) by having you pass the allocator to every function that allocates on the heap. Even third-party libraries will only use the allocator you provide them. It's not a fallacy; you're in total control.
Nothing, but it would be bad design (unless there is a legitimate documented reason for it). Then it's up to you as the developer to exercise your judgment and choose what third-party libraries you choose to depend on.
You can if you want. You can write your own allocator that never actually touches the heap and just distributes memory from a big chunk on the stack if you want to. The point is you have fine grained (per function) control over the allocation strategy not only in your codebase but also your dependencies.
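To illustrate the pattern, here is a sketch in TypeScript rather than Zig, so every name below is a hypothetical analog of Zig's `std.mem.Allocator` idea, not real Zig semantics: the caller constructs an allocator, and library code only allocates through the parameter it is handed.

```typescript
// A hypothetical allocator interface, analogous in spirit to Zig's
// std.mem.Allocator: callers pick the strategy, callees just use it.
interface Allocator {
  alloc(n: number): Uint8Array;
}

// Fixed-buffer allocator: hands out slices of one preallocated chunk
// and never touches the heap after construction.
function fixedBufferAllocator(buffer: Uint8Array): Allocator {
  let offset = 0;
  return {
    alloc(n) {
      if (offset + n > buffer.length) throw new Error("OutOfMemory");
      const slice = buffer.subarray(offset, offset + n);
      offset += n;
      return slice;
    },
  };
}

// "Library" code takes the allocator as a parameter; it cannot
// allocate behind the caller's back.
function makeGreeting(a: Allocator, name: string): Uint8Array {
  const bytes = new TextEncoder().encode(`hello ${name}`);
  const out = a.alloc(bytes.length);
  out.set(bytes);
  return out;
}

const fba = fixedBufferAllocator(new Uint8Array(64));
const greeting = makeGreeting(fba, "zig");
```

In real Zig, swapping the strategy is just passing a different `Allocator` at the call site; the callee's code is unchanged.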
That's not really an argument. What prevents the author of a library in any language from acting in bad faith and using antipatterns? That's not a problem unique to Zig.
They can, and they wouldn't necessarily be wrong.
However if the library is trying to be as idiomatic / general purpose / good citizen as possible, then they should strongly consider not doing that and only use the user provided allocator (unless there is a clear and documented reason why that is not the case).
I don't think it would make sense to restrict this at the language level. As a developer it's up to you to exercise your judgement when you examine what libraries you choose to depend on.
I appreciate the fact it's a common design pattern in Zig libraries and I also appreciate the fact I'm not forced to do it if I don't want to. If it matters to me then I'll consider libraries that are designed that way, if it does not matter to me then I can consider libraries that do not support this.
And so why should he be forced not to do so? He can cook his own thing outside of what most people using this language will do, and they will just not use it, and nothing's wrong with that.