> and how little documentation there seems to be on it
DISCLAIMER: After years of using Angular/Ember/jQuery/VanillaJS, jumping into React's functional components made me enjoy building front-ends again (and it still does to this very day). That being said:
This has been maybe the biggest issue in React land for the last 5 years at least. And not just for RSC, but across the board.
It took them forever to put out clear guidance on how to start a new React project. They STILL refuse to even acknowledge CRA exist(s/ed). The maintainers have actively fought with library makers on this exact point, over and over and over again.
The new useEffect docs are great, but years late. It'll take another 3-4 years before the code LLMs spit out even resembles that guidance because of it.
And like sure, in 2020 maybe it didn't make sense to spell out the internals of RSC because it was still in active development. But it's 2025. And people are using it for real things. Either you want people to be successful or you want to put out shiny new toys. Maybe Guillermo needs to stop palling around with war criminals and actually build some shit.
It might be one of the most absurd things about React's team: their constitutional refusal to provide good docs until they're backed into a corner.
There are many different tools that attempt to solve the same problem, with varying levels of competency.
They can't all use the same name. If you want to build a better alternative to an existing solution, you need to choose a different name, which is why names end up being arbitrary.
For the occasional local LLM query, running locally probably won't make much of a dent in battery life: smaller models like mistral-7b can run at 258 tokens/s on an iPhone 17[0].
The reasons local LLMs are unlikely to displace cloud LLMs are memory footprint and search.
The most capable models require hundreds of GB of memory, which is impractical for consumer devices.
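A back-of-envelope check makes the point concrete (the 405B parameter count is an illustrative example, not a claim about any specific model):

```python
# Weight memory alone is parameter count x bytes per parameter,
# before you even account for KV cache and activations.
def weight_memory_gb(params_billions, bytes_per_param):
    # 1e9 params * bytes per param ~= that many GB
    return params_billions * bytes_per_param

print(weight_memory_gb(405, 2))    # a 405B model at fp16 -> 810 GB
print(weight_memory_gb(405, 0.5))  # even 4-bit quantized -> ~202 GB
```

Either way, that's far beyond what fits in a phone or a typical laptop.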
I run Qwen 3 2507 locally using llama-cpp. It's not a bad model, but I still use cloud models more, mainly because they have good search RAG.
There are local tools for this, but they don't work as well. They might continue to improve, but I don't think they're going to get better than the Google/Bing API integrations that cloud models use.
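To make the gap concrete, here is a minimal sketch of what local search RAG amounts to: rank locally stored documents by term overlap with the query and stuff the top hits into the prompt. All names and documents are illustrative; cloud models instead call out to a live web search API, which this kind of offline retrieval can't match for freshness or coverage.

```python
# Toy local retrieval: score documents by how many query terms they share.
def retrieve(query, docs, k=2):
    q_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

# Stuff the retrieved context into the model prompt.
def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "llama-cpp runs GGUF models on local hardware",
    "cloud models integrate live web search APIs",
    "read replicas trade consistency for throughput",
]
print(build_prompt("how do local models run", docs))
```

Real local tools use embeddings rather than term overlap, but the structure — retrieve, then prompt — is the same, and it's only as good as the corpus sitting on your disk.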
End-to-end encryption doesn't mean much even where it's semi-validly used. It's used on phones, where you as a user (or company) don't control what code executes. For example, WhatsApp is end-to-end encrypted. But that doesn't actually provide security, because with either physical access to the phone or the ability to push an "upgrade" through the app store, you can put code on the phone. You can upload an APK that replaces the WhatsApp app. The app still uploads messages to a central server, so an attacker can get those messages from Meta, obtain the key from the phone some time earlier or later, and use it to decrypt the messages even after they've been erased from the phone.
(aside from the fact that people don't seem to know/remember WhatsApp backs up to Google Drive)
Code that then gets access to the end-to-end encryption keys ... so you're not safe from state actors, you're not safe from police, you're not safe from the authors of the code and you're not safe from anyone who has physical access to your phone.
A soft-realtime multiplayer game is always incorrect (unless no one is moving).
There are various decisions the netcode can make about how to reconcile with this incorrectness, and different games make different tradeoffs.
For example in hitscan FPS games, when two players fatally shoot one another at the same time, some games will only process the first packet received, and award the kill to that player, while other games will allow kill trading within some time window.
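The two policies can be sketched as follows — this is an illustrative toy, not any particular engine's netcode, and the window size is a hypothetical tuning value:

```python
TRADE_WINDOW_MS = 200  # hypothetical kill-trade window

def resolve_first_packet(shots):
    """Award only the first fatal hit received.

    shots: list of (arrival_ms, shooter, target), sorted by arrival time.
    """
    return [(shooter, target) for _, shooter, target in shots[:1]]

def resolve_with_trading(shots):
    """Allow kill trades for hits arriving within TRADE_WINDOW_MS of the first."""
    if not shots:
        return []
    first_ms = shots[0][0]
    return [(s, t) for ms, s, t in shots if ms - first_ms <= TRADE_WINDOW_MS]

# A and B fatally shoot each other ~simultaneously:
shots = [(0, "A", "B"), (150, "B", "A")]
print(resolve_first_packet(shots))   # [('A', 'B')] -- B's kill is dropped
print(resolve_with_trading(shots))   # [('A', 'B'), ('B', 'A')] -- both die
```

Neither policy is "correct"; each just picks which flavor of incorrectness the players will notice.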
A tolerance is just an amount of incorrectness that the designer of the system can accept.
When it comes to CRUD apps using read-replicas, so long as the designer of the system is aware of and accepts the consistency errors that will sometimes occur, does that make that system correct?
The impulse console command originates from Quake: the Half-Life 1 engine (GoldSrc[0]) was based on the Quake engine, and the Half-Life 2 engine (Source) was based on GoldSrc.
In Quake, the impulse commands were used mostly to switch weapons[1].
I'm not really sure about the naming though: why choose the word "impulse"?
In fairness, React presents it as an "experimental" library, although that didn't stop Next.js from deploying it widely.
I suspect there will be many more security issues found in it over the next few weeks.
Next.js ups the complexity by orders of magnitude; I couldn't even figure out how to set any breakpoints on the RSC code within Next.
Next vendors most of its dependencies, and it has an enormously complex build system.
The benefits that Next and RSC offer really don't seem to be worth the cost.