
When I looked into RSC last week, I was struck by how complex it was, and how little documentation there seems to be on it.

In fairness, React presents it as an "experimental" library, although that didn't stop Next.js from deploying it widely.

I suspect there will be many more security issues found in it over the next few weeks.

Next.js ups the complexity by orders of magnitude; I couldn't even figure out how to set breakpoints on the RSC code within Next.

Next vendors most of its dependencies and has an enormously complex build system.

The benefits that Next and RSC offer really don't seem to be worth the cost.


> and how little documentation there seems to be on it

DISCLAIMER: After years of using Angular/Ember/jQuery/vanilla JS, jumping into React's functional components made me enjoy building front-ends again (and it still remains that way to this very day). That being said:

This has been maybe the biggest issue in React land for the last 5 years at least. And not just for RSC, but across the board.

It took them forever to put out clear guidance on how to start a new React project. They STILL refuse to even acknowledge CRA exist(s/ed). The maintainers have actively fought with library makers on this exact point, over and over and over again.

The new useEffect docs are great, but years late. It'll take another 3-4 years before the code LLMs spit out even resembles that guidance because of it.

And like sure, in 2020 maybe it didn't make sense to spell out the internals of RSC because it was still in active development. But it's 2025. And people are using it for real things. Either you want people to be successful or you want to put out shiny new toys. Maybe Guillermo needs to stop palling around with war criminals and actually build some shit.

It might be one of the most absurd things about React's team: their constitutional refusal to provide good docs until they're backed into a corner.


People have been complaining for quite a while now about Next exposing "React, not ready for production" features as "the latest and greatest thing from Next.js".

I moved off Next.js for reasons like these; the mental load was getting too heavy for not much benefit.


There are many different tools that attempt to solve the same problem, with varying levels of competency.

They can't all use the same name. If you want to build a better alternative to an existing solution, you need to choose a different name, which leads to names being somewhat arbitrary.


For the occasional local LLM query, running locally probably won't make much of a dent in the battery life; smaller models like Mistral 7B can run at 258 tokens/s on an iPhone 17 [0].

The reasons local LLMs are unlikely to displace cloud LLMs are memory footprint and search. The most capable models require hundreds of GB of memory, which is impractical for consumer devices.

I run Qwen 3 2507 locally using llama.cpp; it's not a bad model, but I still use cloud models more, mainly because they have good search RAG. There are local tools for this, but they don't work as well. That might continue to improve, but I don't think it's going to get better than the Google/Bing API integrations that cloud models use.

[0]: https://github.com/ggml-org/llama.cpp/discussions/4508
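
For illustration, talking to a local model from code looks roughly like this; a minimal TypeScript sketch, assuming llama-server is running with its default OpenAI-compatible endpoint on port 8080 (the model name and prompt are placeholders):

    // Minimal sketch: query a local llama.cpp server (llama-server) from TypeScript.
    // Assumes llama-server is running on its default port 8080 and exposes the
    // OpenAI-compatible /v1/chat/completions endpoint; model and prompt are placeholders.
    async function askLocal(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:8080/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "qwen3-2507", // whatever model llama-server was started with
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content; // same response shape as the OpenAI API
    }

    askLocal("Summarise this paragraph: ...").then(console.log);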


I used Mistral 7B a lot in 2023. It was a good model then. Now it's not anywhere near where SOTA models are.


I ran your exploit-rce-v4.js with and without the patched react-server-dom-webpack, and both of them executed the RCE.

So I don't think this mechanism is exactly correct. Can you demo it with an actual Next.js project instead of your mock server?


I've updated the code; try it now with server-realistic.js:

1. npm start

2. npm run exploit


I'm trying that. Next.js is a little different because it wraps things in a Proxy object before they pass through, which blocks the RCE.

I'm debugging it currently; maybe I'm not on the right path after all.
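
For context, this is roughly what I mean by "uses a Proxy object"; a generic TypeScript sketch, not Next.js's actual code, just illustrating how a get trap can intercept property lookups before they reach the underlying object:

    // Generic illustration (not Next.js's actual code): wrapping a value in a Proxy
    // lets the framework intercept property access, so lookups like "constructor"
    // can be denied instead of handing the caller a reference to Function.
    const blocked = new Set(["constructor", "prototype", "__proto__"]);

    function wrap<T extends object>(target: T): T {
      return new Proxy(target, {
        get(obj, prop, receiver) {
          if (typeof prop === "string" && blocked.has(prop)) {
            throw new Error(`Access to "${prop}" is blocked`);
          }
          return Reflect.get(obj, prop, receiver);
        },
      });
    }

    const payload = wrap({ name: "demo" });
    console.log(payload.name);       // "demo"
    // (payload as any).constructor; // would throw instead of exposing Function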


A CVSS score of 10.0 may be warranted in this case, but so many other CVSS scores are wildly inflated that the scores don't mean a lot.


Regardless, it can still provide some context and adjustment vs none.

The above could be seen as spin too. How could CVSS be more accurate so you'd feel better?


Any new term you come up with will end up being misused by marketers.


End-to-end encryption doesn't mean anything where it is semi-validly used. It's used on phones, where you as a user (or company) don't control what code executes. For example, WhatsApp is end-to-end encrypted. Well, it doesn't actually provide security, because with either physical access to the phone or the ability to use the app store to "upgrade" the app, you can put code on the phone. You can upload an APK that replaces the WhatsApp app. It still uploads the messages to a central server, so you can get those messages from Meta, then get the key from the phone some time later (or earlier) and use it to decrypt them even after the messages have been erased from the phone.

(aside from the fact that people don't seem to know/remember that WhatsApp backs up to Google Drive)

That code then gets access to the end-to-end encryption keys... so you're not safe from state actors, not safe from police, not safe from the authors of the code, and not safe from anyone who has physical access to your phone.


Yes, the government can also just implant tiny cameras in your eyeballs and record everything you see anyway, so you’re not safe.


FWIW that's the initial plot for the Ghost in the Shell: Stand Alone Complex (2002) animated series.


A soft-realtime multiplayer game is always incorrect (unless no one is moving).

There are various decisions the netcode can make about how to reconcile this incorrectness, and different games make different tradeoffs.

For example in hitscan FPS games, when two players fatally shoot one another at the same time, some games will only process the first packet received, and award the kill to that player, while other games will allow kill trading within some time window.
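
As a rough sketch of the kill-trading variant (in TypeScript; the names and the 200 ms window below are made up for illustration): fatal shots that arrive within the trade window both count, while a shot from a player who was already dead for longer than the window is dropped. A window of zero reproduces the first-packet-wins behaviour.

    // Hypothetical kill-trading sketch; names and the 200 ms window are illustrative.
    interface FatalShot {
      shooterId: string;
      victimId: string;
      serverTimeMs: number; // when the server processed the shot
    }

    const TRADE_WINDOW_MS = 200; // 0 would reproduce "first packet wins"

    function resolveKills(shots: FatalShot[]): FatalShot[] {
      const ordered = [...shots].sort((a, b) => a.serverTimeMs - b.serverTimeMs);
      const awarded: FatalShot[] = [];
      for (const shot of ordered) {
        // Drop the shot only if the shooter died more than the window before firing.
        const shooterLongDead = awarded.some(
          (k) =>
            k.victimId === shot.shooterId &&
            shot.serverTimeMs - k.serverTimeMs > TRADE_WINDOW_MS,
        );
        if (!shooterLongDead) awarded.push(shot); // within the window, the kill trades
      }
      return awarded;
    }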

A tolerance is just an amount of incorrectness that the designer of the system can accept.

When it comes to CRUD apps using read-replicas, so long as the designer of the system is aware of and accepts the consistency errors that will sometimes occur, does that make that system correct?


There's a difference between:

- the system compensating for the network being fallible

- the system not fulfilling its design goals

- the system not being specified well enough to test if the design goals were fulfilled


Another way to think about the price is that it's slightly less than we spend per day on the NDIS (~126 million).


Cleaning up algae doesn’t buy votes


The most useful LLM "extension" isn't even mentioned in this article: shell use.

An LLM with a shell integration can do anything you need it to.
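
To make that concrete, here's a minimal sketch of a shell tool in TypeScript; I'm assuming an OpenAI-style tool-calling loop, and in practice you'd sandbox the command and cap its output:

    // Minimal sketch of a shell "tool" an LLM can call (OpenAI-style function
    // calling is assumed; the tool name and schema are placeholders).
    import { execSync } from "node:child_process";

    // Tool description advertised to the model.
    const shellTool = {
      type: "function" as const,
      function: {
        name: "run_shell",
        description: "Run a shell command and return its output",
        parameters: {
          type: "object",
          properties: { command: { type: "string" } },
          required: ["command"],
        },
      },
    };

    // Handler invoked when the model requests a run_shell call.
    function runShell(command: string): string {
      try {
        return execSync(command, { encoding: "utf8", timeout: 30_000 });
      } catch (err: any) {
        return `command failed: ${err.message}`;
      }
    }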


A man with a spoon can dig a swimming pool, but I'd prefer a backhoe


sudo apt-get install backhoe


mise use -g backhoe


The impulse console command originates from Quake; the Half-Life 1 engine (GoldSrc [0]) was based on the Quake engine, and the Half-Life 2 engine (Source) was based on GoldSrc.

In Quake, the impulse commands were mostly used to switch weapons [1]. I'm not really sure about the naming though: why choose the word "impulse"?

[0]: https://en.wikipedia.org/wiki/GoldSrc

[1]: https://github.com/id-Software/Quake/blob/0023db327bc1db0006...


Noclip FTW


