
Is updating 2000 boxes per second really considered an "impossible" amount to update? That seems like a shockingly small number.

That's not what's happening in the demo. It's effectively updating 2000 virtual DOM nodes at 60 frames a second by scheduling updates so as many things are updated in each 16ms frame as possible. It'll scale with the device. If you have a beast of a computer it might update all 2000 every frame. If you're on a $100 smart phone it'll schedule the updates across several frames.
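Not React's actual scheduler, but a minimal sketch of that idea, assuming a plain work queue and requestAnimationFrame: do as much queued work as fits in a rough frame budget, then yield so the browser can paint and pick up the rest next frame.

    // Minimal time-slicing sketch (illustrative only, not React's scheduler):
    // drain a work queue each frame, but stop once the frame budget is spent
    // and continue on the next requestAnimationFrame.
    type WorkItem = () => void;

    const queue: WorkItem[] = [];
    const FRAME_BUDGET_MS = 12; // leave headroom in the ~16ms frame for painting

    function frame(frameStart: number) {
      while (queue.length > 0 && performance.now() - frameStart < FRAME_BUDGET_MS) {
        queue.shift()!(); // e.g. update one box's transform
      }
      if (queue.length > 0) {
        requestAnimationFrame(frame); // remaining work spills into the next frame
      }
    }

    function scheduleUpdates(items: WorkItem[]) {
      queue.push(...items); // no de-duplication of the rAF loop; purely a sketch
      requestAnimationFrame(frame);
    }

On a fast machine the whole queue drains in one frame; on a slow one it spreads across several, which is the scaling behaviour described above.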

In the demo each node is a three.js box geometry - react-three-fiber uses react's virtual DOM reconciler to update state on three.js things. That doesn't have to be the case though. The nodes could be HTML elements or SVG things or any other browser renderable item. React doesn't care.

What the demo really shows is that concurrent mode React moves the bottleneck out of the framework and back to the browser - how fast the UI can be updated will be down to the browser instead of what the JS framework can do. That's a really big deal. It'll make writing performant UIs a lot easier which is good for everyone.



I believe the question is that if I generate 2000 updates a second, but it's applying those 2000 updates over 4 seconds (applying 500 updates a second to maintain 60fps), then

At second 1 I have 2000 updates remaining (+2000 new)

At second 2 I have 3500 updates remaining (+2000 new, -500 processed)

At second 3 I have 5000 updates remaining (+2000 new, -500 processed)

That is, my backlog will grow indefinitely unless it's dropping updates.
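As a toy simulation of that arithmetic (numbers taken from the comment above, nothing more):

    // Producing 2000 updates/s while applying only 500/s: the backlog grows
    // by 1500 every second after the first, matching the numbers above.
    let backlog = 0;
    for (let second = 1; second <= 3; second++) {
      backlog += 2000;                // new updates generated this second
      if (second > 1) backlog -= 500; // 500 applied per second after the first
      console.log(`second ${second}: ${backlog} updates remaining`);
    }
    // second 1: 2000, second 2: 3500, second 3: 5000 -- unbounded growth
    // unless stale updates are dropped or coalesced.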


React's concurrent mode isn't a magic fix for apps that literally want to do more than the computer can cope with. It's a way of distributing changes across frames so the browser isn't doing a huge update in one frame and then idling in the next 5. That's very useful for 99.9% of web UIs right now. If your app is in that 0.1% then you'll still have some perf work to do yourself.


You *cannot* generate 2000 updates a second if your computer cannot handle them.

Javascript runs a "main loop" that just executes "tasks". Tasks are chunks of synchronous code, and are not preemptible.

Let's say you have a "generator" task that generates 2_000 "update" tasks. The way this works is (see the sketch after this list):

1. "generator" schedules 2_000 "update" tasks to run as soon as possible, and schedules another "generator" task for 1s from now.

2. The browser starts running "update" tasks one after the other as fast as it can.

3a. If the "update" tasks are all done 1s after the previous "generator" task, then the browser will run "generator" again.

3b. If they are not done, the browser will continue running "update" tasks and only invoke "generator" again _after_ it has finished with all the "update" tasks, be it 1 or 100 seconds after the previous "generator" task.
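A minimal sketch of that flow, using setTimeout as a stand-in for however the real code schedules its tasks (the names are made up for illustration):

    // "generator" queues 2_000 "update" tasks plus the next generator run.
    // Tasks are never preempted: if the updates take longer than 1s in total,
    // the next generator simply runs late, after the queue has drained.
    function update(i: number) {
      // some synchronous chunk of work for node i
    }

    function generator() {
      for (let i = 0; i < 2_000; i++) {
        setTimeout(() => update(i), 0); // "as soon as possible"
      }
      setTimeout(generator, 1_000);     // next generator ~1s from now
    }

    generator();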


You can. This is what the demo does. The scheduler takes that amount and schedules it, which is the point. Every game engine does that (for instance, frustum culling). Games face an impossible amount of data, and schedule it. This is also true for native dev, where you have priority dispatchers and threads. What React does is exciting because it schedules at the very root.


Games don't face an impossible amount of data. It might look like an impossible amount of data, but they play all sorts of cheats. If you have a crowd of 8k people, there might be 50 animations being computed and shared among the rest of the skeletons.

All members of the staff -- environment artists, character artists, level designers, animators, FX artists -- are very technical and have the power and wisdom to use the framerate wisely.

I'm not even sure why you're bringing up frustum culling -- you're suggesting that we have too many objects to run the cull math on, so we schedule across frames? But we don't; that results in visual popping. If culling is a bottleneck, we usually solve it with broad-phase data structures like octrees, or ask the artists to condense multiple separate models into one so we have fewer objects to manage (another big cheat, artist labor).


Frustum culling is completely unrelated to scheduling. It’s just a technique for avoiding spending processing or rendering time on things that can’t be seen by the camera. It existed before games were multithreaded, during the time when they typically had a single main loop.

Games are not using complex scheduling and multithreading to achieve acceptable performance updating 2000 entities. You can achieve that easily on a single thread by laying your data out to take advantage of locality of reference and the CPU cache.
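Not from any particular engine, but a sketch of the kind of layout that comment is pointing at: flat typed arrays (structure-of-arrays) so the update loop walks memory linearly and stays cache-friendly.

    // Structure-of-arrays layout: positions and velocities live in contiguous
    // Float32Arrays, so updating 2000 entities is a tight, cache-friendly loop.
    const COUNT = 2000;
    const posX = new Float32Array(COUNT);
    const posY = new Float32Array(COUNT);
    const velX = new Float32Array(COUNT);
    const velY = new Float32Array(COUNT);

    function updateAll(dt: number) {
      for (let i = 0; i < COUNT; i++) {
        posX[i] += velX[i] * dt;
        posY[i] += velY[i] * dt;
      }
    }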


I put cannot in italics because yes, you can schedule over the processing capacity limits. However, the CPU acts as a rate limiter in and of itself. If your CPU cannot process 2_000 updates per second, you will not be doing more than that, period. You can schedule 4_000 updates, but they will take 2s to process, no way around it. If you keep scheduling over the app's processing capabilities, you will just run out of memory to hold the queue. Nobody is interested in doing that.

What games do is fundamentally different from what React is doing here. In games there are many places where you can trade off quality for speed (for instance, frustum culling). You can decide to compute less to fit within your computation time slice. Likewise, physics can take into account the update rate to adjust to the CPU/GPU's processing capability (i.e. you can use a delta_t in your computations to adjust for frame differences).
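For the delta_t point, a minimal frame-rate-independent update loop (the constants are made up for the example, not taken from any engine):

    // Scaling each step by the elapsed time since the last frame means the
    // same code produces the same motion at 30fps or 144fps.
    let last = performance.now();
    let position = 0;
    const SPEED = 100; // units per second (illustrative value)

    function tick(now: number) {
      const dt = (now - last) / 1000; // seconds since the previous frame
      last = now;
      position += SPEED * dt;         // movement proportional to elapsed time
      requestAnimationFrame(tick);
    }

    requestAnimationFrame(tick);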

In a website there are some such opportunities (you could do it with animations, for instance), but you cannot do it in general because most apps' logic would not tolerate it ("yeah, sorry, I dropped your request callback, bad luck"). For this reason I would be very surprised if React went that route.


No game engine does that, and games don't face an "impossible" amount of data. A game designer may choose to spread work over several frames, but that is a deliberate decision made by the developer.

And frustum culling has nothing to do with this, nor does it reduce the amount of updates.


If you have 2000 nodes and you generate 2000 updates a second, you will never have more than 2000 things to update per frame, as you only have 2000 nodes. Thus even at second 2 or 3, you will still only have 2000 things to update, as the older updates are now outdated and have been replaced by something more recent. If you update only 500 of them per second, that means that on average it will take 4 seconds before you see the most up-to-date information for a node.
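A sketch of that coalescing (illustrative, not how React actually stores updates): keying pending updates by node id means a newer update overwrites the stale one, so the queue can never hold more entries than there are nodes.

    type NodeId = number;
    type Update = { color: string }; // made-up payload for the example

    const pending = new Map<NodeId, Update>();

    function enqueue(id: NodeId, update: Update) {
      pending.set(id, update); // replaces any older, now-outdated update for this node
    }

    function flush(limit: number) {
      let applied = 0;
      for (const [id, update] of pending) {
        if (applied++ >= limit) break; // apply at most `limit` updates this pass
        pending.delete(id);
        applyToNode(id, update);
      }
    }

    function applyToNode(id: NodeId, update: Update) {
      // stand-in for whatever actually mutates the rendered node
    }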


> If you have a beast of a computer it might update all 2000 every frame.

The point is that you shouldn't need a beast of a computer to update 2,000 boxes. That kind of load is well within what I'd expect any recent CPU to handle.



