
> I have no idea why, but tree models scare the shit out of college educated developers.

Very few people are "scared" of tree models.

The problem of working with the DOM is that it's:

- A verbose, unwieldy, 90s-Java-style API that requires tons of boilerplate to do the simplest things

- Extremely anemic API that is neither low-level enough to let you do your own stuff easily, nor high-level enough to just create what you need out of existing building blocks

- An API that is completely non-composable

- A rendering system that is actively getting in the way of doing things, and where you have to be acutely aware of all the hundreds of pitfalls and corner cases when you so much as change an element border (which may trigger a full re-layout of the entire page)

- A rendering system which is extremely non-performant for anything more complex than a static web page (and it barely manages to do even that). Any "amazing feats of performance" that people may demonstrate are either very carefully coded, use the exact same techniques as other toolkits (e.g. canvas or webgl), or are absolute table stakes for anything else under the sun. I mean, a front-page article last week showed how it needed 60% CPU and 25% GPU to animate three rectangles: https://www.granola.ai/blog/dont-animate-height
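For illustration, the difference the linked article is about can be sketched with the Web Animations API. This is a minimal sketch, not the article's code; `collapse` is a hypothetical helper, and the keyframes are shown as plain data:

```javascript
// Animating `height` invalidates layout on every frame: reflow + repaint.
const layoutBound = [
  { height: '200px' },
  { height: '0px' },
];

// Animating `transform` stays on the compositor: no reflow, no repaint.
const compositorOnly = [
  { transform: 'scaleY(1)' },
  { transform: 'scaleY(0)' },
];

// Hypothetical helper: collapse a panel using compositor-only keyframes.
function collapse(el) {
  // Element.animate() is the standard Web Animations API entry point.
  return el.animate(compositorOnly, { duration: 200, easing: 'ease-out' });
}
```

Same visual result, wildly different cost, and nothing in the API warns you which one you picked.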

> So you get people investing their entire careers trying to hide from the DOM with unnecessary abstractions

The abstractions of the past 15 or so years have been trying to hide from the DOM only because the DOM is both extremely non-performant and has an API even a mother wouldn't love.



This is exactly what I am talking about. All these excuses, especially about vanity, are masking behaviors.

DOM access is not quite as fast now as it was 10 years ago. In Firefox I was getting just under a billion operations per second when perf testing on hardware with slow DDR3 memory. People with more modern hardware were getting closer to 5 billion ops/second. That isn’t slow.

Chrome has always been much slower. Back then I was getting closer to a max of 50 million ops/second perf testing the DOM. Now Chrome is about half that fast, but their string interpolation of query strings is about 10x faster.

The only real performance problem is the JS developer doing stupid shit.


"In Firefox I was getting just under a billion operations per second when perf testing on hardware with slow DDR3 memory."

When you profile something and you get "a billion per second" what you've got there is a loop where the body has been entirely optimized away. Presumably the JIT noticed you were doing nothing and making no changes and optimized it away. I don't think there's a single real DOM operation you can do in an amortized 3-ish CPU cycles per operation (2015 3GHz-ish CPU, but even at 5GHz it wouldn't matter).

That's not a real performance number and you won't be seeing any real DOM operations being done at a billion per second anytime soon.


Or, more likely, it’s a traversal of a data structure already in memory. If so, then it is a very real operation executing as fast as reported, at near memory speed.


You can't even "traverse" a data structure at that speed. We're talking low-single-digit cycles here. A compiled for loop tends to require two cycles just to loop (you need to increment, compare, and jump back but the CPU can speculate the loop will in fact jump back so the three operations can take two cycles), and I don't know what the minimum for a JS-engine JIT'd loop is but needing 4 or 5 wouldn't stun me. That's literally the "compiled loop do-nothing speed".

I mention this and underline it because it's really an honorary "latency number every programmer should know". I've encountered this multiple times online, and now twice at work: someone thought they were benchmarking something, but didn't notice that 0.6 nanoseconds per iteration comes out to about 2-3 cycles (depending on CPU speed), and there's no way the thing they thought they were benchmarking could be done that quickly. It's a thing worth knowing.
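A minimal sketch of that failure mode in JavaScript (function names and workloads are illustrative, not from the original benchmark):

```javascript
// A "benchmark" whose loop body is never observed can be optimized away
// by the JIT, so you end up timing only the empty loop itself.
function naiveBench(iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    Math.sqrt(i); // result unused: the engine may delete this entirely
  }
  const seconds = (Date.now() - start) / 1000;
  return iterations / seconds; // can report absurd "ops/sec"
}

// Consuming and returning the result keeps the work observable, so the
// JIT cannot eliminate it.
function honestBench(iterations) {
  const start = Date.now();
  let sink = 0;
  for (let i = 0; i < iterations; i++) {
    sink += Math.sqrt(i);
  }
  const seconds = (Date.now() - start) / 1000;
  return { opsPerSec: iterations / seconds, sink };
}
```

If the two versions report numbers that differ by an order of magnitude or more, the naive one was almost certainly measuring an empty loop.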


I have experienced numerous conversations about performance with people who invent theories while never actually measuring things. Typically this comes from students.

That is not correct in practice. Some operations are primarily CPU bound, some are primarily GPU bound, and some are primarily memory bound. You can absolutely see the difference when benchmarking things on different hardware. It provides real insight into how these operations actually work.

When you conduct your own measurements we can have an adult conversation about this later. Until then it’s all just wild guessing from your imagination.


Sure, go ahead and show me your "data structure traversal", in Javascript (JIT'd is fine, since clearly non-JIT is just out of the question), that works in 3-5 cycles.

The whole topic of this conversation is a measurement in which it was claimed that "a billion DOM operations per second" were being done in 2015. That's a concrete number. Show me the actual DOM operation that can be done a billion times per second, in 2015.

The burden of proof here is on you, not me. I'm making the perfectly sensible claim that all you can do in a low-single-digit number of cycles is run a loop. I have, in fact, shown in other languages down at the assembler level that things that claim to be running in .6ns are in fact just empty loops, so I'm as satisfied on that front as I need to be. It's not exactly hard to see that when you look at the assembler. You don't even need to know assembler to know that you aren't doing any real work with just 3 or 4 opcodes.

I don't know how you expect to just Measure Harder and get a billion operations per second done on any DOM structure but I expect you're going to be disappointed. Heck, I won't even make you find a 2015 machine. Show it to me in 2025, that's fine. Raw GHz haven't improved much and pipelining won't be the difference.
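The arithmetic behind that, as a sanity check (the clock speed is an assumed 2015-typical value, not a measured one):

```javascript
// One billion operations per second on a ~3 GHz core leaves ~3 cycles
// per operation, which is roughly the cost of an empty JIT'd loop
// iteration: no budget left for any actual DOM work.
const clockHz = 3e9;          // assumed ~3 GHz desktop CPU
const claimedOpsPerSec = 1e9; // "a billion operations per second"
const cyclesPerOp = clockHz / claimedOpsPerSec;
console.log(cyclesPerOp);     // 3
```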


I am not preventing you from measuring things. The burden of proof is equally on everybody concerned with speed.

https://jsbench.github.io/#b39045cacae8d8c4a3ec044e538533dc

I cannot go back in time to 2015 conditions, but you can run the tests yourself and get your own numbers. Try running that in different browsers and on different hardware. Another interesting experiment is comparing HTTP round-trip speed and WebSocket send versus receive speed on different hardware.

This is interesting because many assumptions are immediately destroyed once the numbers come in. Many of these operations can execute dramatically faster on hardware with slower CPUs so long as the bus and memory speeds are greater.

What's also interesting is that many developers cannot measure things. It seems as if the very idea of independently measuring things is repulsive. Many developers just expect people to hand them numbers and then don't know what to do with them, especially when the numbers challenge their assumptions (cognitive conservatism).


> All these excuses, especially about vanity, are masking behaviors.

1. These are not excuses, these are facts of life

2. No idea where you got vanity from

> DOM access is not quite as fast now as it was 10 years ago. I was getting just under a billion operations per second

Who said anything about DOM access?

> The only real performance problem is the JS developer doing stupid shit.

Ah yes. I didn't know that "animating a simple rectangle requires 60% CPU" is "developers doing stupid shit" and not DOM being slow because you could do meaningless "DOM access" billions time a second.

Please re-read what I wrote and make a good faith attempt to understand it. Overcome your bias and foregone conclusions.


> when you so much as change an element border (which may trigger a full re-layout of the entire page)

This is easily avoided: use 'outline' instead of 'border', or just keep the border width fixed and change the border color to/from transparent.
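A sketch of both workarounds (hypothetical helper names; assumes a DOM element):

```javascript
// `outline` is drawn outside the box model, so toggling it never changes
// layout; only paint is affected.
function highlightWithOutline(el) {
  el.style.outline = '2px solid red';
}

// Alternative: keep the border width constant (reserving the space up
// front) and only swap the color, which is also a paint-only change.
function highlightWithBorderColor(el) {
  el.style.border = '2px solid transparent'; // ideally set once in CSS
  el.style.borderColor = 'red';
}
```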


> This is easily avoided: use 'outline' instead of 'border'

Yup. And DOM is full of footguns like this. "Oh, you can't do this primitive thing no UI kit has a problem with, you have to use this workaround".


Well, no. First of all, visual rendering is an aside, not directly relevant to the DOM, which is a data structure.

Secondly, there are two kinds of visual operations that can result from a DOM change: a screen repaint or a local change. A repaint occurs when an element's geometry changes: its size, shape, or location. Everything else is essentially a color change and is very fast regardless of the DOM size.

Even so, modern visual rendering is pretty fast. I have a personal project that is an OS-style GUI, including a file system display, written in TypeScript and displayed in a browser. It took close to 10,000 visible DOM elements on the page at once before drag and drop became slow and laggy. I wouldn't build an AAA 3D game engine in it, but that's also not the intended use case.


> there are two kinds of visual operations that can occur from a DOM change: a screen repaint or a local change.

There's:

- reflow (most expensive)

- repaint (second most expensive)

- compositing (least expensive)

The footguns in HTML and CSS are randomly scattered all over the place.

Oh? You changed a single value that wouldn't even register on any performance monitoring tool anywhere else? Oops, the browser now has to reflow the page, repaint it, and re-composite, leading to noticeable CPU and GPU spikes.

Or maybe it won't, you'll never know. Because none of these are documented anywhere and randomly change as browser internals change.

E.g. why would focusing an element reflow the entire document? https://gist.github.com/paulirish/5d52fb081b3570c81e3a Who knows.
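A related, well-documented variant of the same trap is forcing synchronous layout by interleaving style writes with layout reads. A minimal sketch (illustrative function names, assumes DOM elements):

```javascript
// Interleaved write/read: each offsetHeight read forces the browser to
// reflow immediately, once per element ("layout thrashing").
function thrash(items) {
  const heights = [];
  for (const el of items) {
    el.style.width = '100px';      // write: invalidates layout
    heights.push(el.offsetHeight); // read: forces a synchronous reflow
  }
  return heights;
}

// Batched: all reads first (one layout), then all writes (one layout on
// the next frame).
function batched(items) {
  const heights = items.map(el => el.offsetHeight);
  items.forEach(el => { el.style.width = '100px'; });
  return heights;
}
```

Both produce the same result; only the second avoids a reflow per element, and nothing in the API tells you that.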

Again, a recent article on HN's front page dealt with 60% CPU and 25% GPU while doing the most primitive of all animations on just three rectangles.

> I wouldn’t build a AAA 3D game engine in that, but it’s also not the intended use case.

Yes. Because the use case is displaying static pages.



