
Uggghhh, the article states correct facts about the DOM but draws grossly incorrect conclusions. Most developers have always feared working with the DOM. This irrationality is not new. I have no idea why, but tree models scare the shit out of college-educated developers. That's supremely weird, because computer science education spends so much energy on data structures and tree models.

It also makes the conversation about WASM even more bizarre. Most college-educated developers are scared of the DOM. Yes, fear, the emotion, and it's completely irrational. Trust me on this; I watched it happen for over 15 years as a full-time JS dev. Developers are continuously trying to hide from the thing with layers of unnecessary abstraction, and often don't know why, because they have invested so much energy in masking their irrational nonsense.

Other developers who have not embraced this nightmare of emotions simply wish WASM would replace JS so they don't have to touch any of this. That is problematic: you don't need JS or the DOM at all to deploy WASM, but what you get is a sandbox that ignores the containing web page, which is absolutely not a replacement. For WASM to become a replacement it would have to gain full DOM access to the containing page, and browser makers have refused to do that for clear security reasons.

So you get people investing their entire careers in hiding from the DOM behind unnecessary abstractions, and then other developers who want to bypass the nonsense by embracing a thing they don't yet know they are afraid of.

That is super fucking weird, but it makes for fun stories for nondevelopers who wonder why software is the way it is.



> I have no idea why, but tree models scare the shit out of college educated developers.

Very few people are "scared" of tree models.

The problem of working with the DOM is that it's:

- A verbose, unwieldy, 90s-Java-style API that requires tons of boilerplate to do the simplest things (see the sketch after this list)

- An extremely anemic API that is neither low-level enough to let you do your own stuff easily, nor high-level enough to just create what you need out of existing building blocks

- An API that is completely non-composable

- A rendering system that is actively getting in the way of doing things, and where you have to be acutely aware of all the hundreds of pitfalls and corner cases when you so much as change an element border (which may trigger a full re-layout of the entire page)

- A rendering system which is extremely non-performant for anything more complex than a static web page (and it barely manages even that). Any "amazing feats of performance" that people may demonstrate are either very carefully coded, use the exact same techniques as other toolkits (e.g. canvas or WebGL), or are absolute table stakes for anything else under the sun. I mean, a frontpage article last week was about how it took 60% CPU and 25% GPU to animate three rectangles: https://www.granola.ai/blog/dont-animate-height
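
To make the boilerplate complaint concrete, here is roughly what the raw DOM API asks of you for a trivial three-item menu (the class name and labels are made up for illustration):

    // Building a three-item menu with nothing but the raw DOM API.
    const ul = document.createElement("ul");
    ul.className = "menu";
    for (const label of ["Home", "About", "Contact"]) {
      const li = document.createElement("li");
      const a = document.createElement("a");
      a.href = "#" + label.toLowerCase();
      a.textContent = label;
      li.appendChild(a);
      ul.appendChild(li);
    }
    document.body.appendChild(ul);

The equivalent markup is a handful of lines of HTML; a large part of what frameworks and templating libraries sell is papering over exactly this gap.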

> So you get people investing their entire careers trying to hide from the DOM with unnecessary abstractions

The abstractions of the past 15 or so years have been trying to hide from the DOM only because the DOM is both extremely non-performant and has an API even a mother wouldn't love.


This is exactly what I am talking about. All these excuses, especially about vanity, are masking behaviors.

DOM access is not quite as fast now as it was 10 years ago. In Firefox I was getting just under a billion operations per second when perf testing on hardware with slow DDR3 memory. People with more modern hardware were getting closer to 5 billion ops/second. That isn’t slow.

Chrome has always been much slower. Back then I was getting closer to a max of 50 million ops/second perf testing the DOM. Now Chrome is about half that fast, but their string interpolation of query strings is about 10x faster.

The only real performance problem is the JS developer doing stupid shit.


"In Firefox I was getting just under a billion operations per second when perf testing on hardware with slow DDR3 memory."

When you profile something and you get "a billion per second" what you've got there is a loop where the body has been entirely optimized away. Presumably the JIT noticed you were doing nothing and making no changes and optimized it away. I don't think there's a single real DOM operation you can do in an amortized 3-ish CPU cycles per operation (2015 3GHz-ish CPU, but even at 5GHz it wouldn't matter).

That's not a real performance number and you won't be seeing any real DOM operations being done at a billion per second anytime soon.
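
To illustrate the pitfall (not anyone's actual benchmark; the element id is made up): whether the dead read below survives is entirely up to the JIT, which is exactly why a "billion per second" number is suspect.

    const el = document.getElementById("target"); // hypothetical element

    // Naive version: the result of the read is never used, so the engine
    // is free to hoist or eliminate it, and you end up timing an empty loop.
    let t0 = performance.now();
    for (let i = 0; i < 1e7; i++) {
      el.id; // dead read
    }
    let t1 = performance.now();

    // Keeping a data dependency on the result makes elimination much harder
    // and usually produces a very different, more honest number.
    let sink = 0;
    let t2 = performance.now();
    for (let i = 0; i < 1e7; i++) {
      sink += el.id.length;
    }
    let t3 = performance.now();

    console.log("dead loop:", t1 - t0, "ms; live loop:", t3 - t2, "ms", sink);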


Or, more likely, it’s a traversal of a data structure already in memory. If so, then it is a very real operation executing as fast as reported, at near memory speed.


You can't even "traverse" a data structure at that speed. We're talking low-single-digit cycles here. A compiled for loop tends to require two cycles just to loop (you need to increment, compare, and jump back but the CPU can speculate the loop will in fact jump back so the three operations can take two cycles), and I don't know what the minimum for a JS-engine JIT'd loop is but needing 4 or 5 wouldn't stun me. That's literally the "compiled loop do-nothing speed".

I mention this and underline it because this is really an honorary "latency value every programmer should know". I've encountered this multiple times online, where someone thought they were benchmarking something but didn't stop to notice that .6 nanoseconds per iteration comes out to about 2-3 cycles (depending on CPU speed), and there's no way what they thought they were benchmarking could be done that quickly. I've now encountered it twice at work. It's a thing worth knowing.
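
The arithmetic is short enough to check inline (the clock speeds are the ones assumed above):

    // cycles available per operation = clock rate / operations per second
    const cycles2015 = 3e9 / 1e9; // 3 GHz, "a billion ops/sec" => 3 cycles each
    const cycles5ghz = 5e9 / 1e9; // even at 5 GHz, only 5 cycles each
    console.log(cycles2015, cycles5ghz); // enough for a bare loop, not a DOM call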


I have experienced numerous conversations about performance with people who invent theories while never actually measuring things. Typically this comes from students.

That is not correct in practice. Some operations are primarily CPU bound, some are primarily GPU bound, and some are primarily memory bound. You can absolutely see the difference when benchmarking things on different hardware. It provides real insight into how these operations actually work.

When you conduct your own measurements we can have an adult conversation about this later. Until then it’s all just wild guessing from your imagination.


Sure, go ahead and show me your "data structure traversal", in Javascript (JIT'd is fine, since clearly non-JIT is just out of the question), that works in 3-5 cycles.

The whole topic of this conversation is a measurement in which it was claimed that "a billion DOM operations per second" were being done in 2015. That's a concrete number. Show me the actual DOM operation that can be done a billion times per second, in 2015.

The burden of proof here is on you, not me. I'm making the perfectly sensible claim that all you can do in a low-single-digit number of cycles is run a loop. I have, in fact, shown in other languages down at the assembler level that things that claim to be running in .6ns are in fact just empty loops, so I'm as satisfied on that front as I need to be. It's not exactly hard to see that when you look at the assembler. You don't even need to know assembler to know that you aren't doing any real work with just 3 or 4 opcodes.

I don't know how you expect to just Measure Harder and get a billion operations per second done on any DOM structure but I expect you're going to be disappointed. Heck, I won't even make you find a 2015 machine. Show it to me in 2025, that's fine. Raw GHz haven't improved much and pipelining won't be the difference.


I am not preventing you from measuring things. The burden of proof is equally on everybody concerned with speed.

https://jsbench.github.io/#b39045cacae8d8c4a3ec044e538533dc

I cannot go back in time to 2015 conditions, but you can run the tests yourself and get your own numbers. Try running that in different browsers and on different hardware. Another interesting thing to experiment with is HTTP round-trip speed and WebSocket send versus receive speed on different hardware.

This is interesting because many assumptions are immediately destroyed once the numbers come in. Many of these operations can execute dramatically faster on hardware with slower CPUs so long as the bus and memory speeds are greater.

What's also interesting is that many developers cannot measure things. It seems as if the very idea of independently measuring things is repulsive. Many developers just expect people to hand them numbers and then don't know what to do with them, especially if the numbers challenge their assumptions; call it cognitive conservatism.


> All these excuses, especially about vanity, are masking behaviors.

1. These are not excuses, these are facts of life

2. No idea where you got vanity from

> DOM access is not quite as fast now as it was 10 years ago. I was getting just under a billion operations per second

Who said anything about DOM access?

> The only real performance problem is the JS developer doing stupid shit.

Ah yes. I didn't know that "animating a simple rectangle requires 60% CPU" is "developers doing stupid shit" and not the DOM being slow, just because you can do a meaningless "DOM access" billions of times a second.

Please re-read what I wrote and make a good faith attempt to understand it. Overcome your bias and foregone conclusions.


> when you so much as change an element border (which may trigger a full re-layout of the entire page)

This is easily avoided: use 'outline' instead of 'border', or just keep the border width fixed and change the border color to/from transparent.
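
A rough sketch of both workarounds applied from JS (the ".card" selector is made up; you would normally pick one of the two):

    const card = document.querySelector(".card");

    // Workaround 1: outline is drawn outside the box model, so toggling it
    // does not change the element's geometry and cannot force a reflow.
    card.addEventListener("mouseenter", () => { card.style.outline = "2px solid steelblue"; });
    card.addEventListener("mouseleave", () => { card.style.outline = ""; });

    // Workaround 2: reserve the border width up front and only change its
    // color, which is a paint-level change rather than a layout-level one.
    card.style.border = "2px solid transparent";
    card.addEventListener("focusin", () => { card.style.borderColor = "steelblue"; });
    card.addEventListener("focusout", () => { card.style.borderColor = "transparent"; });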


> This is easily avoided: use 'outline' instead of 'border'

Yup. And DOM is full of footguns like this. "Oh, you can't do this primitive thing no UI kit has a problem with, you have to use this workaround".


Well, no. First of all, visual rendering is an aside, not directly relevant to the DOM, which is a data structure.

Secondly, there are two kinds of visual operations that can occur from a DOM change: a screen repaint or a local change. A repaint occurs when the configuration of an element is modified relative to its dimensions: its size, shape, or location. Everything else is really just a color change and is very fast regardless of the DOM size.

Even so, modern visual rendering is pretty fast. I have a personal project that is an OS GUI, including a file system display, written in TypeScript and displayed in a browser. It took close to 10,000 DOM elements visible on the page at once for drag and drop to become slow and laggy. I wouldn't build an AAA 3D game engine in that, but that's also not the intended use case.


> there are two kinds of visual operations that can occur from a DOM change: a screen repaint or a local change.

There's:

- reflow (most expensive)

- repaint (second most expensive)

- composition (least expensive)

The footguns in HTML and CSS are randomly scattered all over the place.

Oh? You changed a single value that wouldn't even register on any performance monitoring tool anywhere else? Oops, the browser now has to reflow the page, re-paint it, and re-compose it, leading to noticeable CPU and GPU spikes.

Or maybe it won't, you'll never know. Because none of these are documented anywhere and randomly change as browser internals change.

E.g. why would focusing an element reflow the entire document? https://gist.github.com/paulirish/5d52fb081b3570c81e3a Who knows.

Again, a recent article on HN's front page dealt with 60% CPU and 25% GPU while doing the most primitive of all animations on just three rectangles.
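
For what it's worth, the cheap and expensive versions of that kind of animation look almost identical in code, which is part of the trap. A sketch using the Web Animations API on a hypothetical .panel element (with the usual caveat that scaling distorts the contents, which is its own trade-off):

    const panel = document.querySelector(".panel");

    // Expensive: height is a layout property, so every frame forces
    // reflow and repaint.
    panel.animate(
      [{ height: "0px" }, { height: "300px" }],
      { duration: 300, easing: "ease-out" }
    );

    // Cheaper: transform can usually stay on the compositor; the element's
    // layout box never changes, only how it is drawn.
    panel.style.transformOrigin = "top";
    panel.animate(
      [{ transform: "scaleY(0)" }, { transform: "scaleY(1)" }],
      { duration: 300, easing: "ease-out" }
    );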

> I wouldn’t build a AAA 3D game engine in that, but it’s also not the intended use case.

Yes. Because the use case is displaying static pages.


I closed my web dev business just three years ago. I found that many people who work with the web don't want to do the work to understand how it all works. They think there must be a library somewhere to do "that", when doing "that" is simple enough using standard components and features.

Another issue is people basing their fears on how things were in the past. Yes, it used to be harder to do fancy things on the web, and often they were trying to push the web to do things it just couldn't do back then. Now you can, using basic built-in functionality, and it's often easier that way.


I found the same at my last two tech jobs (S&P 100 and 500). People hate getting their hands dirty with fundamental web dev.

My favorite part of web dev is working directly with the DOM, vanilla JS, and using minimal dependencies (if any).

I feel like the web equivalent of an assembly programmer these days, but apparently nobody is interested in hiring for this sort of thing anymore.


The reason WASM does not have DOM access is that many recent DOM APIs return and expect JavaScript objects and classes like iterators, so you would still need some thin JS glue wrapper between the DOM and WASM. Security has nothing to do with it, as (performance aside) WASM plus minimal JS glue can already do anything JS can do.


> For WASM to become a replacement it would have to gain full DOM access to the containing page.

To become a total replacement, as in no-JavaScript-at-all-needed, sure, WASM would need to be able to access the DOM. But to replace JavaScript as the language you're writing, you can easily generate DOM bindings so you trampoline via JavaScript, and people have been doing this for as long as WASM has been around.

Giving WASM direct DOM access does not enable anything new: it merely lets you slim down your JS bindings and potentially improve time or memory performance.
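
A minimal sketch of that trampoline, with every name (app.wasm, env.set_text, exports.run) made up; this is the shape of the glue that toolchains generate for you:

    // The Wasm module never touches the DOM; it calls an imported JS
    // function that does the DOM work on its behalf.
    let memory; // filled in after instantiation

    const imports = {
      env: {
        // Decode two strings from the module's linear memory, then do the
        // actual DOM operation on the JS side.
        set_text(idPtr, idLen, textPtr, textLen) {
          const bytes = new Uint8Array(memory.buffer);
          const dec = new TextDecoder();
          const id = dec.decode(bytes.subarray(idPtr, idPtr + idLen));
          const text = dec.decode(bytes.subarray(textPtr, textPtr + textLen));
          document.getElementById(id).textContent = text;
        },
      },
    };

    const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), imports);
    memory = instance.exports.memory;
    instance.exports.run(); // the Wasm side calls env.set_text as needed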

> Browser makers have refused to do that for clear security reasons.

Actually, they’re a long way down the path of doing it. They’ve just been taking their time to make sure it’s done right—they’ve headed in at least three different directions so far. But it’s been clear from just about the start that it was an eventual goal.


> Giving WASM direct DOM access does not enable anything new: it merely lets you slim down your JS bindings and potentially improve time or memory performance.

What is the point of WASM if it introduces substantially increased overhead instead of reducing it? If you cannot talk to the DOM without a full round trip, then you should just cross-compile to JavaScript.


The point of WASM is a universal compile target that executes in a sandbox. That is all.

The idea is that any application can be compiled to WASM and delivered via a web page instead of needing to be installed on an OS desktop. Developers see something radically different because they want it to solve a different problem, but let's remember it's not about developers; it's about portability and user experience for end users.


I always think of WASM as being like a C FFI: something you reach for when your module is too slow because it's JavaScript. Kind of how Python binds to almost every library written in C and C++.

Not so useful for CRUD, but imagine building some node-based editor: you could put the solver in WASM.


Modern JS executes at about the same speed as Java, or about 25% of the speed of C. A JavaScript application is often faster to initialize into memory than a C application, largely because less overhead is required for application initialization, but it is otherwise slower because JS is garbage collected.

These distinctions are crucial when hardware performance really matters, like gaming or scientific data analysis. Otherwise these performance differences just aren’t noticeable to a common user.

Before WASM was a thing, 3D game engines were ported into Emscripten demos to showcase the potential. The output was too slow to play heavy 3D games in a portable browser container, but far beyond what you could get away with using JS alone. And all of that misses the point that you could now run a giant game engine in a web page without installing anything.


And that’s why it’s ten years old and just getting traction now. Until it has DOM access nobody working on the front end will be particularly enthused about its utility.

They’ve made a solution looking for a problem while the problem is staring them right in the face. It is a frankly ridiculous situation.


> And that’s why it’s ten years old and just getting traction now.

There’s not any particularly meaningful change in its traction. It has specialised situations where it’s very desirable, and it has been used in those specialised situations extensively for quite a few years; and for more general use, it’s trudging along as it ever has been, because it’s not compelling.

> Until it has DOM access nobody working on the front end will be particularly enthused about its utility.

This is also false. Shipping WebAssembly or shipping Web Workers each add complexity, compared with just using main-thread JavaScript, and most people simply can’t justify that—that’s why they’re not interested. But as for DOM access transforming things, I’ll say that Rust is one of the main languages used for targeting WebAssembly (because most languages aren’t suitable), and native DOM bindings is going to change approximately nothing. It will allow/require a slight change in the build process, and slightly change the way you write your own bindings, but that’s all.

I don’t entirely understand why people keep on thinking giving WASM direct access to DOM objects will be transformative. In truth, it’s very minor.


The reason working with the DOM directly is hard is that you have to implement arbitrary patching to go from one state to another.

The entire point of frameworks like React is to avoid the problem, by automatically creating and applying the patch for you.

It's not irrational; quite the contrary.


Yeah, I prefer vanilla DOM and I don't have any problems with state. State is as ridiculously simple as storing the state of user interactions on an object, saving that somewhere like localStorage, and then applying it on page load. React turns this ant hill into a mountain of complication.
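
A bare-bones sketch of what that looks like (element ids and state fields are made up):

    // One plain object holds the UI state; persist it and re-apply on load.
    const state = JSON.parse(localStorage.getItem("appState")) ?? {
      sidebarOpen: true,
      theme: "light",
    };

    function save() {
      localStorage.setItem("appState", JSON.stringify(state));
    }

    function apply() {
      document.body.dataset.theme = state.theme;
      document.querySelector("#sidebar")?.classList.toggle("open", state.sidebarOpen);
    }

    // Interactions just mutate the object, save, and re-apply.
    document.querySelector("#toggle-sidebar")?.addEventListener("click", () => {
      state.sidebarOpen = !state.sidebarOpen;
      save();
      apply();
    });

    apply();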


State can be easy when you're talking about a document-based app (forms and content). But it can become quite hard when you're talking about long-lived interactions and the state diagram becomes difficult to draw. React (the library, not the ecosystem) makes it easy to deal with that case. Otherwise you have to write bespoke reactive code. Not that difficult, but it's buy vs. build.


What about an OS GUI with windows and various different types of utilities? State was still just as simple. The content and utility of the application had no bearing on how state management worked.


Reactivity is just one pattern to help manage state. We have observables (which are just reactivity in different clothes), Entity-Component, View-Model (also Presenter), etc.


Ever use knockout.js?

I think it had/has the perfect balance of utility without being overly opinionated. State was also much easier to deal with.

I haven't used it in a while, but still look back on that project fondly.


React isn’t about persistence between page loads. React is about declaratively describing two different page states and “diffing” them so that only the diffs are applied to the DOM.

Storing the diffs to local storage is an interesting idea though.


Svelte seems to do this just fine. It's much simpler to work with, doesn't introduce too much proprietary code, and is both lightweight and incredibly fast.


> Browser makers have refused to do that for clear security reasons.

Because only JavaScript should be allowed to screw up that badly.


Some of the worst DOM APIs were designed with Java compatibility in mind.


So it's the Java prefix that matters ;)



