
That is a glib and inaccurate characterization of browser engine development, based on the reality of the web more than a decade ago.

It is, however, absolutely the behavior of web developers; that's why the web used to be filled with IE-only sites, and why we are now getting Chrome-only sites. It is much easier to blame other engines than to ask whether your site is depending on implementation details.

The TL;DR is: the modern web is very well specified, and all browser engines work very hard to conform to those specs. When divergent behavior is found, the erroneous engine is corrected; if the specification itself was incomplete, considerable effort is expended making sure it is complete, so that conforming implementations are possible. The driving force for this was engine developers, not site developers.

Anyway.

The entire point of HTML5 and the subsequent "living" spec, and of the death of ES4, the ES3.1 and ES5 specs that followed, and then ECMAScript's own "living" spec, was dealing with the carnage of the early web and the Netscape vs. IE insanity it produced. This was a huge amount of effort, driven almost entirely by the engine developers, specifically so that the specs could actually be used to implement browser engines. The existing W3C and ECMA specifications were useless: they frequently did not match reality; where they did match reality, they had gaps where things were left unspecified; and frequently they simply did not acknowledge that features existed.

It took a huge amount of effort to determine the exact specification for parsing HTML such that it could be adopted without breaking things. It took a huge amount of effort to go through the DOM APIs, node traversal, event propagation, and on and on, to specify them.

The same thing happened with ECMAScript. A lot of effort over many years was spent replacing the existing spec (setting aside the time wasted by some parts of the committee on ES4), making it so that the ECMAScript specification actually matched reality.

There were places where we found mutually incompatible behaviors between Gecko and Trident, but in most cases we were able to replace old, badly written specs with real specifications that were compatible with reality, and that were sufficiently detailed that someone could use them to implement a new engine and be confident the result would actually be usable.

The immense work required for this also means that spec authors and committees are acutely aware of the need for exact and precise specification of new features, so new specifications are now expected to completely specify all behavior.

As an example, I recall that after originally implementing support for IMEs in WebKit on Windows, I spent weeks stepping through how keydown, keyup, and keypress events were fired in the DOM while a user was typing with an IME. The spec at that point failed to say which events should be fired in that case. Text entry is not keydown/keypress/keyup once IMEs are involved; do not assume one keyup will result in a single character change. It was a months-long effort to get to something that only managed to specify keydown/keyup/keypress, none of the actual complexity of IMEs. The specification has since expanded to handle IMEs better, and it now has an example of "keys typed by a user" vs. "key events you receive" [1], though alas my work is now largely "legacy" :D [2]
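To make the distinction concrete, here is a minimal sketch of the kind of event interleaving a page sees during IME composition, using a plain EventTarget (available in browsers and Node 15+) so it runs anywhere. The specific sequence dispatched below is one plausible ordering, not a guarantee; exactly where keydown/keyup land relative to composition events was the historically unspecified part.

```javascript
// Log key and composition events in the order they are dispatched.
function attachInputLogger(target, log) {
  const types = ["keydown", "keyup", "compositionstart", "compositionupdate", "compositionend"];
  for (const type of types) {
    target.addEventListener(type, () => log.push(type));
  }
}

const log = [];
const target = new EventTarget();
attachInputLogger(target, log);

// Simulate one plausible (hypothetical, engine-dependent) sequence while a
// user composes a character with an IME. Note: no keypress at all, and one
// keyup does not mean one character was committed.
for (const type of ["keydown", "compositionstart", "compositionupdate", "keyup", "compositionend"]) {
  target.dispatchEvent(new Event(type));
}

console.log(log.join(","));
// keydown,compositionstart,compositionupdate,keyup,compositionend
```

Code that assumes "one keydown, one keypress, one keyup, one character" breaks as soon as this interleaving appears, which is why the spec now treats composition events separately from key events.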

The problem, as ever, is that it is very easy for web developers to rely on some implementation detail that a specification failed to dictate, and then say any browser engine that does not behave identically is wrong. This is what webdevs did with IE, and now it's what webdevs do with Chrome. It is always easier for a webdev to paste "this site requires IE/Chrome" than to work out whether what they're doing is actually specified behavior. Sure, it's possible it's a bug in the other engine, but if you are saying "install Chrome", you're saying it doesn't work in Gecko or WebKit, so it's much more likely to be a site bug.

[1] https://www.w3.org/TR/uievents/#keys-IME

[2] https://www.w3.org/TR/uievents/#legacy-key-models




I don't disagree with any of that (and excellent work; the Wild West days were hard to get past)...

... But we both arrive at the same place: if a site works on incumbent browsers (by spec, or by shared quirk because everyone lucked into the same implemented behavior in an ambiguity of the spec) and not on an outlier browser, and the site is popular, users perceive the outlier to be broken. Because users don't understand this problem space by parsing specs; they understand it as "well, it works on my sister's computer when she double-clicks the rainbow circle; I guess the compass just doesn't work with the whole web. Maybe I should just get myself a rainbow circle." Hence the existence of a Quirks.cpp file.


> But we both arrive at the same place, where if a site works on incumbent browsers (by spec or by shared quirk because in an ambiguity of the spec everyone lucked into the same implemented behavior) and not an outlier browser, and the site is popular, users perceive the outlier to be broken

The difference between the past and now is that it is very well understood by all the major vendors (Gecko, Blink, WebKit) that if the spec has a section where different behavior is permitted, other than things that are necessarily non-deterministic (networking, timers within some bound, ...) or where platform behavior legitimately differs, as with IMEs, then the specification itself is broken. Similarly, if the spec disagrees with what browser engines are actually doing, the spec is wrong. Once an issue is identified, the spec is fixed, regardless of effort, to ensure that the gaps are filled and the errors corrected.

The point is that if a new browser comes along and correctly implements the spec, that browser should work with the same content as any other engine, and if it can't, the spec is broken. This is the model the engine developers want. Yes, it may expose them to new competition, but having a complete spec is a massive enabler: it lets you make massive internal changes to your engine without having to worry about "are there sites that depend on X?"[1]

Now, even when there are gaps or errors in the spec such that observable implementation details leak out, if a developer's site only works in one browser, that site is depending on unspecified behavior, so the site is wrong (actual browser bugs aside; please do file those, the engineers at all these companies care about and value them).

From an end user's point of view, yes, it appears that the other browsers are broken, but that isn't the problem.

The problem is that the web developer turns around and says "it's not my site that is broken, it's the other browsers". This development model is what makes Chrome the new IE: it is the only browser today that results in sites saying "you need Chrome to continue", or in developers saying "it works in Chrome but not X, so X must be wrong" without considering any alternative.

[1] Obviously there are quirks, as listed in this file, but you can see that this file and a couple of similar ones are very small, and the quirks are exceptionally specific: essentially they are as close as possible to "apply this quirk, to this site, only if the site is still using this specific design/layout". In the past, engines essentially had to say "we've got one site depending on this behavior, which has no real specification, so we need to guess whether this is the actual behavior we should have, or whether it's uncommon, or even just a one-off".
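The shape of such a quirk table can be sketched roughly as below. This is a hypothetical illustration, not WebKit's actual Quirks.cpp API; the host name, flag, and function names are all invented. The key property it demonstrates is the narrow gating: the quirk applies to one exact host, and only while the page still uses the markup the quirk was written for.

```javascript
// Hypothetical sketch of a narrowly scoped site-quirk table, in the spirit
// (not the letter) of an engine's quirks file. All names are invented.
const quirks = [
  {
    description: "treat the old layout's overflow behavior as pre-spec",
    // Apply only to this exact host, and only if the legacy markup that
    // motivated the quirk is still present on the page.
    appliesTo: (host, hasLegacyMarkup) =>
      host === "legacy.example.com" && hasLegacyMarkup,
  },
];

function shouldApplyQuirk(host, hasLegacyMarkup) {
  return quirks.some((q) => q.appliesTo(host, hasLegacyMarkup));
}

console.log(shouldApplyQuirk("legacy.example.com", true));  // true
console.log(shouldApplyQuirk("legacy.example.com", false)); // false: site was redesigned
console.log(shouldApplyQuirk("example.com", true));         // false: different host
```

The point of gating on both the host and the old design is that the quirk dies automatically when the site is redesigned, instead of becoming a permanent, unspecified behavior every engine must carry.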




