bool Quirks::requiresUserGestureToPauseInPictureInPicture() const
{
#if ENABLE(VIDEO_PRESENTATION_MODE)
// Facebook, Twitter, and Reddit will naively pause a <video> element that has scrolled out of the viewport,
// regardless of whether that element is currently in PiP mode.
// We should remove the quirk once <rdar://problem/67273166>, <rdar://problem/73369869>, and <rdar://problem/80645747> have been fixed.
if (!needsQuirks())
return false;
if (!m_requiresUserGestureToPauseInPictureInPicture) {
auto domain = RegistrableDomain(m_document->topDocument().url()).string();
m_requiresUserGestureToPauseInPictureInPicture = domain == "facebook.com"_s || domain == "twitter.com"_s || domain == "reddit.com"_s;
}
return *m_requiresUserGestureToPauseInPictureInPicture;
#else
return false;
#endif
}
I'm a developer on a web media player, and I remember we once had an issue with Picture-in-Picture mode: we had a feature that lowered the video bitrate to the minimum when the page containing the video element had not been visible for some amount of time.
That made total sense... until Picture-in-Picture mode was added to browsers, and you would see very low quality after watching your content in that mode on another page for long enough (~1 minute).
The sad part is that because I was (and still am) developing an open-source player, and because the API documentation clearly described the aforementioned implementation, I had to deprecate that option and create a new one instead (with better API documentation, talking more about the intent than the implementation!), one with the right exceptions for Picture-in-Picture mode.
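The fix boils down to treating "page hidden" and "video still being watched" as separate conditions. A minimal sketch of that decision logic (the function and its parameters are hypothetical, not from any real player API; in a browser the inputs would come from the Page Visibility API and `document.pictureInPictureElement`):

```typescript
// Decide whether a player may drop to minimum bitrate.
// Hypothetical helper: real code would read document.visibilityState
// and document.pictureInPictureElement instead of taking booleans.
function shouldThrottleBitrate(
  pageVisible: boolean,
  inPictureInPicture: boolean,
  hiddenForMs: number,
  thresholdMs = 60_000, // ~1 minute, as in the anecdote
): boolean {
  // A PiP video is still being watched even though its page is hidden.
  if (inPictureInPicture) return false;
  return !pageVisible && hiddenForMs >= thresholdMs;
}
```

The point of the separate `inPictureInPicture` input is exactly the exception the deprecated option lacked: visibility of the page no longer implies visibility of the video.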
Seeing that file reminded me of this anecdote; we should just have asked for a quirk :p
The first rule of browser development is "forget the standard, forget consistency... If it's popular and works in other browsers but not yours, your browser is broken."
It's more in the category of "the new kid on the block tries to implement to spec and discovers what everyone else already knows: spec's hosed."
I once received a bug report about a site that consistently went down after a computer woke from sleep... but only if the computer was a Macintosh, and only if the browser was Chrome. The root cause turned out to be that when the machine slept and reawoke, XMLHttpRequests attached to timers in an open webpage would all fire at once.
On Windows and Linux, apparently, the network stack would dutifully pause those requests while the radio took a moment to reestablish the connection. Mac OS X, adhering to the spec, did not pause them but instead immediately reported on wake that the network was unavailable.
So the other browsers on Mac OS wisely broke spec and ignored the first couple of network-down events that came in after sleep, quietly retrying the RPCs. Chrome adhered to spec and dutifully reported the dropped network as an error that failed all of those RPCs.
As a result, the client's page was broken, but only on Mac OS, only on Wi-Fi, and only in Chrome. Would you guess that their first solution was to painstakingly rewrite all of their setTimeout logic to move the retries up to the JavaScript layer, or would you guess that their first solution was to report a bug to Google and tell their regular users that Chrome was broken?
In any case, it's a moot point now because at some point Chrome changed their network stack implementation to match everybody else's. ;)
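Moving that tolerance up to the JavaScript layer amounts to a grace period after wake during which network errors are retried rather than surfaced. A sketch of that policy (the wake-detection hook and the timings are assumptions, not the client's actual code; a page might detect wake by noticing a large gap between timer ticks):

```typescript
// Decide whether a failed request should be quietly retried or reported.
// Assumed policy: network errors shortly after wake are expected while
// the radio reassociates, so retry them instead of failing the request.
class PostWakeRetryPolicy {
  private wokeAtMs: number | null = null;

  // The clock is injectable so the policy can be tested deterministically.
  constructor(private graceMs = 5_000, private now: () => number = Date.now) {}

  // Call this when a wake is detected (e.g. a timer tick arrives far
  // later than scheduled).
  onWake(): void {
    this.wokeAtMs = this.now();
  }

  shouldRetry(): boolean {
    return this.wokeAtMs !== null && this.now() - this.wokeAtMs < this.graceMs;
  }
}
```

Errors outside the grace window still fail normally, so genuine outages are not masked; only the predictable post-wake blip is absorbed.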
It’s getting closer to the point where we call the entire current “web” a Google-specific network, Google Chrome is renamed a “Google” browser instead of a web browser, and we re-make a new interlinked network which does not require one specific company’s product to use. (Never mind an advertisement company.)
The “Google” network and sites can be kept on as a necessary evil proprietary service, like Facebook is for many, and also LinkedIn.
That sounds like requesting an awful lot of volunteer labor from web developers who don't want to do that.
Web developers, ultimately, have very little vested interest in what browser is winning or who's using what as long as (a) people can access their site and (b) they don't have to write the site twice. That's their incentive model. Telling them that the spec is X and if Google does Y Google is wrong when Google is like 90% market share is just kind of a funny idea for them to laugh at and then go right back to solving the problem in a way that reaches 90% of the possible users (and then maybe, time permitting, writing pieces of the site twice to pick up a fraction of the remaining 10%).
> Web developers, ultimately, have very little vested interest
Yeah, of course. It's only the platform they depend on. Why not cede control of it to Google, right? What's the worst that could happen?
Sometimes I ask myself why people even try. What is the point when people have such an apathetic attitude? What is the point of these web standards? Some huge company comes in, dominates the market and suddenly they're the standard. Nobody cares as long as they're making money, even though the huge company is usurping control of the platform. Not even a year ago I saw a post here about people at Google talking about moving the web away from the previous "owned" model to a "managed" model or something like that. As long as people don't have to work too hard to get paid, who cares, right? This notion of an open platform is just a funny idea to laugh at.
Those people are then, to further the analogy, not “web” developers but “Google network” developers. Therefore, I would not ask them to do anything more than they are doing; what they are doing is irrelevant to the new interlinked network.
Which is fine. I'm sure they will care when the new interlinked network becomes relevant to anyone for anything.
(If one wants to do that road, one should probably start reasoning from the "killer app" of a novel network model. The killer app of the web was HTML, and specifically the hyperlink combined with the URL, which allowed for association of information in a way that hadn't been possible before. It'll be hard to one-up that, but if someone could find a way to do it that would be hard for HTML to just grow to consume, there may be room for a novel information service).
It's useful to distinguish between them, though. Apps are almost always first party software that only does what's officially supported. Browsers have a long history of customizability, extensibility, programmability, adversarial interoperability.
What if instead of browsers and ad blockers we had an extensive collection of web scrapers for every web site out there?
I do believe we have all of those things currently, but with the scrapers working against the human condition. Our only recourse is smart human scrapers, but that job starts to suck really fast.
That is a glib and inaccurate characterization of browser engine development, based on the reality of the web more than a decade ago.
It is, however, absolutely the behavior of web developers; that's why the web used to be filled with IE-only sites, and why we are now getting Chrome-only sites. It is much easier to blame other engines than to ask whether your site is depending on implementation details.
The TLDR for this is: the modern web is very well specified, and all browser engines work very hard to conform to those specs. Now, when divergent behavior is found, the erroneous engine is corrected, or, if the specification itself was incomplete, considerable effort is expended making sure it is complete, so that it is possible to ensure conforming implementations. The driving force for this was engine developers, not site developers.
Anyway.
The entire point of the html5 and subsequent "living" spec, and the death of ES4 and subsequent ES3.1 and ES5 specs, and then ES's "living spec" was dealing with the carnage of the early web and the Netscape vs IE insanity it produced. This was a huge amount of effort driven almost entirely by the engine developers, specifically so that the specs could actually be used to implement browser engines. The existing W3C and ECMA specifications were useless as they frequently did not match reality, where they did match reality they had gaps where things were not specified, and frequently they simply did not acknowledge features existed.
It took a huge amount of effort to determine the exact specification for parsing html, such that it could be adopted without breaking things. It took a huge amount of effort to go through the DOM APIs, the node traversal, event propagation, and on and on to specify them.
The same thing happened with ecmascript. A lot of effort for many years was spent replacing the existing spec, ignoring a bunch of time wasted by some parts of the committee creating ES4, making it so that the ecmascript specification actually matched reality.
There were places where we found that there were mutually incompatible behaviors between Gecko and Trident, but in most cases we were able to replace old, badly written specs with real specifications that were compatible with reality, and were sufficiently detailed that they could be used to implement a new engine with confidence that the engine would actually be usable.
The immense work required for this also means that the spec authors and committees are acutely aware of the need for exact and precise specification of new features. So it is expected that new specifications completely specify all behavior.
As an example: after originally implementing support for IMEs in WebKit on Windows, I recall spending weeks stepping through how key down, up, and press events were fired in the DOM while a user was typing with an IME. The spec at that point failed to say what events should be fired in that case (text entry is not keydown/press/up once IMEs are involved; do not assume one keyup will result in a single character change), and it was a months-long effort to get to something that only managed to specify keydown/up/press, none of the actual complexity of IMEs. The specification has since expanded to be more capable of handling IMEs, and they now have an example of the "keys typed by a user" vs "key events you receive" mapping [1], and alas my work is now largely "legacy" :D [2]
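The practical upshot for web code is to read composed text from the composition events rather than inferring it from key events. A reduced sketch of that idea as a pure fold over an event list (browser wiring omitted; the event shapes mirror the DOM's compositionstart/update/end but are declared locally here):

```typescript
// Simplified stand-ins for DOM composition events.
type ImeEvent =
  | { type: "compositionstart" }
  | { type: "compositionupdate"; data: string }
  | { type: "compositionend"; data: string };

// Fold a stream of composition events into the committed text.
// Note there is no per-keystroke mapping: one commit may insert several
// characters at once, and updates before it are only provisional.
function committedText(events: ImeEvent[]): string {
  let committed = "";
  for (const e of events) {
    if (e.type === "compositionend") committed += e.data;
  }
  return committed;
}
```

Code that instead counted keyup events here would conclude the wrong number of characters changed, which is exactly the trap described above.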
The problem as ever, is that it is very easy for web developers to rely on some implementation detail that a specification failed to dictate, and then say any browser engine that does not behave identically is wrong. This is what webdevs did with IE, and now it's what webdevs do with Chrome. It is always easier for a webdev to paste "this site requires ie/chrome" than to work out if what they're doing is actually specified behavior. Sure, it's possible it's a bug in the other engine, but if you are saying "install chrome" you're saying it doesn't work in Gecko or WebKit, so it's much more likely to be a site bug.
I don't disagree with any of that (and excellent work; the Wild West days were hard to get past)...
... But we both arrive at the same place, where if a site works on incumbent browsers (by spec, or by shared quirk because in an ambiguity of the spec everyone lucked into the same implemented behavior) and not on an outlier browser, and the site is popular, users perceive the outlier to be broken. Because users don't understand this problem space by parsing specs; they understand it as "well, it works on my sister's computer when she double-clicks the rainbow circle; I guess the compass just doesn't work with the whole web. Maybe I should just get myself a rainbow circle." Hence the existence of a Quirks.cpp file.
> But we both arrive at the same place, where if a site works on incumbent browsers (by spec or by shared quirk because in an ambiguity of the spec everyone lucked into the same implemented behavior) and not an outlier browser, and the site is popular, users perceive the outlier to be broken
The difference between the past and now is that it is very well understood by all the major vendors (Gecko, Blink, WebKit) that if the spec has a section where different behaviour is permitted - other than things that are necessarily non-deterministic (networking, timers within some bound, ...), or where platform behavior is different as with IMEs - the specification itself is broken. Similarly, if the spec disagrees with what browser engines are actually doing, the spec is wrong. Once an issue is identified, the spec is then fixed, regardless of effort, to ensure that the gaps are filled out and the errors corrected.
The point is that if a new browser comes along, and correctly implements the spec, that browser should work with the same content as any other engine, and if it can't the spec is broken. This is the model the engine developers want. Yes it may cause them to face new competition, but having a complete spec is a massive enabler. It lets you make massive internal changes to your engine without having to worry about "are there sites that depend on X"[1].
Now, even when there are gaps or errors in the spec such that observable implementation details leak out, if a developer's site only works in one browser (sans browser bugs, which please do file; the engineers at all these companies do care, and do value them), that site is depending on unspecified behavior, so the site is wrong.
From an end-users point of view, yes it appears that other browsers are broken, but that isn't the problem.
The problem is that the web developer turns around and says "it's not my site that is broken, it's the other browsers". This is the development model that means fundamentally Chrome is the new IE: it is the only browser that is resulting in sites saying "you need Chrome to continue" or developers saying "it works in Chrome but not X, X must be wrong" and not considering any alternative.
[1] Obviously there are quirks as listed in this file, but you can see that the size of this file and a couple of similar ones are very small, and the quirks are exceptionally specific, essentially they are as close as possible to "Apply this quirk, to this site, only if the site is still using this specific design/layout". In the past engines essentially had to go "we've got one site depending on this behavior, which has no real specification so we need to guess whether this is the actual behavior we should have, or whether it's uncommon, or even just a one off".
By the way, this is as old as the web. It's ironic to re-read https://dev.opera.com/blog/opera-s-site-patching/, published in 2009, which uses 2022(!) as a stand-in for the far future when hopefully none of this will be required.
I've found another webkit quirk in the past which is outside of the Quirks.cpp file, ObjectPrototype.cpp has some special code for the PokerBros app. Looks like it's still there.
Also not as disgusting as Quirks.cpp, but I was debugging some video decoding stuff in Chrome this week and found some fun things: special code to work around various GPU driver bugs.
In Windows, this is just a getter for a private variable called m_pcszwclbShouldSuppressAutocorrectionAndAutocaptializationInHiddenEditableAreasForHost
No, it's a function that takes a pointer to a struct in which you have to fill in dwLength before calling it. The struct also has at least 3 reserved "must be zero" fields. The function and the struct also have A and W variants.
The problem is when you have multiple very long identifiers that differ only in a few characters. When scanning through code, it's much easier to see the difference between a saaainheafh and a saaainheafi than it is a shouldSuppressAutocorrectionAndAutocaptializationInHiddenEditableAreasForHost and a shouldSuppressAutocorrectionAndAutocaptializationInHiddenEditableAreasForInput.
Also, autocomplete doesn't work when you're reading and not writing, or just using a text editor or reading/annotating a printout (yes, I still do that.) IMHO writing code that almost completely relies on special tools to handle it is a bad trend.
I'm surprised that I strongly disagree. My eyes glaze right past the difference between the fh and the fi in that example.
They also glaze past the end of the mega-strings, but I usually solve that problem by either actually Ctrl-F searching for the string (which will highlight it) or by finding an exemplar and selecting it (which will highlight all instances of the same symbol in the languages and IDEs I use).
I'm a firm believer that "code isn't just text" (in fact, most of my frustration with code is tools that treat it so... The set of strings that aren't valid programs is vastly larger than the set that are, so why should I be treating programs as if they're mere strings? It'll lead me to create non-compilable artifacts). So I try to avoid being in situations where the only tools I have to work with to understand code are a text editor or an annotated printout (I don't doubt that's done in places, but I've gone my whole career managing to avoid it so far).
> My eyes glaze right past the difference between the fh and the fi in that example.
Same here. The difference isn't significant enough for my eyes to latch onto.
If I had to come up with short readable function names, I would still use whole words, but would 1) cut the number of words down to the bare minimum 2) make each name as unique as practically possible.
This weird abbreviation fetish was a big turnoff when I first got into programming (about 15 years ago, when it was a lot more common). It’s just nonsensical and off-putting.
You should try to change your perspective: Source code is its own language, and much like with natural languages, to get good at programming you should learn the language of the source code and not try to dumb-down by constantly trying to translate back to English.
It can be a time saver and helpful when you’re writing it. But hurts even yourself a year or two later. Others with no previous code base experience have it 10x worse.
Just kidding. Mostly. Lots of little fiddly text editing patches for Google Docs, which doesn’t surprise me. It’s, I would assume, the most-used application suite on the web, and there’s going to be weird edge cases that pop up around rich text editing that get magnified by the sheer number of users. Including probably inside Apple.
Users can still install Chrome on a Macintosh. Apple's larger concern is probably whether they'll lose hardware share if, say, Reddit doesn't load right on MacBooks (you'd be surprised how many people buy an expensive machine as basically an internet appliance) or, more importantly: iPhones.
Apple has had record Mac sales for the past two years, since the release of Apple Silicon.
There’s no way whether or not any one particular website renders as expected is going to meaningfully impact Mac or iPhone sales.
Sure, Apple would prefer that users stick with Safari but they’re not going to lose any sleep if customers use Chrome/Firefox/Edge or whatever else on a new $2499 MacBook Pro.
It’s certainly way more in Reddit’s or other big web property best interest to perform well for Safari users, especially on iPhones and iPads where all browsers use WebKit as their rendering engine.
It's not just about being the default browser, they're the only browser with no user choice to switch or manually upgrade or downgrade independent of the OS.
Clarification, not to disagree or to minimize, but in case this gives the wrong impression to anyone who doesn’t know: only on iDevices. macOS has no restrictions specific to browser engines.
I like it, though I’m used to Apple library function/constant/class names so I’m not surprised the way some might be.
As an exercise: what would you name it that’s shorter?
I’m having trouble thinking of anything that doesn’t seriously compromise the clarity, unless you had a lot of autocorrect and autocapitalization problems and could shorten that part to AandA.
The only other option, which I don’t really like, is to strip all the info and call it something like isBug315255() and put a comment in the function explaining it. But that’s a big loss in my eyes.
It's incredible, most of these are large companies (Zoom, YouTube!) or government entities. They should just be able to send an email (or a registered mail on company letterhead) saying "Hello, we are Google, your website is broken. Please make the following changes.". Fixing sites for other people is less work and technical debt than adding it permanently to the source code of the browser.
And if it is not bugfixes, but keeping around features that were removed: if youtube.com uses something, why can't everybody else use it?
Many issues certainly get resolved over email or bug trackers. Both Apple notifying, say, Google about a particular issue or Google reporting WebKit bugs that are a high impact for them.
Yes, good point. I thought of it as minor safari updates, but small daily packets makes sense.
I guess another issue is that most of these have actual logic in C++, so you'd have to move some of this code to JS (I believe Firefox does that (the JS part, not the sending code part)). Sending code is a bit of a security concern though.
Just imagine a domain from such file being recycled in 10-20 years from now and you building a webpage for the new owner - everything works as expected on dev, stage, test etc - but not on prod when deployed...
Did not realize that WebKit has the equivalent of site-specific hard-coded stuff like this, or even an isGoogleMaps(); it's like NVIDIA drivers for the web. Crazy.
The quirks are called at the appropriate places, and can be inlined using LTO. The same places would need to hash the current context and check if it's in the hash map (is it one global hash map, or a specialized one for each quirk, with an optimised hashing function?). And on a match, you need to do the full check anyway, due to likely collisions.
The conditionals in the full check seem to use symbolised strings where possible, so they're quite fast. Probably faster than producing a suitable hash.
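For comparison, the per-quirk pattern from the C++ snippet at the top, lazily computing the answer once per document and caching it, can be sketched as follows (domain extraction is simplified to a plain string passed to the constructor; the class and member names just mirror the snippet):

```typescript
// Mirror of WebKit's memoized quirk pattern: compute the answer once
// for this document's registrable domain, then return the cached value
// on every subsequent call.
class Quirks {
  private requiresGestureToPauseInPiP: boolean | null = null;

  constructor(private topDocumentDomain: string) {}

  requiresUserGestureToPauseInPictureInPicture(): boolean {
    if (this.requiresGestureToPauseInPiP === null) {
      // A few direct string comparisons; no global hash map needed,
      // and the comparison runs at most once per document.
      const d = this.topDocumentDomain;
      this.requiresGestureToPauseInPiP =
        d === "facebook.com" || d === "twitter.com" || d === "reddit.com";
    }
    return this.requiresGestureToPauseInPiP;
  }
}
```

Since the answer is cached after the first call, the cost of the string comparisons is paid once, which is the point of the argument above: hashing into a central table would add work on every lookup without removing the need for the full check on a hit.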