Passive event listeners (github.com/wicg)
235 points by franze on Aug 16, 2017 | 60 comments



While passive event listeners are an improvement, they are not a complete solution; they're more like the "least of evils". There is a new primitive being implemented, IntersectionObserver, which solves most of the problems of having to detect when something is in view, with the intersection bookkeeping handled by the browser itself rather than by your own scroll code. So you don't need to do any computation, window-size reading, or other magic to detect whether an element is inside the viewport. We are currently using this in prod for our Algolia static websites and it has worked very well for the majority of our use cases. https://github.com/WICG/IntersectionObserver/blob/gh-pages/e...
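
For anyone who hasn't tried it, a minimal sketch of the lazy-loading case (the `img[data-src]` convention and the 200px margin are just illustrative choices, not part of the API):

    // Watch every <img data-src="..."> and swap in the real source
    // once the element gets close to the viewport.
    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        img.src = img.dataset.src;   // start loading the real image
        obs.unobserve(img);          // done with this element
      }
    }, { rootMargin: '200px' });     // trigger slightly before it scrolls into view

    document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));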


Why do you need to detect whether something is in the viewport?


For lazyloading stuff like images and ads. Or that technique where you have an infinite list, and the elements that are scrolled off-screen are recycled on the other end.


You can easily scroll in a way that the intersection observer sentinels are never seen, which causes bugs. Disappointing.


The linked description provides several use cases.


You can achieve smooth scrolling animations by using requestAnimationFrame and watching the scroll position within it, bypassing the blocking nature of the scroll event handler.


`scroll` listeners are not blocking, however `wheel` ones are. If you've tried to implement a parallax effect in JS you may have noticed the subtle differences on timing between these two (wheel fires before the scroll position is updated, scroll afterwards). That said, yielding immediately within a listener (like you recommend) is indeed a best practice.


This coupled with storing the scroll position and only executing code when it actually changes is the gold standard IMO. Works fantastically well on mobile too.
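
Roughly the pattern being described, as a sketch (onScrollChanged is a placeholder for whatever per-scroll work you actually do):

    let lastScrollY = -1;

    function loop() {
      const y = window.scrollY;
      if (y !== lastScrollY) {   // only do work when the position actually changed
        lastScrollY = y;
        onScrollChanged(y);
      }
      requestAnimationFrame(loop);
    }

    function onScrollChanged(y) {
      // stand-in for the real work: move parallax layers, toggle a header, etc.
    }

    requestAnimationFrame(loop);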


> Many developers are surprised to learn that simply adding an empty touch handler to their document can have a significant negative impact on scroll performance.

Maybe browsers could do some static analysis for the simple cases?

Or, would it be possible to fire all events asynchronously, and if the event cancels scrolling, revert the scroll position to where it was when the event happened? (And possibly mark the event handler as canceling scrolling.) That could sometimes cause jankiness when scrolling is canceled, in exchange for smooth scrolling in other cases.


> Maybe browsers could do some static analysis for the simple cases?

The risk here is that you end up with cases where non-obvious changes cause the static analysis to no longer work; at least in the current world there's a clear line between having a non-passive event listener and not having one. Attempting a heuristic could cause both unpredictability and interoperability issues (since the heuristic will surely vary by browser vendor).


how's this for an idea: maybe developers could write code that isn't shitty


Why didn't anyone think of that before?! I mean, what a breakthrough - think of the time that could be spent elsewhere, if only everyone got everything perfectly right. Coming up with the idea was the hardest part, implementing it should be easy, right?

In other words, "We won't allocate any time for testing and debugging: just don't write any bugs into the code" -a PHB I used to know

(Do you think people write shitty code _on purpose_?)


I don't think I said people write shitty code on purpose; I'll have to go back and read my own comment to make sure, though.

You might find that it is even a better option to do _more_ testing and debugging as opposed to cramming yet another pointless feature into already bloated browsers. And as a sibling to my previous comment mentioned: this is rife with "unpredictability and interoperability issues".

Maybe create a `static js analysis` browser extension for developers to aid them in their endeavours to stop writing shitty code, but don't force that down everybody's throat.


Well, okay, great. Let's assume that I do write perfect code to run inside the webpage (which is what I strive for, and there are tools that help me with that; your radical new idea for JS static analysis tool has been implemented for years, and I'm actively using it). Great. And then the code needs to cooperate with multiple ad network codes, third-party plugins, all while working around browser-based bugs (oh, and did I tell you that there are various browsers, each with its own set of quirks?).

So, yes, I am trying my hardest not to write shitty code, and to make it resilient - but no: it doesn't help nearly as much as your original comment suggests it would, because there are numerous forces beyond my control affecting the result. Worse, some behaviors that are inconvenient to me are seen as desirable by the respective library/browser/OS makers ("you say bloat, I say essential feature"), so even if all the parties involved up and down the stack were producing 100% shit-free code, the result of their interactions could still be shitty.

As for the automated-under-the-hood-fixes inside browsers - yeah, that would be great if there wasn't a need for those, or if they didn't exist at all (even though I'm aware that their existence enables inflationary expectations in JS developers). I'd also like a pony while we're in wishing-land.

TL;DR: No, handwaving away complex and leaky abstraction stacks doesn't work.


I'd hardly call this feature pointless. Scroll events, with cancellation support, have been a thing for well over a decade. This solves a very real problem of having to execute single-threaded application code that blocks scrolling. Particularly on mobile, where resources are constrained and smooth scrolling matters most, this is a big problem.

Having developers hint to the browser that the event will not be cancelled solves it very simply. Far more simply and predictably than any kind of static analysis could.


You seem to be misunderstanding my point. I'm definitely all for passive listeners. But I am very much against putting more workarounds for shitty developers directly into the browser.


The change we're discussing here is something that the developer will need to explicitly add to their code:

By marking a touch or wheel listener as passive, the developer is promising the handler won't call preventDefault to disable scrolling.
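
Concretely, that's just the options argument to addEventListener, something like:

    // The { passive: true } flag is the promise: this handler will never call
    // preventDefault(), so the browser doesn't have to wait for it before scrolling.
    document.addEventListener('touchstart', (e) => {
      // read-only work here; calling e.preventDefault() would be ignored
    }, { passive: true });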

Shitty developers who are lazy aren't likely to add that to their code if it 'works' without it. They are shitty after all. They won't care about a janky scroll on their webpage. Heck, they probably won't even test different ways of scrolling. Therefore it's fair to think this change is for awesome developers who test things and find that there's a problem, and need a way to fix it.


yes, passive listeners are great. What I don't want in my browser is a static analyzing tool as the first comment I answered to suggested


Well...that's a bummer, as modern JS virtual machines used in browsers all use some sort of predictive code path model, and such static analysis is an integral part of this.


The whole point of TFA is that it doesn't matter what code the developers write; the browser needs to block on handlers regardless due to how the events are specified. The described new feature fixes this.


But wouldn't a developer who is surprised by performance penalties caused by empty scroll event handlers also not use passive listeners?

The point of passive listeners would be to make it easier for developers to not accidentally write shitty code and boost performance on well written code that doesn't have to be blocking the scroll.


Chrome gives nice warnings when passive listeners are not used[1]. This is demonstrably[2] causing people who have not used passive listeners / may have written poor code in the past to take note and use them. It works.

1. https://developers.google.com/web/updates/images/2016/10/avo...

2. https://github.com/search?utf8=%E2%9C%93&q=non-passive+event...


The Intercept is a fun example of where passive listeners would help. If you check out this article on mobile, you'll see the header jump around; passive listeners would fix that: https://theintercept.com/2017/08/15/fearful-villagers-see-th...


Hmm, I couldn't get that to happen. Seems fine to me.


Hah, maybe you have a better device than I do. If you toggle the "verbose" log level on in the Chrome console you'll see a warning being emitted specifically about the scroll listener not being passive.


It seems like the root of this entire problem is that Javascript is single-threaded, but current browsers want to be multithreaded (and specifically to do rendering and scrolling in a separate thread).

Does JS need to be single-threaded? I wonder if browsers could speculatively run JS event handlers in multiple threads, then block and transfer to a central "main" thread iff they mutate any global state.


The issue is that the browser is waiting for a return value from a called event function. The browser changes its behaviour depending upon the return value. Single-threading versus multithreading is irrelevant.


It's not irrelevant, it's right at the heart of the problem!

Modern browsers use separate threads for Javascript and rendering. But if you have a (non-passive) scroll event, the rendering thread has to block on the Javascript thread, and that's why scrolling is janky.


To clarify a bit: the passive event listener idea is a good start because it means the scrolling thread doesn't have to block on the main JS thread.

But it seems like a big use of scroll handlers is to reposition some elements on the page, e.g. for parallax scrolling. Ideally that should synchronize exactly with the rendering thread, so elements don't jiggle slightly out of position as you scroll. The best way to do that is to run the scroll handler in the scrolling thread -- as long as you can avoid waiting for another thread.


Consider the need to block on e.g. the DOM.


Multiple threads have to synchronize their DOM reads and writes, sure, but that doesn't mean all access needs to be single-threaded.

I guess what I have in mind is roughly like managing DOM and JS state via STM, and speculatively parallelizing the JS. I don't know enough about STM to know how well it works out in practice, and for what kind of workloads. The goal would be to improve JS latency (ability to run small functions quickly) even if it reduces throughput.

This is all pretty speculative, I realize, I'm just wondering if there are obstacles beyond all the obvious ones, like having to rewrite the entire JS and DOM implementations. :)


Web Workers let JS be multi-threaded.


Web workers are isolated. They can only do computations and return results asynchronously to the main thread. You can't use a web worker as a DOM event handler.
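
A sketch of the only interaction model they do support (message passing), with `worker.js` as a stand-in file name:

    // main thread: the worker never sees the DOM, only messages
    const worker = new Worker('worker.js');
    worker.onmessage = (e) => console.log('result:', e.data);
    worker.postMessage({ numbers: [1, 2, 3] });

    // worker.js: there is no document/window here, so it can't touch the DOM
    // or be registered as a DOM event handler
    onmessage = (e) => {
      const sum = e.data.numbers.reduce((a, b) => a + b, 0);
      postMessage(sum);
    };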


Shouldn't calling `preventDefault` on an event marked passive throw an exception? It seems sloppy not to but maybe I'm missing something.


It should be logged as a warning, but I don't think it should throw. Throwing obviously stops execution, which is unlikely to be what the programmer intended. Javascript is, for better or worse, a permissive language. It doesn't like to break code when it doesn't have to.
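
That's roughly what shipped: inside a passive listener the call is simply a no-op, and Chrome logs a console warning along the lines of "Unable to preventDefault inside passive event listener invocation". A sketch:

    window.addEventListener('wheel', (e) => {
      e.preventDefault();                // ignored inside a passive listener
      console.log(e.defaultPrevented);   // still false, scrolling proceeds
    }, { passive: true });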


Of course it was not what the programmer intended; it is an error.

Javascript currently throws on many kinds of errors.


It's truly the Visual Basic of our times. On Error Resume Next...


> For instance, in Chrome for Android 80% of the touch events that block scrolling never actually prevent it

I'm curious how they get at this data. I haven't fully thought it through, but something about this raises the "creepy flag" in my head.

Does this data come through the "Automatically send usage statistics and crash reports to Google" setting? Google's help page [1] regarding that doesn't seem to indicate it's recording information about websites' JavaScript.

[1]: https://support.google.com/a/answer/151135?hl=en

EDIT: I'd also be curious if they have done any analysis to see if the mechanism that gathers this data is related to the performance implications of touchEvent listeners.


They measure lots of client-side feature usage, in part so that they know when they can deprecate something. Here's a list and short explanation [0].

Firefox has something similar [1] as part of Telemetry, although they measure fewer things [2].

[0]: https://chromium.googlesource.com/chromium/blink/+/master/So...

[1]: http://gecko.readthedocs.io/en/latest/toolkit/components/tel...

[2]: https://dxr.mozilla.org/mozilla-beta/source/obj-x86_64-pc-li...


The Chrome team probably runs some kind of automated conformance tests over various popular websites, so absent other information it would be simpler to assume that's where this data point comes from.

(Though with that said, 80% seems surprisingly low. I can't think why a site would cancel a scroll event besides scrolljacking, and while that's getting more common surely it can't be on 20% of all sites?)


Do I understand it correctly, to cause a problem, a page needs to:

1) Add a touch handler to the window.document element

2) Do computations even when there is no user interaction

So when the user wants to scroll, the touch handler cannot be run because some other javascript is already running. So the browser waits until the other javascript is finished, executes the touch handler and only then scrolls.

Is that correct?

If so, I am not in much danger. Because I usually do not do a lot of computation without user interaction. And I also do not put a touch handler on the document element.


The problem is that the browser's compositor, which has its own display thread (it displays pre-rasterized tiles), must post the scroll event to the thread running the JavaScript engine and then wait for the response before it can show the result of the scroll. With this flag the compositor can respond to the scroll immediately and display already-rasterized tiles without waiting for the JS thread.
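
A contrived sketch of the combination being described (the busy loop stands in for any heavy script that happens to be running when the touch arrives):

    // An empty, non-passive touch handler: the compositor now has to wait for
    // the JS thread before scrolling, in case preventDefault() gets called.
    document.addEventListener('touchstart', () => {});

    // Anything that keeps the JS thread busy stretches that wait out.
    setInterval(() => {
      const end = performance.now() + 100;
      while (performance.now() < end) {}  // ~100ms of blocking work
    }, 500);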


I'm thinking that maybe event listeners should be passive by default.


This was considered; there's a long discussion here: https://github.com/WICG/interventions/issues/18


TL;DR: Some browsers already started doing exactly that. Early results so far seem rather encouraging.

> We improved the performance on a vast swath of the mobile web while causing problems for a very small number of developers/sites and virtually no users (we've heard almost no complaints from users about broken sites). Our data suggests we made the right trade-off for the web platform as a whole and for Chrome as a product.


The vast majority of GUI frameworks allow cancelling events with no observable side-effects. Cake for whoever figures out how they achieve this mythical benefit and why it makes passive look like a bloody hack.


Yes, but that would break the internet.


I know this is a silly question and probably argued already over the years.

But can't and shouldn't this be solved by some mechanism? Like, some umbrella "native-web-1" tag, meaning that any project without the tag renders as 2020-and-below, whilst anything with it gets a brand new shiny implementation.

It would also help prevent the relentless "wow, I really like that 2024 feature, let's make a shit version of it today and another 7 over the years, so that everyone is stuck with a dependency which doesn't even follow the specification" (promises etc.).


Although a nice idea in principle I think you'll just end up with horribly convoluted and hacky if/else logic throughout your code - much like the era of nasty IE6 kludges.


That's a good question. That <!DOCTYPE> thing is such a mechanism, isn't it? (Or it was supposed to be.)

Perhaps the problem is that shipping, for example, two different JavaScript engines would be too much of a burden for browsers?

I want to know.


The WHATWG's philosophy for the HTML standard is never to sway too far from what browsers currently do, mainly to make sure that the browsers can keep up: https://whatwg.org/faq?#living-standard

The browsers seem to like that, so it's probably unlikely that we'll get a wildly different HTML6 anytime soon, if ever.


Yeah, exactly. I can't quite find the words for what I am trying to say, but doctypes had some other piece of data with them. Was it a link or a string pointing to a spec? Like XHTML vs HTML4. Oddly, I'm fairly sure that if you omit it, the page is treated as "html5", which is random; shouldn't they have forced a specific HTML5 doctype?

That choice basically says to me that the idea of partitioning, or whatever those doctypes were for, is now abandoned.


I don't understand all this need for backward compatibility. Most of the web is rotting anyway. When can we just wrap it up, say ad acta, name it "web 1.0" (congratulate everyone) and put it away? Lessons learned. No more updates to browsers, html, css or js, not even bug fixes. Just document everything and live with it for existing sites. And then we can work on a tabula rasa, clean new technologies, this time with some thought put into it. This probably means not letting MS or Brendan anywhere near.


> And then we can work on a tabula rasa, clean new technologies, this time with some thought put into it.

I'm sorry - that just seems a bit...naive.

If "we" did as you are saying, twenty years would pass and we'd be having this same conversation again.

History is rife with examples of trying to fix things by wholesale rebooting, and people thinking "this time it will be perfect!"

...except it never is.

I think engineers of all stripes understand this; we've never seen this (yet...) in industries like aerospace or automobiles, or other kinds of hands-on engineering. For one, if it were done, people would probably die. But mainly, I think, because engineers know that throwing the baby out with the bathwater is a bad idea in general; that the institutional knowledge built into the outputs and practices are worth having in place and examining - both to learn what works right, as well as what doesn't.

Your suggestion is akin to a "not invented here" mindset, where rather than using existing frameworks and/or libraries, a developer instead opts to "go at it alone", to prove their implementation is better. Sometimes it is; sometimes it does advance things in a good way. But those tend to be outliers; "your implementation" likely won't be that case.

Also, to be honest, many of those "game changing" libraries and such are "built on top of" earlier stuff and package it better (with cleanup and tweaks added too). This isn't to denigrate either; in fact it's part of the process (and also shows why "first mover advantage" can sometimes be a fallacy as well).


Sounds like Plan 9 after Unix was soiled by all and sundry.


Is WICG a new kind of WHATWG? Does anybody know the difference between them?


The WICG (Web Platform Incubator Community Group) is a W3C community group. Here's their charter: https://wicg.github.io/admin/charter.html


how does having a function(){setTimeout(handler,1)} compare to this?


That function won't block the rendering operation. Having a lot of timeouts etc. will potentially slow the JS down in general (indirectly slowing the page down or costing battery, CPU, etc.), but it will not cause the user-action jank that the article is talking about.


And while they're at it, please remove all APIs which allow for scrolljacking.



