Firefox's protection against fingerprinting (support.mozilla.org)
323 points by pmoriarty on May 23, 2022 | 180 comments


That's an awesome feature, although it doesn't seem to be complete.

I did some research into whether it is possible to stop fingerprinting using a browser extension that patches the JS environment before the page loads, and it turns out that it is difficult or impossible, first of all because there is no API for patching the JS environment from an extension.

I think that many of the new HTML standards are poorly designed with regard to privacy. For example, WebGL reports the video card that you use. Who needs that? Maybe a tiny percentage of sites use this information to detect bugs, but the main use of this feature is as a reliable, unforgeable signal for fingerprinting a device. Most sites do not need WebGL at all.

It seems that browser vendors are in a hurry to push forward as many features as possible without much thought about users' privacy.

So basically today we have lots of standards that provide signals for fingerprinting (Web Audio, WebGL, the Canvas API, WebRTC, port scanning via fetch or WebSocket, probing the extensions list) and zero APIs or settings that let the user control or block them (unless you are ready to patch the browser).
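
To illustrate how cheap these signals are to collect, here is a rough sketch (not any particular tracker's code) of turning the Canvas API into a device identifier:

    // Sketch of canvas fingerprinting: the exact pixels produced by text
    // rendering vary with the GPU, driver, OS and font stack, so hashing
    // them yields a fairly stable per-device signal, with no permission.
    function canvasSignal() {
      const canvas = document.createElement('canvas');
      canvas.width = 200;
      canvas.height = 50;
      const ctx = canvas.getContext('2d');
      ctx.textBaseline = 'top';
      ctx.font = '16px Arial';
      ctx.fillStyle = '#f60';
      ctx.fillRect(0, 0, 200, 50);
      ctx.fillStyle = '#069';
      ctx.fillText('fingerprint test', 2, 2);
      // toDataURL() hands the rendered pixels back to the script
      // (with resistFingerprinting, Firefox can prompt or return
      // placeholder data here instead).
      return canvas.toDataURL();
    }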


> For example, WebGL reports the video card that you use. Who needs that? Maybe a tiny percentage of sites use this information to detect bugs, but the main use of this feature is as a reliable, unforgeable signal for fingerprinting a device. Most sites do not need WebGL at all.

WEBGL_debug_renderer_info is an optional extension to WebGL, specifically so that it can be denied to the page at the browser's discretion. And Firefox's privacy.resistFingerprinting (the subject of this article) disables it: https://developer.mozilla.org/en-US/docs/Web/API/WEBGL_debug...
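
For reference, this is roughly what querying it looks like from a page, and what denial looks like:

    // Sketch: reading the GPU via the optional WEBGL_debug_renderer_info
    // extension. When the browser denies the extension (e.g. with
    // privacy.resistFingerprinting), getExtension() returns null and only
    // the generic VENDOR/RENDERER strings are visible.
    const gl = document.createElement('canvas').getContext('webgl');
    const dbg = gl && gl.getExtension('WEBGL_debug_renderer_info');
    if (dbg) {
      console.log(gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL));   // e.g. "NVIDIA Corporation"
      console.log(gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL)); // e.g. "GeForce GTX 1080/PCIe/SSE2"
    } else if (gl) {
      console.log(gl.getParameter(gl.VENDOR), gl.getParameter(gl.RENDERER));
    }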


If it is enabled in the standard configuration of major browsers, it is not optional anymore. Sites become dependent on it.


And disabling it now becomes a tracking bit. Profit!


>It seems that browser vendors are in a hurry to push forward as many features as possible without much thought about users' privacy.

"Vendors" in the plural makes it sound like the market isn't a monopoly of Google at this point. Google naturally loves to be able to fingerprint people based on as many metrics as possible, for their ads.


Indeed, the WebGL extension that leaks the video card and driver name [1] was written by a Google employee.

[1] https://www.khronos.org/registry/webgl/extensions/WEBGL_debu...


I think you do need the driver or video card name if you want it to work (or have usable performance) on every vendor's hardware.

Video cards behave wildly differently between vendors. Some shaders run smoothly on one vendor's hardware while lagging badly on another's, and some don't even run correctly on certain vendors. Without knowing the vendor, it is not possible to patch things up to get a usable program.

Sharing the same API doesn't mean they behave identically.


Sure but most websites don't need this, and if they do they can ask nicely.


There is no way you can expect a non-technical user to understand the full implications of a prompt like that.


Something like: "This web site is requesting access to 3D acceleration. This is required for fancy 3D graphics, but can also give the web site more information about your computer than it would otherwise, which may be used to track you, so only say yes if you are expecting fancy 3D graphics and you trust this web site" seems usable?


1) Almost no one is going to read all that. It also assumes the common person knows what 3D acceleration is (gamer types and graphics professionals know, but the average grandparent probably won't).

2) WebGL is being used for a lot more than just 3D objects now -- it's used for everything from 2D games and graphing libraries to WebGL-accelerated machine learning in the browser (like TensorFlow.js).


Then this feature should be disabled so that nobody relies on it.


Most of the major applications that actually rely on WebGL do need this, largely so they can avoid graphics glitches for the small fraction of users with buggy graphics drivers.


…these are few and far between in comparison to the sheer amount of the web that now gets to freely query that information.

They should be asking nicely, not getting it for free.


Totally agree. I bet in the wild 99% of pages that use this are ad trackers and 1% are valid WebGL applications.


Yes... And people keep bashing the Safari team for holding web development back. I mean, it's clear that the Safari team also has their own agenda regarding competition with the App Store. But the stuff that Google is pushing is just more ways for them to track you in the end.


The things Safari lacks aren't even relevant to privacy.

They are more about buggy layout (not the shiny grid layout or style workers; it often doesn't even handle CSS2 correctly) and incorrectly implemented APIs (not the "not implemented due to privacy" ones, but the ones that are implemented yet report wrong results).

Firefox also rejects a lot of proposals for security reasons, but everything it does support works correctly, instead of working in funny ways and requiring devs to sniff the browser and insert random workarounds just to avoid the improper results Safari gives and keep it from breaking the page.

Fun fact: Safari didn't even handle full ISO 8601 correctly until a very recent version. A timestamp without a timezone would be parsed into the wrong timezone, and a timestamp without a colon in the middle of the timezone offset wouldn't be parsed at all (which is unfortunately Go's default output format).


Regarding timezones, this seems to be a fault of the ISO standard. Why does it allow several representations of the same thing?


> Google naturally loves to be able to fingerprint people based on as many metrics as possible, for their ads.

I guess that's why they have an explicit policy of prohibiting browser fingerprinting in all their ad business.

https://digiday.com/media/googles-opaque-practices-to-restri...


It is, actually. They want their own systems to do it, but not the people who buy ads from them, so their advertisers become reliant on Google's user segmentation.


Last I checked, Google still adheres to w3c standards on these things. And they haven't captured the w3c process itself... We need to be holding the w3c accountable for considering privacy as they refine new rfcs for apis.


It's actually the WHATWG you need to watch[0], not the W3C. The latter has had basically no power for some time now and only endorses what the WHATWG propose[1].

[0] https://github.com/whatwg/sg

[1] https://www.w3.org/2019/04/WHATWG-W3C-MOU.html


> Last I checked, Google still adheres to w3c standards on these things.

It adheres to web standards that it primarily writes and pushes through.

> We need to be holding the w3c accountable for considering privacy as they refine new rfcs for apis.

You know, when w3c does that HN is ablaze with "Safari is the new IE" and "Firefox is irrelevant". See all of hardware APIs (Bluetooth, Serial, HID etc.) that are rejected by both Safari and Firefox primarily on privacy issues.


How did that saying go?

It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It

Follow the money, people in tech get their paychecks from companies that do not respect privacy. And I don't mean that as an accusation: in capitalism it is your duty to do whatever you can get away with.


You know Google literally writes the specifications that are subsequently used by advertisers to fingerprint users, right?

For example, the AudioContext API allows websites with zero audio to query sensitive personalized information about your audio equipment, with no permission. It was written by a Google employee, and implemented first in Chrome, then abused throughout DoubleClick's (owned by Google) display advertising network.
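
As a minimal sketch of that class of technique (the well-known OfflineAudioContext trick, not DoubleClick's actual code):

    // Render a fixed signal offline and sum the samples. Tiny floating-point
    // differences between audio stacks make the sum a device signal, and no
    // permission prompt is involved at any point.
    async function audioSignal() {
      const ctx = new OfflineAudioContext(1, 44100, 44100);
      const osc = ctx.createOscillator();
      osc.type = 'triangle';
      osc.frequency.value = 10000;
      const comp = ctx.createDynamicsCompressor();
      osc.connect(comp);
      comp.connect(ctx.destination);
      osc.start(0);
      const rendered = await ctx.startRendering();
      return rendered.getChannelData(0).reduce((a, b) => a + Math.abs(b), 0);
    }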


That's what I mean by "holding them accountable." So Safari and Firefox should implement that API behind a privacy-check flag (same as we get for location services, etc.). If Google complains it's not to spec, tell Google to fix the spec.

Don't rubber stamp specs that aren't privacy-conscious; Apple, Mozilla, and Microsoft are WHATWG standards members too.


"We need to be holding the w3c accountable for considering privacy as they refine new rfcs for apis."

How do we do that?


Who do you think proposes the standards?


At this point, what's really needed is an additional layer of abstraction. The site needs to run in a virtual computer within the browser, identical to all others, and every interaction where the site needs information from the user needs to go through a filter that sufficiently randomizes and sanitizes identifying inputs to make it much, much harder to fingerprint. That will probably break some content like games and whatnot, but the reality is, as much as everyone loves their web apps, they really are a tiny fraction of web browsing.


The sandbox provided by the JavaScript (or WASM) engine is already like a virtual machine. The problem is that browser vendors leak important information into this sandbox.


Also, what I don't like is that enabling this feature disables page zoom. This is inconvenient because you have to choose between using zoom and fingerprinting protection.

Actually, I think that when zoom is less than 100% (i.e. the page is made smaller) it is possible to report the original window size to the page rather than the scaled-up size. Many sites today use gigantic font sizes and they are difficult to read without zoom or a high-DPI display.


> Also, what I don't like is that enabling this feature disables page zoom. This is inconvenient because you have to choose between using zoom and fingerprinting protection.

I don't think it does disable page zoom; I have privacy.resistFingerprinting set to true and can still zoom just fine. (It does disable per-site zoom settings, but you can still zoom any site to whatever size you want each time you visit—which is certainly inconvenient, but, well, the state of the web is such that you can choose convenience or privacy; Firefox giving you that choice does not mean that the necessity of choosing is their fault.)


Page zoom is mandatory for me as it seems that most web devs love to use tiny text. Here is a blatant example: http://www.aaronsw.com/weblog/

Are you fricking kidding me with that? Sure 30 year old me could have read that as is, but why would I want to?


12px is not tiny. A bit small for body text considering the narrow font, but not unreadable. If you consider that text tiny then you should configure your browser (or OS) to enable DPI scaling (effectively zoom by default) instead of demanding all websites have giant text.


The extension Zoom Page WE can re-enable this if you want.


> WebGL reports the video card that you use. Who needs that?

Basically anybody trying to do a high-performance game in WebGL. Even today, the quirks of individual cards are enough that a game relying on sufficient feature capabilities is going to need a quirks list of cards to substitute implementations on. There's nothing to be done about it if a card just lies about its capabilities in the gestalt API or has an honest-to-God bug that you have to work around in the shader.

But if an API like that isn't behind a permission dialog and is instead on by default, that's a major privacy mistake.


I am not sure that would work, because it is unlikely that you can test your application with all existing video cards (and yet-to-be-released cards). And even if you do, the performance might depend on other factors (OS, CPU model, RAM type and size, driver version).

Instead, game developers should provide an option to select graphics settings: more textures, fewer textures, more shaders, fewer shaders and so on, and display an FPS counter so that the user can decide what is better for them. Or automatically switch to lower settings if the FPS falls below a certain threshold.
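
A rough sketch of that auto-fallback idea (applyQuality is a hypothetical hook into the game's settings):

    // Measure FPS once per second and step the quality down whenever it
    // drops below a threshold, instead of guessing from the GPU model.
    function autoQuality(applyQuality, levels = ['high', 'medium', 'low']) {
      let level = 0, frames = 0, last = performance.now();
      function tick(now) {
        frames++;
        if (now - last >= 1000) {
          const fps = frames * 1000 / (now - last);
          if (fps < 30 && level < levels.length - 1) applyQuality(levels[++level]);
          frames = 0;
          last = now;
        }
        requestAnimationFrame(tick);
      }
      requestAnimationFrame(tick);
    }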


It's not just about performance. There are bugs in certain GPUs and drivers that can crash the browser or even produce fundamentally wrong results. It's not just "scale graphics quality", in some cases specific work-arounds with completely different semantics are needed.


Like which ones?


One particular one I've struggled with only appears on macOS with Intel Iris Xe GPUs in which the texture sampling in my default, high-performance, seamless skybox renderer bugs out and only makes one face of the cube render correctly. I have to fall back to physical geometry to fix it, which is not as efficient, causes ugly seams at the edges of the cube, and also overcomplicates handling transparent images and image alignment if I'm not very, very careful.

And my app isn't even all that complex. There are so many bugs to handle, but luckily Three.js handles the vast majority of them for me. You can't count on even very basic apps to be completely compatible across all devices.


I encountered problems on an Intel card config once where a simple multiplication of three values in the shader resulted in 0 being emitted due to a bug in the shader compiler itself. Workaround was to wrap the whole equation in a multiplication by 1.00000000001, which (we assume, closed source architecture that's very difficult to crack the black box on) forced the compiler to stop trying to use an optimization that was dropping our values on the floor.


The presence of this bug can be tested for without knowing the GPU model.


> The presence of this bug can be tested for without knowing the GPU model

Testing for bugs is fingerprinting - you are back at square one.


And the bug should be fixed in the driver, yes. Still no reason to provide the fingerprinting info for free for all cards and vendors.


> And the bug should be fixed in the driver, yes.

But that's the problem. The developer has no power to make that happen. Between vendors failing to support their chips once they released to the public and users failing to upgrade their machines... Some of the Intel bugs are over a decade old.

This is one of those network effect problems not unlike the question of whether a website is broken if it doesn't work under Firefox. If a game doesn't work on a user's computer, the game is broken, not the computer.

This whole conversation has raised an interesting point, because bugs like that will never be fixed in the driver. And similarly, different versions of web browsers have different quirks and bugs in their implementations. I suspect you could build quite a fingerprinting profile by tickling those bugs and quirks... Not just in the webgl implementation, but in the DOM implementation, etc.


It's tested far faster with a card list.

But you're right, in practice a lot of engines do both... They render a known image at boot-up and check it for consistency. But once you know you have a problem, you don't know how to solve it, so you might as well query the card anyway.
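
That boot-up check can be sketched roughly like this (drawTestScene and the reference checksum are assumptions, standing in for whatever the engine actually renders):

    // Draw a known test scene, read the pixels back and compare a cheap
    // checksum against a value recorded on known-good hardware; a mismatch
    // means "this GPU/driver combination needs a workaround".
    function rendersCorrectly(gl, drawTestScene, expectedChecksum) {
      drawTestScene(gl);                       // hypothetical fixed pattern
      const px = new Uint8Array(64 * 64 * 4);
      gl.readPixels(0, 0, 64, 64, gl.RGBA, gl.UNSIGNED_BYTE, px);
      let sum = 0;
      for (const v of px) sum = (sum + v) % 1000000007;
      return sum === expectedChecksum;
    }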


Like sibling comments say, this behavior is really common, trying to tell people to stop doing it just isn't going to happen.

If you really want to get your blood boiling, Vulkan since day 1 has had an application info struct which gets passed down to drivers, so drivers can work around application bugs and optimize for their specific loads! https://www.khronos.org/registry/vulkan/specs/1.3-extensions...


You might not be able to test every possible card, but game engines that target browsers might have that capacity.


Yet another reason why making the web into an app platform was a mistake.


I really, really wish they would make it site-specific: i.e., give me the option to disable it for certain domains. Currently, I have it enabled, but this causes sites (like GMail) to not know the current time. So my GMail shows emails' times in GMT.

I mean, I'm already logged into the site; I have already given them my name, password, etc. So what am I trying to hide from them??


You can add exemptions via the preference privacy.resistFingerprinting.exemptedDomains

edit: wording


You also need privacy.resistFingerprinting.testGranularityMask set to 4 for it to work.

https://old.reddit.com/r/firefox/comments/q9kql8/help_settin...
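
For reference, the combination mentioned above in user.js form (mail.google.com is just an example domain):

    user_pref("privacy.resistFingerprinting", true);
    user_pref("privacy.resistFingerprinting.exemptedDomains", "mail.google.com");
    user_pref("privacy.resistFingerprinting.testGranularityMask", 4);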


for some reason, neither of these work.

I've been trying to get slack to show the current time on messages.

does it just hardcode GMT?


have you tried closing the tab and restarting the browser? failing that, there might be some mechanism where slack saves your timezone in its user profile (or localstorage) and uses that even if the timezone reported by the browser changes


Does disabling tracking protection for the website not work? When I click the little shield in the address bar and toggle tracking protection, most sites relying on tracking scripts start working again.

I suppose there's no field to enter these domains beforehand but I don't really run into any trouble because of this feature.


That doesn't seem to work. I disabled "enhanced tracking protection" for GMail, and my emails still show times of 7PM (I'm in LA Timezone, and it's currently noon here).


Email systems usually have a setting for timezone which was likely set once when you created the account. Gmail has this for example.


I have the same problem - I wonder if that just sets browser timezone to GMT


One workaround for the wrong time is to add an add-on that changes your timezone.


Great idea - would love this as well. And of course even better if there was an easy way to sync across browsers like other settings


I both love and hate this.

I'm all for anti-fingerprinting, but I'm also for interactive graphics on the web, and getImageData() is essentially your way of accessing a pixel buffer... It would be better if it was more conditional.

E.g. instead gate the call that ultimately attempts to send any derivative of that data over the network - although I understand that may entail significant complexity in the JS engine. Alternatively, gate getImageData() only if fingerprintable context methods have previously been called, i.e. those with antialiasing, compositing or blending differences, or any other rendering method with potential differences emerging from the underlying algorithm. That way someone just trying to use the pixel buffer as an output doesn't get punished by needlessly causing modals to be thrown in front of the user.


> interactive graphics on the web, and getImageData() is essentially your way of accessing a pixel buffer

I'm sure it's a great feature when you need it, but most websites have no legitimate need for it. Having this sort of feature off/blocked by default and whitelisted on a case-by-case basis makes a lot of sense to me. Are you trying to use a webapp image editor? Makes sense to whitelist it. Are you trying to read a local newspaper article online? Keep that shit off by default.

In fact, that's how I treat even first-party javascript, because most websites are made worse by turning javascript on. It seems to follow a 90-9-1 rule; 90% of websites need no javascript, 9% need first-party javascript whitelisted, and 1% require some 3rd-party javascript.


The problem here is that the main audience impacted by these "permissions prompts" has zero idea whether what the site is requesting makes any sense.

You - as a developer - probably know that an image editing site might need this permission and other sites don't. The average user has zero (ZERO) clue.

They don't know, they don't really care. Instead they arbitrarily accept or deny based on how scary the popup text is, or how annoying it becomes, or how much they depend on the site.

Which is pretty terrible from all perspectives

- Users get prompts that require decisions that they don't have the knowledge or context for.

- Valid sites & businesses lose traffic because some users are scared away by scary sounding prompts that they don't understand.

- Malicious sites still get a broad swath of data on users who don't give a rat's ass about the prompts and have been conditioned to just always hit yes.

I think this style of implementation is basically always indicative of a failure.


I think that is basically why Vista's UAC prompt was so hated. It popped up in so many places that may or may not be relevant to the user's safety, and users didn't really have a clue whether the request was valid or not. They ended up just accepting everything, or asking "ok google, how to disable the damn UAC prompt".


Whether or not you need it is tangential imo. People should be able to experience cool interactive visuals on websites without sacrificing their privacy


I wonder whether introducing slight per-pixel-per-draw noise into the values could be used here, to mask this sort of detail? Or would you be able to average it out somehow


In general noise is never the answer to fingerprinting. It makes gathering "accurate" data a little harder; it doesn't stop statistical analysis from revealing the truth.


This totally works as an approach! Brave does it for canvas data and a bunch of other web APIs as well. The "farbling" noise is deterministic per profile/origin combo to prevent being able to average it out across multiple page loads, but otherwise random.

https://brave.com/privacy-updates/4-fingerprinting-defenses-...
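
A toy version of the idea (definitely not Brave's actual implementation) might look like:

    // "Farbling" sketch: flip the lowest bit of a small fraction of channels
    // using a pseudo-random sequence seeded per (profile, origin), so the
    // noise is stable within a site but useless for cross-site matching.
    function farble(imageData, perOriginSeed) {
      let s = perOriginSeed >>> 0;
      const next = () => (s = (s * 1664525 + 1013904223) >>> 0); // simple LCG
      const d = imageData.data;
      for (let i = 0; i < d.length; i += 4) {
        if (next() % 16 === 0) d[i] ^= 1;   // perturb ~1/16 of red channels by one bit
      }
      return imageData;
    }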


This allows running a fingerprinting script on hundreds of domains and filtering out the noise, though.


If you own hundreds of domains, you're welcome to try, but there's no good reason to. Fingerprinting scripts rely on a canvas being identical between sessions to identify that the same user is even there in the first place. If you can already single out one user's activity across a hundred domains, you don't need to fingerprint them any further.


>Or would you be able to average it out somehow

Isn't it pretty obvious? Generate the same image 1000 times and take the median pixel value for each pixel to get the "real" image.
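
In sketch form, assuming the noise really were independent on every call:

    // Repeated sampling plus a per-pixel median recovers the underlying
    // image if each call adds fresh, independent noise.
    function denoise(samples) {              // samples: array of Uint8ClampedArrays
      const out = new Uint8ClampedArray(samples[0].length);
      for (let i = 0; i < out.length; i++) {
        const vals = samples.map(s => s[i]).sort((a, b) => a - b);
        out[i] = vals[vals.length >> 1];     // median value for this byte
      }
      return out;
    }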


If the noise did not change between individual calls but only on page reloads, it would be pretty hard to get 1000 samples.


how do you ensure the noise doesn't change between calls? if I want to fingerprint how the letter "a" is rendered, how do you ensure that I can't try drawing "a1", "a2", "a3", etc. which are "different" draw calls, but still allow me to build a composite image of what "a" looks like?


I think you're right on both... adding noise would reduce the accuracy of the fingerprint, but broader biases would continue to be detectable. For instance, I'm aware of some quite significant biases between Firefox and Chrome in antialiasing fillRect() for subpixel values as the dimensions drop below 1x1 pixel, which only a distracting amount of noise would be able to conceal for one sample. I know that's already detectable via the user agent string, but I suspect those who are focused on collecting those biases will find strong ones for other data points.


Pages without a large visible (z index) canvas don't need this info.


I've also encountered problems with getImageData(). I just want to be able to blit a sprite sheet for god's sake. It would be tolerable if there were a couple pre-defined functions that could handle blitting, compositing and other operations, and return an HTMLImageElement with the same permissions as the source.

The problem I foresee with getImageData() and gl.readPixels() is that despite all this security, it's possible that they leak some data through the OpenGL implementation, like with a Spectre/Meltdown-type attack. Imagine there is some cached data on the GPU that lingers between draw calls, I don't know.


Gimping features like this by default is going way too far and will only lead to giving Chrome more dominance.


What do you mean "by default"? Unless you have resistFingerprinting enabled (not the default), WebGL/canvas should behave as expected. If you're the type of developer that takes offense to this (i.e. the user agent acting in the manner that the user would like it to act), I think that's going too far for you as a developer, and it would only lead to the dominance of your competitor ;)


The problem starts when the user has not been educated about the potential drawbacks of enabling such a feature. In my experience, the privacy.resistFingerprinting option is the leading cause of angry Firefox users and unwarranted negative reviews, despite being an obscure feature that can only be enabled from about:config.

Things will break in unexpected ways, while web developers are the ones expected to spend their time offering support for a browser that has been rendered broken.

They need to be very careful about how these features are presented, and evaluate how they affect the entire web ecosystem.


>They need to be very careful about how these features are presented, and evaluate how they affect the entire web ecosystem.

It's hidden behind an about:config option (rather than being in the settings page), and the linked article literally says "It is likely that it may degrade your Web experience so we recommend it only for those willing to test experimental features". What more do you want?


> What more do you want?

Inform users when they silently pass garbage through web APIs such as HTMLCanvasElement.toDataURL(), when it happens. And the UX of that should also be very carefully considered.

Otherwise you might end up with some critical document scans on a government website being uploaded as striped nonsense images without your knowledge, and the "may degrade your Web experience" that you glanced over a year before when you enabled the option may not cut it when your Visa application is delayed or rejected.


> Otherwise you might end up with some critical document scans on a government website being uploaded as striped nonsense images without your knowledge, and the "may degrade your Web experience" that you glanced over a year before when you enabled the option may not cut it when your Visa application is delayed or rejected.

I use a different (from my regular privacy optimized) unmodified browser for such websites and use cases.

While this is clearly not a perfect solution, I also subscribe to the adage[0], that “perfect is the enemy of the good”[1]

[0] https://www.thefreedictionary.com/adage

[1] https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good


>Inform users when they silently pass garbage through web APIs such as HTMLCanvasElement.toDataURL()

>[...] when your Visa application is delayed or rejected.

Isn't that what the prompt (pictured in the article[1]) is for? It doesn't always show up, but AFAIK it only does that when the page tries to grab canvas data before the user has interacted with the page. For a page where you're uploading documents, that seems unlikely.

[1] https://user-media-prod-cdn.itsre-sumo.mozilla.net/uploads/g...


It can auto decline the canvas request and return fake image data from the API by default when privacy.resistFingerprinting is enabled. privacy.resistFingerprinting.autoDeclineNoUserInputCanvasPrompts must be set to false to always show the popup, and even then they shouldn't serve fake data when the user declines the request, but throw an error for the API call.


>It can auto decline the canvas request and return fake image data from the API by default when privacy.resistFingerprinting is enabled. privacy.resistFingerprinting.autoDeclineNoUserInputCanvasPrompts must be set to false to always show the popup

Right, it can auto-decline it in certain circumstances. As the name suggests, it auto-declines it when there isn't any user input. That seems fairly reasonable to me, and is unlikely to cause issues with you uploading documents for a visa application (you need to interact with the site to upload the document in the first place). That said, I was playing around with it using various CodePen demos and discovered that even if you interacted with the page, if the page was in an iframe it would never show the popup. That might cause issues in certain circumstances and I do hope it will get fixed.

>and even then they shouldn't serve fake data when the user declines the request, but throw an error for the API call.

Whether that's the best approach is debatable. For the use case of uploading a document, I agree that would be the best behavior, but for other cases (ie. it's trying to display something), an exception would likely crash the app. In many cases (eg. google maps), the garbage data doesn't interfere with my use of the app, and crashing the app would be far more disruptive.


A better UX would be: "I'm sending this image instead of the original, because of fingerprinting. Are you OK with that? Yes / No, send the original image this time."

This would prevent problems but I don't think it can cope with the number of requests in the normal flow of web browsing.


> A better UX would be: "I'm sending this image instead of the original, because of fingerprinting. Are you OK with that? Yes / No, send the original image this time."

I'm presuming you want the browser to somehow detect whether the garbage image data ends up in a POST request? I don't see how you can implement that in a reliable way, considering there are dozens of ways to go from canvas data to a POST request.


Some flag on the data carried on along the line (I remember Perl's tainting) but I'm not holding my breath.


Give the settings page a "blog" (like the extensions page) describing each setting that has been changed manually, in a way users can understand, so that they might learn the drawbacks and can restore it.


1. it doesn't change any of your settings. If you want to revert it, all you have to do is set the about:config value back to 0

2. The linked blog post already lists the things it does (although it's non-exhaustive)


People follow instructions from a web page that tell them how to change the about:config values. Then they forget what they've changed.

I just noticed there is a "Show only modified preferences" which is wonderful. It shows me uhh.. a few hundred things I haven't changed myself.

I really can't remember what I've changed.

(5 minutes later)

I remembered: last time I looked, the about:config right-click context menu was replaced with the rather useless default webpage menu. Kinda funny, as I was looking to disable dom.event.contextmenu.enabled

Putting "enabled" and "disabled" behind the preferences feels kinda silly. It would make more sense if the gui replaced true/false with disabled/enabled.


> Things will break in unexpected ways

Like the webgl max texture size dropping to 2048. I wish they'd give that one a bump.


If this stays as an option then that's fine.


If you want even better protection from tracking and fingerprinting, I recommend arkenfox user.js [1]. It's a configuration file for Firefox. I have created tmpfox [2], a simple program that creates a temporary Firefox profile in /tmp and installs arkenfox user.js and some plugins I find useful.

[1] https://github.com/arkenfox/user.js

[2] https://github.com/cmitsakis/tmpfox


Are these settings available individually? For example, I would be happy with most of these, except I can't live with the UTC timezone and no site-specific zoom. I would probably also keep the Performance API available.

Maybe this will become available when they roll out more broadly with a configuration UI.


If the site notices your canvas data is returning garbage, but your timezone is somewhat legit (ie. not UTC), then they can conclude you have canvas protection enabled but not timezone spoofing. That can be used to build a fingerprint of your browser depending on what fingerprinting protection settings you enabled.


This is true, but the fact that I have canvas disabled must give far less entropy than actually getting a canvas fingerprint. Presumably this is true for each individual option. Even with correlations between the different options from an individual point of view it is likely that each disabled item is a pretty decent drop in entropy. Especially since you can probably guess my time-zone and IP are very highly correlated anyways.

I think the main downside to this is that it hurts the people who do want the most protection. If 5% of people enable max settings then they have a reasonable set size. However if only 1% do and the other 4% opt out of a few options that is a dramatic privacy reduction for those users without anything they can personally do to improve it.


The goal is to provide a testbed of Tor features so that Tor is not broken when they update their Firefox ESR version. It has a secondary behavior of making Firefox roughly look like a Tor user thus slightly increasing the population of possible Tor users. Allowing people to select individual PRF features would make you look extremely unique.


> It has a secondary behavior of making Firefox roughly look like a Tor user thus slightly increasing the population of possible Tor users.

Not really, considering you can just check if the IP is a Tor exit node.


I tried a before-and-after on the EFF Cover Your Tracks tool[0].

Before: One in 69970.67 browsers have the same fingerprint as yours. Currently, we estimate that your browser has a fingerprint that conveys 16.09 bits of identifying information.

After: One in 104957.5 browsers have the same fingerprint as yours. Currently, we estimate that your browser has a fingerprint that conveys 16.68 bits of identifying information.

So according to that, my browser is more fingerprintable after enabling the setting!

I didn't expect this experimental feature to be a silver bullet, but I certainly didn't expect it to make me more unique. I'm not sure what to think of that.

[0] https://coveryourtracks.eff.org/


The pref makes you look like a Tor user, mostly. So yes that population of users is pretty small. The Firefox/Tor privacy protections favor uniformity over randomizing everything.


Sometimes resistFingerprinting can break a site, but rarely. If it happens, the addon "Toggle Resist Fingerprinting" [1] can be helpful for temporarily deactivating it with a simple click on a button, instead of having to go to about:config and change "privacy.resistFingerprinting" to "false" manually.

[1] https://addons.mozilla.org/en-US/firefox/addon/toggle-resist...


What bothers me is that RFP breaks many addons as well. For example, the reduced timer precision breaks Surfingkeys on Windows (vim combinations behave erratically, jerky scrolling, etc). Another example: the Alt key is completely disabled by RFP, as some national keyboard layouts can be used for fingerprinting. [1] As a result, hotkeys with Alt become inaccessible to addons, etc etc etc

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1598862#c3


The problem with a lot of these attempts at fingerprinting prevention is that they cause additional data which can be used to more accurately fingerprint users.

getImageData() is blocked - datapoint

Any detectable difference from what a “regular” browser would return is another point of entropy.


It's a problem to be aware of, definitely.

However, the accuracy of device fingerprinting with `getImageData()` is as far as I can tell a lot higher than the accuracy from trying to fingerprint people based on whether they're returning blank data from that call.

If turning off a feature reveals a new 3 bits of information, but leaving it on would have revealed 5 bits, then it's still probably a good idea to turn it off.

Again, not to say that people shouldn't care about those 3 bits, they should. But it's not necessarily a waste of time even if a site tries to use anti-fingerprinting as its own metric. It only becomes a waste of time if the anti-fingerprinting is more unique than leaving the holes open.


Yup, I agree with you about this. It’d be interesting to do a deep dive into a library like FingerprintJS and see what has the most weight in terms of uniqueness. Maybe getImageData is worthwhile blocking, but perhaps other APIs will increase the amount of entropy.


If the default for Firefox is that it blocks these, then you don't really get a useful datapoint.


At the moment it’s not the default though. So people who enable this feature will, ironically be more unique and therefore more accurately fingerprintable.


"Dear Valued Customer, your browser does not appear to support our online banking application. Please use Google Chrome. Thank you."



Probably not online banking, but a lot of game applications or media-intensive applications in the browser are going to get real squirrely if they cannot detect the user's configuration.

Even things like the GPU make and model can end up necessary because the webgl mechanisms for determining those things are allowed to lie (i.e. I've seen cards that report in the gestalt data that they allow various features, when those features are in reality implemented in software and therefore basically unusable).


You can test the effectiveness of this setting using the EFF's Cover Your Tracks tool: https://coveryourtracks.eff.org


Stuff like this is why I hope Firefox remains viable as a browser long term. Chrome is now what? 80% of the browser market share and they have no incentive to protect user privacy.

I'm really worried that we're going to head down the road of chrome becoming the only browser anyone tests their site against and we're going to go back to the bad old days of IE 6 compatible sites that are completely broken in other browsers.


I would agree with other posters that it's already like that. reddit.com, one of the biggest websites on the internet, will ask if you want to continue in Chrome or the app if you access it from a mobile phone, essentially calling the mobile browser Chrome.


To be fair to the internet at large, I'm 100% convinced that whoever develops the Reddit website intentionally makes it obnoxiously unusable. Even on a relatively beefy desktop I can't scroll past more than five or six videos before the video player becomes unusable. Opening a reddit link on mobile is a guarantee to get an ad for their app that requires several click to bypass.

It's like they're deliberately ruining the experience in everything but their app to feed their ever growing hunger for more user data. It's the worst website I regularly visit by a mile.


Even besides the performance issues, the UI is unusable (at least for people without accounts.) Many times I have found reddit links that seem relevant in my search results; I click through to the link to read the reddit discussion, read one or two comments (because that is all their UI will fit onto my screen) and scroll down to read more. Suddenly I'm looking at another reddit discussion about a different topic entirely. What the fuck?

Incidentally the problem (bug? feature?) goes away if you have javascript disabled.


It used to be like that, but not anymore. I just checked and if you access it through Firefox, it will have Firefox in the prompt.


From what I remember, it labeled the option "Browser" but always used the Chrome logo for it.


It's already like that TBH, especially with really important sites like banking or government services.

That in particular makes me nervous sometimes. If the implementation can't even run properly on other browsers, I don't want to know what other corners they're cutting behind the scenes.


According to this[1], Firefox is under 5% now. It’s already too fringe for some sites to worry about.

If those numbers are right, Safari has about 5x the number of users. Realistically, Safari is our only hope.

[1]: https://gs.statcounter.com/browser-market-share


Yet for most, the inconvenience of having to port over all of their profiles, history and cookies will be enough to keep them on Chrome. Most can't be bothered with installing another browser, or aren't even aware that they can import Chrome profiles into Firefox.

Even for me, some stuff I keep on Chrome since it's too connected to all the business/SaaS/hosting logins and such.


Those days have never gone away. All browser rendering and JS engines have their own quirks and bugs. Even a very simple website with only basic CSS will look different in all three desktop browsers, often different enough to be broken.


Same here. And I will continue to use it


For every step in the right direction on privacy, Mozilla has taken 2 steps back. They're a bit like Apple in that they market themselves as the guy who cares about privacy, when in reality they're just slightly better than Google.

I don't believe you can care about privacy with half your org being hyper-political lefty activists, and Mozilla seems to be infested with them. Having monitored the Firefox reddit for 2 years, FF devs & leadership are often at odds with FF users who are people who want privacy above all.


Should we really care what the politics are internally at an organization? They make a solid browser that works well for me. Maybe I am just in the dark on areas they have intentionally reduced my web privacy but I cannot think of any off hand


This is not sustainable and fingerprinting is just one side of this whole fragmented mess we're in. Browsers should present very few fingerprintable attributes by default. By now I'm convinced user-preferred languages is the only really defensible header. Everything else? Ask for permission.

The way we're doing capability permissions on the web (to the extent browsers do it at all) is just broken. A barrage of piecemeal modal dialog boxes is not the way forward. It needs to be drastically simplified. A website should be treated exactly like any other kind of app: if it needs to use extended features, it should put that into a manifest so the browser can provide a specific list of items for the user to approve or reject.

If none of these permissions are given, sites should be extremely restricted in what they can do, including cookies and localStorage.

Let's get rid of the User-Agent and codec compatibility headers. The User-Agent especially is already useless, and both should be replaced entirely by an improved feature detection system.

There are only 3 major browser vendors left. They could fix this within months. This is not a technology problem, it's a question of will and ad revenue.


I'd like to have this capability but not if it makes typical web browsing annoying with too many alerts. I'd prefer to have the feature work by default with a blacklist of sites known or likely to do fingerprinting (ie larger social and media sites). Of course it can also have an optional strict mode for those who want a higher degree of anonymity in exchange for more disruption of their browsing.

My personal concern with fingerprinting isn't so much any individual low-traffic site recognizing my browser. It's the higher-traffic sites working together to aggregate profiles. I don't need "zero tolerance" anti-fingerprinting. I just want to make it harder for big data aggregators to compile highly accurate, large population databases. Hopefully, a sweet spot can be found in testing which is minimally disruptive for typical users but frustrates data aggregator's ability to compile highly-lucrative data products across sites. I'd imagine just applying anti-fingerprinting to the 1,000 highest traffic websites might be enough to cut the profitability of cross-site aggregation significantly.


>I'd like to have this capability but not if it makes typical web browsing annoying with too many alerts. I'd prefer to have the feature work by default with a blacklist of sites known or likely to do fingerprinting (ie larger social and media sites). Of course it can also have an optional strict mode for those who want a higher degree of anonymity in exchange for more disruption of their browsing.

That sounds like brave/firefox's "tracking protection", which blacklists well-known fingerprinting scripts

>My personal concern with fingerprinting isn't so much any individual low-traffic site recognizing my browser. It's the higher-traffic sites working together to aggregate profiles. I don't need "zero tolerance" anti-fingerprinting. I just want to make it harder for big data aggregators to compile highly accurate, large population databases. Hopefully, a sweet spot can be found in testing which is minimally disruptive for typical users but frustrates data aggregator's ability to compile highly-lucrative data products across sites. I'd imagine just applying anti-fingerprinting to the 1,000 highest traffic websites might be enough to cut the profitability of cross-site aggregation significantly.

What counts as a "low-traffic site"? how would this work with tricks like CNAME cloaking?


I think virtualized rendering might be the future, both for privacy and security. MS has (had?) Application Guard with Edge, for example, although it wasn't for privacy. Hardware-enforced boundaries.

Although, ultimately my opinion is this is a legal problem. The entities that are a threat to my privacy that also use fingerprinting operate within the boundaries of criminal law.


Somewhat tangential — now that webrender is in, are there any recent benchmarks for Firefox comparing against Chrome and Safari for real world web pages?

I’d also love to see interoperability with history, keychain, and bookmarks with other browsers. Would make it far easier to switch between.


> The browser window prefers to be set to a specific size

Any idea how this works? Can you still set your browser window to any size you want in your window manager, does it misreport it? Could that not cause rendering issues too?


The two mitigations I'm aware of are:

1. by default, any non-maximized windows will default to a 1000x1000 viewport. This is consistent with how the tor browser works. Of course, this doesn't do anything when your window is maximized. On tor browser they warn you not to maximize your window for this reason. The idea here is that if everybody's window is 1000x1000, you won't be able to fingerprint based on people's monitor sizes, OS decoration sizes, and the user's window size preferences.

2. you can optionally enable a feature called "letterboxing", which rounds the viewport size to multiples of 100px. This works even if you maximize/resize your browser window.
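
Conceptually it is just coarse rounding (simplified sketch; the real logic lives inside the browser, not in page JS):

    // Letterboxing sketch: the content area is rounded down to coarse steps
    // and the leftover space becomes blank margins, so the reported viewport
    // no longer encodes the exact window or monitor size.
    function letterboxedViewport(width, height, step = 100) {
      return {
        width:  Math.max(step, Math.floor(width / step) * step),
        height: Math.max(step, Math.floor(height / step) * step),
      };
    }
    // letterboxedViewport(1387, 829) -> { width: 1300, height: 800 }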


Seems like using a viewport size of 1024x768 would be less unique (and thus better resist fingerprinting) than exactly 1000x1000.


Presumably the idea is that other features of fingerprinting resistance (eg. UTC time zone, missing performance data) is already going to make it painfully obvious that you have it enabled, so there's no point disguising that data point. The only thing that using 1024x768 would help is against naive analytics scripts.


That's around 1/8th of the surface area of a 4K monitor so I hope a few other standard sizes will emerge


1000x1000 is measured in device independent pixels, I believe. ie. if you're using 2x scaling the window on your desktop would actually be 2000x2000.


I can't seem to find this option in firefox, perhaps it refers to the feature in Tor browser that restricts the window size to a few most common resolutions.


Given that not many people use Firefox, wouldn't the usage of fingerprinting protection within a given geography (can be inferred from IP address) be considered a fingerprint itself?


Yeah I would rather FF provide falsified, plausible data than no data. The website should ideally not be able to detect fingerprint protection.


It seems like that's basically the approach they're already taking. Items such as these read like plausible fake data:

> Your timezone is reported to be UTC

> Not all fonts installed on your computer are available to webpages

> Your browser reports a specific, common version number and operating system

> The Media Statistics Web API reports misleading information


A user with that particular combination of settings is very likely to be running with fingerprinting resistance enabled. Your parent is hoping for something where the site is unable to tell whether the feature is enabled.


Librewolf does this. It's a patchset for Firefox that removes all the bad stuff and makes a bunch of security/tracking improvements. It fakes your agent, timezone, screen resolution, disables a bunch of APIs that can be used to track you, etc.

It _does_ break some websites, but I just use another browser for those one-offs.


Setting privacy.resistFingerprinting to true turns YT's dark mode off. Resetting it to false brings dark mode back.


Guessing this will be an unpopular opinion, but anyone troubled by this war on web standards and web developers? Like, if we're just going to take stuff that should work, and decide unilaterally now it won't work because <X> ... can we really rely on anything to work?


Unfortunately Firefox' enhanced tracking protection makes the browser tell websites to use a light color scheme instead of indicating no preference, which would have the same privacy benefit.


This messes up alt modifier (event.altKey is always false), which is an instant deal breaker for me as I have many `alt-*` shortcuts in tridactyl.


The irony of this is that you'll end up with a fingerprint that is probably more unique than just running standard Chrome at 1920x1080.


The flaw with this is that the set of fingerprintable features is far more than just the browser brand you're using and your monitor resolution:

1. You're forgetting about other fingerprintable features. The obvious one would be WEBGL_debug_renderer_info, which leaks your GPU model. That might be fine if you're running Intel UHD graphics, but for someone with high-end discrete graphics it's quite revealing. There's also more[1]

2. What if you don't have a 1920x1080 monitor? If you have a 1440p or 4K monitor, then what? JS APIs let you get both the viewport size and the monitor size (see the sketch below), so that's going to make you stick out as well. You can try to mitigate this by running in a VM, but then your GPU model would show up as "VMware SVGA 3D" or "VirtualBox Graphics Adapter", which also makes you stick out like a sore thumb.

[1] https://browserleaks.com/javascript
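
To make point 2 concrete, these are the size-related values a page can read without any permission (a short sketch):

    const sizeSignals = {
      viewport: [window.innerWidth, window.innerHeight],   // window/viewport size
      screen:   [screen.width, screen.height],             // monitor resolution
      avail:    [screen.availWidth, screen.availHeight],   // minus taskbar/dock
      dpr:      window.devicePixelRatio,                   // scaling factor
    };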


The overall attack surface of fingerprinting is so huge that even if some fantasy world in which all of WebGL, Canvas, Web Audio, WebRTC, etc., could be permanently removed, you're still fingerprintable. The time of day you visit specific sites is enough to eventually fingerprint you. This data can and is collected on back-ends and then shared across data networks. It doesn't matter what you do in the client, you're fingerprintable.


>The overall attack surface of fingerprinting is so huge that even if some fantasy world in which all of WebGL, Canvas, Web Audio, WebRTC, etc., could be permanently removed, you're still fingerprintable.

I guess that's true in the abstract, but the more fingerprinting vectors there are, the easier it is to identify a specific person. The best-case scenario is something like the iPhone, where each model behaves identically and there is a limited number of user-configurable settings, such that there are tens of thousands of people in your city alone that have the same model/settings (eg. timezone/dark mode on/off). You can do all the fingerprinting you want, but for a medium-traffic site you're probably still going to get hundreds or thousands of users with the same fingerprint.

>The time of day you visit specific sites is enough to eventually fingerprint you.

How does this even work? If I'm on a VPN (ie. shared IP with hundreds of other users) and have total cookie protection (separate cookie jars for each site), it's effectively impossible to tell whether I'm one user visiting a dozen sites, or a dozen users viewing one site each.


The "browser window prefers a specific size" part is infuriating to say the least. I wonder what the person who came up with this detail was thinking. Can't you just leave the window size up to the user but still keep the other changes?

I'd like to believe that all the other changes still contribute plenty towards obscuring the fingerprint, but there's no way I can adjust to manually having to resize the browser window every time I open a new one.


The Tor Browser (based on FF) has had this implemented for some time now. It resizes the window on a coarse (I think 50px) grid instead of by single pixels. That way the window sizes are adjustable by the user, but still less unique than pixel-adjusted sizes.


Instead of changing the window size, a browser could display the site with a fixed width (e.g. 1280 pixels) centered inside the window. This way you can have your window fullscreen or any reasonable size, but the site will "see" a standard 1280-pixel-wide screen.

Of course, 1280 pixels should not be hardcoded; there could be a choice between several popular sizes.


AFAIK that's exactly how Tor Browser works.


It's an interesting solution, but if I were to guess, I think the users who surf with anything but a full-size window are in the minority.


Right, but there are different monitor sizes out there (1366x768, 1920x1080, 2560x1440, 3840x2160) so that's still a fingerprinting vector. In addition, because of various user settings (eg. whether they're using compact theme or not, various OS settings), "full screen" on a 1920x1080 monitor for one person might result in a different viewport size for another. That also becomes another fingerprinting vector.


It's mainly a problem for macOS users, since we tend to adjust the application dock to quite individual preference. Windows users will with full-size window mostly end up in the same few groups. Linux users to a lesser degree.


I use tree style tabs and though my browser is full screened, I sometimes get popups from people and websites trying to be clever, thinking that the only reason a window seems full screen but isn't must obviously be those darned dev tools.

I quickly close websites like that (and blacklist them in my pihole if they're particularly offensive in their messaging) but for people like me these protections do actually help.


I'm not sure if that's still the case, but I definitely remember Tor Browser warning you when you try maximising the window, for this reason.


Probably but there is a lot of space even on a 15" laptop screen. I usually have my browser windows half width. The other half is an editor, terminal, whatever. I'm full screen only for sites that actually use all the space. AWS and Google console, vSphere, spreadsheets, etc.


I'm not sure this would work, in general. A page will often render slightly differently based on the size of the window, so whatever size is "reported" to the webpage would end up being how big the page is rendered.


Only if there's active use of e.g. JavaScript or some server-side scripting that needs the info for some processing, for example using JavaScript to place a floating modal flush against bottom/right corner. The browser itself doesn't need to relay the dimensions "to the webpage" in order for HTML/CSS to render as any specific size, nor to achieve responsive/scaling UI design that adjusts to the window size.


It does, via media queries that conditionally include resources depending on the viewport size.


What I'm trying to convey is that it's a misunderstanding that the server renders the page for the browser, and that reporting innerHeight/Width properties to the web server ("the webpage") is a requirement for having anything pop up on the screen.


I was a long-time FF user and yesterday I switched to Brave. Why? Because of ridiculously high CPU usage of FF on my Mac (M1, to be specific). I was surprised to realize that Brave uses less CPU when playing a 4K video, compared to a FF with no tabs!

I really wish Mozilla would address battery/CPU issues of FF first. I'm not an ordinary user, but I can see why many people would just pick Chrome/Brave/Safari if they realize FF drains their battery too fast.


Does Firefox not have hardware decoding support on the M1? Did the API change when Apple changed architectures or something?

Firefox and Chrome have the same CPU usage on my machine when playing 4K60 video (downscaled to 2K for even more load). Tested using this video: http://ftp.vim.org/ftp/ftp/pub/graphics/blender/demo/movies/... (had to hack in a <video> element on the parent page to get it to play instead of download). Firefox uses about 150% of a CPU core, Chrome about 160%. CPU is a i7-7700k, GPU is a GTX1080 but I'm on Ubuntu so the latest Nvidia update probably broke something to get these load numbers.

Many bloated Javascript websites are slower on Firefox for sure, but video playback is one of those things it seems to handle fine (or, just as badly as their competitors do).


TBH, Firefox seems snappier than Chromium-based browsers. Check your extensions.


Are you saying specifically on the M1 Mac or on all machines? I have not personally noticed Firefox ever taking a lot of CPU on my machine


Are you sure it was using hardware acceleration?


yes it was.


How does/will this new feature compare against an addon like CanvasBlocker?


It's far more comprehensive (covers more fingerprinting vectors), and since it's baked into the browser you're far less likely to run into webextension limitations causing your protections to be defeated[1]. Finally, being the "official" implementation, and with only an on/off flag, you won't get fingerprinted based on which fingerprint-mitigating extensions you have installed (e.g. maybe you have a canvas blocker installed but not a WebAudio blocker?).

[1] https://palant.info/2020/12/10/how-anti-fingerprinting-exten...
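
For context on [1]: such extensions generally work by injecting a script that monkey-patches the canvas read-out functions before the page runs, and the weakness is that pages can sometimes reach the un-patched originals or simply run first. A generic sketch of that approach, explicitly not CanvasBlocker's actual implementation:

    // Generic shape of what an anti-fingerprinting extension injects (not CanvasBlocker's code):
    // wrap toDataURL so canvas read-outs return slightly randomized pixels.
    const originalToDataURL = HTMLCanvasElement.prototype.toDataURL;
    HTMLCanvasElement.prototype.toDataURL = function (this: HTMLCanvasElement, ...args: any[]) {
      const ctx = this.getContext("2d");
      if (ctx) {
        // add imperceptible noise so the canvas hash differs on every read
        const pixel = ctx.getImageData(0, 0, 1, 1);
        pixel.data[0] = (pixel.data[0] + 1) & 0xff;
        ctx.putImageData(pixel, 0, 0);
      }
      return originalToDataURL.apply(this, args);
    };
    // Fragile by design: a page can grab a pristine prototype from a freshly created iframe,
    // or run its fingerprinting code before the extension's content script is injected.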


> Not all fonts installed on your computer are available to webpages

I recognize that I'm probably in the minority for wanting to avoid web fonts in my web pages, to make my web pages more lightweight / faster / environmentally friendly. I prefer to use locally installed fonts, a.k.a. native fonts or system fonts. Unfortunately, this privacy measure may somewhat interfere with that, to an extent that depends on the details of how it's implemented.

I'll accept the privacy win, since surveillance capitalism and trackers also significantly increase bandwidth / energy usage and GHG emissions, but it's always sad when some efficiencies have to be discarded because we've turned the internet into a panopticon (or for other reasons related to humans behaving badly).


This breaks dark themes, so nope.


I've yet to see a satisfying, realistic answer to what to do about fingerprinting. Every proposal I've seen would A) cause significant harm to legitimate use of Web API features, and B) not actually make users un-fingerprintable.

So WebGL/Web Audio/et al. leak information about your system. What are you going to do about it? Argue that these sorts of features should be reserved for native apps and remove them completely, or hobble them to the point that they are basically useless? (At which point you might as well just remove them completely.)

Okay, let's do that. Let's force every WebGL developer to start making native apps instead. We do not have a write-once-run-anywhere platform for developing applications that is anywhere near as reliable as the Web. So now, developers need to massively explode their project configuration to build all the native versions of their app for all the platforms that were previously supported by their web app. Every new release of their project has to go through a walled-garden approval process. Hope you aren't doing anything Apple or Google don't approve of!

It was painful, but here we are: users aren't fingerprintable.

Wrong. It's even easier now. The native application platforms give the nefarious data collector so much more access to be able to fingerprint users. Hell, you are often readily given a unique user identifier by the app platform. Even if the user resets that ID, it's not going to bug you too much because you'll just re-fingerprint them and then be able to correlate the new data to the old. Oh, you're not allowed to do that by the platform ToS? How long does it take to catch you doing that? And do you actually get banned, or told to cease and desist in favor of purchasing the same data directly from the platform? It's so nice of Google to make suggestions on what not to do here, wink wink nudge nudge https://developer.android.com/training/articles/user-data-id.... How would they even know if you were bridging advertiser IDs in your own database completely out of their control?

And your browser sessions are still fingerprintable, too, because even the most basic of common information that has been transmitted to HTTP servers since day 1 is enough to fingerprint users.
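
To make that concrete, here's a minimal server-side sketch (Node/TypeScript assumed; nothing beyond the standard request headers and the connection itself is used):

    import { createHash } from "node:crypto";
    import { createServer } from "node:http";

    // Coarse fingerprint from information every HTTP request has carried since day 1:
    // IP address, User-Agent, Accept-Language, Accept-Encoding. No JS APIs involved at all.
    createServer((req, res) => {
      const signal = [
        req.socket.remoteAddress,
        req.headers["user-agent"],
        req.headers["accept-language"],
        req.headers["accept-encoding"],
      ].join("|");
      const fingerprint = createHash("sha256").update(signal).digest("hex");
      res.end(`your coarse fingerprint: ${fingerprint}\n`);
    }).listen(8080);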

You've succeeded in none of your goals to reduce fingerprinting but have harmed legitimate developers massively. All for what? Because of essentialist arguments about how the Web used to be?

The problem is not that application platforms leak data. We are walking, talking data leaks by our very nature of not being 100% literal clones of everyone else. Maybe in the 1960s you could walk into a shop and pay cash for something and reasonably assume that nobody was videotaping you and collecting your credit card number and storing all this information in a database, but that ship sailed a long time ago. The problem is that collecting this data, correlating it, colluding between other data collectors, and selling it off to the highest bidder is not regulated. Look at all the data breaches that credit reporting services like Experian have allowed to happen to them. All of this data is a potential weapon against users and it's just sitting around in woefully underregulated databases, everyone hoping the likes of Equifax know what they're doing (when they have demonstrated on several occasions they don't).

But browsers and web app developers. That's where we need to draw the line. Right.


There is some middle ground here, it doesn't have to be some unrealistic all-or-nothing scenario. Fingerprinting is not solvable in the general case, but I argue there is still value in restricting the kinds of information any random ad vendor can extract from your browser. Speaking with the users' interests in mind, the browser could be made almost as secure as an app ecosystem without sacrificing developer freedoms.

For example, if we had a 3-tiered model, "basic", "web app", and "custom", that would solve most of the problems. For "basic", the browser acts like a featureless monolithic platform: no User-Agent header, no (3rd-party) cookies, no localStorage, no advanced JS APIs, maybe even limited DOM access. Anything that needs access beyond "basic" would trigger a permissions dialog. "web app" would be full JavaScript, but no WebGL and only limited media support. And "custom" could be anything the website asks for, preferably declared in a manifest file (sketched below).
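
Purely as a hypothetical illustration of what such a per-site declaration might look like (the tier names and fields below are invented here, not an existing spec):

    // Hypothetical manifest a site could serve to declare which tier it needs.
    // Nothing like this exists today; it just makes the proposal concrete.
    type AccessTier = "basic" | "web-app" | "custom";

    interface SiteCapabilityManifest {
      tier: AccessTier;
      // only meaningful for "custom": the specific APIs the site wants,
      // each of which the browser could surface in a permission prompt
      requestedApis?: Array<"webgl" | "webaudio" | "webrtc" | "canvas-readback">;
      reason?: string; // human-readable justification shown to the user
    }

    const example: SiteCapabilityManifest = {
      tier: "custom",
      requestedApis: ["webgl"],
      reason: "3D product viewer",
    };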

This would solve a whole host of issues, including security, for the most common cases. The number of websites that need to ask for more than "basic" is limited, at least as far as the typical user is concerned. Over the lifetime of a browser's settings profile that would probably be about 10 sites that need "web app" access.


Obligatory post about https://librewolf.net/

It’s a fork of Firefox with security- and privacy-focused configuration set by default.


> If you discover the (fingerprinting) setting has become re-enabled, it is likely a Web Extension you have installed is setting it for you.

How very helpful of you, extension! I'll wager this "feature" gets baked into all sorts of extensions :)


So that's why I never get caught ban evading on Reddit lol.


Fingerprinting has important uses for preventing abuse. Trying to protect against fingerprinting will someday cause companies to seek more drastic measures, and before you know it communicating with websites will require running their code in a secure enclave, and you can say goodbye to being able to customize the experience or make alternate clients.

Fingerprinting is necessary for an open web to work.



