I have two use cases requiring private browsing, and dealing with both of them is very annoying.
On one hand, I don't want to be tracked. Disabling cookies is fine for that use case, since otherwise, once I open my webmail, everything I do afterwards is tracked, courtesy of analytics code. On the other hand, disallowing cookies leads to all the problems mentioned in the post.
I wish there was a feature "keep multiple tabs, but cookies are not shared between them". Currently I manage with a combination of Firefox extensions and by using two browsers at the same time. But I wish it was easier.
What Firefox extensions are you using? Check out Multi Account Containers on Firefox if you haven't already. It fulfills all my privacy requirements when paired with uBlock Origin and Privacy Badger. https://addons.mozilla.org/en-US/firefox/addon/multi-account...
I tried this the other day because Google Stackdriver claims to need third-party cookies enabled, which is an absolute no-go, but it also refuses to work in incognito mode and even the container mode didn't help, IIRC.
> "keep multiple tabs, but cookies are not shared between them"
That's how Safari does it. It can be a little awkward, in that [eg] if you log into HN, then open a comments page in a new tab, you'll need to log in again -- but I think it's better than Chrome's approach.
There's a middle ground that seems like it'd be the best of both worlds — sharing things between multiple private tabs, but not multiple private windows.
But in two decades, after your data has been sold and bought countless times, to who knows which government? In 30 years, when I'm nearing retirement, I have no idea which political system I will live in, what power companies will have over my daily life, or who has bought all the data mined about me.
Best case, I will not get any insurance because I googled some weird disease symptoms in 2018; for worst-case scenarios, just look 30 years back in Eastern Europe.
And then what does said government do with that data? They can buy and sell my information all they want. I don't care. Nothing has happened to me. Nothing will happen to me. I only wish I had started a business catering to people who live with such great fear, as so many companies have been doing for the past 10 years or so.
The boogie man went away when I turned five years old.
Your analogy is poor. I didn't say I left my computer wide open for bad guys to look at. I said I don't care if they publish my address in the phone book.
I use a more general approach: starting Firefox and/or Chrome within firejail (e.g. firejail --private google-chrome-stable).
https://firejail.wordpress.com
The same approach can also be used with other applications one wants to isolate from the network and/or home data.
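To give a rough idea of the invocations (the paths and apps here are just examples):

    # throwaway session: fresh temporary home, discarded when the browser exits
    firejail --private firefox

    # or point it at an existing directory to keep a separate, persistent sandbox home
    firejail --private="$HOME/jails/browsing" google-chrome-stable

    # non-browser apps work too, e.g. cutting off network access entirely
    firejail --net=none libreoffice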
This sounds like pretty much the same argument that @eganist and I made to Google and Mozilla a little while back before demoing an HPKP supercookie (https://github.com/cyph/hpkp-supercookie) at Black Hat and DEF CON.
Our position was that doing just about anything less than what Chris did here was essentially lying to users about incognito mode's threat model, but if I recall correctly both teams viewed other security tradeoffs (such as carrying over HPKP and HSTS state) as worth considering even if they infringed on incognito's stated purpose.
In the end they did both follow our suggested mitigation for the HPKP issue (before Google turned around and deprecated HPKP out of nowhere ಠ_ಠ), but it isn't surprising to hear that similar issues may still exist.
Well, the timing wasn't 100% random since the newly supported Expect-CT header was HPKP's "replacement", but I do think three years is a ridiculously short turnaround between initially adding support for the feature and killing it, with low adoption as a stated reason.
I'd also say the footgun aspects of HPKP are a weak excuse to kill it, given that nothing really new about them has been discovered that wasn't acknowledged as a consideration in the original spec. If anything, I think it would've made more sense to improve the UX for both end users and admins/devs to reduce the likelihood of deployment mistakes (better documentation and tooling) and the potential for damage when mistakes did happen (e.g. make HPKP error screens skippable like any other TLS errors).
> make HPKP error screens skippable like any other TLS errors
That largely defeats the point. Almost no one knows what to make of those errors. And just training everyone to ignore them makes HPKP pointless.
The problem with HPKP was that it could be used to attack any site on the internet with no way for websites to opt out. Basically, the same problem as with certificate authorities - but worse.
Those issues were known when the spec was written, true. But it was still a dangerous and extremely difficult-to-deploy feature. Good riddance.
> That largely defeats the point. Almost no one knows what to make of those errors. And just training everyone to ignore them makes HPKP pointless.
How so? Are you suggesting that TLS as a whole is pointless because browsers allow users to skip the error screens ("Proceed to balls.com (unsafe)")?
Even if the error screen is skippable, it makes it clear to the user that something is very wrong and that they're advised to abort their usage of the site.
> The problem with HPKP was that it could be used to attack any site on the internet with no way for websites to opt out. Basically, the same problem as with certificate authorities - but worse.
Definitely agreed on that. (One of our BH/DC demos, RansomPKP, showed that you could actually pivot from a server compromise or MitM to deploying ransomware that would brick an entire domain and hold it hostage until you got paid.)
I just think there are much better ways of going about addressing that. Specifically, my proposal to the Chromium team was as follows:
1. Short-term (Chrome 67): Any time dynamic PKP is used, print a console warning that additional requirements are planned to be attached to the use of dynamic PKP with a relevant link.
2. Medium-to-long-term (ecosystem-wide collaboration): Disregard dynamic PKP headers unless the domain in question has some kind of new indicator in the certificate to show that the CA has validated that the site owner is really really sure they want to use HPKP and understands the risks involved (i.e. offload the whitelisting/validation responsibility from individual browser vendors to the broader CA industry).
(And I had some other ideas about roping CAA into the mix to address some specific concerns, but it wasn't critical or the meat of the idea.) The response was kind of handwavy — not so much caring about the footgun aspect (i.e. accidental self-bricking and hostile pinning), but more an entirely unrelated concern about the HPKP implementation being hard to maintain for some unspecified reasons.
> Those issues were known when the spec was written, true. But it was still a dangerous and extremely difficult-to-deploy feature. Good riddance.
I hear this repeated a lot, and frankly I think it's nonsense. I just can't see how anyone with a basic knowledge of deploying TLS would be confused about how HPKP works. The idea that only a veteran sysadmin or crypto expert can understand how to use certs, public keys, and hashes just seems really elitist to me.
Obviously people have occasionally screwed up in the wild, but in those cases I think the fault lies more in the tooling and documentation than in the existence of the standard itself. Further, if we do collectively feel that everyone's hands need to be held, attaching additional requirements to its usage as in my proposal would neatly accomplish that while minimally imposing on all of us who already depend on HPKP in production.
> Even if the error screen is skippable, it makes it clear to the user that something is very wrong and that they're advised to abort their usage of the site.
If users do abort their usage of the site, the site is effectively bricked. If they don't, then HPKP accomplished nothing because users are using the site despite a possible mitm.
I guess a user could use the site, but more cautiously - such as not entering passwords. That's possible - but I'm skeptical that many users would actually do so.
> I hear this repeated a lot, and frankly I think it's nonsense. I just can't see how anyone with a basic knowledge of deploying TLS would be confused about how HPKP works.
It's not that it's hard to understand, it's that it's hard to actually implement. You need to have multiple certs in case one of them gets compromised. And if you mess that up, then you either brick your own site or you need to keep using a known-compromised cert.
HPKP just wasn't worth it for most websites - a reduction in the risk of someone presenting a forged cert in exchange for the risk of accidentally self bricking your website.
> If users do abort their usage of the site, the site is effectively bricked. If they don't, then HPKP accomplished nothing because users are using the site despite a possible mitm.
That's exactly the same situation as any other TLS failure, not at all unique to HPKP in any way that I'm seeing.
It's still effectively bricked for non-advanced users and partially bricked for careful advanced users in the way you noted, but at least users can choose for themselves how to proceed, and admins of bricked sites can give them guidance that doesn't involve following convoluted instructions to navigate about:config or chrome://net-internals.
> It's not that it's hard to understand, it's that it's hard to actually implement. You need to have multiple certs in case one of them gets compromised. And if you mess that up, then you either brick your own site or you need to keep using a known-compromised cert.
More accurately, multiple keys, not multiple certs. All you need is to back up the spare key somewhere without throwing it out, which is a minor annoyance but not at all technically difficult.
If users are having trouble with understanding and/or following through with this, I would start with building a better interface than the openssl CLI (possibly as a certbot command) before deciding that the entire concept of key pinning is somehow inherently too difficult to be useful.
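For reference, the openssl steps themselves are short, just unfriendly. Something along these lines (filenames are placeholders):

    # generate an offline backup key and compute its pin
    openssl genrsa -out backup.key 2048
    openssl rsa -in backup.key -pubout -outform der | \
      openssl dgst -sha256 -binary | openssl enc -base64

    # same hash pipeline over the live cert's public key gives the primary pin
    openssl x509 -in current.crt -pubkey -noout | \
      openssl pkey -pubin -outform der | \
      openssl dgst -sha256 -binary | openssl enc -base64

    # both pins then go in the header (the spec requires a backup pin):
    # Public-Key-Pins: pin-sha256="<primary>"; pin-sha256="<backup>"; max-age=5184000

Wrapping that, plus storing the backup key somewhere safe, behind a friendlier command is the kind of tooling I mean.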
> HPKP just wasn't worth it for most websites - a reduction in the risk of someone presenting a forged cert in exchange for the risk of accidentally self bricking your website.
Yeah, it should certainly be highly discouraged for almost everyone, but getting rid of it after we already have it is a huge step backwards for the 1% of sites with strict enough security requirements to justify it.
> That's exactly the same situation as any other TLS failure, not at all unique to HPKP in any way that I'm seeing.
Yup. But the feeling I'm getting is that browser vendors see this behavior as non-ideal since it trains users to basically ignore the error. Yeah, in theory the user gets to make their own decision. My theory is that almost no user is actually equipped to make such a decision.
> admins of bricked sites can give them guidance that doesn't involve following convoluted instructions to navigate about:config or chrome://net-internals.
I see this as a worst-case outcome - explicitly telling users it's ok to bypass a security warning.
> All you need is to back up the spare key somewhere without throwing it out, which is a minor annoyance but not at all technically difficult.
Not technically hard, but still plenty of ways to mess it up. And once it's messed up, there isn't much of a good way to fix it.
Ah, well that's fair, and I think I'd generally agree with that. I don't have an alternative proposal for handling TLS failures in general, but I think it's silly to arbitrarily make HPKP's UX a special case, and then cite that special case UX as a reason for deprecating it.
How is this UX behaviour a special case? HSTS also requires the brickwall UX, so does the OpenSSH key change scenario.
The original sin the Browsers had is that the initial SSL UI was built by people who had no security UX background because almost nobody had any security UX background. This was the era when PGP was considered usable security technology.
So when HCI studies started being done (e.g. at Microsoft) and came back with the scary result that real users just perceive TLS error dialogs and interstitials as noise to be skipped, there was a problem. Lots of real world systems depend upon skipping these errors. I worked for a large Credit Reference Agency which had an install of Splunk, but for whatever insane reason it was issued a cert for like 'splnkserver.internal' and the only HTTP host name that it accepted was 'splunkserver.internal'. So every single user of that log service had to skip an interstitial saying the name doesn't match. For years. Probably still happens today.
Browsers couldn't just say "OK, that was bad, flag day, now all TLS errors are unskippable" because of the terrible user experience induced, so what happened instead is a gradual shift, one step at a time, from what we know was a bad idea, to what we think is a better idea. That means e.g. "Not Secure" messages in the main browser UI replacing some interstitials, and brick walls ("unskippable errors") in other places where we're sure users shouldn't be seeing this unless they're being attacked.
HPKP was new, so like HSTS it did not get grandfathered into the "skippable because this is already so abused we can't salvage it" state. If you went back and asked HPKP designers "Should we do this, but with skippable UI?" they would have been unequivocal, "No, that's pointless". HPKP and HSTS only improve security if the users don't just ignore them, and the only way we've found to make the user actually pay any attention is to make the error unskippable.
Yes that means "badidea" and subsequent magic phrases in Chrome were, as they say themselves, a bad idea. Because users who know them just skip the unskippable errors and end up back in the same bad place.
Thanks for all the interesting context and backstory; I wasn't aware of any of that.
In any case, if it was unclear, my point here wasn't that I necessarily dislike the brickwall UI. In light of the studies you've referenced, I definitely prefer it, and if it were up to me it would be enabled for all of TLS regardless of how many existing services with broken deployments are out there.
My point is that, if the more secure UX is part of the reason for Google's decision, I would rather have HPKP with a less secure UX than not have it at all.
The more awkward point to me was that HPKP and HSTS were invented to begin with. It's like everyone is sitting on top of a pink elephant, going "Well everyone, we can't acknowledge the pink elephant in the room, but we can make it a nice hat."
The annoying first-run experience can be avoided by putting some files in the new profile, i.e. instead of creating a new empty profile every time, clone one that is already initialized.
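A rough sketch of that with Chrome's --user-data-dir flag (the template path is just an example):

    # one-time: create a template profile and click through the first-run prompts
    google-chrome --user-data-dir="$HOME/.config/chrome-template"

    # per session: copy the template and launch a throwaway instance from the copy
    tmp=$(mktemp -d)
    cp -a "$HOME/.config/chrome-template/." "$tmp/"
    google-chrome --user-data-dir="$tmp"
    rm -rf "$tmp"   # discard when done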
I noticed a while back that Safari disallows HTML5 storage in private browsing mode - a dead giveaway. Chrome and FF allow it but probably clear storage when the session ends.
My main use case of private browsing is to avoid having to agree to the ubiquitous tracking websites demand, when they use every dark pattern in the book to make opting out hell. A private session can let me agree and shrug it off.
I do wish Firefox especially, as it positions itself as the user's champion, would make its 'private' or persistent containerized mode standard, with an option to whitelist and opt out for selected trusted sites.
Doesn't that assume that the only way they track your data is with cookies? Perhaps by agreeing you are also allowing them to track you based on browser fingerprint / IP address, so private browsing wouldn't be foolproof for this use case?
You are right, and I have considered that, but what is the practical alternative? At least if they do, they seem to have the decency to ask for reconfirmation each visit.
I'm disappointed (though not surprised) that extra measures are actually required for true incognito mode. Hopefully people can share other good ways to accomplish this.
tldr: Why should I be made to feel dirty if I don’t want to be surveilled?
If the new profile is sufficiently similar to your actual profile, the browser’s “fingerprint”, combined with network and machine information, might still register.
On a non-technical note, I have a problem with browser makers choosing “sleuth” or risqué masquerade-style icons to represent “private” browsing. I turn on private browsing and the browser makes me feel sneaky or shifty or shady or suspicious (I’m just now noticing how many of those words start with “s”, interesting).
When I step out of my house and hope to not be tracked and surveilled I do not put on some Carmen Sandiego looking private eye get-up. (Unfortunately, I actually might have to in some cities, like London. And I’d def need to leave my digital devices at home.)
We opt in to “do not track” and the servers ignore it. We choose “incognito” mode and that’s not quite enough, even though at this point we are feeling dirty and sneaky because of the UI. I add uBlock Origin and Disconnect and now I feel like a full-blown activist.
How do we flip the table so that the servers have to opt in to “Private Eye” mode? And if they’re still not getting enough info they have to enable “Five Eyes” mode. And if they’re still not getting what they need they can ask us to install the optional “surveillance state” module.
Not having the largest social network and largest browser maker both be for profit corporations with business models based on harvesting user information would be a start.