Combine this with exploits against one or more broadly trusted certificate authorities (which surely exist) and it's pretty amazing how much data China would have been able to obtain.
Every time I bring up the following point someone chimes in that it's a bad idea, but I still fail to understand why it's not easy to choose which CAs I want to trust by picking a list of entities/people I trust and then adopting their recommendations for which CAs to trust.
This would be a few clicks of UI to let me be intelligently paranoid while maintaining only a layperson's understanding of why (say) Bruce Schneier decides to trust some and not others.
This should absolutely be exposed in browser UIs, especially Firefox, which uses its own store. Why can I not easily select/deselect all, sort by country of origin or issuer, filter with plain-text search, and so on? The ability to click through, or even to simply display the "insecure" badge, would still be there.
Or, as you said, being able to subscribe to other recommendations would be cool.
You could create an extension which checks the certificates of visited sites (which include information like country of origin, CA, etc.), then have an interface for configuring what to trust and warning about anything that doesn't meet your criteria.
The other part would be creating a way/file-format for experts to provide information about which CAs they trust and which are suspicious, then let the extension consume those.
It's not as good as built-in browser support, but it's a heck of a lot faster and more do-able.
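To make the file-format half concrete, here is one hypothetical shape such an expert-recommendation list could take - the field names, CA names, and fingerprints below are invented purely for illustration:

    {
      "publisher": "example-expert",
      "updated": "2018-11-06",
      "trusted": [
        { "ca": "Example Root CA 1", "sha256": "PLACEHOLDER_FINGERPRINT_1" }
      ],
      "warned": [
        { "ca": "Example National CA", "sha256": "PLACEHOLDER_FINGERPRINT_2", "reason": "history of misissuance" }
      ]
    }

The extension would periodically fetch a few of these from experts the user has subscribed to and apply whatever combination rule the user has picked.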
As if they would want their captive users to get smart and disable tracking features that are essential to their business, such as the globally enabled referrer, etc.
Just like Microsoft didn't want Windows users to be able to set a different default browser...
Exactly. And if you subscribe to the recommendations of more than one expert, a useful option would be to decline to trust any CAs that are mistrusted by any of the chosen experts.
China would only be annoyed if you made it out to be a bad thing. Everybody knows that, like the US, they snoop up any information they can get their hands on. Which makes it even more likely they didn't do an active attack, because an active attack carries a much higher chance of being caught.
> Combine this with exploits against one or more broadly trusted certificate authorities (which surely exist) and it's pretty amazing how much data China would have been able to obtain.
The attacker doesn't even need to compromise a CA.
If someone hijacks the IP address of example.com, he could easily get a valid Let's Encrypt certificate for that domain.
If there's a CAA record set for example.com that doesn't allow Let's Encrypt (i.e. no "issue" entry for "letsencrypt.org"), then this won't work. Let's Encrypt checks CAA (using DNSSEC if your domain has it) and verifies that its identifier is present before issuing, if you have CAA records.
If you're a larger outfit you should pick a trustworthy CA vendor (or two independent vendors, according to your risk-management profile) and lock CAA to those trusted vendors. You can then agree whatever terms suit your business on top of the Ten Blessed Methods, so as to avoid bad guys stealing your names. For example Facebook has an agreement with their chosen CA that all facebook.com and fb.com issuances get signed off by Facebook's network security people. No "But I'm the head of Asian marketing! I need this immediately" bullshit, it gets a sign-off from netsec or it doesn't get issued.
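For anyone wanting to try this, a locked-down CAA record set along those lines might look roughly like the following - example.com and the CA identifiers are just placeholders, so check your chosen CA's documentation for the exact value they honour:

    example.com.  IN  CAA  0 issue "letsencrypt.org"
    example.com.  IN  CAA  0 issue "digicert.com"
    example.com.  IN  CAA  0 iodef "mailto:security@example.com"

A CA that follows the rules will refuse to issue for example.com unless its identifier appears in an "issue" entry, and the "iodef" line tells CAs where to report attempted violations.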
This can easily be fixed by having some sort of mandatory waiting period (e.g. 7 days) after issuance before a certificate can be considered valid. CT can be used to ensure no backdating occurs.
Great point. Ideally it would also be possible to track which certificate was used last time and if the certificate has changed, verify somehow whether the change was intentional.
This did exist in the form of HTTP Public Key Pinning (now deprecated).
Unfortunately, the mechanism which replaced it (Expect-CT) can only help after the fact - someone would have to notice the extra issuance, and you would have no idea whether your traffic was affected.
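For reference, the two headers looked roughly like this - the pin values and report URL are placeholders, and HPKP required a backup pin precisely because losing the only pinned key would otherwise lock your users out until max-age expired:

    Public-Key-Pins: pin-sha256="<base64 hash of current key>"; pin-sha256="<base64 hash of backup key>"; max-age=5184000; includeSubDomains
    Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-reports"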
The party you want to connect to chooses the CA, not you. Are you really not going to use YouTube because you don’t trust the Google CA?
Anyway, redirecting and sniffing traffic is one thing, intercepting and changing encrypted traffic while being undetected is another. It’s quite a stretch really.
> Are you really not going to use YouTube because you don’t trust the Google CA
Not because I don't trust it, but I might because Bruce doesn't. At least, if Bruce stops trusting it I'd like to know why before I decide to trust it again.
I think if you imagine the way that compromised CAs would be used in actual attacks it is clear why we can't expect compromised CAs to be widely observed (they would be used only for targeted traffic), and why we should not trust all CAs equally.
The idea being: stronger market forces leading to more competition among CAs to end up on more people’s trusted list. Combined with the option to serve multiple signatures for the same cert, this might actually just work :)
For instance, say you want to go to example.com, and it says "you need to trust X CA". Ignoring that a user doesn't know what that even means, all of history demonstrates that a user's goal when encountering a barrier in the way of something they want is to get rid of the barrier. Arguing for user education isn't the right response, because technology needs to be designed to work with humans, not the other way round.
Instead we defer to groups like the CA/B Forum and the various root programs to ensure that the trust stores contain only robust CAs. Historically there have been few teeth, as evicting CAs from the trust stores is hard - look at how long the Symantec distrust is taking, and look at how many people (incl. HN commenters) have argued against it.
That said, the addition of things like CT logs has finally given trust stores a view into what is being issued, so they can finally detect poorly run CAs at the time they screw up, rather than maybe catching them months later, if at all. That then provides the evidence needed to justify distrusting a CA or auditor.
That is all a large amount of exceedingly technical work that no regular user could hope to grasp and make a reasonable choice about.
Instead the market pressure is as it should be for CAs now: ensure you are following the rules, or risk distrust. That is pretty much the most effective market pressure you can have in the CA market.
Remember that the resources, scripts and images can be loaded from hosts using different CAs. A page doesn’t even have one CA. The only thing that is even remotely feasible is blocking one or more CAs and that causes so much disruption normal people are never going to do that.
If my bank appears to switch to a CA with a record of issuing fraudulent certificates or enabling same (cf. Comodo, Symantec), that's exactly what I want to happen!
I realize the desire to control my own security is thought by browser-makers to be the death of ecommerce.
Ok, what happens if it switches to a CA you haven’t heard of?
What happens when some 20-year-old gets their first computer? It doesn't have any CAs in the trust store, so how do they find the good ones? How do they know what the good ones are?
You think you want it, but you don't necessarily know which CAs are good - have you read all the audits (which you can only trust if you got them over TLS, which requires you to trust it)?
What happens when you get a site saying "you need to trust ‘Google CA 1’/‘LetsEncrypt’" (or whatever their public CA is)? I'm joking of course - the name is meaningless - there's no root of trust to verify that it's not someone making a certificate that just claims to be that CA. So you actually need to know the public key for each CA that you think is trustworthy.
If it was all CAs by default, but then allowed you to remove the ones you didn't trust, then that would help a lot. Then the vast majority of users will be going with the browser-recommended CAs, but advanced users can customize. If there was a distribution format for experts to "vote/warn" on which CAs are trusted, then you could subscribe to their lists with a common configuration (Trusted by All, Trusted by At Least One, Not Warned by Any, Not Warned by All). That way you could get updates automatically from your favorite experts. We often outsource things like reviewing all the advisories in this way. Now we just need someone to develop this. Could it be done as an extension?
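As a rough sketch of how those combination rules could work (pure illustration - the expert lists, fingerprints, and policy names here are invented, and a real extension would key on actual certificate fingerprints pulled from the trust store):

    # Hypothetical expert lists: each maps a CA fingerprint to an opinion.
    expert_lists = {
        "expert_a": {"AA11": "trusted", "BB22": "warned"},
        "expert_b": {"AA11": "trusted", "CC33": "trusted"},
    }

    def evaluate(ca_fingerprint, policy="trusted_by_all"):
        """Combine the experts' opinions about one CA under the chosen policy."""
        opinions = [lst.get(ca_fingerprint) for lst in expert_lists.values()]
        if policy == "trusted_by_all":
            return all(op == "trusted" for op in opinions)
        if policy == "trusted_by_at_least_one":
            return any(op == "trusted" for op in opinions)
        if policy == "not_warned_by_any":
            return all(op != "warned" for op in opinions)
        raise ValueError("unknown policy: " + policy)

    print(evaluate("AA11"))                        # True: both experts trust it
    print(evaluate("BB22", "not_warned_by_any"))   # False: expert_a warns about it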
I don't know about Windows or Linux, but you can already do this on Mac. Keychain Access lets you change the trust settings of each root, including individual "purpose" granularity (e.g. restrict to just mail servers, etc.).
You know deep down that's not the right solution. If you have to accept a CA for each site, then the next day you'll be complaining about why you have to click "yes" on that damn dialog for every new site, and then asking browsers to just accept it and not bother you :)
I'm continually amazed at how insecure almost every aspect of internet routing is - it mostly boils down to a sort of "gentlemen's agreement" that everybody will follow the rules.
Such is the nature of BGP. Something like SPF (e.g. an authorized AS list for an IP block) and DMARC (reporting about who tried to announce what IP block and was rejected) would be great; perhaps even have the latter component convey attack info so ISPs could deal with infected clients automatically.
Basic security mechanisms when it comes to large ISP networks are a pipe dream though; instead we get vendors pushing extremely vulnerable Juniper gear because it's reasonably priced, meanwhile these boxes have new root exploits found multiple times a year. None of the vendors give a crap about security; Cisco pays it some lip service (to win gov't contracts) but charges a premium for basic features.
Internet routing (BGP), SMTP, and DNS (not an exhaustive list, just off the top of my head) were developed during the very beginnings of the internet, without much thought given to today's use and scale.
Today you'd do better, with hindsight being 20/20.
That's certainly true. But now that we have the benefit of hindsight, isn't the only reasonable option to start to take the steps to correct the obvious problems?
One of the best steps is modern protocols. China - or whomever - can collect all the QUIC packets they want and it won't tell them much. The incentive for these games goes way down when all you get is some connection metadata and cryptographic line noise.
Not if you control CAs. Cert pinning only works in a limited number of cases, and Certificate Transparency only works with CAs who have agreed to implement it (which is not the vast majority).
Um, you're aware that Chrome requires SCTs (the proof that a certificate has been logged) when connecting to a site, right? Do you think "the vast majority" of CAs deliver a product that doesn't work with Chrome?
CAs aren't mandated to log certificates for you (and indeed some offer the possibility to deliberately not do so for reasons I'll get to in a minute) but if you run a mass-market CA logging certs by default is the only possible way to remain in business since otherwise your entire customer service budget will be spent explaining to customers how to log the certificates and make them work with Chrome.
Firefox and Safari have announced plans to require SCTs but without a specific version or timestamp deadline. Apple's language says "calendar year 2018" but that's probably ambitious. It scarcely matters, Chrome is already too many users for a commercial CA to ignore.
So, why aren't all CAs logging every certificate and baking the SCTs into the final certificate? Well, when a certificate is logged that makes it public, but power users may want the ability to sidestep that. For private systems they may just have decided to never run Chrome (and good luck to them in the future when IE6 on Windows XP is the only option left that doesn't check CT). But for public systems if you're technically capable you can get yourself unlogged certificates, then at launch time log them, collect the SCTs and deliver those to the TLS client rather than baking them into the certificate. Google does this, a few branding practitioners do it. It's very important to get it exactly right because if you screw up your certificates are worthless until you fix it. But if protecting naming is important to your business it's an option.
SCTs are signatures from log servers. So the presence of the SCT means now not only the CA vouches for this certificate, but also the signing logs vouch for having seen this certificate. Chrome has a policy baked into it about which logs it will trust.
Under current policy this means Google plus at least one independent log operator claim to have seen it and logged it. This eliminates the scenario in which a powerful adversary obtains certificates but only shows them to a single victim or small victim group. Whatever they did, everybody will see it.
Finishing the entire Certificate Transparency system will take time, but the elements that exist today already work fine. Install Google's Chrome browser. The browser checks for SCTs (the proof that the certificate was logged) and will reject new certificates that don't include such proof. It has been doing this since April.
If you visit the Bad SSL test page for this in Chrome it gives you a full-page interstitial warning that it's bogus, and if you click past, the page is labelled "Not Secure".
In other popular browsers it works fine, because it has a perfectly nice certificate but the Bad SSL site is deliberately not presenting the SCTs for it. [[ It's hard to do this by accident, most places that give lay folk a certificate will assume your goal is to have your certificate accepted, so they will log a "pre-certificate" for you and bake the SCTs inside the certificate they give you and you can't remove those ]]
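If you want to see the embedded SCTs for yourself, they show up as an X.509 extension; assuming openssl is installed and the site bakes SCTs into its certificate, something like this will print them (the exact extension name in the output may vary between openssl versions):

    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -text | grep -A 10 "CT Precertificate SCTs"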
But yes, fully completing Certificate Transparency will be more work, we need a Gossip system so that monitors can consult each other to detect a split horizon, and mechanisms for clients to show summaries of what they know to determine if there are conflicts.
What we have now is like if you have a house you've half-built, there is no roof over two rooms, and no electricity, and the floor is bare dirt. But, it's still a house, and in a rain storm it's better to be inside that unfinished house than out in the cold and wet. The people outside in the rain don't think "That guy's house doesn't have triple-glazed windows" they think "Lucky bastard isn't out in the rain like me".
Yes, with a "but" the size of celestial bodies: it's a herculean effort. Witness how long IPv6 has taken to obtain traction (and the lack of any traction on DNSSEC, and the resulting DNS-over-HTTPS shims). These are improvements that occur over years, if not decades, and require substantial human and financial resources to deliver on.
The "DNS over HTTP shims" are not the result of DNSSEC taking too long to be adopted, but rather the fact that DNSSEC doesn't provide the protection that DoH does. People have a lot of weird ideas about what DNSSEC does; in particular: it doesn't encrypt queries.
Why do you think IPv6 never took off? Do you think the format of addresses was less human readable, and therefore that’s what led to its slow adoption? What if the address was instead displayed as a mapping using a data format like JSON?
Networks found ways to reduce IPv4 usage, or support dual stack early on when necessary. Turns out every internet endpoint doesn't need to be directly addressable, and most Internet use cases are one to many (CDNs to eyeballs).
At this point I'm inclined to think you'd be more likely to get bogged down for decades bikeshedding behind proposals in a standards consortium that has no actual power to enforce them, and the results would be a horrific mishmash with terminal second-system syndrome...
The two aren't at odds. Packet routing was designed to survive sudden and severe loss of network paths, but it still assumes that participants on the network are cooperative players.
I'd like to point out that governments used to run on agreements like that, and look what has happened in that domain. I say this as a warning about what the internet could become.
Your analogy is confusing. I have no idea if you are talking about international, national, or local agreements. I also have no idea what your opinions are.
Your comment simultaneously contains almost no information and is super off topic.
China Telecom and other Chinese ISPs have been hijacking user traffic for decades, profiting off of it by selling traffic dumps to data-exploitation companies, inserting ads into webpages, and stealing social media tokens (for follower boosting and ad retweeting).
> Go ahead, monkey around with BGP, since I have the public key of the recipient of my packets I can detect this and block any type of misdirection.
And how did you get that public key?
An attacker could pretty easily obtain a valid Let's Encrypt certificate using a BGP hijack.
Also, the CA system is in bad shape - CAs have been hacked and certificates were leaked. Not to mention that some of the CAs your browser trusts are not entirely trustworthy or are located in untrustworthy countries. Oh, and from time to time there are attacks against TLS itself (e.g. https://drownattack.com/)
Because the public keys are baked into the OS trust store - for the exact reason that you can't get the keys from the internet if you don't already have a root of trust.
The other issues (trustworthiness of CAs in countries that have the ability to compel a CA to issue a fake cert - Australia, say) are intended to be mitigated by the CT logging that is now required by the major trust stores. Sure, your Aussie CA might issue a fake certificate, but in doing so they ensure they get a global distrust...
In order for CT to really work, we will need a better way to handle actually distrusting CAs.
I think that includes a way for a site to have multiple different certs at the same time, so their one CA isn't a single point of failure.
Without this, we will always be dragging our feet in dropping CA trust, because it will leave some perfectly valid sites shit out of luck.
The dream is definitely not trusting certs which haven't been written to a log. I think that the path is actually in sight too. The CAB forum seems relatively on board.
You can experience this dream today by simply installing Google's "Chrome" browser. If you prefer a different browser you probably don't have long to wait, Firefox and Safari have announced plans to check CT (Apple says in Calendar Year 2018 but I won't be astonished if that slips) and it's something Microsoft's browser team are contemplating - if you care about trust in the Web PKI you obviously shouldn't use Microsoft's products anyway, but if you do...
We should definitely talk more about those CAs, and we should totally have a way to force that only certain CAs are able to give out certs for a domain. Oh wait, it's called HPKP and it's being removed D:
HPKP was a bad standard - there’s no way it could be used safely at scale. There are just too many ways to accidentally screw up, and that’s before you start dealing with actual attackers.
CT allows you to detect misissuance - theoretically you could have a monitor service that watched all the logs for changes to your domains.
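As a very rough sketch of that kind of monitor: crt.sh exposes the logs' contents over a JSON endpoint you can poll for your own domains. The field names below are what crt.sh returned at the time of writing, so treat this as illustrative rather than production code:

    import json
    import urllib.request

    def logged_certs(domain):
        """Fetch CT log entries for a domain (and its subdomains) from crt.sh."""
        url = "https://crt.sh/?q=%25." + domain + "&output=json"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # Print the issuer and names on every logged certificate; a real monitor
    # would diff this against a known-good list and alert on anything new.
    for entry in logged_certs("example.com"):
        print(entry.get("not_before"), entry.get("issuer_name"), entry.get("name_value"))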
Longer term, something (no opinion stated on exactly what) needs to be done to rectify the trust model for BGP and DNS.
Let's Encrypt is already taking steps to mitigate this. BGP hijacking is a noisy event - it should be possible to see that routes have changed recently and deny issuance. They can also perform challenges from multiple geos / networks, so that if there's a disagreement among routes, the challenge fails.
I would guess that the author copied the results into a table and prettified them and added in details like location.
At the top of the screenshot it says "traceroute from London to ..." - no traceroute program knows where it is in the world!
Also, the locations of each hop in the traceroute (NY > Chicago > Ashburn etc.): no traceroute program will know where in the world those IPs are. I suspect the author has guesstimated based on the reverse DNS records for the IPs and the latency.
Traceroute does have the ability to show you the ASNs in a path, but that is based on a WHOIS lookup of the IPs it's discovering. So it could be wrong, since it assumes the IP address of each hop was announced by the ASN that owns it.
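For what it's worth, some implementations will do that lookup for you; on Linux's traceroute it's the AS-path-lookup flag, and mtr has an equivalent (availability depends on your platform and build):

    traceroute -A example.com    # annotate each hop with the AS it appears to belong to
    mtr -z example.com           # mtr's AS lookup, with live updates

Either way it's still WHOIS/registry data under the hood, with the same caveat as above.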
It's difficult for the "average user" (define as you please) to know what a path should look like though. Lots of ISPs will have private peerings with other ISPs/content providers/carriers etc. which aren't publicly listed anywhere.
I'm not suggesting that the (say) Firefox extension would show the path. It would just show whether the path included devices in whatever country. In this case, China. Users wouldn't need to know details. There are many sources of geolocation data that the extension could draw upon.
Tangent, but are traceroutes spoofable (barring timing differences), or would they break too many other things to be practical? I'm wondering if anyone might do that to hide their tracks.
I just don't understand why the telecom agreements are not reciprocal. If no foreign nation is allowed to put a POP in China, then why is China allowed to put POPs all around the world?
It's not as though our domestic technology vendors care about security. JunOS is constantly having new vulnerabilities found, and Cisco ain't much better, but charges a premium price as they are viewed as the market leader and pay some lip service to security.
Edit: ha! Ironically, Oracle's site about China spying on you won't load the content unless you allow Google Analytics code to run. If the Google Analytics code fails, the rest of their code also fails.
I can read the article just fine on Firefox for Android with uBlock Origin. It also loads with no problems through my Pi-hole, which blocks Google Analytics.