An Egyptian telecom company, MCS Holdings, contracted with CNNIC, the Chinese national Internet authority, to obtain a CA=TRUE certificate for use in their internal enterprise proxy --- ostensibly for use only with MCS's own hostnames. Users of MCS traversed that proxy to get to Google, at which point the proxy dutifully generated a (fake) Google certificate to bypass TLS for that connection. Google noticed.
An internal enterprise MITM proxy sounds creepy, but it isn't. There are plenty of good reasons why a company might need to decrypt TLS traffic leaving its own network.
But enterprises don't need delegated CA=TRUE certificates to accomplish this. They can just roll their own self-signed root CA=TRUE certificate and build it into their machines. There is no reason a CA should need to put the entire Internet at risk solely for the convenience of a single company's IT operations.
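For the curious, rolling your own root is a few lines with the pyca/cryptography library. A minimal sketch (the organization name and validity period below are made up):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Corp Internal Root")])
    root = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )
    open("internal-root.pem", "wb").write(root.public_bytes(serialization.Encoding.PEM))

That root is then pushed to the company's own machines (e.g. via group policy). It never touches the public CA hierarchy, so nobody outside the company ever trusts it.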
A Chicago company called Trustwave did something similar a few years back. Are they still an HTTPS CA?
As you can see, a lot of trust is placed in CAs; the whole certificate security model depends on it. The only (current) real remedy is the nuclear option: removing those CAs' certs from the major browsers. Then the other side (Chinese browser vendors) can retaliate, of course. So negotiation is required to maintain detente.
> The only (current) real remedy is the nuclear option
Blockchain-based solutions like Namecoin & DNSChain would have prevented this attack without forcing people to rely on untrusted third-parties (if Google stored their domain info in a blockchain).
> Derivatives of Moxie Marlinspike's Convergence cert plugin that allows you to assign your own trust authorities for verifying signatures. [0]
The only derivative of Convergence that actually addresses the problems with Convergence (ironically) is FreeSpeechMe, which, btw, relies on Namecoin's blockchain.
But downvote me again for pointing out facts. lol.
> If Google stored their domain info in a blockchain how would anyone know what identifier actually belonged to Google?
The same way they know that Google chose google.com and Twitter chose twitter.com instead of twitter.io or something else.
> The last time I brought this up you admitted there was no good solution yet, has that changed?
I think this is referring to something else: how to transfer .com's into a blockchain while preserving ownership, is that right? That could be done (if needed) with a centralized registrar whose job it was to vet the ownership of the names upon first registration in some namespace. After the initial registration, control would be transferred to the owner of the domain. This is, however, specific to the case of migrating ICANN domains (if someone wants to do that).
However, we can reduce the scope of the problem, and just focus on forcing certificates to be public. [1]
If we went a bit further, and required that the certs be in the public log for some minimum amount of time (say 6 hours), that would have made it possible to shut down MCS before they got started.
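A minimal sketch of the client-side check that policy implies, assuming the SCTs are embedded in the certificate (pyca/cryptography exposes them) and using the hypothetical 6-hour threshold from above:

    import datetime
    from cryptography import x509
    from cryptography.x509 import PrecertificateSignedCertificateTimestamps

    MIN_PUBLIC_AGE = datetime.timedelta(hours=6)    # hypothetical policy

    def logged_long_enough(cert):
        try:
            scts = cert.extensions.get_extension_for_class(
                PrecertificateSignedCertificateTimestamps).value
        except x509.ExtensionNotFound:
            return False                            # no proof of logging at all
        now = datetime.datetime.utcnow()
        # At least one log must have seen this cert 6+ hours ago.
        return any(now - sct.timestamp >= MIN_PUBLIC_AGE for sct in scts)

A freshly minted MITM cert would fail this check even if it had technically been logged, because the world would have had no time to notice it.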
> However, we can reduce the scope of the problem, and just focus on forcing certificates to be public. [1]
DNSChain/Blockchains already provide certificate transparency (publicly auditable log of certs issued), and they do a far better job of it than Certificate Transparency.
>DNSChain/Blockchains already provide certificate transparency (publicly auditable log of certs issued), and they do a far better job of it than Certificate Transparency.
>The CT spec allows only one SCT to accompany a certificate, making this attack feasible
No, it doesn't. It describes the format of multi-SCT on page 16, and it explains the rationale for this (basically all of the points you brought up) on page 32.
> No, it doesn't. It describes the format of multi-SCT on page 16, and it explains the rationale for this (basically all of the points you brought up) on page 32.
I see the wording wasn't very clear, so I removed the word "only" to make the meaning clearer. It now reads:
"The CT spec allows one SCT to accompany a certificate, making this attack feasible"
For instance, if you are a company that handles confidential medical information (any health care organization, many insurers, every employee benefits management organization, &c), you may be required to have controls in place to ensure that nobody uses your Internet connection to exfiltrate people's PII through Google Mail.
Similarly, many investment banks and financial information firms have strict requirements to monitor all communications owing to SEC rules and insider trading regulation.
> if you are a company that handles confidential medical information (any health care organization, many insurers, every employee benefits management organization, &c), you may be required to have controls in place to ensure that nobody uses your Internet connection to exfiltrate people's PII through Google Mail.
Yes, but what are those controls? You check every packet to see if it contains any information from one of your databases?
What if the person sending the data just applies a simple obfuscation technique to the data, or just tunnels through some other encryption scheme?
You don't check for every record in your database, you create a regex (or multiple regexes) which matches the patterns you don't want leaked. This is how I've seen Data Loss Prevention done in the Sophos UTM.
Yes, if even the simplest obfuscation technique is employed, this system falls flat on its face. (Shh don't tell the regulators)
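For illustration, a toy version of the regex approach (the patterns are just examples, not anything a real policy would stop at):

    import re

    # Example patterns an outbound filter might scan decrypted traffic for.
    PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "record_id": re.compile(r"\bMRN-\d{8}\b"),   # made-up internal format
    }

    def scan_payload(payload):
        """Return the names of any patterns found in the decrypted payload."""
        return [name for name, rx in PATTERNS.items() if rx.search(payload)]

Base64 the data once before sending and this whole check goes blind, which is exactly the objection above.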
Sophos is a low-end solution. Higher-end solutions (e.g. Vontu) do in fact let you detect individual records or groups of records that aren't regex-detectable, using fingerprinting.
Search for things like “data loss appliance”. As an example, when BlueCoat isn't helping repressive regimes spy on their citizens, they're helping businesses watch every outgoing packet:
“Blue Coat DLP allows you to easily create policies that analyze the data source, content, destination and more.
…
accurate data “fingerprinting” capabilities, in addition to powerful keyword, pattern, and regular expression support, so you can create precision policies to effectively secure your data while minimizing false positives.”
Sure, every HN reader might have questions about this, but I'd bet a LOT of C-level executives are receptive to it.
Oh, sorry, I meant questions like the ones raised about how someone might try to smuggle data past such filters or some of the security aspects of having a single point with access to everything.
I certainly agree that if you have a requirement to watch outbound data like this, having a system to selectively capture it is much better than simply attempting to record everything.
Simple answer: Yes, they basically check every packet, or at least as many as they can. No, DLP isn't perfect, and it doesn't always work. This should not be a shocker.
Notes:
1) Modern DLP solutions have some pretty sophisticated obfuscation detection tech. Like almost all of these kinds of technologies, they're looking for the 80% case, not the 99% case.
2) Tunneling data out through other encrypted channels is subject to traffic analysis techniques. It's not as uncommon as one might suspect to detect out-of-band exfiltration of many different types this way.
Please, point out any systems which have believable claims for doing this. In my experience most 'DLP' systems do no such thing; they are just like the bit of string which stops you stealing pens at the bank, basically theatre.
Automatic analysis to statistically detect hidden channels is a research topic; it can be used to put bounds on the exfil rate but not to reliably detect it.
What if the site you're accessing is required to have controls in place to ensure that nobody can intercept the communication between the user and the site? I'd expect that to be the case when the site handles things like confidential medical information.
In a typical deployment of MitM tech (e.g. Bluecoat, Websense, etc.), things like personal banking, health care sites, etc., are exempted from the interception policy to avoid personal privacy issues and HR headaches. This can be overridden in the local policy of course, but I've rarely seen that in practice (anecdote isn't fact, blah, blah).
Be aware that the site you're going to may be MitM'ing sessions to meet other compliance regulations (e.g. SOX in the financial sector).
> In a typical deployment of MitM tech (e.g. Bluecoat, Websense, etc.), things like personal banking, health care sites, etc., are exempted from the interception policy to avoid personal privacy issues and HR headaches.
How does it know? Does it have a list of all "personal banking, healthcare sites, etc" from the whole world? How is that list kept up-to-date? What happens if the site the employee is accessing is missing from the list? What happens if the employee knows these sites aren't monitored and finds a way to use them to bypass the monitoring?
> Be aware that the site you're going to may be MitM'ing sessions to meet other compliance regulations (e.g. SOX in the financial sector).
If it's the site itself, is it really a MITM? And even if they technically use a MITM, does it really matter, since the site would have access to the plaintext anyways?
It's the golden unsolved, perhaps unsolvable problem in crypto. Trusting trust and such. There's always a key, somewhere, that has to be shared.
One way to hack the system is if you have actual knowledge of the person you are talking to, and you assume some limited amount of tampering which can be done in real-time. For example, if I know the sound of your voice, and we want to agree on a key with no MITM, we can set up an audio channel and speak some code words to each other. Barring an adversary who can in real time intercept and synthesize my voice convincingly speaking a different code, this is pretty secure. [1]
[ZRTP] allows the detection of man-in-the-middle (MiTM) attacks by displaying a short authentication string (SAS) for the users to read and verbally compare over the phone.
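Roughly, the SAS is just a short truncation of a hash over the agreed key material, rendered as something humans can read aloud. A toy sketch (the word list and truncation here are arbitrary, not ZRTP's actual encoding):

    import hashlib

    WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
             "golf", "hotel", "india", "juliet", "kilo", "lima",
             "mike", "november", "oscar", "papa"]   # 16 words = 4 bits each

    def short_auth_string(shared_secret, n_words=4):
        digest = hashlib.sha256(b"SAS" + shared_secret).digest()
        nibbles = []
        for byte in digest[:(n_words + 1) // 2]:    # take 4 bits per word
            nibbles += [byte >> 4, byte & 0x0F]
        return " ".join(WORDS[n] for n in nibbles[:n_words])

Both ends compute this from the same key material and read it aloud; a MITM who substituted keys would produce a different string on each side.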
Another imperfect defense is spreading over time the data that an attacker would have to intercept and modify in order to MITM. That's what Chrome is doing with their pin lists. Now an adversary would have to alter the pinning when Chrome is downloaded. Of course in this very thread we're talking about technology which can do exactly that. E.g. technology which has any hope of preventing data exfiltration, would have an easy time altering Chrome's pin-list. Of course the Chrome binaries are signed, so there's another layer to defeat, etc. etc.
So the end result is there are a lot of good technologies to prevent MITM. If you can keep the attacker out once, you can generally be confident your future conversations will be secure as well, since good protocols don't start from scratch each time, but rather "ratchet" new keys from the old as you go. [2]
One of the big trade-offs is false positives and privacy. For example, it might be nice if my browser remembered the public key of a site I visit, like HN, and let me know if it changed. Two issues: a naive implementation would also serve as a great tracker for every site I've visited, and how do I know, when I get a warning, that it's a real attack and not just an expiring certificate rotating out? Now we would need a way for sites to indicate, by signing with their old key, that they are indeed switching to a new key, and complexity explodes from there.
> It's the golden unsolved, perhaps unsolvable problem in crypto.
I think this problem was solved fairly well by Namecoin back in 2011. Software like DNSChain [1] then makes it possible to securely access blockchains like Namecoin without having to run a full node on your phone or other device.
If you can't run your own DNSChain server (or don't have a friend's you can use), you can query two or more independent servers and make sure the responses match.
Dionysis Zyndros recently came up with a mechanism whereby you can even query a single DNSChain server (that you might not trust), and still be assured of correct replies if you received an accurate key once (we'll be publishing info on this technique soon over at blog.okturtles.com; it's somewhat similar to what you're talking about with ratcheting keys).
We maintain a comparison of various approaches here:
Of course how could I have not mentioned the blockchain? Thank you!
Part of the trick with blockchain is validation. Everyone is not going to keep a full node, not even close, and just delegating trust is not the answer. You want to trust but verify.
I'm not sure what the state of the art is these days for SPV-type verification, but I don't see anything in the current DNSChain responses which would allow any kind of independent verification of the returned data.
> Part of the trick with blockchain is validation. Everyone is not going to keep a full node, not even close, and just delegating trust is not the answer. You want to trust but verify.
Right, so hence the two techniques I mentioned in my reply: query more than one server, and/or use Dionysis' "proof of transition" (for lack of a better name).
An interesting thought would be using a bloom filter to store certificate fingerprints. It would prevent someone from getting a list of all the websites/certificates a user has seen. However the significant downside is that a certificate hit could be a false positive and the user hasn't ever seen that certificate before.
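A tiny sketch of the idea (the sizing here is arbitrary; a real deployment would pick parameters for its target false-positive rate):

    import hashlib

    class CertBloomFilter:
        def __init__(self, size_bits=1 << 20, n_hashes=5):
            self.size, self.n_hashes = size_bits, n_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, fingerprint):
            for i in range(self.n_hashes):
                h = hashlib.sha256(bytes([i]) + fingerprint).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, fingerprint):
            for p in self._positions(fingerprint):
                self.bits[p // 8] |= 1 << (p % 8)

        def probably_seen(self, fingerprint):
            # True can be a false positive (the downside above); False is definitive.
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(fingerprint))

The filter never stores the fingerprints themselves, so it can't be trivially turned into a browsing history, but any "seen before" answer is only probabilistic.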
> It's the golden unsolved, perhaps unsolvable problem in crypto.
Sorry, perhaps my question wasn't clear. I wasn't asking about MITM in the general case, I was asking about this particular case. The certificate chain for Hacker News seems to go AddTrust -> COMODO -> Another COMODO -> *.ycombinator.com. So in this case, if you're MITM'd by MCS Holdings, is MCS Holdings going to be part of the chain (after a CNNIC)?
Yes, the chain would be different. MCS Holdings cannot become Comodo (Comodo = AddTrust, btw), so the chain would change to CNNIC -> MCS Holdings -> *.ycombinator.com.
For web sites with authentication (e.g. a bank account), protocols like SRP (Secure Remote Password) would prevent the man-in-the-middle if he doesn't know your password. SRP is a mutual authentication protocol with zero knowledge and forward secrecy; it would be nice if major browsers supported it, as it's not usable without browser support.
Would love to know this as well. I only have a high level understanding of the purpose of CA Certs, but beyond that I'm lost.
Ignorant questions ahoy:
1. Using Chrome, would you have to manually accept the MITM certificate?
2. Could such a certificate be valid across multiple domains?
3. Would it pose any threat to the computer if it was moved from the MITM network to an outside network?
4. What kind of potential problems could occur if I issued a self-signed certificate for my network?
As far as I understand it (please someone correct me if I'm wrong):
1. In this case you would not have to manually accept anything, as the root certificate (the CNNIC cert) is already in your browser/os and the certificate chain for certs created by MCS would be OK (because their cert is signed by CNNIC).
2. As CNNIC issued them an intermediate CA cert, MCS was able to create certificates for any domain they wanted, and these certificates would be considered valid by everyone that has CNNIC in their root store. So the MCS cert is not valid across multiple domains, but it allows MCS to create certificates for every domain, which has essentially the same consequences.
3. I think it would pose a threat when leaving the MITM network, but not as a consequence of having been in the MITM network. Only the root certificates are stored locally. Websites have to send a complete certificate chain that anchors their certs in one of the root certs (see the chain-walk sketch after this list). This means that the cert generated by MCS is not stored, and therefore not used after leaving the network. The danger is that this intermediate cert allows MCS to generate certs for any domain and use them outside their network, too.
4. A self-signed certificate would have to be installed on the machines in the network. Otherwise users would get a certificate warning and would have to add the cert to their root stores themselves. Other than that, I think this would grant you the same MITM powers as this intermediate cert did for MCS, with the only restriction that you couldn't create certs for domains not in your control that would be accepted by users outside your network / who don't have your self-signed cert installed.
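To make point 3 concrete, here's a stripped-down sketch of the chain walk a client does, assuming RSA signatures throughout and ignoring expiry, revocation, hostname and constraint checks (a real validator does much more):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import padding

    def chains_to_local_root(chain, trusted_roots):
        """chain: [leaf, intermediate, ...] as x509.Certificate objects,
        ending just below the root; trusted_roots: the local root store."""
        roots = {r.subject.rfc4514_string(): r for r in trusted_roots}
        root = roots.get(chain[-1].issuer.rfc4514_string())
        if root is None:
            return False                        # doesn't anchor in our root store
        full = chain + [root]
        for cert, issuer in zip(full, full[1:]):
            try:
                issuer.public_key().verify(
                    cert.signature, cert.tbs_certificate_bytes,
                    padding.PKCS1v15(),         # assuming RSA signatures
                    cert.signature_hash_algorithm)
            except InvalidSignature:
                return False
        return True

The MCS-generated leaf only validates because CNNIC's root is already in the local store and the intermediate carries CA=TRUE; the leaf itself is never saved anywhere.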
Check the certificate store for the browser you are using. Mozilla Firefox has its own. Internet Explorer and Chrome on Windows rely on the Windows certificate store.
One reason would be an Intrusion Detection System: make sure no malware gets downloaded, or to analyze traffic for stuff that looks like malware phoning home.
While there may well be good reasons to MitM HTTPS on a corporate network, the case isn't quite so clear for secretly doing so. IIRC Chrome disables certificate pin violation notifications altogether if the offending certificate is locally installed. While it probably is the right move to forgo the giant security warning in that case, I don't see why they can't use an "orange light" type indicator and explain corporate monitoring in the popup (with a further note that if you aren't on an institutional computer you probably have a virus). While corporate IT departments could recompile Chromium sans message, or just ban Chrome altogether, why make it easy for them?
If you are using corporate hardware, it should not really be a secret that your traffic is being man-in-the-middled; this should be assumed. I think it makes sense for a corporation to be able to whitelist their certificate to make intermediate network devices appear as trusted, because they are. I do agree that having a little asterisk or something on the lock icon to indicate that you are using a certificate that falls outside of the default trust store could be beneficial.
The stuff I work with has the option to pop up a warning/comfort page to explain the practice/policies etc every N hours - seems to be a good thing to use.
A good friend of mine works for a major electronics manufacturer (trust me, you'd recognize the name) where some employees exfiltrated 200,000 internal design documents/artifacts to a competitor that they were planning on joining. As a result, no one can use gmail/gchat from work machines or VPNs anymore. I think a MITM proxy that detected giant file exfils but allowed people to send gchat messages to their family would be a huge plus in that environment.
Signing another "CA=TRUE" cert seems like it should be a very restricted and audited operation, right? Is it out of the question to say that all such certs should be cleared by 3rd parties (like Mozilla and MS), on pain of revocation? Or is there a large use case outside of CA infrastructure I'm unaware of?
I would agree; there's really no reason that all major browsers couldn't ship with a complete list of all acceptable CA=TRUE certificates, intermediate or otherwise.
Unlikely and would cause problems. Parent was suggesting that they should be cleared separately without having to update browsers. I like the certificate transparency idea better though, and I wonder if it is possible to refuse new certs via public endpoints but allow certs to be manually added to the logs and SCTs to be manually issued, in case going that far is needed.
What problems, precisely? Sure, it would prevent current CAs from selling sub-CA certificates without coordinating with browser vendors. That's the point. What's a legitimate use case for doing so?
At the end of the bug discussion here:
https://bugzilla.mozilla.org/show_bug.cgi?id=724929
it was decided to give Trustwave a reprieve. Mozilla policy was updated to explicitly forbid such usage, and each of the CAs was required to verify that they were complying with the new policy or state when they would come into compliance.
Kathleen Wilson's comment on the bug was:
https://bugzilla.mozilla.org/show_bug.cgi?id=724929#c66
"My intent is to make it clear that this type of behavior will not be tolerated for subCAs chaining to roots in NSS, give all CAs fair warning and a grace period, and state the consequences if such behavior is found after that grace period."
However, over 14 months later, when it came out that ANSSI (aka the French government) was doing the exact same thing, rather than revoking the root certificate Mozilla decided to limit them to issuing certificates for: .fr, .gp, .gf, .mq, .re, .yt, .pm, .bl, .mf, .wf, .pf, .nc, .tf
which AFAICT essentially acquiesces to MitM attacks on French Firefox users who visit French websites.
> Mozilla decided to limit them to issuing certificates to: .fr, .gp, .gf, .mq, .re, .yt, .pm, .bl, .mf, .wf, .pf, .nc, .tf
Is there any way to do the same, manually, for the other "national" CAs? I wouldn't mind if CNNIC handed out a certificate for every .cn domain out there, but if they ever try to sign one for an Egyptian entity (or even worse, a .com domain), I want to see a big red warning. Ditto for the Japanese and Taiwanese governments, which Firefox also seems to trust unconditionally.
I actually do this to some extent, as I don't quite trust the NIC of my own government. I told my browser not to trust it, so whenever I try to visit a government website, I get a big red warning. I override the warning after confirming that I am indeed visiting a government website protected with a government certificate. But if the government NIC ever tried to show me a certificate for a non-government website, I would know immediately. This works, but it's inconvenient, so I'd love to be able to restrict any given CA to subdomains of specific TLDs and/or second-level domains.
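There's no built-in knob for this in mainstream browsers today, but the check itself would be trivial; conceptually something like the following (the CA names and TLD lists are just illustrative, not a real policy):

    # Hypothetical per-CA scoping policy: issuer organization -> allowed TLDs.
    CA_SCOPE = {
        "CNNIC": (".cn",),
        "ANSSI": (".fr", ".re", ".nc"),    # abbreviated list, just for the example
    }

    def issuance_in_scope(issuer_org, hostname):
        allowed = CA_SCOPE.get(issuer_org)
        if allowed is None:
            return True                    # CA not scoped; normal validation applies
        return hostname.endswith(allowed)  # str.endswith accepts a tuple

    # issuance_in_scope("CNNIC", "example.cn")  -> True
    # issuance_in_scope("CNNIC", "example.com") -> False: show the big red warning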
> But enterprises don't need delegated CA=TRUE certificates to accomplish this. They can just roll their own self-signed root CA=TRUE certificate and build it into their machines. There is no reason a CA should need to put the entire Internet at risk solely for the convenience of a single company's IT operations.
Doing it this way, however, means that they don't need to worry about pushing their self-signed certificate out to all their machines, right?
That could be done easily with group policies on Windows machines which would take care of, at least, Internet Explorer. I haven't used a Windows machine in a long time, though, so I'll ask: do the other major browsers use the built-in certificate store? If not, they'd still have to address the problem of getting their self-signed certificate "trusted" by Firefox and Chrome, for example.
> Users of MCS traversed that proxy to get to Google, at which point the proxy dutifully generated a (fake) Google certificate to bypass TLS for that connection. Google noticed.
I'm curious about the mechanism of Google noticing - was Chrome side-channeling information about its cert to Google? Because if it was a true MITM proxy, Google would never have talked to the browser directly to know what cert the browser was being presented. That's kinda how the whole MITM thing is dangerous - it's invisible to both sides if done correctly...
Chrome ships with a list of CAs allowed to issue Google certificates. If Chrome encounters a Google certificate signed by some other root authority, it phones home.
Google Chrome automatically reports back to Google if a certificate appears for Google and it is not issued by Google's own intermediary. It also blocks it from ever loading via HPKP.
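Conceptually it's just a whitelist of acceptable issuing keys for certain hostnames, baked into the binary. A rough sketch (the pin value and report endpoint here are placeholders, not Chrome's actual pin set or reporting protocol):

    import base64, hashlib, json, urllib.request
    from cryptography.hazmat.primitives import serialization

    # Placeholder pins: base64(SHA-256(SubjectPublicKeyInfo)) of the CAs allowed
    # to issue for google.com. Chrome ships a real list baked into the binary.
    PINNED_SPKI_HASHES = {"google.com": {"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="}}

    def spki_hash(cert):
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    def check_pins(hostname, chain):
        pins = PINNED_SPKI_HASHES.get(hostname)
        if pins is None or any(spki_hash(c) in pins for c in chain):
            return True
        # Pin violation: block the load and phone home (endpoint is made up here).
        report = json.dumps({"host": hostname, "chain": [spki_hash(c) for c in chain]})
        urllib.request.urlopen("https://example.invalid/pin-report", report.encode())
        return False

The MCS proxy's chain contained no pinned key for google.com, so the check failed and the report went out.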
If you're on a machine which has self-signed root certificates in its trusted store does this mean that all bets are off? Can you achieve authentication, integrity and confidentiality despite having an adversarial root certificate on your machine (for example, if all network connections go via proxy which do MITM on TLS connections and DPI)?
In the blog post it says that CNNIC issued the CA=TRUE cert on the basis that MCS Holdings would only use it for domains that they have registered. Wouldn't it be better to just issue a CA cert with the name constraints extension?
Why do we have that extension if nobody uses it :( ?
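For reference, adding that constraint when signing the intermediate is a couple of lines with pyca/cryptography (the domain name below is illustrative):

    from cryptography import x509

    # Sketch: the extensions a CA could put on a sub-CA certificate so that it
    # can only issue for the customer's own domains.
    builder = x509.CertificateBuilder()   # subject/issuer/key/validity omitted here
    builder = builder.add_extension(
        x509.BasicConstraints(ca=True, path_length=0), critical=True)
    builder = builder.add_extension(
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("mcsholdings.com")],
            excluded_subtrees=None),
        critical=True)

With that in place, a cert the sub-CA minted for google.com would fail validation in any client that enforces name constraints.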
At this point I feel like we need to simply remove Chinese root CAs from trust stores and have users opt in to allowing certificates issued from China. I realize that any CA can be mismanaged, but the risk of Chinese government hands in things like this seems too high to me.
Edit: I have no delusions that this is not happening in the US; it is simply that, as someone in the US, I don't have any options to lop off CAs that the US could influence. I can, however, make the decision to not trust some foreign CAs entirely.
You can always remove CNNIC from your own trust store. Saying they should be removed from all trust stores would rather annoy people actually in China, I'd assume.
I wonder if certificate transparency could be mandated for intermediate certificates sooner than a full DV rollout could. It seems some CAs can't quite resist bending the rules when a sweet contract is dangled in front of their faces. It makes me wonder how much CNNIC was being paid to do this. Given that MCS Holdings sells "security products" it makes me wonder if this was an attempt to do or prepare to do bulk SSL stripping. I guess the blog post says there was no evidence of abuse though, so I guess not.
Not this particular attack, as this was a test intermediate only valid for 2 weeks and the attack was limited to an internal corporate network. For other cases it would allow browser vendors to demand audit reports, for example.
So, as mentioned in the first link, client audits via the browser would do absolutely nothing during an attack:
"None of CT’s proofs (audit or consistency proofs) will detect mis-issuance of a certificate by a rogue CA, not even if gossip of STHs (signed-tree-heads) successfully occurs [1]"
And that's for today's attacks. In the section before that paragraph, another attack is demonstrated that also cannot be prevented by CT's audit proofs.
That's a little over the top, especially considering the USA's efforts in this regard. Following this logic, the majority of CAs have some kind of connection to abuse, so certificate transparency is the sensible way to detect these anomalies in the future.
Honest question: as a United States internet user, is there any practical reason I need to have a root certificate from the Chinese national Internet authority installed?
Corollary, is there a short list of CAs that folks around here trust more than average? Is there any value in such a whitelist, or are all CAs so rotten it doesn't much matter?
There was a bit of controversy a few years ago when Mozilla added CNNIC to Firefox's list of trusted CAs. I removed CNNIC from my browser shortly afterwards. No problem so far.
I don't think you'll have much problem even if you only trusted a few U.S. megacorporations, such as Verisign, Comodo, GeoTrust, GoDaddy, etc. They're no more trustworthy than the rest, but at least they're much more widely used than some government agency of a country you have nothing to do with.
They had a contractual agreement w/ MCS Holdings which almost certainly said they wouldn't do something like this. Since they did, CNNIC can say "they promised they wouldn't" and absolve themselves of responsibility.
Of course, MCS Holdings can then just change their name or create a new company or whatever, get a new agreement (with CNNIC or another Root CA) and continue on.
If CNNIC decides it wants to rent out their trust bits like this, they need to realize they are putting their trust on the line. Any actions performed by sub-CAs under their trust authority should be their responsibility. They need to re-evaluate if taking money to rent their CA bits is worth the stakes.
The alternative is that it's a free-for-all for everyone in the trust store. Cash in selling sub-CAs and shrug if they get caught? Really?
Not removing CNNIC just says that other CAs won't be punished, either. Like Comodo.[1]
Browsers should start considering scoping CAs by default. If CNNIC signs, say, a Mexican domain, that might be cause for suspicion. It's a bit more complicated since .com and others are sorta generic. But there's gotta be something that can limit exposure for many customers. How many US users often run into CNNIC, or those South American CAs?
1: On one of their sales calls, I told them they failed at the one thing they were supposed to do as a CA. Without missing a beat, the guy shifted to trying to sell me antivirus software.
Adopting a zero-tolerance policy for CAs that are bad actors (including those that allow others to have their full power who themselves act as bad actors) and removing their root certificates from trust stores would create a substantial disincentive for CAs to be bad actors.
Yeah, I can't believe the "oh, that's OK, a silly bureaucratic snafu, boys will be boys" response from Google. But at least they told us, they didn't sweep it under the rug.
I would have preferred the Pulp Fiction version. Google should have instead said to CNNIC:
You hear me talkin', hillbilly boy? I ain't through with you by a damn sight. I'ma get medieval on your ass.
Could, but would they? At least ban the ones that are proven to be untrustworthy. Otherwise the entire concept of a trust store is a joke and a racket to print money (certificates).
So, Google's page on this promotes a project for certificate transparency. I am not familiar with this project; does anyone here know more about it, and if so, can you comment on whether you think it's a good idea for the overall ecosystem?
CT is a simple idea. Currently, it's possible for certificates to be issued privately. The PKI was designed to scale (and scale it does), so there is no requirement that a certificate be downloaded from some trusted source: an SSL server can provide the client with a certificate chain that acts as a proof that a public key is owned by a particular named entity.
That has some advantages, most obviously, scalability and robustness. It also has one giant disadvantage: the only way to catch misbehaviour is to actually find a bogus certificate being used in the wild.
Certificate transparency is Google's plan to fix this. The idea is to evolve the PKI in a backwards compatible way. It creates public logs in which every certificate is meant to be registered. The certificates (or SSL handshakes, or a few other things) can then have a short mathematical proof embedded in them that the certificate was logged.
If the log proof isn't present then browsers remove the security indicators in order to apply pressure to people to get their certificates logged. It's supposed to be done by CAs so most SSL users should never notice any of this is happening.
Once certificates are being logged publicly the idea is anyone can do data mining over the log, for example to find certificates issued for their own website that they know they didn't request. Thus it allows crowdsourced policing of the CA system. Violations of the rules could be detected much faster.
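The "data mining" part can be as simple as periodically querying an aggregator of the logs for your own domains and diffing against what you know you requested. A rough sketch, assuming crt.sh's JSON interface and its issuer_name/name_value fields (the expected-issuer list is made up):

    import json, urllib.request

    EXPECTED_ISSUERS = {"Google Internet Authority"}   # example: the CA(s) you actually use

    def unexpected_certs(domain):
        url = "https://crt.sh/?q=%25." + domain + "&output=json"
        with urllib.request.urlopen(url) as resp:
            entries = json.loads(resp.read().decode())
        return [e for e in entries
                if not any(ca in e.get("issuer_name", "") for ca in EXPECTED_ISSUERS)]

    # Anything returned here is a cert for your domain issued by a CA you
    # didn't expect -- exactly the kind of mis-issuance CT is meant to surface.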
Currently however only Chrome implements CT, and only for EV certs (the ones that make the address bar green), and the majority of CAs have been ignoring it, although the big fish are taking part. The customers of the smaller CAs that are pretending CT isn't happening will get a nasty surprise once Chrome stops treating their certificate as EV.
Small nitpick: Chrome implements CT for all certificates and shows its status, however they only currently plan to downgrade an EV certificate to a normal certificate.
Firefox has an open bug for implementation[1] but it's inactive for whatever reason.
You don't need implementation in the browser. You need the CAs to provide the public audit logs and all HTTPS domain owners to check them for unexpected issuance.
The browser is just a political tool to enforce the CAs to provide the logs, for example by no longer marking their certs as trusted unless they do so.
There is none, which is why I said the browser is just a tool in a politics game. It enforces the existence of the log. The security doesn't come from checking whether the cert is in the log!
On its face, making a note of what certificates are created is a simple idea, but in reality CT is far from simple, and it does not actually "fix" these sorts of problems.
CT is an attempt at transparency, that's it. It cannot prevent MITM attacks because it allows unaffiliated third parties to issue certificates on your website's behalf (same as X.509), something that they have no business doing (and is completely unnecessary).
Nor does it guarantee that mis-issued certificates will be found. The reason is partly because, as you note, it is mostly a voluntary effort on behalf of the CAs out there, but also because its design is ineffective. Even if every CA participated in CT, it would still not accomplish much:
1. It does not prevent these types of attacks from being used on users.
2. It does not guarantee that mis-issued certificates would be found because it requires website owners to query all the logs out there in order to find out whether or not someone mis-issued a certificate (a sort of needle in a haystack problem that almost no one [except maybe large companies like Google] is going to engage in, and in the end doesn't prevent attacks from happening).
Whoever downvoted the parent, how about replying to the comment instead?
We spent a good amount of effort analyzing CT, and if you believe we missed something, your reply is worth a lot more than a downvote & run.
It's odd how much effort is being put behind this project, especially given that in this particular attack Google would have nothing to gain from CT (since all it can hope to do is tell them who issued the fraudulent cert, which they already know).
I love that they came back to downvote your appeal to good netiquette.
HN needs a learning option for downvote external validity -- consistent upvotes on something a user downvoted should degrade the weight of their downvotes for all articles.
The fourth point is not true; I looked at the intermediate myself. In fact, the test intermediate is only days old at the time of writing and lasts less than a month before it expires.
What can a site administrator do today to combat these kinds of flaws? Is there some certificate pinning technology (I don't fully understand what that is) I can use on my own sites now to push in the right direction?
I try to be an early adopter of practices such as using SSL all the time on all my sites.
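One thing you can deploy today is HTTP Public Key Pinning (HPKP): you publish base64(SHA-256(SubjectPublicKeyInfo)) hashes of keys you're willing to use, and supporting browsers then refuse any other chain for your domain. A sketch of generating the header value with pyca/cryptography (the filenames are placeholders, and you should always include a backup pin so you can't lock yourself out):

    import base64, hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def pin_sha256(pem_path):
        cert = x509.load_pem_x509_certificate(open(pem_path, "rb").read())
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    # Placeholder filenames for the current cert and a backup key's cert.
    header = 'Public-Key-Pins: pin-sha256="%s"; pin-sha256="%s"; max-age=5184000' % (
        pin_sha256("current.pem"), pin_sha256("backup.pem"))

It wouldn't have stopped this MCS intermediate for first-time visitors, but returning visitors' browsers would have refused the forged chain.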
Is anyone else frustrated after trying to load this page on a mobile device?
I zoom in to read the text and when I try to slide the screen over, it interprets that as me wanting to go to the next page.
No, I wanted to read the text on THIS page.