Internet routing (BGP), SMTP, and DNS (not an exhaustive list, just off the top of my head) were developed in the very early days of the internet, without much thought given to today's use and scale.
Today you'd do better, with hindsight being 20/20.
That's certainly true. But now that we have the benefit of hindsight, isn't the only reasonable option to start to take the steps to correct the obvious problems?
One of the best steps is modern protocols. China - or whomever - can collect all the QUIC packets they want and it won't tell them much. The incentive for these games goes way down when all you get is some connection metadata and cryptographic line noise.
Not if you control CAs. Cert pinning only works in a limited number of cases, and certificate transparency only works with CAs that have agreed to implement it (which is not the vast majority).
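For context on why pinning is so limited: a pin is just a hash of the server's public key that the client remembers and checks, in the style of RFC 7469 (HPKP). A minimal sketch, with a made-up byte string standing in for a real DER-encoded SubjectPublicKeyInfo:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64 of the SHA-256 of the DER-encoded
    SubjectPublicKeyInfo (RFC 7469)."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def pin_matches(spki_der: bytes, pinned: set) -> bool:
    # A rogue CA can mint a chain that validates, but it cannot
    # reproduce the pinned key, so the hash comparison fails.
    return spki_pin(spki_der) in pinned

# Illustrative stand-in, NOT a real DER-encoded public key.
fake_spki = b"\x30\x82\x01\x22example-spki-bytes"
pins = {spki_pin(fake_spki)}
assert pin_matches(fake_spki, pins)
assert not pin_matches(b"attacker-key", pins)
```

The catch is exactly the one above: you can only pin keys you already know about, so it protects a handful of first-party sites and does nothing for the web at large.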
Um, you're aware that Chrome requires SCTs (the proof that a certificate has been logged) when connecting to a site, right? Do you think "the vast majority" of CAs deliver a product that doesn't work with Chrome?
CAs aren't mandated to log certificates for you (and indeed some offer the option to deliberately not do so, for reasons I'll get to in a minute), but if you run a mass-market CA, logging certs by default is the only viable way to stay in business: otherwise your entire customer service budget will be spent explaining to customers how to log their certificates and make them work with Chrome.
Firefox and Safari have announced plans to require SCTs, but without a specific version or deadline. Apple's language says "calendar year 2018", but that's probably ambitious. It scarcely matters: Chrome already represents too many users for a commercial CA to ignore.
So why aren't all CAs logging every certificate and baking the SCTs into the final certificate? Well, logging a certificate makes it public, and power users may want the ability to sidestep that. For private systems they may simply have decided never to run Chrome (and good luck to them in the future, when IE6 on Windows XP is the only option left that doesn't check CT). But for public systems, if you're technically capable, you can get yourself unlogged certificates, then at launch time log them, collect the SCTs, and deliver those to the TLS client rather than baking them into the certificate. Google does this; a few branding practitioners do too. It's very important to get it exactly right, because if you screw up, your certificates are worthless until you fix it. But if protecting naming is important to your business, it's an option.
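The "log them at launch time" step is just the RFC 6962 `add-chain` call: POST the certificate chain to a log and get back the fields of an SCT. A sketch of the request body and response handling, using fake byte strings in place of real DER certificates and a fabricated log reply:

```python
import base64
import json

def add_chain_payload(chain_der: list) -> str:
    """Body for POST <log-url>/ct/v1/add-chain (RFC 6962):
    the leaf certificate first, then each issuer, as base64 DER."""
    return json.dumps(
        {"chain": [base64.b64encode(c).decode("ascii") for c in chain_der]}
    )

def parse_sct(response_json: str) -> dict:
    """The log replies with the fields of a Signed Certificate
    Timestamp. A server doing late logging hands these to TLS clients
    via the signed_certificate_timestamp extension (or stapled OCSP)
    instead of having them baked into the certificate."""
    sct = json.loads(response_json)
    return {
        "log_id": base64.b64decode(sct["id"]),
        "timestamp_ms": sct["timestamp"],
        "signature": base64.b64decode(sct["signature"]),
    }

# Illustrative stand-ins, NOT real certificates or a real log reply.
payload = add_chain_payload([b"leaf-der", b"issuer-der"])
reply = json.dumps({"sct_version": 0, "id": "bG9n",
                    "timestamp": 1_500_000_000_000,
                    "extensions": "", "signature": "c2ln"})
sct = parse_sct(reply)
```

This is also why the failure mode above is so sharp: if your server software mangles the SCT delivery, Chrome treats the certificate as unlogged and rejects it, exactly as if you'd never done the dance at all.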
SCTs are signatures from log servers. So the presence of an SCT means that not only does the CA vouch for this certificate, the signing logs also vouch for having seen it. Chrome has a policy baked into it about which logs it will trust.
Under current policy this "nothing" means Google plus at least one independent log operator claim to have seen it and logged it. This eliminates the scenario in which a powerful adversary obtains certificates but only shows them to a single victim or small victim group. Whatever they did, everybody will see it.
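That diversity requirement is easy to express as a check over the operators of the logs that issued the SCTs. A sketch of just this rule (Chrome's real policy also scales the minimum SCT count with certificate lifetime, which is omitted here; the operator names below are only examples):

```python
def satisfies_diversity_policy(sct_operators: list) -> bool:
    """At least one Google-operated log AND at least one independent
    operator must have issued an SCT, so no single operator -- not
    even Google -- can quietly vouch for a certificate alone."""
    ops = set(sct_operators)
    has_google = "Google" in ops
    has_independent = any(op != "Google" for op in ops)
    return has_google and has_independent

assert satisfies_diversity_policy(["Google", "DigiCert"])
assert not satisfies_diversity_policy(["Google", "Google"])
assert not satisfies_diversity_policy(["DigiCert", "Cloudflare"])
```

The point of the rule is the one above: an adversary would have to get the bogus certificate into logs run by mutually distrustful parties, and anything in those logs is visible to every monitor.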
Finishing the entire Certificate Transparency system will take time, but the elements that exist today already work fine. Install Google's Chrome browser. The browser checks for SCTs (the proof that the certificate was logged) and will reject new certificates that don't include such proof. It has been doing this since April.
If you visit that in Chrome it gives you a full-page interstitial warning that it's bogus, and if you click past, the page is labelled "Not Secure".
In other popular browsers it works fine, because it has a perfectly good certificate; the Bad SSL site is just deliberately not presenting the SCTs for it. [[ It's hard to do this by accident: most places that give lay folk a certificate will assume your goal is to have the certificate accepted, so they log a "pre-certificate" for you and bake the SCTs inside the certificate they give you, and you can't remove those. ]]
But yes, fully completing Certificate Transparency will be more work: we need a gossip system so that monitors can consult each other to detect a split horizon, and mechanisms for clients to share summaries of what they know so conflicts can be detected.
What we have now is like a house you've half-built: there's no roof over two rooms, no electricity, and the floor is bare dirt. But it's still a house, and in a rainstorm it's better to be inside that unfinished house than out in the cold and wet. The people outside in the rain don't think "that guy's house doesn't have triple-glazed windows"; they think "lucky bastard isn't out in the rain like me".
Yes, with a "but" the size of celestial bodies: it's a herculean effort. Witness how long IPv6 has taken to gain traction (and the lack of any traction for DNSSEC, and the resulting DNS over HTTP shims). These are improvements that play out over years, if not decades, and they require substantial human and financial resources to deliver.
The "DNS over HTTP shims" are not the result of DNSSEC taking too long to be adopted, but of the fact that DNSSEC doesn't provide the protection that DoH does. People have a lot of weird ideas about what DNSSEC does; in particular, it doesn't encrypt queries.
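To make the distinction concrete: a DoH client (RFC 8484) wraps an ordinary DNS wire-format query in an HTTPS request, so the query and answer are encrypted in transit, which is the property DNSSEC alone never gives you. A minimal sketch of building the GET form of such a request (cloudflare-dns.com is a real public DoH resolver; any RFC 8484 endpoint works the same way):

```python
import base64
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Minimal DNS wire-format query (RFC 1035): header with RD set,
    one question; ID fixed to 0 as RFC 8484 suggests for cacheability."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)  # qtype 1 = A, class IN

def doh_get_url(resolver: str, name: str) -> str:
    """RFC 8484 GET form: the wire-format query, base64url-encoded with
    padding stripped, in the 'dns' parameter. The whole exchange rides
    inside TLS, so an on-path observer sees only HTTPS traffic."""
    b64 = base64.urlsafe_b64encode(dns_query(name)).rstrip(b"=").decode("ascii")
    return f"https://{resolver}/dns-query?dns={b64}"

url = doh_get_url("cloudflare-dns.com", "example.com")
```

DNSSEC, by contrast, signs the records themselves: it lets a validating resolver detect a forged answer, but the question and answer still cross the wire in cleartext for anyone to read.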
Why do you think IPv6 never took off? Do you think the format of addresses was less human readable, and therefore that’s what led to its slow adoption? What if the address was instead displayed as a mapping using a data format like JSON?
Networks found ways to reduce IPv4 usage, or to support dual stack early on where necessary. It turns out not every internet endpoint needs to be directly addressable, and most Internet use cases are one-to-many (CDNs to eyeballs).
At this point I'm inclined to think you'd be more likely to get bogged down for decades bikeshedding proposals in a standards consortium that has no actual power to enforce them, and the result would be a horrific mishmash with terminal second-system syndrome...