Here's to more HTTPS on the web (googleblog.com)
128 points by syck on Nov 4, 2016 | 106 comments



As much as I herald the ongoing march toward mass HTTPS adoption, I'm not sure Google should be taking the credit.

It seems to me that they are now walking a very fine line between improving the web, and taking over the web.

For many people Google IS the Internet - it's the provider of the search results they always see. Chrome badgers you to sign up for a Google account, and with it the associated services such as GMail.

9 out of 10 smartphones now run Android, and almost all of those are Google controlled (AOSP[1] is hardly a thing any lay-user knows about). Therefore for 90% of internet users, Google is their gateway.

If Google don't want you to know about something on the internet, they have many routes to ensure you never find out about it.

If this were Microsoft doing this, the world would be in uproar, but apparently Google are allowed to get away with it? Where are the antitrust lawsuits against Google? There's one smoldering on in Germany[2], but it's getting almost no press - most probably because Google are suppressing any mention of it in their results or in Google News (Crazy? Maybe, but you can't disprove this either).

There are no web standards anymore - if Google want to add a feature to Chrome, they just go ahead and do it, forcing the other browser makers to adopt that feature lest they be left behind in the browser market.

Almost all big sites use some form of Google Analytics, along with advert providers such as DoubleClick[3] (also a Google company). You simply cannot do anything on the web without Google seeing in one way or another.

Put simply, the Internet is dead; All we have now is the Googlenet.

--

[1] https://source.android.com/

[2] http://www.ibtimes.com/android-antitrust-googles-big-problem...

[3] https://en.wikipedia.org/wiki/DoubleClick


> As much as I herald the ongoing march toward mass HTTPS adoption, I'm not sure Google should be taking the credit.

Are they taking credit? I read this as more of a report on the state of HTTPS in 2016, not as Google taking credit for HTTPS adoption on the web.

Though to be fair, Google absolutely has done a lot to promote the adoption of HTTPS. They're a platinum sponsor of Let's Encrypt, they [made HTTPS a (small) positive ranking signal][1] back in 2014, and they're even using Chrome to push HTTPS adoption by [making the security problems with HTTP more visible to users][2].

[1]: https://webmasters.googleblog.com/2014/08/https-as-ranking-s...

[2]: https://security.googleblog.com/2016/09/moving-towards-more-...


The "ranking signal" for https sites has been instrumental in me selling full https adoption within multiple organizations. It's an easily understood, immediate positive benefit from SSL adoption.


> There's one smoldering on in Germany[2], but it's getting almost no press - most probably because Google are suppressing any mention of it in their results or in Google News (Crazy? Maybe, but you can't disprove this either).

I agree with a number of things in your post, but I don't think this is a likely explanation. A couple of months ago, a German newspaper published some data about their traffic sources[1], showing that 20% of their visitors came from Google. Some others released similar data (see the linked thread - note that it's mostly in German). I don't think that's anywhere near enough to effectively suppress these reports (though the whole echo chamber thing is a problem of its own), so I don't think it'd be worth it for them to even try and risk possible legal ramifications.

I think the more likely explanation is that just about no one cares, which is much more worrisome.

[1]: https://twitter.com/zeitonline/status/781779996473958401


Modern (US) antitrust law requires proof that the end user is actually harmed. Proving the existence of a monopoly is not sufficient for conviction. (This is a change that happened in the '70s and '80s and has since been accepted as the settled interpretation of the law.)

Do you want everybody to just not do anything until there is a standard? On the modern web you have to start by having a working implementation for testing and refinement. Once that is done, you go for the standard. Mozilla also 'just implements' stuff and then tries to standardise it later.

> You simply cannot do anything on the web without Google seeing in one way or another.

without Google seeing SOMETHING, but that does not mean that SOMETHING is useful


> There are no web standards anymore - if Google want to add a feature to Chrome, they just go ahead and do it, forcing the other browser makers to adopt that feature lest they be left behind in the browser market.

Isn't that normal? First you add the feature to your browser, then if people start using it you think about standardizing it. The alternative is to spend a bunch of effort standardizing things nobody wants.

If anything, I'd like to see more standards that first require a concrete implementation. I think everyone's been handed a spec that has loads of problems when you actually try to implement it. Look at the 'export' keyword in C++.


> If anything, I'd like to see more standards that first require a concrete implementation.

This is how the standards process has always worked! Per RFC6410, a proposed standard can't advance to an "Internet Standard" until there are at least two independent working implementations.

The W3C has a bad habit of writing "standards" before any code has been written; sometimes even before it's clear that any code should be written. (EmotionML, for example.)


Never heard of EmotionML before. OMG.


The thing currently considered "normal" at least for Google and Mozilla goes like this:

1) Send an intent to implement to a public mailing list, describing the thing you want to implement and its existing standardization state, if any.

2) Add the feature to your browser, enable it on your nightly/dev/canary/devedition/whatever channels but not release.

3) Get feedback from people trying out the feature on those channels. Explicitly solicit feedback via twitter, conversations at conferences, etc, etc.

4) Write a draft spec, run it by other browser implementors.

5) Send a public intent to ship, indicating what other browser vendors think of the idea, what web developers think of the idea, stability level of the spec, etc.

6) If the relevant people in your org (the API owners for Blink; for Mozilla it's a bit less clear so far) approve, you ship.

Step 6 can precede finalized standardization. But it can come after too; in particular the "two implementations" criterion for standardization can be fulfilled by pre-release implementations that are on track to become released.


No, it's not normal. That's how we got things like the awful WebKit gradient syntax that everyone is now being forced to implement for legacy compatibility reasons.
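For reference, the legacy syntax in question differs wildly from what was eventually standardized (a minimal sketch of the two forms):

```css
/* original WebKit-proprietary form, still shipped for legacy content */
background: -webkit-gradient(linear, left top, left bottom, from(#fff), to(#000));

/* the standardized CSS equivalent */
background: linear-gradient(to bottom, #fff, #000);
```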


Never understood the comparison re: Microsoft and Google. At the time of the Microsoft litigation (and still), they were the dominant OS. The OS wasn't/isn't free. It restricted competition esp. browsers. Google is (mostly) free and ad based. They can't keep other search engines off the internet. They're just what people use (so far).


Windows was (and is) effectively free, in that the vast majority of users got it with their purchase of a PC.

Furthermore, "they can't keep other search engines off the internet; they're just what people use" could equally well be said about browsers in 1998. Microsoft never prevented anyone from downloading and installing Netscape.


OEMs pay for those Windows licenses, that's definitely not what anyone would call "free". It's like saying CPUs are effectively free because the vast majority of users got one with their purchase of a PC. Sure, you didn't send any money to Intel or AMD directly, but you still paid for it.


  > Therefore for 90% of internet users
I would not be so sure that 9/10 phones running android directly translates to 9/10 mobile traffic being from android phones.


Not sure why this was downvoted; it's making a very good point.

There are a lot of cheap Android phones in the wild which are used primarily, or even exclusively, for voice and SMS. These devices count towards Android's overall market share, but they have very little effect on web usage.

Even beyond that, studies have consistently shown that iOS users use the web more than Android users:

http://www.businessinsider.com/android-is-beating-ios-in-web...

(Old study, but no reason to suspect the results have changed dramatically.)


That article suggests the results had been changing dramatically for 10 months.


> As much as I herald the ongoing march toward mass HTTPS adoption, I'm not sure Google should be taking the credit.

The post seems to say that they're trying to get out of the way of https adoption. It doesn't even mention their initiatives such as requiring encryption for http2 or the location API.

> - most probably because Google are suppressing any mention of it in their results or in Google News

Is the world going completely mad[1]? This is so obviously wrong, simply because people don't google [news] to be informed, nor does Google News have enough influence to be meaningful. You think the WSJ/NYT/Daily Mail/SPIEGEL depend on Google to find out the news? Then where would these news reports actually come from that Google is suppressing?

> There are no web standards anymore - if Google want to add a feature to Chrome[...]

Which isn't actually true, because Safari is large enough to require support. And because Google is a dependable participant at the WHATWG. It's also the only way that actually works, because you need an implementation for any specification. Looking at the pace of browser improvements and level of compatibility – if that's the result of Google's corruption, I want more of it.

> forcing the other browser makers to adopt that feature lest they be left behind

Or as they used to call it: competition. It's markedly different than MS's extend-and-extinguish strategy back in the days. I can't think of any features that disproportionally benefit Google.

> You simply cannot do anything on the web without Google seeing in one way or another.

There seems to be some sort of newfangled "adblocking software". I'm also not quite sure if google correlates analytics data with user profiles – I know they don't do it on my sites because I'm using their feature to anonymize the last 3 digits of IP addresses (as required by law in my country).

> Put simply, the Internet is dead; All we have now is the Googlenet.

As long as I can substitute "Google" with "Facebook" and make the sentence actually more believable I don't think we're quite there, yet.

[1] https://en.wikipedia.org/wiki/Conspiracy_Nut(Gullible)


Fortunately, the country with the largest number of Internet users is still a no-Google zone.


The big difference is that apparently Google is a Silicon Valley sweetheart and Microsoft never was.


That Microsoft was convicted and ordered to pay millions, for actually harming users for profit, might be another difference.


You mean just like Google is going to be?

I never felt harmed by Microsoft, using their products since MS-DOS 3.3.


> I never felt harmed by Microsoft

That's why justice, thankfully, isn't based on your own personal feelings. If you read the sentence, it's pretty strong.


Well, Microsoft kind of predates Silicon Valley and grandpa might be cute, but he's not as cool as the kids you're playing with.


Huh? Silicon Valley. Think about it. What comes first, the hardware or the software?


Exactly and somehow the hipster kids believed that companies are abstract entities with human behaviors and not money making machines that only answer to their board.


I asked this earlier today on its own thread:

Why don't browsers default to trying HTTPS if there is a DNS record but no response on port 80? Or better yet, default to just trying HTTPS first? I understand I can add my site to browser-based HSTS lists, but this doesn't really help people developing command-line applications or API client libraries, where a 302 could be followed without the developer even noticing. If I want to develop a MITM-proof application, I currently have a usability problem.

Is there a reason to support HTTP at all, other than current user experience in the browser?


That was discussed here, starting with [1]. tl;dr: Too many broken sites out there, and this kind of opportunistic encryption doesn't hold up in any adversarial threat model (MitM blocks :443 and forces downgrade to :80).

[1]: https://news.ycombinator.com/item?id=12872100


This is great news. Furthering HTTPS adoption is the best way we have to combat pervasive surveillance, and we're making steps in the right direction before HTTP2's opportunistic encryption [1] will improve things on an even wider scale.

We'll also face some new challenges. Over the years, many users have learned to "look for the padlock" before entering their passwords. More sites moving to HTTPS will also increase the percentage of phishing/attack sites served over encrypted connections -- with the padlock to match. We'll need to renew our focus on training users to check for the correct domain name (and possibly an EV cert), or phishing sites with a cert from Let's Encrypt will become a real threat. Browsers are moving in the right direction, displaying https less conspicuously than before, and emphasizing connections with EV certs.

[1] http://httpwg.org/http-extensions/draft-ietf-httpbis-http2-e...


Furthering HTTPS adoption is the best way we have to combat pervasive surveillance

No it isn't. HTTPS fights passive surveillance. The pervasive surveillance is already inside the HTTPS link, and deliberately so.

The actual best way we have to combat pervasive surveillance is to grow a spine and work to end the 1st and 3rd party surveillance our organizations implement, rather than just bitching about surveillance on the internet.

There are only a few actors positioned on the network that can passively perform surveillance pervasively (not just incidentally). My threat model extends beyond the NSA and ISPs. We've been taught to hate and fear them, but I'm sick of only thinking about the last crisis instead of focusing on the underlying problem.


Pervasive surveillance is also helped with browser fingerprinting, and Google, Facebook and so on are doing nothing to prevent that.

I'd like to see the default User-Agent header for HTTP2 be as short as "Chrome". Remove cookies, find ways to keep the identity of a user private.


Good idea. I wonder why the privacy-protecting Add-Ons (ABP, uBlock Origin) don't mask the UA by default.

On the other hand, trackers will probably just switch to detecting the capabilities of the browser using JS and then map each distinct set of capabilities to the matching browser version. In that scenario, an Add-On replacing your UA will actually increase trackability, as long as the majority of people aren't using it.


Because many sites (including most of Google's) won't work then ;)

Also it would be very easy to detect if a user is running an adblocker then.


String matching is much simpler than timing and other means. Terse+honest agent strings could also avoid the confusing bloat seen in UAs like Edge.


uMatrix offers a feature to randomly pick a UA every 5 (configurable) minutes from a set of configurable UAs. This works, although GMail might complain about the UA being too old. Reloading GMail resolves it.


Never gonna happen if we keep using Chrome.


> Over the years, many users have learned to "look for the padlock" before entering their passwords.

This is - unfortunately - bad advice. The padlock tells you that you really are talking to the site in the address bar. But it doesn't tell you anything about the trustworthiness of that site.

If you're at https://evilhacker.com and you enter your password then evilhacker will get your password.


Absolutely. Which is why I think we still have a lot of work ahead of us, making users actually check the domain name, and the validated identity (for EV certs).


Having them use a password manager is a better approach. Checking the domain name is weak to keming probiems.


...and password managers are weak to lots of problems [1], not the least of which is malware stealing your password container plus the master key [2].

I'm still leaning towards the password manager side of the dilemma, but the situation isn't great on either.

[1] https://twitter.com/taviso/status/769378052254015488

[2] http://arstechnica.com/security/2014/11/citadel-attackers-ai...


> We'll need to renew our focus on training users

Totally useless - studies show that users just don't look or care, even after all the effort put into this.

The solution is new authentication that takes into account the domain.

This is exactly what has been done in the new FIDO standards, UAF and U2F. If you use U2F as a second factor on google, dropbox or github phishing is already a problem of the past.

These standards have now been given to the w3c and they are working on a further standard based on the fido ones.

https://fidoalliance.org/specifications/overview/


"is the best way we have to combat pervasive surveillance"

LOL, how fucking naive people are. HTTPS is good, yes. But HTTPS is also being fostered by service providers (Googles, Facebooks and other PRISM supporters) to try to decrease revenue from operators and network providers, making sure they stay as dumb data pipes.

Google is probably the first company out there helping the government and other parties gather all your data, filter it and serve it to whomever is the highest bidder or for whichever obscure political reason.

If Google and the like were so big on privacy they'd do more to ensure that users can directly and privately talk to each other without intermediaries.


I agree. HTTPS is only good against your neighborhood script kiddie and your ISP injecting ads. It's also useful if you want to use HTTP2 features.

It's naive to think it's any good against states or sophisticated attackers.


Isn't HTTP2 supposed to be encrypted by default? I don't see that changing any time soon. I remember Firefox tried opportunistic HTTP1 encryption, but they rolled that back in a few days, and AFAIK, no one supports it.


In theory HTTP2 isn't required to use HTTPS, but in practice I don't think any browsers support HTTP2 without it.


A solution is to separate the encryption and authentication parts of SSL icons. Let's Encrypt and the like get only the "encrypted" icon; the older CAs also get the "authenticated" icon.


Ignoring the price tag, there's no difference between Let's Encrypt and "other old CAs". They both perform the same kind of validation for DV certificates. Some might even say Let's Encrypt does more than others, given that they support CAA.
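For reference, CAA is just a DNS record that whitelists which CAs may issue for a domain. In a zone file it might look like this (example.com is a placeholder):

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
```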

Other types of certificates (such as EV, which attempt to verify the business entity requesting the certificate exists and is indeed requesting it) already get different treatment in most browsers (namely the organization's name being shown in a green box).


Do you think people who haven't studied crypto will understand the distinction between the two?

I'm still amazed at the fact that FCC regulations confuse encryption with authentication. On ham radio frequencies, you usually can't transmit encrypted traffic, because that's not what the frequency allocation is for. But for certain types of remote command of automated craft, you're allowed to encrypt the message because someone sending a tampered/spoofed message could crash your craft. Which completely confuses encryption and authentication! And that's from people whose entire job is to think about rulemaking for electronic communication.


Are you suggesting that signing a cleartext message would not involve encryption? Because I'm pretty sure the way "signing" works is to encrypt a hash of the message. So you need to send an encrypted message either way, even if it's a smaller one.


No, that's not how signing works - encryption is a function of a message and a public key, such that you can take the encrypted message and a private key and recover the cleartext. Since the public key is known to everyone, an attacker can modify the message, generate their own hash, and encrypt that with the public key, just as easily.

It's true that you can construct a signing scheme by using a decryption algorithm: treat the hash as if it were a ciphertext and "decrypt" it with the private key, then anyone can verify it by "encrypting" it with the public key. That way only the person with the private key can sign it, which is the direction you want. But this doesn't run afoul of the FCC's rules, because you're not obscuring the message; everyone has the public key.

If you're using so-called "textbook RSA" (i.e., the type of RSA you can explain on a whiteboard, with no hardening against real-world attacks), it's also true that the encryption and decryption operations are the same mathematical function. But you don't actually want to use textbook RSA, at the very least because of malleability (if you multiply two textbook-RSA ciphertexts, you get the same thing as if you'd decrypted them, multiplied the cleartexts, and encrypted that, except you don't need to know the private key), but also because of determinism, timing attacks, etc. Real-world RSA signature algorithms bear little resemblance to real-world RSA encryption or decryption algorithms.
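The malleability property is easy to demonstrate with toy numbers. This is a sketch using a classic textbook key (p=61, q=53); parameters this small are of course completely insecure, and `pow(e, -1, m)` for the modular inverse requires Python 3.8+:

```python
# Textbook RSA with tiny, insecure demonstration parameters.
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (2753)

m1, m2 = 42, 5
c1 = pow(m1, e, n)  # encrypt m1 with the public key
c2 = pow(m2, e, n)  # encrypt m2 with the public key

# Multiplying the ciphertexts yields a valid ciphertext of m1 * m2,
# without ever touching the private key -- that's malleability.
forged = (c1 * c2) % n
assert forged == pow(m1 * m2, e, n)
assert pow(forged, d, n) == m1 * m2  # the holder of d decrypts 210
```

This is exactly why real-world schemes add randomized padding: it destroys the multiplicative structure that makes this trick work.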


Well... depends on the authentication scheme.

One authentication scheme which doesn't involve any encryption is HMAC with a shared secret key. Sure, you need the key to authenticate the message -- but that's perfectly fine in many situations.
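A minimal sketch of that scheme, using Python's standard `hmac` module (the key and command string are made-up placeholders):

```python
import hashlib
import hmac

key = b"shared-secret"            # pre-shared key, known to both ends
msg = b"THROTTLE=40;HEADING=270"  # command sent completely in the clear

# The sender transmits msg plus this tag; nothing is obscured.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, tag: str) -> bool:
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, msg, tag)                             # genuine command accepted
assert not verify(key, b"THROTTLE=99;HEADING=270", tag)  # tampering detected
```

Anyone can read the command; only someone holding the key can produce a tag the receiver will accept.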


It's 2016 and I still know no casual Internet user that looks for the green padlock before entering sensitive information. Heck, even many of my peers (developers, etc.) either don't seem to care or are ignorant of it. I don't think it's their fault, I believe it's more of an educational issue. Perhaps browsers should display a confirmation dialog when the user is trying to send some sensitive information over HTTP.


Firefox used to have a preference you could set:

  security.warn_submit_insecure
When set to true, Firefox would warn the user that the data they were about to send was not encrypted and would be sent over plain-text HTTP.

This setting was removed at some point (FF19, IIRC) and is currently (AFAICT) a NOP; cf. bug #799009, "Remove support for obsolete SSL-related warning prompts" [0]

[0]: https://bugzilla.mozilla.org/show_bug.cgi?id=799009


Chrome is going to start displaying an (admittedly easy-to-miss) warning in situations like that, starting in January: https://security.googleblog.com/2016/09/moving-towards-more-...


Why is HTTPS promoted as a way to fight surveillance when the typical HN user knows HTTPS does no such thing?

How does HTTPS stop the NSA or any resourced state actor? Even ordinary, run-of-the-mill organizations routinely MITM HTTPS.

And to add insult to injury, this naive "HTTPS protects against surveillance" line is promoted aggressively by people and entities who are known to be in bed with state entities, are leading purveyors of spyware, and are aggressively building profiles of individuals every second.


The interests of Google and users are aligned when it comes to making sure that intelligence agencies and other criminals can't read everyone's communications. I don't trust Google across the board but I'm still happy that Google promotes HTTPS.

If a nation-state attacker decides they really want your secrets they will probably get them. This can mean measures up to and including clandestine hardware modifications of systems you use. HTTPS isn't going to save you in that case.

HTTPS can, however, take a lot of the sting out of pervasive information insecurity promoted by nation states. It forces nation state attackers to go from passive to active attacks if they want to tamper with your system or read the contents of your communications. Pinned keys can thwart MITM tampering, when pinning is available, and even when an active attack can compromise security it's more likely to leave traces. It forces a natural balance on intelligence agencies that are (sadly, apparently) un-constrained by the consent of the governed: use that great 0-day exploit to monitor everyone you slightly suspect and it will be discovered/fixed that much faster.

HTTPS doesn't do anything to prevent metadata based surveillance, of course. But keeping the contents of communications private still imposes significant and worthwhile limitations on how metadata surveillance is used for further targeting. Any broad security measures that make the NSA's adversarial role harder to perform also make life harder for attackers from other nation-states; I'd rather that everybody be able to keep the contents of their conversations secret than nobody be able to, if we have to pick one extreme or the other.


I guess maybe you're being downvoted for the tone of your comment, but I came here wondering the same thing. Usually we think of communication as secure when messages can be read only by their intended recipients. We don't really get that when we're using the public key infrastructure in modern browsers. I'm not sure how much it matters (I don't mind if the government knows my credit card number), but I don't like the way PKI makes "private communication" sort of ambiguous...


The public key infrastructure in modern browsers allows you to use HPKP, which gives you all you need in order to fully control that a message can only be read by its intended recipient, if you need that level of assurance. (Yes, it's TOFU, but is there any large-scale protocol that isn't either TOFU or requires out-of-band verification?)
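For reference, HPKP is just a response header carrying hashes of public keys the site promises to keep using (the pin values below are placeholders; a backup pin is mandatory per the spec):

```
Public-Key-Pins: pin-sha256="<base64 SPKI hash of current key>";
                 pin-sha256="<base64 SPKI hash of backup key>";
                 max-age=2592000; includeSubDomains
```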

The public key infrastructure in modern browsers is also about to require that CAs use Certificate Transparency (a year from now), which is definitely an obstacle even for nation-state adversaries if they want to stay under the radar (once it's fully rolled out and backdating in order to bypass the requirement isn't possible anymore).

Finally, if you're the target of a nation-state adversary, there are probably things you'd have to worry about other than the Web PKI. It's unlikely to be the most effective way to own you. The Web PKI isn't the weakest link by far.


Here's to static text blog posts that don't display without JavaScript.

That surely doesn't make the web any safer ;P


Works fine without JS for me, except the Google+ comment box and some Youtube channel link widget thing.


Looking at the page source, the whole content can be found both within a <script type='text/template'> tag and within a <noscript> tag (yes, there are two copies of the whole content). The problem is that some Javascript blockers like uBlock Origin ignore the <noscript> tag; see https://github.com/gorhill/uBlock/issues/308.


I can't think what the engineers that implemented that stuff were thinking.

Blogger sort of worked fine until they hugged it with js and their latest super cool network and view engine. Let's see if they rewrite the whole blogger thing in webgl soon. I wouldn't be surprised.

Also I'm sick and tired of closing those cookie notifications.

PS. I browse the web with js enabled like the majority but there is no need to use js to render 50kb of text.


Not here. http://imgur.com/ERwQrQq is all I see


It is still expensive and cumbersome to implement HTTPS. Everybody seems to be neglecting the small websites deployed by hand, which are probably 99% of all the internet websites.


Cumbersome? For most non-professionals running a small site, absolutely. That's where hosting providers need to step up to make it as easy as possible.

Expensive? How?

Thanks to Let's Encrypt and others, certificates are now free.


Free is not the same as 'automatic' - until the process of obtaining a certificate and installing it on all the popular web hosting platforms is invisible to the site-owner then there will always be a barrier to implementation.


cPanel, which runs most of the cheap web hosting platforms, now includes a Let's Encrypt plugin. In the coming update, it will be a plugin enabled by default.


I'm sorry but I wouldn't call certificates with a lifetime of 90 days practical and I'm a developer.

Let's Encrypt is still far from being convenient and usable for small companies or individuals that don't want to manage infrastructures and administration like writing a cron job that refreshes your certificate.


Yes, that one-liner takes so much effort to write [EDIT: copy from one of the bazillion articles that have already done it for you]. That's why we can't have security. It takes too much effort.
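For reference, the one-liner in question is typically a single crontab entry (assuming the standard certbot client; `renew` is a no-op unless a certificate is close to expiry):

```
# attempt renewal twice a day; certbot only acts when renewal is actually due
0 */12 * * * /usr/bin/certbot renew --quiet
```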


For fun, next time you are on public transport, or sitting in an airport lounge, stand up and ask loudly "Does anyone know what cron is?"

The lack of raised hands should answer for you why Let's Encrypt is not ready for mainstream use.

Until cert renewal is baked into all the hosting apps/platforms, HTTPS will never be ubiquitous.


Most of those people don't host their own websites, either.


But there are some who do.

Not that they host it with their own computers, but they rent some space online and host it there.


uMatrix offers a feature to randomly pick a UA every 5 minutes from a set of known UAs. This works, although GMail might complain about the UA being too old. Reloading GMail resolves that.

EDIT: Replied to the wrong parent, please ignore in this context.


I'm pretty sure you replied to the wrong comment. Or I'm more lost than I thought...

EDIT: You can't delete comments with replies anymore. It's very annoying.


I did, sorry. And I cannot delete it, will amend the comment and find and comment in the right context.


I would counter that small web sites deployed by hand are a lot less than 99% of all internet sites, because hand-deployed sites tend to go to down and never come back up.


Most of those smaller sites are deployed using cPanel, and most cPanel providers have/will have a LetsEncrypt plugin to make automated HTTPS setup easy.


Ok, so we are good for all websites written in PHP (?)


I build a lot of "small sites" and you're correct - implementing SSL for them on a typical shared host is not drop dead easy (also not free).

I also host with WPEngine, a more progressive WordPress only host - which has made SSL implementation extremely simple. They're an example of a modern approach to web hosting.

It's the low cost, shared (LAMP) hosting providers that need to implement it into their sales process and make switching existing non-encrypted sites smooth. Unfortunately, I don't see them moving to "free SSL for everyone!" anytime soon as it's a revenue generator for most.


You could also use something like Laravel Forge to spin up a LEMP server which has lets encrypt support built in.

https://forge.laravel.com

(it's not exclusive to just laravel apps)


I wonder how effective HTTPS is against MITM attacks. When I type www.somedomain.com into my browser, it goes to http://www.somedomain.com. If the page then redirects to https://www.somedomain.com it is too late already.

A MITM attacker could intercept my initial request and then proxy everything afterwards. So he would get all the data in clear text.

Somebody could probably put an app onto a phone that offers a wifi hotspot and then MITMs everything. By walking around with that phone he would probably capture all kinds of "private" communication all the time. And none of us would know.

When I am in an airport and see a hotspot "Free Airport Wifi" I always suspect it is such an app in reality. I mean, everybody can call their hotspot "Free Airport Wifi".


There are two fixes for this:

* Pages can send the Strict-Transport-Security header to tell the client that future visits to them should be HTTPS-only

* Pages that send that header can also ask to be included in a list that ships with the browser to handle the first-time visitor case: https://hstspreload.appspot.com/
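To make the first fix concrete, here's what such a header looks like in a response (the values are just typical ones: a one-year max-age, with includeSubDomains and preload being required for inclusion in the browser preload list):

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```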


This can also be dangerous. If Globalsign starts sending bad OCSP data again, or your browser vendor blacklists your CA due to bad behavior from the CA (no fault of your own), you'll end up with a site that now can't be accessed at all. Not even to display "sorry, we're having some technical issues, please check back later" to the user, with all the login stuff disabled (and the site uses secure cookies by default anyway, so they won't leak in HTTP mode).

It is really difficult for the average user to clear HSTS/HPKP data. You can't just clear your history, you have to go to about:permissions or chrome://net-internals#hsts to do that.

Maybe that's the experience you want. But I think it'd be a tough sell to any business to have their users presented with a certificate error message for up to four days because Globalsign screwed up.

Public key pinning is even more dangerous, not least because there are no longer two free providers of SSL certificates for your backup cert.

This is yet another problem that could be solved with DNS signing -- both the risks of HSTS and HPKP would be gone. The domain could have a new TXT record of "TLS=Always" that you could flip on or off as needed. The TTL on that could be 3600 seconds instead of 4+ days for a bad OCSP response, or the suggested (by SSL Labs) 6+ months for an HSTS response.
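In zone-file form, the record proposed here might look like the following (note that "TLS=Always" is this comment's own invention, not an actual standard):

```text
example.com.  3600  IN  TXT  "TLS=Always"
```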


> But I think it'd be a tough sell to any business to have their users presented with a certificate error message for up to four days because Globalsign screwed up.

Are you making the argument that a business would want the ability to turn off HTTPS enforcement temporarily in case something like the GlobalSign thing happens? Because I think that's a terrible idea. They should just switch to a different CA, or get the CA to sign a certificate that's not impacted by that issue (which I believe GlobalSign offered). Even excluding Let's Encrypt, the price tag for that is < $10 and it's done in a matter of minutes (and only getting better with ACME). Since we're talking about businesses here, better yet: Have a backup certificate from a different CA ready. This is really no different than all the other things you'll need to do to run a reliable service (like have more than one web server). It's not like one CA screwing up means your domain is dead until they've resolved the issue. Same thing goes for HPKP, since a backup pin is required.

Your argument is also not specific to HSTS/HPKP - you're basically saying that HTTPS in general is dangerous for a business because of this. If your site offers HTTPS without HSTS and without redirecting to HTTPS by default, you'd still have users who bookmark HTTPS links, or search engines indexing those links and all of those would also fail, with no way for you to fix it other than switching CAs.

> This is yet another problem that could be solved with DNS signing -- both the risks of HSTS and HPKP would be gone.

I don't want to get into yet another DNSSEC discussion (that's what we have @tptacek for, anyway), but I don't see a huge difference here. Switching CAs can be done in less time than the TTL you're suggesting. Practically speaking, if you want to factor in "time to find a new CA" or something like that, I'd argue that most people won't run their own DNSSEC infrastructure and rather use something like Cloudflare, so the same "time to find a new <foo>" principle would apply here if they've messed up in a way that takes a long time to resolve.


> If Globalsign starts sending bad OCSP data again, or your browser vendor blacklists your CA due to bad behavior from the CA (no fault of your own), you'll end up with a site that now can't be accessed at all

That's only true for somewhat carelessly configured HPKP.

With HSTS you can use any valid certificate. So all you need is obtain a certificate from another vendor. With Let's Encrypt, this will take you fifteen minutes tops.

A careful administrator deploying HPKP would probably pin a key they have securely stored somewhere and, should disaster strike, go out and buy a certificate for that pinned key. Again, that would take an hour, at a cost probably lower than lunch.
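For reference, an HPKP pin is the base64-encoded SHA-256 hash of the key's DER-encoded SubjectPublicKeyInfo, which is why you can compute and pin it for a backup key long before any certificate exists for it. A minimal sketch (in practice the input would be real SPKI bytes extracted from your stored key, not the placeholder used here):

```python
import base64
import hashlib

def hpkp_pin(spki_der: bytes) -> str:
    """Compute a pin-sha256 value as used in the Public-Key-Pins header:
    base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# The resulting value is what goes into the header, e.g.
#   Public-Key-Pins: pin-sha256="<value>"; pin-sha256="<backup>"; max-age=5184000
```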


That's why preloaded HSTS exists - basically a list of domains included in browser binaries that those browsers will only ever visit via HTTPS.


And it's also why https everywhere is important. If your bank/shop/confidential whatever website isn't accessed via https, you should leave immediately. Once https is close to 100%, browsers could display a warning: "You seem to be entering confidential information on a non-https website. This is probably wrong".

(Ironically, IE used to ship with exactly that warning years ago, but they were way ahead of their time.)


That's a good point, and exactly what Chrome is planning on doing[1]. Firefox has a similar feature for HTTP-only sites using password fields (I think it's currently enabled in Firefox Developer Edition).

[1]: https://security.googleblog.com/2016/09/moving-towards-more-...


> Firefox has a similar feature for HTTP-only sites using password fields

Man, people really don't think these things through, do they?

The end result of making that enabled by default would be that sites start working around the detection. Try warning on <input type='password'> and sites will switch to <input type='text'> fields with JavaScript-based masking to get around it. Throw warnings on any POST submission and watch sites start using GET for passwords.

You can't force people into good practices. You need to make the HTTPS portion so easy that they'd have to go out of their way to undermine security. Let's Encrypt was a good start. Now we need at least one competitor offering free certificates to push innovation forward. And we need Apache and nginx to auto-negotiate TLS certificates and OCSP stapling with zero configuration changes - the default config should opt in to this, and you'd have to manually disable it. Caddy was a good step there, but nobody uses Caddy. They use Apache, nginx, or IIS.


Wouldn't it be easier to make the browser use HTTPS by default? And then, if a domain doesn't respond to an HTTPS request, ask the user: "There seems to be no secure way to communicate with www.somedomain.com - want to switch to insecure?"


Unfortunately, that would break a large number of sites out there. Some serve different content (as in: not what the visitor would be expecting) on HTTPS, many would have mixed content warnings, etc. Just take a look at the ruleset used by the HTTPS Everywhere extension[1] to get a sense for how much of the web would break in weird ways if browsers decided to do this today. This'll happen eventually, but with 50% of page loads using HTTPS, we're not quite there yet.

[1]: https://github.com/EFForg/https-everywhere/tree/master/src/c...
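For a sense of what those rulesets look like, a minimal one follows this general shape (real rulesets often carry exclusions and per-host rewrites for the paths that break over HTTPS):

```xml
<ruleset name="Example">
  <target host="example.com" />
  <target host="www.example.com" />
  <rule from="^http:" to="https:" />
</ruleset>
```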


It's tricky: how is a user supposed to distinguish between "this site doesn't support HTTPS at all" and "a MitM is making the HTTPS site slow so I give up and use the HTTP version"?

What if it's their bank and they really need to pay a bill right now? Maybe the HTTPS site is just broken right now.

Users aren't wrong for clicking yes through these dialogs. This kind of UX is setting them up for failure with impossible choices. It's even worse in practice: users (including programmers, as evidenced by the comments in every HN thread about HTTPS) won't understand everything that's in play before making this decision. Assume users always click yes.

HSTS, preloading, restricting new features to HTTPS, etc. are ultimately working to deprecate plain HTTP (inside browsers).


Most of what you are saying makes sense, except this bit:

> "Maybe the HTTPS site is just broken right now."

In the case of a bank, it's extremely unlikely that they are falling back to HTTP. It's much more likely that sslstrip is in place if you see this. The cost and hassle of someone having access to your whole bank account is likely much higher than the fine for a late payment.


I get that, but most users don't.


Probably won't sit well with website owners. If you attempt to connect over TLS when it isn't supported, there's a performance impact: you have to make one failed connection attempt for every HTTP request. Do that for every request on a page and performance becomes a big problem.

You'd also be going against the specs: HTTP should go to port 80 (which typically isn't secured). A browser unilaterally deciding to go against a legitimate user's wish is bad form.


When I moved my message board to HTTPS, we had mixed content warnings on avatars. After adding a Content-Security-Policy header to upgrade insecure requests, we found that most sites without HTTPS failed quickly. But five or six people had domains that would "load" for 10+ seconds before finally timing out. Which had the effect of making forum topic pages take 10+ seconds to stop showing the "page loading" animation.

Any site lacking HTTPS should instantly refuse connection attempts on port 443. But some, it seems, choose not to respond at all, leaving the browser unsure whether the site is just slow or whether an HTTPS service doesn't exist.
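The header used above for the mixed-content fix is a one-liner; assuming a standard setup it looks like this, and it makes the browser rewrite http:// subresource URLs (avatars, images, scripts) to https:// before fetching them:

```http
Content-Security-Policy: upgrade-insecure-requests
```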


Hum... If I explicitly type "http://example.com", then by all means, go to the plaintext site. But everybody just types "example.com" and expects the browser to go to the site - this one should favor the TLS version, not the plaintext one.

Besides, now that most sites run on https, making the http-only ones slower would be a good thing.
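A rough sketch of that "try the TLS port first" idea (hypothetical browser behavior, not how any shipping browser works; the timeout is the interesting knob, since a silently non-responding port 443 is exactly what makes pages hang, as described elsewhere in this thread):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_scheme(host: str) -> str:
    # Prefer HTTPS; fall back to plain HTTP only if 443 is closed or unreachable.
    return "https" if port_open(host, 443) else "http"
```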


I think the current approach of allowing site owners to decide how they're going to support TLS is the right one; browsers taking it upon themselves to enforce it is a bad idea. Some additional reasons why it is a bad idea have already been mentioned by other folks.

When it comes to security, TLS is crucial to the security infrastructure of the internet as a whole; however most of the current security problems on the internet don't actually involve TLS vulnerabilities, even though those tend to make a lot of news in the technology press.

The bar for exploiting TLS vulnerabilities generally requires setting up a legitimate-seeming entity and injecting yourself into the internet's traffic-routing infrastructure. It is very much possible (and indeed easy for state actors), but it is a much higher bar for common cyber criminals.


How is defaulting to https instead of http not allowing the site owners to decide?

Currently, site owners have no way to default to HTTPS on a first visit. Trying both would give them that choice.


They can decide themselves by 301ing to http and setting some sort of "I'm a crappy website, just use insecure requests" header.


They can't 301 if they don't already support TLS. If you attempt to connect over TLS to a site that doesn't support it, for starters port 443 is not likely to be open. 301 is part of the HTTP protocol; TLS is at the transport layer.


If it helps, the attack you're describing is called sslstrip and was presented by Moxie Marlinspike at Black Hat in 2009: https://www.youtube.com/watch?v=MFol6IMbZ7Y (code here: https://moxie.org/software/sslstrip/)

As others have mentioned, the current best defence against this is HSTS (and, in particular, HSTS preloading). I think the best explanation of it is here: https://www.chromium.org/hsts/



Does anyone know why there are holes in the ChromeOS chart?

At first I thought the fact that the percentage is higher is because by default they contact more google services (presumably all over https) than other os/browsers. If this hypothesis is correct, does it mean that the holes are periods where some of these services were failing to provide https?


I think it's more likely a temporary spike in popularity of some non-HTTPS site than a temporary failure to provide HTTPS by a site that normally does.


Is there a need for IoT devices to serve HTTPS?


I'm curious how well that would work. I think it would be awkward to deploy a secret key to a (difficult-to-configure) home device and then keep that secret key secure. It worries me that your IoT device might claim to use HTTPS but actually depend on a private key that had been compromised.



