Ugh. Let's Encrypt will issue you a single certificate for multiple domains on the same server. It's easy to set up, too.
It's not just for multiple subdomains like sub1.example.com and sub2.example.com. You can have any unrelated domains you want on the cert.
You don't need multiple IPs and you don't even need SNI with its legacy client compatibility problems (now mostly well past). Just get a certificate that covers all the domains you use.
There's no reason at all to suffer with the setup, security, and resource problems of SNI or multiple IPs outside extreme scenarios.
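A minimal nginx sketch of what that looks like (assuming nginx, which several commenters below use; hypothetical names, standard certbot paths):

    # One Let's Encrypt SAN certificate covering unrelated domains,
    # issued with something like:
    #   certbot certonly --webroot -w /var/www/acme \
    #     -d example.com -d example.org -d example.net
    server {
        listen 443 ssl;
        server_name example.com example.org example.net;

        # The same certificate and key serve every name in the SAN list.
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    }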
This is only a solution if you don't mind that anyone looking at the certificate can directly see every single domain you host as part of your setup.
Determined people would still be able to find that out, more or less, even without having it handy in their web browser under the certificate's Subject Alternative Name field. However, there is nothing stopping you from issuing separate certificates for each domain (possibly with subdomains) and configuring your web server appropriately with SNI.
I have been using Nginx to serve multiple HTTPS domains with certificates from Let's Encrypt since the first few weeks after it launched, so I am not sure why OP thinks it's strictly necessary to assign them separate IP addresses. Generally speaking, there is nothing wrong with that, and it would indeed be a somewhat cleaner solution, were it not for IPv4 scarcity, oh my...
I was under the impression that older clients wouldn't send the Host header over HTTPS, making it impossible to determine the correct certificate to serve in a shared-IP environment. Modern browsers all support SNI, which avoids exactly this problem, but compromise by sending the hostname in plain text, which may be a privacy concern; this is something that's still up for debate:
http://security.stackexchange.com/questions/86723/why-do-htt...
EDIT: I'm not sure what I was reading or who I was responding to. You mentioned this directly in your comment. Ignore my blathering, I'm tired. :)
They all send Host headers over HTTPS (unless it's HTTP/2, where the equivalent is the :authority pseudo-header). But the Host header doesn't get sent until after the encrypted transport is fully set up. And to set up the encrypted transport, the server needs to send a certificate. So the server needs to choose the certificate before it ever sees the Host header. That's the problem SNI solves: the client names the host inside the TLS handshake itself.
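That's also why a per-domain-certificate setup hangs each certificate off its own server block; a minimal nginx sketch with hypothetical names and standard certbot paths:

    # nginx matches the SNI hostname against server_name during the TLS
    # handshake, before any HTTP headers exist, so each block can carry
    # its own certificate.
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    }

    server {
        listen 443 ssl;
        server_name example.org;
        ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;
    }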
Only on Internet Explorer on Windows XP. Anyone on XP who uses Chrome or Firefox is just fine. If you are still using Internet Explorer on XP, then you probably have other problems from all the malware that has already installed itself on your computer.
Current versions of Chrome are no longer available for XP.
If you are still using XP, it's probably because you have to, and you don't have the knowledge to switch to something better. Ergo, it's highly possible that you are still using IE on XP as well, because you don't know any different, or cannot change it due to restrictions or policy.
It's an accessibility thing. If I designed a new web system that blocked off 10% of the populace, for whatever reason (deaf, blind, not able-bodied), then people would call me out on it.
In the main, it's unlikely that anyone still using XP is doing so because they want to. Not everyone is privileged enough to have access to modern equipment.
> It's an accessibility thing. If I designed a new web system that blocked off 10% of the populace, for whatever reason (deaf, blind, not able-bodied), then people would call me out on it.
Rightly so, because that's a constraint that cannot be changed.
Running an outdated, decommissioned operating system is something that can be changed. You have no obligation, moral or otherwise, to support Windows 3.1, OS/2 Warp, WAP browsers, Gopher clients, or IE5 running on Mac OS 9.
You can still choose to support outdated clients because it makes financial sense for your organisation - and many places do just that - just as a corporation running outdated software may choose not to update because that's what makes financial sense for them.
Equating support for an accessible service with support for outdated browsers is a non-starter.
> I have been using Nginx to serve multiple HTTPS domains with certificates from Let's Encrypt since the first few weeks after it launched
This is what I do as well, so I became pretty confused when reading the article. I've had no problems running my personal sites (side projects, really) from one NGINX server using different Let's Encrypt certificates for each one.
I used to be a huge fan of nginx and I haven't touched it in a year now. I don't miss it, Caddy is fantastic and handles the Let's Encrypt stuff for me.
That means your web server is not going to receive updates until you re-do this manually each time, which is dangerous and not at all something you should be using professionally.
It's the duty of distributions to pick up and package software. Maybe you could volunteer for the distro you use yourself? It is easier than one might think.
Can anyone else speak to Caddy? My interest is piqued. I serve several hundred thousand HTTP requests hourly in a load-balanced web cluster. I'll do some reading, but has anyone else used it for high traffic?
Caddy is pretty awesome. I use it to run my personal website (love that it can serve statically compiled Brotli assets out of the box). I maintain its Docker image here: https://github.com/ZZROTDesign/alpine-caddy :) Should be incredibly simple to set up!
Any device that doesn't support SNI wouldn't be modern enough to support secure ciphers, since weaknesses have been found in most of the older ones. Plus, anything below TLS 1.0 shouldn't be supported either (nor even TLS 1.0 if you're running something where security really matters). So you're better off dropping support for the aforementioned devices regardless of whether you choose to use SNI or not.
> You don't need multiple IPs and you don't even need SNI with its legacy client compatibility problems (now mostly well past). Just get a certificate that covers all the domains you use.
Horrible idea depending on your use case. If you are single-tenant, this might work out well for you.
If you are multi-tenant then the information leak is pretty nuts, and not something I can see any of my customers being alright with. That and the whole idea of giving the public access to my customer list is pretty silly to me.
Additionally in many environments the end-user can potentially access the private key on the server (think managed services environments) which is an obvious security hole. You'd think people would realize this, but in my experience they do not. In such cases you just let the private key walk out the door for every domain ever configured for that SSL certificate.
Not sure if you'll see this or if it's directly related. I also domain-map, but on an Apache server. This is probably an excuse, but while I realize Let's Encrypt is free, and although there is the 90-day thing that you could automate, I just go with the year-long $9.00 certificate for each domain. I was wondering, though: since it's domain-mapped, the root domain can bridge to other sites by subfolders, e.g.

My question (I could just as easily try it): I'm not sure if it's related to what you mentioned, where if you switch domains while logged into one you'll lose the session. I don't expect it to work. I just don't know what happens if people figured out "hey, this IP is hosting multiple sites" and worked out how to traverse each subfolder-site.

My question sucks, sorry. I'll have to try it out, I guess; at least my stuff is just for myself, not dealing with other people's info/sites.
This question is related to the information leak. I handle the domain mapping with virtual hosts. I also realized that when you switch from, say, non-www to www (I've seen that you should stick with one), you'll lose the session value, so I imagine I'm safe.

I actually just took down one site, as it was a useless domain, so I can't test the multiple SSL certificates at the moment. Yep, this question is a waste of time, my bad.
Just create a folder in ~/www for each host.
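One hedged sketch of how that per-host layout could map onto nginx, if that's your server (the names, paths, and single shared certificate are illustrative):

    server {
        listen 443 ssl;
        server_name example.com example.org;   # hypothetical names
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        # Serve each host from its own folder, e.g. ~/www/example.com
        root /home/you/www/$host;
    }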
It's been working great for around a year. The only interruption was due to Docker destroying my containers during the upgrade of docker-engine. Great software, guys...
That would be very kind, thank you.
I'm not the particular dev who has been doing the research, but I might read his notes later to catch up on what he perceived as the pitfalls of using ELB with Let's Encrypt.
You need to use SAN certificates to do this (which LE will do). You just need to be comfortable with every registered domain appearing on the certificate.
You don't want to do this, as it's a privacy violation. It will let attackers know the names of all the other websites you are serving, just by looking at any one of them.
Do you want people to know that your server for www.donaldtrumpisgreat.com/www.hillaryclintonisgreat.com is also serving your company's homepage?
This article advocates for an IP per domain over SNI. It's 2017; please use SNI. There aren't enough IPv4 addresses in the world. Every single major browser supports it, and has supported it for some time: http://caniuse.com/#search=sni
Unfortunately, for our ecommerce site this just isn't an option at all. Three months ago we analysed our traffic and found that 12% of our desktop traffic didn't support it (Win XP) and about 8% of our mobile traffic didn't support it (Android older than 4.0). I'm not losing 10% of my revenue just so I don't need to get a couple of extra IPs from AWS. And even better, AWS doesn't actually charge me for the IPs. Once IPs are priced in line with their scarcity, I'll start caring. Today, when I can get them for free? Not worth it.
If you're having to support devices that old then I'd be more worried about how you're going to take payment details on your e-commerce website over a "secure" connection that would fail most PCI DSS vulnerability scans.
The security of TLS has come a long long way since XP and so has research into breaking XP-era ciphers.
Deprecation of old stuff is going incredibly slowly even in rich countries (to the intense frustration of a lot of security teams). Check out the incredible true story, told over years on cabfpub, of the attempt to get rid of SHA-1 in TLS authentication. Notably, the attacks that led to experts' recommendation to move away from SHA-1 immediately were published back in 2005. Meanwhile, Microsoft's "effective date of the SHA-1 deprecation" is tomorrow (!), February 14, 2017.
Figuring out how people are going to get upgraded when problems of some sort are discovered (including software vulnerabilities, not just cryptographic protocol issues) is a major security challenge of our day, maybe the biggest information security problem overall in the world.
The vast majority of our users still on XP use IE, to the point that we conflate the numbers entirely. Most of those users are 50-year-old women with leftover computers handed down from relatives. Almost as a rule, they don't install Chrome or Firefox; they just use what is currently on the machine.
(author here) I would have loved to have SNI work. I wrote this article in response to having profound struggles making it work. My iPhone 7's Safari was routinely failing to connect to sites other browsers claimed were fine, when relying on SNI.
The day I swapped over to IP-based connections, the problem resolved itself immediately. If there is something I am missing, I would love to know what it is.
You might be talking about HTTP 1.1 Host header which allows vhosts for plaintext HTTP servers. SNI allows this to happen with TLS (HTTPS, etc.) servers.
You might be able to get useful debugging information with tcpdump or wireshark because SNI itself is sent in the clear (prior to the TLS cryptographic key exchange). You could see if the server is doing something different from other servers, or the browser is doing something different from other browsers.
I have been using Nginx to serve multiple HTTPS domains with certificates from Let's Encrypt since the first few weeks after it launched, so I am not sure why you think it's strictly necessary to assign them separate IP addresses. Generally speaking, there is nothing wrong with that, and it is indeed a somewhat cleaner solution, but it is definitely doable with SNI if you configure your web server appropriately.
Check out the IMHO best TLS SNI test website out there (https://sni.velox.ch/) and the Qualys SSL Labs server test (https://www.ssllabs.com/ssltest/). They may give you a starting point to find out what exactly went wrong with SNI. And the documentation of Nginx, of course.
In particular, I'm curious if this is a misconfiguration on the server end, or a misconfiguration on the client end. Certain VPNs or malware can break SNI.
Never had any SNI issues with Safari. Could it be the mobile network provider (which tends to insert things like NATed IPv6 that can cause weirdness), or did it also fail over wifi?
Older versions of wget and Java also don't support SNI; those show up, for example, in API HTTPS callbacks or Android apps.
One trick I use is to load that site (or the most important one) first in nginx, because clients that don't support SNI will be served that certificate (see the sketch below).
Another option, if you don't need encryption, is to allow http.
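A minimal nginx sketch of that first trick, marking the most important site as the explicit default instead of relying on block order (hypothetical names):

    # Non-SNI clients can't name a host during the handshake, so nginx
    # serves them the default server for the listen address.
    server {
        listen 443 ssl default_server;
        server_name example.com;    # the site non-SNI clients should get
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    }

    server {
        listen 443 ssl;
        server_name example.org;    # only reachable by SNI-capable clients
        ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;
    }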
There are plenty of reasons not to use SNI. First off, not everything is a web browser. While browser support for SNI is pretty good these days, other clients are far behind. That random app that connects to your web API has to support SNI too, which means some old PHP library somewhere has to as well. Good luck.
That and a surprising number of "regular" clients still seem to have issues with SNI. My numbers are quite dated, but even as recent as 3 years ago it was something like dropping 10% of traffic for a high traffic site I did A/B testing with. I'm guessing the number was even higher, but the client requested we immediately stop the test once it became apparent it was a major reachability issue affecting revenue.
I've never had a single problem with hosting multiple HTTPS domains using Nginx and Let's Encrypt. This article is somewhat baffling, considering his example of clients that need this is "mobile browsers", but I've used iOS and Android and it works just fine.
Me neither. I guess Cloudflare (to name but one service reliant on SNI) hasn't either...
This article is a bit of an unusual response to the problem of one device not working. When a device of mine doesn't work but others do, I put it down to there being a problem with my device, not with a widely used technology (like SNI).
The article explains that SNI isn't working:
> But while this is widely supported, it is not supported ubiquitously. I've personally had a hell of a time fighting with mobile browsers when relying on SNI. On the other hand, IP addresses are cheap. Like, $1/mo or less, cheap. So buck up and grab an distinct IP for your HTTPS sites. Avoiding the headache of some device/browser combos not working will pay for itself 100 times over.
So that means you will use pre-TLS 1.0 protocols to support those browsers that cannot deal with SNI.
You must feel very smart to be able to support old Android browsers like Gingerbread, which represents 1% of the Android market share [0], and pre-4.0 iOS browsers, which represent less than 0.1% of the iOS market share [1].
Now according to TLS/SSL support history of web browsers [2] your server is vulnerable to BEAST, POODLE, CRIME, etc.
You are making incorrect assumptions and running with them.
As I stated in the article, and on this HN comment page, my issues were not with antiquated browsers. Safari on my iPhone 7 was failing to connect to sites other browsers were handling fine. I went down this rabbit hole of IP-based differentiation specifically because of that issue.
There seems to be some sense that I published an article about the hardest way to achieve this. I promise that had relying on SNI worked like I expected it to, the IP-based section of the article would be absent. But it didn't, and like I said, IPs are cheap. Adding one step to a process in exchange for ubiquitous support seems like a reasonable approach to me.
NixOS modules are built around Nix and systemd so theoretically you could write a port for a different GNU/Linux distribution if you have those available. I'm not aware of any though. There is however a variant for Darwin based on launchd: https://github.com/LnL7/nix-darwin
I use SNI for my Apache Let's Encrypt script. This allows one IP (my home IP) to host many sites easily; the script monitors changes to sites-enabled to trigger creation of new SAN certificates based on the contents of ServerName and ServerAlias, and it also regenerates certs for sites-enabled every 30 days.
Included is a daemontools run script; the script runs in a loop, but should it die, you want it to restart. I added a supervise command to my /etc/rc.local to make this run when the web server comes up.
I've been trying to do this for a couple of weeks. I have no idea what I'm doing, and it's been hard to find any help via Google. But I finished it last Friday, without multiple external IP addresses. Funny to see this as the top story when I woke up today. But yes, as caleblloyd says, it's 2017. Use SNI. It's not hard; I'd never even heard of nginx or letsencrypt before I started my project.
The only clients I've had SNI trouble with are Amazon's and Apple's Java clients, as well as python2. It's unfortunately still not possible to host a podcast feed with an SNI HTTPS URL in iTunes, nor can you use SNI for Alexa skills. Otherwise, I've been happily using SNI for years now.
Interested to read this and the comments here, as I was just poking around with doing just that. I hadn't been planning on using a second IP address for it, and now I'm wondering how well it will work without it.