Hosting Multiple HTTPS Domains from the Same Server with Let's Encrypt and Nginx (benroux.me)
173 points by liquidise on Feb 13, 2017 | hide | past | favorite | 72 comments


Ugh. Let's Encrypt will issue you a single certificate for multiple domains on the same server. It's easy to set up, too.

It's not just for multiple subdomains like sub1.example.com and sub2.example.com. You can have any unrelated domains you want on the cert.

You don't need multiple IPs and you don't even need SNI with its legacy client compatibility problems (now mostly well past). Just get a certificate that covers all the domains you use.

There's no reason at all to suffer with the setup and security and resource problems of SNI or multiple IPs outside extreme scenarios.
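For what it's worth, the multi-domain case is a single certbot invocation; each -d flag adds another name to the certificate's SAN list. A sketch, not a drop-in command (the webroot path and domains here are placeholders):

```shell
certbot certonly --webroot -w /var/www/html \
  -d example.com -d www.example.com -d totally-unrelated.org
```

Renewal works the same way; certbot remembers the name list, so `certbot renew` covers all of them.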


This is a solution if you don't care that anyone looking at the certificate would be able to directly see every single domain that you are hosting as part of your setup.

Determined people would still be able to find this out, more or less, even without having it handy in their web browser under the certificate's Subject Alternative Name field. However, there is nothing stopping you from issuing separate certificates for each domain (possibly with subdomains) and configuring your webserver appropriately with SNI.

I have been using precisely Nginx to serve multiple HTTPS domains with certificates from Let's Encrypt since the first few weeks after it came out, so I am not sure why OP thinks it's strictly necessary to assign them separate IP addresses. Generally speaking, there is nothing wrong with that, and it is indeed a somewhat cleaner solution, if it wasn't for the IPv4 examples, oh my...
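For the record, the Nginx side of the separate-certificates setup is just one server block per domain, each pointing at its own cert; SNI selection happens automatically. A minimal sketch assuming the default certbot paths and placeholder domain names:

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}

server {
    listen 443 ssl;                     # same IP and port as above
    server_name another-site.org;
    ssl_certificate     /etc/letsencrypt/live/another-site.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/another-site.org/privkey.pem;
}
```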


I was under the impression that older clients wouldn't send the host header over HTTPS, making it impossible to determine the correct certificate to serve in a shared-IP environment. Modern browsers all support SNI, which solves exactly this problem, but compromise by sending the hostname in plain text, which may be a privacy concern; this is something that's still up for debate: http://security.stackexchange.com/questions/86723/why-do-htt...

EDIT: I'm not sure what I was reading or who I was responding to. You mentioned this directly in your comment. Ignore my blathering, I'm tired. :)


They all send host headers over HTTPS (unless it's HTTP/2, because the protocol is different). But the host headers don't get sent until after the encrypted transport is fully set up. And to set up the encrypted transport, the server needs to send a certificate. So the server needs to send the certificate before it sees the host header. That's what SNI helps with: it puts the hostname into the TLS handshake itself, so the server can pick the right certificate.
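You can see this ordering for yourself: the SNI hostname rides in the very first, unencrypted ClientHello message. A small standard-library Python sketch that captures a ClientHello without ever touching the network:

```python
import ssl

def client_hello_bytes(hostname):
    """Capture the raw TLS ClientHello for `hostname` without any network I/O."""
    ctx = ssl.create_default_context()
    incoming = ssl.MemoryBIO()   # bytes arriving from the "server" (left empty)
    outgoing = ssl.MemoryBIO()   # bytes the client wants to put on the wire
    conn = ctx.wrap_bio(incoming, outgoing, server_hostname=hostname)
    try:
        conn.do_handshake()
    except ssl.SSLWantReadError:
        pass  # expected: the handshake stalls waiting for a ServerHello
    return outgoing.read()

hello = client_hello_bytes("example.com")
# The SNI extension is part of this first, unencrypted message, so the
# hostname is visible as plain ASCII before any certificate is exchanged.
assert b"example.com" in hello
```

This is also why SNI is the privacy concern mentioned upthread: anyone on the path sees the hostname before encryption starts.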


SNI doesn't work on Windows XP.

"Who still uses Windows XP?", I hear you ask.

Just under 10% of all users[1]. Enough to make SNI problematic. In a few years time, we'll be OK, but not right now.

---

[1] https://www.netmarketshare.com/operating-system-market-share...


On Internet Explorer on Windows XP. Anyone on XP who uses Chrome or Firefox is just fine. If you are still using Internet Explorer on XP, then you probably have other problems from all the malware that has already installed itself on your computer.


Current versions of Chrome are no longer available for XP.

If you are still using XP it's probably because you have to, and you do not have the knowledge to switch to something better. Ergo, it's highly possible that you are still using IE on XP as well, as you don't know any different, or cannot change it due to restrictions or policy.

It's an accessibility thing. If I designed a new web system that blocked off 10% of the populace, for whatever reason (deaf, blind, not able-bodied), then people would call me out on it.

In the main, it's unlikely that anyone still using XP is doing so because they want to. Not everyone is privileged enough to have access to modern equipment.


  It's an accessibility thing. If I designed a new web system that
  blocked off 10% of the populace, for whatever reason (deaf, blind,
  not able-bodied), then people would call me out on it.
Rightly so, because that's a constraint that cannot be changed.

Running an outdated, decommissioned operating system is something that can be changed. You have no obligation, moral or otherwise, to support Windows 3.1, OS/2 Warp, WAP browsers, Gopher clients, or IE5 running on Mac OS 9.

You can still choose to support outdated clients because it makes financial sense for your organisation (and many places do just that), just as a corporation running outdated software may choose not to update because that's what makes financial sense for them.

Equating support for an accessible service with support for outdated browsers is a non-starter.


> Not everyone is privileged enough to have access to modern equipment.

And there you go. If you can't be bothered to move off an ancient operating system, that's just too bad. I don't care about people running DOS either.

Luckily it is a lot easier to switch to a modern operating system than to replace nonfunctional parts of the human body.


> I have been using precisely Nginx to serve multiple HTTPS domains with certificates from Let's Encrypt since the first few weeks after it came out

This is what I do as well, so I became pretty confused when reading the article. I've had no problems running my personal sites (side projects, really) from one NGINX server using different Let's Encrypt certificates for each one.


Also, plugging Caddy: https://caddyserver.com/

I used to be a huge fan of nginx and I haven't touched it in a year now. I don't miss it, Caddy is fantastic and handles the Let's Encrypt stuff for me.


I can never take caddy seriously until they get serious about updates and start working with linux packages.

When you have to do this: https://gist.github.com/Jamesits/2a1e2677ddba31fae62d022ef8a...

That means your webserver is not going to receive updates until you re-do this manually each time, which is dangerous and not at all something you should be using professionally.


It's the duty of distributions to pick up and package software. Maybe you could volunteer for the distro you use yourself? It is easier than one might think.


We will look into it after we hit 1.0.


Can anyone else speak to Caddy? My interest is peaked. I run several hundred thousand HTTP requests in a load balanced web cluster hourly, I'll do some reading, but has anyone else used it for high traffic?


I don't know anything about Caddy but for some reason I feel compelled to tell you that it's "piqued", not "peaked".


Caddy is pretty awesome. I use it to run my personal website (love that it can serve statically compiled Brotli assets out of the box). I maintain its Docker image here: https://github.com/ZZROTDesign/alpine-caddy :) Should be incredibly simple to set up!


How does Caddy compare to nginx performance-wise? I have a similar setup to yours running on nginx.

For load balancers I mainly care about performance, not the usability of the configuration language.


I hear this question a hundred different ways. What do you even mean by performance? There are so many dimensions to a web server.


Any device that doesn't support SNI wouldn't be modern enough to support secure ciphers, since weaknesses have been found in most of the older ones. Plus, anything below TLS 1.0 shouldn't be supported either (nor even TLS 1.0 if you're running something where security really does matter). So you're better off dropping support for the aforementioned devices regardless of whether you choose to use SNI or not.


> You don't need multiple IPs and you don't even need SNI with its legacy client compatibility problems (now mostly well past). Just get a certificate that covers all the domains you use.

Horrible idea depending on your use case. If you are single-tenant, this might work out well for you.

If you are multi-tenant then the information leak is pretty nuts, and not something I can see any of my customers being alright with. That and the whole idea of giving the public access to my customer list is pretty silly to me.

Additionally in many environments the end-user can potentially access the private key on the server (think managed services environments) which is an obvious security hole. You'd think people would realize this, but in my experience they do not. In such cases you just let the private key walk out the door for every domain ever configured for that SSL certificate.


Not sure if you'll see this or if it's directly related. I also domain map, but on an Apache server. This is probably an excuse, but while I realize Let's Encrypt is free, and the 90-day renewal is something you could automate, I just go with the year-long $9.00 certificate for each domain. I was wondering though: since it's domain mapped, the root domain can bridge to other sites by subfolders, e.g.

/var/www/html/main-domain | https://mainsite.com

/var/www/html/main-domain/domain2 | https://somesite.com

/var/www/html/main-domain/domain3 | https://somesite2.com

My question (I could just easily try it): I'm not sure if it's related to what you mentioned, where if you switch domains while logged into one you'll lose the session. I don't expect it to work. I just don't know what happens if people figure out "hey, this IP is hosting multiple sites" and could work out how to traverse each subfolder-site.

My question sucks sorry. I'll have to try it out I guess, at least my stuff is for myself not dealing with other people's info/sites.

This question is related to the information leak. I handle the domain mapping with virtualhosts. I also realized that when you switch from, say, non-www to www (I saw you should stick with one), you'll lose the session value, so I imagine I'm safe.

I actually just took down one site as it was a useless domain so I can't test the multiple ssl certificates at the moment. Yeap this question is a waste of time my bad.


I was about to comment pretty much all of these points. As soon as I read about multiple IPs I wondered what was going on.

I'm starting my own tech blog for similar articles, maybe I'll do a simpler example for my first post.


Exactly.

I made this for exactly this usage: https://github.com/fenollp/nginx_ssl_compose

Just create a folder in ~/www for each host. It's been working great for around a year. Only interruption was due to docker destroying my containers during the upgrade of docker-engine. Great software guys...


Indeed this is what a SAN cert does. From Letsencrypt's FAQ:

"Can I get a certificate for multiple domain names (SAN certificates or UCC certificates)?

Yes, the same certificate can contain several different names using the Subject Alternative Name (SAN) mechanism."

Source: https://letsencrypt.org/docs/faq/


Even with AWSs load balancers? We are trying to solve this issue right now in house.

Any recommendations/war stories/further reading would be greatly appreciated.

Related forum post:

https://forums.aws.amazon.com/message.jspa?messageID=520926


Yes! I do this with NGINX using SNI (to serve multiple SSL certs from the same IP) and using Proxy Protocol on the ELB to send the requests to NGINX.

What you need to do is enable the proxy protocol on the ELB and then point to NGINX. http://docs.aws.amazon.com/elasticloadbalancing/latest/class...

You need to also enable the proxy_protocol in nginx.

  server {
    listen    443 ssl proxy_protocol;
    ...
  }
I'll write something up and share it. But hopefully this will help you in the short term.
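One addition that usually goes with this setup (a sketch; the subnet is a placeholder for wherever your ELB lives): with proxy_protocol enabled, nginx can recover the original client address from the PROXY header via the realip module, otherwise your logs show only the ELB's IP.

```nginx
server {
    listen 443 ssl proxy_protocol;
    set_real_ip_from 10.0.0.0/8;        # the subnet your ELB lives in
    real_ip_header   proxy_protocol;    # take the client IP from the PROXY header
    ...
}
```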


That would be very kind, thank you. I'm not the particular dev who has been doing the research, but I might read his notes later to catch up to what he perceived as the pitfalls of using ELB with LetsEncrypt.


You need to use SAN certificates to do this (Which LE will do). You just need to be comfortable with having every domain registered appearing on the certificate.


"... setup and security and resource problems of SNI..."

As a user who really dislikes SNI, I would like to see someone write more about these problems.


You don't want to do this, as it's a privacy violation. It will let attackers know all the names of all the other websites you are serving, just by looking at one of those website.

Do you want people to know that your server for www.donaldtrumpisgreat.com/www.hillaryclintonisgreat.com is also serving your company's homepage?


This article advocates for IP Per Domain over SNI. It's 2017, please use SNI. There's not enough IPv4 addresses in the world. Every single major browser supports it, and has supported it for some time: http://caniuse.com/#search=sni


Unfortunately, for our ecommerce site this just isn't an option at all. 3 months ago we analysed our traffic and found that 12% of our desktop traffic didn't support it (Win XP) and about 8% of our mobile traffic didn't support it (Android older than 4.0). I'm not losing 10% of my revenue just so I don't need to get a couple extra IPs from AWS. And even better, AWS doesn't actually charge me for the IPs. Once IPs are priced in line with their scarcity, I'll start caring. Today, when I can get them for free? Not worth it.


If you're having to support devices that old then I'd be more worried about how you're going to take payment details on your e-commerce website over a "secure" connection that would fail most PCI DSS vulnerability scans.

The security of TLS has come a long long way since XP and so has research into breaking XP-era ciphers.


Deprecation of old stuff is going incredibly slowly even in rich countries (to the intense frustration of a lot of security teams). Check out the incredible true story, told over years on cabfpub, of the attempt to get rid of SHA-1 in TLS authentication. Notably, the attacks that led to experts' recommendation to move away from SHA-1 immediately were published back in 2005. Meanwhile, Microsoft's "effective date of the SHA-1 deprecation" is tomorrow (!), February 14, 2017.

https://social.technet.microsoft.com/wiki/contents/articles/...

(Let's have a party!)

Figuring out how people are going to get upgraded when problems of some sort are discovered (including software vulnerabilities, not just cryptographic protocol issues) is a major security challenge of our day, maybe the biggest information security problem overall in the world.


Hang on. What percent is XP, and what percent is IE on XP? Those are not numbers that should be conflated.


The vast majority of our users still on XP use IE. To the point that we conflate the numbers entirely. Most of those users are 50 year old women who have leftover computers handed down from relatives. They almost as a rule, don't install Chrome or Firefox on them, they just use what is currently on it.


(author here) I would have loved to have SNI work. I wrote this article in response to having profound struggles making it work. My iPhone 7's Safari was routinely failing to connect to sites other browsers claimed were fine, when relying on SNI.

The day i swapped over to IP-based connections, the problem resolved itself immediately. If there is something i am missing i would love to know what it is.


It must have been something else. Even Safari on iOS has supported SNI since iOS 4.0 (2010).


The only thing I've had SNI fail under (so far) has been Netscape Navigator 3.0, and at that point, does it really matter?


IE on Win XP. Or at least anything using the built-in crypto stuff. I think Firefox will still use its own. Not sure about Chrome.


You might be talking about HTTP 1.1 Host header which allows vhosts for plaintext HTTP servers. SNI allows this to happen with TLS (HTTPS, etc.) servers.


You might be able to get useful debugging information with tcpdump or wireshark because SNI itself is sent in the clear (prior to the TLS cryptographic key exchange). You could see if the server is doing something different from other servers, or the browser is doing something different from other browsers.
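openssl's s_client is handy here too, since its -servername flag controls exactly what goes into SNI. A hedged sketch (example.com stands in for the affected site, and it needs network access):

```shell
# Fetch the certificate the server presents for a given SNI name.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```

Vary -servername (or, on newer OpenSSL, pass -noservername to suppress SNI entirely) and compare the subject that comes back: if it never changes, the server isn't honoring SNI.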


I have been using precisely Nginx to serve multiple HTTPS domains with certificates from Let's Encrypt since the first few weeks after it came out, so I am not sure why you think it's strictly necessary to assign them separate IP addresses. Generally speaking, there is nothing wrong with that, and it is indeed a somewhat cleaner solution, but it is definitely doable with SNI if one configures their web server appropriately.

Check out the IMHO best TLS SNI test website out there (https://sni.velox.ch/) and the Qualys SSL Labs server test (https://www.ssllabs.com/ssltest/). They may give you a starting point to find out what exactly went wrong with SNI. And the documentation of Nginx, of course.


Can you get to https://sni.velox.ch/ from your phone? What are the first few lines?

In particular, I'm curious if this is a misconfiguration on the server end, or a misconfiguration on the client end. Certain VPNs or malware can break SNI.


Never had any SNI issues with Safari. Could it be the mobile network provider (which tends to insert things like NATed IPv6 that can cause weirdness), or did it also fail over wifi?


Older versions of wget and Java also don't support SNI, which comes up e.g. with API HTTPS callbacks or Android apps. One trick I use is to load that site (or the most important one) first in nginx, because clients that don't support SNI will get that certificate.

Another option, if you don't need encryption, is to allow http.
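The "load that site first" trick is really about which server block is the default for the listening socket; marking it explicitly is clearer than relying on ordering. A sketch with a placeholder name and certbot-style paths:

```nginx
# Clients that can't send SNI get the default server's certificate,
# so pin the most important site as default_server.
server {
    listen 443 ssl default_server;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```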


Could you be having issues with ipv6?


Plenty of reason to not use SNI. First off - not everything is a web browser. While browser support for SNI is pretty good these days, other clients are far behind. That random app that connects to your web API has to also support SNI, which means some old PHP library somewhere has to as well. Good luck.

That and a surprising number of "regular" clients still seem to have issues with SNI. My numbers are quite dated, but even as recent as 3 years ago it was something like dropping 10% of traffic for a high traffic site I did A/B testing with. I'm guessing the number was even higher, but the client requested we immediately stop the test once it became apparent it was a major reachability issue affecting revenue.


That is, if you don't care about supporting Windows XP users; I certainly don't.


And it's not just XP, it's IE on XP, meaning IE 8 or older. SNI works on the most recent Firefox & Chrome for XP.


There are enough IPv6 addresses.


I've never had a single problem with hosting multiple https domains using Nginx and Let's Encrypt. This article is somewhat baffling, considering his example of clients that need this is "mobile browsers" but I've used iOS and Android and it works just fine.


Me neither. I guess Cloudflare (to name but one service reliant on SNI) hasn't either...

This article is a bit of an unusual response to a problem of a device not working. When a device of mine doesn't work but others do I put it down to there being a problem with my device, not a widely used technology (like SNI).

It's an interesting article to read though.


I was gonna post this same thing. Nginx is super rad.


"This means you cannot have multiple HTTPS sites hosted from the same IP address."

Is that some kind of joke?

Yes you can, unless you want to support very old browsers [0] which would defeat the whole purpose of using SSL/TLS in the first place.

Maybe have a look at Mozilla Security/Server Side TLS [1]

Also, SSL certs are issued for FQDNs, not IP addresses (unless the IP is public and owned, but even then it is considered deprecated now [2]).

[0]: https://blogs.msdn.microsoft.com/ieinternals/2009/12/07/unde...

[1]: https://wiki.mozilla.org/Security/Server_Side_TLS

[2]: https://www.digicert.com/internal-names.htm


The article explains that SNI isn't working:

> But while this is widely supported, it is not supported ubiquitously. I've personally had a hell of a time fighting with mobile browsers when relying on SNI. On the other hand, IP addresses are cheap. Like, $1/mo or less, cheap. So buck up and grab an distinct IP for your HTTPS sites. Avoiding the headache of some device/browser combos not working will pay for itself 100 times over.


So that means you will use pre-TLS 1.0 to support those browsers that cannot deal with SNI.

You must feel very smart to be able to support old Android browsers like Gingerbread, which represents 1% of the Android market share [0], and iOS pre-4.0 browsers, which represent less than 0.1% of the iOS market share [1].

Now according to TLS/SSL support history of web browsers [2] your server is vulnerable to BEAST, POODLE, CRIME, etc.

Congrats your SSL cert is useless.

[0]: https://developer.android.com/about/dashboards/index.html

[1]: https://david-smith.org/iosversionstats/

[2]: https://en.wikipedia.org/wiki/Template:TLS/SSL_support_histo...


You are making incorrect assumptions and running with them.

As i stated in the article, and on this HN comment page, my issues were not with antiquated browsers. Safari on my iPhone 7 was failing to connect to sites other browsers were handling fine. I went down this rabbit hole of IP-based differentiation specifically because of that issue.

There seems to be some sense that i published an article about the hardest way to achieve this. I promise that had relying on SNI worked like i expected it to, the IP-based section of the article would be absent. But it didn't and, like i said, IPs are cheap. Adding one step to a process to get ubiquitous support seems like a reasonable approach to me.


This article has an incorrect premise. The author should learn about SNI: https://en.wikipedia.org/wiki/Server_Name_Indication


On a side note: in NixOS ACME has been integrated into the nginx configuration. To set up a server with TLS you just do

  security.acme.certs = {
    "example.com".email = "youremail@address.com";
  };

  services.nginx = {
    enable = true;
    virtualHosts."example.com" = {
      enableSSL  = true;
      enableACME = true;
    };
  };
This fetches the certificates and sets up a service and a timer to periodically renew them.


Wow, is their nginx support portable to other OSes? Do you know who has implemented it?


NixOS modules are built around Nix and systemd so theoretically you could write a port for a different GNU/Linux distribution if you have those available. I'm not aware of any though. There is however a variant for Darwin based on launchd: https://github.com/LnL7/nix-darwin

You can find the implementation of the nginx service here: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/s...


Thanks, I'd like the people working on nginx integration for certbot to see this!


I use SNI for my Apache Let's Encrypt script. This allows one IP (my home IP) to host many sites easily; it monitors changes to sites-enabled to trigger creation of new SAN certificates based on the contents of ServerName and ServerAlias. The script will also regenerate certs for sites-enabled every 30 days.

See: https://github.com/tzakrajs/cloud-fortress-lets-encrypt


Included is a daemontools run script; the script runs in a loop, but should it die, you want it to restart. I added a supervise command to my /etc/rc.local to make this run when the web server comes up.


I've been trying to do this for a couple of weeks. I have no idea what I'm doing and it's been hard to find any help via Google. But I finished it last Friday. Without multiple external IP addresses. Funny to see this as the top story when I woke up today. But yes, as caleblloyd says, it's 2017. Use SNI. It's not hard; I'd never even heard of nginx or letsencrypt before I started my project.


The only clients I've had trouble with SNI is Amazon and Apple's Java clients, as well as python2. It's unfortunately still not possible to host a podcast feed with an SNI HTTPS URL in iTunes, nor can you use SNI for Alexa skills. Otherwise, I've been happily using SNI for years now.


Interesting, those two are very surprising to have issues. Got anything about the alexa skill one?


Interested to read this and the comments here, as I was just poking around with doing just that. I hadn't been planning on using a second IP address for it, and now I'm wondering how well it will work without it.



Can I apply this to a Heroku-hosted app?



