Ask HN: What's your solution for SSL on internal servers?
65 points by Tepix on Feb 9, 2022 | 77 comments
Let's Encrypt has made SSL/TLS a no-brainer for websites on the public internet. Behind home routers, it's still a tedious topic. What's your (non-enterprise) solution?



I have one main Nginx server that all other services are behind, regardless of whether they're internal or external. This box is where NAT forwards ports 80 and 443 to.

I also use only subdomains of the domains I own, even for internal stuff. This means I also run a small bind9 DNS server with minimal zones to direct traffic to the proxy inside the network; most of the records just don't exist outside, i.e. they return NXDOMAIN from my public DNS provider.

On the nginx box, I have a snippet like this:

    allow 10.x.0.0/16;
    deny all;
Then, when I configure something as 'internal only', I just add this line to its config file:

    include /etc/nginx/private.conf;
This means that I can decouple the certificate status from the internal/external status of the site. All sites get valid certs, and most of them get 403s from outside the network.

In reality, I manage the nginx config with ansible templates, so what I really do is set a boolean "public" flag to "true" on sites I want accessible outside; everything else is private by default.
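For illustration, a full site under this scheme might look like the following sketch (the hostname, cert paths, and upstream address are stand-ins):

    server {
        listen 443 ssl;
        server_name wiki.example.com;

        # valid public cert, obtained e.g. via DNS challenge
        ssl_certificate     /etc/letsencrypt/live/wiki.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/wiki.example.com/privkey.pem;

        # internal-only: LAN allowed, everyone else denied
        include /etc/nginx/private.conf;

        location / {
            proxy_pass http://10.0.0.42:8080;
        }
    }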


I've not used this directly, but it may be useful: https://github.com/FiloSottile/mkcert

    mkcert is a simple tool for making locally-trusted development certificates. It requires no configuration.

    Using certificates from real certificate authorities (CAs) for development can be dangerous or impossible (for hosts like example.test, localhost or 127.0.0.1), but self-signed certificates cause trust errors. Managing your own CA is the best solution, but usually involves arcane commands, specialized knowledge and manual steps.

    mkcert automatically creates and installs a local CA in the system root store, and generates locally-trusted certificates. mkcert does not automatically configure servers to use the certificates, though, that's up to you.
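In practice it's two commands (the hostnames here are just examples):

    # create a local CA and install it into the system trust store
    mkcert -install
    # issue a locally-trusted cert + key for these names
    mkcert service.internal.test localhost 127.0.0.1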


I'm using subdomains on a domain I own and request Let's Encrypt certificates with the DNS challenge.


Just beware that issued TLS certs are public (certificate transparency logs). So you'll leak some hostnames. This might or might not be OK.


Not if you use a wildcard domain in the cert. LE already supports wildcard certs.


That's not really ideal either... one copy of the cert that isn't well protected becomes a master key of sorts for someone already inside your internal network.


Often inside your network you are more concerned with encryption than authentication. If you "just" need encryption over the wire, wildcard certs are useful.


Yeah, this is the way to go. If your DNS provider doesn't let you set up limited-scope access tokens, then you can use something like acme-dns with delegated DNS challenges, as long as you have a single external server.


Then you're routing internal traffic through a public IP? Or do they support wildcard certs?


>Then you're routing internal traffic through a public IP?

No, not typically. There are various methods of doing the LetsEncrypt challenge/verification that don't require the internal host you're generating the certificate for to be connected to the internet.

The downsides are:

- You can generate a wildcard cert for *.internal.yourdomain.tld. But then, it's a pretty big master key if you lose control of it.

- You can generate a cert per server, but that exposes your hostnames (at least) in certificate transparency logs, which gives outsiders some view into how big your internal network is and, via hostnames, perhaps some detail on what it's like. This is worse if you also expose the internal DNS records externally: then everyone sees those records as well, exposing more internal info. You could mitigate these things somewhat with various strategies around hostnames, DNS setup, etc.


> You can generate a wildcard cert for *.internal.yourdomain.tld. But then, it's a pretty big master key if you lose control of it.

For a home network, this is less relevant, since many of the services (and the nginx gateway) are running on the same host the cert resides on. If they grab the wildcard cert, they're already in a position to mess with the services directly, no SSL MITM needed.


SSL certificates contain the name, not the IP. So the IP address can be anything, including internal ones.


I thought Let's Encrypt wouldn't give you a cert if the domain on the cert resolves to a private IP. Good to know - thx.


You just resolve the domain to a private IP on your internal network; Let's Encrypt can see it as whatever you want, for all they care it's 1.1.1.1.


Friendly reminder that 1.1.1.1 is a real, valid, public IP. Seen plenty of networks that don’t recognize this, use it for some internal purpose, and break https://1.1.1.1/


> Seen plenty of networks that don’t recognize this, use it for some internal purpose, and break https://1.1.1.1/

AFAIK Cisco used 1.1.1.1 as an example "dummy" IP in their wireless LAN controller documentation, which of course led to infinite idiots copy/pasting exactly that and setting up broken networks.


My college uses 1.1.1.1 as their IIS administration endpoint; I was told the reason was "nobody would guess it, so it reduces the number of dumb kids guessing the edu\Administrator domain password". Around the time Cloudflare started using it, their logs must have skyrocketed.


They don't seem to check whether the hostname you're requesting a cert for resolves. At least with certbot, it requests the cert, creates the challenge record, then removes it after receiving the signed cert.


I've got a ton of certs from LE where the hostname resolves to an RFC 1918 IP.


You can, but you might not want employeerecords.example.com leaking its IP address, even if it is an inaccessible 192.168.10.10. Defense in depth. You can use a hosts file or internal resolution.


Other replies already explained how this is orthogonal to IP addressing, but also there are few virtues and many downsides to using ambiguous addresses for your server-to-server communications. Invariably you'll eventually end up networking them in a way you didn't originally plan. It ends up being bad for security because it breeds unneeded complexity and makes your system harder to understand.


I have local DNS setup to resolve my personal domains to hosts on my home network. They do support wildcard certs, _only_ if you use some form of DNS challenge.


I'll try to do this for internal IPs using traefik on Kubernetes. Any pointers?


I just use unbound: an ansible script installs it from the Arch repos and deploys a handwritten config file with the DNS entries. It forwards the rest to my DNS provider. My router hands out the address of that unbound host as the DNS server for my devices via DHCP.
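The unbound side of that is only a few lines; a sketch with an illustrative zone and addresses:

    server:
        # answer internal names locally
        local-zone: "home.example.com." transparent
        local-data: "nas.home.example.com. IN A 10.0.0.5"

    forward-zone:
        # everything else goes upstream
        name: "."
        forward-addr: 9.9.9.9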

I don't use k8s in my home network (though I do have some podman containers), but there's probably something with more k8s integration you can tie into your k8s ingress setup that I'm unaware of.


Seconding this approach.


DNS alias mode:

* https://dan.langille.org/2019/02/01/acme-domain-alias-mode/

* https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...

* https://www.eff.org/deeplinks/2018/02/technical-deep-dive-se...

You want the name "internal.example.com". In your external DNS you create a CNAME from "_acme-challenge.internal.example.com" and point it to (e.g.) "internal.example.net" or "internal.dns-auth.example.com".

When you request the certificate you specify the "dns-01" method. The issuer (e.g., LE) will go to the external DNS server for the lookup, see that it is a CNAME, follow the alias, and do the verification at the final hostname.

So your ACME client has to do a DNS (TXT) record update, which can often be done via various APIs, e.g.:

* https://github.com/AnalogJ/lexicon
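With acme.sh (linked above), the alias is a single flag; a sketch where dns_cf stands in for whichever provider plugin manages the delegated zone:

    acme.sh --issue -d internal.example.com \
        --dns dns_cf --challenge-alias dns-auth.example.com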

You can even run your own DNS server locally (in a DMZ?) if your DNS provider does not have a convenient API. There are servers written for this use case:

* https://github.com/joohoi/acme-dns

* https://github.com/joohoi/acme-dns-certbot-joohoi

* https://github.com/pawitp/acme-dns-server


I approve this post. /s


For a long time I was all fussy about having to create a security exception for self-signed certificates. One day I realized I was acting insane, as if there was some glorious principle involved. There isn't.

I trust my own (or coworkers') certificates. It's a dev site, for heaven's sake.

Ever since, self-signed via openssl all the way.
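For the record, the whole ritual is one command (illustrative names; -addext needs OpenSSL 1.1.1 or newer):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout dev.key -out dev.crt \
        -subj "/CN=dev.local" \
        -addext "subjectAltName=DNS:dev.local"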


It can be tricky to get self-signed certificates put into all the various places where they need to be: OS-level certificate stores, browsers, mobile devices, curl, python/requests, VPN clients, etc. There's always some weird exception case.


Yeah, I think OP probably mostly does web development through the browser, where there’s just one trust store to worry about.

As soon as you need some automation involving backend services or even just curl and some scripts, this gets tougher. (And please please don’t use `curl -k` to disable checks… if such scripts accidentally make their way to production, you may as well not use TLS at all.)


About the -k: why?

A certificate has two main functions: identity of the server and encryption. Not checking the chain leaves you with encryption, which is likely what you need.

A self-signed certificate is as good as any other when it comes to encryption.


If you're in a situation where encryption matters to you, you're in a situation where the identity of the remote end ought to matter to you as well. If someone's able to snoop the traffic between you and the remote end, it's dangerous to assume they won't also be able to MITM you.


A self-signed certificate provides an identity the same way as a CA-signed one. The only difference is that the latter chains to a trusted CA.

You may trust the CA enough to not check further, but if you want to make sure that the endpoint you are talking to is the one you expect, you should check the identity of the certificate on the server. And it is the same for a self-signed certificate as for a CA-issued one.


`curl -k` doesn't do any of that. It just connects via TLS and does no checking whatsoever on the remote server's identity.

And what you're saying about CA certs has no resemblance to reality. People don't look at certs by matching their public keys exactly to what they expect; they trust certificate authorities to make that determination for them. But again, `curl -k` does neither, so I don't think your point applies regardless.
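If the underlying need is a script that talks to a self-signed endpoint, pinning the expected certificate is barely more work than -k and keeps the identity check; a sketch, assuming you've exported the server's cert to service.crt:

    # trust exactly this self-signed cert instead of disabling checks
    curl --cacert service.crt https://service.internal.example.com/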


It appears to me the issue is browser warning dialogs that imply it is always very dangerous. There should be either more context explained in those dialogs, or a recognition of (and mode for) sites that are supposed to be self-signed.


There is a long history to this. The original browser warnings were along the lines of your suggestion. Then it was discovered that regular users just clicked through the warning when an attacker MITM'ed their bank. There followed decades of making the warning ever more scary sounding and ever more difficult to bypass.


Where does responsibility lie here? Is there a mechanism for users to state trust towards their org's sysadmin instead of a global authority? If there isn't, what is the problem preventing this in the long history you mention?


(Untested idea) I'd suggest creating your own root CA with an expiry date far in the future. Install its public key as a trusted certificate, and all derived certificates should no longer prompt any issues.
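A minimal sketch of that flow with openssl (names and lifetimes are illustrative; the process substitution supplies the SAN browsers require):

    # 1. create the long-lived root CA
    openssl req -x509 -newkey rsa:4096 -nodes -days 7300 \
        -keyout ca.key -out ca.crt -subj "/CN=Home Root CA"

    # 2. key + CSR for a server
    openssl req -newkey rsa:2048 -nodes \
        -keyout srv.key -out srv.csr -subj "/CN=srv.internal.example.com"

    # 3. sign the CSR with the CA
    openssl x509 -req -in srv.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -days 825 -out srv.crt \
        -extfile <(printf "subjectAltName=DNS:srv.internal.example.com")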


I did this on my workstation. I used it for a short while. Then I realized that I was actually putting a thing into the world that circumvents the entire security infrastructure of the world. Then I needed to do it for a server. Local only. Never production.

Then I thought, WTF am I doing? I realized that I was making computers that would circumvent the world's security apparatus. Of course it won't hurt if I make no errors of practice or judgement, but am I really smart enough to handle highly radioactive material?

I ripped it all out. It makes me shudder thinking about it.


Then, far in the future, when your own root CA expires, comes all the trouble of updating all the services/devices to trust the new root CA.

This becomes a nightmare when the original admin left the org long ago.


Please don't. I've been in orgs doing this and it sucks. There seem to just be too many edge cases, and too many custom scripts for setup.


That's not untested insofar as that's how I've always seen self-signed certs managed in an org.


At ZeroTier we are working on a solution for this that will implement ACME. Not ready for release quite yet but getting close. Could be used on ZeroTier networks but doesn't have to be.


Please post a submission on HN whenever this is released.


None of my self-hosted things are internet facing, so I use the DNS-01 challenge type. For internal DNS, I run the PowerDNS authoritative server. Each ACME client uses RFC2136 (TSIG) to update the _acme-challenge TXT record. I have a pdns Lua policy script set up so that the only record allowed to be updated is the TXT record matching the TSIG key name.
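One concrete client-side setup, assuming certbot with its dns-rfc2136 plugin (server address and key name are illustrative, the TSIG secret elided):

    # /etc/letsencrypt/rfc2136.ini
    dns_rfc2136_server = 10.0.0.2
    dns_rfc2136_name = acme-key
    dns_rfc2136_secret = <TSIG secret>
    dns_rfc2136_algorithm = HMAC-SHA512

    certbot certonly --dns-rfc2136 \
        --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
        -d host.internal.example.com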

To allow Let's Encrypt to hit the DNS server, I run a public-facing dnsdist load balancer. It forwards the relevant TXT, CAA, DNSKEY, and NS queries to pdns and silently drops all other queries not required for ACME challenges.

I'd prefer that the internal hostnames weren't leaked in the certificate transparency logs, but given that no services are exposed to the internet, it doesn't bother me enough to look for alternatives (e.g. wildcard certs).


This all comes down to a few things:

- keytool and openssl absolutely suck from a usability standpoint

- testing the pipeline of generating keys/files/certs/stores and importing/generating/signing etc. is difficult

- error messages, if you get them, are completely unhelpful, and often the errors are superficially not even ssl/security related.

Every time I do SSL, it is a 1-4 day job, and that's with StackOverflow saving my ass on translating "why this weird error means this failure".

Between the above issues, SSL on every platform, application, database, and operating system (or version of OS) has different errors.

If you have a non-mainstream language, I have NO IDEA how you would get SSL up. Python, JVM, JavaScript, C/C++: there's a lot of eyeballs on those.


I have a private CA for all internal hosts. It's a bit of a pain to run, but it's necessary because "internal" includes VMs running on colocated servers, and many services do mutual authentication.

It's hyper annoying that it's barely possible, if at all, to include our own CA in mobile browsers so that they can use internal websites.

The alternative would be to depend on LE where necessary, but this introduces an external dependency that I would rather avoid.


>It's hyper annoying that it's barely possible, if at all, to include our own CA in mobile browsers so that they can use internal websites.

You can? Any MDM should be able to deploy internal CAs, and at least on iPhones the free Apple Configurator 2 software will let you make profiles you can then easily deploy a bunch of ways. It doesn't scale easily, but it's free and easy for small numbers. It's also very useful for an individual or family to bootstrap devices and make changes, if you've got a significant number of email accounts for example. I'm sure Android has some equivalent.


Why not make your applications accessible through two URLs, one with your CA and another with Let's Encrypt?

Also, you may be able to get certificates for the same name from two different authorities.


You can use https://github.com/joohoi/acme-dns to issue letsencrypt certificates to your internal hosts using DNS validation.

All it takes is to set up an ACME-DNS server somewhere (or just use the author's public ACME-DNS server if you don't care much), and create one CNAME record in your DNS.
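The one record looks like this (the UUID is whatever acme-dns returns from its /register call; auth.acme-dns.io is the author's public instance):

    _acme-challenge.internal.example.com. CNAME d420c923-bbd7-4056-ab64-c3ca54c9b3cf.auth.acme-dns.io.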


Wildcard certificate using Traefik on a secondary domain, generated using a DNS challenge. Since this gives Traefik too many permissions on my DNS (Cloudflare now has better RBAC, but I haven't switched to it), I use a secondary domain for my homeserver.

For the few services where I want to use my primary domain, I run dehydrated once in a while from my local setup (which uses my password manager in a script). These services do end up exposed publicly in the CT logs, but I'm okay with that.

I proxy these services over a DO VPN, but resolve them internally to private IPs using NextDNS. There's also an internal domain for every service (service.in.example.com resolves to the internal IP; service.example.com resolves to the public IP externally and to the private IP within my home network).


Running an internal CA is really easy in 2022: https://gruchalski.com/posts/2020-09-09-multi-tenant-vault-p....


Depends on how your home network is set up and how you deploy your services. If you have a public domain for your home IP, and it's the usual docker bridge network setup with only a couple of containers, using Traefik or Caddy as a reverse proxy will suffice. They'll automatically provision TLS certs for your services with little to no effort. If it's something more complicated than that, such as needing a separate IP and mDNS hostname per container running on a VLAN, or some multicloud Kubernetes setup, you pretty much have to set up your own CA. In that case, look into mkcert and/or step-ca.
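With Caddy, for instance, the entire config for one internal service can be this sketch of a Caddyfile (assumes the name resolves to the Caddy host and a DNS or HTTP challenge can complete):

    service.example.com {
        # Caddy obtains and renews the TLS cert automatically
        reverse_proxy 10.0.0.5:8080
    }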


If you have your own domain, then move it to one of the listed DNS providers and use the DNS challenge with ACME.

We are using certbot + Cloudflare this way. There is no HTTP request; certbot makes a temporary DNS record using the Cloudflare API to satisfy the challenge, so you can run the script anywhere. Then copy the cert to the device that needs it.

DNS providers supported by certbot:

https://community.letsencrypt.org/t/dns-providers-who-easily...
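The invocation described above is roughly this (domain and credentials path are illustrative; the ini file holds a scoped API token):

    certbot certonly --dns-cloudflare \
        --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
        -d '*.internal.example.com'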


Good question. I'm also interested. My use case would be:

- I have an nginx server load balancing traffic between N web servers that talk to one DB. Everything is inside a VPC (I'm using DigitalOcean), and only nginx is public to the internet.

One approach I have read about is terminating SSL at the nginx level and handling plain HTTP between my web servers. Question would be: how secure is that? Can I (should I) trust that everything within my VPC is only accessible to me? Is terminating SSL good enough when handling, let's say, account creations and payments via Stripe?


I have my own two-tier PKI.

Pro: you get to know how some apps crash (and sometimes burn) when PKI breaks. You also get to know that every distro guy thinks he is smarter than everyone else and does PKI his own way.

Cons: see Pro.

NB: Windows PKI in an ADDS environment (in terms of distributing the RCA/ICA trust) is a walk in the park compared to everything else.

NB2: the Java keystore is a PITA.

> Behind home routers, it's still a tedious topic

Buy a domain, park it at Cloudflare/Gandi/whatever ACME-supported provider, use DNS-01, and push or pull certs to the local network.

There is no problem with the process, only with laziness and automation.


I'd already written a small (server-focused) tool to make using certbot (or any ACME client, really) certs a bit more automated: getting the right combination of certs/key into a file, converting to alternative formats, fetching the OCSP data, syncing across machines, restarting services after they're updated, etc.

Last year I added a 'create' mode where it sets up a self-signed root CA and issues certs using that. The other logic (convert, sync, combine) is obviously all the same still.


I asked a similar question previously: https://news.ycombinator.com/item?id=29995812


EasyRSA. Install the CA on my devices, distribute the SSL certs to the devices I care about (mainly NAS, OctoPi, PiHole). The PiHole serves hostnames too, and DHCP.


Personally I avoid it where I can and use wireguard for all internal traffic. Hopefully someone more knowledgeable here can tell me if it's a good or bad idea.


Same here: Tailscale and their Let's Encrypt DNS service do all the HTTPS management for me. Works just fine, really.


If you can automate DNS, create a wildcard LE cert and have a cronjob distribute it from the one place you issue it to your different machines. That is what I do.
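The cron half can be a single entry; a sketch with an illustrative host and paths:

    0 3 * * 1 certbot renew -q && rsync -a /etc/letsencrypt/live/example.com/ nas:/etc/ssl/example.com/ && ssh nas systemctl reload nginx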

Before that I just bought one wildcard cert and used that. One can be had for less than 50 bucks, and then there's no hassle.

If I could not automate DNS and didn't have 50 bucks per year for it, I would create a small CA myself, trust it in my browsers, and issue certificates from that.


I do a lot of the same things as others: a custom domain at home with a NAT 80/443 forward to nginx, but I use https://nginxproxymanager.com/ as it gives a dead-simple hostname proxy to forward traffic to internal hosts, and will request/renew Let's Encrypt certs for them automatically.


I have a private CA (managed with cert-manager) and trust it on my systems, and issue certs for internal services from that.


I still pay for wildcard certs on a few domains; it's crazy - I know, I just haven't bothered to jump over to LE and add it to the list of things to mind.

For some services I'll put them behind those domains and simply use the appropriate certificate.

Generally though, if it's strictly internal, my domain can issue internally trusted certificates.


At a previous job, I used Smallstep Certificates[0] as a hosted CA, though only for certificates used to communicate with our Kafka clusters. It worked pretty well, and was relatively easy to set up.

[0] https://smallstep.com/docs/step-ca
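Day-to-day issuance against a step-ca instance is a couple of commands (the subject and filenames here are illustrative):

    # request a cert from the CA, then keep renewing it in the background
    step ca certificate kafka-client.internal kafka.crt kafka.key
    step ca renew --daemon kafka.crt kafka.key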


I use an internal subdomain wildcard certificate (e.g., *.internal.example.com) and traditional methods of configuring ssl for internal services / sites on the internal servers.

For cases where the service or site doesn’t natively support ssl, I run a local reverse proxy with the above certificate.



I have only one internal server, which is coincidentally also accessible from the public Internet. I use NAT hairpinning for the external interface of my router and forward all packets on ports 22 and 443 to my server, so its TLS certificate is also valid from inside my LAN.


I wrote this tool[1] to help me create a CA, generate certificates, and automatically renew them:

https://github.com/galenguyer/hancock


Private CAs are the enterprise solution, but that can get expensive or difficult to manage for a home setup.

You could get a cert for a wildcard subdomain and then use whatever private subdomains you want on your home network.


I became my own CA. Install certs in the system/browser and issue keys that are 10 years out. Make everything with the CertManEX tool.


For a relatively small company, I use PiHole with unbound and LE certificates, and I'm routing the private subdomains to internal IPs.


Hashicorp Vault can generate certs via a web form or REST call. Easy to set up and maintain, and it's free.
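Once the pki secrets engine is mounted and a role defined, issuance is one call; the mount path and role name here are assumptions:

    vault write pki/issue/internal \
        common_name=host.internal.example.com ttl=720h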


A private PKI using EJBCA Community Edition from PrimeKey. Pretty solid.

Another one is step-ca


Create a self-signed certificate on the server

Install it as trusted on each client


Can you explain your use-case for this?



