Let's Encrypt has made SSL/TLS a no-brainer for websites on the public internet. Behind home routers, though, it's still a tedious topic. What's your (non-enterprise) solution?
I have one main Nginx server that all other services sit behind, regardless of whether they're internal or external. This is the box that NAT forwards ports 80 and 443 to.
I also use only subdomains of the domains I own, even for internal stuff. This means I also run a small bind9 DNS server with minimal zones to direct traffic to the proxy inside the network; most of the records just don't exist outside, i.e. they return NXDOMAIN from my public DNS provider.
On the nginx box, I have a snippet like this:
    # first match wins, so allow the LAN before denying everyone else
    allow 10.x.0.0/16;
    deny all;
Then, when I configure something as 'internal only' I just add this line to its config file:
    include /etc/nginx/private.conf;
This means that I can decouple the certificate status from the internal/external status of the site. All sites get valid certs, and most of them get 403s from outside the network.
In reality, I manage the nginx config with Ansible templates, so what I really do is set a boolean "public" flag to "true" on sites I want accessible outside; everything else is private by default.
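The commenter didn't share their template, but a minimal sketch of how such a flag could gate the include in a Jinja2-templated server block might look like this (all names are hypothetical):

    server {
        listen 443 ssl;
        server_name {{ site.name }}.example.com;
        ssl_certificate     /etc/letsencrypt/live/{{ site.name }}.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/{{ site.name }}.example.com/privkey.pem;
    {% if not site.public | default(false) %}
        include /etc/nginx/private.conf;
    {% endif %}
        location / {
            proxy_pass http://{{ site.backend }};
        }
    }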
mkcert is a simple tool for making locally-trusted development certificates. It requires no configuration.
Using certificates from real certificate authorities (CAs) for development can be dangerous or impossible (for hosts like example.test, localhost or 127.0.0.1), but self-signed certificates cause trust errors. Managing your own CA is the best solution, but usually involves arcane commands, specialized knowledge and manual steps.
mkcert automatically creates and installs a local CA in the system root store, and generates locally-trusted certificates. mkcert does not automatically configure servers to use the certificates, though; that's up to you.
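For reference, the basic flow is just two commands (the hostnames here are only examples):

    # create a local CA and install it into the system trust store(s)
    mkcert -install
    # issue a locally-trusted cert + key for the names you develop against
    mkcert example.test localhost 127.0.0.1 ::1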
That's not really ideal either: one copy of the cert that isn't protected well becomes a master key of sorts for someone already inside your internal network.
Often inside your network you are more concerned with encryption than authentication. If you "just" need encryption over the wire, wildcard certs are useful.
Yeah, this is the way to go. If your DNS provider doesn't let you set up limited-scope access tokens, then you can use something like acme-dns with delegated DNS challenges, as long as you have a single external server.
>Then you're routing internal traffic through a public IP?
No, not typically. There are various methods of doing the Let's Encrypt challenge/verification that don't require the internal host you're generating the certificate for to be reachable from the internet.
The downsides are:
- You can generate a wildcard cert for *.internal.yourdomain.tld. But then, it's a pretty big master key if you lose control of it.
- You can generate a cert-per-server but it exposes your hostnames (at least) in certificate transparency logs, which gives outsiders some view into how big your internal network is, perhaps some detail on what it's like via hostnames, etc. This is worse if you also expose the internal DNS records externally, then everyone sees those records as well, exposing more internal info. You could mitigate these things somewhat with various strategies around hostnames, DNS setup, etc.
> You can generate a wildcard cert for *.internal.yourdomain.tld. But then, it's a pretty big master key if you lose control of it.
For a home network, this is less relevant, since many of the services (and the nginx gateway) are running on the same host the cert resides on. If they grab the wildcard cert, they're already in a position to mess with the services directly; no SSL MITM needed.
Friendly reminder that 1.1.1.1 is a real, valid, public IP.
Seen plenty of networks that don’t recognize this, use it for some internal purpose, and break https://1.1.1.1/
> Seen plenty of networks that don’t recognize this, use it for some internal purpose, and break https://1.1.1.1/
AFAIK Cisco used 1.1.1.1 as an example "dummy" IP in their wireless LAN controller documentation, which of course led to infinite idiots copy/pasting exactly that and setting up broken networks.
My college uses 1.1.1.1 as their IIS administration endpoint; I was told the reason was "nobody would guess it so it reduces the number of dumb kids guessing the edu\Administrator domain password". Around the time Cloudflare started using it, their logs must have skyrocketed.
They don't seem to check whether the hostname you're requesting a cert for resolves. At least with certbot, it requests the cert, creates the challenge record, then removes it after receiving the signed cert.
You can, but you might not want employeerecords.example.com leaking its IP address, even if it is an inaccessible 192.168.10.10. Defense in depth. You can use hosts files or internal-only resolution.
Other replies already explained how this is orthogonal to IP addressing, but also there are few virtues and many downsides to using ambiguous addresses for your server-to-server communications. Also, invariably you'll eventually end up networking them in a new way you didn't originally plan. It ends up being bad for security because it breeds unneeded complexity and makes your system harder to understand.
I have local DNS set up to resolve my personal domains to hosts on my home network. Let's Encrypt does support wildcard certs, but _only_ if you use some form of DNS challenge.
I just use unbound: an Ansible script installs it from the Arch repos and deploys a handwritten config file with the DNS entries. It then forwards everything else to my DNS provider. I have my router hand out the address of that unbound host as the DNS server for my devices via DHCP.
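Roughly the kind of config that ends up deployed, with hypothetical names and addresses (not my literal file):

    # /etc/unbound/unbound.conf (fragment)
    server:
        interface: 0.0.0.0
        access-control: 192.168.1.0/24 allow
        # answers for my own domains
        local-data: "nas.home.example.com. A 192.168.1.10"
        local-data: "git.home.example.com. A 192.168.1.11"
    # everything else goes upstream
    forward-zone:
        name: "."
        forward-addr: 9.9.9.9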
I don't use k8s in my home network (though I do have some podman containers), but there's probably something with more k8s integration you can tie into your k8s ingress setup that I'm unaware of.
Say you want a cert for the name "internal.example.com". In your external DNS you create a CNAME from "_acme-challenge.internal.example.com" and point it at (e.g.) "internal.example.net" or "internal.dns-auth.example.com".
When you request the certificate you specify the "dns-01" method. The issuer (e.g., LE) will go to the external DNS server for the lookup, see that it is a CNAME, follow the CNAME/alias, and do the verification at the final hostname.
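In zone-file terms, using the hypothetical names above:

    ; in the public example.com zone
    _acme-challenge.internal.example.com.  IN  CNAME  internal.dns-auth.example.com.
    ; your ACME client then creates the challenge TXT record
    ; at internal.dns-auth.example.com, where you do control updates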
So your ACME client has to do a DNS (TXT) record update, which can often be done via various APIs, e.g.:
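For instance (one illustration, not the only option), acme.sh ships modules for many provider APIs; its Cloudflare one boils down to this, assuming a scoped API token:

    # hypothetical token; acme.sh's dns_cf module reads it from the environment
    export CF_Token="your-scoped-api-token"
    acme.sh --issue --dns dns_cf -d internal.example.com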
You can even run your own DNS server locally (in a DMZ?) if your DNS provider does not have a convenient API. There are servers written for this use case; acme-dns (mentioned elsewhere in the thread) is one.
For a long time I was all fussy about having to create a security exception for self-signed certificates. One day I realized I was acting insane, as if there was some glorious principle involved. There isn't.
I trust my own (or my coworkers') certificates. It's a dev site, for heaven's sake.
It can be tricky to get self-signed certificates put into all the various places where they need to be. OS level certificate stores, browsers, mobile devices, curl, python/requests, VPN clients, etc. There's always some weird exception case.
Yeah, I think OP probably mostly does web development through the browser, where there’s just one trust store to worry about.
As soon as you need some automation involving backend services or even just curl and some scripts, this gets tougher. (And please please don’t use `curl -k` to disable checks… if such scripts accidentally make their way to production, you may as well not use TLS at all.)
A certificate has two main functions: identifying the server and encryption. Not checking the chain leaves you with encryption, which is likely what you need.
A self-signed certificate is as good as any other when it comes to encryption.
If you're in a situation where encryption matters to you, you're in a situation where the identity of the remote end ought to matter to you as well. If someone's able to snoop the traffic between you and the remote end, it's dangerous to assume they won't also be able to MITM you.
A self-signed certificate provides an identity the same way a CA-signed one does. The only difference is that it isn't anchored in a CA chain.
You may trust the CA enough to not check further, but if you want to make sure that the endpoint you are talking to is the one you expect, you should check the identity of the certificate on the server. And that is the same for a self-signed certificate as for a CA-issued one.
`curl -k` doesn't do any of that. It just connects via TLS and does no checking whatsoever on the remote server's identity.
And what you're saying about CA certs has no resemblance to reality. People don't look at certs by matching their public keys exactly to what they expect... they trust certificate authorities to make that determination for them. But again, `curl -k` does neither so I don't think your point applies regardless.
It appears to me the issue is browser warning dialogs that imply it is always very dangerous. There should be either more context explained in those dialogs, or a recognition of (and mode for) sites that are supposed to be self-signed.
There is a long history to this. The original browser warnings were along the lines of your suggestion. Then it was discovered that regular users just clicked through the warning when an attacker MITM'ed their bank. There followed decades of making the warning ever more scary sounding and ever more difficult to bypass.
Where does responsibility lie here? Is there a mechanism for users to state trust towards their org's sysadmin instead of a global authority? If there isn't, what is the problem preventing this in the long history you mention?
(Untested idea) I'd suggest creating your own root CA with an expiry date far in the future. Install its public key as a trusted certificate, and all derived certificates should no longer trigger warnings.
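Untested as the parent says, but the usual openssl incantation is roughly the following (names and lifetimes are placeholders; note that browsers also require a subjectAltName on the leaf cert):

    # 1) root key + long-lived self-signed root cert
    openssl req -x509 -newkey rsa:4096 -nodes -keyout rootCA.key \
        -sha256 -days 7300 -subj "/CN=My Private Root CA" -out rootCA.crt
    # 2) key + CSR for the host
    openssl req -newkey rsa:2048 -nodes -keyout host.key \
        -subj "/CN=dev.example.test" -out host.csr
    # 3) sign the CSR with the root, adding the SAN browsers insist on
    openssl x509 -req -in host.csr -CA rootCA.crt -CAkey rootCA.key \
        -CAcreateserial -sha256 -days 825 \
        -extfile <(printf "subjectAltName=DNS:dev.example.test") -out host.crt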
I did this on my workstation. I used it for a short while. Then I realized that I was actually putting a thing into the world that circumvents the entire security infrastructure of the world. Then I needed to do it for a server. Local only. Never production.
Then I thought, "WTF am I doing?" I realized that I was making computers that would circumvent the world's security apparatus. Of course it won't hurt if I make no errors of practice or judgement, but am I really smart enough to handle highly radioactive material?
I ripped it all out. It makes me shudder thinking about it.
At ZeroTier we are working on a solution for this that will implement ACME. Not ready for release quite yet but getting close. Could be used on ZeroTier networks but doesn't have to be.
None of my self-hosted things are internet facing, so I use the DNS-01 challenge type. For internal DNS, I run the powerdns authoritative server. Each ACME client uses RFC2136 (TSIG) to update the _acme-challenge TXT record. I have a pdns lua policy script set up so that the only record that's allowed to be updated is the TXT record matching the TSIG name.
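Under the hood, the RFC2136 update each client sends amounts to something like this (key name, secret, and zone are simplified placeholders):

    nsupdate -y "hmac-sha256:host1.example.com:BASE64SECRET" <<'EOF'
    server ns1.example.com
    zone example.com
    update delete _acme-challenge.host1.example.com. TXT
    update add _acme-challenge.host1.example.com. 60 TXT "acme-token-value"
    send
    EOF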
To allow Let's Encrypt to hit the DNS server, I run a public-facing dnsdist load balancer. It forwards the relevant TXT, CAA, DNSKEY, and NS queries to pdns and silently drops all other queries not required for ACME challenges.
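For illustration, dnsdist rules in that spirit look something like this (Lua; addresses and zone are simplified, not my literal config):

    -- backend pdns listens locally; only ACME-relevant queries reach it
    newServer({address="127.0.0.1:5300", pool="auth"})
    addAction(AndRule({makeRule("example.com."), QTypeRule(DNSQType.TXT)}), PoolAction("auth"))
    addAction(AndRule({makeRule("example.com."), QTypeRule(DNSQType.NS)}), PoolAction("auth"))
    -- silently drop everything else
    addAction(AllRule(), DropAction())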
I'd prefer that the internal hostnames weren't leaked in the certificate transparency logs, but given that no services are exposed to the internet, it doesn't bother me enough to look for alternatives (e.g. wildcard certs).
I have a private CA for all internal hosts. It's a bit of a pain to run, but it's necessary because "internal" includes VMs that run on colocated servers, and many services do mutual authentication.
It's hyper annoying that it's barely possible, if possible at all, to include our own CA in mobile browsers so that they can use internal websites.
The alternative would be to depend on LE where necessary, but this introduces an external dependency that I would rather avoid.
>It's hyper annoying that it's barely possible, if possible at all, to include our own CA in mobile browsers so that they can use internal websites.
You can? Any MDM should be able to deploy internal CAs, and at least on iPhones the free Apple Configurator 2 software will let you make profiles you can then deploy in a bunch of ways. It doesn't scale easily, but it's free and easy for small numbers. It's also very useful even for an individual or family to bootstrap devices and make changes, if you've got a significant number of email accounts for example. I'm sure Android has some equivalent.
All it takes is to set up an acme-dns server somewhere (or just use the author's public acme-dns instance if you don't care much), and create one CNAME record in your DNS.
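Mechanically, against the author's public instance (the UUID subdomain below is a placeholder for whatever /register returns):

    # one-time registration; the response includes credentials and a unique subdomain
    curl -s -X POST https://auth.acme-dns.io/register
    # => {"username":"...","password":"...","fulldomain":"<uuid>.auth.acme-dns.io",...}
    # then create this single CNAME in your public zone:
    #   _acme-challenge.internal.example.com. CNAME <uuid>.auth.acme-dns.io.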
Wildcard certificate using Traefik on a secondary domain, generated using the DNS challenge. Since this gives Traefik too many permissions on my DNS (Cloudflare now has better RBAC, but I haven't switched to it), I use a secondary domain for my homeserver.
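For reference, the Traefik v2 static-config fragment for a DNS-challenge wildcard looks roughly like this (email and paths are placeholders; the provider credential, e.g. CF_DNS_API_TOKEN for Cloudflare, goes in the environment):

    # traefik.yml (fragment)
    certificatesResolvers:
      le:
        acme:
          email: you@example.com
          storage: /letsencrypt/acme.json
          dnsChallenge:
            provider: cloudflare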
For the few services where I want to use my primary domain, I run dehydrated once in a while from my local setup (via a script that uses my password manager). These services do end up exposed publicly in the CT logs, but I'm okay with that.
I proxy these services over a DO VPN, but resolve them internally to private IPs using NextDNS. There's also an internal domain for every service: service.in.example.com resolves to the internal IP, while service.example.com resolves to the public IP externally and to the private IP within my home network.
Depends on how your home network is set up and how you deploy your services. If you have a public domain for your home IP, and it's the usual Docker bridge network setup with only a couple of containers, using Traefik or Caddy as a reverse proxy will suffice. They'll automatically provision TLS certs for your services with very little to no effort at all. If it's something more complicated than that, such as needing a separate IP and mDNS hostname per container running on a VLAN, or some multicloud Kubernetes setup, you pretty much have to set up your own CA. In that case, look into mkcert and/or step-ca.
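As a point of reference for the simple case, a Caddyfile like this is essentially the whole setup; Caddy obtains and renews the certificate on its own (hostname and upstream are hypothetical):

    # Caddyfile
    nas.example.com {
        reverse_proxy 192.168.1.10:5000
    }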
If you have your own domain, then move it to one of the listed DNS providers and use DNS challenge with ACME:
We are using certbot + Cloudflare this way. There is no HTTP request; certbot makes a temporary DNS record using the Cloudflare API to satisfy the challenge, so you can run the script anywhere. Then copy the cert to the device that needs it.
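Roughly like this (paths and domains are placeholders, not our actual setup):

    # cloudflare.ini contains: dns_cloudflare_api_token = <scoped token>
    certbot certonly --dns-cloudflare \
        --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
        -d example.com -d '*.example.com'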
Good question. I'm also interested. My use case would be:
- I have an nginx server load balancing traffic between N web servers that talk to one DB. Everything is inside a VPC (I'm using DigitalOcean), and only nginx is public to the internet
One approach I have read about is to terminate SSL at the nginx level and use plain HTTP between my web servers. Question would be: how secure is that? Can I (should I) trust that everything within my VPC is only accessible to me? Is terminating SSL good enough when handling, let's say, account creations and payments via Stripe?
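Concretely, the setup I've read about would be something like this (addresses hypothetical):

    # /etc/nginx/conf.d/app.conf: TLS ends at nginx, plain HTTP inside the VPC
    upstream app_backends {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }
    server {
        listen 443 ssl;
        server_name app.example.com;
        ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
        location / {
            proxy_pass http://app_backends;
        }
    }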
Pro: you would know how some apps crash (and sometimes burn) when PKI breaks. Also you would learn that every other distro guy thinks he is smarter than everyone and does PKI his own way.
Cons: see Pro.
NB: Windows PKI in the ADDS environment (in terms of distributing the RCA/ICA trust) is a walk in the park compared to everything else.
NB2: The Java keystore is a PITA.
> Behind home routers, it's still a tedious topic
Buy a domain, park it at Cloudflare/Gandi/whatever ACME-supported provider, use DNS-01, and push or pull certs to the local network.
There is no problem with the process, only with laziness and a lack of automation.
I'd already written a small (server-focused) tool to make using certbot (or any ACME client, really) certs a bit more automated: getting the right combination of certs/key into a file, converting to alternative formats, fetching the OCSP data, syncing across machines, restarting services after they're updated, etc.
Last year I added a 'create' mode where it sets up a self-signed root CA and issues certs using that. The other logic (convert, sync, combine) is obviously all the same still.
EasyRSA. I install the CA on my devices and distribute the SSL certs to the devices I care about (mainly NAS, OctoPi, PiHole). The PiHole also serves hostnames, and DHCP.
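The EasyRSA flow itself is short (the hostname is hypothetical):

    # build a CA, then issue a signed server cert + key in one step
    ./easyrsa init-pki
    ./easyrsa build-ca nopass
    ./easyrsa build-server-full nas.home.example.com nopass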
Personally I avoid it where I can and use wireguard for all internal traffic. Hopefully someone more knowledgeable here can tell me if it's a good or bad idea.
If you can automate DNS, create a wildcard LE cert and have a cronjob distribute it from the one place you issue it to your other hosts. That is what I do.
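i.e. something on the order of this (schedule, paths, and hosts are placeholders):

    # crontab on the issuing box: renew weekly, then push the wildcard out
    0 3 * * 1 certbot renew --quiet && rsync -aL /etc/letsencrypt/live/example.com/ nas:/etc/ssl/example.com/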
Before that I just bought one wildcard cert and used that. It can be had for less than 50 bucks a year, and then there's no hassle.
If I could not automate DNS and didn't have 50 bucks per year for it, I would create a small CA myself, trust it in my browsers, and issue certificates from that.
I do a lot of the same things as others with a custom domain at home with a NAT 80/443 forward to nginx, but I use https://nginxproxymanager.com/ as it gives a dead-simple hostname proxy to forward traffic to internal hosts, and will request/renew Let's Encrypt certs for them automatically.
I still pay for wildcard certs on a few domains; it's crazy - I know, I just haven't bothered to jump over to LE and add it to the list of things to mind.
For some services I'll put them behind those domains and simply use the appropriate certificate.
Generally though if this is strictly internal, my domain can issue internally trusted certificates.
At a previous job, I used Smallstep Certificates[0] as a hosted CA, though for certificates to communicate with our Kafka clusters. It worked pretty well, and was relatively easy to set up.
I use an internal subdomain wildcard certificate (e.g., *.internal.example.com) and traditional methods of configuring ssl for internal services / sites on the internal servers.
For cases where the service or site doesn’t natively support ssl, I run a local reverse proxy with the above certificate.
I have only one internal server, which is coincidentally also accessible from the public Internet. I use NAT hairpinning on the external interface of my router and forward all packets on ports 22 and 443 to my server, so its TLS certificate is also valid from inside my LAN.