> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.
Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.
> I am not sure why people are so afraid of exposing ports
It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.
> It's the way the Internet is meant to work.
Along with no passwords or security. There's no prescribed way for how to use the internet. If you're serving one person or household rather than the whole internet, then why expose more than you need out of some misguided principle about the internet? Principle of least privilege, it's how security is meant to work.
Ah… I really could not disagree more with that statement. I know we don’t want to trust BigCorp and whatnot, but a single exposed port and an incomplete understanding of what you’re doing is really all it takes to be compromised.
Same applies to Tailscale. A Tailscale client, coordination plane vulnerability, or incomplete understanding of their trust model is also all it takes. You are adding attack surface, not removing it.
If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
If you are exposing a handful of hardened services on infrastructure you control, Tailscale adds complexity for no gain. If you are connecting machines across networks you do not control, or want zero-config access to internal services, then I can see its appeal.
I'll take this to mean that you think arbitrary access to a computer's capabilities will require licensure, in which case I think this is a bad metaphor.
The point of a driver's license is that driving a ton of steel around at >50mph presents risk of harm to others.
Not knowing how to use a computer - driving it "poorly" - does not risk harm to others. Why does it merit restriction, based on the topic of this post?
1. "Unpatched servers become botnet hosts" - true, but Tailscale does not prevent this. A compromised machine on your tailnet is still compromised. The botnet argument applies regardless of how you access your server.
2. Following this logic, you would need to license all internet-connected devices: phones, smart TVs, IoT. They get pwned and join botnets constantly. Are we licensing grandma's router?
3. The Cloudflare point undermines the argument: "botnets cause centralization (Cloudflare), which is harm", so the solution is... licensing, which would centralize infrastructure further? That is the same outcome being called harmful.
4. Corporate servers get compromised constantly. Should only "licensed" corporations run services? They already are, and they are not doing better.
Back to the topic: I have no clue what you think Tailscale is, but it does not increase security, only convenience.
The comment I was replying to was claiming that using your computer 'poorly' does not harm others. I was simply refuting that. Having spent the last two decades null routing customer servers when they decide to join an attack, this isn't theoretical.
As an aside, I dislike tailscale, and use wireguard directly.
Back to the topic: Your connected device can harm others if used poorly. I am not proposing licensing requirements.
Most inadequate drivers don't think they're inadequate, which is part of the problem. Unless your acquaintances are exclusively PMC you most likely know several adults who've lost their licenses because they are not adequately safe drivers, and if your acquaintances are exclusively PMC you most likely know several adults who are not adequately safe drivers and should've lost their licenses but knew the legal tricks to avoid it.
From the perspective of those writing the regs, speeding, running lights, and driving carelessly or dangerously (all fines or crimes here) are indeed indicators of whether someone is a safe driver.
Understand, I am not advocating this. I said I did not like it. Neither of those statements has anything to do with whether I think it will come to pass, or not.
This felt like it didn’t do your aim justice, “$X and an incomplete understanding of what you’re doing is all it takes to be compromised” applies to many $X, including Tailscale.
Even if you understand what you are doing, you are still exposed to every single security bug in all of the services you host. Most of these self hosted tools have not been through 1% of the security testing big tech services have.
Now you are exposed to every security bug in Tailscale's client, DERP relays, and coordination plane, plus you have added a trust dependency on infrastructure you do not control. The attack surface did not shrink, it shifted.
I run the Tailscale client in its own LXC on Proxmox, which connects to nginx proxy manager, also in its own LXC, which then connects to Nextcloud configured with all the normal features (passkeys, HTTPS, etc.). The Nextcloud VM uses full disk encryption as well.
Any one of those components might be exploitable, but to get my data you'd have to exploit all of them.
You do not need to exploit each layer because you traverse them. Tailnet access (compromised device, account, Tailscale itself) gets you to nginx. Then you only need to exploit Nextcloud.
LXC isolation protects Proxmox from container escapes, not services from each other over the network. Full disk encryption protects against physical theft, not network attacks while running.
And if Nextcloud has passkeys, HTTPS, and proper auth, what is Tailscale adding exactly? What is the point of this setup over the alternative? What threat does this stop that "hardened Nextcloud, exposed directly" does not? It is complexity theater. Looks like defense in depth, but the "layers" are network hops, not security boundaries.
And Proxmox makes it worse in this case, as most people won't know or understand that Proxmox's networking is fundamentally wrong: it's configured with consistent interface naming set the wrong way.
For all the remote exploits and cloud-wide outages that have happened over the past 20 years, my sshd exposed to the internet on port 22 has had zero of either. There were a couple of major OpenSSH bugs, but my auto-updater took care of them before I saw them on the news.
You can trust BigCorp all you want, but there are more sshd processes out there than tailnets, and the scrutiny is on OpenSSH. We are not comparing sshd to, say, WordPress here. Maybe when you don't over-engineer a solution you don't need to spend 100x the resources auditing it…
If you only expose SSH then you're fine, but if you're deploying a bunch of WebApps you might not want them accessible on the internet.
The few things I self host I keep out in the open: etcd, Kubernetes, Postgres, pgAdmin, Grafana, and Keycloak. But I can see why someone would want to hide inside a private network.
Yeah any web app that is meant to be private is not something I allow to be accessible from the outside world. Easy enough to do this with ssh tunnels OR Wireguard, both of which I trust a lot more than anything that got VC funding. Plus that way any downtime is my own doing and in my control to fix.
SSH is TCP, though, and the outside world can initiate a handshake. The point is that WireGuard silently discards unauthenticated traffic - there's no way for them to know the port is open and listening.
Uh, you know you can scan UDP ports just fine, right? Hosts reply with an ICMP destination unreachable / port unreachable (3/3 in IPv4, 1/4 in IPv6) if the port is closed. Discarding packets won't send that ICMP error.
It's slow to scan due to ICMP ratelimiting, but you can parallelize.
(Sure, you can disable / firewall drop that ICMP error… but then you can do the same thing with TCP RSTs.)
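To make that concrete, here's a rough Python sketch (Linux behavior assumed, ports and timeouts are illustrative) of how a UDP prober distinguishes "closed" from "open or filtered": on a connected UDP socket, a queued ICMP port-unreachable surfaces as `ConnectionRefusedError`, while a WireGuard-style silent drop just times out.

```python
import socket

def udp_port_probe(host, port, timeout=1.0):
    """Probe a UDP port. Returns 'closed' if the host answers with ICMP
    port unreachable (surfaced as ConnectionRefusedError on a connected
    UDP socket on Linux), 'open|filtered' if the probe is silently
    discarded, 'open' if something actually replies."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.connect((host, port))  # UDP connect sends nothing; it just pins the peer
    try:
        s.send(b"probe")
        s.recv(1)            # wait for a reply or the queued ICMP error
        return "open"
    except ConnectionRefusedError:
        return "closed"          # host sent ICMP port unreachable
    except socket.timeout:
        return "open|filtered"   # silently dropped, like WireGuard does
    finally:
        s.close()
```

A real scanner like nmap's `-sU` mode does essentially this, plus the rate-limit-aware parallelization mentioned above.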
Wireguard is explicitly designed to not allow unauthenticated users to do anything, whereas SSH is explicitly designed to allow unauthenticated users to do a whole lot of things.
Interesting product here, thanks, although I prefer the p2p transport layer (VL1) plus an Ethernet emulation layer (VL2) for bridging and multicast support.
Headscale is only really useful if you need to manage multiple users and/or networks. If you only have one network you want access to and a small number of users/devices, it only increases the attack surface over a single WireGuard instance listening, because it has more moving parts.
I set it up to open the port for a few seconds via port knocking. Plus another script runs on the server that opens access to my home IP address, resolved via a lookup of a domain my router updates via dyndns, so devices at my home don't need to port knock to connect.
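For illustration, a minimal client for a setup like this might look as follows in Python. The knock sequence is a made-up placeholder that would have to match the server's knockd or firewall rules, and the dyndns name is whatever your router keeps updated:

```python
import socket
import time

# Hypothetical knock sequence; must match whatever the server-side
# knockd / firewall "recent" rules are configured to watch for.
KNOCK_SEQUENCE = [7000, 8000, 9000]

def knock(host, sequence=KNOCK_SEQUENCE, delay=0.2):
    """Send one TCP SYN to each port in order. The connection attempts
    are expected to fail; only the sequence of SYNs matters, after which
    the server briefly opens the real port."""
    for port in sequence:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.1)
        try:
            s.connect((host, port))
        except OSError:
            pass  # refused or timed out - the knock was still sent
        finally:
            s.close()
        time.sleep(delay)

def home_ip(dyndns_name):
    """Resolve the dyndns name the router keeps updated, so a server-side
    script can whitelist that address without requiring a knock."""
    return socket.gethostbyname(dyndns_name)
```

The server-side half would periodically call something like `home_ip()` and insert a firewall allow rule for that address.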
I think the most important thing about Tailscale is how accessible it is. Is there a GUI for Wireguard that lets me configure my whole private network as easily as Tailscale does?
This is where using frontier models can help - you can have them assist with configuring and operating WireGuard nearly as easily as you can have them walk you through Tailscale, eliminating the need for a middleman.
The mid-level and free tiers aren't necessarily going to help, but the Pro/Max/Heavy tier can absolutely make setting up and using wireguard and having a reasonably secure environment practical and easy.
You can also have the high tier models help with things like operating a FreePBX server and VOIP, manage a private domain, and all sorts of things that require domain expertise to do well, but are often out of reach for people who haven't gotten the requisite hands on experience and training.
I'd say going through the process of setting up your self-hosting environment, then after the fact asking the language model "This is my environment: blah, a, b, c, x, y, z, blah, blah. What simple things can I do to make it more secure?" is a good starting point.
And then repeating that exercise - create a chatgpt project, or codex repo, or claude or grok project, wherein you have the model do a thorough interrogation of you to lay out and document your environment. With that done, you condense it to a prompt, and operate within the context where your network is documented. Then you can easily iterate and improve.
Something like this isn't going to take more than a few 15 minute weekend sessions each month after initially setting it up, and it's going to be a lot more secure than the average, completely unattended, default settings consumer network.
You could try to yolo it with Operator or an elevated MCP interface to your system, but the point is, those high-tier models are good enough to make significant self hosting easily achievable.
> Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.
Wireguard is distributed by distros in official packages. You don't need time, money and expertise to setup unattended upgrades with auto reboot on a debian or redhat based distro. At least it is not more complicated than setting an AI agent.
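For reference, on Debian-based systems that setup boils down to installing the `unattended-upgrades` package and two small config excerpts like these (the reboot time is an example value):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

# /etc/apt/apt.conf.d/50unattended-upgrades (relevant lines)
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```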
What about SMTP, IMAP(S), HTTP(S), and the various game servers the parent mentioned having open ports for?
Having a single port open for VPN access seems okay to me. That's what I did, but I don't want an "etc" involved in what has direct access to hardware/services in my house from outside.