Hacker News | lmm's comments

> It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.

Sure, but opening up one port is a much smaller surface than exposing yourself to a whole cloud hosting company.


Ah… I really could not disagree more with that statement. I know we don’t want to trust BigCorp and whatnot, but a single exposed port and an incomplete understanding of what you’re doing is really all it takes to be compromised.

Same applies to Tailscale. A Tailscale client, coordination plane vulnerability, or incomplete understanding of their trust model is also all it takes. You are adding attack surface, not removing it.

If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.

If you are exposing a handful of hardened services on infrastructure you control, Tailscale adds complexity for no gain. If you are connecting machines across networks you do not control, or want zero-config access to internal services, then I can see its appeal.


There was a time when people were allowed to drive cars unlicensed.

These days, that seems insane.

As the traffic grew, as speeds increased, licensing became necessary.

I think, these days, we're almost into that category. I don't say this happily. But having unrestricted access seems like an era coming to an end.

I realise this seems unworkable. But so was the idea of a driver's license. Sometimes society and safety comes first.

I'm willing to bet that in under a decade, something akin to this will happen.


I'll take this to mean that you think arbitrary access to a computer's capabilities will require licensure, in which case I think this is a bad metaphor.

The point of a driver's license is that driving a ton of steel around at >50mph presents risk of harm to others.

Not knowing how to use a computer - driving it "poorly" - does not risk harm to others. Why does it merit restriction, based on the topic of this post?


Your unpatched Wordpress install is someone else’s botnet host, forming part of the “distributed” in DDoS, which harms others.

It’s why Cloudflare exists, which in itself is another form of harm, in centralising a decentralised network.


The argument is self-defeating:

1. "Unpatched servers become botnet hosts" - true, but Tailscale does not prevent this. A compromised machine on your tailnet is still compromised. The botnet argument applies regardless of how you access your server.

2. Following this logic, you would need to license all internet-connected devices: phones, smart TVs, IoT. They get pwned and join botnets constantly. Are we licensing grandma's router?

3. The Cloudflare point undermines the argument: "botnets cause centralization (Cloudflare), which is harm", so the solution is... licensing, which would centralize infrastructure further? That is the same outcome being called harmful.

4. Corporate servers get compromised constantly. Should only "licensed" corporations run services? They already are, and they are not doing better.

Back to the topic: I have no clue what you think Tailscale is, but it does increase security, only convenience.


The comment I was replying to was claiming that using your computer 'poorly' does not harm others. I was simply refuting that. Having spent the last two decades null routing customer servers when they decide to join an attack, this isn't theoretical.

As an aside, I dislike tailscale, and use wireguard directly.

Back to the topic: Your connected device can harm others if used poorly. I am not proposing licensing requirements.


I meant: does not increase security.

I would detest living in a world where regulators assign liability in this way, it sounds completely ridiculous. On a level with "speech is violence".

If I threw my license away tomorrow, what would be insane about me driving without a license?

Are you saying "unlicensed" where you mean "untrained?"


The point of massive fines, and in some cases jail time, for driving without a license is control.

If someone breaks regs, you want to be able to levy fines or jail time. If they do it a lot, you want to be able to take away their ability to drive at all.

It's about regulating poor drivers. And yes, initially vetting a driver too.


I don't really know any adults who don't drive, and nobody ever told me they weren't capable.

I don't think it's about driving ability, besides the initial vetting.


Most inadequate drivers don't think they're inadequate, which is part of the problem. Unless your acquaintances are exclusively PMC, you most likely know several adults who've lost their licenses because they are not adequately safe drivers; and if your acquaintances are exclusively PMC, you most likely know several adults who are not adequately safe drivers and should have lost their licenses but knew the legal tricks to avoid it.

From the perspective of those writing the regs, speeding, running lights, and driving carelessly or dangerously (all fines or crimes here) are indeed indicators of whether someone is a safe driver.

Understand, I am not advocating this. I said I did not like it. Neither of those statements has anything to do with whether I think it will come to pass.


I am ~30 years old and I do not drive. In fact, I cannot drive.

Can you be more concrete about what you predict?

This felt like it didn’t do your aim justice, “$X and an incomplete understanding of what you’re doing is all it takes to be compromised” applies to many $X, including Tailscale.

Even if you understand what you are doing, you are still exposed to every single security bug in all of the services you host. Most of these self hosted tools have not been through 1% of the security testing big tech services have.

Now you are exposed to every security bug in Tailscale's client, DERP relays, and coordination plane, plus you have added a trust dependency on infrastructure you do not control. The attack surface did not shrink, it shifted.

I run the Tailscale client in its own LXC on Proxmox, which connects to nginx proxy manager, also in its own LXC, which then connects to Nextcloud configured with all the normal features (passkeys, HTTPS, etc.). The Nextcloud VM uses full disk encryption as well.

Any one of those components might be exploitable, but to get my data you'd have to exploit all of them.


You do not need to exploit each layer because you traverse them. Tailnet access (compromised device, account, Tailscale itself) gets you to nginx. Then you only need to exploit Nextcloud.

LXC isolation protects Proxmox from container escapes, not services from each other over the network. Full disk encryption protects against physical theft, not network attacks while running.

And if Nextcloud has passkeys, HTTPS, and proper auth, what is Tailscale adding exactly? What is the point of this setup over the alternative? What threat does this stop that "hardened Nextcloud, exposed directly" does not? It is complexity theater. Looks like defense in depth, but the "layers" are network hops, not security boundaries.


And Proxmox makes it worse in this case, as most people won't know or understand that Proxmox's networking is fundamentally wrong: it's configured with consistent interface naming set the wrong way.

Across all the remote exploits and cloud-wide outages of the past 20 years, my sshd exposed to the internet on port 22 has had zero of either. There were a couple of major OpenSSH bugs, but my auto-updater took care of those before I saw them on the news.

You can trust BigCorp all you want, but there are more sshd processes out there than tailnets, and the scrutiny is on OpenSSH. We are not comparing sshd to, say, WordPress here. Maybe when you don't over-engineer a solution you don't need to spend 100x the resources auditing it…


If you only expose SSH then you're fine, but if you're deploying a bunch of WebApps you might not want them accessible on the internet.

The few things I self-host I keep out in the open: etcd, Kubernetes, Postgres, pgAdmin, Grafana and Keycloak. But I can see why someone would want to hide inside a private network.


Yeah any web app that is meant to be private is not something I allow to be accessible from the outside world. Easy enough to do this with ssh tunnels OR Wireguard, both of which I trust a lot more than anything that got VC funding. Plus that way any downtime is my own doing and in my control to fix.

How would another service be impacted by an open UDP port on a server that the service is not using?

Using a BigCorp service also has risks. You are exposed to many of their vulnerabilities, that’s why our information ends up in data leaks.

Someone would need your 256-bit key to do anything to an exposed Wireguard port.

In theory.

In the same theory, someone would need your EC SSH key to do anything with an exposed SSH port.

Practice is a separate question.


SSH is TCP though, and the outside world can initiate a handshake; the point being that WireGuard silently discards unauthenticated traffic, so there's no way they can know the port is open for listening.

Uh, you know you can scan UDP ports just fine, right? Hosts reply with an ICMP destination unreachable / port unreachable (3/3 in IPv4, 1/4 in IPv6) if the port is closed. Discarding packets won't send that ICMP error.

It's slow to scan due to ICMP ratelimiting, but you can parallelize.

(Sure, you can disable / firewall drop that ICMP error… but then you can do the same thing with TCP RSTs.)
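The closed-vs-discarding distinction above can be seen from a plain connected UDP socket, no raw sockets needed; on Linux the kernel surfaces an incoming ICMP port-unreachable on a connected socket as a connection-refused error (a sketch; the classification names follow the usual scanner convention):

```python
import socket

def udp_port_probe(host: str, port: int, timeout: float = 1.0) -> str:
    """Classify a UDP port the way a scanner does: an ICMP
    port-unreachable means 'closed'; silence means 'open|filtered',
    i.e. a listening service OR a firewall/WireGuard silently
    discarding the packet; the two are indistinguishable."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.connect((host, port))  # connected socket, so ICMP errors surface
    try:
        s.send(b"\x00")
        s.recv(1)  # ConnectionRefusedError here = ICMP 3/3 (or 1/4 on IPv6)
        return "open"
    except ConnectionRefusedError:
        return "closed"
    except socket.timeout:
        return "open|filtered"
    finally:
        s.close()
```

This is also why ICMP rate limiting only slows the scan down rather than preventing it: each probe is independent, so they can run in parallel.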


That's why you discard ICMP errors.

If anything, that's why you discard ICMP port unreachable, which I assume you meant.

If you're blanket dropping all ICMP errors, you're breaking PMTUD. There's a special place reserved in hell for that.

(And if you're firewalling your ICMP, why aren't you firewalling TCP?)


Not even remotely comparable.

Wireguard is explicitly designed to not allow unauthenticated users to do anything, whereas SSH is explicitly designed to allow unauthenticated users to do a whole lot of things.


> SSH is explicitly designed to allow unauthenticated users to do a whole lot of things

I'm sorry, what?


You could also use ZeroTier and get similar capabilities without a third-party being a blocker.

or netbird

Interesting product here, thanks, although I prefer the p2p transport layer (VL1) plus an Ethernet emulation layer (VL2) for bridging and multicast support.

Headscale is a thing

Headscale is only really useful if you need to manage multiple users and/or networks. If you only have one network you want access to and a small number of users/devices, it only increases the attack surface over a single WireGuard listener, because it has more moving parts.

I set it up to open the port for a few seconds via port knocking. Plus another script that runs on the server that opens connections to my home IP address, doing a reverse lookup on a domain my router updates via DynDNS, so devices at my home don't need to port knock to connect.
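The client side of a knock sequence like that fits in a few lines (a sketch; the host, ports, and timing here are hypothetical, and the server side would be something like knockd watching for the pattern):

```python
import socket
import time

def knock(host: str, sequence: list, delay: float = 0.1) -> int:
    """Send one empty UDP datagram to each port in order; a
    knockd-style daemon that sees the full sequence then opens the
    real port briefly. Returns the number of knocks sent."""
    sent = 0
    for port in sequence:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.sendto(b"", (host, port))
        s.close()
        sent += 1
        time.sleep(delay)  # keep knocks ordered and spaced for the daemon
    return sent

# Hypothetical sequence; connect to the real service right after knocking.
knock("127.0.0.1", [7105, 8205, 9305])
```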

I think the most important thing about Tailscale is how accessible it is. Is there a GUI for Wireguard that lets me configure my whole private network as easily as Tailscale does?

I guess the people Israel is murdering on a massive scale are generally noncitizens, but it's still not really in a position to throw stones.

How does the post help with any of that?

They are legally obliged to respond to a registered letter. A request by registered letter is a potential first step in a subsequent legal procedure.

Note that that's for a service that ran for 18 months total.

Are we paying more? Or are we being lied to about the rate of inflation?

A completely different topic.

Hardly. Whether we are paying more for less is intimately linked to whether we are, in fact, paying more.

> And yet Rust ecosystem practically killed runtime library sharing, didn't it?

Yes, it did. We have literally millions of times as much memory as in 1970 but far less than millions of times as many good library developers, so this is probably the right tradeoff.


C++ already killed it: templated code is only instantiated where it is used, so with C++ it is a random mix of what goes into the separate shared library and what goes into the application using it. This makes ABI compatibility incredibly fragile in practice.

And increasingly, many C++ libraries are header only, meaning they are always statically linked.

Haskell (or GHC at least) is also in a similar situation to Rust as I understand it: no stable ABI. (But I'm not an expert in Haskell, so I could be wrong.)

C is really the outlier here.


Static linking is still better than shipping a whole container for one app. (Which we also seem to do a lot these days!)

It still boggles my mind that Adobe Acrobat Reader is now larger than Encarta 95… Hell, it’s probably bigger than all of Windows 95!


A whole container, or even Chromium in Electron.

It's not just about memory. I'd like to have a stable Rust ABI to make safe plugin systems. Large binaries could also be broken down into dynamic libraries, making rebuilds much faster at the cost of leaving some optimizations on the table. This could be done today with a semi-stable versioned ABI. New app builds would be able to load older libraries.

The main problem with dynamic libraries is when they're shared at the system level. That we can do away with. But they're still very useful at the app level.


> I'd like to have a stable Rust ABI to make safe plugin systems

A stable ABI would allow making more robust Rust-Rust plugin systems, but I wouldn't consider that "safe"; dynamic linking is just fundamentally unsafe.

> Large binaries could also be broken down into dynamic libraries and make rebuilds much faster at the cost of leaving some optimizations on the table.

This can already be done within a single project by using the dylib crate type.
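For reference, opting a crate into that within a workspace is a small Cargo.toml change (a sketch; the crate name is hypothetical). Note that `dylib` has no stable ABI either: every artifact must be built by the exact same compiler version.

```toml
# Cargo.toml of a hypothetical `core_logic` crate
[lib]
crate-type = ["dylib"]  # Rust dynamic library; ABI tied to the rustc that built it
```

Dependent crates then link against the shared object instead of recompiling its code, which is where the faster-rebuild win comes from.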


Loading dynamic libraries can fail for many reasons but once loaded and validated it should be no more unsafe than regular crates?

You could check that mangled symbols match, and have static tables with hashes of structs/enums to make sure layouts match. That should cover low level ABI (though you would still have to trust the compiler that generated the mangling and tables).

A significantly more thorny issue is to make sure any types with generics match, e.g. if I declare a struct with some generic and some concrete functions, and this struct also has private fields/methods, those private details (that are currently irrelevant for semver) would affect the ABI stability. And the tables mentioned in the previous paragraph might not be enough to ensure compatibility: a behaviour change could break how the data is interpreted.

So at minimum this would redefine what is a semver compatible change to be much more restricted, and it would be harder to have automated checks (like cargo-semverchecks performs). As a rust developer I would not want this.
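The "static tables with hashes of structs" idea above can be sketched in a few lines (illustrative only, in Python for brevity; the function name and the (name, type, offset) triple format are made up). A loader would compare fingerprints and refuse a plugin whose layouts drifted, though, as noted, this catches layout changes and not behavioural ones:

```python
import hashlib

def layout_fingerprint(fields):
    """Fingerprint a struct layout from (name, type, offset) triples.
    Equal fingerprints imply the same layout; they say nothing about
    whether the code interpreting that layout still behaves the same."""
    h = hashlib.sha256()
    for name, ty, off in fields:
        h.update(f"{name}:{ty}@{off};".encode())
    return h.hexdigest()[:16]
```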


What properties are you validating? ld.so/libdl don't give you a ton more than "these symbols were present/absent."

It's really bad for security.

How much evidence do we actually have that AI wasn't used for these "real props"?

(Personally I don't care about my ability to tell the difference between what's AI and what's not; I care about my ability to tell the difference between well-crafted and not, and that seems to be functioning fine)


IPv6 is already here if you're not in the US. I moved house last month and consumer ISPs don't offer a (real) IPv4 connection in my country any more; you get an IPv6 connection and your router does MAP-E if you want to send data over IPv4.

I want to echo this comment. I am on MAP-E in Asia and it is very difficult to get an exclusive IPv4 address without paying extra money.

And I want to connect to my machines without some stupid vpn or crappy cloud reverse tunneling service. Not everyone in the world wants to subscribe to some stupid SaaS service just to get functionality that comes by default with ipv6.

I think Silicon Valley is in a thought bubble and for people there ipv4 is plentiful and cheap. So good for them. However, the more these SaaS services delay ipv6 support, the more I pray to any deity out there I can move off these services permanently.


> The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply.

In my experience the differences are just an excuse, and however similar you made the protocol to IPv4 the people who wanted an excuse would still manage to find one. Deploying IPv6 is really not hard, you just have to actually try.


> - I don't have a shortage of IPv4. Maybe my ISP or my VPN host do, I don't know. I have a roomy 10.0.0.0/8 to work with.

That's great until you need to connect to a work/client VPN that decided to also use 10.0.0.0/8.

> - Every host routable from anywhere on the Internet? No thanks. Maybe I've been irreparably corrupted by being behind NAT for too long but I like the idea of a gateway between my well kept garden and the jungle and my network topology being hidden.

Even on IPv4, having normal addresses for all your computers makes life so much nicer. Perhaps-trivial example, but one that matters to me: if two people live in one house and a third person lives in a different house, can they all play a network game together? IPv4 sucks at this.


> That's great until you need to connect to a work/client VPN that decided to also use 10.0.0.0/8.

There's numerous other reserved IPv4 blocks that can be used: https://en.wikipedia.org/wiki/Reserved_IP_addresses#IPv4. Would definitely not recommend to use 10/8 for private networks.
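Whatever range you pick, it's worth checking for collisions with the networks you roam into; Python's stdlib makes the overlap check trivial (the VPN range here is hypothetical):

```python
import ipaddress

home = ipaddress.ip_network("10.0.0.0/8")       # the roomy default
work_vpn = ipaddress.ip_network("10.8.0.0/16")  # hypothetical client VPN
alt = ipaddress.ip_network("192.168.72.0/24")   # a less-trodden RFC 1918 slice

print(home.overlaps(work_vpn))  # True: routes collide
print(alt.overlaps(work_vpn))   # False: safe to use together
```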


Landed on 172.16/22 for this reason. However, it's not uncommon for an enterprise to use all 3 private classes. One place I worked used 192.168 for management, 10 for servers, and 172 for wifi.

Using 2 different classes has been a pretty common setup for wired and wireless in my experience.

