Hacker News | raggi's comments

It's difficult for us to maintain documentation of exactly the kind you'd want there, though we do try to keep up with docs as best we can. In particular there is a fairly wide array of heuristics in the client to adapt to the environment that it's running in - and this is most true on Linux where there are far far too many different configuration patterns and duplicate subsystems (example: https://tailscale.com/blog/sisyphean-dns-client-linux).

To try and take a general poke at the question in more of the context you leave at the end:

- We use rule-based routing to try to dodge arbitrary ordering conflicts in the routing tables.

- We install our rules with high priority because traffic intended for the tailnet hitting non-tailscale interfaces is typically undesirable (it's often plain text).

- We integrate with systemd-resolved _by preference_ on Linux if it is present, so that if you're using cgroup/namespace features (containers, sandbox runtimes, etc.) this provides the expected DNS/interface pairings. If we can't find systemd-resolved we fall back to modifying /etc/resolv.conf, which is unavoidably an area of conflict on such systems (macOS and Windows have more broadly standard solutions we can use instead, modulo other platform details).

- We support integration with both iptables and nftables (the latter currently requires manual configuration due to slightly less broad standardization, but is enabled by default via heuristics on some distros and in some environments, like gokrazy and some containers). In nftables we create our own tables and just install jumps into the conventional xtables locations, so as to be compatible with ufw, firewalld, and so on.

- We do our best in tailscaled's sshd to implement login in a broadly compatible way, but this is another of those places where the Linux ecosystem lacks standards and there's a ton of distro variation right now (freedesktop's concerns start at a higher level, so they haven't driven standardization; everyone else, like OpenSSH, has their own pile of best guesses; and distros go ham with patches).

- We need a 1360-byte MTU path to peers for full support/stability. Our inner/interface MTU is 1280, the minimum MTU for IPv6; once packed in WireGuard and an outer IPv6/UDP header, that's 1360.
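The 1360 figure falls straight out of the per-packet overhead arithmetic. A sketch, assuming a WireGuard data message carried over IPv6/UDP (the header sizes are from the IPv6, UDP, and WireGuard wire formats):

```go
package main

import "fmt"

func main() {
	const (
		innerMTU   = 1280 // tunnel interface MTU: the IPv6 minimum
		ipv6Header = 40   // outer IPv6 header
		udpHeader  = 8    // WireGuard runs over UDP
		wgHeader   = 16   // type (4) + receiver index (4) + counter (8)
		wgAuthTag  = 16   // ChaCha20-Poly1305 authentication tag
	)
	pathMTU := innerMTU + ipv6Header + udpHeader + wgHeader + wgAuthTag
	fmt.Println(pathMTU) // 1360
}
```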

I can't answer directly, based on "very custom", whether there will be any challenges to deal with. We do offer support to work through these things though, and have helped some users with fairly exotic setups.


> It's difficult for us to maintain documentation of exactly the kind you'd want there

Suggestion: let an LLM maintain it for you.

Alternate suggestion for OP: let an LLM generate the explanations you want from the code (when available).


This problem space is not small enough to stay within current LLM attention spans. A sufficiently good agent setup might be able to help maintain docs somewhat through changes, but organizing them in an approachable way, covering all the heuristics spread across so many places and external systems, with a huge number of time and versioning factors, is hugely troublesome for current LLM capabilities. They're better at simpler problems, like typing the code.

LLM docs suck.

For technically complex things, they EXTRA suck.

This is a bad idea.


Which CEO?

hurd's init is a lot like systemd architecturally, it just gets to use kernel-provided IPC rather than having to manage its own. if your objection to systemd is its architecture, you don't want anything to do with hurd.


https://github.com/systemd/systemd/tree/main/src/core doesn't look like 1678 C files to me.


GitHub says 2.8k files when filtering for C (including headers...) https://github.com/search?q=repo%3Asystemd%2Fsystemd++langua...

If the project is split into that many different parts you need to understand... that already makes the point.


Well, to be fair, you don't need to understand how SystemD is built to know how to use it. Unit files are pretty easy to wrap your head around; it took me a while to adjust, but I dig it now.

To make an analogy: another part of LFS is building a compiler toolchain. You don't need to understand GCC internals to know how to do that.


> Well to be fair, you don't need to understand how SystemD is built to know how to use it.

The attitude that you don't need to learn what is inside the magic black box is exactly the kind of thing LFS is pushing against. UNIX traditionally was a "worse is better" system, where it's seen as better design to have a simple system whose internals you can understand, even if that simplicity leads to bugs. Simple systems that fit the needs of their users can evolve into complex systems that fit the needs of users. But you (arguably) can't start with a complex system that people don't use and get users.

If anyone hasn't read the full Worse Is Better article before, it's your lucky day:

https://www.dreamsongs.com/RiseOfWorseIsBetter.html


LFS is full of packages that fit your description of a black box. It shows you how to compile and configure packages, but I don't remember them diving into the code internals of a single one.

I understand not wanting to shift from something that is wholly explainable to something that isn't, but it's not the end of the world.


No, it's not the end of the world. And I agree, LFS isn't going to be the best resource for learning how a compiler works, or cron, or NTP. But the init process and systemd are so core to Linux. I can certainly see the argument that they should be part of the "from scratch" parts.


You still build it from scratch (meaning you compile from source); they don't dive into Linux code internals either.

They still explain what an init system is for and how to use it.


The problem is ultimately that by choosing one, the other gets left out, so whatever is left out just has one more nail in its coffin. With LFS being the more-or-less official how-to guide for building a Linux system, sysvinit is now essentially "officially" deprecated by Linux. This is what is upsetting people here.

I'm OK with that in the end because my system is a better LFS anyhow. The only part that bothers me is that the change was made with reservations, rather than him saying no and putting his foot down, insisting that sysvinit stay in regardless of Gnome/KDE. But I do understand the desire to get away from having to maintain two separate versions of the book.

Ultimately I just have to part ways with LFS for good, sadly. I'm thankful for these people teaching me how to build a Linux system. It would have been 100x harder trying to do it without them.


Linux is just a kernel and does not ship with any sort of init system, so I don't see how anything is being deprecated by Linux.

The LFS project is free to make any decisions that they want about what packages they're going to include in their docs. If anyone is truly that upset about this then they should volunteer their time to the project instead of commenting here about what they think the project should do IMO.


The whole point of LFS is to understand how the thing works.


nothing is actually stopping people from understanding systemd-init except a constant, poorly justified flame war. it's better documented than pretty much everything that came before it.


In what way was Bruce incorrect, your one link excepted?


he is counting every C file in the systemd _repository_, which houses multiple projects, libraries, and daemons, and equates that to the C file count for a single init. it's a disingenuous comparison. systemd-init is a small slice of the code in the systemd repository.


I'm guessing he shares my belief that systemd-init cannot exist in the wild on its own, correct? When you want a teacup, you have to get the whole 12 place dinner set.


IIRC the mandatory components are the init system, udev, dbus, and journald. Journald is probably the one that feels most otherwise optional (udev and dbus are both pretty critical for anything Linux regardless), though you can put it into a passthrough mode so you don't have to deal with its log format if you don't want to. Everything else is optional.
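For the passthrough mode, the relevant knobs are journald.conf's `Storage=` and `ForwardToSyslog=`. A minimal sketch, assuming you already run a traditional syslog daemon to receive the forwarded lines:

```ini
# /etc/systemd/journald.conf
[Journal]
Storage=volatile        # or "none" to keep no journal at all
ForwardToSyslog=yes     # hand log lines to a classic syslog daemon
```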


> ... dbus [is] pretty critical for anything linux regardless

Weird. If I weren't a sicko and had OBS Studio installed on my multipurpose box [0] I'd not have dbus installed on it.

dbus is generally optional; not that many packages require it. [1]

[0] Two of its several purposes are video transcoding and file serving.

[1] This is another area where Gentoo Linux is (sadly) one of the absolute best Linux distros out there.


> he is counting every c file in the systemd _repository_ which houses multiple projects, libraries and daemons. he equates that to the c file count for a single init. it's a disingenuous comparison.

See, this is why when I refer to the Systemd Project, I spell it as "SystemD", and when I'm referring to systemd(1), I spell it "systemd". I understand that some folks who only wish to shit on the Systemd Project also spell it that way, but I ain't one of them.

> systemd-init is a small slice of the code in the systemd repository.

Given the context:

   Yes, systemd provides a lot of capabilities, but we will be losing some things I consider important.
I'd say that the topic of discussion was SystemD, rather than systemd. systemd doesn't provide you with all that many capabilities; it's really not much more than what you get with OpenRC + a supervisor (either supervise-daemon or s6).


I love my shield, it’s been a staple.

If they wanted to really knock it out of the park, the next step would be a SteamOS port with DRM support.


DRM is anti-consumer malware, so I hope not.

There are other ways to source videos than paying a monthly fee forever for something that you will never own.


Yeah, but when your daughter wants to watch Moana 2, that tends to stop being an issue.


you might find that not everyone agrees


Please talk to your congresspeople about getting DRM abolished; in the meantime, please don't try to deny my freedom to consume legally obtained content that is only available with DRM.


Unless Valve do some work to enable that and support a hardware-backed chain of trust for drivers, that's not going to happen.

(I think it should happen but that's not the same as that it will.)


That would be great, honestly. Imagine just being able to install Android apps like Netflix, Disney+, ... on your Steam Deck or Steam Machine and having them work out of the box with Widevine L1. Then you'd truly need only one device attached to your TV for all your entertainment needs. And a great, supported one at that.


The drivers are already done, they're in the Android build.

I just want a more open OS.

I don't mind if it requires running a vendor-signed boot, kernel, and driver chain; I'd most likely be using those same vendor chains anyway for non-upstreamed hardware.


It could be really interesting if they used a fraction of the tech they have or recently stopped using that could still fit here well.


Been wanting this ever since doing it in Fuchsia. Really excited to see added focus and investment in this for the Linux ecosystem.


> I'd just cloned a copy of Chromium myself, and for all that time and money, independent developers who cloned the repo reported that the codebase is very far from a functional browser. Recent commits do not compile cleanly, GitHub Actions runs on main are failing, and reviewers could not find a single recent commit that was built without errors.

Significant typo I assume?


there are a loooot of languages/compilers for which the most wall-time expensive operation in compilation or loading is stat(2) searching for files


I actually ran into this issue building dependency graphs of a Go monorepo. We analyzed the CPU trace and found the program doing a lot of GC, so we reduced allocations. That was just noise, though: the runtime was simply making use of time spent waiting on I/O, because we had shelled out to go list to get a JSON dep graph from the CLI. That turns out to be slow due to stat calls and reading from disk. We replaced our use of go list with a custom package import graph parser built on the std lib parser packages, and instead of reading from disk we feed the parser byte blobs from git, also using git ls-files to "stat" the files. I don't remember the specifics, but I believe we brought the time to build the dep graph from 30-45s down to 500ms.
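The import-extraction half of that approach can be sketched with the standard library alone. This is not the commenter's actual code, just a plausible shape: `parser.ImportsOnly` stops parsing after the import block, and the source bytes can come from anywhere (e.g. `git cat-file`) rather than the filesystem:

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"strconv"
)

// importsOf parses a Go source blob (no disk access) and returns its
// import paths. parser.ImportsOnly makes this cheap even for big files.
func importsOf(filename string, src []byte) ([]string, error) {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, filename, src, parser.ImportsOnly)
	if err != nil {
		return nil, err
	}
	var paths []string
	for _, imp := range f.Imports {
		p, _ := strconv.Unquote(imp.Path.Value)
		paths = append(paths, p)
	}
	return paths, nil
}

func main() {
	// In the real tool the blob would come from git (ls-files + cat-file)
	// rather than a stat-heavy walk of the working tree.
	src := []byte("package demo\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n)\n")
	paths, _ := importsOf("demo.go", src)
	fmt.Println(paths) // [fmt net/http]
}
```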


> I am working on a high-performance game that runs over ssh.

WAT. Please no.


Why not? If it's high-performance, it's fine.


SSH suffers from TCP-in-TCP issues, which means it'll always take a performance hit compared to other protocols.


If you spend an entire CPU to process a few megabits of SSH traffic, it isn't high performance.


Performing with highly elevated privileges? (Joke)


ssh the protocol doesn't imply any privileges of any kind


Unless you leave your ssh agent on, then it very much does.


Yep.

Observability stacks are a blind alley similar to containers: they solve a handful of defined problems and immediately fall down on their own KPIs, around events handled/prevented in place, efficiency, and being easier to use than what came before.

