Gentoo doesn't "exist" because it is necessary to have an alternative to systemd. Gentoo is simply about choice and works with both OpenRC and systemd. It supported other inits to some degree in the past as well.
After over a decade of Debian, when I upgraded my PC I tried every big systemd-based distro, including openSUSE, which I wholly loathed. I finally decided on Void and feel as much at home as I did 20+ years ago when I began.
There are serious problems with the systemd paradigm, most of which I couldn't argue for or against. But at least in Void I can remove NetworkManager altogether, use cron as I always have, and generally remain free to do as I please, at least until every package there is grows a systemd dependency, which seems frightfully plausible at this pace.
Void is as good as I could have wanted. If that ever goes, I guess it's either BSD or a cave somewhere.
I'm glad to see the terse questions here. They're well warranted.
systemd parses your crontab and runs the jobs inside on its own terms
Of course you can run cron as well and have all your jobs run twice in two different ways, but that's only pedantically possible; it's a completely useless way to do things.
Not stopping. Just clashing with that and a hundred other things that I never wanted managed by one guy: systemd. systemd.timer, systemd.service, yes, trivial, but I don't catalog every thing that bothers me about systemd - I just stay away from it. There are plenty of better examples. So wherever I wrote 'stop', it should read 'hinder'.
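For completeness, the "trivial" version looks roughly like this; the unit names, schedule, and script path are all made up for the example:

  # backup.service (placeholder name and script)
  [Unit]
  Description=Nightly backup job

  [Service]
  Type=oneshot
  ExecStart=/usr/local/bin/backup.sh

  # backup.timer (pairs with the service above)
  [Timer]
  OnCalendar=*-*-* 03:00:00
  Persistent=true

  [Install]
  WantedBy=timers.target

  # enable with: systemctl enable --now backup.timer

Two files and an enable command instead of one crontab line - trivial, sure, but it illustrates the point.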
> Void is as good as I could have wanted. If that ever goes, I guess it's either BSD or a cave somewhere.
If systemd-less Linux ever goes away, there are indeed still the BSDs. But I thought long and hard about this and already did some testing: I used to run Xen back in the early hardware-virt days, and nowadays I run Proxmox (still, sadly, systemd-based).
A hypervisor with a VM and GPU passthrough to the VM is at least another option: it's going to be a long, long while before the people who want to take away our ability to control our machines can prevent us from running a minimal hypervisor and then the "real" OS in a VM controlled by the hypervisor.
I did GPU passthrough tests and everything works just fine: be it Linux guests (which I use) or Windows guests (which I don't use).
My "path" to dodge the cave you're talking about is going to involved an hypervisor (atm I'm looking at the FreeBSD's bhyve hypervisor) and then a VM running systemd-less Linux.
And seeing that, today, we can run just about every old system under the sun in a VM, I take it we'll all be long dead before evil people manage to prevent us from running the Linux we want, the way we want.
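For what it's worth, here's a minimal sketch of what that setup looks like on the FreeBSD/bhyve side. The PCI address (2/0/0), disk image, tap interface and VM name are all assumptions for the example:

  # /boot/loader.conf: load the hypervisor and reserve the GPU for passthrough
  vmm_load="YES"
  pptdevs="2/0/0"

  # start a Linux guest with the GPU handed through
  # (-S wires guest memory, which passthru requires)
  bhyve -c 4 -m 8G -S -A -H \
    -s 0,hostbridge \
    -s 2,virtio-blk,/vm/void.img \
    -s 3,virtio-net,tap0 \
    -s 4,passthru,2/0/0 \
    -s 31,lpc -l com1,stdio \
    -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
    voidvm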
You're not alone. And we're not alone.
I simply cannot stand the insufferable arrogance of Agent Poettering. Especially not given the kitchen sink that systemd is (systemd ain't exactly a home run, and many are realizing that fact now).
I use Fossil extensively, but only for personal projects. There are deliberate design decisions, such as no rebasing [0], and overall it is simpler yet more useful to me. However, I think Fossil is better suited to projects governed under the cathedral model than the bazaar model. It's great for self-hosting, and the web UI is excellent not only for version control but also for managing a software development project. However, if you want a low barrier to integrating contributions, Fossil is not as good as the various Git forges out there. You have to either receive patches or Fossil bundles via email or the forum, or onboard/register contributors as developers with quite wide repo permissions.
It was developed primarily to replace SQLite's CVS repository, after all. They used CVSTrac as the forge and Fossil was designed to replace that component too.
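For what it's worth, the bundle flow looks roughly like this, from memory (branch and file names invented, so double-check the exact options):

  # contributor: pack a private branch into a single file to mail or post
  fossil bundle export my-fix.bundle --branch my-fix

  # maintainer: review it, then pull it into the canonical repo
  fossil bundle import my-fix.bundle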
Not all reading is the same. In other words, I wish this article had differentiated between different types of reading. For example, I read that many young adults have picked up reading "new adult" genre books. They enjoy the physical experience of an analog medium and consume one installment after another of popular series. This sounds fine at first, but the content is problematic. These books are not literature, and they may convey problematic views of behavior. For example, they may perpetuate outdated views of relationships between men and women, portraying them as unequal and reproducing clichéd stereotypes from the last millennium.
In short, the article focuses only on the amount of reading, but the content is also important. This should be part of the equation.
I see no reference to this in the article. Nor have you explained why these books are "not literature". This sounds like someone looking at a piece of art, and saying "that's not art".
As we're referencing young adults here, they already have a degree of understanding of the world today. Reading from the past gives historical context to how the world is today, to why the world is as it is. I'd have hoped they'd been well exposed to such things in school, and you can be absolutely sure they've been exposed to such things in movies, or music (have you heard some rap music?), or... you know, this thing called the Internet.
In 12 seconds I can find more untoward content on the Internet than I could in an entire library or book store.
When I was that age I read a lot of science fiction series. I had friends reading what they called “trashy romance”—they knew it was in no way realistic. This was also during peak Harry Potter, which is literary street food, and I say that as a compliment. Most of us read other stuff too, but realistically, dense English lit was confined to English class.
So this isn’t new and I don’t see the problem.
As for the “views,” by this standard kids shouldn’t read A Tale of Two Cities because it encourages beheadings.
A book portraying problematic behaviour doesn't mean it endorses that behaviour. Jesus, it seems like liberals and pseudo-progressives have adopted the mindset and vocabulary of leftists and actual progressives while clinging to their reactionary puritan sensibilities, this time calling something "problematic" instead of demonic.
> Every bad day for microsoft is yet another glorious day for linux.
Nah. If that were the case, Linux would dominate personal computer statistics. The reality is that most mainstream users just don't care. But, of course, that won't stop us.
I would also argue that _what_ personal computing means to most people has also evolved, even with younger generations. My Gen Z nephew was flabbergasted the other day when he learned that I use my Documents, Videos, Desktop folders, etc. He literally asked, "What is the Documents folder even for?". To most people, stuff is just magically somewhere (the cloud), and when they get a new machine they just expect it all to be there and work. I feel like these cryptography and legality discussions here on Hacker News always miss the mark because we overestimate how much most people care. Speaking of younger generations, I also get the feeling that for them there isn't such a thing as "digital sovereignty" or "ownership", at least not by the same definitions we Gen X and older millennials have internalized.
Across the generations, there are always a few groups for whom cryptographic ownership really matters, such as journalists, protesters, and so on. Here on HN I feel like we tend to over-generalize these use cases to everybody, and then we are surprised when most people don't actually care.
I spent a long time tinkering with the tooling, which meant that writing always took a back seat or was put off. As a transition, I decided to use Bear Blog [0] for writing, and when I eventually find a self-hosted solution that works for me, I'll just switch over. And Bear Blog is in line with my values, unlike so many other platforms.
You make a good point. From a philosophical point of view, abstractions should hide complexity and make things easier for the human user. It should be like a pyramid: the bottom layer should be the most complex, and each subsequent layer should be simpler. The problem is that many of today's abstractions are built on past technology, which was often much better designed and simpler due to the constraints of that time. Because today's abstractions pile on complexity of their own and leak in unavoidable ways, we have a plethora of "modern" frameworks and tools that are difficult to use and create mental strain for developers. In short, I always avoid using such frameworks and prefer the old, boring basics wherever possible.
I'm struggling to form a definitive statement about my thoughts here, but I'll give it a try:
Every (useful) abstraction that aims to make an action easier will have to be more complex inside than doing the action itself.
Would love for someone to challenge this or find better words. But honestly, if that's not the case, you end up with something like leftPad. Libraries also almost always cover more than one use case, which also leads to them being more complex than a simple tailored solution.
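To make that concrete, here's a sketch of what I mean - not the real left-pad package, just the shape of the argument:

  // Tailored: exactly what one call site needs, nothing more.
  const pad4 = (s: string): string => s.padStart(4, "0");

  // Library-shaped: has to handle arbitrary widths, arbitrary fill strings,
  // non-string inputs, etc., so it is necessarily more complex inside.
  function leftPad(value: unknown, width: number, fill = " "): string {
    const s = String(value);
    if (s.length >= width || fill.length === 0) return s;
    return fill.repeat(Math.ceil((width - s.length) / fill.length))
               .slice(0, width - s.length) + s;
  }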
I think of it as: adding an abstraction relocates complexity away from what you want to make easy and moves it somewhere else. It does not eliminate complexity in total; it increases it. The best abstractions have a soft edge between using them and not using them. The worst are like black holes.
> Surely a better approach is to record the complete ancestry of every check-in but then fix the tool to show a "clean" history in those instances where a simplified display is desirable and edifying
From your link. The actual issue that people ought to be discussing in this comment section imo.
Why do we advocate destroying information/data about the dev process when in reality we need to solve a UI/display issue?
The number of times in the last 15ish years I've solved something by looking back at the history and piecing together what happened (e.g. a refactor from A to B as part of a PR, then tweaking B to eventually become C before getting it merged, where there are important details that only resulted because of B, and you don't realize they are important until 2 years later) is high enough that I consider it very poor practice to remove the intermediate commits that actually track the software development process.
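The kind of digging I mean, roughly (the file path and search string are invented):

  # every commit on any branch that touched this file, including the
  # intermediate ones that never reached the mainline
  git log --all --oneline -- src/parser.c

  # find when a particular detail appeared or disappeared, and in which commit
  git log --all -p -S 'retry_backoff' -- src/parser.c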
Because nobody cares about the dev process. The number of times I've looked back in the history and seen a branch with a series of twenty commits labeled "fix thing", "oops", "typo", "remove thing I tried that didn't work", or just a chain of WIP WIP WIP WIP is high enough that I find it useless, irritating, and pointless.
One commit per logical change. One merge per larger conceptual change. I will rewrite my actual dev process so that individual commits can be reviewed as small, independent PRs when possible, and so that bigger PRs can be reviewed commit-by-commit to understand the whole. Because I care about my reviewers, and because I want to review code like this.
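Concretely, that clean-up is usually just an interactive rebase before the PR goes up (branch name assumed):

  # replay the branch onto main and collapse the noise
  git rebase -i origin/main
  # in the todo list: keep one 'pick' per logical change, mark the
  # "oops"/"typo" commits as 'fixup', and 'reword' messages as needed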
Care about your goddamn craft, even just a little bit.
Isn't this just `--first-parent`? I think that should probably be the default in git. Maybe the only way this will happen is with a new SCM.
But the git authors are adamant that there's no convention for linearity, and somehow extended that to why there shouldn't be a "theirs" merge strategy to mirror "ours" (writing it out, it makes even less sense, since "theirs" is what you'd want in a first-parent-linear repo, not "ours").
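For reference, the flag in question, plus an alias (the alias name is just a suggestion):

  # show only the mainline: one entry per merge, branch-internal commits hidden
  git log --first-parent --oneline

  git config --global alias.mainline "log --first-parent --oneline"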
Yes, that is also my feeling. But comparing an interpreted language with a compiled one is not really fair.
Here is my quick benchmark. I refrain from using Python for most scripting/prototyping tasks but really like Janet [0] - here is a comparison for printing the current time as a Unix epoch:
$ hyperfine --shell=none --warmup 2 "python3 -c 'import time;print(time.time())'" "janet -e '(print (os/time))'"
Benchmark 1: python3 -c 'import time;print(time.time())'
  Time (mean ± σ):     22.3 ms ±  0.9 ms    [User: 12.1 ms, System: 4.2 ms]
  Range (min … max):   20.8 ms … 25.6 ms    126 runs

Benchmark 2: janet -e '(print (os/time))'
  Time (mean ± σ):      3.9 ms ±  0.2 ms    [User: 1.2 ms, System: 0.5 ms]
  Range (min … max):    3.6 ms …  5.1 ms    699 runs

Summary
  'janet -e '(print (os/time))'' ran
    5.75 ± 0.39 times faster than 'python3 -c 'import time;print(time.time())''
Concerning (1): I have no offline sync in place; all my emails stay on the server. The IMAP protocol includes decent server-side search [0], and combined with Gnus' unified search syntax [1], I enjoy a hassle-free search experience.
Gnus had some massive IMAP performance improvements a few years ago (probably close to a decade now). Before that it was quite painful to use on large mailboxes without a local IMAP copy; I used to sync one with offlineimap. When offlineimap had a massive issue moving from Python 2 to Python 3, and keeping it running on a modern distro started getting painful, I tried Gnus without the local IMAP copy and realised those improvements made things fast enough that you can run it on remote mailboxes, and even do so in your main Emacs instance.