This is the job of your Linux distribution: it's not just a pile of packages. It includes the bridges required to rebuild each package within the distribution's infrastructure.
If you really want to have everything rebuilding from source at your fingertips, you can already do that with Gentoo and similar. Patching upstream sources and rebuilding is pretty trivial there, if that's your thing. There are more distros to choose from.
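For anyone who hasn't tried it, the user-patch mechanism is about this simple; a minimal sketch, with sys-apps/coreutils and the patch name standing in as examples:

```sh
# drop a patch where portage will look for it
mkdir -p /etc/portage/patches/sys-apps/coreutils
cp fix-something.patch /etc/portage/patches/sys-apps/coreutils/
# rebuild; portage applies user patches automatically during src_prepare
emerge --oneshot sys-apps/coreutils
```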
You probably won't have the build infrastructure of all your packages "ready" all the time though, and things are taking a turn for the worse due to dependency bundling, where each package has its own independent copy of its dependencies.
I want Gentoo that has been prepared as if the user just recompiled every goddamned program that makes up the core os, build desiderata and all waiting for me to make another change.
So if I decide I want to recompile coreutils after messing with a single implementation file, `make` or whatever will just incrementally recompile what is needed.
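A sketch of that loop with coreutils itself (the bootstrap steps are from its README, give or take):

```sh
git clone https://git.savannah.gnu.org/git/coreutils.git
cd coreutils
./bootstrap && ./configure && make   # the one-time full build
$EDITOR src/cp.c                     # change a single implementation file
make                                 # recompiles only cp.o and relinks
```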
Another way to think of it: (I hope) all coreutils devs have a dev environment they use where they can make such a change and have very little latency from the moment of a code change to the moment they see results from running/testing the command in question. I want a distro that delivers code and source in such a way that the code of all core programs is just waiting for me to incrementally rebuild any of those programs.
And that wouldn't even really suffice-- there are probably some core programs which still have long incremental recompile times. Digression-- I'd bet all the browsers are such examples-- in fact I'll just claim it here in the hopes that the internet gods strike me down. :)
> I want Gentoo that has been prepared as if the user just recompiled every goddamned program that makes up the core os, build desiderata and all waiting for me to make another change.
You likely don't want that. You can expect compiled objects to be in a 10:1 ratio compared to the final build size. Rust is closer to 30:1 in my experience so far. Then you have projects that vastly exceed that.
As a dev, I'm constantly struggling with disk space just from the projects I'm working on.
This doesn't even begin to handle the issue of contained/reproducible builds.
And with things such as LTO, you might be waiting at the "linking" step way longer than you might expect.
I don't think you realize how GOOD Gentoo already makes it, considering all the variables in question. You can rebuild and patch the entire system, from start to finish, with a single command. Take a moment to appreciate this fact alone.
As it stands, OP's shorthand of "anyone" expands to "anyone with enough free time to slow-smoke an entire pig" for a large class of programs. That blocks a lot more participation and limits the reach of OP's rhetoric.
If someone builds a thing where I can fuck around in the FF codebase and see the results in less than two minutes, "anyone" suddenly becomes a lot more meaningful/actionable.
To add to the Gentoo recommendation: ebuilds (effectively, build recipes) take care of pulling in build- and runtime dependencies. They work with build systems (automake, ninja, meson, etc.) through a reusable framework of eclasses, so it's not that hard to write an ebuild to install something that's not in the existing repos. With reference ebuilds and a bit of patience, one can write ebuilds for software written in languages one has no experience in.
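For a flavor of how little an ebuild can be, here's a minimal sketch for a hypothetical meson-based tool (all names and URLs made up); the meson eclass supplies the configure/compile/install phases:

```sh
# frob-1.0.ebuild (hypothetical package)
EAPI=8

inherit meson

DESCRIPTION="Example tool not in the existing repos"
HOMEPAGE="https://example.org/frob"
SRC_URI="https://example.org/frob/${P}.tar.gz"

LICENSE="MIT"
SLOT="0"
KEYWORDS="~amd64"

DEPEND="dev-libs/glib:2"
RDEPEND="${DEPEND}"
```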
To be fair, there's Linux From Scratch, but I found it to be a bit tedious to build out everything and manage the dependencies by hand. Walking through a big dependency DAG manually is not very practical IMO.
There's also Nix, but I have yet to learn how to work with custom patches there. I believe it's possible, and Nix has a lot of other great features, but it hasn't been exactly Gentoo-like in my experience.
I think the article misses the mark. The #1 problem is spam, but spam is mostly a social issue, not a technical one.
Case in point: 90%+ of my spam is currently delivered via Google, either from fake accounts or hacked ones.
That spam passes _all_ the technical solutions we have put in place today: SPF/DKIM/DMARC. Sure, the SMTP protocol isn't that great in today's light, but do you think changing the protocol would solve it?
Say hello to WhatsApp spam, Facebook Messenger spam, and so on...
My spam filter works very reliably for all such cases; the first and only manual rule I've had to add recently was one to stop "bitcoin" spam delivered via Jira accounts; and I also check the filtered messages from time to time. I think there will be spam with any protocol we might get in the future.
Yes, but it's kind of anticlimactic, isn't it? If the issue were SMTP/the protocol, then spam within the gmail network wouldn't exist. But that's not the case.
The main issue is that email is designed to accept messages from strangers you don't know. That's because you want that; it's a core feature. You will have spam on any network where this is possible.
If you remove the ability to be contacted by strangers, then email stops being useful.
You don't even need to change the protocol if you want zero spam.
I have one email address where I accept only PGP-signed messages I can verify against my known keychain, and discard everything else. I can publish this address anywhere. No spam gets through. There are a couple of other ways you could do this with fancy header tagging if you really wanted to, but that's beside the point. The issue goes away as soon as you ignore the first-contact problem.
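A minimal sketch of the idea, assuming clearsigned mail and a keyring that contains only trusted keys; hook it in from the MTA/MDA for that one address:

```sh
#!/bin/sh
# gpg --verify exits nonzero for unsigned mail and for unknown/bad signatures
msg=$(cat)
if printf '%s\n' "$msg" | gpg --verify >/dev/null 2>&1; then
    # deliver: drop the message into the Maildir (file naming simplified)
    printf '%s\n' "$msg" > "$HOME/Maildir/new/$(date +%s).$$"
else
    exit 0  # discard silently; strangers never reach the inbox
fi
```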
Fully agree. There's a lot of value in providing a change summary which is _not_ tied to development history.
Even as a dev, I usually don't care about your PRs. I'm pretty sure there will never be a 1:1 PR-to-feature mapping. Keeping the commit history in the repository clean should be useful for the developer (for example, to be able to revert a change atomically), not for the end user.
Which do you find more informative: Linus's changelog between kernel releases, listing a stack of PRs, or the nicer summary provided by kernelnewbies (and others), showing the prominent new features so you can drill down later?
Git has very nice release notes. The language used in the release notes is completely different from that of the commit history.
It's a very nice gesture to write good release notes.
It doesn't take long to do, and I find it beneficial when PRs that include notable changes also add a line to the release notes. It doesn't need to be perfect (surely it will change before being finalized), but it serves as a landmark for the final edit.
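In practice that can be as light as an "unreleased" section each notable PR appends to (entries here are made up):

```
## Unreleased
- Add a --frobnicate flag to the CLI (#123)
- Fix crash on empty config files (#119)
```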
I love/hate info. On one side, info can be more than a manual page and contain _usable_ cross-references. I suppose you know about the "all in one" manpage bundles such as ffmpeg-all and zshall, which exist just so you can search?
Yeah..
On the other side, "info cp" didn't bring up the _reference_ of cp for a frickin' loooong time. It was a usability nightmare for that reason, and that reason alone. Had it worked the way it works now: bringing you to the command reference first, while still allowing you to search the entire manual scope at once (and with WORKING references!), I would have been SOLD from the first moment.
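The old workaround, for reference, was to name the node yourself:

```sh
# jump straight to cp's reference node instead of the manual's top
info coreutils 'cp invocation'
```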
The fixed-width format is the one remaining thing I wish would be removed. I disabled catpath and have full-width manpages, but I cannot do the same with info, as the text is pre-formatted.
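For man-db, the full-width part boils down to this (assuming cat pages are off; MANWIDTH overrides whatever width man would otherwise pick):

```sh
export MANWIDTH=$(tput cols)   # format pages to the real terminal width
man cp
```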
I had a lot of other big Technic sets at the time. I remember this one, and I also remember my parents telling me it was too expensive.
They were not joking.
Not too long ago I wanted to gift a Technic set and I was shocked at the prices. The fact that most modern sets also use a lot of custom parts also goes against the LEGO mindset, IMHO.
It would still be something that I would totally buy for my children, but not as a gift for somebody else.
The counterpoint is that it's now actually useless to craft a query that tries to match exact terms, because there's this extra layer on top of it.
So you might be lucky if the engine inferred your intent correctly, but good luck steering it away otherwise. It doesn't help that pretty much all query operators are merely hints nowadays, more often ignored than not.
I very much preferred a dumber engine for this reason: it was way easier to search precisely and avoid SEO, even as the SEO game changed.
I'm also using a local searx instance now. I'm not terribly happy with it, as bing/ddg have very similar issues, so searching for exact terms still doesn't work the way it should. But it makes it much easier to blackhole SEO silos, pre-filter results NOT matching my exact terms, and surface more obscure content.
I want a purely keyword-based search engine back [1]. If the keywords are not there verbatim, give me zero results back. It would be less time-consuming to refine my query, or to conclude that the stuff I'm looking for just isn't there. That's much better than sifting through low-precision results.
Google has a 'verbatim' mode for this, which used to work well. I can't figure out what they are doing to it, and why.
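For reference, verbatim mode can also be toggled straight in the URL; as far as I can tell it's the tbs parameter (observed behavior, not a documented API):

```sh
xdg-open 'https://www.google.com/search?q=%22exact+phrase%22&tbs=li:1'
```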
As other posters mentioned, the recent "mechanical keyboard" subculture is all about show and boasting. It's no surprise it seems to have boomed along with gaming rigs and streaming channels.
The keyboard in these circles is all about status, not function.
If anything, I would compare it to "dubious" car mods like "cambered wheels".
Due to RSI I tried dozens of keyboards over the years. When I see these new keyboards being sold at these prices, especially the mechanical ones marketed with "improved layouts" and ergonomics, I have to laugh.
There's a genuine difference between the various switch types, and I totally believe that for some people acoustic feedback can be a valid alternative to haptic feedback. Physical feedback while typing can really help. However, it's clear to me most of these people are not really trying to solve an issue when you see reviews of switch performance after lubing...
>Due to RSI I tried dozens of keyboards over the years. When I see these new keyboards being sold at these prices, especially the mechanical ones marketed with "improved layouts" and ergonomics, I have to laugh.
If you haven't gotten to the open-source crowd of the mechanical keyboard world yet, you should. Ergonomics is one of the problems people are trying to solve, and their solutions are open source.
Oh, I'm fully aware. That scene has been going on far longer than the current mechanical keyboard craze (and it often doesn't emphasize switches as much, either). I consider the two groups completely distinct.
I'm currently using tridactyl, but due to the webext limitations it's far from what even vimfx could do in terms of consistency.
On top of all you said, when you realize that not only does the browser have inconsistent shortcuts, but a webpage has the right to steal your keyboard shortcuts and break the extension itself...
Every time I use '/' and it either doesn't work because I'm in a field the extension doesn't have access to, or on a website where it's redirected to the USELESS site search, I truly get mad.
I'm not even sure why I keep trying, since there's no clear intention to ever allow such customization to work consistently.
Hear hear, the shortcut stealing is so annoying. GitHub, for example, does this; Ctrl-k is some search within GitHub instead of focusing the search bar in Firefox.
Then we have this option to disable shortcuts on sites, which fixes this, but it also disables all shortcuts of the Vimium extension... I want to use an extension like Vimium so badly, and it's almost there, but these little things make the whole extension unusable in the end.
Yes, there is: to block this by default, set 'permissions.default.shortcuts' to 2.
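If you'd rather not click through about:config, the equivalent user.js line (the profile path below is an example; yours will differ):

```sh
echo 'user_pref("permissions.default.shortcuts", 2);' \
  >> ~/.mozilla/firefox/xxxxxxxx.default/user.js
```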
I just found out that doing that does not prevent the extension's shortcuts from working, contrary to what I just posted, although I'm sure that when I tested this yesterday it did... :) So this is a step in the right direction.
Then I'm still left with sites stealing focus, preventing shortcuts from working. Even though Vimium has an option to prevent sites from doing this, it is not foolproof. YouTrack/Upsource, for example, insist on stealing focus, so when I'm happily switching tabs using shortcuts, as soon as I stumble upon YouTrack/Upsource I have to grab the mouse again :'(.
Ideas as in "search some term, get a not-even-vaguely-related picture collection you cannot see unless you log in"? Or the "ideas" that also happen to hide where that very idea came from?
Pinterest is a scourge for a text search engine. It's also a scourge for an IMAGE search engine, because it just hides the real source.
Pinterest is in the top 10 websites I would blacklist from search results.
I do the same for projects I'm looking at, and I'm on Linux. It gives you a good understanding of the thing you're looking at. There are many things you can tell by looking at the build process.
There are some things I do not agree with the author on, though.
Pure make: I think it's refreshing to see nowadays. Many autotools projects don't do ANY configuration and could actually use just make in most cases. But the authors just jumped on the autoconf/cmake bandwagon, because they probably were never familiar with make, or just want to follow conventions (which is OK, I guess).
Fixing a make build is leaps and bounds easier than fixing a cmake or autotools one. If you follow the "standard" C environment variable conventions, a makefile can be extremely portable (I've written stuff myself supporting osx/linux/bsd/irix/aix for several years). Vanilla make doesn't support variant builds, but if you ever need that (and are willing to believe that make is not as rotten as many people seem to suggest), I suggest you look at makepp.
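To illustrate the conventions point, a plain Makefile only needs to honor the standard variables to behave well downstream (program name made up; ?= is GNU make, use plain = for strict POSIX):

```make
CC     ?= cc
CFLAGS ?= -O2
PREFIX ?= /usr/local
BINDIR ?= $(PREFIX)/bin

OBJS = frob.o util.o

# the built-in .c -> .o rule already uses $(CC) $(CPPFLAGS) $(CFLAGS)
frob: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS)

install: frob
	mkdir -p $(DESTDIR)$(BINDIR)
	cp frob $(DESTDIR)$(BINDIR)/frob

clean:
	rm -f frob $(OBJS)

.PHONY: install clean
```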
There's no date on the article, but a common challenger to cmake these days is meson. As a dev, I'm still conflicted between the two: meson has superior syntax and less baggage, but everything else seems inferior to me at the moment, especially the documentation.
Building from source also gives you an idea of how the author is managing dependencies, which ones were chosen, and so on.
A big one for me: I still consider a package with two robust external dependencies (that you have to build yourself) much better than 20 smaller cargo-managed crates that require 300MB to build, or ~100 packages pulled in by tox when you actually _need_ to contribute to the Python package...
This is a great way to weed out candidates when you're choosing between multiple projects and have the luxury of doing so.
> But the authors just jumped on the autoconf/cmake bandwagon, because they probably were never familiar with make, or just want to follow conventions (which is OK, I guess).
I'd personally recommend everyone use autoconf or cmake (or even better these days, meson) over straight make, for the "conventions" purpose if nothing else. It's very easy to forget build system features that downstream Linux distributions and package managers rely on. This includes, but isn't limited to: configurable installation paths (bindir, sbindir, libdir (especially important for multilib), and even mandir, docdir and infodir; don't just assume they're under share/, or that share/ even exists!), proper cross-compilation support/detection, inheriting CFLAGS and other variables that set the paths of certain programs, and a lot of more minute details.
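The payoff downstream is that packaging can drive every project the same way; a sketch of a typical distro-side invocation (paths and target triplet are examples):

```sh
./configure \
    --prefix=/usr \
    --bindir=/usr/bin \
    --libdir=/usr/lib64 \
    --mandir=/usr/share/man \
    --host=aarch64-linux-gnu   # cross-compiling
make
make DESTDIR="$PKGDIR" install   # staged install into the package root
```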
It saves a lot of time downstream when all the packages can be built using the same script, as a lot of time is otherwise wasted on patching custom build systems. Even autotools and cmake (and meson) aren't always perfect at preventing developers from breaking one of their features, e.g. through hardcoding or misuse, but it's at least less common, as they provide the mechanisms to conform easily.
Hell, even if you use a regular makefile as your "main" build system for personal convenience, it's very much appreciated if you can provide at least one of autotools/cmake/meson as an alternative for your friendly neighborhood distribution maintainer, lest you end up with a perl-cross-make situation[1].