> If a package hosted by Fedora or Debian or PyPI or crates.io, etc claims to correspond to an upstream git commit or release, then the hosting system should build the package, from the commit or release in question plus whatever package-specific config and patches are needed, and publish that.
> one of those not-actually-a-judge decisionmakers
With all the hubbub these days of those same decision-makers writing "warrants", I consciously try to reframe them as "memos." (Ex: "I have a memo for your arrest.")
Sure, it may not be a term of art for executive-branch bureaucrats... but it's way less misleading for the public that associates "warrant" with a much weightier process.
It also underscores the absurd recklessness of ICE flunkies ramming cars and pointing guns in people's faces while hunting for what are often civil infractions. Not felonies, not misdemeanors, but the equivalent of parking tickets.
I think what you've written is pretty much what the "almost all programs have paths that crash" was intended to convey.
I think "perhaps the density of crashes will be tolerable" means something like "we can reasonably hope that the crashes from Fil-C's memory checks will only be of the same sort, that aren't reached when the program is used as it should be".
Let's say you have 100 programs in your PATH that start with the letter "g", but only one program in the current folder that starts with "g". You type `./g[TAB]` so it autocompletes automatically to the local program instead of cycling through dozens of results you know you don't want.
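A small bash sketch of the same idea (the file name `gscript` is made up for illustration): plain `g<TAB>` completes against every command on PATH, while `./g<TAB>` only considers entries in the current directory, so one local match completes immediately.

```shell
#!/usr/bin/env bash
# Hypothetical demo directory with a single executable starting with "g".
demo=$(mktemp -d)
cd "$demo"
touch gscript && chmod +x gscript

# Candidates the shell would cycle through for plain "g<TAB>":
path_matches=$(compgen -c g | sort -u | wc -l)

# Candidates for "./g<TAB>": only files in the current directory.
local_matches=$(compgen -f -- ./g | wc -l)

echo "PATH candidates:  $path_matches"
echo "local candidates: $local_matches"
```

With a single local match, readline fills in `./gscript` without prompting, regardless of how crowded PATH is.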
One of the things that this group of "stewards" could do to get their costs down is get together and implement a high quality free software caching proxy that understands all their back-ends.
But that would compete with the commercial offerings of at least one of the organisations sponsoring that message. So I expect they won't do that.
I covered some of this in one of my previous blogs, where I talked about the systemic challenges I've uncovered. Of the heavy users I spoke to, 100% had a repository manager, some Nexus, others Artifactory, and yet the high levels of consumption still persisted. I discussed some of the reasons for this in the blog linked below, but I think this refutes the theory that simply having yet another caching proxy solves the problem. It really doesn't. Additionally, as Mike discussed, bandwidth is only part of the challenge. Without the people behind the repositories doing the malware response, the curation of namespaces, etc., there wouldn't be anything to proxy anyway.
Please see my other reply about network costs. Bandwidth is a real cost that does not currently show up on the balance sheet because of Fastly's generous donations.
That said, I would love to see more organizations implement private staging repositories for their upstream package supply. This is where they can and should apply policies to protect their applications.
Developing a single multi-protocol or even multiple open source caching proxies will cost real time and money. I'd love to see more solutions here but at this stage it will take more than a few volunteers and a "PRs welcome" in the README.
The commercial CDNs sponsor bandwidth for almost every FOSS project there is. For example the canonical Debian package distribution website deb.debian.org is CDN sponsored.
The plan described in "Our Vision for the Rust Specification", and the linked RFC3355, were abandoned early in 2024.
The team that was formed to carry out RFC3355 still exists, though it's had an almost complete turnover of membership.
They're currently engaged in repeating the process of deciding what they think a spec ought to be like, and how it might be written, from scratch, as if the discussion and decisions from 2023 had never happened.
The tracking issue for the RFC was https://github.com/rust-lang/rust/issues/113527 . You can see the last update was in 2023, at the point where it was time to start the actual "write the spec" part.
That's when it all fell apart, because most of the people involved were, and are, much more interested in having their say on the subject of what the spec should and shouldn't be like than they are in doing the work to write it.
Your comment suggests there is no progress being made on the spec. The activity on this repo suggests the opposite: https://github.com/rust-lang/reference
There's still work being done on the Reference, which is being maintained in the same way as it has been since the Rust documentation team was disbanded in 2020. But it's a very long way from being either complete or correct enough to reliably answer questions about Rust-the-language.
After the "write a new spec" plan was abandoned in 2024 the next stated plan was to turn the Reference into the spec, though they never got as far as deciding how that was going to work. That plan lasted six months before they changed direction again. The only practical effect was to add the paragraph identifiers like [destructors.scope.nesting.function-body].
They're currently arguing about whether to write something from scratch or somehow convert the FLS[1] into a spec.
This is the best answer. The area of the plane is 0.0196 m^2, so the heating of 10^5 W/m^2 is actually around 2 x 10^3 W, which seems much more reasonable.
Also, I missed that the plane is made from 4 sheets of A4 paper. Table 1 lists the mass as 4 grams, but 4 grams is typical for a single sheet of A4, so the listed mass is probably per sheet. The actual plane mass is likely 16 grams, which puts the kinetic energy closer to 480 kJ.
Thank you to afeuerstein for pointing out that I was missing the potential energy! However, that is not enough to make a huge difference. A quick estimate because I'm lazy is 9.8 m/s^2 * 480 km * 0.016 kg = 75.2 kJ. Yes, gravity decreases slightly as you get farther out, so this is an over-estimate.
So a total of around 550 kJ, and a power of around 2 x 10^3 W, gives a duration of roughly 275 seconds, or a few minutes. I feel much better about the numbers now.
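For anyone who wants to replay the arithmetic, here's a quick sanity-check script. The 7,750 m/s entry speed is my assumption, back-solved from the ~480 kJ kinetic energy figure; the other inputs are the ones quoted above.

```python
# Back-of-envelope check of the numbers in this thread.
area = 0.0196              # m^2, area of the plane
flux = 1e5                 # W/m^2, peak heating
power = area * flux        # ~2.0e3 W

mass = 0.016               # kg: 4 sheets at ~4 g each
v = 7750                   # m/s, assumed entry speed (roughly orbital)
kinetic = 0.5 * mass * v**2        # ~480 kJ
potential = 9.8 * 480e3 * mass     # ~75 kJ from 480 km (an over-estimate,
                                   # since g falls off with altitude)

total = kinetic + potential        # ~550 kJ
duration = total / power           # a few hundred seconds
print(f"power ~{power:.0f} W, total ~{total/1e3:.0f} kJ, duration ~{duration:.0f} s")
```

The exact duration shifts a bit with the assumed speed, but it stays in the "few minutes" range either way.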
For Debian, that's what tag2upload is doing.