How confident are you in this statement? I have no particular knowledge of Asahi. But I do know this narrative emerged about Rust-for-Linux after a couple of high-profile individuals quit.
In that case it was plainly bogus but this was only obvious if you were somewhat adjacent to the relevant community. So now I'm curious if it could be the same thing.
(Hopefully by now it's clear to everyone that R4L is a healthy project, since the official announcement that Rust is no longer "experimental" in the kernel tree).
I know Asahi is a much smaller project than R4L so it's naturally at higher risk of losing momentum.
I would really love Asahi to succeed. I recently bought a Framework and, while I am pretty happy with it in isolation... when I use my partner's M4 Macbook Air I just think... damn. The quality of this thing is head and shoulders above the rest of the field. And it doesn't even cost more than the competition. If you could run Linux on it, it would be completely insane to use anything else.
It's similarly bogus here. Early Asahi development tried to upstream as much as possible but ultimately still maintained a gigantic pile of downstream patches, which wasn't a sustainable model.
Most current development is focused on reducing that pile to zero to get things into a tractable state again. So things continue to be active, but the progress has become much less visible.
M2 to M3 was a complete architectural change that will require a lot of reverse engineering. As far as I know no one is working on this. The M1/M2 work was a labor of love of largely one dev that has since moved on.
The project is still active and working to upstream the work of these devs. But as far as I know, no NEW reverse engineering is being done. Ergo, it’s a dead end.
Someone should create a minimal, nearly-headless macOS distribution (similar to the old hackintosh distros) that bootstraps just enough to manage the machine's hardware, with no UI, and fires up the Apple virtualization framework and a Linux VM, which would own the whole display.
For optimal battery life you need to tweak the whole OS stack for the hardware. You need to make sure all the peripherals are set up right to go into the right idle states without causing user-visible latency on wake-up. (Note that often just one peripheral being out of tune here can mess up the whole system's power performance. Also the correct settings here depend on your software stack). You need to make sure that cpufreq and cpuidle governors work nicely with the particular foibles of your platform's CPUs. Ditto for the task scheduler. Then, ditto for a bunch of random userspace code (audio + rendering pipeline for example). The list goes on and on. This work gets done in Android and ChromeOS.
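To make that concrete, here's a rough sketch in C of peeking at the cpufreq/cpuidle knobs involved. The sysfs paths are the standard Linux ones, but exactly which files exist varies by platform, driver and kernel version, so treat this as illustrative only:

    /* Rough sketch: dump a few of the cpufreq/cpuidle knobs mentioned above.   */
    /* Paths are standard Linux sysfs; exact files vary by platform and driver. */
    #include <stdio.h>

    static void dump(const char *path) {
        char buf[256];
        FILE *f = fopen(path, "r");
        if (!f) { printf("%s: (not exposed on this platform)\n", path); return; }
        if (fgets(buf, sizeof buf, f))
            printf("%s: %s", path, buf);
        fclose(f);
    }

    int main(void) {
        /* Which frequency governor is active, and which are available. */
        dump("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
        dump("/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors");
        /* Whether the deeper idle states are actually being reached. */
        dump("/sys/devices/system/cpu/cpu0/cpuidle/state2/name");
        dump("/sys/devices/system/cpu/cpu0/cpuidle/state2/usage");
        return 0;
    }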
It's a crazy drug because there's a lot of significant downsides to Nix[OS]. E.g. it took me a solid half an hour of focus to upgrade my config to 25.11[1]. Also, like, no Secure Boot. And I've had to reverse engineer a lot of stuff.
But like you said, I can't ever imagine going back. Once you're over the learning curve (and... yeah, that learning curve) the upsides are just so huge. Nothing compares, at all.
Part of me wonders if maybe one day a bootc-based framework will offer something like 20% of the benefits of NixOS with only 10% of the downsides. But other than that, we're totally stuck with Nix forever. (And once I had switched to bootc, I bet my next thought would be "I should find a way to generate this config from Nix"...).
[1] I have a very complex config so this may be an extreme case. On the other hand, everything about Nix is basically designed as an invitation to create an extremely complex config.
Gemini CLI has a specific "model is overloaded" error message which is distinct from "you're out of quota", so I suspect whatever tools they're using for this probably have something similar, and they're referring to that.
Why not? SMB is no slouch. Microsoft has taken network storage performance very seriously for a long time now. Back in the day, Microsoft and others (NetApp, for instance) worked hard to extend and optimize SMB and deliver efficient, high-throughput file servers. I haven't kept up with the state of the art recently, but I know there have been long stretches where SMB consistently led the field in benchmark testing. It also doesn't hurt that Microsoft has a lot of pull with hardware manufacturers to see their native protocols remain tier 1 concerns at all times.
I think a lot of people have a hard time differentiating the underlying systems from what they _see_, and use that to bash MS products.
I heard that it was perhaps recently fixed, but copying many small files used to be multiple times faster via something like Total Commander than via the built-in File Explorer (large files go equally fast).
People seeing how slow Explorer was to copy would probably presume that it was a lower level Windows issue if they had a predisposed bias against Microsoft/Windows.
My theory about Explorer's sluggishness is that they added visual feedback to the copying process at some point, and for whatever reason that visual feedback is synchronous/slow (perhaps capped at the framerate, thus 60 files a second), whilst TC does updating in the background and just renders status periodically, whilst the copying thread(s) can run at the full speed of what the OS is capable of under the hood.
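To illustrate that theory with a toy sketch (nothing to do with how Explorer or TC are actually implemented; the names and numbers here are invented): throttle the progress display so the copy loop is never blocked on repainting.

    /* Toy illustration: repaint progress at most ~10x/second instead of once per file. */
    #include <stdio.h>
    #include <time.h>

    static double now_ms(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    int main(void) {
        const int total_files = 100000;
        double last_paint = 0.0;
        for (int i = 0; i < total_files; i++) {
            /* copy_one_file(i);  <- the actual I/O runs at full speed, never waiting on the UI */
            double t = now_ms();
            if (t - last_paint > 100.0) {   /* throttle the "UI" update */
                printf("\rcopied %d/%d", i + 1, total_files);
                fflush(stdout);
                last_paint = t;
            }
        }
        printf("\rcopied %d/%d\n", total_files, total_files);
        return 0;
    }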
I dunno about Windows Explorer, but macOS's Finder seems to hash completed transfers over SMB (this must be something it can trigger the receiver to do in SMB itself; it doesn't seem slow enough for the sender to be doing it on a remote file) and remove transferred files that don't pass the check.
I could see that or other safety checks making one program slower than another that doesn’t bother. Or that sort of thing being an opportunity for a poor implementation that slows everything down a bunch.
A problem with Explorer, which it also shares with macOS Finder[1], is that they are very much legacy applications with features piled on top. Explorer was never expected to be used for heavy I/O work and tends to do things the slowest way possible, including doing things in ways that are optimized for "random first-time user of Windows 95 who will have maybe 50 files in a folder".
[1] Finder has parts that show continued use of code written for MacOS 9 :V
This blows my mind. $400B in annual revenue and they can't spare the few parts per million it would take to spruce up the foundation of their user experience.
This is speculation based on external observation, nothing internal other than rumours:
A big chunk of that, one that has grown over the last decade, is fear that they will break compatibility - or otherwise a drop in shared knowledge. To the point that the more critical the part, the less anyone wants to touch it (I've heard that ntfs.sys is essentially untouchable these days, for example).
And various rules that used to be sacrosanct are no longer followed, like the "main" branch of the Windows source repository having to always build cleanly every night (fun thing - Microsoft is one of the origins of nightly builds as a practice).
Fewer people are trusted to touch ntfs.sys due to lack of experience, so they never gain it; that in turn means less work gets done on it, which in turn means even fewer people can prove themselves trustworthy enough to work on it.
Until nobody remains in the company that is trusted enough.
Microsoft gives them a lot of ammo. While, as I said, Microsoft et al. have seen that SMB is indeed efficient, at the same time security has been neglected to the point of being farcical. You can see this in headlines as recent as last week: Microsoft is only now, in 2025, deprecating RC4 authentication, and this includes SMB.
So while one might leverage SMB for high throughput file service, it has always been the case that you can't take any exposure for granted: if it's not locked down by network policies and you don't regularly ensure all the knobs and switches are tweaked just so, it's an open wound, vulnerable to anything that can touch an endpoint or sniff a packet.
Agreed, but that used to be the difference between MS and Google.
MS would bend over backwards to make sure those enterprise Windows 0.24 boxes would still be able to connect to networks, because those run some 16-bit drivers for CNC machines.
Meanwhile Google decided to kill a product the second whoever introduced it on stage walked off it.
Azure is a money-maker for MS, and wouldn't be so without those weird legacy enterprise deployments. The big question is whether continuing to tighten their security posture together with a "cloud" focus is actually in their best interest, or if retaining those legacy enterprises would have been smarter.
There are plenty of other workloads that benefit from high-performance file access, and with network speeds and disk speeds getting higher whilst single-core perf has more or less plateaued in comparison, it's more and more important to support data paths where kernel switching won't become a bottleneck.
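One classic example of such a data path (just a sketch, and a long way from what SMB Direct or io_uring do): sendfile(2) streams a file to a destination fd inside the kernel, skipping the read()/write() round trips through a userspace buffer.

    /* Sketch: copy a file to an already-open destination fd without bouncing   */
    /* the data through userspace. On modern Linux the destination can be any   */
    /* fd, so stdout works for a demo.                                          */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int stream_file(int out_fd, const char *path) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return -1;

        struct stat st;
        if (fstat(fd, &st) < 0) { close(fd); return -1; }

        off_t offset = 0;
        while (offset < st.st_size) {
            ssize_t sent = sendfile(out_fd, fd, &offset, (size_t)(st.st_size - offset));
            if (sent <= 0) { close(fd); return -1; }
        }
        close(fd);
        return 0;
    }

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
        return stream_file(STDOUT_FILENO, argv[1]) ? 1 : 0;
    }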
I work in CPU security and it's the same with microarchitecture. You wanna know if a machine is vulnerable to a certain issue?
- The technical experts (including Intel engineers) will say something like "it affects Blizzard Creek and Windy Bluff models"
- Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63 then the CPU is affected". (There is no database for this you can only find it out by actually booting one up).
- The spec sheet for the hardware calls it a "Xeon Osmiridium X36667-IA"
Absolutely none of these forms of naming have any way to correlate between them. They also have different names for the same shit depending on whether it's a consumer or server chip.
Meanwhile, AMD's part numbers contain a digit that increments with each year but is off-by-one with regard to the "Zen" brand version.
Usually I just ask the LLM and accept that it's wrong 20% of the time.
> - Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63 then the CPU is affected". (There is no database for this you can only find it out by actually booting one up).
I’m doing some OS work at the moment and running into this. I’m really surprised there’s no caniuse.com for CPU features. I’m planning on requiring support for all the features that have been in every cpu that shipped in the last 10+ years. But it’s basically impossible to figure that out. Especially across Intel and AMD. Can I assume APIC? IOMMU stuff? Is ACPI 2 actually available on all CPUs or do I need to have support for the old version as well? It’s very annoying.
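For the CPUID-visible bits at least, you can probe at runtime. A hedged sketch using GCC/Clang's <cpuid.h> (the leaf-1 bit positions are the documented ones; note that IOMMU presence and the ACPI revision live in firmware tables such as ACPI's DMAR/IVRS and the RSDP, not in CPUID, so this only answers part of the question):

    /* Sketch: check a couple of CPUID leaf-1 feature bits from userspace. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 1 not supported");
            return 1;
        }
        printf("local APIC: %s\n", (edx & (1u << 9))  ? "yes" : "no");
        printf("x2APIC:     %s\n", (ecx & (1u << 21)) ? "yes" : "no");
        return 0;
    }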
Even more fun is that some of those (IOMMU and ACPI version) depend on motherboard/firmware support. Inevitably there is some bargain-bin board for each processor generation that doesn’t support anything that isn’t literally required for the CPU/chipset to POST. For userspace CPU features the new x86_64-v3/v4 profiles that Clang/LLVM support are good Schelling points, but they don’t cover e.g. page table features.
Windows has specific platform requirements they spell out for each version - those are generally your best bet on x86. ARM devs have it way worse so I guess we shouldn’t complain.
At least on ARM you can get TRMs (technical reference manuals) or data sheets that cover all of the features of a specific processor, and also the markings on the chip that differentiate it from other models within the same family.
I’m pretty sure the number of people at Intel who can tell you offhandedly the answer to your questions about only Intel processors is approximately zero, give or take a couple. Digging would be required.
If you were willing to accept only the relatively high power variants it’d be easier.
I'd be happy to support the low power variants as well, but without spending a bunch of money, I have no idea what features they have and what they're missing. It's very annoying.
For anyone not familiar with caniuse, it's indispensable for modern web development. Say you want to put images on a web page. You've heard of webp. Can you use it?
At a glance you see the answer: 95% of global web users use a web browser with webp support. It's available in all the major browsers, and has been for several years. You can query basically any browser feature like this to see its support status.
That initial percentage is a little misleading. It includes everything that caniuse isn't sure about. Really it should be something like 97.5±2.5 but the issue's been stalled for years.
Even the absolute most basic features that have been well supported for 30 years, like the HTML "div" element, cap out at 96%. Change the drop-down from "all users" to "all tracked" and you'll get a more representative answer.
> I’m planning on requiring support for all the features that have been in every cpu that shipped in the last 10+ years. But it’s basically impossible to figure that out.
The easiest thing would probably be to specify the need for "x86-64-v3":
RHEL9 mandated "x86-64-v2", and v3 is being considered for RHEL10:
> The x86-64-v3 level has been implemented first in Intel’s Haswell CPU generation (2013). AMD implemented x86-64-v3 support with the Excavator microarchitecture (2015). Intel’s Atom product line added x86-64-v3 support with the Gracemont microarchitecture (2021), but Intel has continued to release Atom CPUs without AVX support after that (Parker Ridge in 2022, and an Elkhart Lake variant in 2023).
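As a sketch of what gating on that level looks like from userspace, with the GCC/Clang CPU-model builtins (this only checks a few of the v3 features rather than the full list; newer GCC also accepts the level name "x86-64-v3" directly):

    /* Sketch: refuse to run unless a few of the x86-64-v3 ISA features are present. */
    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();
        int ok = __builtin_cpu_supports("avx2")
              && __builtin_cpu_supports("bmi2")
              && __builtin_cpu_supports("fma");
        printf("x86-64-v3 userspace baseline: %s\n", ok ? "looks ok" : "missing features");
        return ok ? 0 : 1;
    }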
> The easiest thing would probably to specify the need for "x86-64-v3"
AFAIK, that only specifies the user-space-visible instruction set extensions, not the presence and version of operating-system-level features like APIC or IOMMU.
This is unfortunately the same for GPUs. The graphics APIs expose capability bits or extensions indicating what features the hardware and driver supports, but the graphics vendors don't always publish documentation on what generations of their hardware support various features, so your program is expected to dynamically adapt to arbitrary combinations of features. This is no longer as bad as it used to be due to consolidation in the graphics market, but people still have to build ad-hoc crowd sourced databases of GPU caps bits.
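For example, with Vulkan the only reliable answer is to ask the driver at runtime; a minimal sketch (error handling trimmed, and the features queried here are picked arbitrarily):

    /* Sketch: enumerate GPUs and print a few capability bits via Vulkan. */
    #include <stdio.h>
    #include <vulkan/vulkan.h>

    int main(void) {
        VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
        VkInstance instance;
        if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
            fprintf(stderr, "no Vulkan instance available\n");
            return 1;
        }

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, NULL);
        VkPhysicalDevice devs[16];
        if (count > 16) count = 16;
        vkEnumeratePhysicalDevices(instance, &count, devs);

        for (uint32_t i = 0; i < count; i++) {
            VkPhysicalDeviceProperties props;
            VkPhysicalDeviceFeatures feats;
            vkGetPhysicalDeviceProperties(devs[i], &props);
            vkGetPhysicalDeviceFeatures(devs[i], &feats);
            /* No spec-sheet shortcut: you check each bit you actually depend on. */
            printf("%s: geometryShader=%d tessellationShader=%d samplerAnisotropy=%d\n",
                   props.deviceName, (int)feats.geometryShader,
                   (int)feats.tessellationShader, (int)feats.samplerAnisotropy);
        }

        vkDestroyInstance(instance, NULL);
        return 0;
    }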
It's also not monotonic: on both the CPU and GPU sides, features can go away later, either due to a hardware bug or because the vendor lost interest in supporting them.
CPU Monkey had some neat info like whether a CPU had AV1 hwdec/hwenc, then they redesigned their site and that info is gone for some reason. I think it was a year or less between finding their site and them ruining it.
I feel like it's a cultural thing with the designers. Ceragon were the exact same when I used to do microwave links. Happy to provide demo kit, happy to provide sales support, happy to actually come up and go through their product range.
But if you want any deep and complex technical info out of them, like oh maybe how to configure it to fit UK/EU regulatory domain RF rules? Haha no chance.
We ended up hiring a guy fluent in Hebrew just to talk to their support guys.
Super nice kit, but I guess no-one was prepared to pay for an interface layer between the developers and the outside world.
I have three Ubuntu servers and the naming pisses me off so much. Why can't they just stick with their YY.MM naming scheme everywhere? Instead, they mostly use code names, and I never know what codename I am currently using or what the latest codename is. When I have to upgrade or find a specific Python PPA for whatever OS I am running, I need to spend 30 minutes researching to correlate all these dumb codenames to the actual version numbers.
As an Apple user, the macOS code names stopped being cute once they ran out of felines, and now I can't remember which of Sonoma or Sequoia was first.
Android have done this right: when they used codenames they did them in alphabetical order, and at version 10 they just stopped being clever and went to numbers.
Ubuntu has alphabetical order too, but that's only useful if you want to know if "noble" is newer than "jammy", and useless if you know you have 24.04 but have no idea what its codename is.
Android also sucks for developers because there are the public-facing version numbers and then the API levels, which are different and don't always scale linearly (sometimes there is something like "Android 8.1" or "Android 12L" with a newer API level). As a developer you always deal with the API levels (you specify a minimum API level in your code, not a minimum "OS version"), and you have to map that back to the version numbers users and managers know, to present it to them when you're upping the minimum requirements...
> Ubuntu has alphabetical order too, but that's only useful if you want to know if "noble" is newer than "jammy"
Well, it was until they looped.
Xenial Xerus is older than Questing Quokka. As someone out of the Ubuntu loop for a very long time, I wouldn't know what either of those mean anyway and would have guessed the age wrong.
Yes, I agree, codenames are stupid, they are not funny or clever.
I want a version number that I can compare to other versions, to be able to easily see which one is newer or older, to know what I can or should install.
I don't want to figure out and remember your product's clever nicknames.
Protip, if you have access to the computer: `lsb_release -a` should list both release and codename. This command is not specific to Ubuntu.
Finding the latest release and codename is indeed a research task. I use Wikipedia[1] for that, but I feel like this should be more readily available from the system itself. Perhaps it is, and I just don't know how?
That's only if the distro is recent enough; sooner or later, you'll encounter a box running a distro version from before /etc/os-release became the standard, and you'll have to look for the older distro-specific files like /etc/debian_version.
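For what it's worth, a tiny sketch of what such tooling ends up doing under the hood: string-match /etc/os-release, and fall back to the older distro-specific file (Debian's, in this example) when the box predates os-release.

    /* Sketch: print version/codename from /etc/os-release, falling back to /etc/debian_version. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char line[256];
        FILE *f = fopen("/etc/os-release", "r");
        if (f) {
            while (fgets(line, sizeof line, f)) {
                if (!strncmp(line, "PRETTY_NAME=", 12) ||
                    !strncmp(line, "VERSION_ID=", 11) ||
                    !strncmp(line, "VERSION_CODENAME=", 17))
                    fputs(line, stdout);
            }
            fclose(f);
            return 0;
        }
        /* Pre-os-release fallback; other distros have their own files. */
        f = fopen("/etc/debian_version", "r");
        if (f && fgets(line, sizeof line, f))
            printf("debian_version: %s", line);
        if (f) fclose(f);
        return 0;
    }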
> you'll encounter a box running a distro version from before /etc/os-release became the standard
Do those boxes really still exist? Debian, which isn't really known to be the pinnacle of bleeding edge, has had /etc/os-release since Debian 7, released in May 2013. RHEL 7, the oldest Red Hat still in extended support, also has it.
> the oldest Red Hat still in extended support, also has it.
You would be alarmed to know how long the long tail is. Are you going to run into many pre-RHEL 7 boxes? No. Depending on where you are in the industry, are you likely to run into some ancient RHEL boxes, perhaps even actual Red Hat (not Enterprise) Linux? Yeah, it happens.
Yes, they do. You'll be surprised by how many places use out-of-support operating systems and software (which were well within their support windows when installed, they have just never been upgraded). After all, if it's working, why change it? (We have a saying here in Brazil "em time que está ganhando não se mexe", which can be loosely translated as "don't change a (soccer) team which is winning".)
Thank you! I was just about to kvetch about how difficult it was to map (eg) "Trixie" == "13" because /etc/debian_version didn't have it... I always ended up having to search the internet for it which seemed especially dumb for Debian!
I work with Debian daily and I still couldn't tell you what order those go in. But Debian 12, Debian 13, etc. is perfectly easy to remember and search for.
> AMD's part numbers contain a digit that increments with each year
Aha, but which digit? Sure, that's easy for server, HEDT and desktop (it's the first one) but if you look at their line of laptop chips then it all breaks down.
Oh, the Xeons have the vX vs vY nonsense, where the same number but a different version is an entirely different CPU (the 2620 v1 and v2 are different microarchitecture generations and core counts, for instance). But, not to leave AMD out, they do things like the Ryzen 7000 series, which are Zen 4 except for the models that are Zen 2 (!). (Yes, if you read the middle digits there's some indication, but that's not that helpful for normal customers.)
That's been the case with hardware at several companies I was at.
I was convinced that the process was encouraged as a sort of weird gatekeeping by folks who only used the magic code names.
Even better, I worked at a place where they swapped code names between two products at one point... it wasn't without reason, but it meant that a lot of product documentation suddenly conflicted.
I eventually only referred to exact part numbers and model numbers and refused to play the code name game. This turned into an amusing situation where some managers who only used code names were suddenly silent, as they clearly didn't know the part-to-code-name mapping.
You can correlate microarchitecture to product SKUs using the Intel site that the article links. AMD has a similar site with similar functionality (except that AFAIK it won't let you easily get a list of products with a given uarch). These both have their faults, but I'd certainly pick them over an LLM.
But you're correct that for anything buried in the guts of CPUID, your life is pain. And Intel's product branding has been a disaster for years.
> You can correlate microarchitecture to product SKUs using the Intel site that the article links.
Intel removed most things older than Sandy Bridge in late 2024 (a few Xeons remain, but AFAIK anything consumer was wiped with no warning). It’s virtually guaranteed that Intel will remove more stuff in the future.
Also, technically the code names are only for unreleased products, so on Ark it’ll say “products formerly Ice Lake”, but Intel will continue to call them Ice Lake.
> Absolutely none of these forms of naming have any way to correlate between them.
I've found that -- as of a decade ago, at least -- ark.intel.com had a really good way to cross-reference among codenames / SKUs / part numbers / feature sets / specs. I've never seen errata there, but they might be listed. Also, I haven't used it in a long time so it could've gotten worse.
Intel do have a website where you can look up SKUs. If you wait long enough and exploit certain bugs in the JS you can get it to give you a bunch of CSV files.
Now the only issue you have is that there is no consistent schema between those files so it's not really any use.
I've also found the same thing a decade ago: apparently lots of features (e.g. specific instructions, the iGPU) are broadly advertised as belonging to a specific arch, but Pentium/Celeron (or, for the premium stuff, non-Xeon) models often lack them entirely, and the only way to detect that is lscpu / feature bits / digging in UEFI settings.
Intel doesn't like to officially use codenames for products once they have shipped, but those codenames are used widely to delineate different families (even by them!), so they compromise with the awkward "products formerly x" wording. Have done for a long time.
I wouldn't mind them coming up with better codenames anyway. "Some lower-end SKUs branded as Raptor Lake are based on Alder Lake, with Golden Cove P-cores and Alder Lake-equivalent cache and memory configurations." How can anyone memorize this endless churn of lakes, coves and monts? They could've at least named them in alphabetical order.
AMD does this subterfuge as well. Put Zen 2 cores from 2019 (!) in some new chip packaging and sell it as Ryzen 10 / 100. Suddenly these chips seem as fresh as Zen 5.
The entire point of code names is that you can delay coming up with a marketing name. If the end user sees the code name then what is even the point? Using the code name in external communication is really really dumb. They need to decide if it should be printed on the box or if it's only for internal use, and don't do anything in between.
The problem, especially at Intel, but also at AMD, is that they sell very different CPUs under approximately identical names.
In the very distant past, AMD published what the CPUID instruction would return for each CPU model that they were selling. Now this is no longer true, so you have to either buy a CPU to discover what it really is, or hope that a charitable soul who has bought such a CPU has published the result on the Internet.
Without access to the CPUID information, the next best thing is to check on the Intel Ark site whether the CPU model you see listed by some shop is described, for instance, as belonging to "Products formerly Arrow Lake S", as that will at least identify the product microarchitecture.
This is still not foolproof, because the products listed as "formerly ..." may still be packaged in several variants and they may have various features disabled during production, so you can still have surprises when you test them for the first time.
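This is what "buy one and test it" boils down to in practice: pull family/model/stepping out of CPUID leaf 1 and match it against whatever errata or microarchitecture tables you can find. A sketch using GCC/Clang's <cpuid.h> (the field layout is the standard leaf-1 EAX encoding):

    /* Sketch: decode DisplayFamily/DisplayModel/stepping from CPUID leaf 1. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;

        unsigned int stepping   = eax & 0xF;
        unsigned int model      = (eax >> 4) & 0xF;
        unsigned int family     = (eax >> 8) & 0xF;
        unsigned int ext_model  = (eax >> 16) & 0xF;
        unsigned int ext_family = (eax >> 20) & 0xFF;

        /* Extended fields only apply for family 0x6/0xF, per the usual encoding. */
        unsigned int disp_family = (family == 0xF) ? family + ext_family : family;
        unsigned int disp_model  = (family == 0x6 || family == 0xF)
                                 ? (ext_model << 4) | model : model;

        printf("family 0x%x, model 0x%x, stepping %u\n", disp_family, disp_model, stepping);
        return 0;
    }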
So they should put it on the box. In small font on the back if necessary, but make it an official part of the spec sheet - don't pretend it's irrelevant.
Product lines are in design and development for years (two years is lightning fast; code names can be found for things five or more years before they were released), so everyone who works with them knows them better (much better) than the retail names.
- sSpec S0ABC = "Blizzard Creek" Xeon type 8 version 5 grade 6 getConfig(HT=off, NX=off, ECC=on, VT-x=off, VT-d=on)=4X Stepping B0
- "Blizzard Creek" Xeon type 8 -> V3 of Socket FCBGA12345 -> chipset "Pleiades Mounds"
- CPUID leaf 0x3aa = Model specific feature set checks for "Blizzard Creek" and "Windy Bluff(aka Blizzard Creek V2)"
- asserts bit 63 = that buggy VT-d circuit is not off
- "Xeon Osmiridium X36667-IA" = marketing name to confuse specifically you(but also IA-36-667 = (S0ABC|S9DFG|S9QWE|QA45P))
disclaimer: the above is all made up and I don't work at any of the relevant companies
Vinyl is nowhere near as inconvenient as tapes and sounds way better. And I say this as someone who used to lug around big bags of 12" records as a DJ! It's pretty annoying, but it's still better than having to rewind, and deal with the appalling durability of cassettes!
Nothing has managed to capture the mixtape model. A tangible object made with care you could give as a gift and was unique and valuable. CDs got close but people didn’t have the gear to make them until mp3s had arrived and overshadowed them. Plus CDs with handwritten tracklists didn’t feel as nice as tapes and blank CDs were invariably ugly.
Music as an object is a thing and playlists are in no way the same. You can’t even control the music on a playlist as it’s in the gift of the streamer.
I think the qualities of a cassette mentioned have clearly helped with the mixtape model. But I can't help but wonder if it wasn't also a product of that particular era.
It certainly depends on geographical zones, too, but I remember people burning audio cds for quite a while, and taking them on the go with portable players. This was quite widespread before portable mp3 players became common.
Hell, where I grew up, cassettes were still in regular use until the end of the 90s, and mixtapes had grown increasingly rare.
It's funny that absolutely everything about GHA fucking sucks, and everyone agrees about this. BUT, the fact that it's free compute, and it's "right there"... means it's very very difficult to say no to!
Personally I've just retired a laptop and I'm planning to turn it into a little home server. I think I'm gonna try spinning up Woodpecker on there, I'm curious to see what a CI system people don't hate is like to live with!
I can already tell by their example that I don't like it. I've worked with a bunch of different container-based CI systems and I'm getting a little tired of seeing the same approach done slightly differently.
    steps:
      - name: backend
        image: golang
        commands:
          - go build
          - go test
      - name: frontend
        image: node
        commands:
          - npm install
          - npm run test
          - npm run build
Yes, it's easy to read and understand and it's container based, so it's easy to extend. I could probably intuitively add on to this. I can't say the same for GitHub, so it has that going for it.
But the moment things start to get a little complex then that's when the waste starts happening. Eventually you're going to want to _do_ something with the artifacts being built, right? So what does that look like?
Immediately that's when problems start showing up...
- You'll probably need a separate workflow that defines the same thing, but again, only this time combining them into a Docker image or a package.
- I am only now realizing that Woodpecker is a fork of Drone. This was a huuuge issue in Drone. We ended up using Starlark to generate our Drone YAML because it lacked any kind of reusability, and that was a big headache.
- If I were to only change a `frontend` file or a `backend` file, then I'm probably going to end up wasting time and compute rebuilding the same artifacts over and over.
- GitHub's free component honestly hurts itself here. I don't have to care about waste if it's mostly free anyways.
- Running locally using the local backend... looks like a huge chore. In Drone this was basically impossible.
I really wish someone would take a step back and really think about the problems being solved here and where the current tooling fails us. I don't see much effort being put into the things that really suck about github actions (at least for me): legibility, waste, and the feedback loop.
By adding one file to your git repo, you get cross-platform build & test of your software that can run on every PR. If your code is open source, it's free(ish) too.
It feels like a weekend project that a couple people threw together and then has been held together by hope and prayers with more focus on scaling it than making it well designed.
I also admired this and for a moment thought "I should steal this person's style". Then I quickly realised I am not even close to capable of pulling a design like that off, on so many dimensions.
I guess that's why this person is a professional designer and I'm a person who's never worked on a product with a UI in his career!
You can't completely escape advertising while still participating in modern society but there's still a huge difference between free and premium YouTube in this regard.
Yes, creators have paid sections but they are skippable (and note YouTube helps you skip with a little white dot in the UI[1]) and creators have a strong incentive to protect their credibility. They have an ongoing "relationship" with their viewer. Not so for the random companies that get to spam you with unskippable adverts for crypto scams or fat-free yoghurt in the freezer version.
[1] They don't like sponsored segments as they don't get a cut most of the time. They do have a programme for arranging sponsored segments via the platform, in which case they _do_ get a cut. I'm not sure if they still offer the little skip-helper dot in that case... Anyone know?
> > and creators have a strong incentive to protect their credibility.
> I haven't seen this play out very much to be honest.
"Credibility" means "relative to the interests of their audience". Faux News has a completely different, almost inverse metric for "credibility" with their "Aliuns made the pirramids!" fanbase. CNN follows a more strict "if it bleeds it leads" policy to keep their audience believing them.
There's a huge difference indeed — uBlock + SponsorBlock are superior. Not only do I not see any ads at all—including self-promotions of the video creators and their sponsorship segments—I also get to skip content-free intermissions, tangents, etc. and jump straight to the highlight of the video.