Asahi Linux: July 2022 Release and Progress Report (asahilinux.org)
201 points by pantalaimon on July 18, 2022 | 143 comments



I am astonished by the progress made by the Asahi folks. In particular, having been a Linux user for the past 15 years or so, I can't wrap my head around the fact that a single developer can get such impressive results in reverse engineering a device --- be it storage, Bluetooth, and now even the GPU!

How can this be so different from the situation with, say, the nvidia drivers, or even many wi-fi drivers, where it took years of effort from the community before reaching fully working drivers?


The open-source nvidia drivers are hard because nvidia is apparently deliberately making it difficult: https://www.phoronix.com/forums/forum/linux-graphics-x-org-d..., https://www.phoronix.com/forums/forum/linux-graphics-x-org-d.... The Asahi people have been pretty clear that, while Apple doesn't do much to _help_ alternative OSes, they haven't done anything to _hurt_ them either, and have explicitly opened up their bootloader so that it's able to boot alternative OSes. The Asahi people don't have to figure out how to circumvent restrictions, they're "just" doing normal reverse engineering stuff.

And yeah, it helps that there's just a tiny amount of Mac hardware compared to PC hardware. If you get any arbitrary wifi chip to work with Linux, then congrats, you just made like 2 or 3 of the thousands of PC laptops work. But if you get the wifi chip Apple uses in their laptops to work, then you might've gotten wifi to work on multiple generations of Mac laptops.


Same. The GPU in particular seems like magic to a layman's eye like mine.

Bluetooth and WiFi reverse engineering is something I can conceptually grasp from afar, especially because they should be relatively similar across architectures. They are well defined protocols, you send and receive packets. OK.

But GPUs are such complicated beasts, each snowflakey in its own way, that I don't even understand how they work at a deep level, let alone imagine how reverse engineering would begin. Memory access and sharing with the CPU, shaders, a massively parallel computing model. Crazy stuff. Oh, and you also get to draw lots of triangles really fast while you're at it.


My guess is that the Mx GPU is probably the toughest target for reverse engineering -- as it is unique to Apple. It will probably take many years to reverse engineer. I believe the Asahi Linux X11/Wayland graphics stack still renders on the CPU today (software rendering via Mesa) and _not_ the GPU. I don't think they have the GPU working except for small proof of concept things.

In general, I think the porting effort depends on whether Apple has used a freely available industry component. So if the WiFi chip is something that is available in a non-Apple system, one could tweak the AArch64 Linux driver a bit and get it to work on AArch64 Mx Linux. If the silicon is totally custom I can imagine the task being much more difficult.

My guess is that many of the components Asahi Linux has made good progress on already have good support in vanilla AArch64 Linux. Understanding the integration and working around Apple's proprietary sauce, of course, makes the task totally non-trivial.


This is not correct. Right now, the GPU driver for Asahi Linux is not written in Rust/C++ yet (because that's a lot of work). Instead, the prototype driver is written in Python, and it will be rewritten in Rust/C++ once it works correctly. It runs on a host system, receives OpenGL calls (like the demo application shown at the bottom), and then creates a pile of GPU instructions that are sent over USB to run on the real GPU in the m1n1 preboot environment. So, in practice, the GPU is being reverse engineered and is doing the rendering; it's just that the OpenGL -> GPU layer is being handled outside Linux, in a simpler language, for development speed. This prototype GPU driver has 94% OpenGL ES test suite compliance at this point.

> It will probably take many years to reverse engineer.

The last 6% is decently hard, but probably not terribly hard to finish - and it shouldn't take this team long thereafter to turn the Python code into an actual Linux driver, considering the sheer number of drivers they've already written. Once you have OpenGL ES, you have a HW-accelerated desktop, at least through most desktop environments. Video acceleration is another story they haven't started on, but it's generally easy by comparison. Might take another year or two for full OpenGL or Vulkan though.
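
To make the shape of that flow concrete, here is a purely illustrative, throwaway sketch. This is not m1n1's real API and the opcodes are invented (Asahi's actual command streams are far more involved); the point is just that userspace turns a draw call into an opaque binary command buffer and hands it to whatever link executes it:

    # Purely illustrative -- invented opcodes, not the real instruction format,
    # and the "link" stands in for the USB connection to m1n1 on the Mac.
    import io
    import struct

    CMD_BIND_BUFFER = 0x01  # hypothetical opcode: "here is the vertex buffer"
    CMD_DRAW        = 0x02  # hypothetical opcode: "draw N vertices"

    def build_command_buffer(vertex_buf_addr, vertex_count):
        """Turn one draw call into a little binary command stream."""
        cmds = struct.pack("<IQ", CMD_BIND_BUFFER, vertex_buf_addr)
        cmds += struct.pack("<II", CMD_DRAW, vertex_count)
        return cmds

    def submit(link, cmds):
        """Ship a length-prefixed command buffer to the thing that runs it."""
        link.write(struct.pack("<I", len(cmds)) + cmds)

    if __name__ == "__main__":
        fake_link = io.BytesIO()  # stand-in for the real transport
        submit(fake_link, build_command_buffer(0x8000_0000, 3))
        print(fake_link.getvalue().hex())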


Why Rust/C++?

The Linux kernel is written in C, and Linus has historically made his dislike of C++ clear as well.

Rust would be reasonable but its inclusion in the kernel at this point is "that's a good idea!" rather than "that's a thing being done right now in kernel.org kernels"


This is the userspace part of the driver, not the kernelspace part; the existing open-source graphics drivers already are (for the most part) in C++, because extern "C" is less of a pain than writing a compiler in pure C


The majority of a GPU driver doesn't live in the kernel, but in userspace (part of the Mesa library).


I'm a bit mind blown that you can write a driver in Python. I thought it would be too high level to do the job at all, never mind slowly


"Drivers" for things in the kernel space in this case, are mostly just a glorified shim that allow you to submit command buffers to a device through memory-mapped I/O.

The actual meat of the "driver" in this case is in Mesa, a userspace component, which is what actually forms and creates the command buffers to send.

You can write a driver to interact with hardware in any language assuming you don't have insane timing requirements. It's mostly a matter of engineering deciding where the pieces should go, though. (For example, everything else aside, a major advantage of this kind of design is that all the real complexity exists in userspace.)
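
To make that concrete, here's roughly what "poking hardware from a high-level language" looks like from userspace on Linux. This is a minimal sketch, not Asahi code; the base address, register offsets and values are made up, and mapping /dev/mem like this needs root (and is often blocked by CONFIG_STRICT_DEVMEM):

    import mmap
    import os
    import struct

    # Hypothetical values for illustration only -- a real driver gets these
    # from the device's PCI BAR or devicetree node.
    MMIO_BASE = 0x2_0000_0000   # physical base of the device's register window
    MMIO_SIZE = 0x1000
    CMDBUF_ADDR_REG = 0x00      # register holding the command buffer address
    DOORBELL_REG    = 0x40      # "go execute it" register

    fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
    regs = mmap.mmap(fd, MMIO_SIZE, offset=MMIO_BASE)

    def write32(off, val):
        # 32-bit little-endian register write into the mapped MMIO window
        regs[off:off + 4] = struct.pack("<I", val)

    # "Submit" a command buffer: tell the device where it lives, ring the doorbell.
    write32(CMDBUF_ADDR_REG, 0x8000_0000)  # hypothetical buffer address
    write32(DOORBELL_REG, 1)

A kernel-side shim does the same thing with ioremap()/writel(), plus the memory management and synchronization around it; the point is just that the "talk to the GPU" part is register and buffer writes, which any language can do.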


Python is a popular choice for FUSE filesystem drivers, I think. It's also how this perfectly usable driver for the DualShock 4 is written: https://github.com/chrippa/ds4drv
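
For anyone who hasn't seen one, a toy FUSE filesystem in Python really is just a few dozen lines. A minimal read-only sketch using the third-party fusepy package (untested, and assuming its usual FUSE/Operations API):

    # pip install fusepy
    import errno
    import stat
    import sys
    from fuse import FUSE, Operations, FuseOSError

    class HelloFS(Operations):
        """Read-only filesystem exposing a single file, /hello."""
        DATA = b"hello from python\n"

        def getattr(self, path, fh=None):
            if path == "/":
                return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
            if path == "/hello":
                return {"st_mode": stat.S_IFREG | 0o444,
                        "st_nlink": 1, "st_size": len(self.DATA)}
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return [".", "..", "hello"]

        def read(self, path, size, offset, fh):
            return self.DATA[offset:offset + size]

    if __name__ == "__main__":
        # mount point is the first argument, e.g. "python hellofs.py /mnt/hello"
        FUSE(HelloFS(), sys.argv[1], foreground=True, ro=True)

The kernel's FUSE layer handles the VFS side and just forwards each request to the Python process, which is the same split as the GPU case: kernel plumbing in C, the interesting logic wherever you like.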


> So if the WiFi chip is something that is available in a non-Apple system, one could tweak the AArch64 Linux driver a bit and get it to work on AArch64 Mx Linux.

Yeah, that's precisely what they did. According to this [0] dmesg of a Mac mini, M1 Macs have the BCM4378 chip, for which they use the existing [1] brcmfmac driver. That driver is years old. A cursory look into git log drivers/net/wireless/broadcom/brcm80211/brcmfmac shows maintenance from various people, including folks with @broadcom.com e-mails, reaching back to 2015 (but that might only be the date the directory was moved; it might actually be older, as again this was only a cursory look to establish a lower bound on the age). They did have to add some patches to support the specific chip used by Apple though, which they are upstreaming [2], but they weren't starting from scratch.

[0]: https://gist.github.com/z4yx/13520bd2beef49019b1b7436e3b95dd...

[1]: https://wiki.debian.org/brcmfmac

[2]: https://lore.kernel.org/lkml/4928ea79-2794-05fb-d1a8-942b589...


> [2]

You linked to the old Corellium patch, here's the proper one: https://lore.kernel.org/linux-acpi/20211226153624.162281-1-m...


My bad, thanks for pointing it out! It seems that these patches have also since been accepted and are part of 5.18 onward.


There are parts where basically only the interface to the hardware is different (example as in the article: Bluetooth). But there are also many, many pieces of hardware that are bespoke: GPU, DCP, interrupt controller and more. It's not really a serious hindrance with m1n1, as the communication can be snooped, basically like using Wireshark but at the hardware level. Don't get me wrong, that doesn't make things easy. But it's definitely not an impossible task.


Drivers for things like WiFi are generally platform independent.


> Drivers for things like WiFi are generally platform independent.

For the core of the driver that may be true. Ultimately the driver needs to _interface_ with the platform (and the OS), and for that custom code needs to be written to account for platform and OS differences. You cannot abstract away everything.


> How can this be so different from the situation with, say, the nvidia drivers, or even many wi-fi drivers, where it took years of effort from the community before reaching fully working drivers?

They only have to get a single device model to work, with perhaps a handful of small variations. Monoculture has its benefits.


It also seems quite a bit saner than a lot of GPUs from what I've read of the code released so far.


They have a patreon with over 1000 sponsors and afaiu marcan is working on the project full time.

So I would assume it's mostly developer resources that make the difference.


I think I read somewhere that he never actually got to the sponsorship level he wanted for doing this full time


> How can this be so different from the situation with, say, the nvidia drivers, or even many wi-fi drivers, where it took years of effort from the community before reaching fully working drivers?

Support for Apple Silicon has far fewer device combinations to deal with than what you see with other devices. There is only one type and fixed set of GPU, storage, wifi, and bluetooth drivers to support, versus the hundreds upon hundreds of GPU variants and the near-infinite combinations of other devices found on PCs.

Also, the so-called 'open-sourcing' of the Nvidia drivers got the Linux fans excited, only for them to be disappointed when they realised that Nvidia was still working against them and that the announcement turned out to be a red herring.

Open-sourcing GPU FFI wrapper code that pushes the real work into binary blobs is still not 'open source'.


Marcan is quite the talented reverse engineer and low level engineer. From watching him on YouTube, and a small bit of research into his past (Wii/PS3 pwning), he figures out ways to creatively solve problems and to provide himself with fast feedback on the tough problems he works on. m1n1 is a case in point.


This is a great project. Can Apple be incentivized somehow to help the Asahi Linux folks?

It is not possible to simply reverse engineer everything nowadays -- it could take years to build a polished system and in the meantime there will always be new hardware from Apple. It's a never ending game of "catch up".

Are there some scenarios where Apple might want to allow "proper" Linux on Mx to happen? The fact that Apple has not totally locked down these machines is tremendously heartening. Could it be in Apple's interest to somehow help the Asahi guys move faster -- release hardware specifications in the open etc.?

I think that if Apple releases Mx hardware specs, a high quality Linux implementation can then take shape on the Mx. This will create some competition with the Apple software folks. People will benchmark software running on Mx Linux vs Mx macOS. This could create some healthy competition.

Just like the arrival of llvm helped gcc improve its game, a well functioning software stack on Mx Linux can create some healthy competition for the Apple software team. Their software is quite decent already -- it can achieve greater heights with some competition!


Just to add, Apple has helped a little bit in the form of adding a raw image boot mode that wouldn't be useful internally but helps Asahi's boot process not break in the future.

https://twitter.com/marcan42/status/1471799568807636994


Apple also helped by not copying the iOS Boot mechanism to macOS, which they could have. This entire "Permissive Security" mode allowing for installation without jailbreaking took extra work to make and was not necessary otherwise.


They probably want to entice ARM Windows and ChromeOS support longer term.


Reading between the lines, Apple wanted Windows support at release, but Microsoft signed an exclusivity agreement with Qualcomm for windows on ARM so they weren't able to make that happen even if they wanted to.


It's fair to speculate that Microsoft likely has Windows already running on M1 machines somewhere deep within their labs. Even if it's just some engineers doing it "in their free time"

Wonder what Qualcomm had to do to get that agreement though. All signs seem to point to Qualcomm being absolutely poisonous to work with and their ARM offerings being uncompetitive with Apple's now


Is that ever set to expire?


Yes, quite soon (this year? next year?) IIRC


> it could take years to build a polished system

That's the reality of Linux support on pretty much any hardware - it can easily take years to really shape up and become polished. The Apple M1/M2 if anything are making surprisingly fast progress.


It's not comparable. I can buy brand new Intel/AMD hardware and Linux generally boots out of the box with mostly everything working except some quirks that have a good chance to be fixed within a few months.

Whereas Asahi still sports a rudimentary GPU driver written in Python... Current guesstimate to achieve hardware acceleration is... years. https://news.ycombinator.com/item?id=32138327


Your link just goes back to this HN comment section.


Thanks. I meant to link this comment in particular: https://news.ycombinator.com/item?id=32138327


Thank you for showcasing the true reality of the GPU progress and the fact that there is still no H/W acceleration yet; that is more straight to the point and concise than most of the Linux fans vaguely screaming 'iT w0RkS'. I don't care about that.

If there is no H/W acceleration, then it is not useful as a daily driver at all.

We wait until there is proper support for H/W acceleration before using such a system. Otherwise all your memory eating programs or graphically intensive apps will run the computer into the ground.


Any hardware?

You can basically take any PC with brand new components and have high confidence that it'll work perfectly with Linux.


Desktop yes, laptops not so much.


Laptops are made of commodity parts these days. Very few have anything that Linux doesn't support.


I get that, but there are differences. There's a huge number of problems that pop up on Linux laptops. Things like special functions of keys not working (volume, LCD brightness, mute, switch to external video, etc). Then things like gestures on track pads. Waking up from sleep mode. Switching from iGPU to dGPUs. Not to mention various things like Lenovo disabling Linux-signed bootloaders, or lying about PCI IDs so they can advertise some Windows-specific software RAID driver.

So sure the popular GPU chipsets, ethernet, wifi, etc have drivers. But the integration doesn't necessarily work, doubly so after sleep mode. There's a cottage industry of selling "good" wifi cards for laptops (like Lenovo) that work well with Linux to replace the "bad" wifi cards that don't work well with Linux. Other problems include controlling clock speeds of GPUs, memory chips, fan speeds, etc.

So sure, there are some laptops that are known to work well, or, if you're particularly lucky, your vendor (like System76 or Dell) supports Linux. But often the "linux" flavor of the laptop has different hardware than the Windows flavor, particularly because the drivers for some chips don't work well.

Most of the folks I know that used to run linux on laptops have given up, they don't want to tinker with PCI-id maps, disabling features to get idle/sleep modes working, etc. They just buy an apple laptop and everything works.


It is not healthy competition only for the Apple software team ... it is competition for every hardware manufacturer. Vendors must boost their game if they brand themselves as decent Linux supporters, and Apple provides premium hardware which is also usable by Linux systems. Before Apple Silicon days, the primary motivation to buy Apple devices was the software side, but now the hardware has gone to the next level, and buying for the hardware alone is tempting.


> Vendors must boost their game if they brand themselves as decent Linux supporters, and Apple provides premium hardware which is also usable by Linux systems.

Apple is practically the worst computer vendor when it comes to "decent Linux support"; even nvidia is better, at least they have proprietary drivers.

It's quite a stretch to say that "vendors must boost their game if they brand themselves as decent Linux supporters"; rather, the conclusion here would be that vendors should drop their game altogether since no one cares. People, even Linux users and developers apparently, buy the hardware anyway even if the vendor just totally ignores Linux (at best).

I would find it quite telling if we go back to that era where all I see are MacBooks at the Linux kernel developers conferences, especially when the other manufacturers are now most definitely not ignoring Linux. Fortunately, that is not the case, so far.


I was speaking more about the future than the current situation, in case Apple were to start giving better support.

Currently yes, Apple is the worst vendor.

> rather, the conclusion here would be that vendors should drop their game altogether since no one cares. People, even Linux users and developers apparently, buy the hardware anyway even if the vendor just totally ignores Linux (at best).

That's exactly the idea I was chasing - if Apple would care, then other vendors should care as well. But currently, Linux users are still in the minority and it does not matter that much yet.


> This is a great project. Can Apple be incentivized somehow to help the Asahi Linux folks?

> It is not possible to simply reverse engineer everything nowadays -- it could take years to build a polished system and in the meanwhile there will always be new hardware from Apple. It's a never ending game of "catch up".

I would think it's very much in their favor to help. If it takes years to build a polished release, a lot of people will value years-old Apple hardware over whatever was more recently released. Cut that down to months and they just added a new (albeit smaller) segment chomping at the bit for their latest hardware.


> Can Apple be incentivized somehow to help the Asahi Linux folks?

Apple doesn't have a history of helping anyone but themselves, and if you read the article it looks like they're doing everything they can to hinder others by using proprietary interfaces and overly complicated hardware/firmware designs and breaking things with macOS updates.

It is a truly herculean task to get Linux merely booting up and usable on Apple M* SoCs - kudos to the Asahi Linux team for all the heroic efforts - personally it feels like it's a great reverse engineering project and they should continue fighting it, but Linux on M1/2 is not going to be a daily driver like Linux on standard x64 machines is.

Thankfully Intel and AMD are doing pretty well in the performance department with their latest chips and it's only going to get better.


> they're doing everything they can to hinder others by using overly complicated hardware and firmware designs and breaking things with macOS updates

This is such a shallow take. Your animosity toward Apple has blinkered you.

There is zero possibility that anyone at Apple is spending any time working to make hardware designs "overly complicated" for the purpose of hindering efforts like Asahi Linux.


> There is zero possibility that anyone at Apple is spending any time [..] for the purpose of hindering efforts like Asahi Linux.

Ah, that argument. Apple is, like Microsoft, actively trying to get people to run Linux and applications _under_ their OS -- Microsoft is shipping an entire Linux emulator for graphical applications and Apple is going to ship an x86 Linux emulator in a future release too.

And then, both companies have a lot of incentive for you to continue to run their operating system as your main OS -- they can show you ads that way, they can continue to sell you their subscriptions and services, their partners', etc.

Combine the two, and I no longer believe the argument that "they couldn't care less about people who run Linux".


Of course Apple wants macOS to be your preference. They sell hardware and services that depend on macOS.

Things Apple does to make macOS more useful or appealing are in service of that goal. Absolutely.

I have not heard, and cannot find, any rumors about Apple shipping a Linux emulator. Links?

Regardless, I would not argue that Apple doesn't care about Linux. I would argue that Apple does not overcomplicate their hardware engineering with the goal of preventing people from running Linux on their hardware. And that is what I said.

macOS drives hardware sales. Apple makes money on hardware sales. They will not compromise their ability to change hardware or software to support a tiny noncommercial project, of course!

But at the end of the day, thinking as commercially as I can: Apple would prefer that you buy a M1 Mac and run Asahi, than buy a Dell running Windows or Linux instead. They have no opportunity to sell you macOS services in either case, but if you buy the M1, they capture hardware sales revenue.


> I have not heard, and cannot find, any rumors about Apple shipping a Linux emulator. Links?

It was even on HN: https://news.ycombinator.com/item?id=31644990 , https://arstechnica.com/gadgets/2022/06/macos-ventura-will-e...

> macOS drives hardware sales. Apple makes money on hardware sales.

Not so clear, especially when the money they make from macOS hardware sales is actually less than what they make from services or iOS hardware sales.

The era when we had an open computing standards Apple that tried to interoperate with others is long gone. This is the era of the captive Apple who is trying to bind you into their ecosystem, and I won't believe the "0%" argument any more than I believe it for MS.


The extension of Rosetta2 to work in virtualization contexts is not the same as Apple shipping a Linux emulator / environment like WSL / WSL2.


Please note that I said a x86 Linux emulator, and it is exactly that. It is:

1. A Linux binary,
2. Which translates Linux/ELF code from x86 to ARM,
3. And translates x86 Linux syscalls to arm64 Linux syscalls,
4. With the explicit intention of allowing the user to run x86 Linux binaries under macOS.

If you want to be this level of nitpicking, then WSL is not a Linux emulator either; after all, it also requires an existing Linux kernel to work, and may not even emulate anything whatsoever. But BOTH WSL2 and this emulator serve _exactly_ the same purpose: to facilitate running Linux binaries under a native host OS that is not Linux, thus my point.


Lol on animosity - nowadays merely stating facts implies animosity. Also, another way to think about it is that there's zero possibility that anyone at Apple is spending any time working to make hardware designs that are friendly to alternative OSes. Those are both effectively the same things.

Also, since you might have taken this personally - I wanted to clarify that my comment is mostly a reflection of the wasted potential of the M* hardware - making it friendly to alt OSes would vastly improve the hardware's utility without hurting Apple. (If you are to believe most HNers, they are a H/W company that is not interested in selling your personal data - so it should be in their financial interest if alt OSes sell more of their hardware.)


> Those are both effectively the same things.

They are not even close to the same thing.

And if you were more familiar with the Asahi Linux project, you might remember that Apple has done a thing or two that made their work possible/future-proof.


They are the same thing. Unless Apple commits to documenting the hardware, designing with backwards compat (lol, not gonna happen - but an example), or starts committing code to Asahi Linux - the rest of it is all wishy washy - a variation of "no guarantees anything will work or continue to work, you are on your own". It doesn't really matter otherwise if they are or are not spending time actively breaking things - as the article shows, the net effect is the same.


They are not at all the same thing.

Absence of beneficence is not evidence of malevolence.

Apple is busy making their custom OS work on their custom chips. They will change either or both, as needed, to serve their needs.

It's no minor thing to ask Apple to limit their own flexibility for a tiny non-revenue-generating external project. Remember Apple does not sell Mx chips on the open market like Intel/AMD/some ARM mfrs. They do not publish hardware specs as a service to customers, because they have no chip customers.


No one is claiming mal anything - I was pointing out that the results are such that it doesn't really matter if it was done purposely or not. To get different results they would have to actively think and invest in support for alt OSes. So long as they don't do that it doesn't matter if they actively hinder or passively - you can't prove one way or the other anyways - all you can hope for is better results.


I quote you:

> doing everything they can to hinder others by using overly complicated hardware and firmware designs and breaking things

That's malevolence. And it's completely certifiably and demonstrably false.


You omitted "It looks like" before that sentence - for a person who is so bent on rigid meaning and precise wording omitting that part looks a little out of character :)

Again, you cannot really prove this from where you and I sit - so it is implied that even without "it looks like" I meant that their externally visible actions / results make it look like they go out of their way to hinder - that can be a combination of thoughtless hardware design and a resolve to not help anyone else. And none of those are necessarily "malevolence" (which is a word you used) - it's just business practices.


Apologies for improperly excerpting your quote, if that was the determining clause!

...but it does not look like they do those things either.

You'd have to ignore all of the reasons it makes no sense for Apple to make decisions like that, in order to believe or even wonder about such a thing.

If you had said "I wish Apple would work with Asahi to make their distro a full-fledged citizen on Apple's desirable hardware." ... then I would have upvoted your comment and moved on.

Instead I saw you describing a hostile and active thwarting scenario, which is unfounded and nonsensical.

I won't even address your "thoughtless hardware design and resolve not to help anyone else", except to say that ... you're doing it again, and you're still wrong.


> Absence of beneficence is not evidence of malevolence.

Except it absolutely is evidence of malevolence. It's not conclusive evidence (i.e., irrefutable proof), but almost nothing has conclusive evidence.


Yay semantics.

I did not make breakfast for you this morning. Is this evidence that I want you to starve?


> Is this evidence that I want you to starve?

It's not conclusive evidence (i.e., irrefutable proof), but almost nothing has conclusive evidence. ;)


This is ridiculous reasoning (which is also being trounced by quesera), and you should just take the L at this point.

Helping Asahi would tangentially benefit Apple (such as increasing Mac sales... something that is reasonable to assume might happen a little bit, given how much more efficient the M chips are compared to Intel, and the fact that most Linux stuff already compiles just fine on them) while also possibly garnering some goodwill. It'd be great if they sold or licensed the chip to others so that it would have to be documented publicly, but we're not there yet.


The thing is, we're moving from CPUs that at least had their external behaviors and interfaces documented to CPUs which are _not_. Apple has already missed plenty of opportunities to generate goodwill or even minimal assurances of interoperability, and the argument that "at least it's not as locked down as the iPhone" (where this would be outright impossible instead of just ridiculously complicated) is hardly reassuring.


The CPU is documented; the documentation is written by ARM.


ANE is not, AMX is not, secure enclave is not... plus a lot of proprietary registers


ANE and SEP aren't the CPU, they're the SoC ;)


Apple explicitly does not make an ARM CPU, they make an "Apple Silicon CPU", and the fact it currently resembles ARM (and not fully) is an implementation detail.


What about the GPU portion?


That isn't the CPU.


Linux was booting within weeks. Hardly a herculean task. I'm really not sure where this idea comes from. Yes, of course the initial investment is pretty high because of the new architecture. But it's pretty clear they have proven they can do it. M2 was working with a few hours of coding. The GPU prototype driver has over 90% compliance. Like, when does this idea die?


God it gets tiring - Did I say booting Linux on M1 was a herculean task? I even used the words daily driver - making that a daily driver is definitely a herculean task - it is still not there - the GPU driver piece is pretty herculean by itself.


This is what you said: "It is a truly herculean task to get Linux merely booting up and usable on Apple M* SoCs". And you can use the M1 Macs today and it's usable. Whether it's daily driveable is an entirely different, subjective, question. I could probably daily drive it just fine with some manageable pain points. That doesn't mean it's daily driveable for you or anyone else. But whether it's usable is really not in question at this point.


Hey, what about that usable part? Are we going to argue your usable is the same as everyone else's or even the same as Linux on x86 usability?


> Thankfully Intel and AMD are doing pretty good in performance department with their latest chips and it's only going to get better.

I often wonder, if AMD were able to use TSMC's cutting edge fabrication node, how their laptop chips would compare with the M1. Apple uses TSMC's best node as I understand it.


I think performance-wise Intel 12th gen and Ryzen 6000 are already a little ahead. Power draw will be helped by moving to TSMC's latest node, but general purpose x86 will always have some power disadvantage compared to the specialized ARM hardware and software that Apple makes.


Yes, x86 will always have some power disadvantage because ARM's heritage is low power embedded devices (and RISC). x86 has other advantages like a very mature and optimized software stack with good compilers.

Apple also has the advantage of cramming every thing on a single piece of silicon while AMD has gone in for the chiplet approach. The single piece of silicon reduces yields but increases performance with a lower power draw. The chiplet approach followed by AMD is more modular, less risky and cost effective.

So if both AMD and Apple used the same TSMC node _and_ AMD went in for "cost is not an objective", crammed everything onto a single piece of silicon, _and_ added HBM (aka unified memory), it would be really interesting for the two to go "head to head".

I would definitely be interested in paying good money for such an x86 client system!

I hope someone at AMD is listening!


It's all about tradeoffs, isn't it? You can't have a diverse, extensible, open ecosystem like x86 and get every ounce of performance and power efficiency - something's gotta give. But the good news is you can get pretty close with great engineering, and competition keeps that up. Maybe one of the many x86 vendors will build such a system - Lenovo and Microsoft are working with AMD on their new custom designed ThinkPad Z series lineup and I hear good things about it.


Apple has had a lead on using TSMC's latest process. However the lead in iGPU performance and perf/watt is a fair bit larger than you'd expect from the process differences.

I'm still puzzled why, during a long GPU shortage where supply was short and prices insane, nobody in the x86-64 world managed a >128 bit wide memory interface for the benefit of an iGPU. Apple desktops and laptops have options for 256, 512, and even 1024 bit (on the Studio) wide memory systems.


ARM has some structural advantage in the decoding stage (instructions have the same length) and a huge lead in low power systems it got by heritage, not to mention less historical baggage (x86_64 CPUs still have a 16 bit mode and an 80 bit FPU).

I think the lead the M1 has is bigger than what can be attributed to the different node.


AMD does use the same node as Apple now. But that will most likely change with the M2 Pro or M3 at the latest.


Far as I know Ryzen 6000 chips use 6nm node - 7000 series will use the 5nm node and they are not out yet?


Ah yes. You are right! It does use 6nm.


> Can Apple be incentivized somehow to help the Asahi Linux folks?

I would guess that the best way to incentivize them is to let them know—en masse—that a large number of people would want to buy new Apple Silicon-based Macs to run Asahi Linux on if it were in better shape.


It's very difficult to do more to help Asahi Linux. If they upstreamed things properly into Linux (which is arguably the most cost-effective way to support hardware on a broad scale), Mesa, QEMU and related projects, Asahi Linux as a separate project would turn obsolete fairly quickly.


They are upstreaming as much as they can


But Apple isn't, that's my point.


Ah, I didn't read it that way, too many pronouns.


I wonder if Apple has a secret internal Linux distribution in use on Mx.

I seem to recall reading that they had an x86/x86-64 internal distribution but maybe I am thinking of another company...


I don't know for a fact, but I am pretty sure they probably do. The question is, what role does it play in the organization? What I think they probably have an internal distro for is some of their servers. If that is the only case, it probably doesn't mean much, but that may not be the case. I could definitely see them having an internal distro that helps with some low level development and testing.


According to marcan’s twitter Apple likely has an internal one for silicon validation, but it’s probably not complete / something you’d want to use as a daily driver.


Apple had an internal project of OSX running on x86 (amd64?) for a while before they got serious about it.

It's less likely that they have an internal ARM/Mx Linux project though. Doesn't serve any known purpose that I'm aware of.


There's a rumor that they have an internal Linux distro that the silicon validation team at Apple uses on their hardware so that they don't have dependencies on the software division.


I recall reading that was only for bringup though, not full peripheral support?

Either way, that's a good point and does have clear internal utility. It's certainly easier to get Linux running on a new chip than OSX!


> It's certainly easier to get Linux running on a new chip than OSX!

I'd say that's wrong. From the outside view, perhaps, but an internal Apple team can certainly get macOS to sing and dance on whatever they need very quickly.


Good point. The x86 OSX port was created and initially maintained by a single person, if the folklore is accurate.

A similar effort for Linux would usually benefit from related work by many people. For OSX or macOS it would need to be an explicit project, starting with fewer initial bits. But the internal Apple team would be exactly the right experts for the job, so it would be easier to organize at least.


What operating system(s) do they use to run their cloud services? Given that MacOS has moved in a different direction for many years, it seems a bit unlikely that they run that at this point.


I think it's all Linux these days.


Okay, I don't have a Mac. But the discussion of Apple's design for the trackpad and keyboard is fascinating: they run the touchpad and keyboard through an embedded M3 microcontroller that's wired to the rest of the system over the SPI bus rather than USB, for both lower latency and higher power efficiency.


It's a somewhat common approach (I think USB was only done in Macs, in fact). HID over I2C/SPI is a fairly common solution, taking over from PS/2, and of course you need an MCU for the matrix scanner and the touchpad anyway.


Yep, all laptops I have used (Samsung, Lenovo, Dell, HP) have their touchpad connected over I²C.
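
If you're curious what talking to one of those looks like, here's a rough sketch of the first step of the HID-over-I2C protocol (reading the HID descriptor) using the smbus2 package. The bus number, device address, and descriptor register below are placeholders -- every device defines its own, and Apple's SPI parts speak their own protocol rather than this one:

    # pip install smbus2
    from smbus2 import SMBus, i2c_msg
    import struct

    I2C_BUS  = 1        # hypothetical bus number
    TP_ADDR  = 0x2c     # hypothetical touchpad I2C address
    DESC_REG = 0x0020   # hypothetical HID descriptor register

    with SMBus(I2C_BUS) as bus:
        # Write the 16-bit descriptor register (little-endian),
        # then read the 30-byte HID descriptor back.
        write = i2c_msg.write(TP_ADDR, list(struct.pack("<H", DESC_REG)))
        read = i2c_msg.read(TP_ADDR, 30)
        bus.i2c_rdwr(write, read)
        desc = bytes(list(read))

    # First fields of the descriptor: total length, spec version, report descriptor length.
    hid_desc_len, bcd_version, report_desc_len = struct.unpack_from("<HHH", desc, 0)
    print(f"descriptor length={hid_desc_len}, HID {bcd_version:#06x}, "
          f"report descriptor is {report_desc_len} bytes")

In practice the kernel's i2c-hid driver does all of this for you; the sketch is just to show how little there is between "I2C bus" and "working touchpad".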


I'm a bit surprised they have two MCUs though (BCM5976 and STM32)


The BCM5976 is a dedicated touchscreen ASIC that Broadcom makes for Apple for their iPhone and iPad devices. There's probably an MCU (or four, this is Broadcom we're talking about) inside, but that's probably because it's simpler to implement this way.


Reminded me of the Raspberry Pi Pico W[1], where they paired a dual-core ARM Cortex M0+ MCU with a wifi chip that has both a Cortex M3 core and a Cortex M4 core[2].

The Arduino Unos being paired with an ESP-01 module is similar. Tons of other examples out there as well of course.

Not sure if there's a similar "power inversion" at play here, just find it funny when it shows up.

[1]: https://www.raspberrypi.com/news/raspberry-pi-pico-w-your-6-...

[2]: https://www.infineon.com/cms/en/product/wireless-connectivit... (see block diagram)


this is one thing i can’t stand with macs anymore - they have enormous input latency.

if SPI results in lower power usage, then great. USB can easily manage single digit millis, and i’d estimate 4 or 5 (?!) frames of composition and/or vsync latency. insane.

it means my 2019 macbook pro has ~80ms input latency - absolutely noticeable but not the worst thing in the world - the m1 and m2 machines are actually worse somehow. the higher refresh rate displays help (back down ~50ms) but HFR is a crutch.

i really wish apple would get on top of that.

i think peripheral interconnect is a moot point :)


I don't see how SPI would introduce latency, quite the opposite. SPI is a super simple bus, you write bytes and get bytes in return - do that in response to an interrupt generated via a separate interrupt line and you can't get any less latency than that.
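
For reference, this is about all there is to a single full-duplex SPI transaction from userspace, here with the spidev package (the bus/device numbers and the command bytes are made-up placeholders; a real device defines its own framing on top of this):

    # pip install spidev
    import spidev

    spi = spidev.SpiDev()
    spi.open(0, 0)                  # /dev/spidev0.0 -- hypothetical bus/device
    spi.max_speed_hz = 8_000_000
    spi.mode = 0

    # Clock out a command while simultaneously clocking in the reply --
    # that's the whole bus: bytes out, bytes in, nothing else.
    reply = spi.xfer2([0xA0, 0x00, 0x00, 0x00] + [0x00] * 16)
    print(reply)

    # In a real driver this transfer runs from the handler for the device's
    # separate "data ready" interrupt line, which is where the low latency
    # comes from.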


ah, i didn't mean to suggest that SPI is adding additional latency (i'd have bet the opposite), i'm pretty confident it's everywhere else in the stack causing it.


I doubt the choice of SPI is related to input latency issues.


i was trying to say that switching to SPI to reduce latency would be redundant, given that the rest of the stack seemingly adds so much..!


I wonder why this is true when their iOS touchscreens are very responsive.


I’m not even planning to run Linux on an Mx machine and these updates are fascinating - kudos to whoever writes them, great job!


> Unfortunately, it seems a subset of monitors have a strangely broken behavior where, when waking up from standby, they will virtually disconnect their inputs a few seconds later, then connect them again.

I'd be curious to know whether disabling "EUP Compliance"/"EUP Deep Sleep" mode resolves this issue.

The long and short of it is that the EU passed some idle power requirements that monitor companies found difficult (or expensive?) to implement and it's caused basically a decade of weird bugs in PCs, docks, and other hardware. To hit the power target they put the monitor into a really really deep sleep state and some of the implementations don't always wake up properly, or do weird things as they wake back up and restart.

(this occurs even on some premium monitors and brands... my Dell P2715Q is a very high-end monitor for its day and it has bad issues with freezing after wakeup!)

This is the source of the classic "my monitor sometimes won't wake up after my PC goes to sleep, until I cycle the power to the monitor" and "all my desktop icons reshuffle themselves after sleep" and I suspect also the root cause of many "displayport issues" with thunderbolt docks.

I think EUP power draw is also why many monitors "disconnect themselves" when idle... from the perspective of the PC it's in a deep enough state it's not even responding. This is legal behavior according to the spec, it's not incorrect to do so, but it's not required, and it confuses the PC which obviously now thinks the monitor is disconnected.

I don't have any fine-grain knowledge of exactly what they are doing when they wake up but... rebooting the monitor as a method to wake it up from a deep-sleep state wouldn't surprise me all that much. It would explain the "desktop icons shuffling themselves" failure mode, windows sees a disconnected monitor (even for a second) and goes to the fallback virtual monitor (640x480 res), that's the classic behavior for a disconnected monitor, and rebooting explains why it would disconnect.

Apologies to the planet, I wish it wasn't implemented so poorly, but, turning off EUP Deep Sleep is pretty much the first thing you should do when you buy a monitor.


Asahi Lina's livestreams (mentioned in the post) are very interesting and well worth a look, by the way. Clearly a very talented developer, but with a cool aesthetic and approach too.


Is anyone convinced it's not just marcan?


I'm sure they just coincidentally have the same machine name, and both happen to have /home/marcan directories.


I am curious, is there anybody using Asahi Linux as their daily driver?


Not a daily driver but a server for porting work to Linux ARM.

We had trouble finding a cheap ARMv8.1+ server, and the M1 really fit our needs. We're doing builds in a Docker container of Amazon Linux 2 for ARM, since the main use case for our software is running on AWS Graviton 2/3 instances.


this is something that has always puzzled me about bluetooth, which this article has reminded me of:

why are there no bluetooth controllers that operate over pcie? well, with the now notable exception of these macs, apparently.

i’d looked for them in the past, but could never find one. i always thought i’d just never looked in the right places, or there were a few obscure vendors for them. but the fact that bluetooth over pcie is not supported in linux yet indicates there might not be any out in the wild at all! very strange.


Running Bluetooth over USB has always struck me as a bit of an odd choice. With USB having to run through the CPU, doesn’t that make things like Bluetooth audio more susceptible to load-induced dropouts? Would make more sense from my layman perspective to have it handled by a CPU-independent controller over PCI-E?


No CPU has any problem running audio through basically any interface anymore ever since the Windows 9x days; compressing audio will actually use significantly more CPU than transferring it. If your platform is so constrained the CPU can't send audio inline over USB, then it most definitely won't be able to compress it anyway. For such a scenario, there are Bluetooth controllers that send audio data out-of-band (e.g. through a I2S link), and may even perform the compression in hardware. But this is hardware that is significantly limited, not a PC.


What would be the benefit of that?

Even the few PCI Express add-on Bluetooth cards that are "PCI only" have a small USB controller built in. So the host OS sees a USB add-on card with a Bluetooth device hanging from it. There are zero drivers needed since both the USB controller and the Bluetooth USB device are standardized.

If you now decide to remove the USB controller, all you're doing is suddenly making an incompatible interface that now requires drivers for questionable benefits. I will vote to keep the standard any other day.

That said, most PCI addon cards just ditch the onboard USB controller and force you to connect the USB bluetooth device to the USB controller on the motherboard, though. And THEN they require drivers anyway because volatile firmware or whatever.


> What would be the benefit of that?

well, aside from simplicity in principle, you wouldn't need to have a USB stack to have bluetooth connectivity - a protocol plagued with "mysterious" connectivity issues - any reduction in that complexity is a benefit. it might even result in a modest increase in energy efficiency.

> There's zero drivers needed since both the USB controller and the Bluetooth USB device is standarized.

the bluetooth stack is standardized, true, but as far as i understand, this has as much to do with USB as a mouse or a keyboard does (indeed, as mentioned in the post, the only required stitch-up work was between the bluetooth HCI interface and a pcie transport)

> If you now decide to remove the USB controller, all you're doing is suddenly making an incompatible interface that now requires drivers for questionable benefits.

i don't think that'd be the case.


> you wouldn't need to have a USB stack to have bluetooth connectivity - a protocol plagued with "mysterious" connectivity issues - any reduction in that complexity is a benefit. it might even result in a modest increase in energy efficiency.

There's very little to this -- USB is a well known standard and power saving in USB is also well understood and exercised. PCIe power saving was not even a thing until relatively recently, and anything you'd developed for this new "HCI-over-raw-PCIe" thing would have to be done almost from scratch. It's ironic, but USB-over-PCIe may actually be the simpler "logical" protocol than raw PCIe, and may even provide further physical power saving considering how dead simple USB is as an interface. Or used to be. Cough.


> USB is a well known standard and power saving in USB is also well understood and exercised. PCIe power saving was not even a thing until relatively recently

well i mean it's all PCIe at the end of the day - all the USB stacks run on top of PCIe lanes - principally anything USB is pure overhead.

> PCIe power saving was not even a thing until relatively recently, and anything you'd developed for this new "HCI-over-raw-PCIe" thing would have to be done almost from scratch.

as far as i can tell PCIe power saving is still not a thing (at least of any significance). in any case, the energy usage of the transport layer and what messages you're sending over it should be largely orthogonal - i don't know exactly what you think you'd need to implement here. i mean, we're not being hypothetical here: apple have literally done this in their new macs, and the blog post specifies what they had to do to make it work in asahi: send the bluetooth HCI messages over the PCIe transport directly!

> [...] and may even provide further physical power saving considering how dead simple USB is as an interface

again, it's all already on PCIe no matter which way you cut it..


> again, it's all already on PCIe no matter which way you cut it..

This is why I said the "it's ironic" part. First, realize that under no circumstances would a USB link be "higher power" than a PCIe link just because, logically, there is a USB controller which connects via PCIe "no matter which way you cut it" -- precisely because in a way it does matter which way you cut the interfaces. A USB link can be as few as 2 wires. Do the math...

> apple have literally done this in their new macs, and the blog post specifies what they had to do to make it work in asahi: send the bluetooth HCI messages over the PCIe transport directly!

And how do we even know if this is the correct, low power behavior? Have they implemented runtime power management for this device? D3cold rings a bell ?

For better or worse, power management _is_ a well understood problem with USB devices & controllers; you can be pretty sure the Linux USB controller driver is as low-power as it gets; you have much fewer assurances if you go directly to the PCIe. I'm quite sure that the "mysterious connectivity issues" you attribute to USB are in part due to runtime power management logic.


Probably because Bluetooth has always been relatively slow while also being meant for battery operated devices. PCIe’s bandwidth is way overkill and the nature of pcie makes it more complex to implement while also taking more power than simpler low power alternatives.


Best bet for Linux on the desktop: limit it to Apple hardware.


Cool to see this project is still alive. I made a small contribution[0] almost 1.5 years ago when this project debuted.

Best of luck with future endeavors.

[0]:https://github.com/AsahiLinux/AsahiLinux.github.io/pull/4


>Cool to see this project is still alive

Just "still alive"? They delivered solid results very fast, and never lost steam...


Not sure how else I'd describe it. It's either dead or alive.


You could have said "still thriving" or something similar. "Still alive" has connotations of barely surviving. This team has been doing incredible work incredibly quickly from the beginning.


Or alive and kicking.


Tremendous progress. Hope to see some fun GPU support next year sometime!


Has anyone run a web server on M1/Linux?

Thoughts, comments, observations?



for me the astonishing part is not only the development speed, but also the eGPU support on Linux. That's something which just doesn't work on macOS and won't unless Apple decides to support third party hardware, which isn't going to happen any time soon.

I hope over the next months there is a way to access eGPUs without giving up the macos installation, even if it is over some Asahi VM with direct access to the hardware somehow.


Asahi Linux has said that the M1 is plagued by the same ARM bug as the Raspberry Pi 4, and so eGPUs will never be supported with the Linux driver.

It can, theoretically, be enabled, but be prepared to give up a lot of hardware security and also expect every app you want to use to need a patch because it breaks ABI.


If I understood it correctly, the video at the end of the post is already using an eGPU.

Do you have any pointers, either blog or article, regarding the issue with arm and the raspberry pi?


> the video at the end of the post is already using an eGPU.

That’s actually a non-Apple-Silicon machine using the M1 as an eGPU, for ease of development.


Amazing work. I should check their website if it supports dual boot. I can't wait to try it out once a nice new Mac Mini is getting released :)


Definitely supports dual boot...it's baked into Apple's bootloader, so it's pretty much the only way it works.


Wow, they got the GPU working!

Does this imply we can anticipate GPU virtualization in MacOS VMs (not running on a MacOS host) at some point?


Love to know the progress on MacBook external displays (i.e. Thunderbolt). This is what is blocking my daily driving.



