
Of course most of the platform weirdness is down to "we're vertically integrated and our business goal is the end product and getting that product done fast" but some of the design decisions are kinda baffling even when taking that into account. Like what exactly did they gain by making a weird non-PCIe NVMe situation? Is it really any easier/faster to make the kernel handle that crap than to put the drive on a virtual PCIe bus in hardware+firmware? Are they.. (oh no) trying to improve boot speed by not having to discover the drive on PCIe?!


Why have PCIe when you don't need it? Nothing else is on PCIe internally on mobile SoCs. It's just not a thing. That would just add silicon that you don't need.

All these "crazy" design decisions only look crazy from the point of view of x86/server hardware. From the point of view of an embedded SoC this is all reasonable and standard practice.


Well, yeah – embedded standard practice is not caring about standards and expecting the kernel to accommodate you. That makes sense for Apple with their vertical integration, but even the less integrated vendors do it; see the entire concept of a "BSP".

But that is exactly what sucks. That's why embedded keeps creating piles of e-waste doomed to run only crappy, outdated vendor kernels unless someone invests huge effort into reverse engineering. The whole "we don't need it" attitude toward standard things that mainline kernels Just Work with is evil.

And now – with both Apple and Qualcomm – we have this embedded crap powering general purpose laptops…


Embedded has the DeviceTree standard to serve the same purpose as PCIe enumeration. This is even supported inside UEFI (which is how we will boot standard distros, once our kernel patches trickle upstream). There's no reason why not using PCIe means "creating all these piles of e-waste doomed to only run crappy outdated vendor kernels". What causes that isn't the choice of device enumeration; it's vendors not bothering to upstream anything.

(This is another difference between the Asahi Linux project and Corellium's kernel: we're going through the bureaucracy of standardizing all of our DeviceTree bindings, which takes time but establishes a common reference that other OSes such as OpenBSD and bootloaders such as U-Boot can use to bring up their own drivers for this hardware on top of our first-level bootloader.)


how exactly does one standardize DeviceTree bindings?


The canonical repo is the Linux kernel tree, so you go through there (and the DeviceTree maintainers). This is mostly for practical reasons since they are the primary consumer, but the bindings are packaged for use by other OSes too.
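To make that concrete: on the consuming side, a driver declares which "compatible" strings it handles and the kernel binds it to matching DeviceTree nodes, much like PCI drivers bind on vendor/device IDs. Rough sketch only; the "vendor,hypothetical-device" string below is a made-up placeholder, not an actual binding:

    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    /* Bind on a DeviceTree "compatible" string instead of a PCI vendor/device
     * ID. "vendor,hypothetical-device" is a placeholder, not a real binding. */
    static const struct of_device_id demo_of_match[] = {
            { .compatible = "vendor,hypothetical-device" },
            { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, demo_of_match);

    static int demo_probe(struct platform_device *pdev)
    {
            dev_info(&pdev->dev, "bound via DeviceTree match\n");
            return 0;
    }

    static struct platform_driver demo_driver = {
            .probe = demo_probe,
            .driver = {
                    .name = "demo-device",
                    .of_match_table = demo_of_match,
            },
    };
    module_platform_driver(demo_driver);

    MODULE_DESCRIPTION("Sketch of DeviceTree-based driver binding");
    MODULE_LICENSE("GPL");

The binding document in the kernel tree is what standardizes the compatible string and its properties; the code above is just the reason the kernel ends up being the primary consumer.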


>And now – with both Apple and Qualcomm – we have this embedded crap powering general purpose laptops…

Indeed. This is why I'm not as enthusiastic about an ARM desktop future as everyone else is; honestly, I'm quite terrified. The happy accident of the original IBM PC is that it had an open BIOS and open HW interfaces, which allowed HW vendors to come up with clones compatible with the rest of the ecosystem. That led to a chaotic anarcho-democracy where no vendor had control over the ecosystem, so today in the PC realm we have an open garden in which everyone can install virtually whatever HW and SW they want.

Now, Apple, Qualcomm, Microsoft, and Nvidia (through its proposed acquisition of ARM) have seen the mistakes IBM made that got it kicked out of its own ecosystem, and instead of going the standardized, open route, they are trying to create their own HW+SW walled gardens where they can rule with an iron fist and lock everything in.

I don't care if they bring 2X the performance/slimness, I just don't want to be locked into a walled garden and then be monetized through rent-seeking behavior.


The IBM PC was never "standardized" in any sense; it was simply one of many de facto standards. Early PCs didn't even support any sort of hardware enumeration; that only came much later with "Plug and Play"-compatible hardware.


NVMe requires a co-processor (which Apple calls "ANS") to be up and running before it works. This co-processor's firmware seems to have a lot of code and strings dealing with PCIe. Now, I haven't looked at the firmware in detail, but I'm willing to bet that the actual drives are on a PCIe bus (or at least were on previous hardware).

It's just that this bus is not exposed to the main CPU, only to the co-processor. The co-processor then seems to emulate (or maybe just pass through) a relatively standard NVMe MMIO space.


Yes, the raw NAND storage modules are connected over PCIe on all M1 machines, to dedicated root ports that sit behind ANS. As far as I can tell, ANS implements (parts of?) the FTL, data striping, and other high-level features, and effectively RAIDs the underlying storage modules together into a single unified NVMe interface. So in this sense, the PCIe here is just an implementation detail: the physical/logical interface Apple chose to connect its flash modules to the built-in non-volatile storage controller in the M1.
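To illustrate what "a relatively standard NVMe MMIO space" buys you: the NVMe spec fixes the controller register layout (CAP at 0x00, VS at 0x08, CC at 0x14, CSTS at 0x1C, and so on), so the host can probe version and readiness the same way whether those registers sit behind a PCIe BAR or behind ANS (modulo whatever Apple-specific quirks sit on top). A minimal, unverified sketch; on these machines the base address would come from the DeviceTree, not from PCIe enumeration:

    #include <stdint.h>

    /* Standard NVMe controller register offsets (from the NVMe base spec). */
    #define NVME_REG_CAP   0x00  /* Controller Capabilities (64-bit) */
    #define NVME_REG_VS    0x08  /* Version */
    #define NVME_REG_CC    0x14  /* Controller Configuration */
    #define NVME_REG_CSTS  0x1c  /* Controller Status */

    static inline uint32_t mmio_read32(volatile uint8_t *base, uint32_t off)
    {
        return *(volatile uint32_t *)(base + off);
    }

    /* 'base' comes from the DeviceTree on Apple silicon, or from a PCIe BAR
     * on a conventional NVMe SSD; the register layout is the same either way. */
    int nvme_controller_ready(volatile uint8_t *base)
    {
        uint32_t vs   = mmio_read32(base, NVME_REG_VS);   /* e.g. 0x00010400 = NVMe 1.4 */
        uint32_t csts = mmio_read32(base, NVME_REG_CSTS);

        /* CSTS.RDY (bit 0) indicates the controller has come up. */
        return (vs != 0 && vs != 0xffffffff) && (csts & 1);
    }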


Ah, that makes a lot of sense. Then this unified MMIO NVMe is "just" emulated inside ANS.


Are there any plans to replace the Apple firmware for ANS as well, or is that so locked down with signature checks that we can't expect to be able to?


> Like what exactly did they gain by making a weird non-PCIe NVMe situation?

When Intel did exactly that, there was a clear plausible chain of decisions leading to that madness. I have no clue what may have led Apple in this direction, but the excuses probably aren't any more pathetic than trying to explain why Intel has shipped two mutually-incompatible "solutions" for preventing NVMe drives from working out of the box with unmodified Windows.


VMD is weird, but at least it doesn't require big intrusive changes like decoupling your NVMe driver from PCIe. It's more like a weird special PCI-PCI bridge. Only needs a little extra driver: https://reviews.freebsd.org/D21383


VMD isn't the only method Intel has used to mess with how NVMe works. Their consumer chipsets going back at least to Kaby Lake had an even weirder "feature" that hid NVMe devices from PCIe enumeration and made them only accessible through proprietary interfaces on the chipset's SATA controller. Intel had to start using VMD on consumer platforms instead when AMD forced them to start providing more PCIe lanes from the CPU.


Well, their SSDs are not PCIe devices; they are connected directly to the SoC. I think their use of NVMe is just a compatibility stopgap, and they will probably move to a custom direct-access interface in the near future.


They're using NVMe with customizations because NVMe is a good standard. There's no reason to reinvent it from scratch when it works; they can just make the non-standard changes they feel like making, as they have already done.

There's no fundamental reason why NVMe has to be tied to PCIe; it just happens to be that way on existing devices.
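For what it's worth, the command format itself doesn't reference PCIe at all. A standard submission queue entry is just 64 bytes in host memory, and the transport's only job is to deliver it and ring a doorbell, which is also how NVMe-over-Fabrics gets away with running the same command set over RDMA or TCP. A sketch of the layout (stock NVMe, not accounting for whatever customizations Apple made):

    #include <stdint.h>

    /* The standard 64-byte NVMe submission queue entry. Nothing in this layout
     * is PCIe-specific; the transport just has to deliver the entry and ring a
     * doorbell. */
    struct nvme_sqe {
        uint32_t cdw0;   /* opcode, fused/PSDT flags, command identifier */
        uint32_t nsid;   /* namespace identifier */
        uint32_t cdw2;
        uint32_t cdw3;
        uint64_t mptr;   /* metadata pointer */
        uint64_t prp1;   /* data pointer: PRP entry 1 (or SGL descriptor) */
        uint64_t prp2;   /* data pointer: PRP entry 2 */
        uint32_t cdw10;  /* command-specific, e.g. starting LBA (low) for a read */
        uint32_t cdw11;  /* command-specific, e.g. starting LBA (high) */
        uint32_t cdw12;  /* command-specific, e.g. number of logical blocks */
        uint32_t cdw13;
        uint32_t cdw14;
        uint32_t cdw15;
    };

    _Static_assert(sizeof(struct nvme_sqe) == 64, "SQE must be 64 bytes");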



