
I work at a unionized company in Germany and didn’t have to join the union or pay any dues to start working.

Technically, employees who aren’t part of the union aren’t entitled to the benefits it negotiated. In practice everyone gets the same benefits anyway, because otherwise the employer would create a huge incentive for everyone to join the union, which would make strikes hurt even more.


IIRC we know about their internal Linux port because of some comment left in the open source XNU release.


That merge only contains fixes because of how the development model works: there’s a merge window during which new features are merged, which then becomes -rc1. After that only fixes are allowed until the final release of that version, and then the merge window opens again.


That makes sense, so I just picked another commit at random without a -rc tag:

https://github.com/torvalds/linux/commit/c60c152230828825c06...

  The fix is to change && to ||
This seems like the exact type of bug that a unit test could prevent.
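
For illustration, a minimal sketch (hypothetical code, not the actual change from that commit) of how a && vs. || mix-up slips through and how even a trivial unit test would catch it:

  #include <assert.h>
  #include <stdbool.h>

  /* A range is invalid if EITHER bound is out of range.
   * The buggy version used && and therefore only rejected
   * ranges where BOTH bounds were bad. */
  static bool range_invalid(int start, int end, int max)
  {
      return start < 0 || end > max;  /* fixed; the bug was && here */
  }

  int main(void)
  {
      assert(range_invalid(-1, 5, 10));   /* bad start only: && misses this */
      assert(range_invalid(0, 100, 10));  /* bad end only: && misses this too */
      assert(!range_invalid(0, 5, 10));   /* valid range */
      return 0;
  }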


The list was at least updated after iOS:

I've found wiibrew.org and hackmii.com in there, which are both Wii homebrew sites that became popular around 2008/2009 and probably declined in popularity starting in ~2012/2013.

Then there's also wiiu-developers.nintendo.com and wiiudaily.com which probably didn't exist before late 2012 or early 2013 when the WiiU was released.


Will Deacon replied on Twitter showing that Linux, at least, handles this correctly [1][2], and I assume the same is true for XNU and Windows as well.

[1] https://twitter.com/WillDeacon/status/1506375874161086471

[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


I don't think ARMv8 requires EL3, but even if it did, there is no EL3 on the M1.

Booting Windows natively would require Microsoft's support since some rather invasive changes to the kernel would be required (FIQ support, DART instead of SMMU) on top of implementing drivers for everything.


It’s been a while since my last physics lectures so this might be wrong, but the way I understand it:

We don’t have a good model of quantum gravity yet, but our best guess is that the force carrier of gravity is a hypothetical particle called the graviton. This particle would have no mass, and therefore the length scale of gravity would be infinite. This matches both the Newtonian and the general relativity picture of gravity.
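
The standard way to see the link between carrier mass and range is the Yukawa potential (textbook material, added here for context): a force carried by a particle of mass m falls off as

  V(r) \propto \frac{e^{-r/\lambda}}{r}, \qquad \lambda = \frac{\hbar}{mc}

so for a massless carrier \lambda \to \infty and the familiar 1/r potential of Newtonian gravity (and Coulomb's law) is recovered.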

This is different from the source of gravity, which would be the (gravitational) mass of an object, or more accurately the components of the stress-energy tensor, which describe the density and flux of energy. That’s also the point where I have to start hand-waving, because my knowledge becomes very fuzzy there.

The same is true for the electromagnetic interaction: the force carrier there is the photon, which is also massless, so the length scale is infinite as well.


To be fair, we deliberately used very obscure hardware "features" which we knew were not implemented by any emulators and probably not used by any games to build these protections :-)

I'd have to dig up the old code but I'm fairly sure some of them rely on an operating system (Nintendo IOS, unrelated to both Cisco's IOS and Apple's iOS) running on the co-processor (nicknamed "Starlet"). Dolphin doesn't emulate that part at all because IOS exposes a high-level interface that can just be emulated instead. Works amazingly well for games, but will probably trip our protections.


What I find interesting in addition to that is how much slack most software allows. I remember lurking on the alt groups when emulators were being discussed for things like the Amiga or PC. They were saying sub-cycle accuracy would be totally necessary for anything to work at all. Yet most software seems pretty chill with 'sort of close' results. It is oddly counterintuitive: they were right in principle but ended up being wrong (mostly). For some bits, though, you do need that accuracy. Mostly you don't.

I think a lot of software companies would do what you did: check for some sort of thing that should be there, or shouldn't be, or too much of something, where the emulator would or would not have it. In many cases it would be things like checking whether 512k of memory was available when the real box only had 128k. That was usually aimed at things like copiers, I think.


Yup, I'm pretty sure many software companies have done similar things. It's a bit easier on video game consoles where your software runs on the very same hardware everywhere and you can pull off more subtle tricks. There were a few commercial games that also used a similar approach [1].

[1] https://dolphin-emu.org/blog/2017/02/01/dolphin-progress-rep...


Is it weird that you sound proud about this? Is there something noble about doing work that makes lives harder for hobbyists trying to preserve video games for the future, and has nearly zero impact on the actual sales of the game at release?


> Is it weird that you sound proud about this? Is there something noble about doing work that makes lives harder for hobbyists trying to preserve video games for the future, and has nearly zero impact on the actual sales of the game at release?

What makes you think we did it for any of those reasons? And how does us abusing hardware bugs make video game preservation any harder?

This was not for a commercial game, this was for an entry point to load your own software on a locked-down video game console.

And back then people were selling our (free!) software, so we added those protections to make sure we could show a "you have been scammed if you paid for this" screen that couldn't be removed. Unfortunately that also meant our loader wouldn't run in emulators, but that didn't matter at all: those could just directly launch actual .elf files anyway and didn't need the detour through the Homebrew Channel.

The code (minus those protections) for the Homebrew Channel is also available as open source these days.


I'm clearly not grasping the full context here. Sorry for the snide remark.


No worries, this stuff happened over a decade ago. I can sometimes barely remember the details :)


NVMe requires a co-processor (which Apple calls "ANS") to be up and running before it works. This co-processor firmware seems to have a lot of code and strings dealing with PCIe. Now I haven't looked at the firmware in detail but I'm willing to bet that the actual drives are on a PCIe bus (or at least used to be on a PCIe bus on previous hardware).

It's just that this bus is not exposed to the main CPU but only to this co-processor instead. The co-processor then seems to emulate (or maybe it's just a passthrough) a relatively standard NVMe MMIO space.


Yes, the raw NAND storage modules are connected over PCIe on all M1 machines, to dedicated root ports that are behind ANS. As far as I can tell ANS implements (parts of?) the FTL and data striping and other high-level features, and effectively RAIDs together the underlying storage modules into a single unified NVMe interface. So in this sense, the PCIe here is just an implementation detail, the physical/logical interface Apple chose to connect its Flash modules to the built-in non-volatile storage controller in the M1.
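
As a purely illustrative sketch of what striping logical blocks across NAND modules could look like (a hypothetical RAID-0-style layout; Apple's actual FTL is not public and is certainly more involved):

  #include <stdint.h>
  #include <stdio.h>

  struct stripe_loc {
      uint32_t module;      /* which NAND module */
      uint64_t module_lba;  /* block index within that module */
  };

  /* Round-robin striping: consecutive logical blocks alternate
   * between modules, like a simple RAID-0. */
  static struct stripe_loc stripe_map(uint64_t lba, uint32_t n_modules)
  {
      struct stripe_loc loc = {
          .module     = (uint32_t)(lba % n_modules),
          .module_lba = lba / n_modules,
      };
      return loc;
  }

  int main(void)
  {
      for (uint64_t lba = 0; lba < 8; lba++) {
          struct stripe_loc loc = stripe_map(lba, 2);
          printf("lba %2llu -> module %u, block %llu\n",
                 (unsigned long long)lba, loc.module,
                 (unsigned long long)loc.module_lba);
      }
      return 0;
  }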


Ah, that makes a lot of sense. Then this unified MMIO NVMe is "just" emulated inside ANS.


Are there any plans to replace the Apple firmware for ANS as well, or is that so locked down with signature checks that we can't expect to be able to?


The "queue" format itself is incredibly similar to the normal NVMe queue.

The normal queue is (more or less) a ring buffer with N slots in memory and head/tail pointers. You append a command to the next free slot and increase the tail by writing to a doorbell register. Once the controller is done, it increases the head the same way.

Apple's "queue" instead is just a memory region without those head/tail pointers. Command submission now works by again putting the request into a free slot followed by just writing the ID of that slot to a MMIO register. Once a command is done the CPU again gets an interrupt and can just take the command out of the buffer again.

This probably makes the driver a little bit easier to implement.
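
A rough sketch of the difference between the two submission paths, based purely on the description above (struct layouts and register names are made up for illustration):

  #include <stdint.h>

  #define QUEUE_DEPTH 64

  struct nvme_cmd { uint8_t bytes[64]; };  /* a 64-byte command entry */

  /* Standard NVMe: a ring buffer plus a tail doorbell register. */
  struct nvme_sq {
      struct nvme_cmd slots[QUEUE_DEPTH];
      uint32_t tail;
      volatile uint32_t *db_tail;          /* MMIO doorbell */
  };

  static void nvme_submit(struct nvme_sq *sq, struct nvme_cmd cmd)
  {
      sq->slots[sq->tail] = cmd;
      sq->tail = (sq->tail + 1) % QUEUE_DEPTH;
      *sq->db_tail = sq->tail;             /* ring the doorbell */
  }

  /* Apple-style (as described above): no head/tail pointers; the
   * host picks any free slot and writes its ID to an MMIO register. */
  struct apple_nvme_q {
      struct nvme_cmd slots[QUEUE_DEPTH];
      uint64_t busy;                       /* bitmap of in-flight slots */
      volatile uint32_t *db_slot;          /* hypothetical slot-ID register */
  };

  static int apple_submit(struct apple_nvme_q *q, struct nvme_cmd cmd)
  {
      for (uint32_t id = 0; id < QUEUE_DEPTH; id++) {
          if (!(q->busy & (1ULL << id))) {
              q->busy |= 1ULL << id;       /* freed again on completion IRQ */
              q->slots[id] = cmd;
              *q->db_slot = id;            /* submit by slot ID */
              return (int)id;
          }
      }
      return -1;                           /* no free slot */
  }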

On top of that, a similar structure (which identifies the DMA buffers that need to be allowed) also has to be put into their NVMe IOMMU with a reference to the command buffer entry. The slightly weird thing about the encryption is that you put the key/IV into this buffer instead of into the normal queue. My best guess is that this IOMMU design pushed them to also simplify the command queue to make the matching easier.

Hiding the encryption part inside the IOMMU also makes sense for them, because the whole IOMMU management lives inside a highly protected area of their kernel with more privileges, while the NVMe driver itself is just a regular kernel module which possibly doesn't have access to the keys.


Ah, I was thinking of what they did for the T2 Macs (change the queue entry size), not the new changes for M1. But yeah, once they're doing proprietary variants of the NVMe spec, they can do whatever they want and probably had a good reason to make this change too.

(For those following: sven has been working on the NVMe stuff for Asahi Linux recently, he knows about this more than me)

