Pretty much any Mac bought in the past 5 years can fulfil the requirements, which doesn't feel terribly unreasonable. I bet the Intel case would be straightforward to cover too, and then you'd be catering for every Mac bought in the past 6 years.
Apple are dicks about making it easy to test on older macOS revisions, but I'm sure that'd actually be easier than you'd expect too. (I have a FOSS project that has macOS as a supported target. It targets OS X Mavericks, released in 2013. I don't have any easy way of testing anything older than macOS Ventura, released in 2022, and to be honest I don't really care to figure out how to do any better. But the last time somebody complained about an OS X Mavericks incompatibility, I fixed the problem, which was actually very easy to do: so it did apparently work.) Put in a modicum of effort and I'm sure you can make this thing work for every Mac sold since like 2015, with a non-zero chance it'd work for some older ones too.
Thinking back to when BBSs were a thing, since that'd be on topic: perhaps Americans got a lucky break with the Apple II, in production from 1977-1993 (says Wikipedia) and seemingly a viable commercial platform for a measurable fraction of the period? For me, growing up in the UK in the late 20th century, the whole computer ecosystem seemed to get pretty much completely refreshed about every 10 years at the very most. Buy a BBC Micro in 1983: platform dead by 1990. Buy a ZX Spectrum in 1983: platform dead by 1991. Buy an Atari ST in 1985: platform dead by 1992. Buy an Amiga in 1986: platform dead by 1994. The PC was a bit of an exception, as the OS remained the same (for good or for ill...) for longer, but the onward march of hardware progress meant that you'd need new hardware every few years if you wanted to actually run recent software.
Anyway, my basic thinking is that if in March 2026 you are releasing some software that requires you to have a computer manufactured at some point in the 2020s, then this is hardly without historical precedent. It might even simply be the natural order of things.
Me? I set the displays to go to sleep after N minutes.
The great promise and the great disaster of LLMs is that for any topic on which we are "below average", the bland, average output seems to be a great improvement.
I think the "messy ideas" was a reference to the homepage copy "Turn your messy ideas into crystal clear specs.", not continuing the previous thought about the placeholder. I'd agree that "messy" might have more negative connotations than you intended.
Learned a few things I didn't know about exception handling, like Vectored Exception Handling. If it's possible to somehow have enough permissions to install a generic vectored exception handler complex enough to emulate arbitrary instructions, I'm not sure why the shellcode couldn't just be included there instead.
Maybe someone else will have a follow-on regarding some product that does more complicated processing in a VEH, which could be used to implement something with the same shape as this.
The disk controller may decide to write out blocks in a different order than the logical layout in the log file itself, and be interrupted before completing this work.
Just wondering how SQLite would ever work if it had zero control over this. Surely there must be some "flush" operation that guarantees that everything so far is written to disk? Otherwise, any "old" block that contains data might have not been written. SQLite says:
> Local devices also have a characteristic which is critical for enabling database management software to be designed to ensure ACID behavior: When all process writes to the device have completed, (when POSIX fsync() or Windows FlushFileBuffers() calls return), the filesystem then either has stored the "written" data or will do so before storing any subsequently written data.
A "flush" command does indeed exist... but disk and controller vendors are like patients in Dr. House [1] - everybody lies. Especially if there are benchmarks to be "optimized". Other people here have written that up better than I ever could [2].
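For what it's worth, the barrier the SQLite quote describes is just fsync() on the file descriptor: when it returns, everything written so far is (supposedly) durable before anything written later. A minimal sketch in Python, with a made-up function name and log path for illustration:

```python
import os

def durable_append(path: str, data: bytes) -> None:
    """Append data and force it to stable storage before returning."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, data)
        # The POSIX barrier from the SQLite quote: once fsync() returns,
        # this write must be stored before any subsequently written data
        # -- assuming the drive isn't lying about its cache, of course.
        os.fsync(fd)
    finally:
        os.close(fd)

durable_append("/tmp/wal.log", b"record-1\n")
```

(For newly created files you'd also fsync the containing directory, or the file's existence itself can be lost.)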
It’s worth noting this is also dependent on filesystem behavior; most that do copy-on-write will not suffer from this issue regardless of drive behavior, even if they don’t do their own checksumming.
NVMe drives do their own manipulation of the datastream. Wear leveling, GC, trying to avoid rewriting an entire block for your 1 bit change, etc. NVMe drives have CPUs and RAM for this purpose; they are full computers with a little bit of flash memory attached. And no, of course they're not open source even though they have full access to your system.
I ran qmail for more than a decade I think (after sendmail), and don't regret it. But I eventually ended up with postfix because I got tired of chasing patches to get qmail to play ball with the evolving, modern email world. At some point, the risk/reward equation inverted.
The problem with qmail is its author, DJB. He considered it 'done', which obviously isn't true. There are quite a few patched versions around, but no fork got enough traction to become a living, maintained project.
For a long time you couldn't actually fork it, because qmail didn't have a license and djb refused to add one due to his unconventional views on licenses and his general stubbornness. So the only thing you could do was distribute a "patch set". All of this also meant it wasn't packaged in many repos.
By the time he finally added a copyright notice it was kind of a "too little, too late" kind of affair.
This is the story with most "djb-ware": daemontools, djbdns, qmail. I think it's a real shame because all of these had great potential to be picked up by others after djb himself lost interest. I suppose daemontools is the most "successful", but only in the form of the runit re-implementation.