Definitely sounds like clickbait, but this is a genuine problem I've been facing on my more complex projects. I was trying to work around it by making lots of .md files that contain context for various components of the app, as well as ones that "statefully" represent our steps along larger refactors, allowing things to persist across sessions. But that requires the hygiene/discipline of remembering to have the model update those.
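For concreteness, one of those "state" files might look something like this (a sketch; the file name, headings, and contents are all invented):

```markdown
<!-- docs/refactor-auth-state.md (hypothetical) -->
# Refactor: extract auth into its own module

## Done
- Moved token validation out of `server.ts` into `auth/validate.ts`
- Updated unit tests for the new module boundary

## Next
- Migrate session-refresh logic; `server.ts` still imports the old helper

## Gotchas
- `auth/validate.ts` must stay dependency-free; the CLI build imports it
```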
Now… how much of our tiny context window does this eat up? And what if we're working on something very different from the previous tasks; might that irrelevant context risk confusing the model?
Also, how does this work if I'm working on multiple projects across different directories? Will it know not to get them mixed up?
>When FileVault is enabled, the data volume is locked and unavailable during and after booting, until an account has been authenticated using a password. The macOS version of OpenSSH stores all of its configuration files, both system-wide and per-account, in the data volume. Therefore, the usually configured authentication methods and shell access are not available during this time. However, when Remote Login is enabled, it is possible to perform password authentication using SSH even in this situation. This can be used to unlock the data volume remotely over the network. However, it does not immediately permit an SSH session. Instead, once the data volume has been unlocked using this method, macOS will disconnect SSH briefly while it completes mounting the data volume and starting the remaining services dependent on it. Thereafter, SSH (and other enabled services) are fully available.
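In practice the unlock dance is just two SSH connections. Here's a minimal sketch, wrapped in Python only for readability; the host name and account are hypothetical, and it assumes Remote Login was enabled before the reboot:

```python
# Sketch of the remote FileVault unlock flow described above.
# Host/account are hypothetical; 'ssh' is assumed to be on PATH.
import subprocess
import time

HOST = "admin@mac.example.com"  # hypothetical

# 1. Connect while the data volume is still locked. Entering the
#    account password authenticates and unlocks the volume; macOS
#    then drops the connection while it mounts the volume and
#    starts dependent services, so a failed exit here is expected.
subprocess.run(["ssh", HOST, "exit"])

# 2. Give the machine a moment to finish booting, then reconnect;
#    this time a normal SSH session is available.
time.sleep(60)
subprocess.run(["ssh", HOST, "uptime"], check=True)
```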
Heh, reminds me of those boxes Sun used to make that only ran Java. (I don’t know how far down Java actually went; perhaps it was Solaris for the lower layers now that I think about it…)
With hypervisors and a Linux kernel doing the heavy lifting, WASM on bare metal probably just looks a lot like a regular process. I would bet Sun did something similar… minus the hypervisor.
I do miss the Solaris 10/OpenSolaris tech though. I don't know of anything that comes close to it today.
Technically, yes. I built and ported a majority of Debian packages onto Nexenta OS, but that effort (and many parallel efforts) just vanished when Oracle purchased Sun. I don't miss SVR4 packages and I never grew fond of IPS. So many open-source folk warned of the dangers of CDDL, and those warnings were largely borne out in fairly short order. Unsurprisingly, #opensolaris on irc also had a precipitous drop-off around the same time.
dtrace/zones/smf/zfs/iscsi/... and the integration between them all was top notch. One could create a zone, spin up a clone, do some computation, trash the filesystem, and then just throw the clone away... all in very little time (roughly the loop sketched below). And that whole loop happened without touching zfs directly. I know some of these things have been ported, but the ports miss the integration.

e.g.: zfs on Linux is just a filesystem; zfs on Solaris was the base of a whole stack of technology, and smf tied much of it together.

e.g.: dtrace gave you access all the way down to individual read/write operations per disk in a raid-z, and all the way up to the top of your application running inside a zone. One tool with massive reach and very little overhead.
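For flavor, here's roughly what that loop looked like, wrapped in Python purely for readability. The zone names and job path are hypothetical and exact flags varied between releases, so treat this as a sketch from memory:

```python
# Sketch of the clone-compute-discard loop; assumes a zone named
# "template" is installed and a zone "scratch" has already been
# configured (e.g. via zonecfg) to receive the clone.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Cloning a halted zone on a ZFS zonepath was a cheap zfs
# snapshot+clone under the hood, not a file copy.
run("zoneadm", "-z", "scratch", "clone", "template")
run("zoneadm", "-z", "scratch", "boot")

# Do the computation inside the zone; trash its filesystem freely.
run("zlogin", "scratch", "/opt/job/run.sh")  # hypothetical job

# Throw the clone away; the backing zfs clone goes with it.
run("zoneadm", "-z", "scratch", "halt")
run("zoneadm", "-z", "scratch", "uninstall", "-F")
```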
Not much compels me to go back to the ecosystem; I've been burned once already.
I think it was far less special than advertised; probably a stripped Solaris running a JRE and hoping no one would notice. They were dog slow, at least from my viewpoint; there was nothing magic about those boxes at all.
Thank you for the quick fix! Your steps worked perfectly.
In any case, I'd like to add that I'm hoping an ACP adapter for OpenAI Codex is in the works. I've grown pretty fond of GPT-5, and I'd like to tap into my existing ChatGPT Plus subscription rather than pay API pricing at the moment; it's the same reason I prefer Claude Code over hitting the Anthropic API directly.

Heck, an ACP adapter for Cursor CLI (itself based on Gemini CLI, right?) would be useful too, as that would also let me pick GPT-5.
True, but at least in prison you're (usually) fed… which may NOT be the case if you're fired from your job, put on a list, and blocked from the industry.
The linked report describes a case study of a prison where rat droppings were falling from the ceiling into the kitchen. It also states that 75% of surveyed prisoners reported being served spoiled or rotting food.