RH also provides immutable OSes (Kairos builds on some of their stuff) via Fedora Atomic and Fedora CoreOS, so for a single RHEL-like immutable instance I'd just go with CoreOS or Kinoite.
The main reasons to use Kairos are more around the P2P mesh, cloud-init support, deb vs rpm, etc.
chroot'ing isn't sandboxing or "containers". And I don't think it's a very good explanation, actually - not that it's necessarily easy to explain.
It looks like the author just discovered the kernel and syscalls and is sharing it - but it's not exactly new or rocket science.
The author should probably use existing sandboxing libraries to sandbox their code - and that has nothing to do with AI agents, actually; any process benefits from sandboxing, whether it runs on LLM replies or not.
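For what it's worth, here's a rough sketch of what "use existing sandbox tooling" can look like in practice - wrapping the untrusted process with bubblewrap (bwrap), which sets up the same namespace/mount isolation that actual containers rely on. This is not the author's code; the specific binds and flags are illustrative assumptions (and assume a usr-merged layout), not a hardened profile:

    # Rough sketch: run an untrusted command under bubblewrap instead of a
    # bare chroot. Assumes bwrap is installed; binds/flags are illustrative.
    import subprocess

    def run_sandboxed(cmd):
        bwrap = [
            "bwrap",
            "--ro-bind", "/usr", "/usr",   # read-only system files
            "--symlink", "usr/bin", "/bin",
            "--symlink", "usr/lib", "/lib",
            "--proc", "/proc",
            "--dev", "/dev",
            "--tmpfs", "/tmp",             # throwaway scratch space
            "--unshare-all",               # fresh PID/net/mount/IPC/user namespaces
            "--die-with-parent",
        ]
        return subprocess.run(bwrap + cmd, capture_output=True, text=True)

    print(run_sandboxed(["/usr/bin/id"]).stdout)

The same idea applies with firejail, nsjail, minijail, or the seccomp/Landlock APIs directly - the isolation primitives already exist, and it makes no difference whether the process is driven by LLM output or not.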
The problem, especially with AI, is IMO that folks are even more willing to shoot themselves in the foot.
I think this comes down to a few things:
1. Folks are trained on buffer overflows and SQL injections; they don't even question it, these are "bad". But an MCP interface to an API with god-like access to all data? The MCP of course will make sure it's safe for them! (of course, it does not). It's a learning and social issue rather than a technical one. Sometimes it makes me feel like we're all LLMs.
2. 300 things to fix, 100 vendors, and it needs to be done yesterday. As soon as you look into the vendor implementation you find 10 issues, because just like you, they have 300 things to fix and it needs to be done yesterday, so trade-offs become more dangerous. And who's going to go after you if you do not look into it too hard? No one right now.
3. Complete lack of oversight (and sometimes understanding). If you're just playing around by prompting an LLM, you don't know what it's doing. When it works, you ship. We've all seen it by now. Personally, I think this one could be improved by having a visual scheduler of tasks.
I believe this is neither. I believe it's purely a form of control - not to make money later or lose less money; rather, many are very afraid of how people would use an un-nerfed LLM.
A better way to think of it, in my eyes: an employer pays you to write an app that shows cats.
You start and you're like, yeah, no fuck that, and you write an app that shows dogs instead.
The employer comes back to you and says "we pay you to show cats; since you don't wanna do it, we'll find someone else".
Sucks but seems logical to me.
That's unrelated, of course, to the spying and illegal procedures that the new article alleges. Very curious to see what they did that was illegal / what kind of spying they would have done. But right now it's just claims.