That is a terrible assumption to make. Regular lacquer, for example, does poorly at the temperatures commonly encountered when preparing food, and it's basically a mix of solvents.
The solvents evaporate when the lacquer cures, right? A lacquered spatula or spoon could leach some plasticizers when heated up. But who on earth would go to the trouble of spray lacquering a spatula? It doesn't seem like a real concern. Wooden spoons from IKEA aren't gonna poison you!
Flexner's "Understanding Wood Finishing" has a section about "the myth of food safety" that pretty directly states that food safety isn't a serious concern for fully cured finishes.
If you're into Nix, check out https://github.com/mbrock/filnix. It isn't integrated into or maintained in upstream Nixpkgs yet, but it lets you replace Nix/NixOS packages with Fil-C versions quite easily.
I've got a pure Go journald file writer that works to some extent: it doesn't split, compress, etc., but it produces journal files that journalctl/sdjournal can read, even while they're still being written. It's only been stress-tested by running a bunch of parallel integration tests, I'll most likely not maintain it seriously, it's total newbie garbage, etc., but it may be of interest to someone. I haven't really seen any other working journald file writers.
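If anyone wants to kick the tires on files like that, journalctl can read a standalone journal file directly; a rough check, where test.journal is just a placeholder for whatever the writer produced:

    # dump entries from one file, then check its internal consistency
    journalctl --file=./test.journal --output=json-pretty | head -40
    journalctl --file=./test.journal --verify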
I see what you mean, but I think it's a lot less pernicious than astrology. There are plausible mechanisms, it's at least possible to do benchmarking, and it's all plugged into relatively short feedback cycles of people trying to do their jobs and accomplish specific tasks.

Mechanistic interpretability stuff might help make the magic more transparent & observable, and—surveillance concerns notwithstanding—companies like Cursor (I assume also Google and the other major labs, modulo self-imposed restrictions on using inference data for training) are building up serious data sets that can pretty directly associate prompts with results.

Not only that, I think LLMs in a broader sense are actually enormously helpful specifically for understanding existing code—when you don't just order them to implement features and fix bugs, but use their tireless ability to consume and transform a corpus in a way that helps guide you to the important modules, explains conceptual schemes, analyzes diffs, etc. There are a lot of critical points to be made, but we can't ignore the upsides.
I'd been imagining taking the Zig Language Server and adding some refactorings to it; it only had the bare minimum, like Rename Symbol. It seemed like a huge project with so much context to get familiar with, so I put it off indefinitely. Then on a whim I decided to just ask GPT-5 (this was before Codex, even, I think?) to give it a go. I plopped it down in the repo and said, basically, implement "Extract Function". And it just kind of... did. The code wasn't beautiful and I could barely understand it, which is perhaps partly the fault of the existing codebase not exactly being optimized for elegance, but it actually worked. On the first try! We went on to implement a few more refactorings. Eventually I realized the code we were churning out actually needed major revision and rewriting, but it took me from less than zero to "hey, this is actually provably possible and we have a working PoC" in, like, fifteen minutes. Which is pretty insanely valuable.
And, fairly uniquely, LLVM has an LLVM_PARALLEL_LINK_JOBS setting that is distinct from the number of parallel jobs for everything else. I think I was using that 15 years ago.
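It's a CMake cache variable, and as far as I know it only takes effect with the Ninja generator (it's implemented via Ninja job pools). Something like this, assuming a build directory next to the llvm/ source tree, lets compiles use every core while capping links at two:

    cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_PARALLEL_LINK_JOBS=2 ../llvm
    ninja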
I wish GCC had it. I have a quad-core machine with 16 GB RAM that OOMs when building recent GCC (15 and HEAD for sure; I can't remember whether 14 is affected). Enabling even 1 GB of swap makes it work. The culprit is four parallel link jobs needing ~4 GB each.
There are only four of those link jobs, so a -j8 build (e.g., with HT) is no worse memory-wise.
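And for anyone else hitting the OOM, the 1 GB swap workaround is just the usual swapfile dance, roughly:

    sudo fallocate -l 1G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # undo it later with: sudo swapoff /swapfile && sudo rm /swapfile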
Look at how fanatical the compatibility actually is. Building Postgres or MySQL is conceivable, but it will probably require some changes. (SQLite compiles and runs with zero changes right now.)
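To get a feel for what "zero changes" means, the SQLite amalgamation builds the usual way; a rough sketch, where /opt/filc/bin/clang is just a placeholder for wherever your Fil-C compiler actually lives:

    # placeholder path; point CC at your actual Fil-C clang
    CC=/opt/filc/bin/clang
    $CC -O2 shell.c sqlite3.c -lpthread -ldl -lm -o sqlite3
    ./sqlite3 :memory: 'select sqlite_version();'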
SQLite runs about 5 times faster compiled with GCC (13.3.0) than compiled with Fil-C, and the resulting GCC binary is 13 times smaller.
Interesting! I guess that's from your standard benchmark setup. Please note that Fil-C makes no secret of having a performance penalty. It's definitely a pre-1.0 toolchain that has only recently started to pick up some momentum. The author is eager to keep improving it, and seems to think there's still plenty of low-hanging and medium-hanging fruit to pick.
It does (or did, at some point) pass the thorough SQLite test suite, so at least it's probably correct! The famous SQLite test coverage and general proven quality might make SQLite itself less interesting to harden, but in order to run less comprehensively verified software that links with SQLite, we have to build SQLite with Fil-C too.
If you run Nix (whether on NixOS or elsewhere), you can do `cachix use filc` and then `nix run github:mbrock/filnix#sqlite`, and it should drop you into a Fil-C SQLite shell after downloading the runtime dependencies from my binary cache (no warranty)!
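Once it drops you into the shell, `.version` is a nice sanity check, since it also reports which compiler built the binary:

    $ nix run github:mbrock/filnix#sqlite
    sqlite> .version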