(author here) If you run the parser under a debugger like lldb and try to inspect the AST of a program, it appears as an array of u64. Not very useful, unless you build special support for debuggers (such as a Python script to unpack it in lldb). Compare that to a tree of pointers, where you can "expand" nodes without any extra effort.
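To make that concrete, here's a rough Rust sketch of the kind of layout I mean (made-up names, not the article's actual types):

    // A sketch of a flat AST. All nodes live in one contiguous
    // buffer; children are referenced by index, not by pointer.
    struct NodeId(u32);

    enum NodeKind { Add, Mul, IntLiteral }

    struct Node {
        kind: NodeKind,
        lhs: NodeId, // index into `nodes`; meaning depends on `kind`
        rhs: NodeId,
    }

    struct Ast {
        // A debugger can print the raw fields here, but it can't
        // follow `lhs`/`rhs` the way it auto-expands pointers.
        nodes: Vec<Node>,
    }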
(author here) I agree that it's a lot of complexity, and I acknowledge this in the article: You can get quite far with just a bump allocator.
I didn't go into this at all, but the main benefit of this design is how well it interacts with the CPU cache. It has almost no effect on the parser itself, because the parser typically only writes the AST and rarely reads it back. I believe that subsequent stages, which traverse the tree repeatedly, benefit much more.
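Reusing the sketch above: a later pass can walk the node buffer in index order, which is about as cache- and prefetcher-friendly as traversal gets:

    // Because all nodes sit in one contiguous allocation, a pass
    // that only needs per-node info can scan linearly instead of
    // chasing pointers all over the heap.
    fn count_literals(ast: &Ast) -> usize {
        ast.nodes
            .iter()
            .filter(|n| matches!(n.kind, NodeKind::IntLiteral))
            .count()
    }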
(By the way, I am a huge fan of your work. Crafting interpreters was my introduction to programming languages!)
Right, that's the first category, "It's absolutely possible to bork ownership and have multiple threads trample a struct", resulting in undefined behavior.
We did recently add an export for LLMs[1], but weren't quite confident in how the big models would handle it. The biggest issue we kept running into was that the models would prefer older APIs over the latest ones. I tested it just now with ChatGPT, and it seems to be doing a lot better!
The export is kept up to date with the latest contents of our docs, which are updated with every release. Sometimes a bit more frequently, if we're doing drive-by doc fixes.
GC gives you a nice cushion because it makes allocations fast. However, with a little bit of thought put into memory allocation (such as using arenas with bump allocation), you'd easily beat Go with Rust. The author of SWC did exactly that: https://swc.rs/docs/benchmarks
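As a minimal illustration of the arena approach (using the bumpalo crate; this is a sketch, not SWC's actual code):

    use bumpalo::Bump; // https://crates.io/crates/bumpalo

    // AST nodes borrow from the arena, so no Rc/Box per node.
    enum Expr<'a> {
        Int(i64),
        Add(&'a Expr<'a>, &'a Expr<'a>),
    }

    fn main() {
        // One arena per parse; each allocation is just a pointer bump.
        let arena = Bump::new();
        let lhs = arena.alloc(Expr::Int(1));
        let rhs = arena.alloc(Expr::Int(2));
        let _sum = arena.alloc(Expr::Add(lhs, rhs));
        // No per-node frees: the whole AST goes away with the arena.
    }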