Virgil uses a simple stop-the-world semispace copying collector. I did that because it's better to start with a moving collector than a non-moving one. I think the hotly contested questions in systems programming are all complete red herrings. It's not about GC versus non-GC; it's about gracefully handling unsafe code, integrating JITed code, interfacing with the lowest-level APIs (e.g. kernel syscalls), implementing the runtime system, and dealing with data formats: network packets, file formats, in-memory data structures, etc. That stuff is way more important than whether your goddamn lists of widgets are heap-, stack-, or region-allocated.
FFI from GC'd to non-GC'd languages becomes workable if you can manually register FFI-referenced objects as roots (so that a collection cycle won't free them or anything they reference in turn) and ensure the GC can run finalizers for proper disposal (decrementing reference counts, freeing memory, and so on) whenever a GC'd object controls the lifecycle of non-GC'd resources.
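Concretely, what I have in mind on the native side is something like this (a rough sketch; every name here is invented for illustration):

```c
#include <stddef.h>

/* Hypothetical C view of the hooks a GC'd runtime could expose (names invented). */
typedef void *gc_ref;

extern void gc_add_root(gc_ref obj);     /* obj and everything it references stays live */
extern void gc_remove_root(gc_ref obj);  /* obj becomes collectable again */

/* Example: native code holds on to a callback object owned by the GC'd side. */
static gc_ref saved_callback;

void register_callback(gc_ref cb) {
    gc_add_root(cb);            /* protect it from collection while we hold it */
    saved_callback = cb;
}

void unregister_callback(void) {
    gc_remove_root(saved_callback);
    saved_callback = NULL;
}

/* The other half happens on the GC'd side: the wrapping object gets a finalizer
 * that calls back into e.g. free()/close() when the GC decides it's dead. */
```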
You pretty much described how FFI works in .NET :)
When you pass a pointer across FFI that points into an object's interior, you "pin" that object with the `fixed` statement, which toggles a bit in the object's header. Conveniently, that bit is also used by the GC during the mark phase, so the object is effectively pre-marked as live. Objects pinned this way cannot be moved, ensuring that the interior pointer remains valid.
It's not as simple implementation-wise, though - .NET's GC has multiple strategies to minimize the impact of pinned objects on GC throughput and heap size. Additionally, for objects that are expected to be long-lived and pinned throughout their entire lifetime, there's a separate Pinned Object Heap to further improve interoperability. In practical terms, pinning isn't used that often, because most of the time you pass a struct, or a pointer to a struct on the stack, that needs no marshalling or pinning. In the rare case where you need a long-lived pinned array, it is allocated with `GC.AllocateArray<T>(length, pinned: true)`.
.NET has another interesting feature that matters for FFI: ByRefs, special GC-aware pointers that can point to arbitrary memory. You can receive an `int*` plus a length from C and wrap them in a `Span<int>` (which is essentially a `ref int` plus a length). The GC can efficiently determine that such a pointer does not point into the GC-owned heap and can be safely ignored; if it does point into object memory, that memory range gets marked, and the byrefs pointing into it get updated if the object is moved. That `Span<int>` can then be consumed by most standard-library APIs that offer span overloads alongside the array ones (and more and more accept only spans, since many types with contiguous memory can be treated as one).
This works the other way too: there is a separate, efficient mechanism for pinning byrefs when passing them as plain pointers to C/C++/Rust/etc.
I meant the other direction actually, non-GC'd calling GC'd. FFI from GC'd to non-GC'd has its issues but is thankfully much better explored.
With a minimal ref-counting "GC" on the GC'd side, you just need two extra "runtime" functions (incref, decref), which happen to fit like a glove with the scope-based resource management you have in Rust/C++/Swift/C (the latter with GCC extensions).
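In C with the GCC/Clang cleanup extension, that pairing looks roughly like this (a sketch; incref/decref stand in for whatever the GC'd runtime actually exports, not any particular runtime's API):

```c
/* Sketch: tying decref to scope exit via the GCC/Clang cleanup extension. */
extern void incref(void *obj);
extern void decref(void *obj);

/* cleanup handlers receive a pointer to the variable going out of scope */
static void drop_ref(void **slot) {
    if (*slot) decref(*slot);
}
#define scoped_ref __attribute__((cleanup(drop_ref))) void *

void use_object(void *obj_from_gc_side) {
    incref(obj_from_gc_side);
    scoped_ref ref = obj_from_gc_side;  /* decref runs automatically at end of scope */
    /* ... call back into the GC'd language with ref ... */
}
```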
With a tracing GC, my hope was to prove that if you make it as "minimal" as your usual ref counting implementation, then it's also just a few functions (one to init a GC heap, one to allocate on that heap, one to collect that heap), which can hardly be considered a "runtime".
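Roughly this kind of surface, consumable from C like any other library (placeholder names, not a real API):

```c
#include <stddef.h>

/* Sketch of the "three function" surface I mean (placeholder names). */
typedef struct gc_heap gc_heap;   /* opaque; owned by the GC'd language's runtime */

gc_heap *gc_heap_new(size_t initial_bytes);       /* set up an independent heap */
void    *gc_heap_alloc(gc_heap *h, size_t size);  /* allocate; may collect internally */
void     gc_heap_collect(gc_heap *h);             /* force a collection of this heap */

/* From C/C++/Rust it reads like any other allocator: */
void demo(void) {
    gc_heap *h = gc_heap_new(1 << 20);
    void *node = gc_heap_alloc(h, 64);
    (void)node;
    gc_heap_collect(h);
}
```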
Your typical tracing GC is a large, powerful and complex beast that doesn't play nice with anything besides the language/vm it was designed for.
I am confused by this comment. All the enumerated features depend, directly or indirectly, on the memory-management approach the system uses. If one chooses badly for the target workload, many things will go wrong: excessive copying (which kills the system's performance), inscrutable crashes and logic errors, and security vulnerabilities.
Personally, I tend to think that if you nail the memory management technique all the other items will tend to work themselves out. And I think this is actually harder to get right _systemically_ than all of those other things.
> I am confused by this comment. All the enumerated features are dependent directly or indirectly on the memory management approach that the system uses.
The key feature of low-level systems programming is that there's no such thing as "the" one memory-management approach. GC with arenas can be used in a "pluggable" way for the limited case of objects referencing one another in a fully general graph (including possible cycles), while preserving other, more lightweight approaches (refcounting, simple RAII, static ownership tracking) for the bulk of cases where the full generality of GC isn't required.
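For example (a pure sketch, with every name invented): the one genuinely graph-shaped structure lives in its own collected arena, and everything else stays on malloc/free or RAII:

```c
#include <stdlib.h>

/* Invented API for a self-contained collected arena (assume it zero-initializes). */
typedef struct gc_arena gc_arena;
gc_arena *gc_arena_new(void);
void     *gc_arena_alloc(gc_arena *a, size_t size);
void      gc_arena_collect(gc_arena *a);
void      gc_arena_destroy(gc_arena *a);

/* The one case that really wants GC: an arbitrary, possibly cyclic graph. */
typedef struct node {
    struct node *edges[4];
    int payload;
} node;

void build_graph(gc_arena *a) {
    node *x = gc_arena_alloc(a, sizeof(node));
    node *y = gc_arena_alloc(a, sizeof(node));
    x->edges[0] = y;
    y->edges[0] = x;            /* a cycle: refcounting alone would leak this */
}

void everything_else(void) {
    char *buf = malloc(4096);   /* plain ownership; no GC involved */
    /* ... */
    free(buf);
}
```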
> if you nail the memory management technique all the other items will tend to work themselves out.
In my experience, implementing the runtime for a GC'd language in a non-GC'd language leads you down one of two well-trodden roads full of footguns and booby traps: either conservative on-the-stack scanning, or a huge PITA handle system with a bug tail measured in decades. So you want to implement a GC'd language in a GC'd language. But that's probably not what you meant.
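For the unfamiliar, a "handle system" is roughly this kind of double indirection (a bare sketch; a real one also needs scopes, overflow handling, and thread safety, which is exactly where the bug tail lives):

```c
#include <stddef.h>

/* Sketch of a handle table: C code gets a Handle*, never a raw object pointer.
 * After a moving collection the runtime only rewrites the table slots, and the
 * C side must re-read through the handle after every call that might collect. */
typedef struct { void *obj; } Handle;

#define MAX_HANDLES 1024
static Handle handle_table[MAX_HANDLES];  /* slots the GC scans and updates */
static size_t handle_top;

Handle *new_handle(void *obj) {
    /* no overflow or scope handling here */
    handle_table[handle_top].obj = obj;
    return &handle_table[handle_top++];
}

#define DEREF(h) ((h)->obj)  /* must not be cached across anything that can allocate */
```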
To the larger point, no, absolutely not. There are so many other considerations in systems programming that this whole "GC or not" debate is a waste of time. Read the comment again. Systems programming deals with constraints imposed by other hardware and software, as well as the need to look at the implementation guts of things, generate new code, and so on. Solving the language's memory-management problem is only tangentially related: even if you do solve it, you still have all those other problems left to solve.