Before you get too excited about possible performance wins, note that gccgo turns out to be much slower than gc (the standard Go compiler) on a lot of real workloads. Here's one such benchmark, taken recently, though it's admittedly a microbenchmark [0]. Dave Cheney found similar results, though quite a while ago [1].
gccgo didn't support escape analysis for quite a long time, which meant that the performance cost of the increased heap allocations absolutely dwarfed whatever performance gain you got from GCC's smarter optimizations. I think it's recently gained support for escape analysis—it's actually quite difficult to find information about this, and whether you need a compiler flag to enable it, etc.—but I don't think that's going to tip the performance scales in gccgo's favor. EDIT: See below: 8.1 ships on-by-default escape analysis!
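To make that concrete, here's a minimal sketch (my own example, not from the gccgo docs) of the kind of allocation escape analysis can eliminate:

    package main

    import "fmt"

    type point struct{ x, y int }

    // The address of p never leaves sum, so an escape-analysing compiler
    // can keep the value on the stack instead of allocating it.
    func sum() int {
        p := &point{1, 2}
        return p.x + p.y
    }

    // leak returns the pointer, so the value must outlive the call and is
    // forced onto the heap regardless of escape analysis.
    func leak() *point {
        return &point{3, 4}
    }

    func main() {
        fmt.Println(sum(), leak())
    }

With the standard gc toolchain you can see its decisions via go build -gcflags=-m; I'm not sure whether gccgo has an equivalent diagnostic flag.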
The primary motivations for gccgo, at least to my understanding, were a) having a second reference implementation that could find bugs in gc and the language specification, and b) support for esoteric platforms that will likely never make it into gc. The announcement of gccgo has more information about its motivation, though it makes some claims about performance that haven't stood the test of time [2].
GCC 8 provides a complete implementation of the Go 1.10.1 user packages.
The garbage collector is now fully concurrent. As before, values stored on the stack are scanned conservatively, but values stored in the heap are scanned precisely.
Escape analysis is fully implemented and enabled by default in the Go frontend. This significantly reduces the number of heap allocations by allocating values on the stack instead.
I read this as "if it is on the stack and looks like a reference when you squint your eyes, then we treat it as a reference" compared to objects on the heap where the GC seems to know the exact location of all references.
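In other words, something like this toy sketch (my own illustration, nothing like the actual runtime code):

    package main

    import "fmt"

    // A conservative scanner walks the stack word by word and treats any
    // word that falls inside the heap's address range as a live pointer,
    // even if it's really just an integer that happens to look like one.
    func conservativeRoots(stackWords []uintptr, heapStart, heapEnd uintptr) []uintptr {
        var roots []uintptr
        for _, w := range stackWords {
            if w >= heapStart && w < heapEnd {
                roots = append(roots, w) // possibly a false positive
            }
        }
        return roots
    }

    func main() {
        // 42 is skipped, but 0x1500f0 lands in the pretend heap range and
        // would keep whatever object lives at that address alive.
        fmt.Println(conservativeRoots([]uintptr{42, 0x1500f0}, 0x100000, 0x200000))
    }

Precise scanning of the heap, by contrast, relies on type information, so the collector knows exactly which words are pointers.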
Right, that's standard conservative garbage collection. However, new results show that it's significantly worse than a precise collector (reference pending).
I'd love to see that reference, as I've not yet seen an example where this was true in practice, but I know of several examples where it is definitely false (e.g. we use a modification of the Julia GC that uses conservative stack scanning to allow interop with another system, and it seems to perform very well).
I think scanning only the stack conservatively shouldn't change much compared to a completely precise GC. The chances of misidentifying a word as a reference to an enormous but collectable data structure are pretty slim in a memory region as small as a typical stack.
Scanning the entire heap conservatively is a different story because of its generally much larger size; there it's far more likely to see references where there are none.
Sadly, those benchmarks (at the bottom of the thread) were done with a GCC that already included all the GCC 8.1 bells and whistles; the results were unchanged under 8.1.
[0]: https://groups.google.com/forum/#!msg/golang-nuts/mjTmIkWKZ6...
[1]: https://dave.cheney.net/2013/11/19/benchmarking-go-1-2rc5-vs...
[2]: https://blog.golang.org/gccgo-in-gcc-471