GCC 8.1 Released (gcc.gnu.org)
235 points by edelsohn on May 2, 2018 | 45 comments


I did not realise GCC supported Go until reading these release notes. What advantage/reason would I have for using GCC over the standard Go compiler?


Performance: GCC does more optimization passes than the reference Go compiler.

Supported platforms.

It also has some extension pragmas and is easier to use for something like running Go bare-metal.


Before you get too excited about possible performance wins, note that gccgo turns out to be much slower than gc (the standard Go compiler) on a lot of real workloads. Here's one such benchmark, taken recently, though it's admittedly a microbenchmark [0]. Dave Cheney found similar results, though quite a while ago [1].

gccgo didn't support escape analysis for quite a long time, which meant that the performance cost of the increased heap allocations absolutely dwarfed whatever performance gain you got from GCC's smarter optimizations. I think it's recently gained support for escape analysis—it's actually quite difficult to find information about this, and whether you need a compiler flag to enable it, etc.—but I don't think that's going to tip the performance scales in gccgo's favor. EDIT: See below: 8.1 ships on-by-default escape analysis!

The primary motivations for gccgo, at least to my understanding, were a) having a second reference implementation that could find bugs in gc and the language specification, and b) support for esoteric platforms that will likely never make it into gc. The announcement of gccgo has more information about its motivation, though it makes some claims about performance that haven't stood the test of time [2].

[0]: https://groups.google.com/forum/#!msg/golang-nuts/mjTmIkWKZ6...

[1]: https://dave.cheney.net/2013/11/19/benchmarking-go-1-2rc5-vs...

[2]: https://blog.golang.org/gccgo-in-gcc-471


Looks like they solved those issues in 8.1:

    GCC 8 provides a complete implementation of the Go 1.10.1 user packages.
    The garbage collector is now fully concurrent. As before, values stored on the stack are scanned conservatively, but values stored in the heap are scanned precisely.
    Escape analysis is fully implemented and enabled by default in the Go frontend. This significantly reduces the number of heap allocations by allocating values on the stack instead.


"As before, values stored on the stack are scanned conservatively"

Why would values on the stack be garbage-collected at all? Do they mean scanning the stack for live references?


I read this as "if it is on the stack and looks like a reference when you squint your eyes, then we treat it as a reference" compared to objects on the heap where the GC seems to know the exact location of all references.


Right, that's standard conservative garbage collection. However, new results show that it's significantly worse than a precise collector (reference pending).


I'd love to see that reference, as I've not yet seen an example where this was true in practice, but I know several examples where it is definitely false (e.g. we use a modification of the Julia GC that uses conservative stack scanning to allow interop with another system, and it seems to perform very well).


Sorry for the late follow up. Hopefully you're still around to notice. Apparently there was previous HN discussion on this, but I haven't seen it (or looked for it): https://www.excelsiorjet.com/blog/articles/conservative-gc-i...


I think that only scanning the stack conservatively should not change much from a completely precise GC. The chances of misidentifying a potential reference to an enormous data structure that is collectable are pretty slim in a memory region as small as a typical stack.

Scanning the entire heap is different because of its generally much bigger size. It's far more likely then to see references where there are none.


Scanned, not collected. Specifically, scanned recursively to mark embedded objects as alive.


Author of [0] here.

Sadly, those benchmarks (at the bottom of the thread) were done with a GCC that already included all the GCC 8.1 bells and whistles - results were unchanged under 8.1.

So it's still _really_ slow.


I use gccgo to bootstrap builds of the real Go releases.


It supports x86 processors without SSE and MMX, for example: https://en.wikipedia.org/wiki/Intel_Quark The Google Go compiler only supports x86 from the Pentium Pro onward.


I can see why this would be beneficial to a small number of people, but I can't imagine a lot of people needing this feature. Isn't Intel Quark no longer supported, or at least no longer a product line that Intel is putting much effort into?


As Google has been paying Ian Taylor to write the GCC Go support since the very early days of Go, I'd say you could consider GCC to also be a "standard" Go compiler.


You didn't ask, but the main disadvantage is that it doesn't support new language constructs as quickly as the standard Go compiler.


> Some code that compiled successfully with older GCC versions might require source changes, see http://gcc.gnu.org/gcc-8/porting_to.html for details.

I followed the link, and almost all of the changes are C++ related. The exception is a change to the way some Fortran subroutines are called from C.


Hm, I don't see any mention of D, which is what I was most excited about. I know that D has been merged into gcc, but is it not there for this release yet? Will it be in the next release?

edit: Oh well, gdc 8.1 is there in the Debian repos. I'll ask Iain Buclaw what happened. Maybe gdc will get released with the next gcc. Looks like the merge wasn't quite complete in time for 8.1. I can be patient.


Apparently the process was stalled again. Check the forums.


Cool new/improved warnings:

> The -Wrestrict option introduced in GCC 7 has been enhanced to detect many more instances of overlapping accesses to objects via restrict-qualified arguments to standard memory and string manipulation functions such as memcpy and strcpy.

> The -Wold-style-cast diagnostic can now emit fix-it hints telling you when you can use a static_cast, const_cast, or reinterpret_cast.


Link to the release notes: https://gcc.gnu.org/gcc-8/changes.html


> signed integer overflow is now undefined by default at all optimization levels

Now I'm all for this at -O1 and above, but having it enabled by default and at -O0 [0] is just reckless. The default and -O0 should consist only of completely safe transformations like constant folding, with UB-based optimizations performed only when optimizations are explicitly enabled.

[0] compare https://godbolt.org/g/ufDu1n on trunk vs the behavior on 7.3 and below


I think your complaint is reasonable, but I also can see their logic in making what's considered UB consistent for -O0 and other optimization levels. The reason is that for most use cases, I think it's pretty typical to develop and debug with -O0 and compile for release with stronger optimizations and other options basically unchanged. There's an argument that in this case, making UB consistent for -O0 might expose code that relies on UB earlier in the process, whereas before it may have been found much later (or never!).

I understand that there are some use cases where teams release binaries compiled with -O0 if they've identified that avoiding unexpected effects of aggressive optimization is more important than raw performance. But in this case, it seems like the right solution is to allow such teams to specifically opt in to allowing overflow via -fwrapv.


> for most use cases, I think it's pretty typical to develop and debug with -O0 and compile for release with stronger optimizations and other options basically unchanged

Be that as it may in most cases, the compiler shouldn't make an assumption that this is always the case. If I don't pass any optimization options, and especially if I pass -O0, the correct behavior should be "no surprises".

If we do want to redefine -O0 as "dev/debug build for code that will eventually be built with optimizations" rather than just "no funny stuff with my code", there are still a couple problems with that. For one thing, a flag intended for debugging with some optimizations already exists, -Og. Also, if the goal is to expose UB sooner, why not just enable UBsan for these builds?

> I understand that there are some use cases where teams release binaries compiled with -O0 if they've identified that avoiding unexpected effects of aggressive optimization is more important than raw performance. But in this case, it seems like the right solution is to allow such teams to specifically opt in to allowing overflow via -fwrapv.

I think you're underestimating the number of people who will run "cc foo.c -o foo" and expect it to work, without thinking too much about UB as defined in the C standard, and who have never heard of -fwrapv. By virtue of passing -O1, it's safe for the compiler to assume you know what you're doing, but the default behavior should treat the user as a novice.


> Be that as it may in most cases, the compiler shouldn't make an assumption that this is always the case. If I don't pass any optimization options, and especially if I pass -O0, the correct behavior should be "no surprises".

The problem with this argument is that before the compiler even enters the picture, the C standard has always specified that signed overflow is UB. I'd argue that providing the additional guarantee that "actually, it's defined" for the -O0 case only is a bigger surprise than "funny stuff with your code." To categorize treating signed overflow as UB as doing "funny stuff" is off the mark; the behavior was always undefined.

> I think you're underestimating the amount of people who will run "cc foo.c -o foo" and expect it to work, without thinking too much about UB as defined in the C standard, and who haven't ever heard of -fwrapv. By virtue of passing "-O1" in, it's safe for the compiler to assume you know what you're doing, but the default behavior should treat the user as a novice.

I can't remember the last time I actually invoked a C compiler directly on the command line in the way you're describing (rather than through higher-level build configuration; e.g. generated Makefiles, VS solutions, whatever). I would hope such a use case for non-toy code is... rare.


You cannot expect any consistency with UB.


I don't disagree, and I'm not sure what about my comment would suggest that I expect consistency there. What I was arguing was that any code relying on UB is dangerous / broken, so it's a good idea to surface any underlying code issues at all optimization levels by standardizing how it's handled.


Yes, but a more-consistent-with-UB compiler is better to use than a less-consistent-with-UB one.


For avoiding unexpected effects on legacy code, I use: -fno-strict-aliasing -fno-strict-overflow (like -fwrapv but valid also on old gcc) -fsigned-char (for building on ARM)


If someone really wants that, can't they just compile with `-fwrapv`? That seems much safer than relying on blanket optimization levels if you really want defined signed overflow behavior.


I would say it's more reckless to depend on undefined behaviour at multiple optimization levels. Doing this ensures that you have the same bugs with all -O levels, which would be an ideal state.


It would be nice if they gave us an int with defined overflow. They won't do that, because organizations would then ban the normal int.


I greatly appreciated the implementation of AddressSanitizer in gcc7, and am happy to see it receive more attention in gcc8. Thank you!


Just to be clear, ASan is available in GCC since 4.8:

https://gcc.gnu.org/gcc-4.8/changes.html


Wow. I had no idea. Thank you!


Has anyone had luck getting newer versions of gcc to run on a Parallella board (16-core Epiphany RISC SoC and Zynq SoC (FPGA + ARM A9))?

It still has 4.9, which isn't bad; I'd just like to use new C++ functionality.


If anyone wants to try this release for C and C++ languages, I have created a Docker image based on Alpine Linux, which you may use. https://hub.docker.com/r/infinitecoder/gcc/tags/


> The C++ front-end now has experimental support for some parts of the upcoming C++2a draft, with the -std=c++2a and -std=gnu++2a options, and the libstdc++ library has some further C++17 and C++2a draft library features implemented too.

Wow, did they finish C++17 support already?


When is C++ getting simple networking support in its standard library? Is it this C++2a?


Yes; C++20 seems likely. See https://isocpp.org/std/status



Every new version of GCC breaks some kind of Boost related stuff for us. I haven't tested, but I'm going to assume it does.


Unfortunately gcc-dlang/gdc is not on this release.


The start of the release notes sounds like the start of an infomercial ;)

"Are you tired of your existing compilers? Want fresh new language features and better optimizations? Make your day with the new GCC 8.1!"



