Unlike C++, Go enforces that its dependency graph be a DAG. This drastically reduces compilation time, but it also reduces the headache that comes with decoupling highly coupled code, because it restricts the extent to which your code can be messy to begin with.
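For concreteness, a minimal sketch of that enforcement (the module path and package names are made up): two packages that import each other simply don't build.

    // a/a.go
    package a

    import "example.com/m/b" // hypothetical module path

    func A() { b.B() }

    // b/b.go
    package b

    import "example.com/m/a" // this import closes the cycle

    func B() { a.A() }

    // "go build ./..." rejects this with an import cycle error instead of compiling it.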
The package system also enforces (at compile-time) that every imported package be used (and also that every named identifier be defined, which most dynamic/interpreted languages can't do). This applies not just to imported packages, mind you, but to local variables as well - if you declare a variable that's never read later, you can't compile the program.
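A minimal sketch of that rule in action (the identifiers here are made up): both the unused import and the unused local variable are hard errors, not warnings.

    package main

    import (
        "fmt"
        "os" // compile error: "os" is imported but never used
    )

    func main() {
        x := 42 // compile error: x is declared but never read
        fmt.Println("hello")
    }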
That's really helpful when refactoring, because it makes it easy just to move a bunch of code between files, then follow the breadcrumb trail of compiler errors (not warnings!) to figure out what still needs to be fixed. The compiler won't tell you everything you need to do, but it's sort of like having a Roomba helping pick up after you while you clean your house manually.
Yes, for those genius programmers who never make any mistakes, this may not be much of an improvement. But for those of us who don't trust our human brains as much and want to be absolutely sure that these silly errors don't slip through, it takes a huge load off the mind.
Unlike C++, Go enforces that its dependency graph be a DAG.
In C++, header dependencies are also a DAG, since include guards prevent cycles (and multiple includes). What makes Go faster are a few things: (1) C++ headers contain templates, which are slow to compile; (2) Go only looks at direct imports and uses the compiled form of those imports, rather than recursing over their imports (again); (3) Go is simpler to parse; and (4) there is no overloading, so symbol/method resolution is simpler.
I also think that the advantage is often overstated. C++ is a nightmare in this respect, but C programs and libraries often compile very fast (on my current machines, running configure often takes much more time than the actual compilation), and the same applies to e.g. Java code.
That's really helpful when refactoring,
And annoying for testing; the printf example has been beaten to death. (Yes, I know that you can add a line such as var use = fmt.Println.)
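For reference, this is the usual shape of that workaround (a quick sketch, not a recommendation): bind the import to the blank identifier so it still counts as "used" while the actual print calls are commented out.

    package main

    import "fmt"

    // Keeps the fmt import alive even while every fmt call below is
    // temporarily commented out during debugging.
    var _ = fmt.Println

    func main() {
        // fmt.Println("debug output, currently disabled")
    }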
But for those of us who don't trust our human brains as much and want to be absolutely sure that these silly errors don't slip through
It's always surprising how Go fans can sell a feature that any strongly statically typed language has always had (easy refactoring by letting the type system do the work) as something unique and new ;).
Compilation units can rely on each other through forward declarations, so the dependency graph is not acyclic. Header inclusion does form a DAG within a compilation unit, I guess.
A forward declaration is not really a compile-time dependency.
If you are forward declaring a function, you are just promising that it is present during linking. So, during compilation it is not an edge in the graph.
If you are forward declaring a data type such as a class, a full definition needs to be visible at its first use, unless you are only using the type through a pointer:
    class A;        // forward declaration: A's full definition is not needed here
    class B {
        A *d_a;     // fine: a pointer member doesn't require A's size or layout
        // ...
    };
In this case it is not really a dependency either: the compiler does not need to know the size of A, because d_a is a pointer. Once you start to dereference d_a, A's definition needs to be fully visible, which is done via headers, and those form a DAG through include guards.
Could you give an example where C++ dependencies are not a DAG during compilation?
Yes, but it is still a dependency. By which I mean the software won't run if you don't supply the necessary thing at resolution time (which is possibly quite late: well into runtime, if you are on a system with lazy linking). Go (and some other languages, like OCaml) enforces that such dependencies form a DAG: C++ does not.
I don't think this has much to do with compilation speed though.
Very interesting to hear. Any chance you could expand on this a bit more?