Oh, you didn't read far enough; the description is written with a sense of humor, essentially parodying the very "intelligence-insulting descriptions" that one sees frequently on the internet.
GNU Make starts off with targets, and then walks the build graph (target -> source) to find the source files that determine which targets need to be rebuilt. And GNU Make has to scan the files.
Tup starts off with the list of sources which have changed, and then walks the build graph (source -> target) to find out which targets need to be rebuilt. And Tup gets notified of file changes.
So where Make scans all targets and all source files, Tup gets told which source file has changed, and then rebuilds only the necessary targets.
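To make the difference concrete, here is a minimal sketch in Python of the two strategies, using a made-up `deps` graph (nothing here is Make's or Tup's actual code): a Make-style pass stats every source against its target, while a Tup-style pass starts from a list of changed files and walks the reverse edges.

    import os
    from collections import defaultdict

    # Hypothetical build graph: each target maps to the sources it is built from.
    deps = {
        "app":    ["main.o", "util.o"],
        "main.o": ["main.c", "util.h"],
        "util.o": ["util.c", "util.h"],
    }

    # Reverse edges (source -> targets), used by the Tup-style pass.
    rdeps = defaultdict(set)
    for target, sources in deps.items():
        for src in sources:
            rdeps[src].add(target)

    def mtime(path):
        try:
            return os.stat(path).st_mtime
        except FileNotFoundError:
            return -1.0  # missing target counts as out of date

    def make_style_outdated():
        """Make-style: stat every target and every source (work grows with total file count)."""
        return {t for t, srcs in deps.items()
                if any(mtime(s) > mtime(t) for s in srcs)}

    def tup_style_outdated(changed):
        """Tup-style: start from the changed files and walk source -> target."""
        dirty, stack = set(), list(changed)
        while stack:
            node = stack.pop()
            for target in rdeps.get(node, ()):
                if target not in dirty:
                    dirty.add(target)
                    stack.append(target)  # a dirty target dirties its dependents too
        return dirty

    # make_style_outdated() touches every file; tup_style_outdated(["util.h"])
    # only visits util.h's dependents: main.o, util.o, and then app.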
That being said, the following comparison is a bit odd:
I don't think it's a requirement of GNU Make that building a project with N source files takes exponential time. I think it's just that the GNU Make code is terrible.
GNU Make doesn't take exponential time, it takes ~linear time. The Y value on the graphs is exponential, because the input is growing exponentially.
The graph is just rubbing in that a noop Tup build is O(1) instead of Make's O(n) by showing crazy large n. Which is certainly great on Tup's part![1]
The Tup author's description of Tup's algorithm in comparison has always rubbed me the wrong way, though. It isn't some entirely new generation of build system like they present in the Tup paper. It's just caching stat(2) between iterations[2]. The build naturally flows down like in the Make diagram on the main Tup page; but with Tup we cache the status of each node, and become informed when one of them becomes invalid; the upward-flowing arrows of the Tup diagram are just the cache invalidation of the lower nodes flowing up to the higher nodes. This is the totally natural way of implementing caching if we were to add that to GNU Make.
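For the caching point, here is a minimal sketch (my own hypothetical `BuildCache`, not Tup's implementation) of what "caching stat(2) between iterations" amounts to: each node's up-to-date status is remembered across builds, and a change notification for a file just invalidates that node and everything reachable through the reverse edges, which is exactly the upward-flowing arrows.

    from collections import defaultdict

    class BuildCache:
        """Remembers which nodes were clean after the last build; a change
        notification invalidates a node and everything that depends on it."""

        def __init__(self, deps):
            self.deps = deps                   # target -> list of sources
            self.rdeps = defaultdict(set)      # source -> set of targets
            for target, sources in deps.items():
                for src in sources:
                    self.rdeps[src].add(target)
            self.clean = set()                 # nodes believed up to date

        def mark_built(self, node):
            self.clean.add(node)

        def notify_changed(self, path):
            """Cache invalidation flowing up the DAG from the changed file."""
            stack, seen = [path], set()
            while stack:
                node = stack.pop()
                if node in seen:
                    continue
                seen.add(node)
                self.clean.discard(node)
                stack.extend(self.rdeps.get(node, ()))

        def needs_rebuild(self, target):
            return target not in self.clean

    # Usage with the hypothetical graph from above:
    # cache = BuildCache(deps); cache.mark_built("main.o"); cache.mark_built("app")
    # cache.notify_changed("util.h") then marks main.o, util.o, and app dirty again.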
So why don't GNU Make and other systems cache the DAG of stat(2) calls? As the saying goes, cache invalidation is hard. Tup gets away with caching by receiving an a priori list of filesystem changes, so it knows which nodes to invalidate; other build systems have to scan the filesystem for changes. Tup gets this list of changes by having a FUSE filesystem sit between the user and the actual filesystem, logging all changes. While this is great for many users, it isn't quite general. It will break if we edit anything "offline", perhaps on a thumb drive on another computer. It will break in a number of different NFS setups. I don't mean to say that the caching doesn't have value, but it isn't the silver bullet that the Tup author makes it out to be.
[1]: In the Tup paper, the author gives some fancy O(...) expressions. These are... odd; they're kind of hand-wavey, as they are based on expected fan-out of directories, and don't really give a good apples-to-apples comparison. They certainly are valuable to analyze, but are pretty misleading if you aren't critical.
[2]: OK, that's not strictly true. The "Beta build system" (the next generation of build systems that Tup is supposedly part of) algorithm also has the feature that it will automatically remove a generated file that was part of the DAG, but isn't anymore. This is a nifty feature that is made possible by the caching; but IMO it doesn't constitute a core change.
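A hedged sketch of that feature (again my own toy code, not the "Beta" algorithm from the paper): if the build database remembers which files were generated last time, pruning outputs that are no longer in the DAG is just a set difference plus unlink.

    import os

    def prune_stale_outputs(previous_outputs, current_dag_outputs):
        """Delete generated files recorded by the last build that no longer
        appear as outputs in the current DAG (e.g. their rule was removed)."""
        for path in set(previous_outputs) - set(current_dag_outputs):
            try:
                os.unlink(path)
                print(f"removed stale output: {path}")
            except FileNotFoundError:
                pass  # already gone; nothing to do

    # Example: the last build produced old.o, but its rule has been deleted:
    # prune_stale_outputs({"main.o", "old.o"}, {"main.o"}) removes old.o.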
> Tup gets this list of changes by having a FUSE filesystem sit between the user and the actual filesystem, logging all changes
This is what I was really looking for. Instead of calling stat on every file at build time, it relies on a filesystem layer to record at run time which files were changed.
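For illustration only (this is not how Tup's FUSE layer is implemented), a similar effect can be approximated in user space with an inotify-style watcher, e.g. the third-party `watchdog` package: file events accumulate into the changed-file list that the next build consumes instead of stat-ing everything.

    # Rough user-space stand-in for Tup's change log, using the third-party
    # `watchdog` package (assumed installed); Tup itself uses a FUSE layer.
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    class ChangeLog(FileSystemEventHandler):
        """Accumulates paths that changed since the last build."""
        def __init__(self):
            self.changed = set()

        def on_modified(self, event):
            if not event.is_directory:
                self.changed.add(event.src_path)

        on_created = on_modified  # treat newly created files the same way

    if __name__ == "__main__":
        log = ChangeLog()
        observer = Observer()
        observer.schedule(log, "src/", recursive=True)
        observer.start()
        # ... at the next build, log.changed is handed to the Tup-style
        # pass instead of stat-ing every file in the tree ...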
Clever, so long as the FUSE layer never makes a mistake.
How? Their description was useless in describing why it's "obviously" so much better.