The counterpoint here is that if your program uses 2 GB of memory on a 64-bit computer, only roughly 1 in 8 billion random 64-bit values will be plausible addresses (and almost all of those will only pin small amounts of memory).
It's less clear-cut when you consider that typically only the low ~47 bits of an address are allowed to be non-zero; then again, values on the stack are still full 64-bit words, and any non-zero bit in the top 17 immediately disqualifies one; then again, realistic values, unlike literally random 64-bit integers, probably don't push up toward the 64-bit limit anyway.
Another possibility is two adjacent 32-bit ints merging into something pointer-shaped. For a 2GB heap that requires the high half to hit one of the two valid values in 1≤x≤32767 (a reasonably common range for general-purpose numbers), while the bottom half can be anything. Whether such packing can happen on-stack depends on codegen (and, to a lesser extent, the program, since one can just write "12345L<<32" and get a value with the 2-in-32767 (~0.006%) chance of hitting the heap). But even then, fitting a million of those on the stack only yields ~61 potential false roots, and for that to be significant some of them must hit massive objects (which brings back a one-in-a-billion-ish factor unless there are large cyclic object subgraphs).
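To make the back-of-the-envelope filtering above concrete, here's a minimal C sketch (heap bounds and function name are made up; a real collector would get the bounds from its allocator) of the kind of test a conservative scanner could apply to each 64-bit stack word. Only values passing both checks can pin anything:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical bounds for a ~2 GB heap. */
    static const uint64_t heap_lo = 0x0000700000000000ULL;
    static const uint64_t heap_hi = 0x0000700080000000ULL;  /* heap_lo + 2 GB */

    static bool looks_like_heap_pointer(uint64_t word)
    {
        /* Canonical user-space addresses keep the top ~17 bits zero,
           so anything with those bits set is rejected immediately. */
        if (word >> 47)
            return false;

        /* Of what's left, only values inside the heap range can pin an
           object: a tiny slice of the 2^64 space, which is where the
           "1 in billions" odds above come from. */
        return word >= heap_lo && word < heap_hi;
    }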
Depending on the processor architecture, you might be able to assume a minimum memory alignment and then immediately disqualify values whose least significant bits aren’t zero.
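A sketch of that alignment test, assuming (purely for illustration) an allocator that only hands out 8-byte-aligned objects:

    #include <stdbool.h>
    #include <stdint.h>

    /* Reject any candidate whose low three bits aren't zero; if the
       allocator only produces 8-byte-aligned objects, such a value
       can't be a valid object address. */
    static bool passes_alignment_filter(uint64_t word)
    {
        return (word & 0x7) == 0;
    }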
You probably can't if your programming language puts data in those bits (which is fairly common, since it's a very convenient place to store a couple of bits of data for the GC).
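One common flavor of this, for illustration (tag values and helper names are made up, not any particular runtime's layout): the low bits of every pointer carry a small type or GC tag, so a conservative scanner has to mask them off rather than reject odd-looking values outright.

    #include <stdint.h>

    #define TAG_MASK   ((uintptr_t)0x7)   /* low 3 bits free on an 8-byte-aligned heap */
    #define TAG_CONS   ((uintptr_t)0x1)   /* hypothetical tag values */
    #define TAG_STRING ((uintptr_t)0x2)

    static inline uintptr_t tag_pointer(void *p, uintptr_t tag)
    {
        return (uintptr_t)p | tag;
    }

    static inline void *untag_pointer(uintptr_t word)
    {
        /* Strip the tag before doing any range or alignment check. */
        return (void *)(word & ~TAG_MASK);
    }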
Didn’t the opposite happen near the end of the 32-bit era? My take was that you could get away with conservative garbage collection if you had 16 MB of RAM in a 32-bit address space, but as you filled more of it, the probability of a significant memory leak converged on 1.
I assume you mean 16 MB of RAM in a 4 GB (32-bit) address space, but yes. As app size approaches address space size, lots of things get worse.
That said, this newer generation of collectors that only conservatively scan the stack has the significant advantage that stacks have stayed pretty small (8 MB is still the standard default stack size), so if you have precise heap scanning, the odds of spurious retention from conservative scanning go down a ton.