All of this looks accurate, but it's worth contextualizing: this is an optimization that bears the most fruit in "local frames of reference" -- the timescales at which a linear scan beats an index lookup are likely to be strictly dominated by the network latency of a round trip to the database. The conclusion, then, is that the optimization ~only helps in cases that effectively don't matter.
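To make the scale of the argument concrete, here's a rough micro-benchmark sketch (the collection size, round-trip figure, and setup are my own assumptions, not from the original): it times an in-memory linear scan against a hash lookup on a small collection and contrasts both with an assumed ~0.5 ms database round trip.

```python
# Illustrative only: assumed numbers, not a rigorous benchmark.
import timeit

N = 64                      # small collection -- the regime where linear scans can win
items = list(range(N))
index = {v: v for v in items}
target = N - 1              # worst case for the scan

runs = 100_000
scan_us = timeit.timeit(lambda: target in items, number=runs) / runs * 1e6
hash_us = timeit.timeit(lambda: target in index, number=runs) / runs * 1e6
network_rtt_us = 500.0      # assumed LAN round trip to the database, ~0.5 ms

print(f"linear scan: {scan_us:8.3f} µs")
print(f"hash lookup: {hash_us:8.3f} µs")
print(f"network RTT: {network_rtt_us:8.3f} µs (assumed)")
# Whichever of the first two wins, the gap between them is dwarfed by the round trip.
```

Both in-memory operations come out in the sub-microsecond to low-microsecond range on typical hardware, so the difference between them is a rounding error next to even a fast network hop -- which is the point of the comment above.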