Performance break-even is always with respect to a particular implementation. And even with only a single data point I could state (and still can) with very high confidence that no matter what implementation you run your test on, the break-even point will be less than 5000. A LOT less. You'd have to go out of your way to find a hash table implementation so bad that the break-even point was even within an order of magnitude of 5000. Even just reasoning from first principles you could reasonably guess that the break-even would be a lot less than 5000. Traversing a 5000-element AList requires up to 5000 memory accesses (2500 on average for a successful lookup), with no guarantee of locality of reference, so you're probably taking dozens (thousands, if you're unlucky) of cache misses along the way. You probably couldn't write a hash table implementation whose performance was that bad if you tried.
The fact of the matter is that hash tables are faster than ALists in all but the most trivial of cases. Theory predicts it, and an experiment that takes all of two minutes to do confirms it. Everything else is quibbling over angels on the head of a pin.
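Here's a minimal sketch of that two-minute experiment, in Python rather than Lisp, using a list of key/value pairs to stand in for an AList and a dict as the hash table. The names and sizes are illustrative, not the original test:

```python
import timeit

N = 5000
keys = [f"key{i}" for i in range(N)]
alist = [(k, i) for i, k in enumerate(keys)]  # association list: linear scan
table = dict(alist)                           # hash table: O(1) expected lookup

def alist_lookup(k):
    # Linear scan, analogous to Lisp's ASSOC
    for key, val in alist:
        if key == k:
            return val
    return None

# Look up every key once: the alist pays O(n) per lookup, the dict O(1),
# so the alist does ~N^2/2 comparisons total versus N hash probes.
alist_time = timeit.timeit(lambda: [alist_lookup(k) for k in keys], number=1)
dict_time = timeit.timeit(lambda: [table[k] for k in keys], number=1)

print(f"alist: {alist_time:.4f}s  dict: {dict_time:.6f}s  "
      f"ratio: {alist_time / dict_time:.0f}x")
```

At N = 5000 the dict wins by orders of magnitude; shrinking N until the two times cross gives the actual break-even point, which lands far below 5000.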