The classic example is matrix multiplication, where we know of increasingly sophisticated algorithms with increasingly better asymptotic performance, but in practice most matrix multiplications are for sub-100x100 (IIRC) matrices, where the naive algorithm we all learned as undergrads is fastest.
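For reference, here's a minimal sketch of that naive triple-loop algorithm in plain Python (no NumPy), which does exactly n·m·p multiply-adds:

```python
def matmul(a, b):
    """Naive O(n^3) matrix multiply: c[i][j] = sum_k a[i][k] * b[k][j]."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]  # hoist the inner lookup; also gives cache-friendly row access to b
            for j in range(p):
                c[i][j] += aik * b[k][j]
    return c
```

The point is that this loop nest is trivially vectorizable and cache-friendly, which is a big part of why Strassen-style algorithms with lower exponents don't pay off at small sizes.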
> in practice most matrix multiplications are for sub-100x100 (IIRC) matrices
I would say that you have the causation reversed: we just don't bother doing large matrix multiplications, because they're so expensive (and even then they usually don't reach the crossover points where the "better" algorithms become worth it, since those algorithms have ridiculous constant factors).
I would love to run Floyd-Warshall on my 40-million point network graph—but it just ain't gonna happen.
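To make "ain't gonna happen" concrete: Floyd-Warshall is Θ(n³) time and Θ(n²) space, so at n = 4×10⁷ that's roughly 6.4×10²² relaxations and a distance matrix of 1.6×10¹⁵ entries (petabytes of memory). The algorithm itself is just a triple loop; a minimal sketch on a dense distance matrix:

```python
INF = float('inf')

def floyd_warshall(dist):
    """All-pairs shortest paths, in place.

    dist: n x n matrix with edge weights, INF where there is no edge,
    and 0 on the diagonal. Theta(n^3) time, Theta(n^2) space.
    """
    n = len(dist)
    for k in range(n):          # allow paths through intermediate node k
        dk = dist[k]
        for i in range(n):
            dik = dist[i][k]
            if dik == INF:      # no path i -> k, nothing to relax
                continue
            di = dist[i]
            for j in range(n):
                nd = dik + dk[j]
                if nd < di[j]:
                    di[j] = nd
    return dist
```

Fine for a few thousand nodes; at tens of millions you're stuck running single-source algorithms (Dijkstra, etc.) from the nodes you actually care about instead.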
> we just don't bother doing large matrix multiplications, because they're so expensive
Sure, I agree. I'm not a specialist in numerical linear algebra, but I've heard it said that if you're spending a lot of time on large dense matrix multiplications, you're probably approaching the problem the wrong way.