Performance is generally fine only because this is a ridiculously obscure feature that no one sane would ever trip over. Frankly I had no idea "sparse arrays" existed, nor would I ever have used one if I had (that's what a hash table is for). No doubt they're there because some browser did it once, some important site relied on it, and now we're all stuck with performance constraints (don't explode for huge unassigned array sizes) that don't make any sense.
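For anyone else who hadn't run into them: a sparse array is just an Array whose length is much larger than the number of elements actually stored; the unassigned indices are "holes", not stored undefineds. A quick TypeScript sketch, purely illustrative:

    // A "sparse" array: length is 40000, but only one element actually exists.
    const a: number[] = [];
    a[39_999] = 1;                              // assign a single far-out index
    console.log(a.length);                      // 40000
    console.log(0 in a);                        // false -- index 0 is a hole, never assigned
    console.log(a.filter(() => true).length);   // 1 -- iteration methods skip holes

The constraint being griped about is that the engine's sort has to handle an array like this without materializing all 40,000 slots.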
No, O(N^2) sorting algorithms are just dumb in library/runtime[1] code. No one expects that, and it will eventually bite anyone who uses the code. Worse, this code is a booby trap: no one even knows whether they're using it!
[1] In application code, where you might know a priori that they'll never sort more than "one screen worth" of data, I can see it being a reasonable hack. But never as part of a standard feature called "sort".
Except that essentially every sort implementation in every library on earth falls back to insertion sort (or similar) for small inputs (e.g. N < 10), because that's more efficient than recursing a merge sort all the way down to single elements.
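To make that concrete, here's a minimal TypeScript sketch of the hybrid pattern (not any real engine's implementation): merge sort recurses until the subarray is small, then insertion sort finishes the leaves.

    // Hybrid sort sketch: merge sort that hands small subarrays (size <= SMALL)
    // to insertion sort, where the quadratic cost is cheaper than more recursion.
    const SMALL = 10;   // arbitrary cutoff, for illustration only

    function insertionSort(a: number[], lo: number, hi: number): void {
      for (let i = lo + 1; i <= hi; i++) {
        const v = a[i];
        let j = i - 1;
        while (j >= lo && a[j] > v) { a[j + 1] = a[j]; j--; }
        a[j + 1] = v;
      }
    }

    function hybridSort(a: number[], lo = 0, hi = a.length - 1): number[] {
      if (hi - lo < SMALL) {          // small leaf: O(N^2), but N is tiny here
        insertionSort(a, lo, hi);
        return a;
      }
      const mid = (lo + hi) >> 1;
      hybridSort(a, lo, mid);
      hybridSort(a, mid + 1, hi);
      // Merge the two sorted halves back into a[lo..hi].
      const merged: number[] = [];
      let i = lo, j = mid + 1;
      while (i <= mid && j <= hi) merged.push(a[i] <= a[j] ? a[i++] : a[j++]);
      while (i <= mid) merged.push(a[i++]);
      while (j <= hi) merged.push(a[j++]);
      for (let k = 0; k < merged.length; k++) a[lo + k] = merged[k];
      return a;
    }

    console.log(hybridSort([5, 3, 9, 1, 7, 2, 8, 4, 6, 0, 11, 10]));

The quadratic step is fine there precisely because the cutoff bounds it; the complaint upthread is about a quadratic algorithm with no such bound.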
How is that relevant? Is your contention that the code in question will never see performance problems because there's a constant factor that outweighs the runtime? That's not my read at all. It looks to me like sorting 40k items that happen to be packed in a sparse array is going to take on the order of a billion operations. No?
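For what it's worth, the arithmetic behind that estimate, assuming a plain quadratic comparison sort (which is the scenario being worried about):

    // Worst-case pairwise comparisons for an O(N^2) sort over 40k elements:
    const n = 40_000;
    console.log((n * (n - 1)) / 2);   // 799980000 -- about 0.8 billion

So "on the order of a billion operations" is about right.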