
Even though no real-world quicksort implementation has that worst case (thanks to fallbacks to other sorts), it's still useful to see the pathological case of pure quicksort.
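
For reference, here's a minimal sketch of such a pure quicksort (a toy of my own, not taken from any library) with a fixed last-element pivot. On already-sorted input every partition is maximally unbalanced, so it degrades to O(n^2) comparisons and O(n) recursion depth:

    #include <utility>
    #include <vector>

    // Toy quicksort with a fixed last-element pivot (Lomuto partition).
    // On already-sorted input every split is (n-1, 0), so total work is
    // O(n^2) and the recursion depth is O(n).
    void naive_quicksort(std::vector<int>& a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; ++j)
            if (a[j] < pivot) std::swap(a[i++], a[j]);
        std::swap(a[i], a[hi]);
        naive_quicksort(a, lo, i - 1);
        naive_quicksort(a, i + 1, hi);
    }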


In which case: why bother using quicksort?


Because in the real world, where all our code must actually run, quicksort beats practically every other algorithm on the vast majority of inputs. Its cache behavior is better than heap sort's, its memory usage is lower than merge sort's, and even that O(log n) of extra memory is efficiently allocated on the stack rather than hitting a slow system allocator.
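
As a rough illustration of that O(log n) stack bound, here's a hedged sketch (my own toy, not any particular library's code) of the usual trick: recurse only into the smaller partition and loop on the larger one, which caps the recursion depth at about log2(n):

    #include <utility>
    #include <vector>

    // Sketch: Hoare-style partition around a middle pivot value, then
    // recurse into the smaller side and iterate on the larger side.
    // Since the recursed side is at most half the range, the recursion
    // depth -- and hence the extra stack memory -- is O(log n).
    void quicksort(std::vector<int>& a, int lo, int hi) {
        while (lo < hi) {
            int pivot = a[lo + (hi - lo) / 2];
            int i = lo, j = hi;
            while (i <= j) {
                while (a[i] < pivot) ++i;
                while (a[j] > pivot) --j;
                if (i <= j) std::swap(a[i++], a[j--]);
            }
            if (j - lo < hi - i) {      // left side is smaller
                quicksort(a, lo, j);
                lo = i;                 // keep looping on the right side
            } else {                    // right side is smaller
                quicksort(a, i, hi);
                hi = j;                 // keep looping on the left side
            }
        }
    }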

"Engineering a Sort Function"[0] covers this quite well.

In some experiments of my own, I've found that even Smoothsort[1] (an algorithm specifically designed to transition smoothly from linear to linearithmic behavior on nearly-sorted input) can't beat the C++ standard library's introsort (a quicksort variant with a fallback to heapsort) on already-sorted input.
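
For anyone unfamiliar with introsort, the core pattern is roughly the following sketch. This is a toy under my own assumptions, not the actual libstdc++/libc++ std::sort, which also switches to insertion sort for small ranges and uses smarter pivot selection:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Introsort pattern: plain quicksort, but once the recursion exceeds a
    // depth limit of roughly 2*log2(n), switch to heapsort so the worst
    // case stays O(n log n) no matter how bad the pivots are.
    void introsort_impl(std::vector<int>& a, int lo, int hi, int depth_limit) {
        if (hi - lo <= 1) return;
        if (depth_limit == 0) {
            std::make_heap(a.begin() + lo, a.begin() + hi);   // heapsort fallback
            std::sort_heap(a.begin() + lo, a.begin() + hi);
            return;
        }
        int pivot = a[lo + (hi - lo) / 2];
        auto first = a.begin() + lo, last = a.begin() + hi;
        // Three-way split: [< pivot] [== pivot] [> pivot]; the middle block
        // is never empty, so both recursive ranges strictly shrink.
        auto m1 = std::partition(first, last, [pivot](int x) { return x < pivot; });
        auto m2 = std::partition(m1, last, [pivot](int x) { return !(pivot < x); });
        introsort_impl(a, lo, int(m1 - a.begin()), depth_limit - 1);
        introsort_impl(a, int(m2 - a.begin()), hi, depth_limit - 1);
    }

    void introsort(std::vector<int>& a) {
        int limit = 2 * int(std::log2(std::max<std::size_t>(a.size(), 1)) + 1);
        introsort_impl(a, 0, int(a.size()), limit);
    }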

[0] http://www.skidmore.edu/~meckmann/2009Spring/cs206/papers/sp...

[1] http://www.keithschwarz.com/smoothsort/


It's (relatively) simple and almost always quite fast. Or would you rather use slowsort?


Quicksort is not simple if you have to build in an entirely separate sorting algorithm as a failsafe.

And merge sort is often faster, especially if the comparison function is relatively expensive.


Yes, sometimes, but it also uses more memory. In practice, on real problems, quicksort wins for in-memory sorting, and mergesort wins for data sets too big to fit in memory.
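
To make the too-big-for-memory point concrete: the heart of an external merge sort is a k-way merge of pre-sorted runs. A hedged sketch of that merge step (using in-memory vectors as stand-ins for sorted runs that would really be streamed from disk):

    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    // K-way merge of sorted runs via a min-heap of (value, run index).
    // Only one "cursor" per run is live at a time, which is why an
    // external merge sort needs so little RAM relative to the data size.
    std::vector<int> merge_runs(const std::vector<std::vector<int>>& runs) {
        using Item = std::pair<int, std::size_t>;   // (value, run index)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;
        std::vector<std::size_t> pos(runs.size(), 0);
        for (std::size_t r = 0; r < runs.size(); ++r)
            if (!runs[r].empty()) heap.push({runs[r][0], r});
        std::vector<int> out;
        while (!heap.empty()) {
            auto [value, r] = heap.top();
            heap.pop();
            out.push_back(value);
            if (++pos[r] < runs[r].size())
                heap.push({runs[r][pos[r]], r});
        }
        return out;
    }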



