libstdc++'s std::sort doesn't implement quicksort per se. It implements introsort[1]. I'm curious how a pure C implementation of introsort would fare against std::sort.
" It begins with quicksort, it switches to heapsort when the recursion depth exceeds a level based on (the logarithm of) the number of elements being sorted and it switches to insertionsort when the number of elements is below some threshold. "
You forgot about heapsort. It's a combination of three sorting algorithms, not two. However trivial the difference may seem, I'd still rather look at a "C introsort vs C++ introsort" benchmark than a "C quicksort vs C++ often quicksort, but not really" benchmark.
How does this fare against Python's famous Timsort (used by several languages and systems)? How about the dual-pivot quicksort used by Java for primitive arrays?
Someone has to have put together a nice benchmark for comparing many sorting algorithms. I wish that the author had done some benchmarking first, so that the proposed algorithm can properly be positioned w.r.t. state-of-the-art techniques.
I’d like to see this too, but benchmarking sort algorithms is a pain in the ass due to the wide array of input shapes and sizes; I wouldn’t expect a well-maintained benchmark suite for language X.
Summary: this is a non-recursive merge sort with improvements.
Benchmark of quadsort() versus C qsort():
* ~10x faster on forward-order items
* ~2x faster on reverse-order items
* ~equivalent on random-order items
Improvements:
* Ordering: when blocks of items are in order, or in reverse order, they get special-case handling, which gives quadsort roughly O(n) behavior on such input instead of qsort's O(n * log n); see the sketch right after this list.
* Boundaries: merge by comparing the data itself rather than performing traditional merge sort's wasteful boundary checks.
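A hedged sketch of the ordering idea (not the author's actual code; a real implementation would fold these checks into comparisons it already performs):

    /* Special cases for a block of 4: in-order input costs only 3
     * comparisons, strictly reversed input costs 3 comparisons plus
     * 2 swaps, and everything else falls back to a small fixed sort. */
    static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    static void tiny_sort4(int *v)
    {
        if (v[0] <= v[1] && v[1] <= v[2] && v[2] <= v[3])
            return;                              /* already in order */

        if (v[0] > v[1] && v[1] > v[2] && v[2] > v[3]) {
            swap_int(&v[0], &v[3]);              /* strictly descending: */
            swap_int(&v[1], &v[2]);              /* reverse in place     */
            return;
        }

        for (int i = 1; i < 4; i++)              /* fallback: insertion sort */
            for (int j = i; j > 0 && v[j - 1] > v[j]; j--)
                swap_int(&v[j - 1], &v[j]);
    }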
If you have k sorted (or reverse-sorted) blocks, you can get O(n log k) performance in merge sort variants. In particular, if k is a small fixed constant, like 1 or 2 or 10, you get linear performance.
A few languages' standard library sorts already implement that.
For a serious implementation, of course, you not only care about the asymptotic performance, but also absolute runtimes.
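A minimal sketch of that idea: a "natural" bottom-up merge sort that detects pre-sorted runs first (this is not quadsort, just an illustration of the O(n log k) behavior; error checks omitted):

    #include <stdlib.h>
    #include <string.h>

    /* Merge two sorted ranges a[0..na) and b[0..nb) into dst, stably. */
    static void merge(int *dst, const int *a, size_t na, const int *b, size_t nb)
    {
        size_t i = 0, j = 0, k = 0;
        while (i < na && j < nb)
            dst[k++] = (b[j] < a[i]) ? b[j++] : a[i++];
        while (i < na) dst[k++] = a[i++];
        while (j < nb) dst[k++] = b[j++];
    }

    void natural_merge_sort(int *v, size_t n)
    {
        if (n < 2) return;

        /* Pass 1: record where each ascending run starts. */
        size_t *run = malloc((n + 1) * sizeof *run);
        size_t k = 0;
        run[k++] = 0;
        for (size_t i = 1; i < n; i++)
            if (v[i] < v[i - 1])
                run[k++] = i;            /* a new run starts at i */
        run[k] = n;                      /* k runs; run[0..k] hold the boundaries */

        /* Pass 2+: merge adjacent runs pairwise until one run remains. */
        int *buf = malloc(n * sizeof *buf);
        while (k > 1) {
            size_t w = 0;
            for (size_t r = 0; r + 1 < k; r += 2) {
                size_t lo = run[r], mid = run[r + 1], hi = run[r + 2];
                merge(buf + lo, v + lo, mid - lo, v + mid, hi - mid);
                memcpy(v + lo, buf + lo, (hi - lo) * sizeof *v);
                run[w++] = lo;
            }
            if (k % 2)                   /* odd run out: carry it over */
                run[w++] = run[k - 1];
            run[w] = n;
            k = w;
        }
        free(buf);
        free(run);
    }

With k runs there are about log2(k) merge passes over n elements, and an already-sorted input never enters the merge loop at all.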
Quicksort is not the fastest sorting algorithm. It would be nice if there were a benchmark comparing it against other state-of-the-art algorithms like Timsort.
It depends on the implementation but Quicksort typically beats Timsort with random data. Quicksort is unstable though, so that's not a fair comparison.
Interesting, but I'm surprised if this is the first time we've had a sorting algorithm that swaps more than two elements at a time. I would have guessed every possible variation of sorting algorithm had already been explored, proven, and tested.
It isn't. Sorting networks (https://en.wikipedia.org/wiki/Sorting_network) provide the best possible in-place sorting combinations for N elements, where N is currently < about 11. If somebody could find a way to generalize the sorting network algorithm for any number of input elements, they'd be settling the issue of sorting things pretty much forever.
At a lazy glance, it looks like Quadsort has re-derived a variation of the 4-element sorting network.
I was imprecise. I was talking about the construction of optimal sorting networks. You're correct to point out that there are several approaches now for constructing sorting networks for arbitrary numbers of inputs, but as noted in that section, they all have some very important tradeoffs. I said, "...provide the best possible in-place sorting combinations for N elements, where N is currently < about 11"; the 11 there was a reference to this part from the second section of that article:
"For one to eleven inputs, minimal (i.e. size-optimal) sorting networks are known, and for higher values, lower bounds on their sizes S(n) can be derived inductively using a lemma due to Van Voorhis: S(n + 1) ≥ S(n) + ⌈log2(n)⌉. The first ten optimal networks have been known since 1969, with the first eight again being known as optimal since the work of Floyd and Knuth, but optimality of the cases n = 9 and n = 10 took until 2014 to be resolved.[11] An optimal network for size 11 was found in December of 2019 by Jannis Harder..."
The point I was trying to make is that sorting networks restrict your algorithm to a data-independent memory access pattern.
For some particular N, you might be able to find the smallest sorting network that sorts N elements. And then you might be able to find an even faster algorithm that sorts N elements using some strategy that's not equivalent to a sorting network (e.g., if your strategy does data-dependent memory access).
You'd probably want to start by defining equivalence and then work from there. If your criterion for equivalence is loose enough you're already done, e.g., if you just look at runtime big O for randomized arrays or something, you can't do better than n lg n, so there's just the one Pareto-optimal choice, with many possible implementations.
O(n log n) is only the frontier for comparison based sorts that know nothing about the distribution of inputs.
If your sorting algorithm is allowed to do anything else on your data, like hashing or looking at bits or arithmetic, different lower bounds might apply.
This reminds me of a programming exercise I was asked to write when I first learned programming: write a sorting program generator that given N, generates a program that sorts an array of N elements optimally: the generated code has N! branches, one for each possible permutation. With some CSE help from the compiler, it can be really quite fast at the expense of code size.
The author's explanation isn't entirely clear, but it seems similar to the above construction with a fixed N and then a merge sort afterwards.
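To make that concrete, here is roughly what such a generator might emit for N = 3 (hand-written for illustration, not the original assignment's output): a comparison decision tree with one leaf per permutation, 3! = 6 in total.

    /* Possible generator output for N = 3: six leaves, one per
     * permutation, each writing the elements back in sorted order. */
    void sort3(int *a)
    {
        int x = a[0], y = a[1], z = a[2];
        if (x <= y) {
            if (y <= z)      { /* x y z: already sorted */ }
            else if (x <= z) { a[1] = z; a[2] = y; }            /* x z y */
            else             { a[0] = z; a[1] = x; a[2] = y; }  /* z x y */
        } else {
            if (x <= z)      { a[0] = y; a[1] = x; }            /* y x z */
            else if (y <= z) { a[0] = y; a[1] = z; a[2] = x; }  /* y z x */
            else             { a[0] = z; a[2] = x; }            /* z y x */
        }
    }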
A truly optimal sort for a given N is a nontrivial problem. By truly optimal I mean the actual absolute minimum number of comparisons, no O(...) approximation.
For five elements the lower bound from counting the permutations is ceil(log2(5!)) = 7, which says you cannot sort 5 elements in fewer than 7 comparisons.
An actual 7-comparison algorithm exists, but it is not very easy to write. For greater numbers it gets much, much trickier - in general the log(#permutations) lower bound cannot be met.
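For the curious, here is one way to hit that bound, sketched in the spirit of merge-insertion (Ford-Johnson); sort5 and the variable names are just illustrative, this is not a reference implementation:

    /* Sorts v[0..4] with exactly 7 comparisons. */
    void sort5(int v[5])
    {
        int a = v[0], b = v[1], c = v[2], d = v[3], e = v[4], t;
        int s[4];                                  /* sorted chain of a, b, d, e */

        if (b < a) { t = a; a = b; b = t; }        /* cmp 1: a <= b */
        if (d < c) { t = c; c = d; d = t; }        /* cmp 2: c <= d */
        if (d < b) {                               /* cmp 3: make b <= d by swapping pairs */
            t = a; a = c; c = t;
            t = b; b = d; d = t;
        }
        /* invariant: a <= b <= d, and c <= d */

        if (e < b) {                               /* cmp 4: binary-insert e into a <= b <= d */
            if (e < a) { s[0] = e; s[1] = a; }     /* cmp 5 */
            else       { s[0] = a; s[1] = e; }
            s[2] = b; s[3] = d;
        } else {
            s[0] = a; s[1] = b;
            if (e < d) { s[2] = e; s[3] = d; }     /* cmp 5 */
            else       { s[2] = d; s[3] = e; }
        }

        /* c <= d and d sits at s[2] or s[3], so two more comparisons
         * binary-insert c among the elements below it.               */
        if (c < s[1]) {                            /* cmp 6 */
            if (c < s[0])                          /* cmp 7 */
                 { v[0] = c;    v[1] = s[0]; v[2] = s[1]; }
            else { v[0] = s[0]; v[1] = c;    v[2] = s[1]; }
            v[3] = s[2]; v[4] = s[3];
        } else {
            v[0] = s[0]; v[1] = s[1];
            if (c < s[2])                          /* cmp 7 */
                 { v[2] = c;    v[3] = s[2]; }
            else { v[2] = s[2]; v[3] = c;    }
            v[4] = s[3];
        }
    }

The count is always exactly 7: three comparisons to establish a <= b <= d with c <= d, then two binary insertions of two comparisons each.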
A program generator seems rather advanced for a beginner's assignment...
When I started, I wrote an Excel spreadsheet to generate what I would now describe as a 50 element array.
I also recall seeing a project to programmatically generate mnemonic operators in Haskell, limited only by the ability of the compiler to not run out of memory. Sadly, I can't seem to find it.
In his benchmark, the author is assuming that qsort() is implemented using quicksort, but that's not necessarily true. For example, glibc is using mergesort (although it falls back to quicksort if the system is short on memory).
I can’t imagine the sort of bugs you get when your code relies on stable sort but calls qsort and everything works great until the machine is under heavy load.
Someone recently did something very stupid/clever with an API that I wrote.
In the code review I initially complained, but then I couldn’t think of any other way to interpret the design. So I guess that’s a feature now. ‘Course later we found a performance problem, but, you know…
This feels a little unfair; the comparison function is invoked the same number of times, but C++ has a mechanism for removing the overhead of calling it (std::sort takes the comparator as a template parameter, so it can be inlined, whereas qsort calls through a function pointer).
Since we're already using O(N) space, it would be interesting to see how this compares to Radix sort[0], which also uses O(N) space but runs in O(N) time (due to just hashing everything).
Like others have said, it would be cool to see quadsort stacked up to other current state-of-the-art sorting algorithms.
Radix sort (similar to bucket sort) groups items based on individual digits of the non-hashed values. If you hash the values before, you will end up with the data being sorted according to their hash, but they will appear almost random in their unhashed form.
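To make the digits-not-hashes point concrete, here is a minimal LSD radix sort sketch for non-negative ints that buckets by decimal digit (base 10 only to match the "digit" framing; real implementations usually bucket by bytes):

    #include <stdlib.h>
    #include <string.h>

    /* One stable counting pass per decimal digit: O(d * (n + 10))
     * for d-digit keys, with no comparisons between keys.         */
    void radix_sort(unsigned *v, size_t n)
    {
        unsigned *out = malloc(n * sizeof *out);
        unsigned max = 0;
        for (size_t i = 0; i < n; i++)
            if (v[i] > max) max = v[i];

        for (unsigned long long exp = 1; max / exp > 0; exp *= 10) {
            size_t count[10] = {0};
            for (size_t i = 0; i < n; i++)
                count[(v[i] / exp) % 10]++;             /* histogram of this digit   */
            for (int d = 1; d < 10; d++)
                count[d] += count[d - 1];               /* prefix sums = bucket ends */
            for (size_t i = n; i-- > 0; )
                out[--count[(v[i] / exp) % 10]] = v[i]; /* stable scatter            */
            memcpy(v, out, n * sizeof *v);
        }
        free(out);
    }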
A hash function is any function that maps arbitrary data to fixed-size values. A radix is a type of hash. Hashes are not defined as random or required to sort differently than the unhashed values. If you define a hash function that returns the first 32 bits of its input, then you have a hash that sorts almost the same as the unhashed values (as long as the first 32 bits change frequently), and you also have a hash function that you can call a radix.
I have never heard of hash functions in the context of radix sort or anything similar to what you describe. Wikipedia says of hash functions that they "[S]cramble the bits of the key so that the resulting values are uniformly distributed over the key space". I would say that isn't the case for the function in radix sort that is used to 'pick' a digit.
Well, good you were here, now you have heard about radixes as hashes. ;) It's good to see and understand the connections and relationships between these things.
You're right that a radix doesn't scramble the key, but the quote you've picked is a qualified subset of hash functions. That paragraph is attempting to define a practical/good hash function that is used in specific ways. Not all hash functions scramble the bits, and the Wikipedia article is very clear about this if you read the whole thing.
You skipped over two important sentences that came before it, and a whole sub-section on radix hashes after it:
"A hash function is any function that can be used to map data of arbitrary size to fixed-size values." (very first sentence, emphasis mine.)
"In some cases, the key is the datum itself." (Right before the 'scramble' quote)
String hashing is sometimes similar to radix as well: "Simplistic hash functions may add the first and last n characters of a string along with the length" and I've seen string hashes in production that do only the first n characters and stop. That kind of hashing is frequently useful in small, embedded systems, video games, etc. where you have a limited set of strings and a good idea of how well distributed the keys are.
I think it's different in that you don't care what the output of a hash is, as long as it's sufficiently unique or whatever. Whereas here the buckets are intimately tied to the input. It's more of an arithmetic hack I'd say, as it only works on decimal numbers.
I guess it could work well for sets of strings that you know will not go above a certain length? But after that it might become painful, i.e. the algorithm complexity will start depending on the maximum string length
It's the same problem when working with numbers, since you have to assume some maximum number of digits as well. Radix sort essentially treats numbers as zero-padded strings.
When sorting single words consisting only of the letters A-Z, for example, you can think of it as the same thing but with 26 buckets (27 if you pad with spaces instead of As) instead of 10. Or you can think of it as a specific subset of numbers in base 36, if that makes more sense to you.
No, the point of radix sort is that you don't have to do comparisons. Radix sort on strings is the same as radix sort on numbers, just with more buckets.
What they mean is that a standard comparison-based sort can also become very slow if the strings are long, because a string compare can take up to the length of the string to return a result.
There are, in general, a number of different sorting algorithms which are optimized for a specific number of elements. In this case, four. It uses five binary comparisons to sort four elements.
You can find other algorithms like this, such as an algorithm that uses seven comparisons to sort five items, or one that uses ten comparisons to sort six items.
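For example, the optimal 4-element network uses exactly five compare-exchanges, and it always performs the same five regardless of the input (that data-independence is what the sorting-network subthread above is about). A minimal sketch:

    /* Optimal sorting network for 4 elements: 5 compare-exchanges. */
    static void cswap(int *a, int *b)
    {
        if (*b < *a) { int t = *a; *a = *b; *b = t; }
    }

    void sort4_network(int v[4])
    {
        cswap(&v[0], &v[1]);   /* sort first pair            */
        cswap(&v[2], &v[3]);   /* sort second pair           */
        cswap(&v[0], &v[2]);   /* global min ends up at v[0] */
        cswap(&v[1], &v[3]);   /* global max ends up at v[3] */
        cswap(&v[1], &v[2]);   /* fix the middle two         */
    }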
I've tried that in e.g. Haskell to see if I could beat the standard library's highly optimized sort.
In theory, you can get O(n log k) performance, where k is the number of distinct elements (so k <= n). And crucially: you don't need to know k up-front.
In practice, all my attempts were absolutely slower than the standard approach based on binary comparisons only. (But that's saying more about me than about the domain.)
Data can be partially sorted, and that happens quite often. As I understand it, that's exactly the purpose of quadsort: to leverage locally ordered sequences.
Usually what people mean by mostly sorted in CS is that there is some small K such that each element in the input is no more than K places from the position it would be in if the input was sorted.
Well, you could extend the definition to allow a small number of items which are entirely out of place. The point is that the right sort algorithm depends a lot on the distribution of input data and how much you care about worst-case vs average-case trade-offs.
I'm a bit sceptical because I don't see any mathematical proof, only benchmarks, which do not prove a lot. It may be faster only by a constant factor on a particular machine, but for sufficiently large n it would be no faster than mergesort.
1000000 is peanuts. We need to see how it behaves for billions of numbers and more.
No one is claiming that this sort algorithm (or any other) is asymptotically faster than O(n log(n)). It can be mathematically proved that no sort algorithm can improve on this. The object with sort algorithms is to find one with good constant-factor performance on typical inputs.
Frankly, I am more persuaded by the arguments in favor of an algorithm like "Timsort", which doesn't claim to micro-optimize hardware more efficiently (that's basically what "better swapping" is claiming), but instead claims that the algorithm is particularly efficient on some commonly seen patterns (like "partially sorted", or "pieces of the list are reverse-sorted"). Of course, any kind of argument about why one sort is better than another is inferior to actual benchmarks.
To be precise, we can prove that no comparison-based sort algorithm can do better than O(n log(n)) comparisons. There are some variants, though, that do better than that, but they are not based on comparisons (e.g., counting sort, radix sort).
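Counting sort is the simplest example: it never compares two keys, so the comparison lower bound doesn't apply, and it runs in O(n + k) for keys in the range [0, k). A minimal sketch (error checks omitted):

    #include <stdlib.h>
    #include <string.h>

    /* Counting sort for keys in [0, k): no key-vs-key comparisons at all. */
    void counting_sort(unsigned *v, size_t n, unsigned k)
    {
        size_t *count = calloc(k, sizeof *count);
        unsigned *out = malloc(n * sizeof *out);

        for (size_t i = 0; i < n; i++)
            count[v[i]]++;                     /* histogram             */
        for (unsigned c = 1; c < k; c++)
            count[c] += count[c - 1];          /* prefix sums = offsets */
        for (size_t i = n; i-- > 0; )
            out[--count[v[i]]] = v[i];         /* stable placement      */

        memcpy(v, out, n * sizeof *v);
        free(out);
        free(count);
    }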
I'm quite a noob, and this is a bit off-topic, but if it's mathematically proven that no sorting algorithm can be faster than O(n*log(n)), but we know for sure that checking if an array is sorted is O(n), then doesn't that prove that P ≠ NP?
No, it doesn't, since O(n*log(n)) is in P too. P = NP doesn't imply that finding a solution has exactly the same complexity as verifying it, just that both are polynomial.
In the 1990's, NeXT had a demo of their threading capabilities, and you could select a bunch of different sorts to run in parallel. Doesn't seem to be much still "live" on the net about it though: https://www.google.com/search?q=%22SortingInAction%22
Technically quicksort needs O(log N) additional space (for the recursion stack) and it is still considered in-place. I guess the threshold for in-place-ness would be less-than-linear additional space?
Quicksort is not stable. The disadvantage of classic mergesort is that it needs to copy elements out of the input array, although there exist (slower) in-place variations.
But "swap space" without the "in" is pretty ambiguous. If it just holds 1 or 4 elements as you perform a single logical swap, you're still sorting in-place.
(This sort does not use the term that way, but another sort could.)
Zhangxp1998 ported quadsort to C++, compared it with std::sort, and found it to be slower: https://gist.github.com/zhangxp1998/0e2fa30656c894017d183e0d...
Shameless self promotion, if you want a faster sorting implementation, check out pdqsort (https://github.com/orlp/pdqsort) which is almost twice as fast as libstdc++'s std::sort for randomly shuffled integers, and has all sorts of tricks for input distributions with patterns (like pre-sorted, reverse sorted, single out-of-order element appended, etc).