My favorite "analog computer" example is finding the balance point of a broom or mop.
Start by holding the broom horizontally (so that the shaft is parallel to the floor) and support it between the thumb and fingers of each hand, with your hands held about a meter apart. The palms of your hands should be facing each other, fingers and thumb flat in a vertical plane, with the thumbs sticking out to make cradles for the broomstick.
(side-view sketch: each palm vertical with the thumb sticking out to form a cradle, the broomstick resting across both cradles)
Once it's set up, all you do is gently bring your hands towards each other until they are touching, palm to palm. The broom will remain balanced on your hands the entire time.
As you draw your hands together there will be an unequal amount of friction between the broomstick and each hand. The hand that is further from the center of mass of the broom bears less weight and therefore has less friction, so that hand slides. Sliding changes the weight distribution between your two hands until the friction on the sliding hand has increased enough to exceed the friction on the other hand. When that happens, the sliding hand stops and the other hand starts sliding. The process alternates from hand to hand until, at the end, your hands are touching and the center of mass of the broomstick is [close enough to exactly] between them.
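A minimal sketch of that feedback loop (my own toy model, not from the comment above, and ignoring the static/kinetic friction hysteresis that makes the real motion alternate in bursts): the hand carrying less weight slides inward, and both hands converge under the center of mass.

    def balance_broom(com=0.7, left=0.0, right=1.0, step=1e-3):
        # Each hand supports weight in proportion to its distance from the
        # far side of the center of mass (total weight normalised to 1).
        while right - left > step:
            n_left = (right - com) / (right - left)
            n_right = (com - left) / (right - left)
            # The hand pressing up with less force has less friction and slides.
            if n_left < n_right:
                left += step
            else:
                right -= step
        return (left + right) / 2

    print(balance_broom())  # ~0.7: the hands meet under the center of mass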
It should remind you of sleep sort, because spaghetti sort is the spatial version of sleep sort.
For sleep sort, the time taken doesn't grow with the size of the list; it depends only on the value of the maximum element: T = max(list) + smaller terms associated with scheduling timeouts. So it's O(max(list)). Although the "smaller terms" I'm ignoring blow up once the size of the list exceeds your computer's resources. I'd guess that spaghetti sort is O(sum(list)).
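For anyone who hasn't seen it, a toy sleep sort looks something like this (my own sketch; the wall-clock time is roughly max(list) times the chosen scale, plus the scheduling overhead mentioned above):

    import threading, time

    def sleep_sort(values, scale=0.1):
        # Each value sleeps in proportion to itself, then appends itself;
        # smaller values wake up first, so they land in the output first.
        result, lock = [], threading.Lock()
        def worker(v):
            time.sleep(v * scale)
            with lock:
                result.append(v)
        threads = [threading.Thread(target=worker, args=(v,)) for v in values]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return result

    print(sleep_sort([3, 1, 2]))  # [1, 2, 3] (scheduling jitter permitting)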
Aha, so sleep sort translates the operation into the time domain in essentially the same way that spaghetti sort translates it into the spatial domain! Thank you.
> The fair queuing CFS scheduler has a scheduling complexity of O(log N), where N is the number of tasks in the runqueue. Choosing a task can be done in constant time, but reinserting a task after it has run requires O(log N) operations, because the runqueue is implemented as a red-black tree.
Linear time, O(n), means that as you keep adding numbers to sort, the time taken is bounded by a linear function of how many numbers you have. Since there is an upper bound on the number of spaghetti strands you can hold in your hand, eventually you need to "slam their lower sides on the table" in batches and then merge the results. However, because (unlike in merge sort) there is an upper bound on the size of the sublists, you end up merging O(n) runs, so spaghetti sort will actually be O(n^2) instead of O(n log(n)).
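A sketch of that argument (my own illustration; hand_capacity is the assumed fixed batch size): with the batch size capped at a constant, the merge scans every run's front for each output element, which is O(n) runs per element and O(n^2) overall.

    def batched_spaghetti_sort(values, hand_capacity=100):
        # Sort one "handful" at a time, then merge the runs by repeatedly
        # scanning every run's front for the smallest: the quadratic part.
        runs = [sorted(values[i:i + hand_capacity])
                for i in range(0, len(values), hand_capacity)]
        out = []
        while runs:
            smallest = min(range(len(runs)), key=lambda r: runs[r][0])
            out.append(runs[smallest].pop(0))
            if not runs[smallest]:
                runs.pop(smallest)
        return out

    print(batched_spaghetti_sort([5, 2, 9, 1], hand_capacity=2))  # [1, 2, 5, 9]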
If you ignore the usual meaning of linear time and restrict yourself to sorting lists of numbers that will fit in your hand, then spaghetti sort always runs in less time than however long it takes to sort the maximum amount of spaghetti you can hold in your hand, i.e. constant time.
Asymptotic times usually have assumptions on the problem size. On computers, you assume that memory access is constant time, which violates the laws of physics for arbitrarily large N.
Yeah, and IMO it's not even a good theoretical way to look at an analog computer. The hard part that will eat up time is distinguishing the small differences between strands to arbitrary precision. Analog systems still have to deal with signal-to-noise issues, which affects the decision criterion here: "which one of these is the next longest?"
Totally OT, but that reminds me of the first iPhone advertisements where they used exceptionally large hands to make the iPhone appear small. Quite funny if you think how smartphone sizes have developed since.
As an exercise, when could spaghetti sort beat computers?
As others have mentioned, your hand can only hold so much pasta (about 10^2). For n=10^2, a computer will easily win. So I'll need to abuse logic a bit...
Let's assume
- The hand is large enough to hold all the pasta.
- The linear-time operations take about 1 second per strand in total (breaking, removing, transcribing).
Benchmarks[1] for sorting show TencentSort (which looks like it's based on an O(n log(n)) merge sort[2]?) can sort 100TB in 100 seconds, for 100-byte records. So about 10^12 records per minute.
A piece of spaghetti is about 1 gram. 2^1,000,000,000,000 grams is significantly larger than the mass of the observable universe (about 10^56 grams, or roughly 2^186 grams).
How long would sorting that much spaghetti take? For scale, the age of the universe is only about 2^59 seconds.
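A quick sanity check of those orders of magnitude (my own numbers; the universe figures are rough, commonly quoted estimates):

    import math

    records_per_second = 100e12 / 100 / 100       # 100 TB / 100 s / 100-byte records ~ 1e10
    universe_mass_g = 1e56                         # mass of the observable universe, grams
    universe_age_s = 13.8e9 * 365.25 * 24 * 3600   # ~4.4e17 seconds

    print(math.log2(universe_mass_g))   # ~186  -> the universe holds ~2^186 one-gram strands
    print(math.log2(universe_age_s))    # ~58.6 -> the universe is ~2^59 seconds old
    print(records_per_second)           # ~1e10 records/second for the benchmark sort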
Intuitively, this sort of makes sense. The log(n) factor grows so slowly that, next to n, it behaves almost like a constant, while a computer performing a single step of the sort is many orders of magnitude faster than any of the human operations. So it takes an insanely large n, and an insanely long time, for spaghetti sort to catch up.
Sometimes it might not be possible to easily determine which is the longest piece. For instance, if the pieces are on opposite sides of the bundle and are very close in height.
Also, if you have lots of tiny pieces, and some longer pieces, the bundle won't form a nice column. You'd have to have a minimum length to help form the bundle, then add your number to that minimum length.
It seems so strange to me that there are so many algorithms that our brains use, that we don't fully understand yet. I self reflect all the time about my own decision making, and the way I see, hear, think, and remember. We all do these things, but what are the underlying algorithms? What data is being stored, how is it represented, and how is it being compared, manipulated, and updated?
I believe the worst-case asymptotic time complexity of spaghetti sort is O(n^2).
The algorithm works for any input list, so I will pick a hard class of inputs: The list shall be some permutation of [1, 2, 3, ..., n].
The sum of the spaghetti lengths is 1+2+3+...+n = n(n+1)/2, which grows as n^2. You need to spend that much effort just to gather the flour to make the spaghetti, so this sets an n^2 lower bound on the overall algorithm.
I was thinking about how this (and other "physical sorts" mentioned by commenters) sorts instantly by using a physical property of the members, and I began to consider what the computer equivalent would be of "physical sorting".
I think you could take the sort values, use them as memory addresses (or array offsets), write the original index of each value into its corresponding slot, and then iterate through that entire chunk of memory to find each of them. That might technically be linear time? Really O(N + M), where N is the number of values and M is their full range. It would have hilariously inefficient usage of memory and a pretty low cap on the range of possible sort values, but still.
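Something like this, I think (my own sketch, assuming distinct non-negative integers; it's essentially a pigeonhole/counting sort):

    def address_sort(values):
        m = max(values)
        slots = [None] * (m + 1)            # "memory" spanning the full value range M
        for i, v in enumerate(values):      # write the original index at address v
            slots[v] = i
        # Scan the whole range in order: O(N + M) total.
        return [values[i] for i in slots if i is not None]

    print(address_sort([42, 7, 19]))  # [7, 19, 42]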
It can really be used if the range is small, though I'm not sure I've met such a case where it would be practical. Back in the day, a notable feature of this algo was that it does zero comparisons, which were slower than straightforward crunching.
I think you can also use a large bitmap if all you want is sorted values.
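A minimal sketch of that bitmap idea (my own, again assuming distinct non-negative integers; you recover the sorted values but lose any attached data):

    def bitmap_sort(values):
        bits = bytearray(max(values) // 8 + 1)     # one bit per possible value
        for v in values:
            bits[v // 8] |= 1 << (v % 8)
        return [v for v in range(max(values) + 1)
                if bits[v // 8] & (1 << (v % 8))]

    print(bitmap_sort([42, 7, 19]))  # [7, 19, 42]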
BTW, Bloom filters work on a quite similar idea, and they come in handy in database setups.
This reminds me of an algo to find the 2 points farthest away in a graph.
Imagine the graph is drawn on a handkerchief.
Pick any vertex of the graph and let the handkerchief "fall" around it.
Now, with your other hand, pick the point which is the lowest (farthest away from the one in your hand). The handkerchief falls again around that new point.
Again, with your other hand pick the point which is the lowest. The two points in your hands are those farthest away.
(I didn't find a reference for this algorithm, so it may be wrong; just correct me if it is.)
Your algorithm doesn't really work for graphs - it actually treats the vertices as a set of points (the edges are not taken into account, you're just looking for the largest Euclidean distance between two points).
Unfortunately, it is not correct even then. Here is a counterexample:
  A
  |
B-C-D
You have points laid out like this, and: AC > CD=BC, AD=AB < BD. If you start from point C, you will pick A as the lowest point, then B/D, giving you the pair AB/AD as the result. The correct result is BD.
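A quick numeric check of that counterexample (the coordinates are my own choice satisfying the stated constraints, with BC = CD = 1 and AC = 1.5):

    from math import dist

    A, B, C, D = (0, 1.5), (-1, 0), (0, 0), (1, 0)
    points = [A, B, C, D]

    def farthest(p):
        return max(points, key=lambda q: dist(p, q))

    first = farthest(C)                 # lifting at C: A hangs lowest (1.5)
    second = farthest(first)            # lifting at A: B (or D) hangs lowest (~1.80)
    print(dist(first, second))          # ~1.80, the pair the algorithm returns
    print(dist(B, D))                   # 2.0, the true farthest pair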
The sorting doesn’t happen in constant time. The actual sort happens by hand as you take out the largest pieces first. Until you do that, they aren’t in linear order.
They're not in order, they're just in a data structure where the largest remaining element can be extracted in O(1) time. One can't say anything useful about any other elements of the list.
It's fun to think about: you have a structure which allows you to iterate all elements in order in O(n) time. Isn't this in many ways equivalent to the structure being "sorted", even if it doesn't have all the properties of a sorted array (like being able to get the nth element in O(1))?
For example, a sorted linked list behaves almost exactly like this spaghetti column.
Interesting remark. So, a "data structure where the largest remaining element can be extracted in O(1) time" can be turned into a sorted list in O(n) time.
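A toy model of that (my own sketch; building the column is where the real cost hides, here faked with an ordinary sort so only the extraction behaviour is shown): n extractions at O(1) each read the whole list out in sorted order.

    from collections import deque

    class SpaghettiColumn:
        def __init__(self, strands):
            self._column = deque(sorted(strands))   # the slam-on-the-table step
        def extract_tallest(self):                  # O(1) per call
            return self._column.pop()

    col = SpaghettiColumn([3, 1, 4, 1, 5])
    print([col.extract_tallest() for _ in range(5)])  # [5, 4, 3, 1, 1]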
So, under the assumptions, the overall algorithm does complete in constant time.
We know. No one is proposing this be used to actually sort. It's funny because it is a linear time algorithm, but incredibly inefficient and impractical in real life.
I've actually implemented a similar spatial-sort method within an analog computer for some industrial controllers, so it does have some real-life applications.
The traditional mechanical systems in automatic vending machines that tell correct coins from wrong or fake ones, and sort them accordingly, are probably of the same class of mechanical "sorting" (or rather categorization, in this case). "Mechanical coin acceptor" appears to be the established term for them.
Related: One of my favorite realizations back in college was that insertion sort in real life is an optimal sort. The inefficiency in computers is that you need to slide all of the values down a place one by one as you insert numbers. In real life, the values just get shoved down in parallel.
Helped me understand why people default to such an inefficient algorithm when learning about sorting.
> The inefficiency in computers is that you need to slide all of the values down a place one by one as you insert numbers.
You don't need to do this if you use a doubly-linked list, yet the algorithm is still quadratic, so this cannot possibly be the reason why it is quadratic. It is quadratic because you are (in the worst case) comparing every element in the list with every other element in the list.
Sorry mate but that’s just plain wrong... What takes quadratic time in insertion sort is not the insertions but all the comparisons. Otherwise we’d have linear time sort using linked lists.
Probably depends what you're doing with it, physically, but if you were running insertion sort manually it'd be very intuitive to do something like binary search to find the correct insertion point. That would make it O(n lg n).
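Something along these lines (my own sketch): the comparisons drop to O(n log n); on a computer the insert still shifts elements, so the array version stays O(n^2) in moves, whereas physically the shove-everything-down happens in parallel for free.

    import bisect

    def binary_insertion_sort(values):
        out = []
        for v in values:
            bisect.insort(out, v)   # binary search for the spot, then insert there
        return out

    print(binary_insertion_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]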
But if the list of numbers is small enough, our brains essentially find the insertion position in a single operation.
One could imagine making a CPU which has a similar single-cycle instruction, i.e. given a list of n < 8 numbers, find the insertion point of x. Similar to single-cycle adders and multipliers.
Yeah that's kind of cheating, but it would be close to how we do it.
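A rough emulation of that imagined instruction (my own sketch): compare x against every slot at once and count how many are smaller, which is roughly what a SIMD compare plus popcount would do in hardware.

    def insertion_point(slots, x):
        # slots: a short, already-sorted list (the n < 8 case above)
        mask = sum((s < x) << i for i, s in enumerate(slots))  # parallel-compare bitmask
        return bin(mask).count("1")                            # popcount = insertion index

    print(insertion_point([1, 3, 4, 8, 9], 5))  # 3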
I don't remember the specifics of it, and maybe there were added restrictions, but I do remember working it out to be O(n log n). Not something I've thought about in a long time.
Pretty big. You're comparing quicksort at O(n log n) to hand-sorting at O(n). A computer is about a billion times faster per operation, so the crossover is where log n = 1 billion, i.e. n = 2^1,000,000,000.
Reminds me of how you can compute pi in a hilariously inefficient yet very interesting way with a physics engine, by counting the collisions of 3 blocks bouncing off one another: https://youtu.be/HEfHFsfGXjs
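A toy version of that idea (my own sketch of the standard setup: a heavy block shoves a light one toward a wall, and the total collision count spells out the leading digits of pi):

    def count_collisions(digits):
        m1, m2 = 1.0, 100.0 ** (digits - 1)   # light block, heavy block
        v1, v2 = 0.0, -1.0                    # negative velocity = toward the wall
        count = 0
        while True:
            if v2 < v1:                       # heavy block still closing in: elastic collision
                v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                          ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
            elif v1 < 0:                      # light block bounces off the wall
                v1 = -v1
            else:                             # both moving away: no more collisions
                break
            count += 1
        return count

    print(count_collisions(3))  # 314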
Reminds me of Dijkstra's algorithm explained with threads and beads. The nodes are represented by beads and the weighted edges are represented by threads between the beads, with lengths equal to the weights of the corresponding edges. You put all the beads down on a table close to each other and start lifting one of the beads. The order in which the beads leave the table is the same as the order in which Dijkstra's algorithm visits the nodes.
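For comparison, a minimal Dijkstra sketch (my own, the standard priority-queue version) that returns the visit order, i.e. the order the beads would leave the table:

    import heapq

    def dijkstra_visit_order(graph, source):
        # graph: {node: [(neighbor, weight), ...]}
        dist, order, heap = {}, [], [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in dist:
                continue              # this bead already left the table
            dist[u] = d
            order.append(u)
            for v, w in graph.get(u, []):
                if v not in dist:
                    heapq.heappush(heap, (d + w, v))
        return order

    g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
    print(dijkstra_visit_order(g, "a"))  # ['a', 'b', 'c']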
> Start by holding the broom horizontally (so that the shaft is parallel to the floor) [...]
Derp, it's on the youtube: https://www.youtube.com/watch?v=B4axmjVFsK8
Stephen Fry...