
I mean, isn’t spawning 1000 threads at least linear time? You could get it to log maybe but I’m not sure that code does that automatically.


Depends on what you mean by "time". Going by algorithmic complexity, it's at least linear, yes. Going by wall-clock time (the more important one for interview code), it's constant time.


For small N, the time to spawn the threads is inconsequential compared to the time spent waiting for the threads to exit. But by doubling the number of numbers to sort, you do double the (actual wall-clock) time it takes to spawn those threads. O(N + 10000000) is still O(N).
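For concreteness, here's a minimal sleep sort sketch in Python (not the code from the interview; the 0.05 s scale factor, the lock, and the function names are my own choices). The spawn loop alone is already linear in N, while the wall-clock wait is governed by the largest value, not by N:

```python
import threading
import time

def sleep_sort(nums):
    """Sleep sort: one thread per value; each sleeps proportionally to
    its value, then appends it to the result. Spawning is O(N); the
    wall-clock wait is roughly 0.05 * max(nums) seconds."""
    result = []
    lock = threading.Lock()

    def worker(n):
        time.sleep(n * 0.05)      # delay proportional to the value
        with lock:
            result.append(n)

    threads = [threading.Thread(target=worker, args=(n,)) for n in nums]
    for t in threads:             # this loop alone is already O(N)
        t.start()
    for t in threads:             # wall time ~ the longest sleep
        t.join()
    return result

print(sleep_sort([3, 1, 2]))      # usually [1, 2, 3]
```

Note that this is only "usually" correct: values whose sleeps land closer together than the scheduler's jitter can come out swapped.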

Not to mention for the OS to handle N threads would likely take some kind of non-linear time, at least N^2 if I had to take a shot in the dark. But I imagine it'd depend on the OS and how the hardware handles interrupts. Even if the thread is sleeping it still takes resources to schedule it


Linux's CFS is O(log N) to insert; O(1) to switch tasks (N being number of threads). [1] So O(N log N) it seems.

[1] https://en.wikipedia.org/wiki/Completely_Fair_Scheduler#Algo...


You're forgetting that algorithmic complexity deals with asymptotic behavior.

An algorithm that takes N inputs, does N operations, and then sleeps for 10 seconds is not constant time, because in the asymptotic case the N iterations will take longer than the sleep.
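A toy version of that argument (with the sleep scaled down from 10 seconds so it actually runs; the function name is made up):

```python
import time

def sum_then_sleep(nums):
    """Does len(nums) additions, then a fixed sleep. Total cost is
    O(N) + O(1): for large enough N the loop dominates the constant
    sleep, so the function is linear time, not constant time."""
    total = 0
    for n in nums:       # N operations
        total += n
    time.sleep(0.01)     # fixed delay, independent of N
    return total
```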


I don’t understand how wall clock time is constant.

Sorting [5, 50] takes less wall time than [6, 60].


The constant is the biggest number in the array


If runtime depends on the input, it’s not constant time.


It's constant with regard to the length of the input array, which is the usual metric for sort algorithm performance. That is, d(runtime)/d(len(input)) can be constant even though d(runtime)/d(max(input)) is not, provided you assume that len(input) is independent of max(input). (That assumption is inconsistent with the input having uniformly distributed random members, which is probably the more common assumption.)


It's constant with respect to the number of elements, just not the maximum magnitude of those elements.


I don't see how it is. Different elements, even the same number of them, give a different time. There's no single constant time between [0, 1] and [0, 10], both having two elements.


[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

is just as fast as

[9]

If you really want to make it strictly constant time, just append INT32_MAX to the end of the array before sorting, and then pop it after.


Oh yeah, because assuming a maximum value in your domain doesn't move all algorithms into constant time and render complexity theory void.


It doesn't though. How would merge sort become constant time if you assume a maximum value? It's also a joke...


The main "value" in the domain of sorting problems is the number of elements in your collection.

A subproblem also considers the elements to be integers; then the elements become another "value" in the domain. (But in general, sorting problems only need their elements to be comparable, not necessarily integers.)


It’s not constant; the algorithm runs in pseudo-polynomial time.


I don't think that sleepsort is any more constant time than any other sorting algorithm. For sleepsort to be constant time, you must

   - have an upper bound on the input (the largest number)
   - not count some arbitrary things, like reading and processing the input, or spawning threads
But if you are allowed to do these things, all the other sorts become constant time too. (I believe only one of them is enough.)


It isn’t really linear. Sleeping is a heap insertion which takes O(log n).


You still have to read all the inputs, which is at least linear time in the size of the input.


Each sleep is log n for the heap insertion, so the overall complexity is at least n log n
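One hedged way to picture that claim: model the scheduler's sleep queue as an explicit min-heap keyed by wake-up time (a simplification of real kernel timer wheels). Under that model, sleep sort degenerates into plain heap sort:

```python
import heapq

def sleep_sort_as_heap(nums):
    """Model an idealized scheduler: every sleep(n) is a heap insertion
    keyed by wake-up time, and every wake-up is a pop. N pushes and N
    pops at O(log N) each give O(N log N) total."""
    timers = []
    for n in nums:                 # one "sleeping thread" per value
        heapq.heappush(timers, n)  # insertion: O(log N)
    order = []
    while timers:                  # wake-ups fire in timestamp order
        order.append(heapq.heappop(timers))  # pop: O(log N)
    return order

print(sleep_sort_as_heap([3, 1, 2]))  # [1, 2, 3] -- it's just heap sort
```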


Well, they were indeed sleeping through the interview. Otherwise I don't know how one could possibly claim that a program whose first statement is a sequential loop over all values of the input has "constant time" asymptotic complexity.


As with everything in algorithm analysis, it depends on your model of computation. That is, it depends what you're counting.

For sorting (comparison sorts), one fairly typical model is to just count how many comparisons you do. Which, this does none (kind of, not really, they're really just implicit or hidden in the OS).

It's just playing around with computational models, not a serious proposal. It's either a joke, a troll, or a parable about the need to be careful about what metric you're measuring. Or some combination of those.


> Which, this does none

Neither does bucket sort, and nobody has claimed that bucket sort is constant in time.

We only count comparisons in typical sorting algorithms because we know that the overall complexity is proportional to it.


Yeah, that's kind of the point I was trying to make. It's an inappropriate computational model, so the answers you get out aren't meaningful.




