
> It would be absurd to compare unary and binary addition relative to the same numeral length.

Why? The Turing machine has no concept of numeric values, it only knows about the length of whatever the input is.

> Adding one million to one million is simply much faster for binary addition than for unary addition, even if unary addition has better complexity relative to its own encoding than binary addition has to its encoding.

From a complexity standpoint, adding one million to one million is O(1), regardless of the encoding: a fixed input means a fixed number of steps.

> We are interested in complexity relative to the numerical value, not in the length of their respective encodings.

But the complexity relative to the numerical value is not even well-defined, since it _depends_ on the choice of the encoding.

> By the way, as the Wikipedia piece notices, the binary addition algorithm is O(log(n)) in time _relative to the value_, while unary addition is presumably O(n) relative to the value (just writing the numeral out alone takes O(n) steps). So the time complexity is in fact better.

It really is not better. Time complexity is _always_ measured relative to the length of the input. For binary encoding, the length of the input is O(log(n)), so taking O(log(n)) steps for the addition is linear in the input length, exactly like unary addition (which is basically just copying the input to the output).
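To make the comparison concrete, here is a minimal Python sketch of both algorithms (function names are mine): unary addition is plain concatenation, binary addition a ripple-carry pass over the bits. Each is linear in the length of its own input; they differ only relative to the value.

```python
def unary_add(a: str, b: str) -> str:
    """Unary numerals: the value n is written as n copies of "1".
    Addition is concatenation: one pass over the input, linear in
    the input length (which equals the value a + b)."""
    return a + b

def binary_add(a: str, b: str) -> str:
    """Ripple-carry addition on bit strings (most significant bit first).
    One pass over the bits, so linear in the input length, which is
    only O(log n) for the value n."""
    width = max(len(a), len(b))
    a_rev = a[::-1].ljust(width, "0")  # least significant bit first, padded
    b_rev = b[::-1].ljust(width, "0")
    bits, carry = [], 0
    for x, y in zip(a_rev, b_rev):
        s = int(x) + int(y) + carry
        bits.append(str(s % 2))
        carry = s // 2
    if carry:
        bits.append("1")
    return "".join(reversed(bits))
```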

> Moreover, only in this case does a comparison even make sense, since we are comparing the same values in both cases, whereas for the numeral case we would be comparing the lengths of two different types of numerals: apples to oranges.

But _the numeral is not the input_, its _encoding_ is. This is really the whole point: it is _precisely_ why you get different complexities for different encodings.



> Why? The Turing machine has no concept of numeric values, it only knows about the length of whatever the input is.

That's irrelevant. We are interested in addition, or multiplication, or whatever, which is an operation between numbers (values), and it can be faster or slower depending on which numerals (encodings) are used.

> But the complexity relative to the numerical value is not even well-defined, since it _depends_ on the choice of the encoding.

Yes, it depends on the encoding; no, it is of course still well-defined. You can measure two different algorithms, using two different encodings, relative to the same thing: the value.
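A hypothetical step-count model (my own simplification, not exact machine steps) shows what measuring both algorithms against the same value looks like:

```python
def unary_add_steps(n: int, m: int) -> int:
    # Copying n + m unary symbols to the output:
    # Theta(n + m) steps relative to the value.
    return n + m

def binary_add_steps(n: int, m: int) -> int:
    # One ripple-carry step per bit position, plus a possible final carry:
    # Theta(log(max(n, m))) steps relative to the value.
    return max(n, m).bit_length() + 1
```

Same operation, same values, wildly different step counts: adding one million to one million costs about two million steps in unary but only about twenty-one in binary.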

> complexity is _always_ measured relative to the length of the input

That's simply not true. There is even a Wikipedia article about complexity measured relative to the value, even if it calls it "pseudo".

> But _the numeral is not the input_, its _encoding_ is

A numeral is an encoding of a number.



