P = NP (arxiv.org)
46 points by Anon84 on Dec 4, 2013 | 50 comments



It’s fascinating how this sort of thing keeps attracting interest. Here’s a list of 100 proofs that P=NP or P≠NP, which includes this one: http://www.win.tue.nl/~gwoegi/P-versus-NP.htm

There’s nothing to make this attempt more worthy of interest than the other 99, as far as I know.


Which makes me wonder: how do you filter these? In physics it's common practice to ignore papers about perpetual motion machines, but here we have a topic that is somewhat easy to write about, both well and badly, while still being highly relevant.


Here are two lists of ways to filter. They're heuristics, but they work well.

One for general mathematical proofs: http://www.scottaaronson.com/blog/?p=304

One specific to P != NP: http://www.scottaaronson.com/blog/?p=458


Two notes on these:

1. On the first one, note also numbers 11 and 12 Scott mentions in the comments.

2. The second is not always so specific to P vs. NP (and indeed has some overlap with the first -- #1 on the second list is a stricter version of #3 from the first). #3 and #4 for instance could be applied to lots of hard open problems, #6 to really any hard open problem, and #5 is basically fully general.


A proof that P=NP without an accompanying program that solves SAT or another NP problem in polynomial time does not count to me :)


I agree

And let me say, I don't even care about a paper. Sure, it's useful, but it shouldn't be the focus.

A simple implementation that solves any of the NP problems in polynomial time is a much better proof than any article.

(Of course, those who prove P != NP still can't do that.)


I beg to disagree with both of you.

You are kind of disrespecting the importance of theoretical research, which is a big part of any field worth researching.


Oh the theoretical aspect is important.

But applying it is important as well. Of course, most quantum computing algorithms have not had a chance of being implemented; it's not always possible.

Or, as Knuth put it "Beware of bugs in the above code; I have only proved it correct, not tried it."

But if someone comes up with a paper showing a new algorithm but doesn't come up with an implementation of it when it would be straightforward to do so, I begin to doubt this person.

Most other fields don't have the luxury of easy testing.


A program isn't a proof. It's perfectly possible to solve restricted cases of an NP-problem in polynomial time. Doing so in enough of the test cases people throw at it to fool people for a while would itself be a huge accomplishment, but it would still be entirely possible that P != NP while I'm holding a program that has solved every NP problem I have thrown at it in P time (especially if that P is large).
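To make the point about restricted cases concrete, here is a minimal sketch of my own (nothing to do with the paper under discussion): Horn-SAT, the restriction of SAT to clauses with at most one positive literal, is solvable in polynomial time by forward chaining, even though general SAT is NP-complete.

    # Horn-SAT: every clause has at most one positive literal.
    # Literals are nonzero ints: +k means variable k, -k its negation.
    def solve_horn_sat(clauses):
        """Return a satisfying dict {var: bool}, or None if unsatisfiable."""
        true_vars = set()
        changed = True
        while changed:                       # at most one productive pass per variable
            changed = False
            for clause in clauses:
                pos = [l for l in clause if l > 0]
                neg = [-l for l in clause if l < 0]
                if any(p in true_vars for p in pos):
                    continue                 # clause already satisfied
                if all(v in true_vars for v in neg):
                    if not pos:
                        return None          # purely negative clause violated
                    true_vars.add(pos[0])    # this variable is forced true
                    changed = True
        variables = {abs(l) for c in clauses for l in c}
        return {v: v in true_vars for v in variables}

    # (x1) and (not x1 or x2) and (not x2 or not x3)
    print(solve_horn_sat([[1], [-1, 2], [-2, -3]]))
    # -> {1: True, 2: True, 3: False}

A program like this happily "solves SAT in polynomial time" on every Horn instance you feed it, which is exactly why passing test cases is not, by itself, a proof of anything about P vs. NP.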


> A simple implementation that solves any of the NP problems in polynomial time is a much better proof than any article.

That completely misses the point of P vs. NP, which is not about programmability but a problem entirely of theoretical nature (and interest).

- O(n^(Graham's number)) is in P. (If you can solve TSP in that time, you've proven P = NP.)

- Cracking AES-256 is even easier than P: it's O(1).
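To spell out the second bullet with a toy sketch of my own (the "oracle" argument stands for a hypothetical key-checking function, not any real attack or API): brute force over a fixed 256-bit key space is O(1) precisely because the loop bound is a constant that does not depend on the input length.

    def brute_force_fixed_key(ciphertext, oracle, key_bits=256):
        """oracle(key, ciphertext) -> True if the key decrypts the ciphertext."""
        for key in range(2 ** key_bits):    # a constant 2**256 iterations,
            if oracle(key, ciphertext):     # independent of len(ciphertext)
                return key
        return None

This is also the sense in which "TSP for 100 cities is O(1)" further down the thread: asymptotic notation says nothing once the instance size is pinned.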


> "That completely misses the point of P vs. NP, which is not about programmability but a problem entirely of theoretical nature"

Yes, there is an important theoretical background, but IF you prove P == NP you should go for it (of course the exponent could be a huge number, but that's unlikely).

> Cracking AES-256 is even easier than P: it's O(1)

Well, you can take any fixed-size instance and call it O(1); TSP for 100 cities is O(1) as well.

But if the 'theoretical work' is merely to win grants and ensure tenure yeah, just don't bother.


You would benefit from both; otherwise you might have doubts that the NP problem that was solved was ever actually NP-complete.


He does give pseudocode in the paper for transforming an NP problem M_NP into a P problem. It should be straightforward to apply this with M_NP = SAT. (Whether it actually works is another matter...)


This was already posted here. It is fatally flawed in that, although the theoretical computing time is polynomial, the memory required grows exponentially.


I'm confident that the paper is indeed flawed. However, a Turing machine can neither write to nor read from an exponential number of locations in polynomial time, so the flaw will be something else.


This statement is not quite coherent. If there is an exponential memory growth, then there is exponential computing time because memory growth doesn't happen magically.


Memory growth is not a consideration in the academic proof; the Turing machine tape is infinite in both directions.


Turing machines may have infinite memory, but they can also only write one cell per step. So a polynomial-time TM can't use superpolynomial memory.


In addition to what siblings said, remember that P is a subset of PSPACE.


I think the statement you meant to remind us of is that NP is a subset of PSPACE.


According to the paper under discussion, the two statements are equivalent. Your version is probably more graceful, however.


Since P refers to polynomial time, it's not clear to me that a memory requirement is relevant.


A definition of P indeed does not require a restriction on memory use. But the definition implies a restriction on memory: a polynomially bounded (in time) machine cannot use exponential memory (because reading 2^n cells on a tape requires 2^n time steps).
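For what it's worth, the standard way to state the reasoning above (my formalization, not anything from the paper) is the time-versus-space containment for deterministic Turing machines:

    % A machine that halts within p(n) steps can visit at most p(n)
    % distinct tape cells, since the head moves one cell per step.
    \mathrm{DTIME}\bigl(p(n)\bigr) \subseteq \mathrm{DSPACE}\bigl(p(n)\bigr)
    \quad\Longrightarrow\quad \mathsf{P} \subseteq \mathsf{PSPACE}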


So an exponential memory requirement implies that the theoretical time is not polynomial after all? I'm not sure.

It seems possible that I would need to have exponential memory but not necessarily to access it.

Is it true to say that memory not-accessed is not-needed? It's believable but not immediately obvious to me, and if it's false then there's no two-way implication.

Edit: apparently NP is a subset of PSPACE, so fair enough.


Not sure what you mean by "memory requirement" and "theoretical time". If you don't access the memory, what do you mean by saying you "need" it? If a Turing machine (or a computer program, for that matter) uses, i.e., "visits", exponentially many memory cells, then, indeed, its computing time cannot be polynomial. But note that every Turing machine theoretically has infinite memory. The question is how much of it is actually used. PSPACE consists of those problems that can be solved by a Turing machine that uses no more than p(n) memory, where p(n) is some polynomial in the input length n. Note that, for PSPACE, time is unrestricted. So obviously P and NP are subsets of PSPACE, but it is unknown whether they are proper subsets.
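Putting the containments mentioned above on one line (standard textbook facts, not anything from the paper under discussion):

    \mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PSPACE} \subseteq \mathsf{EXPTIME},
    \qquad \mathsf{P} \subsetneq \mathsf{EXPTIME}
    % The time hierarchy theorem gives the strict inequality, so at least
    % one of the three containments must be proper; we just don't know which.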


The best intuition I've read for why P = NP can't possibly turn out to be true is that, if it were, you could write a piece of music as easily as you could appreciate it.

Also, I can't imagine the answer to something as fundamental as this coming out of a hacky solution. There's probably a more elegant, natural way to prove it. And by natural I mean maybe they should start looking outside Turing's theory.


I've read that one too, but I don't particularly like it. P = NP implies only that all problems which can be verified in polynomial time can also be solved in polynomial time, not necessarily that they can be solved as quickly, nor that they can be solved easily.

Perhaps writing a piece of music is also done in polynomial time, but it requires more training (a more advanced algorithm)? Or, perhaps, the time complexity is x^3 for writing, instead of x for appreciating.


Agreed, it is a bit sensationalist. But what if someone proves it and then we realize that even though it can be done in polynomial time, there's some other constraint (like "x^N but N has to be super big") that renders the result impractical?

I think once someone proves it, it will be obvious in hindsight as a lot of people are just ignoring the isomorphisms in nature.


Yup, doing a bit of handwavy math also demonstrates this: you are comparing something like x^N - x^M with 2^x - x^M. The second will always grow big rapidly, while the first could stay small, but it can also grow arbitrarily large. In fact, it is bound to do so for N > M, and it will do so quite rapidly if N >> M.
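A quick numeric check of that hand-waving (my own snippet, with an arbitrary exponent of 1000 standing in for N): a polynomial with a huge exponent dominates a genuine exponential for every input size anyone would realistically run, even though the exponential wins in the limit.

    # Compare n**1000 (polynomial, absurd exponent) with 2**n (exponential).
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(n, "polynomial still bigger:", n ** 1000 > 2 ** n)
    # Prints True for n up to 10_000; the exponential only takes over
    # somewhere between 10_000 and 100_000.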


> The best intuition I've read for why P = NP can't possibly turn out to be true is that, if it were, you could write a piece of music as easily as you could appreciate it.

Your intuition isn't always correct; there is algorithmic music that can sort of do this already. And just because it seems weird doesn't mean it's impossible, just that no one has figured it out yet.


This is why I said intuition. What I meant is that P = NP would suggest, for instance, that the amount of effort needed to appreciate a brilliant piece of music (e.g. Mozart, not just any music) is in the same "space" as the amount of effort needed to create it.

You can find many examples of things in nature that are "hard to solve" yet "easy to verify".


Say we find an algorithm to solve SAT in O(x^1000). P would equal NP, and yet the set of problems we currently label NP-complete might still be substantially harder to solve than to verify.


Thus rendering the theory rather "useless". What some people are trying to do by attempting to prove P = NP (and this is the main source of criticism) is to find a way to "hack" nature.


Theory often proves fruitful in unexpected ways.


> if it were, you could write a piece of music as easily as you could appreciate it.

Yeah, but maybe it would still be exactly 1,000,000 times as hard.


Meaning you wasted a shit ton of time trying to prove something obvious :)


Well no, we've destroyed constant factors of greater than a million before.


This is also a bad analogy because human beings tend to be extremely good at solving, or at least approximating close solutions to, NP-complete problems (on small instances).


When I'm drifting off to sleep, I frequently hallucinate polyphonic songs. Real time composition. P=NP? :)


That is a brilliant way of putting it, thanks!


I would wait until the paper has gone at least a whole non-summer month without further corrections. Well, it looks like there were no corrections in October, but maybe he took his vacation then?

Give it some more time, I suggest.


Can anyone comment on how credible this is? It seems to have been submitted many times.


Every once in a while someone publishes a paper that seeks to prove that P equals NP, but so far none have been conclusive. Given that track record and the fact that all existing evidence points to P not being equal to NP, I would be very cautious with this paper.


I'm not going to dig into the paper, but I note that the abstract claims a constructive proof. I sort of assume that anyone with such a proof could organize a convincing demonstration by letting a disinterested observer provide a series of random numbers while clicking a stopwatch.
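Something like the following is what I have in mind (a rough sketch of my own; "claimed_sat_solver" is a hypothetical stand-in for whatever algorithm the constructive proof is supposed to yield): let the observer seed the random instances, then watch how the wall-clock time grows with the instance size.

    import random
    import time

    def random_3sat(num_vars, num_clauses, rng):
        """Random 3-SAT instance: clauses of three literals (+k / -k)."""
        return [[rng.choice([1, -1]) * rng.randint(1, num_vars) for _ in range(3)]
                for _ in range(num_clauses)]

    def timing_demo(claimed_sat_solver, sizes=(50, 100, 200, 400, 800)):
        rng = random.Random(2013)                         # seed chosen by the observer
        for n in sizes:
            instance = random_3sat(n, int(4.2 * n), rng)  # roughly the hard clause/variable ratio
            start = time.perf_counter()
            claimed_sat_solver(instance)
            print(f"n={n:4d}  time={time.perf_counter() - start:.3f}s")

Of course, as pointed out elsewhere in the thread, timings on finitely many instances can never distinguish a large polynomial from something worse; it would only be a sanity check, not a proof.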


But only when N is 1 or P is 0.


Seems like a solid argument to me for peer-reviewed publications.


Even though this content is quite old, the idea of linking it here these days is quite clever. Just tell me how many of you did not immediately think "Oh crap, my RSA is doomed!"


Cryptography? Really? Wouldn't this mean you could create absurdly intelligent AIs and solve pretty much any optimization problem?


How did this go from being #3 on the front page of Hacker News to non-existent even two or three pages back?


Moderation





