Hacker News | ndl's comments

Also interesting: if you look at the acknowledgements of Hauptmann's paper (at the end, just before references), he says, "I would like to thank Norbert Blum for carefully reading preliminary versions of the paper, for helpful remarks and discussions, for his guidance and patience and for being my mentor." This means Blum knew about this past attempt and presumably thought it correct enough to put on arXiv. I assume from the current paper that he has changed his mind since then.


Having worked in both quantum information and biology, I have a few things to say about this article...

1) The first paragraph really annoys me. Quantum computers don't solve NP-complete problems by trying every solution; this is a common misconception. In fact, it's widely believed that quantum computers can't solve NP-complete problems in polynomial time, though they may get a moderate (roughly quadratic, Grover-style) speedup over classical computers thanks to their superior searching capability.
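
To put a rough number on that "moderate speedup" (my own back-of-the-envelope, not anything from the article): the canonical case is Grover search, which cuts an unstructured search over 2^n candidates down to roughly sqrt(2^n) queries, a quadratic improvement that is still exponential in n.

    # Rough illustration: Grover-style search gives a quadratic speedup over
    # exhaustive classical search, which is still exponential in n.
    import math

    def brute_force_queries(n_vars):
        # classical exhaustive search over all 2^n assignments
        return 2.0 ** n_vars

    def grover_queries(n_vars):
        # Grover's algorithm needs on the order of sqrt(2^n) oracle queries
        return math.sqrt(2.0 ** n_vars)

    for n in (20, 40, 60):
        print("n=%2d  brute force ~ %.2e   Grover ~ %.2e"
              % (n, brute_force_queries(n), grover_queries(n)))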

2) The biological computer in the article also doesn't solve NP-complete problems in polynomial time. Rather, it proposes a highly parallel machine that divides runtime by a very large constant. The real claim is a low-energy parallel architecture that makes it easier to throw lots of processing elements at the problem; it does not change any complexity classes. The linked PNAS article states, "it is inherent to combinatorial and NP-complete problems (assuming P != NP) that the exploration of the entire solution space requires the use of exponentially increasing amounts of some resource, such as time, space, or material. In the present case this fundamental requirement manifests itself in the number of agents needed, which grows exponentially with 2^N. Effectively we are trading the need of time for the need of molecular mass." Unfortunately, the experiment hasn't quite gotten that far, since "the error rates of this first device are too large for scaling up to problems containing more than ∼ 10 variables." It may constitute a step in the right direction, though.
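
To make "trading the need of time for the need of molecular mass" concrete, here's a quick bit of arithmetic on that 2^N agent count (my own illustration, not from the paper): ~10 variables is only about a thousand agents, but modestly larger instances already need astronomical amounts of material.

    # Back-of-the-envelope: number of agents needed as the variable count grows.
    for n_vars in (10, 20, 40, 60):
        agents = 2 ** n_vars
        print("N=%2d variables -> ~%.1e agents" % (n_vars, agents))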

3) The popular science article (as opposed to the original research it links to) is probably too focused on trying to make this about P vs. NP and misses the actual accomplishment: that the researchers can control a microbiological system outside of their own brains well enough to compute with it. This is pretty cool and may be valuable progress in nano/biotech even if it doesn't end up being a viable computer.


In grad school I always heard Scott Aaronson harp on these issues too. He was fond of saying that a quantum algorithm's job is to arrange the amplitudes just so: wrong answers interfere destructively, which makes them highly unlikely to be the observed outcomes.

It's not about "trying everything in parallel" but rather shoving amplitude around to ensure that correct answers are overwhelmingly more likely to be observed.
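
A toy single-qubit illustration of that amplitude-shoving (my own sketch, not any particular quantum algorithm): send |0> through two Hadamard gates and the two paths into |1> pick up opposite signs and cancel, while the paths into |0> reinforce, so |0> is the only outcome ever observed.

    # Toy interference demo: two Hadamards in a row cancel the |1> amplitude.
    import math

    def hadamard(amp0, amp1):
        s = 1.0 / math.sqrt(2.0)
        return s * (amp0 + amp1), s * (amp0 - amp1)

    a0, a1 = 1.0, 0.0            # start in |0>
    a0, a1 = hadamard(a0, a1)    # equal superposition of |0> and |1>
    a0, a1 = hadamard(a0, a1)    # paths to |1> cancel, paths to |0> reinforce
    print("P(0) = %.3f, P(1) = %.3f" % (a0 ** 2, a1 ** 2))  # -> 1.000, 0.000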


This is spot on, and what they actually managed to build is super interesting.


AVAILABLE FOR FREELANCE - NYC or remote - full stack web developer with scientific data analysis experience

-Scala/Akka/Play/Liftweb/neo4j/Java

-Python w/ Django

-PHP, C/C++, Nvidia CUDA

-HTML5/CSS/JavaScript

See here for more, including contact info: http://fearofc.com/?page_id=19

https://github.com/Fear-of-C


I recently had some ideas about how to use the concept of locality-sensitive hashes with thread/worker pools that share lock-protected resources. Basically, it's useful when you want things that are going to take write locks on the same resource to end up on the same thread, since otherwise you are just needlessly blocking extra workers while they wait for other workers to finish. Also, in the case of multiple actors with independent task queues, you can send all tasks touching a particular resource to the same queue, so that they are processed in the order they were received.

There are probably better ways to do this in most cases, but I thought it an interesting idea.
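
For what it's worth, here is a minimal sketch of the routing part of the idea (hypothetical names, and a plain modulo over the resource key rather than a true locality-sensitive hash): each task carries a resource key, the key picks the worker queue, so writes to the same resource serialize on one worker and are handled in submission order.

    # Sketch: route tasks to worker queues by hashing their resource key.
    import queue
    import threading

    NUM_WORKERS = 4
    queues = [queue.Queue() for _ in range(NUM_WORKERS)]

    def submit(resource_key, task):
        # All tasks for the same resource land on the same worker's queue.
        queues[hash(resource_key) % NUM_WORKERS].put(task)

    def worker_loop(q):
        while True:
            task = q.get()
            task()   # no write lock needed: only this worker touches the resource
            q.task_done()

    for q in queues:
        threading.Thread(target=worker_loop, args=(q,), daemon=True).start()

    # Both writes to "account:42" go to one worker, in order.
    submit("account:42", lambda: print("debit account 42"))
    submit("account:42", lambda: print("credit account 42"))
    submit("account:7",  lambda: print("credit account 7"))

    for q in queues:
        q.join()

Within a single process, Python's hash() is stable, which is all the routing needs to stay consistent; a real version would presumably use a hash tuned so that "similar" resources land together.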


Hopefully, the hashes are physically secure and accessed in a way that would require a great deal of human effort to steal. If that is the case, then yes, you can control the login process.

The technique in the article is relevant when one already has the hashes and wants the plaintext (and, according to some here, it's still easy to mitigate even then); if you're guessing a web login, it's a different game.


These terse responses are always open to misinterpretation, especially by non-technical users who may not fully understand open source. That's the problem. "We have a backlog, but you're welcome to contribute to speed things up," or "I do this as a hobby, so take it or leave it," at least expresses some reasoning.


"Unfortunately, I don't have time to learn this system and work my way through your review process. I think I'll have to buy [insert (possibly proprietary) alternative] for now. Best of luck with the project."

For the record, I have written a couple patches and released open source code before. I say this after years of trying to convince all my friends to switch to Linux, and finally coming to terms with the fact that even I still have to keep a Windows boot for certain occasions.


As long as you are happy with the [insert (possibly proprietary) alternative] solution, I'm happy too.

Would you consider paying up to what [insert (possibly proprietary) alternative] costs for someone to write the patch for you? Would you consider gathering more people so you can fund your patch?


For paying a bounty, yes, assuming it can be arranged in a timely fashion.

For the latter, probably not. Assuming that for whatever reason I cannot write the patch myself, I do not have the time to start a new organization every time I encounter a new bug in some project. And I would guess that most ordinary consumers don't either.

This is not meant as a snarky response. Sometimes an open source project exists, and it's not for everyone. The people behind the project should realize that every feature they choose not to include causes the software's value proposition to disappear for some users. It is up to them what to prioritize, and up to me whether using their software is worth the time it takes to get a patch committed.

Of course, my operating assumption here is that they want to hear about bugs/feature requests, and so it is worth something to them that I bring it up in the first place. But I rarely make new feature requests. Usually this comes up when an evangelist tells me that I should switch to their platform, and I respond that it does not replace a current proprietary solution.


Compounding this problem, conventional security wisdom is that you should never acknowledge unsolicited email, because the spammer might be using a fake unsubscribe link to confirm your email address is real. So a system that requires manual unsubscription this way will actually punish accidentally subscribed users for following good protocol.

Furthermore, if the "confirmation" email winds up in a spam filter and the user never sees it, subsequent emails will still go out and probably be auto-marked as spam.


I have Firefox "3.6.16pre," through Ubuntu.

When I try to access the app, it tells me that my browser isn't supported and sends me to mozilla.com to download... Firefox 3.6.15

Version check bug?


Have you tweaked the User-Agent header? We just check for the 'Gecko' substring in it.


You should still check this carefully. It would suck to find out after you have started making money that you're about to get slammed with litigation.

