Totient's comments (Hacker News)

Scott Aaronson argues that, based on what we know about quantum mechanics, Wolfram's "long-range thread" cannot reproduce special relativity and Bell inequality violations:

www.scottaaronson.com/papers/nks.ps


It seems to me that Aaronson's argument can be easily bypassed by adding non-local hidden variables in the form of a loosely connected network that essentially injects pseudo-randomness into the approximately flat Minkowski spacetime network. Note that Bell's inequality does not rule out non-local hidden variables as a viable explanation of quantum mechanics.


I'm reminded of the classic Charles Babbage quote:

On two occasions I have been asked, "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

I'd argue for a pretty high level of transparency in the process - I would like to see whatever classifiers being used open-sourced, for example. And I'd want to know where people are drawing the training data from.

But the nice thing is that the tech industry has a large population of people very sympathetic to transparency, and with a history of a culture supporting it. Quite frankly, I think the legal community has a lot more to learn from the open-source community than the other way around.


This makes perfect sense. I know people who would be terrified at the thought that their organs could be harvested while they might yet recover, but also people who would be horrified that their perfectly good organs are just lying there, unused, after their brain has died, with artificial respiration/circulation keeping some of the rest of their body going.

I think a sane default is to require opt-out for organ donation after brain death (what Europe does now, I believe) with an opt-in "sliding-scale" for the rest of the possibilities.


I agree, and disagree with the author.

> "Vision 1: CONNECT KNOWLEDGE, PEOPLE, AND CATS."

> This is the correct vision.

I would say this is a correct vision, which I happen to be in favor of.

But I don't understand why it has to be an "us vs. them" dynamic between this and the "BECOME AS GODS, IMMORTAL CREATURES OF PURE ENERGY LIVING IN A CRYSTALLINE PARADISE OF OUR OWN CONSTRUCTION" vision.

Even in strawman form, I'm unapologetically in favor of it. I do want it to go right. I don't think it's going to happen anytime soon - I think human-level AI by 2075 [1] is wildly optimistic - but I hope it does happen eventually, without wiping out everything we hold dear.

> I'm a little embarrassed to talk about it, because it's so stupid.

My first thought was "Try describing the internet to someone 100 years ago - your claim that there is going to be an interconnected global network of electricity-powered adding machines that transport pictures of moving sex by pretending they are made of numbers is going to sound stupid."

But if you want to make fun of Elon Musk because "Obama just has to sit there and listen to this shit", what about:

Shane Legg: "If there is ever to be something approaching absolute power, a superintelligent machine would come close. By definition, it would be capable of achieving a vast range of goals in a wide range of environments. If we carefully prepare for this possibility in advance, not only might we avert disaster, we might bring about an age of prosperity unlike anything seen before."

Stuart Russell: "Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures."

Or freaking Alan Turing: "There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…At some stage therefore we should have to expect the machines to take control."

> But you all need to pick a side.

I don't want to pick a side. I'm in favor of connecting the world now, making it better for everyone. I'd also like to see the world get much better in the more (hopefully not too) distant future.

[1] http://www.nickbostrom.com/papers/survey.pdf


>My first thought was "Try describing the internet to someone 100 years ago - your claim that there is going to be an interconnected global network of electricity-powered adding machines that transport pictures of moving sex by pretending they are made of numbers is going to sound stupid."

Because you're limiting yourself to a single sentence. How about this: "The internet is essentially an expansion of the concept of a telegraph - it is already possible to pass any type of information between two distant people almost instantly. The internet combines this system of information transfer with machines that know how to respond to messages without human intervention - under the condition that the messages follow a certain set of rules. By defining these rules in advance, the telegraph system, which up to now only transferred individual letters, can be co-opted to send things such as pictures and films. The machine can tell who is speaking with it, and perform tasks for that particular person upon request."

This does not sound stupid - it might take some explaining, but anyone with a brain between their ears can understand the basic concept of fast information transfer. They might not think of all of the possible uses of such an invention right away, but I bet that they would easily understand the uses if they were explained in terms of things these people already know.


Well, Totient's main fault was that he didn't ask for a big enough time interval.

Make that 150 years, and you couldn't talk about information running through the lines, because people's understanding of "information" was a completely different (and much less powerful) concept.

Make that 200 years, and people who were trying to make machines react to well-formed messages were as criticized as AI proponents are now.


Satire aside, I think a very short addition to the last line holds a lot of truth:

"A hash is simple. A hash is fast. A hash is all you need to start with".

I can think of plenty of good reasons to stop using a map/hash/associative array in code, but I can't think of very many good reasons not to start coding with associative arrays as your default data structure. If there's a performance/memory problem, fix it later. I've seen a lot more code suffer from premature optimization than I've seen suffer from using data structures that were a little too inefficient.
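As a minimal sketch of the point (word counting is my own made-up example, not from the article): a plain dict gets you a working solution immediately, and you only swap it for something fancier if profiling says you must.

```python
# Start with the default data structure -- a dict/hash -- and move on.
# Optimize only if this ever shows up as an actual hotspot.
from collections import defaultdict

word_counts: dict[str, int] = defaultdict(int)
for word in "the quick brown fox jumps over the lazy dog the".split():
    word_counts[word] += 1

print(word_counts["the"])  # -> 3
```

If this ever becomes the bottleneck, the interface stays the same while the backing structure changes, which is exactly why it's a safe default.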


(Theoretically, at least) the ability to exercise a line-item veto would help. The rider to build a new bridge to nowhere would get vetoed, and the rest of the bill would go through.


It would help in that way, but it could hurt in that a popular bill could be neutered at the desk to remove its teeth or any essential protections.


I agree. It would be nice if everyone was exposed to programming, just to get a feel for what is possible/plausible - plus the benefits of learning how to think through a problem logically.

But just as I don't consider someone fatally unknowledgeable about biology if they can't quite remember what, say, the liver does, I don't consider someone fatally unknowledgeable if they don't know how to write a computer program.

It would be nice if everyone tried programming at some point, but the idea that everyone should know how to is kind of silly.


Looks like a system to set up assurance contracts for funding software. Glad to see more people trying to make this sort of thing work.

What's interesting is that you can actually make pledging a dominant strategy, if you find someone willing to put up enough capital. Roughly speaking, dominant assurance contracts work along the lines of "Everyone who pledges will receive X amount of money if the fundraising fails to hit the appropriate threshold." For anyone who would stand to gain from the public good, the outcomes are now: "Pledge: either the project will be funded or I will get money" vs. "Don't Pledge: either the project will be funded or I will get no money."

http://mason.gmu.edu/~atabarro/PrivateProvision.pdf
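The payoffs can be tabulated in a toy sketch (the value, cost, and bonus numbers here are made-up illustrations, not parameters from the Tabarrok paper, and the pledger's own cost is made explicit):

```python
# Toy payoff table for a dominant assurance contract.
# value = what the agent gains from the public good existing,
# cost  = the agent's pledge, bonus = payout to pledgers on failure.
def payoff(pledged: bool, funded: bool,
           value: int = 100, cost: int = 60, bonus: int = 5) -> int:
    if pledged and funded:
        return value - cost   # good is provided, pledger pays in
    if pledged and not funded:
        return bonus          # fundraising failed: pledger keeps the bonus
    if funded:
        return value          # free ride on the provided good
    return 0                  # no good, no bonus

# The bonus is what tilts the "fundraising fails" branch toward pledging:
assert payoff(True, False) > payoff(False, False)
```

The interesting branch is failure: a non-pledger walks away with nothing, while a pledger walks away with money, which is what changes the incentive to sit out.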


That's not quite right - it's "Pledge: either the project is funded and I spend money, or I get money", vs the "Don't Pledge" you give.

It's still interesting, to be sure.


Any examples of this model being applied in the real world?


There was a followup using actual shocks on a puppy. Similar results: http://www.holah.co.uk/files/sheridan_king_1972.pdf


Hmm... I kind of want to figure out a way to make CAPTCHA-coin work now. Cryptocurrency mining for the people!

I just can't think of a good way to generate CAPTCHAs (or something similar) from a block in a fashion that would give human beings a significant edge over computers.

