
I will look into Farnam Street's blog, thanks.

> What he's preaching isn't science.

I don't buy into the necessity that everything has to be peer-reviewed in the old-fashioned way. There is peer review happening in the comments to some extent. I'm not keen on dismissing radical ideas as pseudoscience. They're just the outer fringe of hypotheses that need to be tested against reality, and as long as they are approximately humanist and enlightened, and don't contradict existing physics without mathematical backing (or disclaimers), I cannot see anything wrong with them. As a naturalist, I pretty much agree with everything I've read on LW so far, except for the parts I cannot judge (like hypotheses about physics), to which I assign weaker priors, and a few unconvincing pieces.

> Not to mention, and this is a bit of a pet peeve, I find that most LW people are too self-absorbed; I've literally seen a blog where the person who runs it "warns" the readers that what he writes is too complicated for people to follow.

I have not experienced that yet, but there are also a lot of people on reddit and HN whom I don't like; still, I differentiate within these communities between what is valuable and what is not.

> Most of what Yudkowsky says is extremely sci-fi, with no real basis in scientific fact, just stretching current technological progress to the point where his opinions on things (stuff like transhumanism, the singularity) can be justified.

At the risk of seeming indoctrinated to you, this is what I believe with high certainty: if Moore's law continues for another one or two decades, I think the singularity is a very real possibility. The human brain seems to be nothing more than a learning and prediction machine, nothing that transcends what we can understand in principle. Evolution did come up with complex organisms, but that complexity is limited by biochemical mechanisms and the availability of energy. In addition, nature often approximates very simple things in overly complicated ways, because evolution proceeds by incremental changes, not toward an ultimate goal that prescribes a design of low complexity. I also think that AI will very likely become superintelligent, and that this poses a tremendous risk in the 10-40 years to come (on the order of nuclear warfare and runaway climate change). By the time someone implements an approximately human-level intelligence, we had better have a good idea of how to control such a machine.



> There is peer review happening in the comments to some extent.

Lol. I guess we don't need college education either, then; there's education happening in the comments to some extent. We don't need traditional news media; there's news happening on Twitter to some extent. I could go on with analogous lines of reasoning.

Don't get me wrong, I'm not 100% in favour of the traditional education model either, but peer review exists for a reason. You and I are not experts in these fields. We rely on the expertise of people who have made it their business and life to study them with a rigorous method. Would you try homeopathy if it hadn't been completely rejected by doctors and scientists, but someone on a forum told you it worked for them? What if someone wrote a very long article with fancy words (like LW tends to do) explaining how and why it works (such articles exist, I assure you)? Would you try it then?

> I'm not keen on dismissing radical ideas as pseudoscience.

Sure, I'm not saying we should be against radical ideas. That's how scientific progress happens. I'm against LW ideas for which there is no basis in reality, as far as our current understanding of science goes.

> I differentiate within these communities between what is valuable and what is not.

Indeed. But I'd rather the community's entire existence not depend on bullshit.

> At the risk of seeming indoctrinated to you, ...

a) Keywords: "if", "seems". b) There are tons of assumptions in the scenario you laid out; if you can't see them, I'm sorry, but you're already too far gone. c) Watch some MIT lectures on computer architecture about how Moore's law has already radically shifted and is flatlining.

Basically, what you've done is precisely the kind of utter crap that LW perpetuates: "if x keeps happening", without providing any reason as to why that would be true. You make some ridiculous simplifications: "complexity is limited by ___", "nature often does ___ because ___". You don't provide any rational reason for why you think AI will be superintelligent or, even if it were, why that would be risky. You pick numbers out of a hat (10-40 years to come).

Yes, you look pretty well indoctrinated from where I'm sitting. But I hope you see the many (so many) flaws in that last paragraph of yours (it honestly made me laugh out loud :p)

Predicting the future is hard business -- be it the stock market predicting what happens tomorrow, or the weather forecast for next month. It's presumptuous and hella stupid to think you can predict where science and technology will be x years from now.

TL;DR: Stahp.


> Lol. I guess we don't need college education either, then; there's education happening in the comments to some extent. We don't need traditional news media; there's news happening on Twitter to some extent. I could go on with analogous lines of reasoning.

That's a straw man. I did say it's the fringe and that it needs to be tested. I didn't say one should replace the other. Peer review is essentially just mutual correction, and there is mutual correction happening in the comments, just not as thorough as when it's institutionalized. Most of it isn't new anyway; it just summarizes research results and draws logical conclusions from them (for example, this [1]). If it weren't all brought together on LW, I might not have found out about that wealth of knowledge for a long time.

[1] http://papers.nips.cc/paper/2716-bayesian-inference-in-spiki...

> a) Keywords: "if", "seems". b) There are tons of assumptions in the scenario you laid out; if you can't see them, I'm sorry, but you're already too far gone. c) Basically, what you've done is precisely the kind of utter crap that LW perpetuates: "if x keeps happening", without providing any reason as to why that would be true.

It's quite logical. My certainty referred to the implication; it is hard, of course, to come up with a prior for that 'if'. Exponential progress could continue in various ways, e.g. through more energy-efficient chips and scaling them up, 3D circuitry, molecular assemblers, memristors, or perhaps quantum computing. There are contradicting studies, so one should put P(Moore's law continues for another 10-20 years) at perhaps 50%. So, of course, all of this is hedged behind that prior (which I think confuses many people). The discussion is always concerned with the implications, which can be made with fairly solid reasoning by assuming the P(..) above to be 100%.
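
To make that hedging concrete, here's a minimal sketch in Python of how the conditional claim combines with the prior. Only the ~50% prior comes from the paragraph above; the two conditional probabilities are placeholders I made up for illustration:

    # Marginalizing the conditional claim over the Moore's-law prior:
    # P(AI) = P(AI | Moore continues) * P(Moore continues)
    #       + P(AI | Moore stalls)    * P(Moore stalls)
    p_moore = 0.50            # the ~50% prior discussed above
    p_ai_given_moore = 0.9    # placeholder for the "high certainty" implication
    p_ai_given_stall = 0.1    # placeholder: some other route might still work
    p_ai = p_ai_given_moore * p_moore + p_ai_given_stall * (1 - p_moore)
    print("P(human-level AI in that window) = %.2f" % p_ai)  # 0.50 here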

> You make some ridiculous simplifications: "complexity is limited by ___", "nature often does ___ because ___". You don't provide any rational reason for why you think AI will be superintelligent or, even if it were, why that would be risky.

Those are just basic assumptions which I find plausible, and which some respectable and knowledgeable people find plausible too (for example Stephen Wolfram and Max Tegmark; I am aware that appeal to authority is a weak argument, but both have publications I could also refer to). I agree that mentioning the complexity limitations didn't add any information: they don't tell us whether the brain is simple enough for us to understand, merely that its complexity is not infinite, so I should have left that out entirely. But this is not at all representative of the best content on LW; it was poor reasoning on my part. Bostrom's book Superintelligence gives a pretty good summary of why it is thought to be plausible.

> You pick numbers out of a hat (10-40 years to come).

That's based on estimates by IBM researchers and Ray Kurzweil of the processing power required for brain simulation. Simple extrapolation of Moore's law suggests we will reach that point roughly between 2019 and 2025. The 40 years is just my bet, based on what I know about brain models and the current obstacles in AI.
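
As a rough sketch of that extrapolation (Python; the starting figure is a placeholder of my choosing, and the 10^16 operations-per-second target is Kurzweil's commonly cited brain estimate, so shifting either moves the crossover year by several years):

    import math

    # Toy Moore's-law extrapolation with an assumed 18-month doubling time.
    start_year = 2014
    start_ops = 1e14      # placeholder: ops/sec available to a large project now
    target_ops = 1e16     # Kurzweil's rough estimate for human-brain simulation
    doubling_time = 1.5   # years per doubling (assumed)

    doublings = math.log2(target_ops / start_ops)
    crossover = start_year + doublings * doubling_time
    print("~%.1f doublings -> crossover around %d" % (doublings, round(crossover)))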



