
The LessWrong folks aren’t obviously better or worse at calculating priors than anyone else. The “problem” is that their hobby is spending their free time considering outlandish scenarios, inventing arbitrary assumptions related to such scenarios, drawing questionable conclusions, and then convincing themselves that because they used logic and math, their analysis must be correct. Plenty of other folks who spend time on similar activities with a less pseudo-rigorous framing end up as conspiracy theorists or occultists; belief in AI overlords ruling humanity, the technological singularity, cryogenics, or impending 1000-year human lifespans is far from the kookiest thing people convince themselves about.


Oh come on. That's how you're supposed to use math. To aid your thinking. Without it, and considering "outlandish scenarios", we would not have any scientific progress.

Also, this is one strange thing - any time someone asks us (the STEM crowd), "what will I ever use math for in my life?", the default answer seems to be, "it's about having more tools for thinking, and greater clarity of thought; it'll make you smarter". But then, some of us turn around and refuse to acknowledge that people who actually learn math and try to apply it may be getting those promised results. Whether it's LW people, programmers, engineers or scientists, the moment it matters, the default conclusion is that math gives nothing.


What? Who said “math gives nothing”? I spend most of my day building things out of math. I think math and scientific inquiry are basically the most important tools invented/popularized in the past 1000 years.

The lapse here is not math, but rather spending lots of attention on abstract thought disconnected from any kind of reality check. Of course, there’s nothing inherently wrong with speculating sans evidence about the future; it’s generally a harmless hobby. In the best case it makes for fun SF novels. Convincing yourself, still without direct evidence, that your speculation reflects truth implies that something has gone off the rails in the reasoning process, however.

Using statistical analysis to understand real causal relationships in areas we have real data about is damn hard, and even plenty of people who are highly trained as statisticians screw up all the time. Academic fields like comparative politics (to take an example I spent a fair amount of time studying) are rife with poor conclusions drawn from bad analysis. The LessWrong folks are hardly unique in applying logic poorly. But they do tend to tackle more speculative questions and convince themselves more firmly of their conclusions (at least, such is my impression as an outsider).


I think this is a common problem when people working in fields that have somewhat accurate mathematical models look at fields that don't. They often don't realize how hard it is to create an accurate mathematical model for many situations, and assume that the other fields don't have them because the individuals who work in said fields aren't as good at math.

Which is why every so often you'll get things like a physicist spending a couple months studying economics in their free time and deciding that they can now unlock the secret to economics which has eluded economists.

This xkcd comic sums it up well:

https://xkcd.com/793/


The more you extrapolate, the more frequently you will need to adjust your future predictions because of errors in your initial measurements. This goes for any kind of extrapolation (for instance: plotting a course on a map), but it goes even more for extrapolating the future from limited evidence present today. Your 'best guess' might be off by many orders of magnitude if the evidence you have today is only loosely related to the future in terms of importance, and if that evidence is not nearly as independent as you currently perceive it to be.

This can lead to your best guess based on available evidence being about as good at predicting the future as randomness in spite of all the apparent effort at making the predictions mathematically sound.
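To make the compounding concrete, here is a minimal sketch of the course-plotting analogy (my own illustration with made-up numbers, not anything from the comment above): a tiny error in the initial bearing becomes a large positional error the further out you extrapolate.

    # Illustrative only: how a small error in an initial measurement grows
    # as you extrapolate further (the course-plotting analogy above).
    import math

    heading_error_deg = 1.0  # assumed error in the initial bearing
    for distance_km in (1, 10, 100, 1000):
        # Cross-track error grows roughly linearly with distance travelled.
        off_course_km = distance_km * math.tan(math.radians(heading_error_deg))
        print(f"after {distance_km:>4} km: about {off_course_km:.1f} km off course")

With a linear process like this the error merely scales with distance; when the thing being extrapolated feeds back on itself, as forecasts about the future tend to, the divergence is usually much faster.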


> because they used logic and math, their analysis must be correct

> belief in AI overlords ruling humanity, the technological singularity, cryogenics, or impending 1000-year human lifespans

I don't think anyone on LW believes these are 'facts' that are 'correct'. LW commonly thinks of these ideas as risks / opportunities that might happen (except for the 1000-year human lifespans, which is a new idea to me), and that it's probably worth investing minor amounts of money in case they do. In the case of cryonics, that's about $20/month for insurance that covers it; in the case of AI safety, that's a couple of people doing research on the problem, and some amount of money sent their way.

The way you phrase it seems like LW people are certain that cryonics will definitely let them be revived after death. That's definitely not the case - in fact, IIRC, on a survey a year or two ago, LWers subscribed to cryonics assigned a lower probability to it working than ones not subscribed. It's not a cargo cult.


Substitute “plausible” for “correct” if you want to give them the benefit of the doubt. Either way it’s all so speculative as to be basically pure fiction. It reads to me very much like various “scientific” defenses of particular religious traditions.

Again, as I said, I don’t think there’s anything inherently wrong with this. Little communities of people should do whatever harmless hobby is fun for them.

I just don’t find it very interesting or insightful.

If people want to spend time and attention and resources on existential risks, how about the ones which are clear and imminent, like wealth inequality, the retreat of world democracy and increasing power of entirely unaccountable and amoral multinational corporations, or global climate change, e.g. http://www.esquire.com/news-politics/a36228/ballad-of-the-sa...


But these aren't examples so much as vague caricatures. The subject matter that LessWrong considers is certainly unusual, but that alone should not be enough to call it arbitrary, questionable or outlandish.



https://en.wikipedia.org/wiki/Pascals_wager

I don't think it would be fair to malign philosophers because they come up with outlandish scary scenarios that sometimes scare people with OCD. It's not like LW gives Roko significant air time or serious treatment (EY freaking out and deleting it was partly the principle of the thing, partly the fear that somebody might follow this road of thought and come up with something more terrifying, and he didn't want to take the community there, etc). Somebody doing this is generally taken as a sign of serious crankery.

(FWIW, I agree with the top parent post that the hyping of Bayes Theorem is one of the LW foibles. At least the presentation of it.)


It's less about the plausibility of the thought experiment, and more about typical online drama and hysteria that ensued, which sort of belies that LW is made up of mortals like you and me. They aren't hyper-rational machines, after all.


> They aren't hyper-rational machines, after all.

No one claims they are. In fact, if I were to name a single overarching theme of all lesswrong discussions, it would be the fallibility of human reasoning. How is having some reddit-like drama on an open internet forum even relevant?

ps. to belie is to contradict, whereas I think you meant the drama shows that LW is made up of mortals? Just making sure I understood you correctly.


In that case, how is it a reply to woodchuck?

"The subject matter that LessWrong considers is certainly unusual, but that alone should not be enough to call it arbitrary, questionable or outlandish."

"But let me tell you about the time LW experienced internet drama."

Doesn't seem particularly germane?


>I don't think it would be fair to malign philosophers because they come up with outlandish scary scenarios that scare people with OCD sometimes.

Actually, that sounds like a fine reason to malign philosophers. In order to consider "outlandish scary scenarios", you must first be quite sure that those scenarios are realistic. If they're not, then you're wasting everyone's time.

And yes, if your expected-utility expressions fail to converge because you believe in taking every Pascal's Wager/Mugging scenario into account, or because you don't believe in time, then you've attempted to take the limit of a nonconverging sequence and no amount of philosophizing will help.
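For what it's worth, here is a toy numerical illustration of that failure to converge (my own construction, not anything from LW or the parent comment): let scenario k have probability 2^-k but promise utility 3^k, so each term of the expected-utility sum is (3/2)^k and the partial sums never settle down.

    # Toy Pascal's-mugging-style expected-utility sum that fails to converge:
    # each scenario is half as likely as the last but promises triple the payoff.
    partial_sum = 0.0
    for k in range(1, 31):
        probability = 2.0 ** -k
        utility = 3.0 ** k
        partial_sum += probability * utility   # adds (3/2)**k
        if k % 10 == 0:
            print(f"after {k} terms, expected utility is about {partial_sum:,.0f}")
    # The partial sums grow without bound; there is no limit to take.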


This is still working backwards: a thought experiment that increases your chances of being tortured by a future AI? Surely outlandish! But why? Which premises are truly outlandish, arbitrary, etc? What I see given the premises is no more than what Yudkowsky's already said: ".. a Friendly AI torturing people who didn't help it exist has probability ~0, nor did I ever say otherwise."

(However, I agree that being genuinely distressed by the possibility the thought experiment raises suggests more is going on psychologically than a rational assessment of unknowns, but this seems to be a minority of the community.)


That's the best you got?


It's the most infamous example LW offers.


So I'm a LessWronger and know a bit about the "movement", and think you are misunderstanding what "LessWrongers think". Obviously not all LessWrongers think the same thing at all, but I'm talking about the average position of the people who believe AI safety should be worked towards.

I'd love to explain the basic position and hear where you disagree with it. This is the basic position:

1. Intelligence can be created, because there is nothing "special"/"magical" about humans, and our intelligence was eventually created.

2. At some point, humanity will create an "artificial general intelligence". (We'll just keep improving science and technology, and there's no fundamental reason why this won't eventually allow us to create an intelligence.)

3. "Artificial general intelligence" basically means a machine that is capable of achieving its goals, where the goals and methods it uses to achieve them are general. I.e. not "is able to play chess really well", but rather "is able to e.g. cure cancer".

4. For various reasons, once we have an artificial intelligence, it will likely become much smarter than us. (There are many reasons and debates about this, but let's just assume that since it's a computer, we can run it much faster than a human. If you dispute this point, we can talk about it more).

5. Something being much more intelligent than us means that, in effect, it has almost absolute power over what happens in the world (like we are basically all-powerful from the vantage point of monkeys, and their fate is totally in our hands).

6. (This is, I believe, the main point): Something being "intelligent" in the sense we're talking about doesn't say anything about what its goals are, or about how its mind works. We're used to everything that's intelligent being a human being, and the way our minds work is basically the same across every human. An artificial intelligence's mind will work completely differently from ours. So if we "tell it" something like "cure cancer", it won't have our intuition and background knowledge to understand that we mean "but don't turn half the world into a giant computer in order to cure it".

7. Combine the two points above, and you get the large idea - whatever the goals of the AI will be, it will achieve them. Its goals won't, by default, be ones that are good for humanity, if only because we have no idea how to program our "value system" into a computer.

8. Therefore, we need to start working on making sure that when AI does come, it's safe. Even if we can create an AI, the "extra" problem of making it safe is hard, and we have absolutely no idea how to do it right now. We have no idea how long AI will take, or how long figuring out safety will take, but since this is a humanity-threatening problem, we should devote at least some resources to working on it right now.

That's it, that's the basic idea. I'd love to hear which part you disagree with. I totally understand that not everyone will agree on some of the final details like, e.g., how many resources we should effectively devote right now (you might even claim it's 0 because anything we do now won't be useful).

But I think the overall reasoning is sound, and would love to hear an intelligent disagreement.


> 1. Intelligence can be created, because there is nothing "special"/"magical" about humans, and our intelligence was eventually created.

Human intelligence evolved through a (very long!) series of natural processes, to the best of my knowledge. To say it was "created" implies something closer to a religious or philosophical opinion, rather than something supported by science.

> 2. At some point, humanity will create an "artificial general intelligence". (We'll just keep improving science and technology, and there's no fundamental reason why this won't eventually allow us to create an intelligence.)

This is hugely debatable. Why is AGI inevitable? Even given great amounts of computing resources, an artificial general intelligence does not just automatically appear; it must somehow be designed and programmed. Fields like computer vision have grown tremendously using techniques like deep learning, but there really isn't any evidence that I know of that a general intelligence is any closer than it was 20 years ago.


Totally agree with your first point, I just didn't want to have too many caveats and nitpicking words. If it's not clear, then of course my argument in no way implies that human intelligence was "created" by an intelligence - it evolved. Poor wording aside, my statement remains the same.

"This is hugely debatable. Why is AGI inevitable? Even given great amounts of computing resources, a artificial general intelligence does not just automatically appear [...]"

Well no one thinks AGI will appear without anyone working on it, but lots and lots of people are working on it now. And since there are huge incentives to create one, the belief is that more people will work on it as time goes on.

"[...] there really isn't any evidence that I know of that a general intelligence is any closer than it was 20 years ago."

Well, in some sense I agree, in that we still have no idea how far off AGI is. If it's going to happen in 10 years, we should definitely prepare now. If it's 500 years away, maybe it's too early to think about it. But since neither of us knows, wouldn't you say it's worth putting some effort into working towards safety?

In another sense, though, I disagree with you that we're not any closer to AGI. As you said just the sentence before, fields like computer vision have advanced tremendously. While this doesn't necessarily mean AGI is closer, it certainly seems that the fields are related, so advancement in one is a sign that advancement in the other is closer.


Yeah, you go off the rails around step 5. "Something being much more intelligent than us means that, in effect, it has almost absolute power over what happens in the world" makes no sense. Since when does intelligence get you power? Are the smartest people you know also in positions of power? Are the most powerful people all highly intelligent?

"whatever the goals of the AI will be, it will achieve them". Dude, if intelligence meant you could achieve your goals, Hacker News would be a much less whiny place.


"Since when does intelligence get you power?" You hit the nail on the head there. Its about I/O. (Just as its about I/O in the original article - garbage in, garbage out). Jaron Lanier makes this point in.

http://edge.org/conversation/jaron_lanier-the-myth-of-ai

"This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to not have little killer drones proliferate. And maybe that problem will never take place anyway. What we don't have to worry about is the AI algorithm running them, because that's speculative. There isn't an AI algorithm that's good enough to do that for the time being. An equivalent problem can come about, whether or not the AI algorithm happens. In a sense, it's a massive misdirection."


As I've said before, the singularity theorists seem to be somewhere between computer scientists, who think in terms of software, and philosophers, who think in terms of mindware, and they seem to have a tendency to completely forget about hardware.

There seems to be this leap from 'superintelligent AI' to 'omnipotent omniscient deity', accepted as inevitable by what for shorthand here is being called the 'lesswrong' worldview, which ignores the fact that there are limited resources, limited amounts of energy, and limitations imposed by the laws of physics and information that stand between a superintelligent AI and the ability to actuate changes in the world.


You're not engaging with the claim as it was meant. In context, no human being has ever been "much more intelligent" than me, not even von Neumann. Not in the same way that I am "much more intelligent" than a monkey.

You might decide that this means edanm goes off the rails at step four, instead. But you should at least understand where you disagree.


I'm still not sure you could assume ultimate power and achieve everything you desired if you were the only hacker news reader on a planet of 8 billion monkeys.


> I'm still not sure you could assume ultimate power and achieve everything you desired if you were the only hacker news reader on a planet of 8 billion monkeys.

I would think it relatively easy for a human armed with today's knowledge and a reasonable yet limited resource infrastructure (for comparison to the situation of an unguarded AI) to engineer the demise of primate competitors in the neighborhood. Setting some strategic fires and burning down jungles would be the first step. "Fire" might be a metaphor for some technology that an AI might master that humans don't quite have the hang of yet, which could be used against them. For example, a significant portion of Americans seem way too easily manipulated by religion and fear; an AI-generated televangelist or Donald Trump figure might be a frightening thought.


Well "is able to e.g. cure cancer" is not actually very general. Which leads to the problem with 2) whats the economics behind creating a general intelligence when a specific intelligence will get you better results in a given industry. Even then specific intelligence is still going to be subject to the good-enough economic plateau that has killed so many future predictions.

Then the problems with 4 on up really concern the speed with which 4 can feasibly happen. The AI-goes-FOOM doomsayers seem to think that we'll end up with an AI which is so horribly inefficient that it will be able to rewrite itself to be super-duper intelligent without leaving its machine (and won't accidentally nerf itself in the attempt), and then that super-duper intelligent computer will trick several industries into building an even more powerful body for itself, etc. - all of this happening before humans pull the plug. No step of this has anything beyond speculation to support it.

On a more general note, the full employment theorems mean that even if general AI is economically incentivized, there are still going to be dozens/hundreds/thousands of different AIs carving out niches for themselves, which, given that the earth/universe has limited resources, handily prevents the paper clip maximizer problem. While the future may not need humans, it will still be a diverse future.


1) Define intelligence, knowledge, truth, proof (deductive and inductive)... how do concepts work?, etc. I am not being facetious here. AI is an epistemology problem not a technological one.

2) I agree, but we have to solve the problem of induction first, and LW/EY are certain that there is no problem of induction. How can one be certain in a Kantian/Popperian framework where statements can be proved false but never true?

3a) Here is where we part ways. It is a common assumption that AI implies consciousness, but I think that is an unwarranted assumption. Whatever the principles behind intelligence are, we know that conscious minds have found a way to (implicitly) enact them. It does not follow that consciousness is necessary for intelligence (rather than just being the biological manifestation of it), and I think good arguments exist to think that they are not correlatives. If they are correlatives, then it will be easier to genetically design better babies, now that evolution is in conscious control, than to start from scratch.

3b) Goals, values, aims, etc. are teleological concepts that apply only to living things, because they face the alternative of life or death. Turning off your computer does not kill it in the same sense that a living thing that stops functioning dies forever. 3a) & 3b) defuse all the scary scenarios about AI taking over the world. They do raise the issue of AI in the hands of bad people with evil goals and values, like the dictator of North Korea, who now apparently has the H-Bomb. This is the real danger today.

4) I agree. Computer aided intelligence will allow us to accelerate the accumulation of knowledge (and its application to human life) in unimaginable ways. But it will be no more conscious than your (deductive) calculator.

5) Non Sequiturs. Possibly psychological projection of helplessness or hopelessness.

6) As the joke goes, we can always unplug it.

7) Granting your premises then the goal of LW/EY should not be AI but the scientific, rational proof and definition of ethics but their fundamental philosophic premises won't allow it.

8) For me the threat is bad, evil people in possession of powerful technologies.


>Granting your premises then the goal of LW/EY should not be AI but the scientific, rational proof and definition of ethics but their fundamental philosophic premises won't allow it.

That is the goal of MIRI, the organization that EY founded, and it is a frequent topic of discussion on LW.


MIRI’s three research objectives are, at present:

• highly reliable agent design: how can we design AI systems that reliably pursue the goals they are given?

• value learning: how can we design learning systems to learn goals that are aligned with human values?

Not exactly what I meant. What are these human values (for humans not robots) and how do you prove they are rational and scientific? Their goal is to design AI that will accept human goals/values without defining a rational basis for those human values.


(5) isn't convincing: pull the plug on the computer hosting the AI and it's "dead".


I've been saying this forever. Thanks for putting it so succinctly. LessWrong is a cult of people who want to be smart, and they've essentially found a community in which certain assumptions and hypothetical scenarios, combined with mathematical concepts, make them think they've found the answer to everything in the Universe.

They're no better than any other cult in my book. The problem is that it's only going to get worse with the advances in AI that are going on. Yudkowsky has managed to convince some wealthy people to fund his so-called research, and we have OpenAI operating in the same waters, which somehow gives LW people more legitimacy.


What is the answer to everything in the Universe they think they've found?

I've read a lot of the bigger posts on Lesswrong (http://lesswrong.com/top/?t=all) and none of them are anything like that.

No better than any other cult? How are you deciding that? The LW community hasn't killed people. Doesn't cut people off from their families. Doesn't emotionally/physically abuse people. Etc...

Even if they are a "cult", this puts them miles ahead of other cults, like say, Scientology which has done far, far more harm to people.

I struggle to think in what way LW has harmed anyone at all.


cult |kʌlt| noun 1 a system of religious veneration and devotion directed towards a particular figure or object: the cult of St Olaf. • a relatively small group of people having religious beliefs or practices regarded by others as strange or as imposing excessive control over members.

The veneration of Yudkowsky and others in the LW community is more than a bit "religious". So I'd say by definition it's a cult.

LW hasn't done harm to people physically, but what it has done is spawn some very questionable ideas and perpetuate pseudo-science and pseudo-mathematics. The cult leader has no formal training, zero research in peer-reviewed journals, and still calls himself a "senior research fellow" at an institute he himself started.

Hell, he even has introductory religious texts - The Sequences and the Methods of Rationality fan fiction (which, by the way, he wanted to monetise before the broader fan fiction community stopped him; a clear violation of copyright law).

For more, see: http://rationalwiki.org/wiki/Yudkowsky

I'll quote the section titled "More controversial positions"

Despite being viewed as the smartest two-legged being to ever walk this planet on LessWrong, Yudkowsky (and by consequence much of the LessWrong community) endorses positions as TruthTM that are actually controversial in their respective fields. Below is a partial list:

Transhumanism is correct. Cryonics might someday work. The Singularity is near![citation NOT needed]

Bayes' theorem and the scientific method don't always lead to the same conclusions (and therefore Bayes is better than science).[21]

Bayesian probability can be applied indiscriminately.[22]

Non-computable results, such as Kolmogorov complexity, are totally a reasonable basis for the entire epistemology. Solomonoff, baby!

Many Worlds Interpretation (MWI) of quantum physics is correct (a "slam dunk"), despite the lack of consensus among quantum physicists.[23]

Evolutionary psychology is well-established science.

Utilitarianism is a correct theory of morality. In particular, he proposes a framework by which an extremely, extremely huge number of people experiencing a speck of dust in their eyes for a moment could be worse than a man being tortured for 50 years.[24]

Also, while it is not very clear what his actual position is on this, he wrote a short sci-fi story where rape was briefly mentioned as legal.

TL;DR: If it's associated with LessWrong/Yudkowsky, it's probably bullshit.


I cannot judge to which degree these theses are bullshit, but I've found LW a tremendously rich source of thinking tools and I'm convinced that reading or skimming a lot of the sequences has improved my thinking.

Regarding the rape sequence in HPMOR: it's a terribly chosen trope to convey that the fictional society has very different values from ours. Apparently it ties into various parts of the story, so EY didn't remove it and only toned it down after it was criticized.


> I cannot judge to which degree these theses are bullshit.

Go to the link, go to the references, and read about them. I'll outline the gist: Most of what Yudkowsky says is extremely sci-fi, no real basis in scientific fact, but stretching the current technological progress to the point where his opinions on things (stuff like transhumanism, singularity) can be justified.

What he's preaching isn't science. Certainly not rigorous experimental science. He (along with Bostrom) tends to engage in extreme hypotheticals. Which, sure, if you're a philosopher, is fine. But even then, wouldn't you want your work to be judged by like-minded peers? But alas, he has the convenient excuse of being "autodidactic" to fall back on, so he can sit in his armchair, critique traditional education, and excuse his lack of peer-reviewed material.

Not to mention, and this is a bit of a pet peeve, I find that most LW people are too self-absorbed, I've literally seen a blog where the person who runs it "warns" the readers that what he writes is too complicated for people to follow. This sort of narcissistic, self-congratulatory thinking is what puts me off more than anything. Writing long-form posts on the Internet that use complicated words doesn't make you smart.

http://laurencetennant.com/bonds/cultofbayes.html is another critique. It comes off as a bit crass, but stick with it.

> I've found LW a tremendously rich source of thinking tools and I'm convinced that reading or skimming a lot of the sequences has improved my thinking.

There are other ways to improve your thinking. Read books. Read different kinds of books that offer counter points of view. The Farnham Street Blog is a good place to start for a list of resources on thinking tools/mental models, btw. :)


I will look into Farnham Street's Blog, thanks.

> What he's preaching isn't science.

I don't buy into the necessity that everything has to be peer-reviewed in the old-fashioned way. There is peer-review happening in the comments to some extent. I don't take a fancy to dismissing any radical ideas as pseudoscience. It's just the outer fringe of hypotheses that need to be tested against reality, and as long as they are approximately humanist, enlightened and don't contradict existing physics without depending on mathematics (or disclaimers), I cannot see anything wrong with it. As a naturalist, I pretty much agree with everything I've read on LW so far, except for the parts I cannot judge (like hypotheses about physics), for which I allocate weaker priors, and a few unconvincing pieces.

> Not to mention, and this is a bit of a pet peeve, I find that most LW people are too self-absorbed, I've literally seen a blog where the person who runs it "warns" the readers that what he writes is too complicated for people to follow.

I have not yet experienced that, but there are also a lot of people on reddit and HN that I don't like, yet I differentiate within these communities between what is valuable and what is not.

> Most of what Yudkowsky says is extremely sci-fi, no real basis in scientific fact, but stretching the current technological progress to the point where his opinions on things (stuff like transhumanism, singularity) can be justified.

At the risk of seeming indoctrinated to you, this is what I believe with high certainty: if Moore's law continues for another one or two decades, I think the singularity is a very real possibility. The human brain seems to be nothing more than a learning and prediction machine, nothing that transcends what we can understand in principle. Evolution did come up with complex organisms, but the complexity is limited by biochemical mechanisms and the availability of energy. In addition, nature often approximates very simple things in overly complicated ways, because evolution is based on incremental changes, not on an ultimate goal that prescribes a design of low complexity. I also think that AI will very likely be superintelligent, and that poses a tremendous risk in the 10-40 years to come (on the order of atomic warfare and runaway climate change). By the time someone implements an approximately human-level intelligence, we had better have a good idea about how to control such a machine.


> There is peer-review happening in the comments to some extent.

Lol. I guess we don't need college education as well then, there's education happening in the comments to some extent. We don't need traditional means of news, there's news happening on Twitter to some extent. I could go on with analogous line of reasoning.

Don't get me wrong, I'm not 100% in favour of the traditional education model either, but peer review exists for a reason. You and I are not experts in these fields. We rely on the expertise of people who have made it their business and life to study these fields based on a rigorous method. Would you try out homeopathy if it had not been rejected completely by doctors and scientists, but someone on a forum told you it worked for them? What if someone wrote a very long article with fancy words (like LW tends to do) explaining how and why it works (they exist, I assure you)? Would you try it then?

> I don't take a fancy to dismissing any radical ideas as pseudoscience.

Sure, I'm not saying we should be against radical ideas. That's how scientific progress happens. I'm against LW ideas, for which there is no basis in reality as far as we know based on our current understanding of science.

> I differentiate within these communities between what is valuable and what is not.

Indeed. But I'd rather the community's entire existence not depend on bullshit.

> At the risk of seeming indoctrinated to you, ...

a) Keywords: "If", "Seems" b) Tons of assumptions in that scenario you laid out. If you can't see it, I'm sorry but you're already too far gone. c) Watch some MIT lectures on computer architectures about how the trend of Moore's law has already radically shifted and is flatlining.

Basically, what you've done is precisely the kind of utter crap that LW perpetuates. "If x keeps happening" without providing any reason as to why that would be true. Make some ridiculous simplifications "complexity is limited by ___", nature often does __ because ___. You basically don't provide any rational reason for why you think AI will be super intelligent and even if it were, why that would be risky. You pick numbers out of a hat (10-40 years to come).

Yes, you look pretty well indoctrinated from where I'm sitting. But I hope you see the many (so many) flaws in that last paragraph of yours (it honestly made me laugh out loud :p)

Predicting the future is hard business - be it predicting what the stock market does tomorrow, or the weather for the next month. It's presumptuous and hella stupid to think you can predict where Science and Technology will be x years from now.

TL;DR: Stahp.


> Lol. I guess we don't need college education as well then, there's education happening in the comments to some extent. We don't need traditional means of news, there's news happening on Twitter to some extent. I could go on with analogous line of reasoning.

That's a straw man. I did say it's the fringe and it needs to be tested. I didn't say one should replace the other. Peer-review is essentially just mutual corrections, and there are mutual corrections happening in the comments, just not as thoroughly as when it's institutionalized. Most of it is not new anyway, but just summarizes research results and draws logical conclusions from it (for example this [1]). If it wasn't all brought together on LW, I possibly wouldn't have found out about the wealth of knowledge for a long time.

[1] http://papers.nips.cc/paper/2716-bayesian-inference-in-spiki...

> a) Keywords: "If", "Seems" b) Tons of assumptions in that scenario you laid out. If you can't see it, I'm sorry but you're already too far gone. c) Basically, what you've done is precisely the kind of utter crap that LW perpetuates. "If x keeps happening" without providing any reason as to why that would be true.

It's very logical. My certainty referred to the implication, but it is hard, of course, to come up with a prior for that 'if': exponential progress could continue in various ways, e.g. by the invention of more energy-efficient chips and by scaling them up, by 3D circuitry, molecular assemblers, memristors, or perhaps quantum computing. There are contradicting studies, so one should put P(Moore's law continues for another 10-20 yrs) at perhaps 50%. So, of course, this is all hedged behind that prior (which I think many people get confused by). The discussion is always concerned with the implications, which can be made with fairly solid reasoning by assuming the P(..) above to be 100%.
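If it helps, the hedging described above is just a marginalization over the prior on the 'if'. A minimal sketch, where only the ~50% prior comes from the comment and the other numbers are placeholders of mine:

    # Hedging a conditional claim behind a prior on the "if".
    # Only the ~50% prior is from the comment; the rest are placeholder numbers.
    p_moore = 0.5             # P(Moore's law continues for another 10-20 yrs)
    p_claim_if_moore = 0.8    # placeholder: confidence in the implication itself
    p_claim_if_not = 0.05     # placeholder: residual probability otherwise

    p_claim = p_claim_if_moore * p_moore + p_claim_if_not * (1 - p_moore)
    print(f"unconditional probability: about {p_claim:.3f}")  # ~0.425 here

So a confident-sounding implication can still coexist with a much lower unconditional probability once the prior on the 'if' is folded in.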

> Make some ridiculous simplifications "complexity is limited by ___", nature often does __ because ___. You basically don't provide any rational reason for why you think AI will be super intelligent and even if it were, why that would be risky.

That's just a basic assumption which I find plausible, and which some respectable and knowledgeable people find plausible too (for example Stephen Wolfram and Max Tegmark; I am aware that an appeal to authority is difficult to argue from, but both have publications which I could also refer to). I agree that mentioning the complexity limitations didn't provide any information, because they don't tell us whether the brain is simple enough for us to understand; they merely say that the complexity is not infinite, so I should have left it out entirely. But this is not at all representative of the best contents on LW; it was poor reasoning on my behalf. Bostrom's book Superintelligence gives a pretty good summary of why it is thought to be plausible.

> You pick numbers out of a hat (10-40 years to come).

That's based on estimates of the processing power required for brain simulations by IBM researchers and Ray Kurzweil. Simple extrapolation of Moore's law shows us that we will reach that point roughly between 2019 and 2025. 40 years is just my bet based on what I know about brain models and current obstacles in AI.
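As a rough sketch of what that kind of extrapolation looks like (the starting point, target, and doubling time below are placeholder assumptions of mine, not the IBM or Kurzweil figures being referred to):

    # Back-of-the-envelope Moore's-law extrapolation with assumed numbers.
    import math

    flops_now = 1e13             # assumed: roughly a high-end machine today
    flops_needed = 1e16          # assumed: a brain-simulation estimate
    doubling_time_years = 2.0    # assumed: classic Moore's-law doubling period

    doublings = math.log2(flops_needed / flops_now)
    years = doublings * doubling_time_years
    print(f"about {doublings:.1f} doublings, i.e. roughly {years:.0f} years out")

Change any of the assumed inputs and the answer shifts by years, which is why estimates like the 2019-2025 window depend heavily on which figures you plug in.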


I don't understand how you can be so certain that the hypothetical scenarios they imagine can't possibly happen. Even if it really is laughable, we spend a lot of money on laughable research (homeopathy, anyone?), so why is this case so particularly bad?


...Just because we've spent money on one piece of laughable research doesn't mean we should spend it on more, does it? Are you seriously going to argue that?


No, what I'm arguing is that if we only spent money on things everyone agrees are useful, we'd never get anything done.


Not everyone has to agree. That's why we have the scientific method and research institutes. If the current science shows that something is worth exploring in more detail, that some avenues are worth spending money on, spending money on them makes sense.

Bullshit Ideas like the AI apocalypse, the singularity, transhumanism, or downloading your brain into a computer do not fit that criterion.


So how do ideas get to the stage where the 'research institutes' agree they're worth investigating?

It seems to me like you're saying "I don't like these ideas, no one should be working on them". That seems a worse principle than "everyone should work on ideas they find worth investigating".

I also have no idea how you distinguish 'Bullshit Ideas' from non-bullshit ideas without investigating them. Your gut is not that good at distinguishing truth from a blunder.



