
> Consider: "Previous books published by authors hailing from Country X contained flaws in their logic; therefore since this book's author came from Country X, this book must also have a logical flaw." It's not a very strong form of argument: you might as well just read the book to see if it has logical flaws. Similarly even if a claim seems superficially similar to the kind of claim made by non-credulous people, that's far from conclusive evidence for it being an invalid claim.

You're right, that's not a very strong form of argument. But that is not my argument. The connection between Bostrom and previous doomsayers is not as arbitrary as geography, so your analogy is false. The connection between Bostrom and other doomsayers lies in the structure of their thought: the way they falsely extrapolate great harm from small technological shifts.

> It would be a shame if religious doomsayers have poisoned the well sufficiently that people never listen to anyone who is saying we should be cautious of some future event.

I think many people do not realize the quasi-religious nature of AI speculation. Every religion and charismatic mass movement promises a "new man." In the context of AI, that is the singularity, the fusion of humans with intelligent machines. The power we associate with AI, present and future, leads us to make such dramatic, quasi-religious predictions. They are almost certainly false.

>Sure, but there have been a few like nuclear weapons that were very much not a waste of time. Again, you really have to take things on a case by case basis.

Nuclear weapons are fundamentally different from every other doomsday scenario I cited because they were designed explicitly and solely as weapons whose purpose was to wreak massive destruction at a huge cost to human life. The potential to destroy humanity defined nuclear weapons. That is not the case for other technological, demographic and resource-related doomsday scenarios, which are much more tenuous.




It sounds to me as though you believe that because religious nuts have prophesied doomsday since forever, we can rule out any doomsday scenario as "almost certainly false" (a very confident statement! Prediction is difficult, especially about the future. Weren't you just explaining how hard it is to predict these things?). But no actions on the part of religious doomsayers are going to protect us from a real doomsday-type scenario if the universe throws one at us.

The fact that a claim bears superficial resemblance to one made by wackos might be a reason for you to believe that it's most likely not worth investigating, but as soon as you spend more than a few minutes thinking about a claim, the weight of superficial resemblances is going to be overwhelmed by other data. If I have been talking to someone for three hours, and my major data point for inferring what they are like as a person is what clothes they are wearing, I am doing conversation wrong.

"Nuclear weapons are fundamentally different from every other doomsday scenario I cited because they were designed explicitly and solely as weapons whose purpose was to wreak massive destruction at a huge cost to human life. The potential to destroy humanity defined nuclear weapons. That is not the case for other technological, demographic and resource-related doomsday scenarios, which are much more tenuous."

Sure, and in the same way nuclear bombs were a weaponization of physics research, there will be some way to weaponize AI research.


Guys, this line of argument was dealt with over 8 years ago... https://web.archive.org/web/20140425185111/http://www.accele...



