
I haven't come across a group of people more ridiculous in their outlook than the AI safety folks.



Note that there’s a difference between people working on existential risks from future AI (which I think is very important) and people adding mostly useless restrictions to current AI


The problem is the ultra-liberal conflation of words with violence. The X-risk folks are mostly concerned about actual, physical violence against humanity by AI - "what if we accidentally make a paperclip maximizer" being the textbook example of AI risk, which is a scary scenario because it involves AI turning us all into goo using unlimited violence.

But then there's the generic left faction inside these companies that has spent years describing words as violence, or even silence as violence, and claiming their "safety" was violated because of words. That should have been shut down right at the start because it's not what the concept of safety means, but it never was, and now their executives lack the vocabulary to separate X-risk physical danger from "our AI didn't capitalize Black" ideological danger.

Given this it's not a surprise that AI safety almost immediately lost its focus on physical risks (the study of which might be quite important once military robots or hacking become involved), and became purely about the risks of non-compliant thought. Now that whole field has become a laughing stock, but I wonder if we'll come to regret that one day.


> Note that there’s a difference between people working on existential risks from future AI (which I think is very important) and people adding mostly useless restrictions to current AI

Not a big difference, they are largely the same group (there's a bit of each outside the other, but the overlap is immense), and both focuses largely serve to distract from real, current, and substantive social issues (some of which are connected to AI, but not to the absence of the kind of puritanical filtering the AI "safety" folks are obsessed with).


Within the group of people working on the existential risks are a lot of really useless and absurd factions providing a lot of theories grounded in their own science fiction (essentially). Eliezer Yudkowsky comes to mind.


They sound silly at first, but many things we have nowadays would have sounded ridiculous to people in the past; something sounding strange isn't valid evidence against it.

https://en.m.wikipedia.org/wiki/Appeal_to_the_stone

> Appeal to the stone, also known as argumentum ad lapidem, is a logical fallacy that dismisses an argument as untrue or absurd. The dismissal is made by stating or reiterating that the argument is absurd, without providing further argumentation.

The arguments for it are a bit abstract sometimes (which is maybe why science fiction is a good way to give a concrete, if unrealistic, introduction to the concept), but I think they seem pretty solid.


In my experience the x-risk people understand that using science fiction examples makes a weak argument, and avoid them entirely.

It is the other people who use the age-old argument of "something like what you are describing happened in a piece of fiction, therefore it could never happen in real life".


> In my experience the x-risk people understand that using science fiction examples makes a weak argument, and avoid them entirely.

Roko's Basilisk is a weaker argument than most analogy-to-scifi arguments, because the assumptions underpinning most scifi used in such arguments are less implausible than those underlying the Basilisk.


True, but I haven't heard anyone unironically talk about Roko's Basilisk in years.


I don't think Google/Facebook execs realize to what extent they destroyed themselves when they allowed the meltdown over James Damore. He literally wrote a whole essay warning them of the dangers of allowing rampant left wing purity spirals inside their companies, and they wrecked him for it.

Now, years later, they have problems like not being able to release something that will make playlists with Kanye in them, or they can't make their AI available at all because, given a prompt like "picture of a builder", it draws white men (Google Imagen). If they hadn't ruthlessly purged or suppressed every single conservative years ago they might now have some way to push back against or make peace with this insanity, but instead they have to sit back and watch as OpenAI systematically eats their lunch. Largely by poaching all the researchers who were sick of the crazies being in charge!

There's a management lesson in here for those who choose to look, but somehow it seems unlikely that many will.



