Because some people believe that sufficient progress in this field would result in a setback for humanity in the long run. Personally, I'm not sure what to believe.
Because AI development is the most dangerous thing humans have ever done? Respectfully, have you been under a rock?
"Progress" doesn't mean "every massive change to the world and humanity is good." There are undeveloped technologies that we are currently not capable of being responsible with.
I use GPT-4 multiple times basically every day, and I follow LLM developments closely.
Yet I have trouble seeing what people find so dangerous. It’s amazingly cool stuff that will create massive productivity gains.
I guess I just lack the imagination needed to believe this “most dangerous thing humans have ever done” perspective.
It looks to me like the most dangerous thing is actually gain-of-function research on pathogens, with nuclear weapons a close second.
LLMs seem really warm and fuzzy in comparison.
> Intelligence is the only advantage we have over lions, who are otherwise much bigger and stronger and faster than we are. But we have total control over lions, keeping them in zoos [...]
The "intelligence" is a mouthful, but I think the advantage described above is, at its root, caused by a slightly better "predict" step in the observe-predict-act loop. That's it. And LLMs aren't exactly bad predictors, are they? And it looks like while our predictive abilities rise over centuries, theirs rise over decades.