
GPT-3 does not think like a human, but it definitely executes tasks in a way that is more similar to a human than to a computer.

The proof is that humans do indeed sometimes get the wrong answer in quizzes like these!

So I cannot understand this point of view that diminishes it as a mere "spark of intelligence". It is exactly what was advertised: a very big step forward towards real AI, even if definitely not the last one.



>> The proof is that humans do indeed sometimes get the wrong answer in quizzes like these!

GPT-3 gets the wrong answer because it has memorised answers and generates variations of what it has memorised. It produces those variations by sampling at random from a probability distribution over its memorised text. If it has the correct answer memorised, sometimes it will generate that answer exactly, sometimes a slight variation of it, sometimes a large variation, and sometimes something completely irrelevant (i.e. something assigned a very small probability).
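For illustration, here is a minimal Python sketch of that sampling step. The vocabulary and scores are invented for the example, not GPT-3's actual numbers:

    import numpy as np

    # Toy next-token distribution: a softmax over made-up model scores.
    vocab = ["Paris", "Lyon", "France", "banana"]
    logits = np.array([3.0, 1.0, 0.5, -2.0])   # hypothetical scores
    probs = np.exp(logits) / np.exp(logits).sum()

    # Sampling at random means repeated runs can give the memorised
    # answer, a near miss, or (rarely) something irrelevant.
    for _ in range(5):
        print(np.random.choice(vocab, p=probs))

Most draws land on the high-probability token, but the low-probability ones do come out occasionally, which is exactly the "sometimes completely irrelevant" behaviour described above.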

Failure is not an exclusive characteristic of humans. In particular, any mechanical device will eventually fail. For example, a flashlight will stop functioning when its battery runs out, but not because it is somehow like a human and just got it wrong that one time.


It is the Emperor's New Clothes incarnate.

It has the special talent of hijacking your own intelligence to make you think it is intelligent.

People understood this about the 1966 ELIZA program, but intellectual standards have dropped greatly since then.
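For context, the whole trick behind an ELIZA-style program fits in a few lines: match the user's words against a handful of patterns and reflect them back. This is a minimal sketch in that spirit, with invented rules rather than Weizenbaum's original DOCTOR script:

    import re, random

    # Each rule: a regex over the user's input plus canned reflections.
    RULES = [
        (r"i am (.*)",       ["Why do you say you are {0}?",
                              "How long have you been {0}?"]),
        (r"i feel (.*)",     ["Why do you feel {0}?"]),
        (r"(.*)mother(.*)",  ["Tell me more about your mother."]),
    ]

    def respond(text):
        for pattern, replies in RULES:
            m = re.match(pattern, text.lower())
            if m:
                return random.choice(replies).format(*m.groups())
        return "Please go on."

    print(respond("I am sad about my job"))
    # -> "Why do you say you are sad about my job?"

The program understands nothing; the reader supplies the meaning, which is the "hijacking your own intelligence" effect described above.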



