> I'm referring to the behaviour, not the inner nature.

Since the inner nature does affect behavior, that's a non sequitur.

> we had to invent counter-intuitive maths to make most of our modern technological wonders.

Indeed, and that's worth considering, but we shouldn't pretend it's the common case. In the common case, the machine's lack of real-world context is a disadvantage. Ditto for the absence of any actual understanding beyond "word X often follows word Y", the kind of understanding that would let it predict consequences it hasn't seen yet. Because of these deficits, any "intuitive leaps" the AI might make are less likely to yield useful results than the same leaps from a human. The ability to form a coherent - even if novel - theory and an experiment to test it is key to that kind of progress, and it's something these models are fundamentally incapable of doing.

> Since the inner nature does affect behavior, that's a non sequitur.

I would say the reverse: we humans exhibit diverse behaviour despite sharing a similar inner nature, and likewise AIs with inner natures similar to one another display diverse behaviour.

So from my point of view, the fact that I can draw clusters, based on similarities of failures, that encompass both humans and AI makes it a non sequitur to point to the internal differences.

> The ability to form a coherent - even if novel - theory and an experiment to test it is key to that kind of progress, and it's something these models are fundamentally incapable of doing.

Sure.

But, again, this is something most humans demonstrate they can't get right.

IMO, most people treat science as a list of facts rather than a method, and most people mix up correlation and causation.
