
Yes, it's very fascinating! The language is so clear, but the concepts are totally confused.

Does this mean real logical reasoning is very close, only a few small improvements away, or does it mean we're simply on the wrong track (toward actual AGI)?


IMHO (and this is just my own uninformed view), this means that language models by themselves are insufficient for certain important tasks. It seems hard for such systems to learn deductive reasoning purely from text prediction.

OTOH, who knows what would happen if you somehow managed to combine the generative capabilities of a language model with a proper inference engine, e.g. Wolfram|Alpha. Maybe it would bring us significantly closer to AGI, but maybe that approach is also a dead end, since there's no guarantee those systems would work well together.
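
For what it's worth, here's a minimal sketch of that split, assuming the language model only has to translate the question into a formal query and a symbolic engine does the actual deduction (SymPy here, standing in for something like Wolfram|Alpha). llm_translate is a hard-coded stand-in for a real model call, not any particular API:

    import sympy

    def llm_translate(question: str) -> str:
        # Placeholder for a language-model call that turns a natural-language
        # question into a formal expression the solver understands.
        # Hard-coded here purely for illustration.
        return "Eq(2*x + 3, 11)"

    def solve_with_engine(formal_query: str):
        # The "proper inference engine" part: SymPy does the deduction,
        # so correctness no longer depends on the model's own arithmetic.
        x = sympy.Symbol("x")
        equation = sympy.sympify(formal_query, locals={"x": x})
        return sympy.solve(equation, x)

    question = "If twice a number plus three equals eleven, what is the number?"
    formal = llm_translate(question)    # model: language -> formal query
    answer = solve_with_engine(formal)  # engine: formal query -> answer
    print(answer)                       # [4]

The open question is exactly the one above: whether the model can produce formal queries reliably enough for the engine's rigor to matter.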
