
They are also very good at translating between natural and structured/"computer" language.

They "get" irony, sarcasm, etc. at a far higher rate than any algorithm I've seen before; until a few years ago, I (as a non-linguist) assumed that any such solution to the "aboutness" problem would require solving AGI.

That alone is worth a lot in a world that spends considerable resources on people who effectively work as translators between human and structured language.

Besides that, I suspect their existence is widely taken as a strong hint that intelligence really might just be a quantitative phenomenon emerging from throwing a lot of compute at a lot of data, a process which at some point might become self-reinforcing.

Whether that's true or not, and for better or worse, that's what people now seem to be set on doing.



LLMs don’t get irony. That is not how they work. They simulate getting it.



