
I say 'almost certainly' because LLMs are basically just a way to break down language into its component ideas. Any AGI-level machine will almost certainly be capable of swapping semantic 'interfaces' at will, and something like an LLM is a very convenient way to encode that interface.

