
Could you explain what your issue is here? I think we are generally just trying to reason through this phenomenon, not make grand conclusions about the model. We talk about how hard/time-consuming it is for a human to pose possible theories for what the LLM could be doing. The point is not to assert anything about how "difficult" it is for the LLM compared to a human, because we can't ascribe difficulty or ease to anything the model "does"; we know the fundamental mechanics of its inference. We can only say, after the fact of an LLM's output, something like "Wow, that would have been hard for me to output" or "I could have written something similar in five minutes." But these claims can only ever be counterfactuals like that, because in reality the output of an LLM comes out at a roughly constant rate per token no matter what you prompt it with.

If you try to say more, you end up falling into weird contradictions: it would take an LLM a lot longer to output 10 million 'a's than it would take a human, so by that logic it must be "harder" for the LLM to do that than for a human.
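As a rough illustration of the constant-per-token-rate point, here is a minimal sketch (my own, not from the thread) that assumes the Hugging Face transformers library and GPT-2 as a stand-in model. It times greedy generation for an "easy" and a "hard" prompt; per-token latency comes out about the same, and total time tracks only the number of tokens emitted:

  import time
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tok = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  for prompt in ["2+2=", "Prove the Riemann hypothesis:"]:
      ids = tok(prompt, return_tensors="pt").input_ids
      start = time.time()
      out = model.generate(ids, max_new_tokens=50, do_sample=False)
      elapsed = time.time() - start
      n_new = out.shape[1] - ids.shape[1]
      # Per-token latency is roughly the same for both prompts;
      # total time depends only on how many tokens are emitted.
      print(f"{prompt!r}: {n_new} tokens in {elapsed:.2f}s "
            f"({elapsed / n_new * 1000:.1f} ms/token)")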



