
If you find yourself explaining the same thing more than once, you are doing it wrong. That is not on you, as the tools currently suck big time. But it is quite possible to have LLM agents “learn” by intelligently matching context (including historical lessons learned) to the conversation.
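A minimal sketch of that matching step, in Python. Everything here is assumed for illustration (the lesson store, the word-overlap scoring, the function names); a real setup would likely use embedding similarity, but the shape is the same:

    # Retrieve stored "lessons learned" whose wording overlaps most
    # with the current conversation, and prepend them as context so
    # the agent never needs the same correction twice.
    LESSONS = [  # illustrative store, not any real tool's API
        "Always run the test suite before claiming a fix works.",
        "This repo uses tabs, not spaces; do not reformat files.",
        "The staging database is read-only; migrations go to dev.",
    ]

    def score(lesson: str, conversation: str) -> int:
        # Crude relevance: count shared lowercase words. Swap in
        # embeddings for real use; the matching principle is the same.
        return len(set(lesson.lower().split()) &
                   set(conversation.lower().split()))

    def build_prompt(conversation: str, top_k: int = 2) -> str:
        # Pick the most relevant lessons and prepend them to the prompt.
        best = sorted(LESSONS, key=lambda l: score(l, conversation),
                      reverse=True)[:top_k]
        lessons = "\n".join(f"- {l}" for l in best)
        return (f"Previously learned lessons:\n{lessons}\n\n"
                f"Conversation:\n{conversation}")

    print(build_prompt("Fix the failing test and do not reformat main.py"))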

