Hacker News

Indeed, LLMs seem to be much worse at introspection than humans. I wonder what would happen if one used reinforcement learning to train into it the ability to correctly predict and reason about its capabilities and behavior.


Then you would have designed https://github.com/Torantulino/Auto-GPT

(It uses recurrent LangChain loops for introspection and for learning about itself and its capabilities as they grow, plus vector databases like Pinecone for long-term memory.)
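The pattern described here — an agent loop that writes its observations into a vector store and retrieves the most similar past memories on each step — can be sketched without any external services. Below is a minimal, illustrative stand-in: the embedding is a toy bag-of-words hash in place of a real embedding model, and `VectorMemory` is a toy substitute for a database like Pinecone; all names are hypothetical, not Auto-GPT's actual code.

```python
import math
import zlib

DIM = 64  # toy embedding dimension

def embed(text):
    """Toy bag-of-words embedding: hash each word into a fixed-size vector.
    A real agent would call an embedding model here instead."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        vec[zlib.crc32(word.encode()) % DIM] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class VectorMemory:
    """Minimal in-process stand-in for a vector database such as Pinecone."""
    def __init__(self):
        self.items = []  # list of (embedding, original text) pairs

    def store(self, text):
        self.items.append((embed(text), text))

    def recall(self, query, k=2):
        """Return the k stored texts most similar to the query."""
        q = embed(query)
        scored = sorted(self.items, key=lambda it: cosine(it[0], q),
                        reverse=True)
        return [text for _, text in scored[:k]]

# One turn of the loop: store what the agent learned about itself,
# then retrieve relevant self-knowledge before the next action.
memory = VectorMemory()
memory.store("task: summarize the quarterly report")
memory.store("learned: I am bad at arithmetic")
memory.store("learned: web search works well for factual questions")

context = memory.recall("I am bad at arithmetic", k=1)
```

Each iteration of the real agent would prepend the recalled `context` to the LLM prompt, which is what lets the loop "learn about its capabilities as they grow" across many steps.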



