Hi HN, I'm sharing a heavily downvoted LessWrong post containing a long conversation I had yesterday with Google Bard, because it seemed to do things I didn't know it could do. How would we know if these models are already AGI/ASI and we're simply not asking them to act like it? Anyway, I feel worn out from reading and thinking too much. Anyone else feel that way when working with LLMs? The volume of material they can generate is far more than I'm accustomed to.