
I use LLMs mainly as a mirror for my own thinking, not as a source of authority.

When I explain my ideas to the model during development, I often spot flaws or confusion in my own wording; that is where I learn the most. The author describes people who lean on AI for arguments or research, letting the model's fluent but statistical language stand in for their own thinking. Language is inherently uncertain, and LLMs simply express that uncertainty statistically. Once you see this, an LLM stops being a "confidence engine" and becomes a tool for testing and refining your thoughts.

A key point: however hard we try, we cannot help reacting to what the AI says. Since neither AI nor humans are infallible, we should take AI responses critically and stay skeptical, much as we would with a stranger.
