
The problem with this strategy is that you cannot trust the AI critic without committing logical fallacies. Why? It might cite non-existent diverging opinions, misuse sources, or introduce subtle changes into a citation.


> It might cite non-existent diverging opinions, misuse sources, or introduce subtle changes into a citation.

Just like a person could, which is why one validates. AI should not be one's sole source of information; relying on it alone is dangerous, to say the least. It also helps to stay within one's formal education and/or experience, and within logical boundaries one can track oneself. It is really all about understanding what you are doing before committing it to run without you.
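
To make "which is why one validates" concrete, here is a minimal sketch in Python of one such check: confirming that a quoted passage actually appears verbatim in the cited source. The function names and sample strings are hypothetical; real validation would also need to confirm the source exists and is used in context.

    import re

    def normalize(text: str) -> str:
        # Collapse whitespace so line breaks in the source text don't
        # cause false negatives; keep everything else exact.
        return re.sub(r"\s+", " ", text).strip()

    def quote_is_verbatim(quote: str, source_text: str) -> bool:
        # True only if the quoted passage occurs unchanged in the source.
        return normalize(quote) in normalize(source_text)

    # Hypothetical example of the "subtle changes" failure mode:
    source_text = "The method improves accuracy by 3.1% on the benchmark."
    claimed_quote = "The method improves accuracy by 13.1% on the benchmark."
    print(quote_is_verbatim(claimed_quote, source_text))  # False: one digit changed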


I mean, the thing is, AI is a stochastic parrot. But so, in a certain sense, is grabbing a random book from a library. You will always have to think about it yourself.

But that means it is worth treating LLMs as a thinking tool rather than as a tool that does the work for you.



