I find that it is terrible for research, and hallucinates 25% to 90% of its references.

If you tell it to find something and give it a detailed description of what you're looking for, it will act as if it has verified that the thing exists, and give you a bulleted lecture about why it is such an effective and interesting thing that 1) you didn't ask for, and 2) is really just your own description parroted back to you with embellishments.

I thought I was going to be able to use LLMs primarily for research, because I have read an enormous number of things (books, papers) in my life, and I can't necessarily find them again when they would be useful. Trying to track them down through LLMs is rarely successful and always agonizing, like pulling teeth from something that is constantly lying to you. A surprising outcome is that I often get so frustrated by the LLM, and so detailed in complaining about its stupid responses, that I remind myself of something that lets me find the reference on my own.

I have to suspect that people who find it useful for research are researching things that are easily discoverable through many other means. Those are not the interesting things. I do find it useful for locating something in software docs that I'm too lazy to look up myself, but that's literally saving me ten minutes.
