
I think it’s a very dangerous position to be in when you’re working in an area you’re not familiar with. I can read Python code and figure out whether it’s what I want or not. I couldn’t read an article about physics and tell you what’s accurate and what’s not.

Legal Eagle has a great video on how ChatGPT was used to present a legal argument, including made-up case references! Stuff like this is why I’m wary of relying on it in areas outside of my expertise.




There’s a world of difference between blindly trusting an LLM and using it to generate clues for further research.

You wouldn’t write a legal argument based on what some random stranger told you, would you?


> Oh so you mean I have at my fingertips a tool that can generate me a Scientific American issue on any topic I fancy?

I’m responding to this comment, where I think it’s clear that an LLM can’t even achieve the goal the poster would like.

> You wouldn’t write a legal argument based on what some random stranger told you, would you?

I wouldn’t, but a lawyer actually went to court with arguments literally written by a machine, without verification.


> I’m responding to this comment, where I think it’s clear that an LLM can’t even achieve the goal the poster would like.

I know it can't - the one thing it's missing is the ability to generate coherent and correct (and not ugly) domain-specific illustrations and diagrams to accompany the text. But that's not a big deal, it just means I need to add some txt2img and img2img models, and perhaps some old-school computer vision and image processing algos. They're all there at my fingertips too, the hardest thing about this is finding the right ComfyUI blocks to use and wiring them correctly.

Nothing in the universe says an LLM has to do the whole job zero-shot, end-to-end, in a single interaction.

> I wouldn’t, but a lawyer actually went to court with arguments literally written by a machine, without verification.

And surely a doctor somewhere tried to heal someone with whatever was on the first WebMD page returned by Google. There are always going to be lazy lawyers and doctors doing stupid things; laziness is natural for humans. It's not a valid argument against tools that aren't 100% reliable and idiot-proof; it's an argument for professional licensure.


Your entire argument seems to be “it’s fine if you’re knowledgeable about an area,” which may be true. However, this entire discussion is in response to a commenter who is explicitly not knowledgeable in the area they want to read about.

All the examples you give require domain knowledge, which is the opposite of what OP wants, so I’m not sure what your issue is with what I’m saying.



