Hacker News

That's the answer from his LLM, which decomposed the question and built the answer following the OP's prompt, obviously. I think you didn't get it.


> I think you didn't get it.

I did get it, and in my view my point still stands. If I need to use special prompts to ask such a simple question, then what are we doing here? The LLMs should be able to figure out a simple contradiction in the question the same way we (humans) do.


Not really a special prompt. It's just my custom instruction to ChatGPT; its purpose is to disambiguate my ramblings. It's pretty effective. I always use speech-to-text, so my input is messy, and this cleanup really helps.


