Hacker News

I don't care much about hype one way or the other, but I find that continually asking for changes/improvements past the first prompt or two almost always sends the AI off into the weeds in all but the simplest use cases.


New prompts in the same session are dangerous because the undesired output (including nonsense reasoning) gets put back into the context. Unless you're brainstorming and need the dialogue to build up toward a solution, you're much better off removing anything that isn't essential to the problem. If the last attempt was wrong, clear the context, feed in the spec along with whatever information the model must have (like the error log and relevant source), and write your instructions fresh.
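In practice that reset can be as simple as rebuilding the prompt from scratch on every retry instead of appending to the session. A minimal sketch of the idea, assuming you control prompt assembly yourself (the section names and example inputs are illustrative, not tied to any particular API):

```python
def build_fresh_prompt(spec: str, error_log: str, source: str, instructions: str) -> str:
    """Assemble a self-contained prompt from only the essentials,
    so earlier failed attempts never leak back into the context."""
    return "\n\n".join([
        "## Spec\n" + spec,
        "## Error log\n" + error_log,
        "## Source\n" + source,
        "## Instructions\n" + instructions,
    ])

# Each retry starts a brand-new session with this prompt rather than
# continuing the old one. Inputs below are hypothetical placeholders.
prompt = build_fresh_prompt(
    spec="Parse RFC 3339 timestamps into datetime objects.",
    error_log="ValueError: invalid literal for int() with base 10: 'T12'",
    source="def parse_ts(s): ...",
    instructions="Fix the parser; do not change the function signature.",
)
```

The point is that the prompt contains exactly one copy of each essential input and nothing from prior attempts, which is what clearing the context buys you.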



