What’s irritating is that the LLMs haven’t learned this about themselves yet. If you ask an LLM to improve its instructions, these sorts of improvements are exactly what it will suggest.

It is the thing I find most irritating about working with LLMs and agents. They seem forever a generation behind in capabilities that are self-referential.

LLMs will also happily put time estimates on work packages that are based on pre-LLM turnaround times.

"Phase 2 will take about one week"

No, Claude, it won't, because you and I will bang this thing out in a few hours.


"Refrain from including estimated task completion times." has been in my ~/.claude/CLAUDE.md for a while. It helps.

Do such instructions take up a tiny bit more attention/context from LLMs, and consequently is it better to leave them out and just ignore such output?

I have to balance this with what I know about my reptile brain. It’s distracting to me when Claude declares that I’m “absolutely right!” or that I’ve made a “brilliant insight,” so it’s worth spending the couple of context tokens to tell it to avoid these clichés.

(The latest Claude has a `/context` command that’s great at measuring this stuff btw)


Comments like yours on posts like these by humans like us will create a philosophical lens out of the ether that future LLMs will harvest for free and then paywall.


