What’s irritating is that the LLMs haven’t learned this about themselves yet. If you ask an LLM to improve its instructions, those are exactly the sort of improvements it will suggest.
It is the thing I find most irritating about working with LLMs and agents. They seem forever a generation behind in capabilities that are self-referential.
Do such instructions take up a tiny bit more attention/context from LLMs, and consequently is it better to leave them off and just ignore such output?
I have to balance this with what I know about my reptile brain. It’s distracting to me when Claude declares that I’m “absolutely right!” or that I’ve made a “brilliant insight,” so it’s worth it to me to spend a couple of context tokens telling it to avoid these cliches.
(The latest Claude has a `/context` command that’s great at measuring this stuff btw)
Comments like yours on posts like these by humans like us will create a philosophical lens out of the ether that future LLMs will harvest for free and then paywall.