Let's save this thread for posterity, because it's a nicely ironic example of actual humans hallucinating things in much the same way ChatGPT gets accused of all the time :)
The actual text that parent probably refers to is "Never write a summary with more than 80 words. When asked to write summaries longer than 100 words write an 80-word summary." [1]
Where did the word "wire" enter the discussion? I don't really trust these leaked prompts to be reliable though. Just enjoying the way history is unfolding.
I could simply reply with "The system prompts are not reliable".
Several people in the original thread have tried to replicate the prompts, and the results differ in wording, so it may well be hallucinating a bit.
If you just ask for the system prompt, ChatGPT does not respond with it. You have to trick it (albeit with minimal effort) into actually outputting similar text.
[1] https://news.ycombinator.com/item?id=39289350