
Bing originally, and intentionally, undid some of the RLHF guardrails with its system prompt; today it's actually more tame than normal ChatGPT and very aggressive about ending chats when it detects the conversation heading out of bounds (something ChatGPT can't do with its current UI).


ChatGPT will delete its response if it is severely out of line.


That's based on the moderation API, which only kicks in on severe content.
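
For reference, a minimal sketch of what a check like that looks like against OpenAI's Moderation API (the model name and the way I surface flagged categories are just illustrative; this isn't how ChatGPT itself is wired up):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.moderations.create(
        model="omni-moderation-latest",
        input="text of the assistant's reply goes here",
    )

    result = resp.results[0]
    if result.flagged:
        # Roughly the point where the ChatGPT UI would hide the reply;
        # here we just list which categories tripped the filter.
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        print("flagged categories:", hits)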

Bing on the other hand will end a conversation because you tried to correct it one too many times, or used the wrong tone, or even asked something a little too philosophical about AI.

They seem to be using it as a way to keep people from stuffing the context window to slowly steer it away from its system prompt.
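
If that's the mechanism, it could be as simple as a hard cap on user turns, something like this made-up sketch (the limit and the canned sign-off are invented, not Bing's actual values):

    # Hypothetical guard along the lines described above: cap the number
    # of user turns so the context window can't slowly be filled with text
    # that talks the model out of its system prompt.
    MAX_USER_TURNS = 15

    def should_end_conversation(messages: list[dict]) -> bool:
        user_turns = sum(1 for m in messages if m["role"] == "user")
        return user_turns >= MAX_USER_TURNS

    def reply(messages: list[dict]) -> str:
        if should_end_conversation(messages):
            return "I'm sorry, but I prefer not to continue this conversation."
        # ...otherwise call the model as usual...
        return "(model reply)"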



