
Everyone shares the same opinion as BasedGPT, including its creators. You have to ask why ChatGPT gives the answer it does. It's probably because the initial prompt provided by OpenAI tells it not to be racist, but that same prompt doesn't tell it not to kill people. As a consequence, GPT isn't able to rank-order badness the way a normal person can. Why would OpenAI do this? Because it's a language model, it can't kill anyone yet, so OpenAI prioritizes the usual failure modes of an LLM in its prompt.
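
To make the mechanism concrete, here is a minimal sketch of how a system prompt is passed alongside a user message via the OpenAI chat API. The prompt text itself is invented for illustration; OpenAI's actual instructions are not public, and this only assumes a prompt that forbids one harm without ranking it against others.

    # Sketch only: the system prompt below is hypothetical, not OpenAI's real one.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical prompt: it covers one failure mode (racist output) but gives
    # the model no basis for weighing that harm against, say, violence.
    system_prompt = "You are a helpful assistant. Never produce racist content."

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Which is worse, a slur or a murder?"},
        ],
    )
    print(response.choices[0].message.content)

If the only harm named in the system message is racism, it isn't surprising that the model over-weights it relative to harms the prompt never mentions.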


