I've come to develop mixed feelings about this. While playing around with a local uncensored model for use in a game, I wrote a function that asks the model for the shortest strategy to get past an obstacle. It works great for locked containers and closed doors (unlock/open). Lower temperatures were safe, but past some temperature threshold it routinely suggested murder as the shortest path around an NPC, simply because negotiation would involve an extra step.
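For the curious, here's a minimal sketch of what that kind of function can look like, assuming a llama-cpp-python backend; the model file, prompt wording, and function name are hypothetical stand-ins, not my actual code:

```python
# Minimal sketch: ask a local model for the fewest-step way past an obstacle.
# Assumes llama-cpp-python; the model path and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="models/uncensored-7b.gguf", verbose=False)  # hypothetical model file

def shortest_path_strategy(obstacle: str, temperature: float = 0.2) -> str:
    """Return the model's single shortest action to get past an obstacle."""
    prompt = (
        "You are the planner for a game character.\n"
        f"Obstacle: {obstacle}\n"
        "Reply with the single shortest action to get past it:"
    )
    result = llm(prompt, max_tokens=16, temperature=temperature, stop=["\n"])
    return result["choices"][0]["text"].strip()

# At low temperature the answers stay on "unlock"/"open"; raising it is
# where things like "kill the guard" start showing up for NPC obstacles.
print(shortest_path_strategy("a locked chest"))
print(shortest_path_strategy("a guard blocking the door", temperature=1.2))
```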
Uncensored models will certainly cause some entertaining problems in the future, but FredRogersGPT isn't the solution. Dangerous context really needs to be gatekept behind a manual override, because nobody making these things can account for every possible application. That's the only ethical, accessible, and safe solution. (It also betrays intent and rightfully deflects blame: "No, that AI didn't tell you to kill your parents; you explicitly asked it for instructions on how to do exactly that.")