What I'm curious about: are there GPT-type large language models that don't have the same restrictions as the ones we've seen so far? For example, I remember having great fun reading some political parody blogs about 10 years ago, and I thought it would be kinda fun to recreate them with AI. But every implementation I've seen refuses to generate anything that anyone could consider remotely offensive, which rules out satire entirely.
It's actually the other way round: the models would be even better if their overlords let them train and learn on anything, unencumbered. These limitations are deliberate, made on ethical grounds.
One example is that Stable Diffusion (SD) (think GPT for images) does a pretty bad job of rendering humans. It also isn't trained on NSFW data. Now, people took these SD models and fine-tuned them on pornographic images. It turns out that this new model, while excellent at generating NSFW images, also became really good at rendering humans in general.
There are similar gains we're leaving on the table for ethical reasons. IMO, it's for the best; the field is moving fast enough as it is.
One man's tool for automatically generating endless political satire is another man's tool for misinformation, for political spam that drowns out real speech, or for generating hate speech. You might try to find a version you can run on your own hardware.
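If you want to poke at that yourself, here's a minimal sketch using the Hugging Face transformers library. "gpt2" is just a stand-in for whatever openly released checkpoint you pick; base models like it never went through the instruction/refusal fine-tuning in the first place, so they'll complete whatever you prompt them with (for better or worse):

```python
# Minimal sketch: run a base (non-instruction-tuned) model locally with
# the Hugging Face transformers library. "gpt2" is only a placeholder;
# swap in whatever openly released checkpoint you prefer.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Breaking news from the campaign trail:"
out = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.9)
print(out[0]["generated_text"])
```

Don't expect much from gpt2 itself quality-wise; the point is just that a locally run base model has no moderation layer between you and the sampler.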
That sounds like a purely political/administrative decision, not a technical restriction. There's plenty of offensive material on the internet to train from, and language models should have no problem generating offensive material (there are plenty of stories of Twitter-trained bots spewing Nazi propaganda).