> I fundamentally believe that AI-generated content should always be attributed to the person who wrote the prompt.
That seems problematic in the context of libel lawsuits, since ChatGPT is also prone to "hallucinating" potentially libellous statements when the prompt author is asking genuine open questions.
(I do tend to agree that ChatGPT itself cannot be an "author", but I think OpenAI Inc could claim its output as their creative work if it wished - questions about its inputs notwithstanding - and also potentially be held liable for [not making reasonable attempts to avoid] libellous claims it makes. Section 230 and equivalent protections may or may not apply to the latter)
> That seems problematic in the context of libel lawsuits, since ChatGPT is also prone to "hallucinating" potentially libellous statements when the prompt author is asking genuine open questions.
But they also put a big disclaimer at the bottom that the facts it produces may be inaccurate. Would it help if they said everything produced is a work of fiction? "Any resemblance to actual events or locales or persons, living or dead, is entirely coincidental."
I think everyone needs to stop treating ChatGPT like a search engine. This includes Microsoft. It's not a good use case, because you can't trust what it says. It's a great starting point for further research, but you can't take the answers at face value.
> But they also put a big disclaimer at the bottom that the facts it produces may be inaccurate.
I don't think the disclaimer means that the people who type "what is $whistleblower's involvement in $scandal?" bear more responsibility for the false statements the LLM makes in response than the people who built the machine, trained it and decided which "safety" guidelines they were and weren't willing to impose. That'd be like arguing that true responsibility for the content on websites lay with the people whose search strings led them to the page. There's a much better argument that OpenAI ought to get some sort of Section 230-style protection, since, like purely user-generated content, their model interacts with users in novel and unpredictable ways they can't feasibly monitor in real time. But it is their model, and (as with Section 230) it's their responsibility to attempt to stop ChatGPT emitting the same libellous answer in future, not the responsibility of its users to avoid using it. Particularly since low-temperature LLMs tend to repeat their mistakes, as the toy sketch below illustrates.
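To make that last point concrete, here's a toy sketch of temperature-scaled softmax sampling (the scores and temperatures are made up for illustration, not taken from any real model): lowering the temperature concentrates probability on whichever completion the model ranks highest, so the same answer, true or false, gets served to almost everyone asking the same question.

    import math

    # Toy next-token scores for three candidate completions to the same question.
    # The numbers are invented purely for illustration.
    logits = {"false_allegation": 3.2, "accurate_answer": 2.9, "refusal": 1.1}

    def softmax_with_temperature(logits, temperature):
        # p_i = exp(z_i / T) / sum_j exp(z_j / T)
        scaled = {tok: z / temperature for tok, z in logits.items()}
        m = max(scaled.values())
        exps = {tok: math.exp(z - m) for tok, z in scaled.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    for t in (1.0, 0.1):
        probs = softmax_with_temperature(logits, t)
        print(f"T={t}: " + ", ".join(f"{tok}={p:.2f}" for tok, p in probs.items()))

    # T=1.0: the top completion is sampled roughly half the time, so answers vary.
    # T=0.1: the top completion is sampled ~95% of the time, so whatever the model
    # ranks highest - true or false - is reproduced for almost every user who asks.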
And ultimately, the way you get Microsoft to stop promoting ChatGPT as a search engine isn't by passing liability for its failures on to the user.
No one should be liable for anything they don't publish. You can write whatever you want into a private Google doc. Likewise, I can feed whatever prompt I want into ChatGPT and read the output.
By contrast, if I go and publish those false statements, yes I do think I should be liable for that!
But ChatGPT isn't a private Google document; it's a low-temperature content-generation model which can emit variations on the same false allegation to a very large number of users if it happens to be relevant to a commonly asked-about topic.
Clearly, if this happens, it has the same effect on the reputation of the subject of the false allegation as libel printed by a traditional publication. Your suggestion is that culpability for the libel rests with the hundreds of individuals who asked innocent questions and received answers, and maybe even shared them whilst reasonably believing them. My suggestion is that OpenAI, the entity behind the model which actually generated the false allegation, might be the more appropriate party to target (with appropriate protections if they respond swiftly to prevent the libellous claim from recurring).
And this is an example of why the people creating ChatGPT will end up holding the bag. When you read a document produced by ChatGPT, no matter whether you prompted it or someone else did, ChatGPT is providing you information. That information can be complete bollocks. If those lies portray a person or group in a bad way, that is defamation in most jurisdictions, because your opinion of that person or group has been negatively affected, falsely. No matter whether you choose to republish the information or not. The only fig leaf the creators of these systems have to hide behind is a disclaimer, and that won't hold up in many courts of law.
> But they also put a big disclaimer at the bottom that the facts it produces may be inaccurate.
They already do something similar if you ask it to write "naughty" code. Out of curiosity, I asked it to call some shellcode via Python, and it appended a warning at the end that the code was for educational purposes only.