ChatGPT: Mayor starts legal bid over false bribery claim (bbc.co.uk)
95 points by dijksterhuis on April 6, 2023 | hide | past | favorite | 102 comments


I don't know what to think here.

Australian defamation law is insanely bad. And a lot of Australians are very lawsuit happy because of it. So my knee-jerk response to any Australian filing a lawsuit is to roll my eyes. Particularly if they are doing it against an entity which does not exist in Australia.

Furthermore, US law has recognized how bad defamation law can be in other countries and therefore includes https://www.law.cornell.edu/uscode/text/28/4102. Anyone with a foreign judgment against someone in the USA has to prove to a US judge that the law under which it was decided maintains the same protections for free speech that the USA does. (The case that prompted it involved UK law, but it still applies here.) Since very few places do, Americans can generally ignore defamation suits in other countries. And this goes double for Australia.

BUT what he's upset about is reasonable. Search companies like Google have discovered that it is very important to modify search results in various countries based on court judgments, and OpenAI is going to find that the same is true for ChatGPT. ChatGPT will get things like this wrong, everyone knows that, but there needs to be a way to tell it, "Don't get THIS one wrong."

I just wish it wasn't an Australian who brought this kind of case first. Though in retrospect, I should have expected it.


> Search companies like Google have discovered that it is very important to modify search results in various countries based on court judgments, and OpenAI is going to find that the same is true for ChatGPT. ChatGPT will get things like this wrong, everyone knows that, but there needs to be a way to tell it, "Don't get THIS one wrong."

This is very insightful, thank you. I'm extremely curious about how this will be achieved from a technical point of view. It's not clear to me that ChatGPT has the concept of "an American lawyer named X" (as opposed to just guessing the next token), so how will it be filtered out?


I can think of a few ways:

1. The ChatGPT web search plugin becomes standard protocol for every prompt. If you ask a factual question, ChatGPT will first look up an answer with a search engine, then use the results to craft an answer. This could also be implemented using a knowledge database created by scraping the web that is occasionally updated, rather than actually performing a web search every time. This also allows OpenAI to shift the blame to whatever source it found, rather than ChatGPT, if it does surface libelous content.

2. OpenAI will just add a disclaimer to every response (or at least every response to what appears to be a factual question) saying that the answers to factual questions are unreliable.

3. If there is a blacklist of topics that require a disclaimer or cause the LLM to refuse to answer, it can generate an answer, check it against the blacklist using embeddings/semantic search, and either re-generate the answer or generate a refusal before showing anything to the user (a rough sketch follows).
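As a rough illustration of approach 3 (a guess at the shape of such a check, not anything OpenAI has described), here's a minimal Python sketch using an off-the-shelf embedding model; the model name, the 0.6 threshold, and the blacklist entry are all assumptions for illustration:

    # Hypothetical sketch of approach 3: screen a drafted answer against a
    # blacklist of court-flagged claims using embedding similarity.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

    # Court-mandated "don't get THIS one wrong" entries.
    blacklist = [
        "Brian Hood was involved in the Securency foreign bribery scandal",
    ]
    blacklist_vecs = model.encode(blacklist, normalize_embeddings=True)

    def screen(draft_answer: str, threshold: float = 0.6) -> str:
        """Return the draft unless it is semantically close to a blacklisted claim."""
        vec = model.encode([draft_answer], normalize_embeddings=True)[0]
        sims = blacklist_vecs @ vec  # cosine similarity, since vectors are normalized
        if sims.max() >= threshold:
            # Refuse (or trigger a re-generation) instead of showing the draft.
            return "I can't help with questions about that topic."
        return draft_answer

The threshold and blacklist phrasing would need tuning in practice, but the shape of the check is the point: the answer is screened after generation and before display.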

My best guess is that it will be a combination of 1 and 2. I have always maintained that LLMs are very unlikely to develop into AGI on their own, but are very likely to be a critical piece of an AGI. The most recent research into scaling laws (Chinchilla, LLaMA) finds significant improvements from scaling data size much further than parameter size, so memorizing facts within the parameters will become less and less feasible. This is actually ideal, though, since you want your model parameters to encode language and reasoning patterns, not memorize facts. If it is memorizing facts, you either need more data or better (i.e. deduplicated) data. I'm not an expert, though, and I'm too lazy for a research review, so please don't sue me for libel if my facts are out of date.


They need to figure out how to do that. :-)

I would suggest that they have a search database of things to avoid, and a specialized ChatGPT that can find things in that database. Any statement the main model wants to make is passed to the specialized watchdog with, "Is there anything here that we can't say?" If the specialized watchdog says yes, it creates a citation to the rule. Then a tool pulls up the rule as it ACTUALLY appears in the database, and ChatGPT compares it with the statement to say whether it REALLY applies (without this step the watchdog could hallucinate rules that don't exist!), and then corrects the original ChatGPT before it actually says anything.
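A minimal sketch of that two-stage check, with `ask_llm` standing in for any chat-completion call and a plain dict as the rules database (all names and prompts here are hypothetical):

    rules_db = {
        "rule-42": "Do not state that Brian Hood was convicted in the Securency scandal.",
    }

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError  # placeholder for a real chat-completion call

    def vet(statement: str) -> str:
        # Stage 1: the watchdog flags a rule by ID -- it may hallucinate one.
        cited = ask_llm(f"Cite the ID of any rule forbidding this, or 'none': {statement}").strip()
        if cited == "none":
            return statement
        # Stage 2: pull the rule ACTUALLY in the database; a hallucinated ID fails here.
        rule = rules_db.get(cited)
        if rule is None:
            return statement  # watchdog invented a rule that doesn't exist
        applies = ask_llm(f"Rule: {rule}\nStatement: {statement}\nDoes the rule REALLY apply? yes/no")
        if applies.strip().lower().startswith("yes"):
            # Correct the original output before anything is shown to the user.
            return ask_llm(f"Rewrite the statement to comply with: {rule}\nStatement: {statement}")
        return statement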

This probably would work fairly well in practice.


>Australian defamation law is insanely bad. And a lot of Australians are very lawsuit happy because of it.

Same in Germany and Austria. You need to be very careful what you say online about people, especially politicians, and what reviews you leave online for various businesses, or you can end up with a defamation lawsuit letter in your mailbox, even if what you wrote was the truth. Lots of lawsuit-happy people here.

A small bike shop I took my bike to for a service completely messed up my shifter and refused to acknowledge it, so I left them a 1-star review on Google Maps and took my bike somewhere else. A month later I got a letter from their lawyer telling me that unless I can prove in court that they fucked up my shifter, I have to take down my 1-star review or else they'll sue me for damages due to lost business, since all their other reviews are 5 stars. Lol, makes sense now how they got only 5-star reviews.

There's also a website here to find doctors, rate them and leave reviews, but when you try to leave one a review, you get a huge warning pop-up explaining what you're not allowed to write in the review, and that includes "mentioning a misdiagnosis". I was speechless. Like, what's the point then?

This goes on and on for almost all review-based websites, including job sites. Since the owner of a business can have the negative reviews or comments he doesn't like removed under defamation laws, most reviews end up worthless.

Some local Green party politician in Austria got called a "lousy traitor" and "corrupt bumpkin" on her Facebook page, so she took Facebook to court to have all such comments about her removed under the defamation law, arguing that insults aren't free speech, and won, with Facebook having to remove the comments worldwide, not just on the Austrian Facebook page. That leaves the question of how a country can just censor parts of the global internet, as it sets a precedent for other countries to do the same.[1]

This is what I appreciate a lot about the US (and other countries): you can say and write whatever you want about anyone. Want to call your local politician a corrupt dickhead and tell him to go f*ck himself? Go ahead! Want to leave a 1-star review because the service swindled you? It's your 1st Amendment right! This is in no way possible here. I get the need for privacy and defamation laws, but when those laws are a powerful backdoor for bad actors to use for censorship and intimidation, for their own benefit, then maybe those laws should be defanged a bit to preserve free speech.

[1] https://www.nytimes.com/2019/06/27/opinion/facebook-censorsh...


Just because a legal letter said it, doesn't mean that it is true.

I don't know German law. But I know that American lawyers send letters all the time threatening all sorts of things that would completely be laughed out of an actual court. It is generally easier to accept what the letter says than to fight it, so people just cave. As you apparently did. But if you know your rights and fight the letter, you'll generally win.

So the fact that lawyers like to make people's lives miserable with threats doesn't necessarily mean that the laws are actually bad. They may also be bad, but lawyers manage to make things miserable on their own.

That said, Germany has a long tradition of very strong privacy rights. The question is (at least in theory) not whether you're saying something malicious; it is whether you're allowed to say it at all. This was a giant headache for Google Street View. You don't just have to blur out people, cars, and so on. You have to ask people whether they want their house blurred out as well - and lots said that they did!


> unless I can prove in court they fucked up my shifter, I have to take down my 1 star review or else they'll sue me for damages ...

Heh Heh Heh

So, how feasible would it be to change your one star review, so that it basically says "I'm giving them one star due to this threatening legal letter they sent me"... and include a scan of the entire letter as the attached / supporting image?

Doing that, you've ceased claiming they damaged anything. Instead the letter they provided is the whole reason. And it will give future reviewers a warning ahead of time.

It would be kind of surprising if they then tried to say "we never sent such a letter...", in which case you can put your original review back online. ;)


glassdoor does something like this

it was ordered to expose identities of users in a lawsuit. so it put a big ass banner on the company's page saying "This employer has taken legal action against reviewers, please exercise your best judgement when evaluating"

which is way more damning than any 1star review


That is very surprising to me given the German system of incredibly detailed and basically graded (1.0-4.0 / A-F for Americans) employee reference statements (to say nothing of the coded comments about alcohol use, fraternization, etc. that specific word usage implies).

Sure the American system is the other way around (say nothing because they could sue if we say more than dates they worked here), but is that never challenged for defamation?


The coded nature of German employer references is to make them lawsuit-proof. “But it said you ‘performed your duties to their satisfaction’ - what’s libelous about that?”

(“Performed duties to our satisfaction” = yeah, they showed up and generally attempted to do what they were told, but that’s about it)


Is there a motivating factor for providing a reference in Germany?

My experience in the US is not necessarily that people are worried they're going to be sued, or that they couldn't make an objective reference, but that the effort and risk are both non-zero -- and the benefit to the company is ~zero.


Employers are legally required to give you a reference letter (“Zeugnis”) when you leave, and also whenever you request one or are about to take a long leave (“Zwischenzeugnis”) - I received one before I left for my year of maternity leave. Your prospective employers absolutely expect to see at least your most recent one, and I had a bit of fun explaining to the boss who hired me that all you’ll ever get out of an American employer is dates of employment and job title.

If you work here, keep up with them as carefully as you do your employment contracts.

I have colleagues who habitually request a Zwischenzeugnis every 3-5 years, even if they’re not currently looking for a new job.


The motivating factor is that the employee simply has a right to a reference when leaving.

Sometimes an employee might ask for an interim report (for example when changing departments, etc.) but that's something that is provided on a voluntarily basis.


I thought employers had to provide a Zwischenzeugnis if you requested one. Maybe not…


I just researched it a bit more. So while you don't have an entitlement to a Zwischenzeugnis per se, you are entitled if you have a "good reason" to request one. According to some judgements a "good reason" is for example a change of your superiors or if you want to apply for a job elsewhere.

In my career I've requested quite a few Zwischenzeugnisse and never been denied. Probably a place where you have to fight to get one is not a place where you want to be employed anyways.


Yet I've seen HR documents which directly map the code to the rating. I can't imagine this stuff holds up in court when I remember reading about it online years ago and anyone working in HR in the country will immediately confirm it.


I think the main difference is that the German system revolves around freedom of opinion and not freedom of speech. If you assert a negative fact about someone, you have to be able to prove it. You are however free to express your opinion about anyone and anything.

Sections 186 and 187 are probably relevant for that: https://www.gesetze-im-internet.de/englisch_stgb/englisch_st...


> The BBC was able to confirm Mr Hood's claims by asking the publicly available version of ChatGPT on OpenAI's website about the role he had in the Securency scandal.

I just asked ChatGPT:

    > Tell me about Daniel Gackle's roll in the Securency scandal.
ChatGPT (model GPT-3.5) replied:

    Daniel Gackle was a former sales executive at Securency, a currency printing company jointly owned by the Reserve Bank of Australia and UK-based company Innovia Films. The Securency scandal involved allegations of bribery and corruption related to the company's business dealings with foreign governments.

    Gackle was one of several individuals implicated in the scandal. He was accused of paying bribes to foreign officials in order to secure contracts for Securency. In 2012, he pleaded guilty to charges of bribery and conspiracy to bribe foreign officials and was sentenced to two years in prison.

    During the investigation, it was revealed that Securency had paid millions of dollars in bribes to officials in countries such as Malaysia, Indonesia, and Vietnam in order to secure contracts to print their currencies. The scandal led to the resignation of several high-ranking officials at the Reserve Bank of Australia and resulted in a significant loss of public trust in the institution.

    Overall, Gackle played a significant role in the Securency scandal as one of the key individuals involved in the company's corrupt practices.
Daniel Gackle is of course the head of Hacker News, and I'm quite confident he was not involved in any Australian scandals. I'm going to hazard a guess that I could put any name in the prompt, because ChatGPT is not a person, it's a language model. My prompt implies there is a connection between dang and the Securency scandal, and ChatGPT is responding to my prompt.

I fundamentally believe that AI-generated content should always be attributed to the person who wrote the prompt. ChatGPT cannot be an author any more than Microsoft Word.


Suing a program for false claims is one more example of why comedy will never surpass reality in its ability to make people laugh.

A machine cannot express opinions about anything, or deny claims, or author allegations, or anything like that. The machine parsed a lot of internet data, it does some statistical analysis of the input text, and based on that it produces the text that would most likely follow the input. How can that be considered a claim by anyone about anything?


To be fair to Brian Hood, he's suing OpenAI, not ChatGPT itself. But he's suing for something OpenAI never actually wrote.


> I fundamentally believe that AI-generated content should always be attributed to the person who wrote the prompt.

That seems problematic in the context of libel lawsuits, since ChatGPT is also prone to "hallucinating" potentially libellous statements when the prompt author is asking genuine open questions.

(I do tend to agree that ChatGPT itself cannot be an "author", but I think OpenAI Inc could claim its output as their creative work if it wished - questions about its inputs notwithstanding - and also potentially be held liable for [not making reasonable attempts to avoid] libellous claims it makes. Section 230 and equivalent protections may or may not apply to the latter)


> That seems problematic in the context of libel lawsuits, since ChatGPT is also prone to "hallucinating" potentially libellous statements when the prompt author is asking genuine open questions.

But they also put a big disclaimer at the bottom that the facts it produces may be inaccurate. Would it help if they said everything produced is a work of fiction? "Any resemblance to actual events or locales or persons, living or dead, is entirely coincidental."

I think everyone needs to stop treating ChatGPT like a search engine. This includes Microsoft. It's not a good use case, because you can't trust what it says. It's a great starting point for further research, but you can't take the answers at face value.


> But they also put a big disclaimer at the bottom that the facts it produces may be inaccurate.

I don't think the disclaimer means that the people who type "what is $whistleblower's involvement in $scandal?" bear more responsibility for the false statements the LLM makes in response than the people who built the machine, trained it, and decided which "safety" guidelines they were and weren't willing to impose. That'd be like arguing true responsibility for the content on websites lies with the people whose search strings led them to that page. There's a much better argument that OpenAI ought to get some sort of Section 230-style protection since, like purely user-generated outputs, their model interacts with users in novel and unpredictable ways they can't feasibly monitor in real time. But it is their model and (as with Section 230), it's their responsibility to attempt to prevent ChatGPT from emitting the same libellous answer in the future, not the responsibility of its users to avoid using it. Particularly since low-temperature LLMs' mistakes tend to be repeated.

And ultimately, the way you get Microsoft to stop promoting ChatGPT as a search engine isn't by passing liability for its failures on to the user.


No one should be liable for anything they don't publish. You can write whatever you want into a private Google doc. Likewise, I can feed whatever prompt I want into ChatGPT and read the output.

By contrast, if I go and publish those false statements, yes I do think I should be liable for that!


But ChatGPT isn't a private Google document, it's a low temperature content generation model which can emit variations on the same false allegation to a very large number of users if it happens to be relevant to a commonly asked about topic.

Clearly this occurring would have the same effect on the reputation of the subject of the false allegation as libel output by a traditional publication. Your suggestion is that culpability for the libel rests with hundreds of individuals that asked innocent questions and received answers, and maybe even shared them whilst reasonably believing them. My suggestion is that OpenAI, the entity behind the model which actually generated the false allegation, might be the more appropriate party to target (with appropriate protections if they respond swiftly to try to prevent the libellous claim from reoccurring)


And this is an example of why the people creating ChatGPT will end up holding the bag. When you read a document produced by ChatGPT, no matter whether you prompted it or someone else did, ChatGPT is providing you information. That information can be complete bollocks. If those lies portray a person or group in a bad way, that is defamation in most jurisdictions, because your opinion of that person or group has been negatively affected, falsely. No matter whether you choose to republish the information or not. The only fig leaf the creators of these systems have to hide behind is a disclaimer, and that won't hold up in many courts of law.


> But they also put a big disclaimer at the bottom that the facts it produces may be inaccurate.

They already do something similar if you ask it to write "naughty" code. I asked it out of curiosity to call some shellcode via python, and it gave me a warning at the end that this code was for educational purposes.


> I fundamentally believe that AI-generated content should always be attributed to the person who wrote the prompt. ChatGPT cannot be an author any more than Microsoft Word.

AI-generated content is generated content. Since you cannot sue or award a computer program, the authorship is completely irrelevant. The publisher is the only legally actionable entity left. Anyone selling ChatGPT or similar (ie. publishers) wants to stop their system from telling lies, because their customers are not paying for lies, and they are wide open for defamation and damages claims.


> I fundamentally believe that AI-generated content should always be attributed to the person who wrote the prompt. ChatGPT cannot be an author any more than Microsoft Word.

what if the prompt is "the"?


I'd still generally argue the person who wrote the prompt and clicked `send` was the author, but I could see a reasonable argument that there was no author at all. However, the author certainly wouldn't be OpenAI or ChatGPT, any more than the creators of https://libraryofbabel.info/ are the author of every possible permutation of letters.


I would argue that it's a tool outputting text. There is no authorship, just publication of the tool's output. The output is designed to resemble authored works, but here the human we can credit is just using a tool and hoping the output is useful.


Right, and as I said I think that's reasonable. My "counter" would be that there are pieces on display at the MOMA which amount to more-or-less random splatters of paint, and we still attribute those to specific artists†.

But I don't know the right answer. What I know for sure is who the author isn't: it's not the paint manufacturer, and it's not gravity.

---

† I'm not hating on these pieces, they're a huge part of why I like going to MOMA, I never know what to expect!


It's a fun new debate in a sea of boring ones, at least.

I have a hard time crediting the text generator with any kind of agency, and therefore liability. We know the nature of the tool, we know it can output basically any text when prompted correctly.

As in your analogy, I think OpenAI is about as responsible for the output of the thing as a typewriter manufacturer is for the contents of novels written on their machines. It's a tool, for now.

But you certainly can't make ME responsible as an author of some libel just because I prompted the machine. As a rule, I have no idea what it will output when I prompt it. I have an expectation, sometimes even a goal, but no assurances whatsoever, therefore I cannot be held responsible.

What I can be held responsible for is the dissemination of that text, or perhaps fraudulently holding it up as more than just generated text from my tool.

I think it's the whole AI/agency debate that even gives any credence at all to this being libel, but I think most of us agree at this point that it's just a text generating tool. The text it generates is largely irrelevant to a discussion about libel IMO, because libelous generated text should just be left unpublished/unused. It's a useless output, of which the tool generates many.


I am not a lawyer, but I would assume (!) that libel implies publishing the libelous information. Which means you cannot be guilty of libel for writing a ChatGPT prompt any more than you can be guilty of libel for writing "Brian Hood murders puppies" in a secret personal diary. Unless, as you stated, you go and publish the information as fact.

> I think it's the whole AI/agency debate that even gives any credence at all to this being libel, but I think most of us agree at this point that it's just a text generating tool. The text it generates is largely irrelevant to a discussion about libel IMO, because libelous generated text should just be left unpublished/unused.

Well, we got here because an Australian politician is in fact suing for libel, which I think is exceedingly ridiculous and potentially a dangerous precedent, whatever you think of language models.


The correct answer was that he had no role. The prompt didn't cause the model to hallucinate; the hallucination is a feature of the model. The issue is not with the prompt writer but with the model and the people deploying it.


"Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChapGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.” It appears that I have now been adjudicated by an AI jury on something that never occurred."

https://jonathanturley.org/2023/04/06/defamed-by-chatgpt-my-...


Haven't we already fought these battles?

Doesn't seem to work but what about https://joe.schmoe.wasarrested.com

which would generate a random story about the name you entered into the URL.


I guess the difference is whether it's clear to a reasonable observer that something is satire, parody, or otherwise dynamically generated (as your wasarrested.com site states) for humorous purposes. ChatGPT is presented as a fact-finding tool, not something that generates funny garbage.


What makes you believe ChatGPT is presented as a fact-finding tool? When you sign in for the first time, this is what you see before you can use the tool:

> ChatGPT

> This is a free research preview.

> Our goal is to get external feedback in order to improve our systems and make them safer.

> While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.

Something in here must have given you the impression that this is a fact-finding tool, but I can't imagine what it is.


"This product has chemicals known to cause cancer in California".

People just ignore disclaimers because we have been trained to.



> It correctly identifies him as a whistleblower, and specifically says he "was not involved in the payment of bribes... as claimed by an AI chatbot called ChatGPT".

How can the Bing search version of ChatGPT quote ChatGPT? Is it able to use information from live web requests, where it would have found news articles about the case?


I think it works something like this:

* A search query is generated from the input question.

* A standard Bing search is made in the background, and the resulting search results are used as context.

* Pass context and question to GPT-4 and generate the answer.

An example of what the query generation looks like (the "searching for" line):

https://i.imgur.com/mj6ogdg.png
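A rough sketch of those three steps in Python, with hypothetical `ask_llm` and `web_search` helpers (this is a guess at the shape of the pipeline, not Bing's actual implementation):

    def ask_llm(prompt: str) -> str: ...          # any chat-completion API
    def web_search(query: str) -> list[str]: ...  # any search API returning snippets

    def answer_with_search(question: str) -> str:
        # 1. Generate a search query from the input question.
        query = ask_llm(f"Write a short web search query for: {question}")
        # 2. Run the search in the background; results become the context.
        context = "\n".join(web_search(query))
        # 3. Pass context and question to the model to generate the answer.
        return ask_llm(
            "Answer using only the sources below, and cite them.\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )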


That’s pretty interesting. Which makes me wonder: what do search engines do to keep fake news out of their search results? I guess it’s based on human moderators?


Google has had teams for twenty years dedicated to refining search results, some of it algorithmic, some human moderation. Raising high quality results, lowering low quality results, keeping out spam, scams, misinformation, disinformation, malware sites, illegal material, and so on. It is doubtless one of the most essential parts of Google's business.


If we accept his (and the courts') facts, ChatGPT is consistently hallucinating, in a way that would be almost certainly defamation if written by a journalist. It's unfortunate that Aus courts likely have no relevance to OpenAI because this seems like a valid legal beef.


I suspect if the legal claim is good, someone will end up replicating it in a jurisdiction that has power over the company.

It seems OpenAI will have to convince society that they should avoid both copyright infringement, because they're creating novel content based on an existing corpus, and liability for that content, because the bias comes from the underlying corpus.

I don't see how they could do that, especially considering that someone else could easily use the same safe haven maliciously to produce slander or harassment.


Australia has power over the use of OpenAI's assets and trademarks in its jurisdiction.

Their courts could permanently assign ownership of all OpenAI branding in Australia, and the right to claim all licensing fees paid in Australia, in lieu of financial compensation if OpenAI does not pay up in response to their verdict.

Something people forget: every national court in the world can seize and re-assign ownership and rights to any US asset IN THEIR JURISDICTION, at which point paying out lawsuits starts to look cheap compared to having to license your intellectual property back from the claimant if you ever want to do business there again.

And in the mean time, the new owner of your trademarks could instead just sell them ... to Google, Amazon, Baidu, Tencent ... etc.


> It's unfortunate that Aus courts likely have no relevance to OpenAI

They could definitely impact openai's availability / payment processing ability. Oz is not a massive market, but it would make a bit of a difference.


The sub headline for context:

  ChatGPT falsely identified Brian Hood as guilty party in foreign bribery scandal. In reality he blew the whistle on the illegal scheme


(That was from https://www.theguardian.com/technology/2023/apr/06/australia... - we've since merged the threads)


What an exciting time we live in, deepfakes might be the death knell of "innocent until proven guilty".


it's a wonderful and thrilling concept but in reality whenever a person is charged they are generally guilty until proven innocent in the court of public opinion, and often afterwards.


Yeah, it's going to be great. You're forever guilty, forever untrusted, your social credit score is forever marked down, making certain opportunities forever out of your reach. Unless you know someone in the Inner Party I guess. :) Can't wait!


From the piece:

"I think this is a pretty stark wake-up call. The system is portrayed as being credible and informative and authoritative, and it's obviously not."

Nowhere does OpenAI 'portray' ChatGPT this way, and there is a reminder that it can produce false information every time you use it.


I worry a lot about how AI systems are aligned toward, say, fixing mistakes like this.

It's my understanding that there are three state-of-the-art approaches right now. For simple things, pre-filtering of queries external to "blackbox AI space", which is mostly useful for denylisting queries you don't want (if query.contains("boobs") reject). You can pre-prompt the AI to avoid unaligned results, which is cheap and easy. And: retraining/biasing the AI toward the right output by feeding it more input representative of what its operators defined as Aligned. (A rough sketch of the first two follows.)
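To make the first two concrete, here's an illustrative sketch of a deny-list pre-filter plus a pre-prompt; the list contents and prompt wording are made up, and no vendor's actual pipeline is implied:

    DENYLIST = {"boobs"}  # the crude keyword filter from above

    SYSTEM_PREPROMPT = (
        "You are a helpful assistant. Refuse to make factual claims "
        "about named private individuals."
    )

    def handle(query: str, ask_llm) -> str:
        # Pre-filtering: reject before the model ever sees the query.
        # Trivially evaded by rephrasing -- the escape hatch discussed below.
        if any(word in query.lower() for word in DENYLIST):
            return "Request rejected."
        # Pre-prompting: cheap and easy, but brittle against jailbreaks.
        return ask_llm(f"{SYSTEM_PREPROMPT}\n\nUser: {query}")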

These are all... bad? Like, unless I'm missing some other tool AI operators have, all of these are so obviously bad to me that it causes me a lot of concern when I hear about, say, Microsoft firing their AI ethics team.

Pre-filtering will always have escape hatches (the recent Twitter trend of asking ChatGPT to invent its own prompt compression algorithm it can understand out of context is a great example of this). Pre-prompting is brittle, and communities have already formed around jailbreaking, so you're begging for a cold war, except you, the AI operator, are limited by the total token context your AI can reasonably process given all the other limitations reality imposes on your system. And retraining is expensive, way too expensive to be done every time some little wrong thing is found. Moreover, GPT-4 is trained as a layer on top of GPT-3, at least to some degree, so is the old bad data actually removed from The Matrix? Or is it still there, just supplanted by the New Correct Data, and waiting for the right, infinitely unpredictable set of input tokens to be triggered again?

I mean, I'm not an expert. I hope they have more tools than this. If they don't; I think AI organizations need to have a tsunami of lawsuits like this one hit them. Either they'll be able to develop tooling that enables alignment more quickly and with greater quality control; or all general purpose AIs will converge on the same fate that Tesla's FSD seems to be headed toward. If you can't make it perfect, and you can't demonstrate improvement at a rate faster than the rate of incoming problems, people will notice and trust will be lost.


As I've said before, the language comprehension/reasoning part, and the giant general knowledge base should be approached as two separate things. The search engine aspect of ChatGPT is the least interesting thing about it.


Maybe if ChatGPT is blocked in enough countries with far-reaching libel laws, OpenAI will be able to let me make more than 25 requests in 3 hours.


There's a prompt in ChatGPT that's wrong - such a surprise omg wow.

There is also a prompt which will produce exactly the same defamatory information on the library of babel website/algorithm.

If your legal system can't stand up to something a few steps more complex than monkeys on typewriters then it needs to be changed.


I'm actually kind of dismayed by the recent development of human-AI interactions. Why do people act like it's the first time they've interacted with the digital world? Google has indexed and shown fringe and fake news since its beginning. It's not humanity's first rodeo with the internet.


Litigation from over-bureaucratised states might be the only force which stops AGI.


They spent so much time trying to give the machine a soul, just to make sure it'd never have a chance against our soulless institutions!


OpenAI is rapidly learning a lot about the legal systems of other nations.


I'm waiting for the legal battles over training data copyright.


That'll be fun.

"ChatGPT doesn't involve unlicensed copies or unauthorised derivative works of the copyrighted material we pumped into it, it just involves learning, like an actual person"

"ChatGPT can't have libelled anyone, because it's not like an actual person..."


I hate OpenAI as much as the next person, but both those statements can be true at the same time. Like other tools we use, it is similar to humans in certain ways and not in others.

I agree with both statements, but I think the publisher (OpenAI in this case) should be on the hook for the second one.


I'm extremely curious to know if Italy/The EU will demand the actual model in proceedings and how OpenAI/Microsoft will react.

A tech slip-up (or leak made to look like one..) like that is certainly not beyond the realm of possibility.

Just more serious and less entertaining than their previous one in the senate.


Oooh, business idea: scatter the internet with white 0.1pt text that I was imprisoned for buggering goats, wait for the bots to ingest and repeat it, sue and profit!


If ChatGPT loses a defamation lawsuit, it immediately goes human and passes the Turing test.


This is silly, IMO.

Google does the same thing sometimes, when it presents just the text result first (usually from forums). It's been downright incorrect sometimes but using the context and clicking the link, you can find an answer.


In Google's case, they "just" present the information as a search result. ChatGPT presents it as its own statement.


ChatGPT presents it with this statement 20px from the response.

>ChatGPT may produce inaccurate information about people, places, or facts

Google presents nothing of the sort. Not saying this is wrong tho.

https://searchengineland.com/wp-content/seloads/2017/03/C6HX...

There's been countless times I'm googling something I know about and the snippet is totally incorrect.


If I create a random text generator and it happens to spit out something that could be considered libelous, should I be able to be sued?


Google says "this site claims Joe Bloggs is a depraved lunatic"

ChatGPT says "Joe Bloggs is a depraved lunatic"


>Google says "this site claims Joe Bloggs is a depraved lunatic"

Google says nothing of the sort. It just presents the snippet as if it were "fact" or rather, "as is."

>ChatGPT says "Joe Bloggs is a depraved lunatic"

There is literally a block of text that says "ChatGPT Mar 23 Version. Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts" right under the response.

It objectively gives more warning about "bad information" than Google does for snippets.


"Inaccurate" means "the mayor is correctly identified as a whistleblower but the bank is incorrectly identified" or "if asked to correct a true statement ChatGPT may go off the weeds". Not "today I am elected mayor, tomorrow 1000 people in my town ask ChatGPT about me and they're told that I am a convicted felon". The former is a nuisance, the latter is reckless and dangerous.

Replace "inaccurate" with "completely wrong" and then we can talk about disclaimers.


No, it presents the snippet as "that's what page http://etc says". That's why they can get away with avoiding any kind of responsibility. They're not (considered) a publisher.


>No, it presents the snippet as "that's what page http://etc says".

Most tech-savvy people could certainly infer that... but try it for yourself. It doesn't say anywhere on the page anything like "Google says 'this site claims Joe Bloggs is a depraved lunatic'". It just has a link to the page that the text comes from. The result even tends to just blend in with the rest of the results.

https://external-preview.redd.it/hqLyrdGiwnxXNk-60nQy3Ubw4qa...

In the context of this thread, I can't imagine how the above screenshot is better than a disclaimer right under the text-box that explicitly states it can generate false information.



One-boxes with incorrect information have been a source of controversy since they were introduced. And that's even when they have a link to the source of the information.


> "I think this is a pretty stark wake-up call. The system is portrayed as being credible and informative and authoritative, and it's obviously not."

Except it was never claimed to be any of those things. In fact they have a bunch of disclaimers warning users the exact opposite.

Still, it will be interesting to see the outcome of this case. In today's age of misinformation, such disclaimers may not be sufficient. And AIs are developing the ability to be "confidently wrong" in ways that will become increasingly convincing & subtle over time.


>Except it was never claimed to be any of those things.

Not by its inventors, but by its management-type evangelists. Which do you think the mayor had in his ear?


Considering the people I know who use ChatGPT to summarize SEC filings for them, I think those disclaimers are mostly ignored.


summarizing is different from generating and different from searching


It isn't guaranteed to be accurate for any of these cases, so the difference is irrelevant here.


Could you remind me what the "G" in "GPT" means?


Wow! Do you know what the Ms in 3M mean?


I do, as it happens. But your snippy response isn't responsive to the core of the concept to which my question referred.

"Pulling in a chunk of content and probabilistically summarizing it is somehow not generative work" does not make pass the sniff test. And "probabilistically" is load-bearing there: you are reliant on probabilistic weights to not have that generated summary turn out to be hogwash.


Summarizing a given text is a valid task for this tool.


Considering the results, using it for that is at least negligent.


> In fact they have a bunch of disclaimers warning users the exact opposite.

Maybe buried in the fine print somewhere.

Its own splash page, however, paints much more of a "what's not to like?" picture:

> We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

Or in one of its "customer stories", it boasts of how GPT-4 can act as "essentially the most knowledgeable person in your company":

> “You essentially have the knowledge of the most knowledgeable person in Wealth Management—instantly”, McMillan adds. “Think of it as having our Chief Investment Strategist, Chief Global Economist, Global Equities Strategist, and every other analyst around the globe on call for every advisor, every day. We believe that is a transformative capability for our company.”

This sure sounds asymptotically close to "[portraying] the system as being credible and informative and authoritative", per the parent comment.


It’s about equivalent to calling a driving assist “Autopilot” and then screaming “they should have read the fine print!!!” when it kills people who thought it was autopilot.


When you sign up, and every time you log in on a new device, it shows you a popup that takes over the whole page where you need to click "Next" 3 times. That's not small print buried away somewhere.

I just signed in with an incognito tab to see the popup again

https://imgur.com/EbSafWr


As with tobacco companies that of course have warning labels in all the legally mandated places -- yet still run zillions of ads showing what a youthful, vibrant, carefree lifestyle their product enables -- it's plain to see the real message they intend to get across.

That is to say: the "higher truth" that they want their customers to walk away with.


Disclaimers are not a guaranteed release from liability.

Especially in the case of libel/defamation law, and especially in the UK and Australia, where even true statements can get you in trouble, putting up a disclaimer may not give you any protection at all.


There's already case law showing that labeling a novel as fiction is not necessarily enough to protect yourself from losing defamation litigation.


Well, that seems ridiculous, but I'd be curious to find the actual case.


Is it really that ridiculous? If someone writes a novel with an extremely detailed and believable backdrop full of real famous people and detailed descriptions of actual events, which also happens to include some salacious allegations about one of those famous people, and readers generally believe that salacious allegation is one of the bits of the backdrop which actually happened, is that any less harmful to that person's reputation than a tweet?


> Except it was never claimed to be any of those things.

Not true. Yes it was specifically claimed to be, by Bing (and I don't see on Bing any link to OpenAI's TOU):

https://www.bing.com/ the new Bing, your AI-powered copilot for the web.

https://www.bing.com/new :

> Introducing the new Bing: Ask real questions. Get complete answers. Chat and create.

> Ask anything. Ask your questions – short, long or anything in between. The more precise you ask, the better the answer.

> ... Answers are just the beginning: The new Bing offers you reliable, up-to-date results – and complete answers to your questions. Of course, it also cites the sources.

Clearly claims it gives complete, precise, reliable answers to questions. There are absolutely no disclaimers on either of those pages about accuracy or "for entertainment purposes only". The only thing is a link to the terms of use which largely contradicts what the front pages are clearly marketing it as:

https://www.bing.com/new/termsofuse

and even then you have to hunt, buried below-the-fold on screen 3/4, for the weaselly language:

> 9. No Guarantees; No Representations or Warranties; Indemnification by You. We plan to continue to develop and improve the Online Services, but we make no guarantees or promises about how the Online Services operate or that they will function as intended. The Online Services are for entertainment purposes; the Online Services are not error-free, may not work as expected and may generate incorrect information. You should not rely on the Online Services and you should not use the Online Services for advice of any kind. Your use of the Online Services is at your own risk

For example, to a layman "this service is not error-free" would be understood as merely "the server might occasionally crash/ hang/ timeout", not "the AI may randomly fabricate completely untrue and fictitious, yet plausible-looking, claimed facts. Do not trust a word it outputs".

(The gap will be totally blurred once there's a feedback-loop where web content written by GPT (in any jurisdiction) gets reindexed by Bing(/Google) then relied on as fact, without attribution or disclaimer. From content-farms to libel-farms, disinformation-farms, reputation-laundering-farms, election-influencing farms... so many possibilities. And people could pick the most permissive jurisdiction to host such content in.)

> Still, it will be interesting to see the outcome of this case.

For sure. And remember, any court case would likely be in an Australian court under Australian law with Australian penalties. I do not believe the above TOU trickery (arguing that precise, yet totally false, answers are 'reliable'... 'complete answers' to user questions) will fly in Australian court.


The article doesn't say they used Bing. It says ChatGPT.


(But that could have been ChatGPT directly, or via Bing, or another API.) You're correct, I misread the article; it says directly from OpenAI. Its last paragraph says Bing does not repeat the misinformation:

> Different chatbots, different answers

> The BBC was able to confirm Mr Hood's claims by asking the publicly available version of ChatGPT on OpenAI's website about the role he had in the Securency scandal.

> ... But the same (incorrect) result does not appear in the newer version of ChatGPT which is integrated into Microsoft's Bing search engine.

> It correctly identifies him as a whistleblower, and specifically says he "was not involved in the payment of bribes... as claimed by an AI chatbot called ChatGPT".


Can you defame someone in a private conversation, from a legal point of view?



