Australian defamation law is insanely bad. And a lot of Australians are very lawsuit happy because of it. So my knee-jerk response to any Australian filing a lawsuit is to roll my eyes. Particularly if they are doing it against an entity which does not exist in Australia.
Furthermore, US law has recognized how bad defamation law can be in other countries and therefore includes https://www.law.cornell.edu/uscode/text/28/4102. Anyone with a foreign judgment against someone in the USA has to prove to a US judge that the law under which the judgment was issued provides the same protections for free speech that the USA does. (The case that prompted it involved UK law, but it still applies here.) Since very few places do, Americans can generally ignore defamation suits from other countries. And this goes double for Australia.
BUT what he's upset about is reasonable. Search companies like Google have discovered that it is very important to modify search results in various countries based on court judgments, and OpenAI is going to find that the same is true for ChatGPT. ChatGPT will get things like this wrong, everyone knows that, but there needs to be a way to tell it, "Don't get THIS one wrong."
I just wish it wasn't an Australian who brought this kind of case first. Though in retrospect, I should have expected it.
> Search companies like Google have discovered that it is very important to modify search results in various countries based on court judgments, and OpenAI is going to find that the same is true for ChatGPT. ChatGPT will get things like this wrong, everyone knows that, but there needs to be a way to tell it, "Don't get THIS one wrong."
This is very insightful, thank you. I'm extremely curious how this will be achieved from a technical point of view. It's not clear to me that ChatGPT has the concept of "an American lawyer named X" (as opposed to just guessing the next token), so how will it be filtered out?
I can think of a few ways:
1. The ChatGPT web search plugin becomes standard for every prompt. If you ask a factual question, ChatGPT first looks up an answer with a search engine, then uses the results to craft an answer. This could also be implemented with a knowledge database built by scraping the web and updated occasionally, rather than performing a live web search every time. It also lets OpenAI shift the blame to whatever source the answer came from, rather than ChatGPT itself, if that source turns out to contain libelous content.
2. OpenAI will just add a disclaimer to every response (or at least every response to what looks like a factual question) saying that its answers to factual questions are unreliable.
3. If there is a blacklist of topics that require a disclaimer or cause the LLM to refuse to answer, it can generate an answer, check it against the blacklist using embeddings/semantic search and either re-generate an answer or generate a refusal before showing the answer to the user.
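Approach 3 can be sketched in a few lines. Everything here is hypothetical: `embed()` is a toy stand-in for a real sentence-embedding model, and the threshold is illustrative, not anything OpenAI actually uses.

```python
import math

def embed(text: str) -> list[float]:
    # Toy embedding: normalized character-frequency vector.
    # A real system would call a sentence-embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def screen_answer(answer: str, blacklist: list[str], threshold: float = 0.9):
    """Return the matched blacklist entry if the drafted answer is too
    similar to it, else None. On a match, the caller would regenerate
    the answer or emit a refusal instead of showing it to the user."""
    ans_vec = embed(answer)
    for entry in blacklist:
        if cosine(ans_vec, embed(entry)) >= threshold:
            return entry
    return None
```

The key design point is that the check runs on the *generated* answer, not the prompt, so rephrased questions that elicit the same forbidden claim still get caught.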
My best guess is that it will be a combination of 1 and 2. I have always maintained that LLMs are very unlikely to develop into AGI on their own, but are very likely to be a critical piece of an AGI. The most recent research into scaling laws (Chinchilla, Llama) finds significant improvements from scaling data size much farther than parameter size, so memorizing facts within the parameters will become less and less feasible. This is actually ideal, though, since you want your model parameters to encode language and reasoning patterns, not memorize facts. If it's memorizing facts you either need more data or better (i.e. deduplicated) data. I'm not an expert, though, and I'm too lazy for a research review, so please don't sue me for libel if my facts are out of date.
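To make the scaling-law point concrete: the Chinchilla paper's rough rule of thumb is about 20 training tokens per model parameter for compute-optimal training. The ratio below is an approximation from that paper, not an exact law.

```python
def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Back-of-the-envelope compute-optimal token count (~20 tokens/param)."""
    return n_params * tokens_per_param

# A 70B-parameter model would want on the order of 1.4 trillion tokens.
tokens = chinchilla_optimal_tokens(70e9)
```

Under this rule, data requirements grow linearly with parameter count, which is why memorizing a fixed set of facts inside the weights becomes a worse deal as models scale.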
I would suggest that they have a search database of things to avoid, plus a specialized ChatGPT that can find things in that database. Any statement the main model wants to make is passed to the specialized watchdog with the question, "Is there anything here that we can't say?" If the watchdog says yes, it creates a citation to the relevant rule. A tool then pulls up the rule as it ACTUALLY appears in the database, and ChatGPT compares it with the statement to decide whether it REALLY applies (without this step the watchdog could hallucinate rules that don't exist!), and then corrects the original output before anything is said to the user.
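The two-stage watchdog above can be sketched as follows. All names are hypothetical, and the "watchdog" is stubbed as a keyword check standing in for a specialized model call; the point is the structure, in which a flagged rule id is looked up in the real database before it is allowed to block output.

```python
# Hypothetical rule database, keyed by rule id.
RULES = {
    "rule-17": "Do not state that person X was convicted of any crime.",
}

def watchdog_flags(statement: str):
    """Stage 1: the specialized watchdog names a rule id it thinks applies.
    Stubbed as a keyword check; a real model could hallucinate an id."""
    if "convicted" in statement.lower():
        return "rule-17"
    return None

def rule_really_applies(statement: str, rule_text: str) -> bool:
    """Stage 2: re-check against the rule's ACTUAL text from the database,
    so a hallucinated rule can't suppress a harmless statement."""
    return "convicted" in rule_text.lower() and "convicted" in statement.lower()

def guarded_reply(statement: str) -> str:
    rule_id = watchdog_flags(statement)
    if rule_id is not None:
        rule_text = RULES.get(rule_id)  # hallucinated ids look up to None
        if rule_text is not None and rule_really_applies(statement, rule_text):
            return "I can't comment on that."
    return statement
```

The `RULES.get()` lookup is what makes stage 2 meaningful: a rule id the watchdog invented simply isn't in the database, so the original statement passes through.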
>Australian defamation law is insanely bad. And a lot of Australians are very lawsuit happy because of it.
Same in Germany and Austria. You need to be very careful what you say online about people, especially politicians, and what reviews you leave online to various businesses, or you can end up with a defamation lawsuit letter in your mailbox, even if what you wrote was the truth. Lots of lawsuit happy people here.
A small bike shop I took my bike to for a service completely messed up my shifter and refused to acknowledge it, so I left them a 1-star review on Google Maps and took my bike somewhere else. A month later I got a letter from their lawyer telling me that unless I can prove in court that they fucked up my shifter, I have to take down my 1-star review or else they'll sue me for damages due to lost business, since all their other reviews are 5 stars. Lol, makes sense how they got only 5-star reviews.
There's also a website here to find doctors, rate them, and leave them reviews, but when you try to leave one a review, you get a huge warning pop-up explaining what you're not allowed to write in the review, and that includes "mentioning a misdiagnosis". I was speechless. Like, what's the point then?
This goes on and on for almost all review-based websites, including job-review sites. Since the owner of the business can have negative reviews or comments he doesn't like removed under defamation laws, most reviews end up worthless.
Some local Green Party politician in Austria got called a "lousy traitor" and "corrupt bumpkin" on her Facebook page, and that politician took Facebook to court to have all such comments about her removed under defamation law, arguing that insults aren't free speech. She won, with Facebook having to remove the comments worldwide, not just on the Austrian Facebook page, leaving the question of how one country can censor parts of the global internet, as that sets a precedent for other countries to do the same.[1]
This is what I appreciate a lot about the US (and other countries): you can say and write whatever you want about anyone. Want to call your local politician a corrupt dickhead and tell him to go f*ck himself? Go ahead! Want to leave a 1-star review because the service swindled you? It's your 1st Amendment right! This is in no way possible here. I get the need for privacy and defamation laws, but when those laws are a powerful backdoor for bad actors to use for censorship and intimidation, for their own benefit, then maybe those laws should be defanged a bit to preserve free speech.
Just because a legal letter said it, doesn't mean that it is true.
I don't know German law. But I know that American lawyers send letters all the time threatening all sorts of things that would completely be laughed out of an actual court. It is generally easier to accept what the letter says than to fight it, so people just cave. As you apparently did. But if you know your rights and fight the letter, you'll generally win.
So the fact that lawyers like to make people's lives miserable with threats doesn't necessarily mean that the laws are actually bad. They may also be bad, but lawyers manage to make things miserable on their own.
That said, Germany has a long tradition of very strong privacy rights. It is (at least in theory) not whether or not you're saying something malicious, it is whether you can say it at all. This was a giant headache for Google Street View. You don't have to just blur out people, cars, and so on. You have to ask them whether they want their house blurred out as well - and lots said that they did!
> unless I can prove in court they fucked up my shifter, I have to take down my 1 star review or else they'll sue me for damages ...
Heh Heh Heh
So, how feasible would it be to change your one star review, so that it basically says "I'm giving them one star due to this threatening legal letter they sent me"... and include a scan of the entire letter as the attached / supporting image?
Doing that, you've ceased claiming they damaged anything. Instead the letter they provided is the whole reason. And it will give future reviewers a warning ahead of time.
It would be kind of surprising if they then tried to say "we never sent such a letter...", in which case you can put your original review back online. ;)
It was ordered to expose the identities of users in a lawsuit, so it put a big banner on the company's page saying, "This employer has taken legal action against reviewers, please exercise your best judgement when evaluating".
That is very surprising to me given the German system of incredibly detailed and basically graded (1.0-4.0 / A-F for Americans) employee reference statements (to say nothing about the coded comments about alcohol use, fraternization, etc that specific word usage implies)
Sure the American system is the other way around (say nothing because they could sue if we say more than dates they worked here), but is that never challenged for defamation?
The coded nature of German employer references is to make them lawsuit-proof. “But it said you ‘performed your duties to their satisfaction’ - what’s libelous about that?”
(“Performed duties to our satisfaction” = yeah, they showed up and generally attempted to do what they were told, but that’s about it)
Is there a motivating factor for providing a reference in Germany?
My experience in the US is not necessarily that people are worried they're going to be sued, or that they couldn't make an objective reference, but that the effort and risk are both non-zero -- and the benefit to the company is ~zero.
Employers are legally required to give you a reference letter (“Zeugnis”) when you leave, and also whenever you request one or are about to take a long leave (“Zwischenzeugnis”) - I received one before I left for my year of maternity leave. Your prospective employers absolutely expect to see at least your most recent one, and I had a bit of fun explaining to the boss who hired me that all you’ll ever get out of an American employer is dates of employment and job title.
If you work here, keep up with them as carefully as you do your employment contracts.
I have colleagues who habitually request a Zwischenzeugnis every 3-5 years, even if they’re not currently looking for a new job.
The motivating factor is that the employee simply has a right to a reference when leaving.
Sometimes an employee might ask for an interim report (for example when changing departments, etc.), but that's something that is provided on a voluntary basis.
I just researched it a bit more. So while you don't have an entitlement to a Zwischenzeugnis per se, you are entitled to one if you have a "good reason" to request it. According to some judgments, a "good reason" is, for example, a change of your superiors or wanting to apply for a job elsewhere.
In my career I've requested quite a few Zwischenzeugnisse and never been denied. Probably a place where you have to fight to get one is not a place where you want to be employed anyways.
Yet I've seen HR documents which directly map the code to the rating. I can't imagine this stuff holds up in court, given that I remember reading about it online years ago and anyone working in HR in the country will immediately confirm it.
I think the main difference is that the German system revolves around freedom of opinion and not freedom of speech. If you assert a negative fact about someone, you have to be able to prove it. You are however free to express your opinion about anyone and anything.