
Any lawsuit makes every claim it can and demands every sort of relief it might plausibly get. That's not to say that's how it should be (it can have awful results), just that it's what to expect (and hope the courts only consider the reasonable claim - "stop freely sharing our data" - and reject the ridiculous, anti-fair-use claim - "you can't even store our data").

The thing about your claim, "just learn to recognize and punish plagiarism via RLHF," is that we've had an endless series of prompt exploits as well as unprompted leakage, and these demonstrate that an LLM just doesn't have a fixed border between its training data and its output. That makes it basically impossible for OpenAI to say "we can logically guarantee ChatGPT won't serve your data freely to anyone."
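To make the point concrete, here's a rough sketch of the kind of verbatim-overlap penalty an RLHF reward signal could use to "punish plagiarism." The corpus, n-gram size, and threshold are my own illustrative assumptions, not anything OpenAI has described - and the limitation is visible right in the toy example: it only catches exact spans, so paraphrases, reordering, or exploit-induced leakage of data outside the reference set slip straight through.

  # Hypothetical sketch of a naive "plagiarism penalty" for an RLHF reward.
  # All names and numbers here are assumptions for illustration.

  def ngrams(text: str, n: int = 8) -> set:
      """Set of word n-grams in the text."""
      tokens = text.split()
      return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

  def verbatim_overlap(output: str, corpus_docs: list, n: int = 8) -> float:
      """Fraction of the output's n-grams that appear verbatim in the corpus."""
      out = ngrams(output, n)
      if not out:
          return 0.0
      corpus = set()
      for doc in corpus_docs:
          corpus |= ngrams(doc, n)
      return len(out & corpus) / len(out)

  if __name__ == "__main__":
      docs = ["the quick brown fox jumps over the lazy dog every single morning"]
      copied = "the quick brown fox jumps over the lazy dog every single morning"
      paraphrase = "each morning a fast brown fox leaps over a sleepy dog"
      print(verbatim_overlap(copied, docs))      # 1.0 - exact copy, gets penalized
      print(verbatim_overlap(paraphrase, docs))  # 0.0 - same content, sails past the check

A filter like this (or a learned version of it) can lower the rate of verbatim regurgitation, but it can't turn "we usually catch it" into "we can logically guarantee it never happens," which is exactly the gap the lawsuit will press on.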


