
> I do think they should quickly course correct at this point and accept the fact that they clearly owe something to the creators of content they are consuming.

Eventually these LLMs are going to be put in mechanical bodies with the ability to interact with the world and learn (update their weights) in realtime. Consider how absurd your perspective would be then, when it'd be illegal for this embodied LLM to read any copyrighted text, be it a book or a web page, without special permission from the copyright holder, while humans face no such restriction.




A human faces the same restriction if they provide commercial services on the internet, producing code that is a copy of copyrighted code.


This isn't true. If you hire a contractor and tell them "write from memory the copyrighted code X which you saw before," and they have such a good memory that they manage to write it verbatim, and you then take that code and use it in a way that breaches copyright, you're liable, not the person you paid to copy the code for you. They're only liable if they were under NDA for that code.


> they have such a good memory that they manage to write it verbatim

No, there is no clause in copyright law that says "unless someone remembered it all and copied it from their memory instead of directly from the original source." That would just be a different mechanism of copying.

Clean-room techniques are used so that if there is incidental replication of parts of code in the course of a reimplementation of existing software, it can be proven that the code was not copied from the source work.


And what professional developer would not be under NDA for the code he produces for a corporation?


The topic of this thread is LLMs reproducing _publicly available_ copyrighted content. Almost no developer would be under NDA for random copyrighted code found online.


> while humans face no such restriction.

I have no idea what on earth you are talking about. People and corporations are sued for copyright infringement all the time.

https://copyrightalliance.org/copyright-cases-2022/

Reading and consuming other people's content isn't illegal, but it also wouldn't be for a computer.

Reading and consuming content with the sole purpose of reproducing it verbatim is frowned upon and can get you sued, whether it's an LLM or a sweatshop in India.


>I have no idea what on earth you are talking about. People and corporations are sued for copyright infringement all the time.

They're sued for _producing_ content, not consuming it. If a human takes copyrighted output from an LLM and publishes it, they're absolutely liable if doing so violates copyright.

>Reading and consuming other people's content isn't illegal, but it also wouldn't be for a computer.

That is absolutely what people in this thread are suggesting should happen: that it should be illegal for OpenAI et al. to train models on publicly available content without first receiving permission from the authors.

>Reading and consuming content with the sole purpose of reproducing it verbatim is frowned upon and can get you sued, whether it's an LLM or a sweatshop in India.

That's irrelevant here because people training LLMs aren't feeding them copyrighted content for the sole purpose of reproducing it verbatim.


> That's irrelevant here because people training LLMs aren't feeding them copyrighted content for the sole purpose of reproducing it verbatim.

Disagree; it is completely relevant when discussing computers vs. people. The bar that has already been set is whether the technology has alternative uses.

LLMs don't have a purpose outside of regurgitating what they have ingested. CD burners, at least, could be claimed to be for backing up your own data.



