
Libgen turns into a problem when a company develops generative AI with it, with the money flowing either to GPU manufacturers or back to the company itself through paid services (see OpenAI)



What are we actually worried about happening?

Are AI-written books getting published?

If they start out-competing humans, is that bad? According to most naysayers, they can't do anything original.

Are people asking the AI for books, and then hoping it will spit out a human-written book word for word?


> Are AI-written books getting published?

Yes, online bookstores are full of them:

https://www.nytimes.com/2023/08/05/travel/amazon-guidebooks-...

The issue is there's an asymmetry between buyer and seller for books, because a buyer doesn't know the contents until they buy the book. Reviews can help, but not if the reviews are fake or AI-generated. Such books are profitable even if only a few people buy them, as the marginal cost of creating one is close to zero.


This really has fuck-all to do with copyright though, correct?

If you can't tell what the content is like before you read it, it could have been written by a monkey.


This is starting to get pretty circular. The AI was trained on copyrighted data, so we can hypothesize that it would not exist, or would exist in a diminished state, without the copyright infringement. Now the AI is being used to flood online bookstores with cheaply produced books, many of which are bad, but which still compete against human authors.


the problem with the circularity of the argument is that the existence of an actual problem is being taken for granted

it's not clear that detriments actually exist, and the benefits are clear


The benefits are not clear: why should an "author" who doesn't want to bother writing a book of their own get to steal the words of people who aren't lazy slackers?


It's as much stealing as piracy is stealing, i.e. none at all. If you disagree, you and I (along with probably many others in this thread) have a fundamental axiomatic incompatibility that no amount of discussion can resolve.


It is not theft in the property sense, but it is theft of labor.

If a company interviewed me, had me solve a problem, didn't hire me or pay me in any way and then used the code I wrote in their production software, that would be theft.

That is the equivalent of what authors who claim they wrote AI books are doing. That they've fooled themselves into thinking the computer "wrote" the book, erasing all the humans whose labor they've appropriated, in my opinion makes it worse, not better. They are lying to the reader and to themselves, and both are bad.


Stealing is not the right word perhaps, but it is bad, and this should be obvious. Because if you take the limit of these arguments as they approach infinity, it all falls apart.

For piracy, take Switch games. Okay, pirating Mario isn't stealing. Suppose everyone pirates Mario. Then there's no reason to buy Mario. Then Nintendo files for bankruptcy. Then some people go hungry, maybe a few die. Then you don't have a Switch anymore. Then there are no more Mario games left to pirate.

If something is OK if only very, very few people do it, then it's probably not good at all. Everyone recycling? Good! Everyone reducing their beef consumption? Good! ... everyone pirating...? Society collapses and we all die, and I'm only being a tad hyperbolic.

In a vacuum, making an AI book is whatever. In the context of humanity and of pushing this to its limits, we can't even begin to comprehend the consequences. I'm talking crimes against humanity beyond your wildest dreams. If you don't know what I'm talking about, you haven't thought long enough and creatively enough.


> Because if you take the limit of these arguments as they approach infinity, it all falls apart.

Not everyone is a Kantian; the moral philosophy you are invoking is the categorical imperative. See [0] for a list of criticisms of that philosophy.

> In a vacuum, making an AI book is whatever. In the context of humanity and of pushing this to its limits, we can't even begin to comprehend the consequences. I'm talking crimes against humanity beyond your wildest dreams. If you don't know what I'm talking about, you haven't thought long enough and creatively enough.

Not really a valid argument; again, it's circular reasoning with a lot of empty claims and no actual support. Why exactly is it bad? Just saying "you haven't thought long enough and creatively enough" does not cut it in any serious discussion. The burden of substantiating your own claim is on you, not the reader, because (to take your own Kantian argument) anyone you're debating could simply terminate the conversation by accusing you of not thinking about the problem deeply enough, meaning that no one actually learns anything at all when everyone shifts the burden of proof onto everyone else.

[0] https://en.wikipedia.org/wiki/Categorical_imperative#Critici...


> Not really a valid argument

It is, because the passage you quoted is in reference to what I said above it.

I explained real consequences of pirating. Companies have gone under, individuals have been driven to suicide. This HAS happened.

It's logically consistent that if we do that but increase the scale, then the harm increases proportionally.

You might disagree. Personally, I don't understand how. Really, I don't. My fundamental understanding of humanity is that each innovation will be pushed to its limits: to make the most money, to do it as fast as possible, and in turn, if it is harmful, to harm the most people. It is not in the nature of humanity to do something halfway when there's no friction to doing more.

This reality of humanity permeates our culture and societies. That's why the US government has checks and balances. Could the US government remain a democracy without them? Of course. We may have an infinite stream of benevolent leaders.

From my perspective, that is naive. And, certainly, the founding fathers agreed with me. That is one example - but look around you, and you will see this mentality permeates everything we do as a species.


> Stealing is not the right word perhaps, but it is bad, and this should be obvious.

Many people say the things they don't like are "obviously" bad. If you can't say why, that's almost always because it actually isn't.

Have a look at almost any human rights push for examples.

.

> For piracy, take Switch games.

It's a bad metaphor.

With piracy, someone is taking a thing that was on the market for money and using it without paying for it. They are helping themselves to something that belongs to other people. The creator loses potential income.

Here, nobody is actually doing that. The correct metaphor is a library: a creator uses existing content to learn the craft, then creates and sells novel things. The original creators aren't out any money at all.

Every time this has gone to court, the courts have calmly explained that for this to be theft, first something has to get stolen.

.

> If something is OK if only very, very few people do it

This is okay no matter how many people do it.

The reason that people feel the need to set up these complex explanatory metaphors based on "well under these circumstances" is that they can't give a straight answer what's bad here. Just talk about who actually gets harmed, in clear unambiguous detail.

Watch how easy it is with real crimes.

Murder is bad because someone dies without wanting to.

Burglary is bad because objects someone owns are taken, because someone loses the safety of their home, and because there's a risk of violence.

Fraud is bad because someone gets cheated after being lied to.

Then you try that here. AI is bad because some rich people I don't like got a bunch of content together and trained a piece of software to make new content and even though nobody is having anything taken away from them it's theft, and even though nobody's IP is being abused it's copyright infringement, and even though nobody's losing any money or opportunities this is bad somehow and that should be obvious, and ignore the 60 million people who can now be artists because I saw this guy on twitter who yelled a lot

Like. Be serious

This has been through international courts almost 200 times at this point. It has been through American courts more than 70 times, but we're also bound by all the rest thanks to the Berne Convention.

Every. Single. Court. Case. Has. Said. This. Is. Fine. In. Every. Single. Country.

Zero exceptions. On the entire planet for five years and counting, every single court has said "well no, this is explicitly fine."

Matthew Butterick, the lawyer that got a bunch of Hollywood people led by Sarah Silverman to try to sue over this? The judge didn't just throw out his lawsuit. He threatened to end Butterick's career for lying to the celebrities.

That's the position you're taking right now.

We've had these laws in place since the 1700s, thanks to collage. They've been firmly ratified in the United States for 150 years thanks to libraries.

.

> Everyone recycling? Good! Everyone reducing their beef consumption? Good! ... everyone pirating...?

This is just silly. "Recycling is good and eating other things is good, but let's try piracy, and by the way, I'm just sort of asserting this, there's nothing to support any of this."

For the record, the courts have been clear: there is no piracy occurring here. Piracy would be if Meta gave you the book collection.

.

> In the context of humanity and of pushing this to its limits, we can't even begin to comprehend the consequences.

That's nice. This same non-statement is used to push back against medicine, gender theory, nuclear power, yadda yadda.

The human race is not going to stop doing things because you choose to declare it incomprehensible.

.

> I'm talking crimes against humanity beyond your wildest dreams.

Yeah, we're actually discussing Midjourney, here.

You can't put a description to any of these crimes against humanity. This is just melodrama.

.

> If you don't know what I'm talking about,

I don't, and neither do you.

"I'm talking really big stuff! If you don't know what it is, you didn't think hard enough."

Yeah, sure. Can you give even one credible example of Midjourney committing, and I quote, "crimes against humanity beyond your wildest dreams?"

Like. You're seriously trying to say that a picture making robot is about to get dragged in front of the Hague?

Sometimes I wonder if anti-AI people even realize how silly they sound to others


> The creator loses potential income

Okay. AI makes books 1 million times faster, let's say. Arbitrary; pick any number.

If I, a consumer, want a book and pick from that flooded catalog, I am therefore 1 million times more likely to pick an AI book. Finding a "real" book takes insurmountable effort. This is the "needle in a haystack" I mentioned earlier.

The result is obvious: creators lose potential income. And yes, it is actually obvious. If it isn't, reread it a few times.

To be perfectly and abundantly clear, because I think you're purposefully misunderstanding me: I know AI is not piracy. I know that. It's, like, the second sentence I wrote. I said those words explicitly.

I am arguing that while it is not piracy, the harm it creates is identical in form to piracy. In your words, "creators lose potential income." If that is the standard, you must agree with me.

> how silly they sound to others

I'm not silly, you're just naive and fundamentally misunderstand how our societies work.

Capitalism is founded on one very big assumption. It is the Jenga block keeping everything together.

Everyone must work. You don't work, you die. Nobody works, everyone dies.

Up until now, this assumption has been sound. The "edge cases", like children and disabled people, we've been able to bandaid with money we pool from everyone - what you know as taxes.

But consider what happens if this fundamental assumption no longer holds true. Products need consumers as much as consumers need products; it's a circular relationship. To make things you need money, to make money you must sell things, to buy things you must have money, and to have money you must make things. If you outsource the making of things, there's no money, period. For anyone. Everyone dies. Or, more likely, the country collapses into a socialist revolution. Depending on the country, the level of bloodiness varies.

This has happened in the past already, with much more primitive technologies. FDR, in his capitalist genius, very narrowly prevented the US from falling into a socialist revolution with some of the aforementioned bandaid solutions, what we call "The New Deal." The scale we're talking about now is much larger, and the consequences more drastic. I am not confident another New Deal can be constructed, let alone implemented. And I'm not confident it would prevent the death spiral. Again, we cut it very, very close last time.


shop at a real bookstore, they don't have this problem.


> Are AI-written books getting published?

actually i think they are. lots of e-book slop

> If they start out-competing humans, is that bad?

Not inherently, but it depends on what you mean by out-competing. Social media outcompeted books and now everyone's addicted and mental illness is more rampant than ever. IMO, a net negative for society. AI books may very well win out through sheer spam but is that good for us?


Nobody has responded to me with anything about how authors are harmed, so I don't really get who we're protecting here.

It feels more like we just want to punish people, particularly rich people, particularly if they get away with stuff we're afraid to try.


> Nobody has responded to me with anything about how authors are harmed

i imagine if books can be published to some e-book provider through an API to extract a few dollars per book generated (multiplied by hundreds), then eventually it'll be borderline impossible to discover an actual author's book. breaking through will be even harder for newbie writers because of all the noise. it'll be up to providers like Amazon to limit it, but then we're reliant on the benevolence of a corporation, and most act in self-interest. if that means AI slop pervading every corner of the e-book market, then that's what we'll have.

kind of reminds me of solana memecoins and how hundreds are generated every day because launching one is a simple script. memecoins/slop have certainly lowered trust in crypto. can definitely draw some parallels here.


> Nobody has responded to me with anything about how authors are harmed

The same way good, law-abiding folk are harmed when heroin is introduced to the community. Those people then won't be able to lend you a cup of sugar, and may well start causing problems.

AI books take off and are easy to digest, and before long your user base is quite literally too stupid to buy and read your book even if they wanted to.

And, for the record, it's trivial to "outcompete" books or anything else. You just cheat. For AI, that means making 1000 books that lie for every one real book. Can you find a needle in a haystack? You can cheat by making things addictive, by overwhelming with supply, by straight up lying, by forcing people to use it... there are really a lot of ways to "outcompete".

> It feels more like we just want to punish people, particularly rich people, particularly if they get away with stuff we're afraid to try.

If by "afraid to try" you mean "know to be morally reprehensible" and if by "punish people" you mean "punish people (who do things that we know to be morally reprehensible)", then sure.

But... you might just be describing the backbone of human society since, I don't know, ever? Hm, maybe there's a reason we have that perspective. No, it must just be silly :P


> know to be morally reprehensible

In your opinion, not to everyone. There has been no actual argument as to why it's supposedly "morally reprehensible."


I just explained how it's morally reprehensible. The argument is right there, above the quote you chose to quote. Neat trick, but I'm sorry, a retort that does not make.


You didn't explain anything about why it is so, you just said it is, hence why I said it's your opinion. If you can't explain why, in more concrete terms, then there is no reason to believe your argument.


I just explained how AI books are able to cheat - they make more, faster, cheaper, and win based not on quality, never on quality, but rather by overwhelming. Such a strategy is morally reprehensible. It's like selling water by putting extra salt in everything.

Consumers are limited by humanity. We are all just meat sacks at the end of the day. We cannot, and will not, sift through 1 billion books to find the one singular one written by a person. We will die before then. But, even on a smaller scale - we have other problems. We have to work, we have family. Consumers cannot dedicate perfect intelligence to every single decision. This, by the way, is why free market analogies fall apart in practice, and why consumers buy what is essentially rat poison and ingest it when healthier, cheaper options are available. We are flawed by our blood.

We can run a business banking on the stupidity of consumers, sure. We can use deceit, yes. To me, this is morally reprehensible. You may disagree, but I expect an argument as to why.


> I just explained how AI books are able to cheat - they make more, faster, cheaper, and win based not on quality, never on quality, but rather by overwhelming. Such a strategy is morally reprehensible.

Okay. I fundamentally disagree with your premises, your analogies to water and banking (and even your other comment about piracy [0]: I have not seen any evidence of piracy leading directly to "suicides," as you say, and it has instead actually benefited many companies [1]), and therefore your conclusions. So I don't think we can have a productive conversation without me spending a lot of time explaining why I don't equate AI production with morality at all, and why I don't see AI writing billions of books as having anything to do with morals.

That is why I said it is your opinion, versus mine which is different. Therefore I will save both our time by not spending more of it on this discussion.

[0] https://news.ycombinator.com/item?id=42971446#43054300

[1] https://www.wfyi.org/news/articles/research-finds-digital-pi...


You're of course allowed to disagree, but past a certain point you're yelling at clouds and people might think you're insane.

It's very simple logic, and it doesn't require your understanding to be true. Piracy is good for companies? Really? That's... your legitimate position?

If nobody is paying for anything how does a company operate? That's not a rhetorical question. Is it fairy dust? Perhaps magical pixies keep the lights running?

If you don't have explanations for even the simplest of problems with your position, your position isn't worth listening to.


Again, you're a Kantian and I'm not. Your arguments do not sway those who aren't, as I said, they are fundamentally different moral philosophies. If you cannot produce even the evidence of harm as you previously stated (please, link me suicide news reports directly caused by piracy, as you claimed) then "your position isn't worth listening to" either.


Does it make a difference? What I'm saying is plainly true and undeniable. I'll break it down, perhaps a bit slower this time so you can keep up.

You must agree companies require money to operate. No money, no company. You must also agree that piracy OR any action which takes money away from a company results in less money. In addition, you must agree every individual will take whichever action costs them the least amount of money.

Okay. Do you see where I'm going? Following these very simple rules, the result is that there is no money left for companies, and they go under.

Whether that's bad or not is, technically, debatable. Whether that's how it works or not, isn't.

I grow tired of having to explain very simple logic to bumbling idiots. Of course you're not a bumbling idiot. Rather, you're someone with a belief and a delusion. Meaning, you will simply ignore any and all reality to maintain your belief, even if, right before your very eyes, it is refuted. I don't know why people act this way. Maybe there's some medication that can help with that.

People might say I'm a prophet, maybe some kind of psychic. Really, I'm just a guy with, like, a quarter of a brain. We can often "see into the future" if we just rub some brain cells and put two and two together.

Until you can find a way around these rules, perhaps some alternative economic system which has not been invented, there is nothing for you to refute. Not that you've been trying at all; your entire "argument" has been "erm, I disagree." Which, by the way, is not an argument. It's more of a statement, and one which is embarrassing to say out loud when you don't have anything to back it up.

And, to be clear, this is well past the land of morality. I'm operating in a much simpler framework here. Even if you're under the belief everyone is perfect, or some people are perfect and some aren't, or whatever other moral beliefs - that doesn't change the rules and therefore doesn't change the result.


Logic can seem very consistent in a vacuum, but again, because you can't find a single statistic to support your claims about piracy (while I already cited some evidence for my side), you cling to what you think is true, not what is empirically studied to be true. Your bloviating across as many paragraphs doesn't really mean anything, so unless you can cite something meaningful, I'm done with this nearly two-week-old conversation.


I think the concern goes to the point of copyright to begin with, which is to incentivize people to create things. Will the inclusion of copyrighted works in LLM training (further) erode that incentive? Maybe, and I think that's a shame if so. But I also don't really think it's the primary threat to the incentive structure in publishing.


> the point of copyright to begin with, which is to incentivize people to create things

Is it?

(I don't agree)


Yes, it is. It's not actually an opinion thing. It's a "what did the people who came up with the idea of copyright think it was for?" thing.

I haven't read the primary source material, so you could teach me something here, but my understanding is that the idea was to incentivize creators.


That is not actually the goal of it.

Copyright was invented by publishers (the printing guild) to ensure that the capitalists who own the printing presses could profit from artificial monopolies. It decreases the works produced, on purpose, in order to subsidize publishing.

If society decides we no longer want to subsidize publishers with artificial monopolies, we should start with legalizing human creativity. Instead we're letting computers break the law with mediocre output while continuing to keep humans from doing the same thing.

LLMs are serving as intellectual property laundering machines, funneling all the value of human creativity to a couple of capitalists. This infringement of intellectual property is just the more pure manifestation of copyright, keeping any of us from benefitting from our labor.


Interesting! I'd love a citation on this...


People have created for millennia before the modern institution of copyright, so I'm not sure how that's a cogent argument.


Yeah it's an interesting point, but it was also hard to physically copy things for all of those millennia.


i wrote a book and copyright was not once on my mind. having created something is the incentive to create for most artists


I don't think we can infer the motives of most artists from your personal motives.


fine, because that means your claim that artists create for the copyright incentive is also false


> What are we actually worried about happening?

Few companies can amass such quantities of knowledge and leverage it all for their own, very private profit. This is unprecedented centralization of power for a very select few. Do we actually want that? If not, why not block this until we're sure it's a net positive for most people?


Meta open-sourced it, my guy


Because they expect not to have to open-source future models. It's easy to open things up as long as doing so strengthens your position and prevents competition from emerging.

Ask Google about Android and what they now choose to release as part of AOSP vs Play Services.


…why? Will people buy fewer books because we have intuitive algorithms trained on old books?

Personally, I strongly believe that the aesthetic skills of humanity are one of our most advanced faculties — we are nowhere close to replacing them with fully-automated output, AGI or no.


old books? i can imagine the shitty, hallucination-prone generative AI we would have if the training set were restricted to public domain stuff...

i think when chatGPT was around version 2 or 3, i extracted almost 2 pages (without any alteration from the original) by asking questions about the author of this book here: https://www.amazon.com/Loneliness-Human-Nature-Social-Connec...

now it's up to you to think this is okay... but i bet you are no author


I find this such a strange remark on this front.

You got less than 1% of a book... from an author who has passed away... who wrote on a research topic that was funded by an institution that takes in hundreds of millions of dollars in federal grants each year...

I'm not an author (although I do generate almost exclusively IP for a living), and I think this is about as weak a form of this argument as you can possibly make.

So right back at ya... who was hurt in your example?


I think the key is to think through the incentives for future authors.

As a thought experiment, say the idea someday becomes mainstream that there is no reason to read any book or research publication, because you can just ask an AI to describe and quote at length from the contents of anything you might want to read. In such a future, I think it's reasonable to predict that there would be less incentive to publish, and thus fewer people publishing things.

In that case, I would argue the "hurt" is primarily to society as a whole, and also to people who might have otherwise enjoyed a career in writing.

Having said that, I don't think we're particularly close to living in that future. For one thing I'd say that the ability to receive compensation from holding a copyright doesn't seem to be the most important incentive for people to create things (written or otherwise), though it is for some people. But mostly, I just don't think this idea of chatting with an AI instead of reading things is very mainstream, maybe at least in part because it isn't very easy to get them to quote at length. What I don't know is whether this is likely to change or how quickly.


  there is no reason to read any book or research publication because you can just ask an AI to describe and quote at length from the contents of anything you might want to read

I think this is the fundamental misunderstanding at the heart of a lot of the anger over this, beyond the basic "corporations in general are out of control and living authors should earn a fair wage" points that existed before this.

You summarize well how we aren't there yet, but I'd say the answer to your final implied question is "not likely to change at all". Even when my fellow traitors-to-humanity are done with our cognitive AGI systems that employ intuitive algorithms in symphony with deliberative symbolic ones, at the end of the day, information theory holds for them just as much as it does for us. LLMs are not built to memorize knowledge, they're built to intuitively transform text -- the only way to get verbatim copies of "anything you might want to read" is fundamentally to store a copy of it. Full stop, end of story, will never not be true.
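(Rough, back-of-the-envelope arithmetic to make that concrete, using ballpark figures from recent open models: a ~70B-parameter model trained on ~15T tokens sees on the order of 200 tokens per parameter, and the training text runs to tens of terabytes against something like a hundred gigabytes of weights. There simply isn't room to store the corpus verbatim.)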

In that light, such a future seems as easy to avoid today as it was 5 years ago: not trivial, but well within the bounds of our legal and social systems. If someone makes a bot with copies of recent literature, and the authors wrote that lit under a social contract that promised them royalties, then the obvious move is to stop them.

Until then, as you say: only extremists and laymen who don't know better are using LLMs to replace published literature altogether. Everyone else knows that the UX isn't there, and the chance of confident error is way too high.


that was just a metaphor; you can ask your AI what it is, or use way less energy and just use Wikipedia's search engine... or do you think OpenAI first evaluates whether the author is an independent developer &/or has died &/or was funded by a public university before adding the content to the training database? /s

and one thing is publishing a paper with jargon for academics; another is simplifying the results for the masses. there's a huge difference between finishing a paper and finishing a book


It isn't that someone was hurt. We have one private entity gaining power by centralizing knowledge (which it never contributed to) and making people pay for a regurgitation of that distilled knowledge, for profit.

Few entities can do that (I can't).

Most people are forced to work for companies that sell their work to the highest bidder (which are the very entities mentioned above), or that ask them to use AI (under conditions that make such work accessible to the AI entities).

It's obviously a vicious circle if people can't stop their work from being ingested and repackaged by a few AI giants.


Are you talking about Meta? They released the model. It's free to use.


Then you should be in support of OSS models over private entity ones like OpenAI's.


Like supporting the Android Open Source Project… until Google decides to move the critical parts into Google Play Services? I run GrapheneOS (love it), but almost no banks will allow non-Google-sponsored Android ROMs and variants to do NFC transactions, because… AOSP is designed to miss what Google actually needs.

Likewise with ML Kit, which is loaded by Play Services and makes Android apps fail in many cases.

And I'm not talking about biases introduced by private entities that open source their models but pursue their own goals (e.g geopolitical).

As long as AI is designed and led by huge private entities, you'll have a hard time benefiting from it without supporting those entities' very own goals.


Something is better than nothing, better to have AOSP than to have a fully closed-source OS like iOS.


The answer is to censor the model output, not the training input. A dumb filter using 20-year-old technology can easily stop LLMs from emitting verbatim copyrighted output.
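
As a minimal sketch of what I mean (hedged: the shingle size and overlap threshold are arbitrary guesses, and a real deployment would need a proper corpus index rather than an in-memory set):

  # Sketch: index word 8-grams ("shingles") from the copyrighted
  # corpus, then flag any model output whose shingles overlap the
  # index too heavily. Thresholds here are illustrative, not tuned.
  import re

  SHINGLE = 8

  def shingles(text):
      words = re.findall(r"[a-z']+", text.lower())
      return {" ".join(words[i:i + SHINGLE])
              for i in range(len(words) - SHINGLE + 1)}

  def build_index(corpus_texts):
      index = set()
      for text in corpus_texts:
          index |= shingles(text)
      return index

  def looks_verbatim(output, index, threshold=0.2):
      out = shingles(output)
      return bool(out) and len(out & index) / len(out) > threshold

Exact-match shingling like this is decades-old near-duplicate detection; the open questions are the size of the index at library scale and what happens under light paraphrase.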


I know that this seems likely from a theoretical perspective (in other words, I would way underestimate it at the sprint planning meeting!), but

A) checking each output against a regex representing a hundred years of literature would be expensive AF no matter how streamlined you make it, and

B) latent space allows for small deviations that would still get you in trouble but are very hard to catch without a truly latent wrapper (i.e. another LLM call). A good visual example of this is the coverage early on in the Disney v. ChatGPT lawsuit, and a toy illustration follows the links:

[1] IEEE: https://spectrum.ieee.org/midjourney-copyright

[2] reliable ol' Gary Marcus: https://garymarcus.substack.com/p/things-are-about-to-get-a-...
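
To make B) concrete, here's a toy illustration (the sentences and the 5-gram size are arbitrary assumptions) of how a single synonym swap guts exact n-gram overlap:

  # One synonym swap breaks every 5-gram spanning the changed word,
  # so exact-match filters lose half the signal from a single edit.
  original = "it was the best of times it was the worst of times"
  tweaked = "it was the finest of times it was the worst of times"

  def grams(s, n=5):
      w = s.split()
      return {" ".join(w[i:i + n]) for i in range(len(w) - n + 1)}

  a, b = grams(original), grams(tweaked)
  print(f"surviving 5-gram overlap: {len(a & b) / len(a):.0%}")
  # -> surviving 5-gram overlap: 50%

Scatter a swap every few words and the overlap drops to zero, which is why catching near-copies ends up needing fuzzier machinery (MinHash, embeddings, or, as above, another LLM call).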


What if the model simply substitutes synonyms here and there without changing the spirit of the material? (This might not work for poetry, obviously.) It is not such a simple matter.


It's pretty simple, you are absolutely allowed to do that, and it's been done forever.

Imagine having the copyright claim to "Person's family member is killed so they go and get revenge".


So I can duplicate a book, change a word or two, and sell it? That does not sound right.



