Hacker News | whichfawkes's comments

Do you take a standard deduction?

It's clearly not enough to cover all of the expenses that are required to generate your "revenue", but it's a gesture in that direction.


Seconding the Onyx Boox Max Lumi here. I bought one back when they were much more expensive and I still think it's been worth it.


I did the same thing actually. There must be dozens of us! I was continually impressed by how easily and reliably it worked.


It's totally solved, first class is just too expensive for most people. The future is not evenly distributed.


Why not bunk beds like in a sleeper car?

They don’t take up more space, and let you have a good night’s rest.


The fact is it doesn't matter what people "deserve".

People who are willing to forsake some degree of convenience can be granted greater privacy by simply informing them.

People who are seeking convenience will always be giving up something else. In this domain, they're often giving up privacy.

A lot of people these days are essentially forced to seek convenience. They don't have the time or money to spare to do otherwise.


It seems weird to sue an AI company because their tool "can recite [copyrighted]" content verbatim.

If I paid a human to recite the whole front page of the New York Times to me, they could probably do it. There's nothing infringing about that. However, if I videotape them reciting the front page of the New York Times and start selling that video, then I'd be infringing on the copyright.

The guy I paid to tell me what the NYT was saying didn't do anything wrong. Whether there's any copyright infringement would depend on what I did with the output.


In your analogy, AI would be the videotape, not the person, because OpenAI is selling access to it.


I'm not so sure about that. It seems to me that they're selling me a service. Just like I might pay for a subscription to Adobe Photoshop or pay per-render fees to a rendering farm.

I could use Photoshop to reproduce a copyrighted work, and in some circumstances (e.g. personal use) that'd be fine. Or I could use Photoshop to reproduce a copyrighted work and try to sell it for profit, which would clearly not be fine. Nobody is saying that Adobe has to recognize whether the pixels I'm editing constitute a copyrighted work.


The difference here is that Adobe is selling a set of tools that can recreate copyrighted work from the ground up. The Mona Lisa being previously incorporated into their tools is not a foundational necessity for their paintbrush to brush digital paint.

The same is not true for AI models, which require copyrighted work to be contained within them in order for the tool to function.


While I 100% agree, there is another angle to consider: ChatGPT replaces reading the NYT. It competes with it in the delivery of information.

To add to your point though, a sufficiently advanced AI trained on licensed data could reproduce copyrighted content from a prompt alone. It's the next step, where someone does something with the output, that would cause infringement.


Human brains are still the main legal agents in play. LLMs are just computer programs used by humans.

Suppose I research for a book that I'm writing - it doesn't matter whether I type it on a Mac, PC, or typewriter. It doesn't matter if I use the internet or the library. It doesn't matter if I use an AI powered voice-to-text keyboard or an AI assistant.

If I release a book that has a chapter which was blatantly copied from another book, I might be sued under copyright law. That doesn't mean that we should lock me out of the library, or prevent my tools from working there.


I see two separate issues. The one you describe is maybe slightly more clear-cut: if a person uses an AI trained on copyrighted works as a tool to create and publish their own works, they are responsible if the resulting works infringe.

The other question, which I think is more topical to this lawsuit, is whether the company that trains and publishes the model itself is infringing, given they're making available something that is able to reproduce near-verbatim copyrighted works, even if they themselves have not directly asked the model to reproduce them.

I certainly don't have the answers, but I also don't think that simplistic arguments that the cat is already out of the bag or that AIs are analogous to humans learning from books are especially helpful, so I think it's valid and useful for these kinds of questions to be given careful legal consideration.


> Human brains are still the main legal agents in play.

No, they're not. This is The New York Times (a corporation) vs OpenAI and Microsoft (two more corporations).


Aren't corporations considered 'persons' in the US?


Trying to prohibit this usage of information would not help prevent centralization of power and profit.

All it would do is momentarily slow AI progress (which is fine), and allow OpenAI et al to pull the ladder up behind them (which fuels centralization of power and profit).

By what mechanism do you think your desired outcome would prevent centralization of profit to the players who are already the largest?


I'm not saying copyright is without problems (e.g. I see no reason its protection should last as long as it does), but I think the opposite, where the incentive to create new content (especially in the case of news reporting) is completely killed because someone else gets to vacuum up all the profits, is worse. I mean, existing copyright does protect tons of independent writers, artists, etc. and prevents all of the profits from their output from being "sucked up" by a few entities.

More critically, while fair use decisions are famously a judgement call, I think OpenAI will lose this based on the "effect of the fair use on the potential market" of the original content test. From https://fairuse.stanford.edu/overview/fair-use/four-factors/ :

> Another important fair use factor is whether your use deprives the copyright owner of income or undermines a new or potential market for the copyrighted work. Depriving a copyright owner of income is very likely to trigger a lawsuit. This is true even if you are not competing directly with the original work.

> For example, in one case an artist used a copyrighted photograph without permission as the basis for wood sculptures, copying all elements of the photo. The artist earned several hundred thousand dollars selling the sculptures. When the photographer sued, the artist claimed his sculptures were a fair use because the photographer would never have considered making sculptures. The court disagreed, stating that it did not matter whether the photographer had considered making sculptures; what mattered was that a potential market for sculptures of the photograph existed. (Rogers v. Koons, 960 F.2d 301 (2d Cir. 1992).)

and especially

> “The economic effect of a parody with which we are concerned is not its potential to destroy or diminish the market for the original—any bad review can have that effect—but whether it fulfills the demand for the original.” (Fisher v. Dees, 794 F.2d 432 (9th Cir. 1986).)

The "whether it fulfills the demand for the original" test is clearly where the NYTimes has its best argument.


> Trying to prohibit this usage of information

It's not trying to prohibit. If they want to use copyrighted material, they should have to pay for it like anyone else would.

> prevent centralization of profit to the players who are already the largest?

Having to destroy the infringing models altogether on top of retroactively compensating all infringed rightsholders would probably take the incumbents down a few pegs and level the playing field somewhat, albeit temporarily.

They'd have to learn how to run their business legally alongside everyone else, while saddled with dealing with an appropriately existential monetary debt.


Why do you expect an AI to cite its sources? Humans are allowed to use and profit from knowledge they've learned from any and all sources without having to mention or even remember those sources.

Yes, we all agree that it's better if they do remember and mention their sources, but we don't sue them for failing to do so.


Quite simply, if you're stating things authoritatively, then you should have a source.


Do you have a source for this claim?


If society were the customer, society ought to be paying.


Right, which is why we have publicly funded education systems - in much of the world, and to various extents.

