
The general argument (IANAL) is that it's Fair Use, in the same vein as Google Images or Internet Archive scraping and storing text/images. Especially since generated images are not 1:1 copies of their source inputs, it could be argued that each output is a unique derivative work. The current lawsuits against Stability AI are testing that, although I am skeptical they'll succeed (one of the lawsuits argues that Stable Diffusion is just "lossy compression", which is factually and technically wrong).

There is an irony, however, in that many of the AI art haters tend to draw fanart of IP they don't own. If Fair Use protections are weakened, their livelihoods would be hurt far more than those of AI artists.

The Copilot case/lawsuit IMO is stronger because the associated code output is a) provably verbatim and b) often carries explicit licensing, and therefore explicit intent about how it may be used.




>it could be argued that it's a unique derivative work

Creating a derivative work of a copyrighted image requires permission from the copyright holder (i.e., a license), which many of these services do not have. So the real question is whether AI-generated "art" counts as a derivative work of the inputs, and we just don't know yet.

>b) often has explicit licensing and therefore intent on its usage

It doesn't matter. In the absence of a license, the default is "you can't use this." It's not "do whatever you want with it." Licenses grant (limited) permission to use; without one you have no permission (except fair use, etc., which are very specifically defined).


"Creating a derivative work of a copyrighted image requires permission from the copyright holder"

That's why "fair use" is the key concept here. Under US copyright law "fair use" does not require a license. The argument is that AI generated imagery qualifies as "fair use" - that's what's about to be tested in the courts.

https://arstechnica.com/tech-policy/2023/04/stable-diffusion... is the best explanation I've seen of the legal situation as it stands.


If a person trained themselves on the same resources, then picked up a brush or a camera and created some stunning art in a similar vein, would we look at that as a derivative work? Very interesting discussion. Art of all forms is inspired by those who came before.

Inspired/trained… I think these could be seen as the same.


I don't think we should hold technology to the same standards as humans. I'm also allowed to memorize what someone said, but that doesn't mean I'm allowed to record someone without their knowledge (depending on the location).


Training a human and training a model may use the same verb but are very different.

If the person directly copied another work, that's a derivative work and requires a license. But if a person learned an abstract concept by studying art and later created art, it's not derivative.

Computers can't learn abstract concepts. What they can do is break down existing images and then numerically combine them to produce something else. The inputs are directly used in the outputs. It's literally derivative, whether or not the courts decide it's legally so.


> Computers can't learn abstract concepts

Goalposts can be moved on whether it has "truly learned" an abstract concept, but at the very least neural networks can work with concepts to the extent that you can ask for an image to be more "chaotic", "mysterious", "peaceful", "stylized", etc. and get meaningfully different results.

When a model like Stable Diffusion has 4.1GB of weights and was trained on 5 billion images, the primary impact of one particular training image may be very slightly adjusting what the model associates with "dramatic".
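A rough back-of-the-envelope calculation (taking those figures at face value) shows why: the weights work out to well under one byte per training image, nowhere near enough to store a copy of each input.

  4.1 GB / 5,000,000,000 images ≈ 0.8 bytes per image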

> If the person directly copied another work, that's a derivative work and requires a license

Not if it falls under Fair Use. Here's a fairly extreme example of just how much you can get away with while still (eventually) being ruled Fair Use: https://www.artnews.com/art-in-america/features/landmark-cop... - though I wouldn't recommend copying as much as Richard Prince did.

> The inputs are directly used in the outputs

Not "directly" - during generation, normal prompt to image models don't have access to existing images and cannot search the Internet.


> Computers can't learn abstract concepts

I would say that abstract concepts are the only things computers can learn at the moment, at least until they are successfully embodied.

> It's literally derivative, whether or not the courts decide it's legally so.

To be a derivative work you should be able to at least identify the work it is a derivative of. While SD and friends can indeed generate obviously copyright-infringing works (then again, so can Photoshop or a camera or even a paintbrush), for the vast majority of the output you can at best point in the general direction of an author or a style.


> Creating a derivative work of a copyrighted image requires permission from the copyright holder

It does not (in US law) if it falls within Fair Use, which is an exception to what would otherwise be the exclusive rights of copyright holders.


> Especially since the outputs of generated images are not 1:1 to their source inputs, so it could be argued that it’s a unique derivative work.

I think what you mean to say is that the argument is that both the models themselves and (in many cases) the output from the models, to the extent it might otherwise be a derivative work of one or more of the input images, are transformative uses. [0]

[0] https://www.nolo.com/legal-encyclopedia/fair-use-what-transf...



