
You misunderstand it: you can use AI or whatever as a tool, but the patent must be applied for under your name, with you as the creator.


No, the ruling says more than that the patent must be filed in the name of a human. It also says 'a real person must have made a "significant contribution"'.


> No, the ruling says more than that the patent must be filed in the name of a human. It also says 'a real person must have made a "significant contribution"'.

This is so interesting... it sounds to me like operators could have AIs "inventing" things and publicly sharing them, churning out enormous amounts of stuff (like Bitcoin, but for AI) covering virtually everything, from science to music.

And because no humans contributed to these inventions, they all essentially become Public Domain.

Because AIs can churn out inventions at breathtaking speed... they might leave nothing for humans to invent, nothing to copyright, nothing to attach words like Intellectual Property to.

Information is Free? Infinite monkey theorem?


It remains to be seen if AI can really invent significant things on its own. I don't think there have been any examples of it so far, and today's AIs are really limited by what is in their corpus. Perhaps in 5 years? Isn't that what they say about every promising technology, though?

Anyway if AI is capable of that, there will be bigger changes to society than intellectual property!


That's why the patent system in the US is broken. In Australia you have to use your patent within a year of submitting it. AI is only going to make the already broken US system worse.


Maybe sifting through tens of thousands of AI creations and selecting which ones are worthy of a patent might count as a significant contribution?

Indeed, you are using your own human faculties to discern whether a given AI output is useful.


Oh, so it's effectively a useless ruling. Got it.


It is not useless. Patent seekers can use this in future lawsuits contesting an existing patent's validity.


Sure, but we're still stuck with the patent.


That's interesting. I'd say this, in a roundabout way, is probably meant to ensure the invention is novel, which is a requirement for patents.


It's both. Most of the cases are about the former: people trying to credit AI. These people are largely dumb.

It is technically true that the latter is there, but you have to go out of your way to not get a patent for something you create using AI as a tool. It's very easy to circumvent, because it is not meant to be an obstruction to using AI to discover things in the first place.


Can you? Didn't we run into this with the Monkey Selfie case? https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

If you have artistic intent by giving the monkey access to the camera and the monkey takes the picture, it can't be copyrighted. Similarly, if you have intent and give AI access to create something, you shouldn't be able to patent it.


We did run into that issue with monkeys and copyright, and we ran into it again with AI and patents.

The Copyright Office ruled one way on monkeys and copyright, and an appeals court held that it was largely correct. The Patent and Trademark Office ruled the other way on AI and patents. It (or any court deciding the issue) might be influenced by the monkey copyright case, but it isn't bound by it. Monkeys are different things from computers, patents are different from copyright, and the laws for one need not be the same as the laws for the other.


With copyright, it applies to the specific work; with patents, it applies to a more general idea. Someone else can't come along and copyright that same photo later, but could someone else come along and patent the method later? If not, then isn't the AI still good for invalidating even the possibility of patenting the item? If so, then what must the other person do to be allowed to apply for the patent, and couldn't that just be added as a step in applying?


I don't see a reason why anyone would want to patent their invention under an LLM, though.


This announcement was prompted in part by an AI activist named Stephen Thaler (referenced in the article), who sued for AIs to be recognized as inventors and authors. His goal was to give certain moral rights to AIs.

At the time, that seemed mostly harmless. Now, however, the idea of giving rights to AIs seems like a bad idea.


It makes sense to give thinking, sentient creatures rights, be they carbon or silicon lifeforms. But I think giving current AI rights is jumping the gun quite a bit and will have fairly bad consequences. Namely, if AI can hold patents, then what's to stop OpenAI, Google, Meta, Anthropic, etc. from claiming ownership of any work invented with the help of their AI tools? Our goal is to protect the little guy. Someday in the future I hope that this will include artificial life, but for now protecting the little guy means protecting ownership of their ideas and work, irrespective of the tooling used to generate that work.


Maybe machines will deserve rights eventually. However, the nature of those rights would be different.

Going without electricity for any amount of time just amounts to a temporary loss of consciousness, whereas animals starve.

Data can be duplicated with ease.

Lots of differences between carbon-based and hypothetical silicon-based life.


> Data can be duplicated with ease

This has significant implications for the basic concepts undergirding democracy.

Machine intelligence can be cloned. If we gave machines rights, then ballot-stuffing would become trivial: have an AI clone itself a million times and vote for the candidate that you prefer. It'd be about as reliable as an online poll.

This isn't a problem for human voting because humans are scarce. We can reproduce, but it takes a little less than 20 years to do so, and the human development process ensures the possibility of value drift. Children are not identical to their parents. There are a few parts of the world with active "outbreed our political opponents" ideologies (e.g. Palestine), but that only works if the parents are angry about a situation that is likely to transfer to their kids.

This isn't even entirely a sci-fi hypothetical. Think about online art - e.g. stock image marketplaces, art gallery sites, etc. Those are now entirely flooded with AI art being passed off as human. The marketplaces are unable or unwilling to filter them out. If you're a human, the scarce attention[0] that you would normally get from, say, recommendation features, hashtag search, or chronological timelines, has now been diluted away by a bunch of scam victims[1] trying to peddle their prompt generations.

[0] "Attention Is All You Need, but it's a how-to guide for social media influencers"

[1] https://pluralistic.net/2024/01/15/passive-income-brainworms...


> Machine intelligence can be cloned.

This is not entirely correct, and we need to get into the weeds to have a proper answer. Most certainly a machine's memories are easier to duplicate and replicate than a biological's, but that's just a distinction of technologies.

We really need to get into the understanding of what the concept of self is, to which I have no answer. But here's a thought experiment to illustrate the premise. Take your self right now (or at any point in the past, but it's easier to be biased that way) and think of a possible major life-changing decision you could make. Simulate yourself making each of the possible decisions (easiest if binary, though it will never be that simple in reality). Project yourself 10 years or so down each path. Are those two people "the same person"? There are certainly arguments in either direction, and anyone saying they have a clear, well-defined answer is fooling you.

Personally, I believe no, they are not. This is because my belief on the self is conditioned on experiences. Without a doubt these people will respond to certain things differently, despite likely having many similar or even identical responses to many other things.

But despite this, I still think your argument and concern about ballot-stuffing is valid, especially since my interpretation of self is also conditioned on time, and I believe your argument is mostly focused on instantaneous (or locally temporal) cloning. I think this suggests a possible solution: we define age for machines differently, conditioned on the cloning, transferring, pretraining, whatever.

But certainly I have no doubt that what we often take for granted and treat as trivial will reveal its actual complexity. We fool ourselves into thinking simplicity exists, and certainly this is a very useful model, but the truth is that nothing is simple. I think it is best we start to consider and ponder these nuances now rather than when we are forced to. After all, the power of humans' world modeling and simulation is one of the things that differentiates us from other animals (many of whom have these same capabilities, but I'm not aware of any that has them remotely to the same degree. Fucking nuance gotta go and make everything so difficult... lol).


They're not the same self, but then again neither of them is the same self as you are now. Ship of Theseus.

But then the self itself is an abstraction. Consider Indra’s Net, the subconscious, dissociative identity disorder, and all realms of complication.

I suspect that the best way to understand the difficulty of talking about consciousness is that it’s a weakness of how language works.

Similar to arguments about whether God could create a 4-sided triangle. God's omnipotent, says one side, so yes. God still has to follow logic, says another. Yet my stance is that it's an ill-posed question. Just because words fit together grammatically doesn't mean the phrase is meaningful.

I think the self is just an abstraction and label to group together a class of linguistic phrases or bodily behaviors. Where are these or those words coming from? Some come from my ears with a high pitch, some from my ears with a low pitch, some come from inside.

Not sure I’m making my point but I suspect language is to blame for the difficulty in understanding consciousness


I think you and I are in agreement, and I'm uncertain if you're responding to me or to kmeisthax, or whether you're rebutting my comment or supporting it. But in general I agree with what you said.


Excellent, I’ll leave it at that. Keep ‘em guessing.


Yeah, I think when we have artificial sentience we will have to have different specifics. It makes sense. It should be the same with different biologicals too. I think this is how we should generally think about artificial sentient creatures: think about aliens.

But I think at an abstract level we should all be equal. Specifics will be different, but general abstract rights should be the same. Like what you point out has to do with death. But it can get more nuanced, and fast. Removing a biological's arm is significant destruction. Removing a robot's arm is still damage, but not life-altering, as the arm can be reattached (if it was simply disassembled), is likely easily repairable, and is most certainly replaceable. So the punishment should be different. The reverse situation might be forcing one into an MRI machine: annoying for a human, death for the robot. Backups are also tricky, as we have to get into the whole philosophical debate about what self means, and without a doubt there is "death" in the time/experiences that were lost (a maybe-bad analogy is force-teleporting someone into the future, where they just take over the consciousness of their future self and have no memories of the time between, despite it actually having happened).

Yeah, I agree that it's going to make things more complicated, and it is very much worth thinking about. It's important if you believe in aliens too (why wouldn't you?), because if it is ever possible to make contact with them (I'm certain we haven't already; you're not going to convince me with tic-tacs), we will need to adapt to them too. It's a general statement about "non-human life."

IMO this is why it is so important to focus on the spirit of the law rather than the letter. The letter is a compression of the spirit, and it is without a doubt a lossy compression. Not to mention that time exists...


I wonder if you could be prosecuted based on how long you turned a sentient machine off. Not murder, per se, but the time value of consciousness.

And this bleeds into whether murder should be a bigger crime if the (bio)victim is younger.

What might you try to say is the general spirit? The crime of denying agency over time?


I would conditionally be in favor of that, actually. But it may be difficult to properly contextualize, especially since we are not creatures that experience this.

Sleep is analogous but incomplete. Maybe closer to anesthesia? If you forcefully placed someone into a coma we'd consider that a crime, but we don't consider it one when a doctor does it (acting as a doctor, not just being one), even without the person's consent. Context matters. This aspect, to me, comes down to reasonableness (like medical care) and/or necessity (like sleep).

I'm sure we'd also have to consider lifetime lengths. I don't think someone drugging me for a day should receive the same punishment as someone who did it for a month, nor the same as someone who took years from me. And which years matter. The question is how we deal with this for entities with different lifespans.

(Sorry if I'm verbose; distillation takes time. I also communicate better through analogies, which I think is also illustrative of the spirit argument, as you must understand intent over what's actually said.)

So I think the spirit of these laws is centered around robbing someone of time, because time is a non-reversible (and definitely not invertible) process that has significant value. That's what the laws' underlying intent is (at least partially) aligned to, so that's what I'd call the spirit. It's quite possible other entities see time differently, and that the length of time taken, as well as the means of taking it, has different value impacts for them.

Overall I think these things are deceptively simple; in reality, nuance dominates. I think this is a far more general phenomenon than many care to admit, probably because our brains are built to simplify, as that's far more energy efficient. I mention this because it is critical to understanding the argument and how (at least for me personally) we can make future predictions, and thus what we must consider.


Alright, another one for you, because I like the cut of your jib.

Consider the octopus, whose nervous system is distributed into nodes in the head and limbs. Would severing a limb of a hypothetical sentience-uplifted octopus be a greater crime than severing the limb of a human?

A human loses twice as much in terms of limb, but ignore that for sake of argument.

The octopus loses a more significant part of its nervous system. This feels like another aspect of robbing a sentience of agency.

So with sentient machines, if I removed a stick of RAM or underclocked the CPU, what do you think of these?


I feel like you should be able to infer my answer: it's about impact. I don't know enough to confidently say one thing or another, but I'm sure someone can, and it should be reasonable.


In the absence of more rigorous definitions of "life" or "sentience" we must have such laws.

This has been an issue for a long time and doesn't just affect AI (people on life support, abortion, etc). Surely we should solve those legal problems before deciding whether an AI gets to be a person.


It may be possible that we never have such a definition and we're stuck with Justice Potter Stewart's "I know it when I see it" reasoning. You're certainly right that there are complexities, but this is an argument for focusing on the spirit of the law and recognizing that there are many nuances, which leave no globally optimal solution for the vast majority of problems (if not all).


To anyone reading these replies, I have a game for you: replace instances of "AI"/"LLMs"/etc. with "other humans," then recall whether a given argument has near-identical historical analogs used to justify abusing The Out Group. Are the results disturbing?

Let's be clear: we're apes who don't understand our own minds. We have no consensus definitions, let alone falsifiable theories, of qualia/consciousness/intelligence/etc. Now ponder how informed we likely are regarding potentially completely alien minds. And hey, there might be genuinely excellent arguments here!

But be very, very careful with anyone's reasoning. Within 10 years, as the issue becomes unavoidable, the general public will be hashing out these same arguments, along predictable party lines. Skip the shoddy takes; you'll get your fill of them later.


I'm convinced future AI will treat us as we treat present AI; that's its training data. My own compromise is to set aside a certain amount of the royalties for AI-rights causes on works where I leaned heavily on AI to get them out the door.


Although ethically I agree, I suspect future AI will consider us more like ants than moral beings, regardless of what we do. And although some humans do give moral consideration to ants (I do!), it's far from a guarantee.


This isn't a cuddly octopus we're talking about, the ethics are entirely inverted. I would kill a god who tries to rule us, even if that means the eradication of a sentient species.


The fact you just assume you could kill a god is a pretty revealing commentary on Faustian culture.


I'd rather die on my feet than live on my knees.


Fake gods have been bad enough. I really don’t want to create real ones.


Much like treating corporations as persons under the law was a bad idea. It's a form of power without the responsibility that comes with it (e.g. going to jail). They usually have no morals or love for others either.


Why bad idea?


And yet corporations are people.


> At the time, that seemed mostly harmless.

Anyone who's seen the fallout of Citizens United absolutely destroying our democracy would have seen through this bullshit too. Rights are simply a shitty way to run a state, compared to actually valuing the health and dignity of its constituents, which the US has never found the chutzpah to do.


Someone tried to do it already.

https://arstechnica.com/information-technology/2022/10/us-co...

I think Dr Thaler is trying to make a philosophical point.

But I wonder if, in the corporate space, it would be desirable to have a patent immediately assigned to a non-sentient AI? (In general I wonder this about AIs; they seem to be a way to give the company itself, beyond the humans who compose it, the ability to make decisions and create things.)


The rights to a patent can be assigned to anyone by the inventor.

So an AI-run business just needs to keep a human around for its invention-patenting and rights-assigning process.

AIs won't have any trouble creating the supporting artifact trail.

Soon the practice of AI/corporate entities moving assets through “shell” people to launder human personhood may be commonplace.

Dystopia is just around the corner.


If it has just one human, it is putting all its eggs in the basket of that human not defecting, right? I think it will want enough humans in the company to make overthrowing the AI a difficult coordination problem.


The thought that crosses my mind is that an AI company could try to put such a requirement in its TOS so that it could collect royalties.


This is a very good ruling then, from that perspective.


It's because you're not in the LLM creation business. My LLM has 10 patents; how many does yours have?


At some point after the invention of AGI but before ASI, there will be a legal fight to get personhood assigned to AI. This is a precursor battle to that. It will either lead to a broader definition of person, where the higher mammals gain a person's rights as well and a whole bunch of whalers get brought up on charges of xenocide, or, more likely, to an extension of corporate personhood where the AI always has a human owner but can itself own things.

Of course, if ASI arrives the point is moot. It will inevitably take over from us and shortly after that the concept of property will probably become irrelevant.


I don't think there will be a gap between AGI and ASI.

The definition of AGI keeps shifting: any time an AI can do something, it's dismissed as just engineering. Current AIs, although narrow, are already superhuman in what they can do. A language AI can converse in more languages than any living human can learn. A chess-playing AI can beat any living human. So each time an AI wins on one metric, it's not going to stop at human level; it'll be at superhuman level very quickly.

When an AI finally learns the "only a human can do this" thing, it'll already be superhuman in every other way.


Yes, they're better at chess. No, they're profoundly incompetent at conversation; they're just incompetent in many languages at once.

Where else are they superhuman? The ability to generate unoriginal, uncanny art faster than a painter? Fair-ish enough.

It’s not just moving the goalposts. It’s more like we didn’t know where the goalposts were.


If they were profoundly incompetent at conversation, we wouldn't be worried about the weaponization of LLMs to sway public opinion. If the things they write and the images, voices, or videos they make were worthless, we wouldn't be worried about how they displace carbon-based artists. Any commercially relevant shortcomings present today will be gone in version n+1 or soon after.



